WorldWideScience

Sample records for widespread parallel evolution

  1. Kinetic-Monte-Carlo-Based Parallel Evolution Simulation Algorithm of Dust Particles

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2014-01-01

    Full Text Available The evolution simulation of dust particles provides an important way to analyze the impact of dust on the environment. A kinetic-Monte-Carlo (KMC)-based parallel algorithm is proposed to simulate the evolution of dust particles. In this parallel evolution simulation algorithm, a data distribution scheme and a communication optimization strategy are introduced to balance the load across processes and to reduce inter-process communication overhead. The experimental results show that the diffusion, sedimentation, and resuspension of dust particles in a virtual campus are simulated and that the simulation time is shortened by the parallel algorithm, which overcomes the limitations of serial computing and makes the simulation of large-scale virtual environments possible.
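
    For orientation, a minimal sketch of the rejection-free kinetic Monte Carlo step that such a simulation builds on is given below; the event names, rates, and particle representation are illustrative assumptions and do not come from the paper, which additionally distributes the particles across processes.

        import math
        import random

        # Hypothetical per-particle event rates (1/s); not taken from the paper.
        RATES = {"diffuse": 5.0, "sediment": 0.5, "resuspend": 0.1}

        def apply_event(particle, event):
            # Placeholder update: move, deposit, or re-entrain the particle.
            particle[event] = particle.get(event, 0) + 1

        def kmc_step(particles, rng=random):
            """One rejection-free KMC step: choose an event with probability
            proportional to its rate, apply it, and advance the clock by an
            exponentially distributed time increment."""
            total_rate = len(particles) * sum(RATES.values())
            r = rng.random() * total_rate
            cumulative = 0.0
            for particle in particles:
                for event, rate in RATES.items():
                    cumulative += rate
                    if cumulative >= r:
                        apply_event(particle, event)
                        return -math.log(1.0 - rng.random()) / total_rate
            return 0.0

        # Example: ten particles, advance the simulation by 100 KMC events.
        dust = [dict() for _ in range(10)]
        elapsed = sum(kmc_step(dust) for _ in range(100))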

  2. Parallel vs. Convergent Evolution in Domestication and Diversification of Crops in the Americas

    Directory of Open Access Journals (Sweden)

    Barbara Pickersgill

    2018-05-01

    Full Text Available Domestication involves changes in various traits of the phenotype in response to human selection. Diversification may accompany or follow domestication, and results in variants within the crop adapted to different uses by humans or different agronomic conditions. Similar domestication and diversification traits may be shared by closely related species (parallel evolution) or by distantly related species (convergent evolution). Many of these traits are produced by complex genetic networks or long biosynthetic pathways that are extensively conserved even in distantly related species. Similar phenotypic changes in different species may be controlled by homologous genes (parallel evolution at the genetic level) or non-homologous genes (convergent evolution at the genetic level). It has been suggested that parallel evolution may be more frequent among closely related species, or among diversification rather than domestication traits, or among traits produced by simple metabolic pathways. Crops domesticated in the Americas span a spectrum of genetic relatedness, have been domesticated for diverse purposes, and have responded to human selection by changes in many different traits, so provide examples of both parallel and convergent evolution at various levels. However, despite the current explosion in relevant information, data are still insufficient to provide quantitative or conclusive assessments of the relative roles of these two processes in domestication and diversification.

  3. Molecular pathways to parallel evolution: I. Gene nexuses and their morphological correlates.

    Science.gov (United States)

    Zuckerkandl, E

    1994-12-01

    Aspects of the regulatory interactions among genes are probably as old as most genes are themselves. Correspondingly, similar predispositions to changes in such interactions must have existed for long evolutionary periods. Features of the structure and the evolution of the system of gene regulation furnish the background necessary for a molecular understanding of parallel evolution. Patently "unrelated" organs, such as the fat body of a fly and the liver of a mammal, can exhibit fractional homology, a fraction expected to become subject to quantitation. This also seems to hold for different organs in the same organism, such as wings and legs of a fly. In informational macromolecules, on the other hand, homology is indeed all or none. In the quite different case of organs, analogy is expected usually to represent attenuated homology. Many instances of putative convergence are likely to turn out to be predominantly parallel evolution, presumably including the case of the vertebrate and cephalopod eyes. Homology in morphological features reflects a similarity in networks of active genes. Similar nexuses of active genes can be established in cells of different embryological origins. Thus, parallel development can be considered a counterpart to parallel evolution. Specific macromolecular interactions leading to the regulation of the c-fos gene are given as an example of a "controller node" defined as a regulatory unit. Quantitative changes in gene control are distinguished from relational changes, and frequent parallelism in quantitative changes is noted in Drosophila enzymes. Evolutionary reversions in quantitative gene expression are also expected. The evolution of relational patterns is attributed to several distinct mechanisms, notably the shuffling of protein domains. The growth of such patterns may in part be brought about by a particular process of compensation for "controller gene diseases," a process that would spontaneously tend to lead to increased regulatory

  4. Academic training: From Evolution Theory to Parallel and Distributed Genetic Programming

    CERN Multimedia

    2007-01-01

    2006-2007 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 15, 16 March From 11:00 to 12:00 - Main Auditorium, bldg. 500 From Evolution Theory to Parallel and Distributed Genetic Programming F. FERNANDEZ DE VEGA / Univ. of Extremadura, SP Lecture No. 1: From Evolution Theory to Evolutionary Computation Evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) involving combinatorial optimization problems, which are based to some degree on the evolution of biological life in the natural world. In this tutorial we will review the source of inspiration for this metaheuristic and its capability for solving problems. We will show the main flavours within the field, and different problems that have been successfully solved using these kinds of techniques. Lecture No. 2: Parallel and Distributed Genetic Programming The successful application of Genetic Programming (GP, one of the available Evolutionary Algorithms) to optimization problems has encouraged an ...

  5. Parallel Evolution of Sperm Hyper-Activation Ca2+ Channels.

    Science.gov (United States)

    Cooper, Jacob C; Phadnis, Nitin

    2017-07-01

    Sperm hyper-activation is a dramatic change in sperm behavior where mature sperm burst into a final sprint in the race to the egg. The mechanism of sperm hyper-activation in many metazoans, including humans, consists of a jolt of Ca2+ into the sperm flagellum via CatSper ion channels. Surprisingly, all nine CatSper genes have been independently lost in several animal lineages. In Drosophila, sperm hyper-activation is performed through the cooption of the polycystic kidney disease 2 (pkd2) Ca2+ channel. The parallels between CatSpers in primates and pkd2 in Drosophila provide a unique opportunity to examine the molecular evolution of the sperm hyper-activation machinery in two independent, nonhomologous calcium channels separated by > 500 million years of divergence. Here, we use a comprehensive phylogenomic approach to investigate the selective pressures on these sperm hyper-activation channels. First, we find that the entire CatSper complex evolves rapidly under recurrent positive selection in primates. Second, we find that pkd2 has parallel patterns of adaptive evolution in Drosophila. Third, we show that this adaptive evolution of pkd2 is driven by its role in sperm hyper-activation. These patterns of selection suggest that the evolution of the sperm hyper-activation machinery is driven by sexual conflict with antagonistic ligands that modulate channel activity. Together, our results add sperm hyper-activation channels to the class of fast evolving reproductive proteins and provide insights into the mechanisms used by the sexes to manipulate sperm behavior. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  6. Widespread correlations between climatic niche evolution and species diversification in birds.

    Science.gov (United States)

    Cooney, Christopher R; Seddon, Nathalie; Tobias, Joseph A

    2016-07-01

    The adaptability of species' climatic niches can influence the dynamics of colonization and gene flow across climatic gradients, potentially increasing the likelihood of speciation or reducing extinction in the face of environmental change. However, previous comparative studies have tested these ideas using geographically, taxonomically and ecologically restricted samples, yielding mixed results, and thus the processes linking climatic niche evolution with diversification remain poorly understood. Focusing on birds, the largest and most widespread class of terrestrial vertebrates, we test whether variation in species diversification among clades is correlated with rates of climatic niche evolution and the extent to which these patterns are modified by underlying gradients in biogeography and species' ecology. We quantified climatic niches, latitudinal distribution and ecological traits for 7657 (~75%) bird species based on geographical range polygons and then used Bayesian phylogenetic analyses to test whether niche evolution was related to species richness and rates of diversification across genus- and family-level clades. We found that the rate of climatic niche evolution has a positive linear relationship with both species richness and diversification rate at two different taxonomic levels (genus and family). Furthermore, this positive association between labile climatic niches and diversification was detected regardless of variation in clade latitude or key ecological traits. Our findings suggest either that rapid adaptation to unoccupied areas of climatic niche space promotes avian diversification, or that diversification promotes adaptation. Either way, we propose that climatic niche evolution is a fundamental process regulating the link between climate and biodiversity at global scales, irrespective of the geographical and ecological context of speciation and extinction. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.

  7. New Parallel Algorithms for Landscape Evolution Model

    Science.gov (United States)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

    Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize because of the computation of drainage area for each node, which requires a large amount of communication when run in parallel. To overcome this difficulty, we developed two parallel algorithms for an LEM with a stream net. One algorithm partitions the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other is based on a new partition algorithm, which first partitions the catchment nodes between processes and then partitions the cells according to the partition of nodes. Both methods focus on reducing communication between processes and take advantage of massively parallel computing, and numerical experiments show that both are adequate for large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
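
    The drainage-area accumulation that makes these models hard to parallelize can be sketched as follows; this serial version is only an illustration (the receiver-array flow routing and single-flow-direction assumption are ours, not the paper's), and it is precisely the step the authors replace with stream-net partitioning and a global reduction.

        import numpy as np

        def drainage_area(elevation, receiver, cell_area=1.0):
            """Accumulate upstream drainage area on a flow-routed grid.

            elevation : elevation of each node
            receiver  : index of the downstream node each node drains into
                        (a node draining to itself is an outlet)
            """
            area = np.full(elevation.size, cell_area, dtype=float)
            # Visit nodes from highest to lowest so every donor is processed
            # before the node it drains into; this ordering is inherently serial.
            for node in np.argsort(elevation)[::-1]:
                down = receiver[node]
                if down != node:
                    area[down] += area[node]
            return area

        # Example: three nodes in a line, all draining toward node 0.
        print(drainage_area(np.array([0.0, 1.0, 2.0]), np.array([0, 0, 1])))
        # -> [3. 2. 1.]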

  8. Parallel electric fields in a simulation of magnetotail reconnection and plasmoid evolution

    International Nuclear Information System (INIS)

    Hesse, M.; Birn, J.

    1990-01-01

    Properties of the electric field component parallel to the magnetic field are investigated in a 3D MHD simulation of plasmoid formation and evolution in the magnetotail, in the presence of a net dawn-dusk magnetic field component. The spatial localization of E-parallel, the concept of a diffusion zone, and the role of E-parallel in accelerating electrons are discussed. The region of enhanced E-parallel is found to be localized in all spatial directions, with a strong concentration in the z direction. This region is identified as the diffusion zone, which plays a crucial role in reconnection theory through the local breakdown of magnetic flux conservation. 12 refs.
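
    For reference, the parallel electric field discussed here is the standard projection of the electric field onto the magnetic field direction (a general definition, not quoted from the paper); in ideal MHD it vanishes, so a nonzero value marks the diffusion zone where magnetic flux conservation breaks down:

        E_\parallel \;=\; \frac{\mathbf{E}\cdot\mathbf{B}}{|\mathbf{B}|},
        \qquad
        \mathbf{E} + \mathbf{v}\times\mathbf{B} = 0 \;\Longrightarrow\; E_\parallel = 0 .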

  9. From evolution theory to parallel and distributed genetic

    CERN Multimedia

    CERN. Geneva

    2007-01-01

    Lecture #1: From Evolution Theory to Evolutionary Computation. Evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) involving combinatorial optimization problems, which are based to some degree on the evolution of biological life in the natural world. In this tutorial we will review the source of inspiration for this metaheuristic and its capability for solving problems. We will show the main flavours within the field, and different problems that have been successfully solved using these kinds of techniques. Lecture #2: Parallel and Distributed Genetic Programming. The successful application of Genetic Programming (GP, one of the available Evolutionary Algorithms) to optimization problems has encouraged an increasing number of researchers to apply these techniques to a large set of problems. Given the difficulty of some problems, much effort has been applied to improving the efficiency of GP during the last few years. Among the available proposals,...
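
    A minimal sketch of the island (distributed) model that parallel and distributed genetic programming typically builds on follows; the population representation, ring migration topology, and placeholder fitness/mutation operators are illustrative assumptions rather than material from the lectures.

        import random

        def mutate(genome, sigma=0.1):
            # Placeholder variation operator for a real-valued genome;
            # in genetic programming this would instead modify a program tree.
            return [g + random.gauss(0.0, sigma) for g in genome]

        def evolve_island(pop, fitness, n_gens, migrants):
            """Evolve one subpopulation after accepting migrants
            (truncation selection plus mutation; crossover omitted)."""
            pop = pop + migrants
            for _ in range(n_gens):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: max(2, len(pop) // 2)]
                pop = parents + [mutate(p) for p in parents]
            return pop

        def island_model(islands, fitness, rounds=10, n_gens=5, n_migrants=2):
            """Ring-topology island model: islands evolve independently (and,
            in a real deployment, concurrently on separate processes),
            exchanging their best individuals every few generations."""
            for _ in range(rounds):
                best = [sorted(isl, key=fitness, reverse=True)[:n_migrants]
                        for isl in islands]
                islands = [evolve_island(isl, fitness, n_gens,
                                         best[(i - 1) % len(islands)])
                           for i, isl in enumerate(islands)]
            return islands

        # Example: four islands of ten 3-gene individuals, maximizing -sum(x^2).
        start = [[[random.uniform(-1, 1) for _ in range(3)] for _ in range(10)]
                 for _ in range(4)]
        result = island_model(start, fitness=lambda g: -sum(x * x for x in g))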

  10. Parallel Evolution of Copy-Number Variation across Continents in Drosophila melanogaster.

    Science.gov (United States)

    Schrider, Daniel R; Hahn, Matthew W; Begun, David J

    2016-05-01

    Genetic differentiation across populations that is maintained in the presence of gene flow is a hallmark of spatially varying selection. In Drosophila melanogaster, the latitudinal clines across the eastern coasts of Australia and North America appear to be examples of this type of selection, with recent studies showing that a substantial portion of the D. melanogaster genome exhibits allele frequency differentiation with respect to latitude on both continents. As of yet there has been no genome-wide examination of differentiated copy-number variants (CNVs) in these geographic regions, despite their potential importance for phenotypic variation in Drosophila and other taxa. Here, we present an analysis of geographic variation in CNVs in D. melanogaster. We also present the first genomic analysis of geographic variation for copy-number variation in the sister species, D. simulans, in order to investigate patterns of parallel evolution in these close relatives. In D. melanogaster we find hundreds of CNVs, many of which show parallel patterns of geographic variation on both continents, lending support to the idea that they are influenced by spatially varying selection. These findings support the idea that polymorphic CNVs contribute to local adaptation in D. melanogaster. In contrast, we find very few CNVs in D. simulans that are geographically differentiated in parallel on both continents, consistent with earlier work suggesting that clinal patterns are weaker in this species. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Pursuing Darwin's curious parallel: Prospects for a science of cultural evolution.

    Science.gov (United States)

    Mesoudi, Alex

    2017-07-24

    In the past few decades, scholars from several disciplines have pursued the curious parallel noted by Darwin between the genetic evolution of species and the cultural evolution of beliefs, skills, knowledge, languages, institutions, and other forms of socially transmitted information. Here, I review current progress in the pursuit of an evolutionary science of culture that is grounded in both biological and evolutionary theory, but also treats culture as more than a proximate mechanism that is directly controlled by genes. Both genetic and cultural evolution can be described as systems of inherited variation that change over time in response to processes such as selection, migration, and drift. Appropriate differences between genetic and cultural change are taken seriously, such as the possibility in the latter of nonrandomly guided variation or transformation, blending inheritance, and one-to-many transmission. The foundation of cultural evolution was laid in the late 20th century with population-genetic style models of cultural microevolution, and the use of phylogenetic methods to reconstruct cultural macroevolution. Since then, there have been major efforts to understand the sociocognitive mechanisms underlying cumulative cultural evolution, the consequences of demography on cultural evolution, the empirical validity of assumed social learning biases, the relative role of transformative and selective processes, and the use of quantitative phylogenetic and multilevel selection models to understand past and present dynamics of society-level change. I conclude by highlighting the interdisciplinary challenges of studying cultural evolution, including its relation to the traditional social sciences and humanities.

  12. The role of Bh4 in parallel evolution of hull colour in domesticated and weedy rice.

    Science.gov (United States)

    Vigueira, C C; Li, W; Olsen, K M

    2013-08-01

    The two independent domestication events in the genus Oryza that led to African and Asian rice offer an extremely useful system for studying the genetic basis of parallel evolution. This system is also characterized by parallel de-domestication events, with two genetically distinct weedy rice biotypes in the US derived from the Asian domesticate. One important trait that has been altered by rice domestication and de-domestication is hull colour. The wild progenitors of the two cultivated rice species have predominantly black-coloured hulls, as does one of the two U.S. weed biotypes; both cultivated species and one of the US weedy biotypes are characterized by straw-coloured hulls. Using Black hull 4 (Bh4) as a hull colour candidate gene, we examined DNA sequence variation at this locus to study the parallel evolution of hull colour variation in the domesticated and weedy rice system. We find that independent Bh4-coding mutations have arisen in African and Asian rice that are correlated with the straw hull phenotype, suggesting that the same gene is responsible for parallel trait evolution. For the U.S. weeds, Bh4 haplotype sequences support current hypotheses on the phylogenetic relationship between the two biotypes and domesticated Asian rice; straw hull weeds are most similar to indica crops, and black hull weeds are most similar to aus crops. Tests for selection indicate that Asian crops and straw hull weeds deviate from neutrality at this gene, suggesting possible selection on Bh4 during both rice domestication and de-domestication. © 2013 The Authors. Journal of Evolutionary Biology © 2013 European Society For Evolutionary Biology.

  13. Parsing parallel evolution: ecological divergence and differential gene expression in the adaptive radiations of thick-lipped Midas cichlid fishes from Nicaragua.

    Science.gov (United States)

    Manousaki, Tereza; Hull, Pincelli M; Kusche, Henrik; Machado-Schiaffino, Gonzalo; Franchini, Paolo; Harrod, Chris; Elmer, Kathryn R; Meyer, Axel

    2013-02-01

    The study of parallel evolution facilitates the discovery of common rules of diversification. Here, we examine the repeated evolution of thick lips in Midas cichlid fishes (the Amphilophus citrinellus species complex), from two Great Lakes and two crater lakes in Nicaragua, to assess whether similar changes in ecology, phenotypic trophic traits and gene expression accompany parallel trait evolution. Using next-generation sequencing technology, we characterize transcriptome-wide differential gene expression in the lips of wild-caught sympatric thick- and thin-lipped cichlids from all four instances of repeated thick-lip evolution. Six genes (apolipoprotein D, myelin-associated glycoprotein precursor, four-and-a-half LIM domain protein 2, calpain-9, GTPase IMAP family member 8-like and one hypothetical protein) are significantly underexpressed in the thick-lipped morph across all four lakes. However, other aspects of lips' gene expression in sympatric morphs differ in a lake-specific pattern, including the magnitude of differentially expressed genes (97-510). Generally, fewer genes are differentially expressed among morphs in the younger crater lakes than in those from the older Great Lakes. Body shape, lower pharyngeal jaw size and shape, and stable isotopes (δ13C and δ15N) differ between all sympatric morphs, with the greatest differentiation in the Great Lake Nicaragua. Some ecological traits evolve in parallel (those related to foraging ecology; e.g. lip size, body and head shape) but others, somewhat surprisingly, do not (those related to diet and food processing; e.g. jaw size and shape, stable isotopes). Taken together, this case of parallelism among thick- and thin-lipped cichlids shows a mosaic pattern of parallel and nonparallel evolution. © 2012 Blackwell Publishing Ltd.

  14. Molecular bases for parallel evolution of translucent bracts in an alpine "glasshouse" plant Rheum alexandrae (Polygonaceae)

    Czech Academy of Sciences Publication Activity Database

    Liu, B. B.; Opgenoorth, L.; Miehe, G.; Zhang, D.-Y.; Wan, D.-S.; Zhao, C.-M.; Jia, Dong-Rui; Liu, J.-Q.

    2013-01-01

    Vol. 51, No. 2 (2013), pp. 134-141, ISSN 1674-4918 Institutional support: RVO:67985939 Keywords: cDNA-AFLPs * parallel evolution * adaptations, mutations, diversity Subject RIV: EF - Botanics Impact factor: 1.648, year: 2013

  15. Parallel evolution of mound-building and grass-feeding in Australian nasute termites.

    Science.gov (United States)

    Arab, Daej A; Namyatova, Anna; Evans, Theodore A; Cameron, Stephen L; Yeates, David K; Ho, Simon Y W; Lo, Nathan

    2017-02-01

    Termite mounds built by representatives of the family Termitidae are among the most spectacular constructions in the animal kingdom, reaching 6-8 m in height and housing millions of individuals. Although functional aspects of these structures are well studied, their evolutionary origins remain poorly understood. Australian representatives of the termitid subfamily Nasutitermitinae display a wide variety of nesting habits, making them an ideal group for investigating the evolution of mound building. Because they feed on a variety of substrates, they also provide an opportunity to illuminate the evolution of termite diets. Here, we investigate the evolution of termitid mound building and diet, through a comprehensive molecular phylogenetic analysis of Australian Nasutitermitinae. Molecular dating analysis indicates that the subfamily has colonized Australia on three occasions over the past approximately 20 Myr. Ancestral-state reconstruction showed that mound building arose on multiple occasions and from diverse ancestral nesting habits, including arboreal and wood or soil nesting. Grass feeding appears to have evolved from wood feeding via ancestors that fed on both wood and leaf litter. Our results underscore the adaptability of termites to ancient environmental change, and provide novel examples of parallel evolution of extended phenotypes. © 2017 The Author(s).

  16. Pursuing Darwin’s curious parallel: Prospects for a science of cultural evolution

    Science.gov (United States)

    2017-01-01

    In the past few decades, scholars from several disciplines have pursued the curious parallel noted by Darwin between the genetic evolution of species and the cultural evolution of beliefs, skills, knowledge, languages, institutions, and other forms of socially transmitted information. Here, I review current progress in the pursuit of an evolutionary science of culture that is grounded in both biological and evolutionary theory, but also treats culture as more than a proximate mechanism that is directly controlled by genes. Both genetic and cultural evolution can be described as systems of inherited variation that change over time in response to processes such as selection, migration, and drift. Appropriate differences between genetic and cultural change are taken seriously, such as the possibility in the latter of nonrandomly guided variation or transformation, blending inheritance, and one-to-many transmission. The foundation of cultural evolution was laid in the late 20th century with population-genetic style models of cultural microevolution, and the use of phylogenetic methods to reconstruct cultural macroevolution. Since then, there have been major efforts to understand the sociocognitive mechanisms underlying cumulative cultural evolution, the consequences of demography on cultural evolution, the empirical validity of assumed social learning biases, the relative role of transformative and selective processes, and the use of quantitative phylogenetic and multilevel selection models to understand past and present dynamics of society-level change. I conclude by highlighting the interdisciplinary challenges of studying cultural evolution, including its relation to the traditional social sciences and humanities. PMID:28739929

  17. Convergent, Parallel and Correlated Evolution of Trophic Morphologies in the Subfamily Schizothoracinae from the Qinghai-Tibetan Plateau

    Science.gov (United States)

    Qi, Delin; Chao, Yan; Guo, Songchang; Zhao, Lanying; Li, Taiping; Wei, Fulei; Zhao, Xinquan

    2012-01-01

    Schizothoracine fishes distributed in the water system of the Qinghai-Tibetan plateau (QTP) and adjacent areas are characterized by being highly adaptive to the cold and hypoxic environment of the plateau, as well as by a high degree of diversity in trophic morphology due to resource polymorphisms. Although convergent and parallel evolution are prevalent in the organisms of the QTP, it remains unknown whether similar evolutionary patterns have occurred in the schizothoracine fishes. Here, we constructed for the first time a tentative molecular phylogeny of the schizothoracine fishes based on the complete sequences of the cytochrome b gene. We employed this molecular phylogenetic framework to examine the evolution of trophic morphologies. We used Pagel's maximum likelihood method to estimate the evolutionary associations of trophic morphologies and food resource use. Our results showed that the molecular and published morphological phylogenies of Schizothoracinae are partially incongruent with respect to some intergeneric relationships. The phylogenetic results revealed that four character states of five trophic morphologies and of food resource use evolved at least twice during the diversification of the subfamily. State transitions are the result of evolutionary patterns including either convergence or parallelism or both. Furthermore, our analyses indicate that some characters of trophic morphologies in the Schizothoracinae have undergone correlated evolution, which are somewhat correlated with different food resource uses. Collectively, our results reveal new examples of convergent and parallel evolution in the organisms of the QTP. The adaptation to different trophic niches through the modification of trophic morphologies and feeding behaviour as found in the schizothoracine fishes may account for the formation and maintenance of the high degree of diversity and radiations in fish communities endemic to QTP. PMID:22470515

  18. Parallel Evolution of Copy-Number Variation across Continents in Drosophila melanogaster

    Science.gov (United States)

    Schrider, Daniel R.; Hahn, Matthew W.; Begun, David J.

    2016-01-01

    Genetic differentiation across populations that is maintained in the presence of gene flow is a hallmark of spatially varying selection. In Drosophila melanogaster, the latitudinal clines across the eastern coasts of Australia and North America appear to be examples of this type of selection, with recent studies showing that a substantial portion of the D. melanogaster genome exhibits allele frequency differentiation with respect to latitude on both continents. As of yet there has been no genome-wide examination of differentiated copy-number variants (CNVs) in these geographic regions, despite their potential importance for phenotypic variation in Drosophila and other taxa. Here, we present an analysis of geographic variation in CNVs in D. melanogaster. We also present the first genomic analysis of geographic variation for copy-number variation in the sister species, D. simulans, in order to investigate patterns of parallel evolution in these close relatives. In D. melanogaster we find hundreds of CNVs, many of which show parallel patterns of geographic variation on both continents, lending support to the idea that they are influenced by spatially varying selection. These findings support the idea that polymorphic CNVs contribute to local adaptation in D. melanogaster. In contrast, we find very few CNVs in D. simulans that are geographically differentiated in parallel on both continents, consistent with earlier work suggesting that clinal patterns are weaker in this species. PMID:26809315

  19. Evidence for widespread convergent evolution around human microsatellites.

    Directory of Open Access Journals (Sweden)

    Edward J Vowles

    2004-08-01

    Full Text Available Microsatellites are a major component of the human genome, and their evolution has been much studied. However, the evolution of microsatellite flanking sequences has received less attention, with reports of both high and low mutation rates and of a tendency for microsatellites to cluster. From the human genome we generated a database of many thousands of (AC)n flanking sequences within which we searched for common characteristics. Sequences flanking microsatellites of similar length show remarkable levels of convergent evolution, indicating shared mutational biases. These biases extend 25-50 bases either side of the microsatellite and may therefore affect more than 30% of the entire genome. To explore the extent and absolute strength of these effects, we quantified the observed convergence. We also compared homologous human and chimpanzee loci to look for evidence of changes in mutation rate around microsatellites. Most models of DNA sequence evolution assume that mutations are independent and occur randomly. Allowances may be made for sites mutating at different rates and for general mutation biases such as the faster rate of transitions over transversions. Our analysis suggests that these models may be inadequate, in that proximity to even very short microsatellites may alter the rate and distribution of mutations that occur. The elevated local mutation rate combined with sequence convergence, both of which we find evidence for, also provide a possible resolution for the apparently contradictory inferences of mutation rates in microsatellite flanking sequences.

  20. Darwin's concepts in a test tube: parallels between organismal and in vitro evolution.

    Science.gov (United States)

    Díaz Arenas, Carolina; Lehman, Niles

    2009-02-01

    The evolutionary process as imagined by Darwin 150 years ago is evident not only in nature but also in the manner in which naked nucleic acids and proteins experience the "survival of the fittest" in the test tube during in vitro evolution. This review highlights some of the most apparent evolutionary patterns, such as directional selection, purifying selection, disruptive selection, and iterative evolution (recurrence), and draws parallels between what happens in the wild with whole organisms and what happens in the lab with molecules. Advances in molecular selection techniques, particularly with catalytic RNAs and DNAs, have accelerated in the last 20 years to the point where soon any sort of complex differential hereditary event that one can ascribe to natural populations will be observable in molecular populations, and exploitation of these events can even lead to practical applications in some cases.

  1. Mixed-time parallel evolution in multiple quantum NMR experiments: sensitivity and resolution enhancement in heteronuclear NMR

    International Nuclear Information System (INIS)

    Ying Jinfa; Chill, Jordan H.; Louis, John M.; Bax, Ad

    2007-01-01

    A new strategy is demonstrated that simultaneously enhances sensitivity and resolution in three- or higher-dimensional heteronuclear multiple quantum NMR experiments. The approach, referred to as mixed-time parallel evolution (MT-PARE), utilizes evolution of chemical shifts of the spins participating in the multiple quantum coherence in parallel, thereby reducing signal losses relative to sequential evolution. The signal in a given PARE dimension, t1, is of a non-decaying constant-time nature for a duration that depends on the length of t2, and vice versa, prior to the onset of conventional exponential decay. Line shape simulations for the 1H-15N PARE indicate that this strategy significantly enhances both sensitivity and resolution in the indirect 1H dimension, and that the unusual signal decay profile results in acceptable line shapes. Incorporation of the MT-PARE approach into a 3D HMQC-NOESY experiment for measurement of HN-HN NOEs in KcsA in SDS micelles at 50 °C was found to increase the experimental sensitivity by a factor of 1.7±0.3 with a concomitant resolution increase in the indirectly detected 1H dimension. The method is also demonstrated for a situation in which homonuclear 13C-13C decoupling is required while measuring weak H3'-2'OH NOEs in an RNA oligomer.

  2. Rapid parallel evolution overcomes global honey bee parasite.

    Science.gov (United States)

    Oddie, Melissa; Büchler, Ralph; Dahle, Bjørn; Kovacic, Marin; Le Conte, Yves; Locke, Barbara; de Miranda, Joachim R; Mondet, Fanny; Neumann, Peter

    2018-05-16

    In eusocial insect colonies, nestmates cooperate to combat parasites, a trait called social immunity. However, social immunity failed for Western honey bees (Apis mellifera) when the ectoparasitic mite Varroa destructor switched hosts from Eastern honey bees (Apis cerana). This mite has since become the most severe threat to A. mellifera world-wide. Despite this, some isolated A. mellifera populations are known to survive infestations by means of natural selection, largely by suppressing mite reproduction, but the underlying mechanisms of this are poorly understood. Here, we show that a cost-effective social immunity mechanism has evolved rapidly and independently in four naturally V. destructor-surviving A. mellifera populations. Worker bees of all four 'surviving' populations uncapped/recapped worker brood cells more frequently and targeted mite-infested cells more effectively than workers in local susceptible colonies. Direct experiments confirmed the ability of uncapping/recapping to reduce mite reproductive success without sacrificing nestmates. Our results provide striking evidence that honey bees can overcome exotic parasites with simple qualitative and quantitative adaptive shifts in behaviour. Due to rapid, parallel evolution in four host populations this appears to be a key mechanism explaining survival of mite infested colonies.

  3. Parallel Sn iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (Sn) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional Sn transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial Sn algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial.
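
    For orientation, the multigroup iteration structure being parallelized can be sketched as below; the placeholder sweep routine and the Jacobi-style synchronized group loop are illustrative assumptions and do not reproduce the authors' HEP implementation or their chaotic, rebalance, or diffusion-accelerated variants.

        import numpy as np

        def source_iteration(sweep, scatter, n_groups, n_cells,
                             tol=1e-6, max_iter=200):
            """Jacobi-style multigroup source iteration for Sn transport.

            sweep(g, src) : placeholder one-group transport solve for group g
                            given a fixed scattering source of length n_cells
            scatter       : (n_groups x n_groups) group-to-group scattering matrix
            In an 'ordered' parallel scheme, each group solve in the inner loop
            is independent and can run on its own processor; a 'chaotic' scheme
            would skip the synchronization and use whichever fluxes are current.
            """
            phi = np.zeros((n_groups, n_cells))
            for _ in range(max_iter):
                source = scatter @ phi                   # build scattering sources
                phi_new = np.array([sweep(g, source[g]) for g in range(n_groups)])
                if np.max(np.abs(phi_new - phi)) < tol:  # synchronized convergence test
                    break
                phi = phi_new
            return phi_new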

  4. Silencing, positive selection and parallel evolution: busy history of primate cytochromes C.

    Science.gov (United States)

    Pierron, Denis; Opazo, Juan C; Heiske, Margit; Papper, Zack; Uddin, Monica; Chand, Gopi; Wildman, Derek E; Romero, Roberto; Goodman, Morris; Grossman, Lawrence I

    2011-01-01

    Cytochrome c (cyt c) participates in two crucial cellular processes, energy production and apoptosis, and unsurprisingly is a highly conserved protein. However, previous studies have reported for the primate lineage (i) loss of the paralogous testis isoform, (ii) an acceleration and then a deceleration of the amino acid replacement rate of the cyt c somatic isoform, and (iii) atypical biochemical behavior of human cyt c. To gain insight into the cause of these major evolutionary events, we have retraced the history of cyt c loci among primates. For testis cyt c, all primate sequences examined carry the same nonsense mutation, which suggests that silencing occurred before the primates diversified. For somatic cyt c, maximum parsimony, maximum likelihood, and Bayesian phylogenetic analyses yielded the same tree topology. The evolutionary analyses show that a fast accumulation of non-synonymous mutations (suggesting positive selection) occurred specifically on the anthropoid lineage root and then continued in parallel on the early catarrhini and platyrrhini stems. Analysis of evolutionary changes using the 3D structure suggests they are focused on the respiratory chain rather than on apoptosis or other cyt c functions. In agreement with previous biochemical studies, our results suggest that silencing of the cyt c testis isoform could be linked with the decrease of primate reproduction rate. Finally, the evolution of cyt c in the two sister anthropoid groups leads us to propose that somatic cyt c evolution may be related both to COX evolution and to the convergent brain and body mass enlargement in these two anthropoid clades.

  5. Silencing, positive selection and parallel evolution: busy history of primate cytochromes C.

    Directory of Open Access Journals (Sweden)

    Denis Pierron

    Full Text Available Cytochrome c (cyt c) participates in two crucial cellular processes, energy production and apoptosis, and unsurprisingly is a highly conserved protein. However, previous studies have reported for the primate lineage (i) loss of the paralogous testis isoform, (ii) an acceleration and then a deceleration of the amino acid replacement rate of the cyt c somatic isoform, and (iii) atypical biochemical behavior of human cyt c. To gain insight into the cause of these major evolutionary events, we have retraced the history of cyt c loci among primates. For testis cyt c, all primate sequences examined carry the same nonsense mutation, which suggests that silencing occurred before the primates diversified. For somatic cyt c, maximum parsimony, maximum likelihood, and Bayesian phylogenetic analyses yielded the same tree topology. The evolutionary analyses show that a fast accumulation of non-synonymous mutations (suggesting positive selection) occurred specifically on the anthropoid lineage root and then continued in parallel on the early catarrhini and platyrrhini stems. Analysis of evolutionary changes using the 3D structure suggests they are focused on the respiratory chain rather than on apoptosis or other cyt c functions. In agreement with previous biochemical studies, our results suggest that silencing of the cyt c testis isoform could be linked with the decrease of primate reproduction rate. Finally, the evolution of cyt c in the two sister anthropoid groups leads us to propose that somatic cyt c evolution may be related both to COX evolution and to the convergent brain and body mass enlargement in these two anthropoid clades.

  6. PARALLEL EVOLUTION OF QUASI-SEPARATRIX LAYERS AND ACTIVE REGION UPFLOWS

    Energy Technology Data Exchange (ETDEWEB)

    Mandrini, C. H.; Cristiani, G. D.; Nuevo, F. A.; Vásquez, A. M. [Instituto de Astronomía y Física del Espacio (IAFE), UBA-CONICET, CC. 67, Suc. 28 Buenos Aires, 1428 (Argentina); Baker, D.; Driel-Gesztelyi, L. van [UCL-Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, Surrey, RH5 6NT (United Kingdom); Démoulin, P.; Pick, M. [Observatoire de Paris, LESIA, UMR 8109 (CNRS), F-92195 Meudon Principal Cedex (France); Vargas Domínguez, S. [Observatorio Astronómico Nacional, Universidad Nacional de Colombia, Bogotá (Colombia)

    2015-08-10

    Persistent plasma upflows were observed with Hinode’s EUV Imaging Spectrometer (EIS) at the edges of active region (AR) 10978 as it crossed the solar disk. We analyze the evolution of the photospheric magnetic and velocity fields of the AR, model its coronal magnetic field, and compute the location of magnetic null-points and quasi-separatrix layers (QSLs) searching for the origin of EIS upflows. Magnetic reconnection at the computed null points cannot explain all of the observed EIS upflow regions. However, EIS upflows and QSLs are found to evolve in parallel, both temporally and spatially. Sections of two sets of QSLs, called outer and inner, are found to be associated with EIS upflow streams having different characteristics. The reconnection process in the outer QSLs is forced by a large-scale photospheric flow pattern, which is present in the AR for several days. We propose a scenario in which upflows are observed, provided that a large enough asymmetry in plasma pressure exists between the pre-reconnection loops and lasts as long as a photospheric forcing is at work. A similar mechanism operates in the inner QSLs; in this case, it is forced by the emergence and evolution of the bipoles between the two main AR polarities. Our findings provide strong support for the results from previous individual case studies investigating the role of magnetic reconnection at QSLs as the origin of the upflowing plasma. Furthermore, we propose that persistent reconnection along QSLs does not only drive the EIS upflows, but is also responsible for the continuous metric radio noise-storm observed in AR 10978 along its disk transit by the Nançay Radio Heliograph.

  7. Parallel evolution of TCP and B-class genes in Commelinaceae flower bilateral symmetry

    Directory of Open Access Journals (Sweden)

    Preston Jill C

    2012-03-01

    Full Text Available Abstract Background Flower bilateral symmetry (zygomorphy) has evolved multiple times independently across angiosperms and is correlated with increased pollinator specialization and speciation rates. Functional and expression analyses in distantly related core eudicots and monocots implicate independent recruitment of class II TCP genes in the evolution of flower bilateral symmetry. Furthermore, available evidence suggests that monocot flower bilateral symmetry might also have evolved through changes in B-class homeotic MADS-box gene function. Methods In order to test the non-exclusive hypotheses that changes in TCP and B-class gene developmental function underlie flower symmetry evolution in the monocot family Commelinaceae, we compared expression patterns of teosinte branched1 (TB1)-like, DEFICIENS (DEF)-like, and GLOBOSA (GLO)-like genes in morphologically distinct bilaterally symmetrical flowers of Commelina communis and Commelina dianthifolia, and radially symmetrical flowers of Tradescantia pallida. Results Expression data demonstrate that TB1-like genes are asymmetrically expressed in tepals of bilaterally symmetrical Commelina, but not radially symmetrical Tradescantia, flowers. Furthermore, DEF-like genes are expressed in showy inner tepals, staminodes and stamens of all three species, but not in the distinct outer tepal-like ventral inner tepals of C. communis. Conclusions Together with other studies, these data suggest parallel recruitment of TB1-like genes in the independent evolution of flower bilateral symmetry at early stages of Commelina flower development, and the later stage homeotic transformation of C. communis inner tepals into outer tepals through the loss of DEF-like gene expression.

  8. Repeated and Widespread Evolution of Bioluminescence in Marine Fishes.

    Directory of Open Access Journals (Sweden)

    Matthew P Davis

    Full Text Available Bioluminescence is primarily a marine phenomenon with 80% of metazoan bioluminescent genera occurring in the world's oceans. Here we show that bioluminescence has evolved repeatedly and is phylogenetically widespread across ray-finned fishes. We recover 27 independent evolutionary events of bioluminescence, all among marine fish lineages. This finding indicates that bioluminescence has evolved many more times than previously hypothesized across fishes and the tree of life. Our exploration of the macroevolutionary patterns of bioluminescent lineages indicates that the present day diversity of some inshore and deep-sea bioluminescent fish lineages that use bioluminescence for communication, feeding, and reproduction exhibit exceptional species richness given clade age. We show that exceptional species richness occurs particularly in deep-sea fishes with intrinsic bioluminescent systems and both shallow water and deep-sea lineages with luminescent systems used for communication.

  9. Improvement of remote monitoring on water quality in a subtropical reservoir by incorporating grammatical evolution with parallel genetic algorithms into satellite imagery.

    Science.gov (United States)

    Chen, Li; Tan, Chih-Hung; Kao, Shuh-Ji; Wang, Tai-Sheng

    2008-01-01

    Parallel GEGA was constructed by incorporating grammatical evolution (GE) into a parallel genetic algorithm (GA) to improve reservoir water quality monitoring based on remote sensing images. A cruise was conducted to ground-truth chlorophyll-a (Chl-a) concentrations longitudinally along the Feitsui Reservoir, the primary water supply for Taipei City in Taiwan. Empirical functions with multiple spectral parameters from Landsat 7 Enhanced Thematic Mapper (ETM+) data were constructed. GE, an evolutionary automatic programming system, automatically discovers complex nonlinear mathematical relationships between observed Chl-a concentrations and remotely sensed imagery. A GA was then used with GE to select the most appropriate function type. Several parallel subpopulations were processed to enhance search efficiency during the GA optimization procedure. The parallel GEGA model outperformed a traditional linear multiple regression (LMR) model, yielding lower estimation errors.
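
    The genotype-to-expression mapping at the core of grammatical evolution can be sketched as below; the toy grammar, band names, and codon values are hypothetical and are far simpler than the multi-band spectral functions evolved in the study.

        # Toy BNF grammar: each nonterminal maps to a list of alternative expansions.
        GRAMMAR = {
            "<expr>": [["<expr>", "<op>", "<expr>"], ["<band>"]],
            "<op>":   [["+"], ["-"], ["*"], ["/"]],
            "<band>": [["B1"], ["B2"], ["B3"], ["B4"]],  # hypothetical spectral bands
        }

        def ge_map(codons, symbol="<expr>", max_depth=10):
            """Map integer codons to an expression string by picking each
            production rule with (codon mod number-of-alternatives)."""
            if symbol not in GRAMMAR:              # terminal symbol
                return symbol, codons
            if max_depth == 0 or not codons:
                choice = GRAMMAR[symbol][-1]       # fall back to a terminal rule
            else:
                choice = GRAMMAR[symbol][codons[0] % len(GRAMMAR[symbol])]
                codons = codons[1:]
            parts = []
            for sym in choice:
                part, codons = ge_map(codons, sym, max_depth - 1)
                parts.append(part)
            return " ".join(parts), codons

        expression, _ = ge_map([0, 1, 2, 0, 3, 1])
        print(expression)   # -> "B3 + B2"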

  10. A software for parameter optimization with Differential Evolution Entirely Parallel method

    Directory of Open Access Journals (Sweden)

    Konstantin Kozlov

    2016-08-01

    Full Text Available Summary. The Differential Evolution Entirely Parallel (DEEP) package is software for finding unknown real and integer parameters in dynamical models of biological processes by minimizing one or even several objective functions that measure the deviation of the model solution from data. Numerical solutions provided by the most efficient global optimization methods are often problem-specific and cannot be easily adapted to other tasks. In contrast, DEEP allows a user to describe both the mathematical model and the objective function in any programming language, such as R, Octave or Python, among others. Being implemented in C, DEEP demonstrates performance as good as the top three methods from the CEC-2014 (Competition on Evolutionary Computation) benchmark and was successfully applied to several biological problems. Availability. The DEEP method is open-source, free software distributed under the terms of the GPL licence version 3. The sources are available at http://deepmethod.sourceforge.net/ and binary packages for Fedora GNU/Linux are provided for the RPM package manager at https://build.opensuse.org/project/repositories/home:mackoel:compbio.
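
    For orientation, the canonical DE/rand/1/bin update that differential evolution optimizers build on is sketched below; this is a generic illustration only and does not reproduce DEEP's implementation or API, which additionally evaluates objectives in parallel and delegates the model code to R, Octave or Python.

        import numpy as np

        def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                                   n_gens=100, seed=0):
            """Minimize `objective` with the classic DE/rand/1/bin scheme."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            dim = lo.size
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            cost = np.array([objective(x) for x in pop])
            for _ in range(n_gens):
                for i in range(pop_size):
                    # Mutation: combine three distinct individuals other than i.
                    others = [j for j in range(pop_size) if j != i]
                    a, b, c = pop[rng.choice(others, size=3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)
                    # Binomial crossover, keeping at least one mutant component.
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True
                    trial = np.where(cross, mutant, pop[i])
                    # Greedy selection.
                    trial_cost = objective(trial)
                    if trial_cost <= cost[i]:
                        pop[i], cost[i] = trial, trial_cost
            best = int(np.argmin(cost))
            return pop[best], cost[best]

        # Example: recover two parameters of a toy model by least squares.
        params, err = differential_evolution(
            lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
            bounds=[(-5.0, 5.0), (-5.0, 5.0)])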

  11. Badlands: A parallel basin and landscape dynamics model

    Directory of Open Access Journals (Sweden)

    T. Salles

    2016-01-01

    Full Text Available Over more than three decades, a number of numerical landscape evolution models (LEMs) have been developed to study the combined effects of climate, sea-level, tectonics and sediments on Earth surface dynamics. Most of them are written in efficient programming languages, but often cannot be used on parallel architectures. Here, I present a LEM which ports a common core of accepted physical principles governing landscape evolution into a distributed memory parallel environment. Badlands (acronym for BAsin anD LANdscape DynamicS) is an open-source, flexible, TIN-based landscape evolution model, built to simulate topography development at various space and time scales.
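
    A commonly used form of the governing equation in such landscape evolution models combines tectonic uplift, stream-power incision and hillslope diffusion; this generic formulation is given only for orientation and is not claimed to be the exact set of equations implemented in Badlands:

        \frac{\partial z}{\partial t} \;=\; U \;-\; K\,A^{m} S^{n} \;+\; \kappa\,\nabla^{2} z ,

    where z is surface elevation, U the uplift rate, A the drainage area, S the local slope, K an erodibility coefficient, m and n empirical exponents, and kappa a hillslope diffusivity.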

  12. Parallel evolution of tetrodotoxin resistance in three voltage-gated sodium channel genes in the garter snake Thamnophis sirtalis.

    Science.gov (United States)

    McGlothlin, Joel W; Chuckalovcak, John P; Janes, Daniel E; Edwards, Scott V; Feldman, Chris R; Brodie, Edmund D; Pfrender, Michael E; Brodie, Edmund D

    2014-11-01

    Members of a gene family expressed in a single species often experience common selection pressures. Consequently, the molecular basis of complex adaptations may be expected to involve parallel evolutionary changes in multiple paralogs. Here, we use bacterial artificial chromosome library scans to investigate the evolution of the voltage-gated sodium channel (Nav) family in the garter snake Thamnophis sirtalis, a predator of highly toxic Taricha newts. Newts possess tetrodotoxin (TTX), which blocks Nav's, arresting action potentials in nerves and muscle. Some Thamnophis populations have evolved resistance to extremely high levels of TTX. Previous work has identified amino acid sites in the skeletal muscle sodium channel Nav1.4 that confer resistance to TTX and vary across populations. We identify parallel evolution of TTX resistance in two additional Nav paralogs, Nav1.6 and 1.7, which are known to be expressed in the peripheral nervous system and should thus be exposed to ingested TTX. Each paralog contains at least one TTX-resistant substitution identical to a substitution previously identified in Nav1.4. These sites are fixed across populations, suggesting that the resistant peripheral nerves antedate resistant muscle. In contrast, three sodium channels expressed solely in the central nervous system (Nav1.1-1.3) showed no evidence of TTX resistance, consistent with protection from toxins by the blood-brain barrier. We also report the exon-intron structure of six Nav paralogs, the first such analysis for snake genes. Our results demonstrate that the molecular basis of adaptation may be both repeatable across members of a gene family and predictable based on functional considerations. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  13. Massively parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Krasheninnikov, S.I.; Craddock, G.G.; Djordjevic, V.

    1996-01-01

    The Fokker-Planck code ALLA, recently developed for workstations, simulates the temporal evolution of 1V, 2V and 1D2V collisional edge plasmas. In this work we present the results of parallelizing the code on the CRI T3D massively parallel platform (the ALLAp version). We also benchmark the 1D2V parallel version against an analytic self-similar solution of the collisional kinetic equation. This test is not trivial, as it demands a very strong spatial temperature and density variation within the simulation domain. (orig.)

  14. Convergent Evolution of Hemoglobin Function in High-Altitude Andean Waterfowl Involves Limited Parallelism at the Molecular Sequence Level.

    Directory of Open Access Journals (Sweden)

    Chandrasekhar Natarajan

    2015-12-01

    Full Text Available A fundamental question in evolutionary genetics concerns the extent to which adaptive phenotypic convergence is attributable to convergent or parallel changes at the molecular sequence level. Here we report a comparative analysis of hemoglobin (Hb) function in eight phylogenetically replicated pairs of high- and low-altitude waterfowl taxa to test for convergence in the oxygenation properties of Hb, and to assess the extent to which convergence in biochemical phenotype is attributable to repeated amino acid replacements. Functional experiments on native Hb variants and protein engineering experiments based on site-directed mutagenesis revealed the phenotypic effects of specific amino acid replacements that were responsible for convergent increases in Hb-O2 affinity in multiple high-altitude taxa. In six of the eight taxon pairs, high-altitude taxa evolved derived increases in Hb-O2 affinity that were caused by a combination of unique replacements, parallel replacements (involving identical-by-state variants with independent mutational origins in different lineages), and collateral replacements (involving shared, identical-by-descent variants derived via introgressive hybridization). In genome scans of nucleotide differentiation involving high- and low-altitude populations of three separate species, function-altering amino acid polymorphisms in the globin genes emerged as highly significant outliers, providing independent evidence for adaptive divergence in Hb function. The experimental results demonstrate that convergent changes in protein function can occur through multiple historical paths, and can involve multiple possible mutations. Most cases of convergence in Hb function did not involve parallel substitutions and most parallel substitutions did not affect Hb-O2 affinity, indicating that the repeatability of phenotypic evolution does not require parallelism at the molecular level.

  15. Parallel evolution under chemotherapy pressure in 29 breast cancer cell lines results in dissimilar mechanisms of resistance.

    Directory of Open Access Journals (Sweden)

    Bálint Tegze

    Full Text Available BACKGROUND: Developing chemotherapy-resistant cell lines can help to identify markers of resistance. Instead of using a panel of highly heterogeneous cell lines, we assumed that a truly robust and convergent pattern of resistance can be identified in multiple parallel engineered derivatives of only a few parental cell lines. METHODS: Parallel cell populations were initiated for two breast cancer cell lines (MDA-MB-231 and MCF-7) and these were treated independently for 18 months with doxorubicin or paclitaxel. IC50 values against 4 chemotherapy agents were determined to measure cross-resistance. Chromosomal instability and karyotypic changes were determined by cytogenetics. TaqMan RT-PCR measurements were performed for resistance-candidate genes. Pgp activity was measured by FACS. RESULTS: Altogether 16 doxorubicin- and 13 paclitaxel-treated cell lines were developed, showing 2-46-fold and 3-28-fold increases in resistance, respectively. The RT-PCR and FACS analyses confirmed changes in tubulin isoform composition, TOP2A and MVP expression and activity of transport pumps (ABCB1, ABCG2). Cytogenetics showed fewer chromosomes but more structural aberrations in the resistant cells. CONCLUSION: We surpassed previous studies by developing a massive number of cell lines in parallel to investigate chemoresistance. While the heterogeneity caused evolution of multiple resistant clones with different resistance characteristics, the activation of only a few mechanisms was sufficient in one cell line to achieve resistance.

  16. Molecular and morphological systematics of the Ellisellidae (Coelenterata: Octocorallia): Parallel evolution in a globally distributed family of octocorals

    KAUST Repository

    Bilewitch, Jaret P.

    2014-04-01

    The octocorals of the Ellisellidae constitute a diverse and widely distributed family with subdivisions into genera based on colonial growth forms. Branching patterns are repeated in several genera and congeners often display region-specific variations in a given growth form. We examined the systematic patterns of ellisellid genera and the evolution of branching form diversity using molecular phylogenetic and ancestral morphological reconstructions. Six of eight included genera were found to be polyphyletic due to biogeographical incompatibility with current taxonomic assignments and the creation of at least six new genera plus several reassignments among existing genera is necessary. Phylogenetic patterns of diversification of colony branching morphology displayed a similar transformation order in each of the two primary ellisellid clades, with a sea fan form estimated as the most-probable common ancestor with likely origins in the Indo-Pacific region. The observed parallelism in evolution indicates the existence of a constraint on the genetic elements determining ellisellid colonial morphology. However, the lack of correspondence between levels of genetic divergence and morphological diversity among genera suggests that future octocoral studies should focus on the role of changes in gene regulation in the evolution of branching patterns. © 2014 Elsevier Inc.

  17. Molecular and morphological systematics of the Ellisellidae (Coelenterata: Octocorallia): Parallel evolution in a globally distributed family of octocorals

    KAUST Repository

    Bilewitch, Jaret P.; Ekins, Merrick; Hooper, John; Degnan, Sandie M.

    2014-01-01

    The octocorals of the Ellisellidae constitute a diverse and widely distributed family with subdivisions into genera based on colonial growth forms. Branching patterns are repeated in several genera and congeners often display region-specific variations in a given growth form. We examined the systematic patterns of ellisellid genera and the evolution of branching form diversity using molecular phylogenetic and ancestral morphological reconstructions. Six of the eight included genera were found to be polyphyletic, owing to the incompatibility of current taxonomic assignments with biogeography, and the creation of at least six new genera plus several reassignments among existing genera is necessary. Phylogenetic patterns of diversification of colony branching morphology displayed a similar transformation order in each of the two primary ellisellid clades, with a sea fan form estimated as the most-probable common ancestor with likely origins in the Indo-Pacific region. The observed parallelism in evolution indicates the existence of a constraint on the genetic elements determining ellisellid colonial morphology. However, the lack of correspondence between levels of genetic divergence and morphological diversity among genera suggests that future octocoral studies should focus on the role of changes in gene regulation in the evolution of branching patterns. © 2014 Elsevier Inc.

  18. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics: Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. The book covers the systematic classification of parallel mechanisms (PMs) and provides a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms, the so-called parallel kinematics machines, has become an emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  19. Argentina's experience with parallel exchange markets: 1981-1990

    OpenAIRE

    Steven B. Kamin

    1991-01-01

    This paper surveys the development and operation of the parallel exchange market in Argentina during the 1980s, and evaluates its impact upon macroeconomic performance and policy. The historical evolution of Argentina's exchange market policies is reviewed in order to understand the government's motives for imposing exchange controls. The parallel exchange market engendered by these controls is then analyzed, and econometric methods are used to evaluate the behavior of the parallel exchange r...

  20. Parallel evolution of a type IV secretion system in radiating lineages of the host-restricted bacterial pathogen Bartonella.

    Science.gov (United States)

    Engel, Philipp; Salzburger, Walter; Liesch, Marius; Chang, Chao-Chin; Maruyama, Soichi; Lanz, Christa; Calteau, Alexandra; Lajus, Aurélie; Médigue, Claudine; Schuster, Stephan C; Dehio, Christoph

    2011-02-10

    Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial pathogens

  1. Curious parallels and curious connections--phylogenetic thinking in biology and historical linguistics.

    Science.gov (United States)

    Atkinson, Quentin D; Gray, Russell D

    2005-08-01

    In The Descent of Man (1871), Darwin observed "curious parallels" between the processes of biological and linguistic evolution. These parallels mean that evolutionary biologists and historical linguists seek answers to similar questions and face similar problems. As a result, the theory and methodology of the two disciplines have evolved in remarkably similar ways. In addition to Darwin's curious parallels of process, there are a number of equally curious parallels and connections between the development of methods in biology and historical linguistics. Here we briefly review the parallels between biological and linguistic evolution and contrast the historical development of phylogenetic methods in the two disciplines. We then look at a number of recent studies that have applied phylogenetic methods to language data and outline some current problems shared by the two fields.

  2. Parallel programming with Easy Java Simulations

    Science.gov (United States)

    Esquembre, F.; Christian, W.; Belloni, M.

    2018-01-01

    Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.
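
    The record's own examples use Java and the Easy Java Simulations tool; purely as a language-neutral sketch of the same teaching idea (a familiar quantum time-evolution problem distributed across workers), the hypothetical Python snippet below evolves several free-particle Gaussian wave packets in parallel with an exact Fourier-space kinetic propagator:

```python
# Sketch (not the EJS code from the record): evolve independent free-particle
# wave packets in parallel, one worker per initial momentum, using the exact
# kinetic propagator in Fourier space (hbar = m = 1).
import numpy as np
from concurrent.futures import ProcessPoolExecutor

N, L, T = 1024, 40.0, 2.0                      # grid points, box length, final time
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers

def evolve_packet(k0):
    """Return final |psi|^2 for a Gaussian packet with initial momentum k0."""
    psi0 = np.exp(-x**2) * np.exp(1j * k0 * x)          # initial wave packet
    psi0 /= np.sqrt(np.trapz(np.abs(psi0)**2, x))       # normalize
    psi_t = np.fft.ifft(np.exp(-0.5j * k**2 * T) * np.fft.fft(psi0))
    return k0, np.abs(psi_t)**2

if __name__ == "__main__":
    momenta = [0.0, 1.0, 2.0, 4.0]
    with ProcessPoolExecutor() as pool:
        for k0, density in pool.map(evolve_packet, momenta):
            print(f"k0 = {k0}: peak density {density.max():.4f} "
                  f"at x = {x[np.argmax(density)]:.2f}")
```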

  3. Parallel and convergent evolution of the dim-light vision gene RH1 in bats (Order: Chiroptera).

    Science.gov (United States)

    Shen, Yong-Yi; Liu, Jie; Irwin, David M; Zhang, Ya-Ping

    2010-01-21

    Rhodopsin, encoded by the gene Rhodopsin (RH1), is extremely sensitive to light, and is responsible for dim-light vision. Bats are nocturnal mammals that inhabit poor light environments. Megabats (Old-World fruit bats) generally have well-developed eyes, while microbats (insectivorous bats) have developed echolocation and in general their eyes are degraded; however, dramatic differences in the eyes, and in reliance on vision, exist within this group. In this study, we examined the rod opsin gene (RH1), and compared its evolution to that of two cone opsin genes (SWS1 and M/LWS). While phylogenetic reconstruction with the cone opsin genes SWS1 and M/LWS generated a species tree in accord with expectations, the RH1 gene tree united Pteropodidae (Old-World fruit bats) and Yangochiroptera, with very high bootstrap values, suggesting the possibility of convergent evolution. The hypothesis of convergent evolution was further supported when nonsynonymous sites or amino acid sequences were used to construct phylogenies. Reconstructed RH1 sequences at internal nodes of the bat species phylogeny showed that: (1) Old-World fruit bats share an amino acid change (S270G) with the tomb bat; (2) Miniopterus share two amino acid changes (V104I, M183L) with Rhinolophoidea; (3) the amino acid replacement I123V occurred independently on four branches, and the replacements L99M, L266V and I286V each occurred on two branches. The multiple parallel amino acid replacements that occurred in the evolution of bat RH1 suggest the possibility of multiple convergences of their ecological specialization (i.e., various photic environments) during adaptation to the nocturnal lifestyle, and suggest that further attention should be paid to the ecology and behavior of bats.

  4. The tad locus: postcards from the widespread colonization island.

    Science.gov (United States)

    Tomich, Mladen; Planet, Paul J; Figurski, David H

    2007-05-01

    The Tad (tight adherence) macromolecular transport system, which is present in many bacterial and archaeal species, represents an ancient and major new subtype of type II secretion. The tad genes are present on a genomic island named the widespread colonization island (WCI), and encode the machinery that is required for the assembly of adhesive Flp (fimbrial low-molecular-weight protein) pili. The tad genes are essential for biofilm formation, colonization and pathogenesis in the genera Aggregatibacter (Actinobacillus), Haemophilus, Pasteurella, Pseudomonas, Yersinia, Caulobacter and perhaps others. Here we review the structure, function and evolution of the Tad secretion system.

  5. Parallel Evolution of a Type IV Secretion System in Radiating Lineages of the Host-Restricted Bacterial Pathogen Bartonella

    Science.gov (United States)

    Engel, Philipp; Salzburger, Walter; Liesch, Marius; Chang, Chao-Chin; Maruyama, Soichi; Lanz, Christa; Calteau, Alexandra; Lajus, Aurélie; Médigue, Claudine; Schuster, Stephan C.; Dehio, Christoph

    2011-01-01

    Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial pathogens

  6. Parallel evolution of a type IV secretion system in radiating lineages of the host-restricted bacterial pathogen Bartonella.

    Directory of Open Access Journals (Sweden)

    Philipp Engel

    2011-02-01

    Full Text Available Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial

  7. Outburst flood evolution at Russell Glacier, western Greenland

    DEFF Research Database (Denmark)

    Carrivick, Jonathan L.; Turner, Andy G.D.; Russell, Andrew J.

    2013-01-01

    Glacial lake outburst floods have produced a distinctive and widespread Quaternary record both onshore and offshore via widespread and intense geomorphological impacts, yet these impacts remain poorly understood due to a lack of modern analogues. This study therefore makes a systematic quantification of the evolution of a bedrock-channelled outburst flood. Channel topography was obtained from digitised aerial photographs, a 5 m grid resolution DEM and bathymetric surveys. Flood inundation was measured in the field from dGPS measurements. Flood evolution was analysed with application ... of including intermediary lakes. Modern hazard mitigation studies could usefully note the potential use of reservoirs as an outburst flood alleviation resource.

  8. Identification of Novel Betaherpesviruses in Iberian Bats Reveals Parallel Evolution.

    Directory of Open Access Journals (Sweden)

    Francisco Pozo

    Full Text Available A thorough search for bat herpesviruses was carried out in oropharyngeal samples taken from most of the bat species present in the Iberian Peninsula from the Vespertilionidae, Miniopteridae, Molossidae and Rhinolophidae families, in addition to a colony of captive fruit bats from the Pteropodidae family. By using two degenerate consensus PCR methods targeting two conserved genes, distinct and previously unrecognized bat-hosted herpesviruses were identified for most of the tested species. Altogether, a total of 42 potentially novel bat herpesviruses were partially characterized. Thirty-two of them were tentatively assigned to the Betaherpesvirinae subfamily while the remaining 10 were allocated into the Gammaherpesvirinae subfamily. Significant diversity was observed among the novel sequences when compared with type herpesvirus species of the ICTV-approved genera. The inferred phylogenetic relationships showed that most of the betaherpesvirus sequences fell into a well-supported, unique monophyletic clade, supporting the recognition of a new betaherpesvirus genus. This clade is subdivided into three major clades, corresponding to the families of bats studied. This supports the hypothesis of a species-specific parallel evolution process between the potentially new betaherpesviruses and their bat hosts. Interestingly, two of the betaherpesvirus sequences detected in rhinolophid bats clustered together apart from the rest, closely related to viruses that belong to the Roseolovirus genus. This suggests a putative third roseolo lineage. On the contrary, no phylogenetic structure was detected among several potentially novel bat-hosted gammaherpesviruses found in the study. Remarkably, all of the possible novel bat herpesviruses described in this study are linked to a unique bat species.

  9. Identification of Novel Betaherpesviruses in Iberian Bats Reveals Parallel Evolution.

    Science.gov (United States)

    Pozo, Francisco; Juste, Javier; Vázquez-Morón, Sonia; Aznar-López, Carolina; Ibáñez, Carlos; Garin, Inazio; Aihartza, Joxerra; Casas, Inmaculada; Tenorio, Antonio; Echevarría, Juan Emilio

    2016-01-01

    A thorough search for bat herpesviruses was carried out in oropharyngeal samples taken from most of the bat species present in the Iberian Peninsula from the Vespertilionidae, Miniopteridae, Molossidae and Rhinolophidae families, in addition to a colony of captive fruit bats from the Pteropodidae family. By using two degenerate consensus PCR methods targeting two conserved genes, distinct and previously unrecognized bat-hosted herpesviruses were identified for most of the tested species. Altogether, a total of 42 potentially novel bat herpesviruses were partially characterized. Thirty-two of them were tentatively assigned to the Betaherpesvirinae subfamily while the remaining 10 were allocated into the Gammaherpesvirinae subfamily. Significant diversity was observed among the novel sequences when compared with type herpesvirus species of the ICTV-approved genera. The inferred phylogenetic relationships showed that most of the betaherpesvirus sequences fell into a well-supported, unique monophyletic clade, supporting the recognition of a new betaherpesvirus genus. This clade is subdivided into three major clades, corresponding to the families of bats studied. This supports the hypothesis of a species-specific parallel evolution process between the potentially new betaherpesviruses and their bat hosts. Interestingly, two of the betaherpesvirus sequences detected in rhinolophid bats clustered together apart from the rest, closely related to viruses that belong to the Roseolovirus genus. This suggests a putative third roseolo lineage. On the contrary, no phylogenetic structure was detected among several potentially novel bat-hosted gammaherpesviruses found in the study. Remarkably, all of the possible novel bat herpesviruses described in this study are linked to a unique bat species.

  10. The Voltage-Gated Potassium Channel Subfamily KQT Member 4 (KCNQ4) Displays Parallel Evolution in Echolocating Bats

    Science.gov (United States)

    Liu, Yang; Han, Naijian; Franchini, Lucía F.; Xu, Huihui; Pisciottano, Francisco; Elgoyhen, Ana Belén; Rajan, Koilmani Emmanuvel; Zhang, Shuyi

    2012-01-01

    Bats are the only mammals that use highly developed laryngeal echolocation, a sensory mechanism based on the ability to emit laryngeal sounds and interpret the returning echoes to identify objects. Although this capability allows bats to orientate and hunt in complete darkness, endowing them with great survival advantages, the genetic bases underlying the evolution of bat echolocation are still largely unknown. Echolocation requires high-frequency hearing that in mammals is largely dependent on somatic electromotility of outer hair cells. Then, understanding the molecular evolution of outer hair cell genes might help to unravel the evolutionary history of echolocation. In this work, we analyzed the molecular evolution of two key outer hair cell genes: the voltage-gated potassium channel gene KCNQ4 and CHRNA10, the gene encoding the α10 nicotinic acetylcholine receptor subunit. We reconstructed the phylogeny of bats based on KCNQ4 and CHRNA10 protein and nucleotide sequences. A phylogenetic tree built using KCNQ4 amino acid sequences showed that two paraphyletic clades of laryngeal echolocating bats grouped together, with eight shared substitutions among particular lineages. In addition, our analyses indicated that two of these parallel substitutions, M388I and P406S, were probably fixed under positive selection and could have had a strong functional impact on KCNQ4. Moreover, our results indicated that KCNQ4 evolved under positive selection in the ancestral lineage leading to mammals, suggesting that this gene might have been important for the evolution of mammalian hearing. On the other hand, we found that CHRNA10, a gene that evolved adaptively in the mammalian lineage, was under strong purifying selection in bats. Thus, the CHRNA10 amino acid tree did not show echolocating bat monophyly and reproduced the bat species tree. These results suggest that only a subset of hearing genes could underlie the evolution of echolocation. The present work continues to

  11. Evolution of Parallel Spindles Like genes in plants and highlight of unique domain architecture

    Directory of Open Access Journals (Sweden)

    Consiglio Federica M

    2011-03-01

    Full Text Available Abstract Background Polyploidy has long been recognized as playing an important role in plant evolution. In flowering plants, the major route of polyploidization is suggested to be sexual through gametes with somatic chromosome number (2n). Parallel Spindle1 gene in Arabidopsis thaliana (AtPS1) was recently demonstrated to control spindle orientation in the 2nd division of meiosis and, when mutated, to induce 2n pollen. Interestingly, AtPS1 encodes a protein with a FHA domain and PINc domain putatively involved in RNA decay (i.e. Nonsense Mediated mRNA Decay). In potato, 2n pollen depending on parallel spindles was described a long time ago but the responsible gene has never been isolated. The knowledge derived from AtPS1 as well as the availability of genome sequences makes it possible to isolate potato PSLike (PSL) and to highlight the evolution of the PSL family in plants. Results Our work leading to the first characterization of PSLs in potato showed a greater PSL complexity in this species with respect to Arabidopsis thaliana. Indeed, a genomic PSL locus and seven cDNAs affected by alternative splicing have been cloned. In addition, the occurrence of at least two other PSL loci in potato was suggested by the sequence comparison of alternatively spliced transcripts. Phylogenetic analysis on 20 Viridaeplantae showed the wide distribution of PSLs throughout the species and the occurrence of multiple copies only in potato and soybean. The analysis of PSLFHA and PSLPINc domains evidenced that, in terms of secondary structure, a major degree of variability occurred in the PINc domain with respect to FHA. In terms of specific active sites, both domains showed diversification among plant species that could be related to a functional diversification among PSL genes. In addition, some specific active sites were strongly conserved among plants as supported by sequence alignment and by evidence of negative selection evaluated as difference between non-synonymous and

  12. Parallel or convergent evolution in human population genomic data revealed by genotype networks.

    Science.gov (United States)

    R Vahdati, Ali; Wagner, Andreas

    2016-08-02

    Genotype networks are representations of genetic variation data that are complementary to phylogenetic trees. A genotype network is a graph whose nodes are genotypes (DNA sequences) with the same broadly defined phenotype. Two nodes are connected if they differ in some minimal way, e.g., in a single nucleotide. We analyze human genome variation data from the 1000 Genomes Project, and construct haploid genotype (haplotype) networks for 12,235 protein-coding genes. The structure of these networks varies widely among genes, indicating different patterns of variation despite a shared evolutionary history. We focus on those genes whose genotype networks show many cycles, which can indicate homoplasy, i.e., parallel or convergent evolution, on the sequence level. For 42 genes, the observed number of cycles is so large that it cannot be explained by either chance homoplasy or recombination. When analyzing possible explanations, we discovered evidence for positive selection in 21 of these genes and, in addition, a potential role for constrained variation and purifying selection. Balancing selection plays at most a small role. The 42 genes with excess cycles are enriched in functions related to immunity and response to pathogens. Genotype networks are representations of genetic variation data that can help understand unusual patterns of genomic variation.
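
    To make the genotype-network construction concrete, the sketch below builds a small haplotype network (toy sequences, not 1000 Genomes data) by connecting haplotypes that differ at a single site and then counts independent cycles, the quantity the study uses to flag possible homoplasy or recombination:

```python
# Sketch: build a haplotype (genotype) network and count independent cycles.
# Haplotypes are toy strings; real analyses would use phased 1000 Genomes data.
import itertools
import networkx as nx

haplotypes = ["AAGT", "AAGC", "ATGC", "ATGT", "CAGT", "CTGT"]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

G = nx.Graph()
G.add_nodes_from(haplotypes)
for h1, h2 in itertools.combinations(haplotypes, 2):
    if hamming(h1, h2) == 1:          # connect haplotypes one mutation apart
        G.add_edge(h1, h2)

cycles = nx.cycle_basis(G)            # independent cycles of the network
print(f"{G.number_of_nodes()} nodes, {G.number_of_edges()} edges, "
      f"{len(cycles)} independent cycle(s)")
for cyc in cycles:
    print("cycle:", " - ".join(cyc))
```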

  13. Parallel evolution of the glycogen synthase 1 (muscle) gene Gys1 between Old World and New World fruit bats (Order: Chiroptera).

    Science.gov (United States)

    Fang, Lu; Shen, Bin; Irwin, David M; Zhang, Shuyi

    2014-10-01

    Glycogen synthase, which catalyzes the synthesis of glycogen, is especially important for Old World (Pteropodidae) and New World (Phyllostomidae) fruit bats that ingest high-carbohydrate diets. Glycogen synthase 1, encoded by the Gys1 gene, is the glycogen synthase isozyme that functions in muscles. To determine whether Gys1 has undergone adaptive evolution in bats with carbohydrate-rich diets, in comparison to insect-eating sister bat taxa, we sequenced the coding region of the Gys1 gene from 10 species of bats, including two Old World fruit bats (Pteropodidae) and a New World fruit bat (Phyllostomidae). Our results show no evidence for positive selection in the Gys1 coding sequence on the ancestral Old World and the New World Artibeus lituratus branches. Tests for convergent evolution indicated convergence of the sequences and one parallel amino acid substitution (T395A) was detected on these branches, which was likely driven by natural selection.

  14. Parallel PDE-Based Simulations Using the Common Component Architecture

    International Nuclear Information System (INIS)

    McInnes, Lois C.; Allan, Benjamin A.; Armstrong, Robert; Benson, Steven J.; Bernholdt, David E.; Dahlgren, Tamara L.; Diachin, Lori; Krishnan, Manoj Kumar; Kohl, James A.; Larson, J. Walter; Lefantzi, Sophia; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G.; Ray, Jaideep; Zhou, Shujia

    2006-01-01

    The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications

  15. Complex dynamics underlie the evolution of imperfect wing pattern convergence in butterflies.

    Science.gov (United States)

    Finkbeiner, Susan D; Briscoe, Adriana D; Mullen, Sean P

    2017-04-01

    Adaptive radiation is characterized by rapid diversification that is strongly associated with ecological specialization. However, understanding the evolutionary mechanisms fueling adaptive diversification requires a detailed knowledge of how natural selection acts at multiple life-history stages. Butterflies within the genus Adelpha represent one of the largest and most diverse butterfly lineages in the Neotropics. Although Adelpha species feed on an extraordinary diversity of larval hosts, convergent evolution is widespread in this group, suggesting that selection for mimicry may contribute to adaptive divergence among species. To investigate this hypothesis, we conducted predation studies in Costa Rica using artificial butterfly facsimiles. Specifically, we predicted that nontoxic, palatable Adelpha species that do not feed on host plants in the family Rubiaceae would benefit from sharing a locally convergent wing pattern with the presumably toxic Rubiaceae-feeding species via reduced predation. Contrary to expectations, we found that the presumed mimic was attacked significantly more than its locally convergent model at a frequency paralleling attack rates on both novel and palatable prey. Although these data reveal the first evidence for protection from avian predators by the supposedly toxic, Rubiaceae-feeding Adelpha species, we conclude that imprecise mimetic patterns have high costs for Batesian mimics in the tropics. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  16. Deep sequencing of amplicons reveals widespread intraspecific hybridization and multiple origins of polyploidy in big sagebrush (Artemisia tridentata, Asteraceae)

    Science.gov (United States)

    Bryce A. Richardson; Justin T. Page; Prabin Bajgain; Stewart C. Sanderson; Joshua A. Udall

    2012-01-01

    Premise of the study: Hybridization has played an important role in the evolution and ecological adaptation of diploid and polyploid plants. Artemisia tridentata (Asteraceae) tetraploids are extremely widespread and of great ecological importance. These tetraploids are often taxonomically identified as A. tridentata subsp. wyomingensis or as autotetraploids of diploid...

  17. Evolution of a minimal parallel programming model

    International Nuclear Information System (INIS)

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    2017-01-01

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.

  18. Collective Landmarks for Deep Time: A New Tool for Evolution Education

    Science.gov (United States)

    Delgado, Cesar

    2014-01-01

    Evolution is a fundamental, organising concept in biology, yet there is widespread resistance to evolution among US students and there are rising creationist challenges in Europe. Resistance to evolution is linked to lack of understanding of the age of the Earth. An understanding of deep time is thus essential for effective biology education.…

  19. Experimental evolution and the dynamics of adaptation and genome evolution in microbial populations.

    Science.gov (United States)

    Lenski, Richard E

    2017-10-01

    Evolution is an on-going process, and it can be studied experimentally in organisms with rapid generations. My team has maintained 12 populations of Escherichia coli in a simple laboratory environment for >25 years and 60 000 generations. We have quantified the dynamics of adaptation by natural selection, seen some of the populations diverge into stably coexisting ecotypes, described changes in the bacteria's mutation rate, observed the new ability to exploit a previously untapped carbon source, characterized the dynamics of genome evolution and used parallel evolution to identify the genetic targets of selection. I discuss what the future might hold for this particular experiment, briefly highlight some other microbial evolution experiments and suggest how the fields of experimental evolution and microbial ecology might intersect going forward.

  20. Parallel computing solution of Boltzmann neutron transport equation

    International Nuclear Information System (INIS)

    Ansah-Narh, T.

    2010-01-01

    The focus of the research was on developing a parallel computing algorithm for solving eigenvalues of the Boltzmann Neutron Transport Equation (BNTE) in a slab geometry using a multi-grid approach. In response to the slow execution of serial computing when solving large problems such as the BNTE, the study focused on the design of parallel computing systems, an evolution of serial computing that uses multiple processing elements simultaneously to solve complex physical and mathematical problems. The finite element method (FEM) was used for the spatial discretization scheme, while angular discretization was accomplished by expanding the angular dependence in terms of Legendre polynomials. The eigenvalues representing the multiplication factors in the BNTE were determined by the power method. MATLAB Compiler Version 4.1 (R2009a) was used to compile the MATLAB codes of the BNTE. The implemented parallel algorithms were enabled with matlabpool, a Parallel Computing Toolbox function. The option UseParallel was set to 'always'; the default value of the option is 'never'. When those conditions held, the solvers computed estimated gradients in parallel. The parallel computing system was used to handle all the bottlenecks in the matrix generated from the finite element scheme and in each domain generated by the power method. The parallel algorithm was implemented on a Symmetric Multi Processor (SMP) cluster machine with Intel 32-bit quad-core x86 processors. Convergence rates and timings for the algorithm on the SMP cluster machine were obtained. Numerical experiments indicated that the designed parallel algorithm could reach perfect speedup and had good stability and scalability. (au)
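
    The eigenvalue step described above is the classic power iteration. As a language-neutral sketch (the record's implementation is MATLAB with matlabpool, not the code below), the power method applied to a small stand-in matrix looks like this:

```python
# Sketch of the power method used to obtain the dominant eigenvalue
# (the multiplication factor in the transport problem). The matrix A is an
# arbitrary stand-in for the FEM-assembled operator.
import numpy as np

def power_method(A, tol=1e-10, max_iter=10000):
    """Return the dominant eigenvalue and eigenvector of A."""
    x = np.ones(A.shape[0])
    lam_old = 0.0
    for _ in range(max_iter):
        y = A @ x                      # one application of the operator
        lam = np.linalg.norm(y)        # eigenvalue estimate (A here is symmetric positive definite)
        x = y / lam
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, vec = power_method(A)
print(f"dominant eigenvalue ~ {lam:.6f}")
print("check against numpy:", np.max(np.linalg.eigvalsh(A)))
```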

  1. The genetic architecture of parallel armor plate reduction in threespine sticklebacks.

    Directory of Open Access Journals (Sweden)

    Pamela F Colosimo

    2004-05-01

    Full Text Available How many genetic changes control the evolution of new traits in natural populations? Are the same genetic changes seen in cases of parallel evolution? Despite long-standing interest in these questions, they have been difficult to address, particularly in vertebrates. We have analyzed the genetic basis of natural variation in three different aspects of the skeletal armor of threespine sticklebacks (Gasterosteus aculeatus): the pattern, number, and size of the bony lateral plates. A few chromosomal regions can account for variation in all three aspects of the lateral plates, with one major locus contributing to most of the variation in lateral plate pattern and number. Genetic mapping and allelic complementation experiments show that the same major locus is responsible for the parallel evolution of armor plate reduction in two widely separated populations. These results suggest that a small number of genetic changes can produce major skeletal alterations in natural populations and that the same major locus is used repeatedly when similar traits evolve in different locations.

  2. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    Science.gov (United States)

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances in Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds to a few thousand electrodes are slowly seeing widespread use and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings, there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
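
    As a minimal, hypothetical illustration of the per-channel parallelism described above (not the tool itself, which also targets vectorized CPUs and GPGPUs), the sketch below band-pass filters each electrode channel and applies a simple threshold spike detector, with channels distributed across worker processes:

```python
# Sketch: process MEA channels in parallel -- band-pass filter each channel and
# detect threshold crossings. Synthetic data; the real package is far richer.
import numpy as np
from multiprocessing import Pool
from scipy.signal import butter, filtfilt

FS = 20000.0                                    # sampling rate, Hz
B, A = butter(3, [300.0, 3000.0], btype="bandpass", fs=FS)

def process_channel(trace):
    """Filter one channel and return indices of putative spikes."""
    filtered = filtfilt(B, A, trace)
    threshold = -4.5 * np.median(np.abs(filtered)) / 0.6745   # robust noise estimate
    return np.flatnonzero(filtered < threshold)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_channels, n_samples = 64, 200000
    data = rng.normal(0.0, 10.0, size=(n_channels, n_samples))  # noise-only channels
    with Pool() as pool:
        spikes_per_channel = pool.map(process_channel, list(data))
    print("detected events per channel (first 8):",
          [len(s) for s in spikes_per_channel[:8]])
```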

  3. The Transformation of Cyavana: A Case Study in Narrative Evolution

    Directory of Open Access Journals (Sweden)

    Emily West

    2017-03-01

    Full Text Available The assessment of possible genetic relationships between pairs of proposed narrative parallels currently relies on subjective conventional wisdom-based criteria. This essay presents an attempt at categorizing patterns of narrative evolution through the comparison of variants of orally-composed, fixed-text Sanskrit tales. Systematic examination of the changes that took place over the developmental arc of _The Tale of Cyavana_ offers a number of insights that may be applied to the understanding of the evolution of oral narratives in general. An evidence-based exposition of the principles that govern the process of narrative evolution could provide more accurate diagnostic tools for evaluating narrative parallels.

  4. Animal personalities : consequences for ecology and evolution

    NARCIS (Netherlands)

    Wolf, Max; Weissing, Franz J.

    Personality differences are a widespread phenomenon throughout the animal kingdom. Past research has focused on the characterization of such differences and a quest for their proximate and ultimate causation. However, the consequences of these differences for ecology and evolution received much less

  5. Linguistics: evolution and language change.

    Science.gov (United States)

    Bowern, Claire

    2015-01-05

    Linguists have long identified sound changes that occur in parallel. Now novel research shows how Bayesian modeling can capture complex concerted changes, revealing how evolution of sounds proceeds. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Rapid sequencing of the bamboo mitochondrial genome using Illumina technology and parallel episodic evolution of organelle genomes in grasses.

    Science.gov (United States)

    Ma, Peng-Fei; Guo, Zhen-Hua; Li, De-Zhu

    2012-01-01

    Compared to their counterparts in animals, the mitochondrial (mt) genomes of angiosperms exhibit a number of unique features. However, unravelling their evolution is hindered by the few completed genomes, which are essentially all Sanger sequenced. While next-generation sequencing technologies have revolutionized chloroplast genome sequencing, they are just beginning to be applied to angiosperm mt genomes. Chloroplast genomes of grasses (Poaceae) have undergone episodic evolution and the evolutionary rate was suggested to be correlated between chloroplast and mt genomes in Poaceae. It is interesting to investigate whether correlated rate change also occurred in grass mt genomes as expected under lineage effects. A time-calibrated phylogenetic tree is needed to examine rate change. We determined a largely complete mt genome from a bamboo, Ferrocalamus rimosivaginus (Poaceae), through Illumina sequencing of total DNA. With a combination of de novo and reference-guided assembly, 39.5-fold coverage Illumina reads were finally assembled into scaffolds totalling 432,839 bp. The assembled genome contains nearly the same genes as the completed mt genomes in Poaceae. To examine evolutionary rates in grass mt genomes, we reconstructed a phylogenetic tree including 22 taxa based on 31 mt genes. The topology of the well-resolved tree was almost identical to that inferred from the chloroplast genome, with only minor differences. The inconsistency possibly derived from long branch attraction in the mtDNA tree. By calculating absolute substitution rates, we found significant rate change (∼4-fold) in the mt genome before and after the diversification of Poaceae in both synonymous and nonsynonymous terms. Furthermore, the rate change was correlated with that of chloroplast genomes in grasses. Our result demonstrates that Illumina sequencing of total DNA is a rapid and efficient approach for obtaining angiosperm mt genome sequences. The parallel episodic evolution of mt and chloroplast
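
    The "absolute substitution rates" referred to above are simply substitutions per site on a branch divided by the branch's duration on a time-calibrated tree. The toy numbers below are invented solely to show how a roughly 4-fold rate shift before versus after a diversification event would be computed:

```python
# Sketch: absolute substitution rate = branch length (subs/site) / branch duration (Myr).
# Branch lengths and ages are invented to illustrate a ~4-fold rate change.
branches = {
    # name: (substitutions per site on the branch, branch duration in Myr)
    "stem lineage (pre-diversification)": (0.012, 30.0),
    "crown branch A (post-diversification)": (0.0010, 10.0),
    "crown branch B (post-diversification)": (0.0012, 12.0),
}

rates = {}
for name, (subs_per_site, duration_myr) in branches.items():
    rates[name] = subs_per_site / duration_myr       # subs / site / Myr
    print(f"{name}: {rates[name]:.2e} subs/site/Myr")

stem = rates["stem lineage (pre-diversification)"]
crown = (rates["crown branch A (post-diversification)"] +
         rates["crown branch B (post-diversification)"]) / 2.0
print(f"stem / crown rate ratio ~ {stem / crown:.1f}x")
```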

  7. Parallel sites implicate functional convergence of the hearing gene prestin among echolocating mammals.

    Science.gov (United States)

    Liu, Zhen; Qi, Fei-Yan; Zhou, Xin; Ren, Hai-Qing; Shi, Peng

    2014-09-01

    Echolocation is a sensory system whereby certain mammals navigate and forage using sound waves, usually in environments where visibility is limited. Curiously, echolocation has evolved independently in bats and whales, which occupy entirely different environments. Based on this phenotypic convergence, recent studies identified several echolocation-related genes with parallel sites at the protein sequence level among different echolocating mammals, and among these, prestin seems the most promising. Although previous studies analyzed the evolutionary mechanism of prestin, the functional roles of the parallel sites in the evolution of mammalian echolocation are not clear. By functional assays, we show that a key parameter of prestin function, 1/α, is increased in all echolocating mammals and that the N7T parallel substitution accounted for this functional convergence. Moreover, another parameter, V1/2, was shifted toward the depolarization direction in a toothed whale, the bottlenose dolphin (Tursiops truncatus) and a constant-frequency (CF) bat, the Stoliczka's trident bat (Aselliscus stoliczkanus). The parallel site of I384T between toothed whales and CF bats was responsible for this functional convergence. Furthermore, the two parameters (1/α and V1/2) were correlated with mammalian high-frequency hearing, suggesting that the convergent changes of the prestin function in echolocating mammals may play important roles in mammalian echolocation. To our knowledge, these findings present the functional patterns of echolocation-related genes in echolocating mammals for the first time and rigorously demonstrate adaptive parallel evolution at the protein sequence level, paving the way to insights into the molecular mechanism underlying mammalian echolocation. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
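
    The parameters V1/2 and 1/α quoted above are the midpoint and (inverse) slope factor of the two-state Boltzmann description commonly used for prestin's nonlinear capacitance; assuming that standard functional form (an assumption here, not the authors' code or data), they can be recovered by a curve fit such as the following sketch with synthetic values:

```python
# Sketch: recover V1/2 and 1/alpha by fitting the derivative-of-Boltzmann form
# commonly used for prestin nonlinear capacitance (NLC). Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def nlc(v, q_max, alpha, v_half, c_lin):
    """Two-state Boltzmann NLC: C(V) = Clin + Qmax*alpha*e/(1+e)^2, e = exp(-alpha*(V - V1/2))."""
    e = np.exp(-alpha * (v - v_half))
    return c_lin + q_max * alpha * e / (1.0 + e) ** 2

rng = np.random.default_rng(1)
v = np.linspace(-150.0, 100.0, 120)                       # membrane potential, mV
true = dict(q_max=800.0, alpha=0.030, v_half=-45.0, c_lin=20.0)
data = nlc(v, **true) + rng.normal(0.0, 0.3, v.size)      # add measurement noise

popt, _ = curve_fit(nlc, v, data, p0=[500.0, 0.02, -30.0, 15.0])
q_max, alpha, v_half, c_lin = popt
print(f"V1/2    ~ {v_half:.1f} mV (true {true['v_half']})")
print(f"1/alpha ~ {1.0 / alpha:.1f} mV (true {1.0 / true['alpha']:.1f})")
```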

  8. Towards Interactive Visual Exploration of Parallel Programs using a Domain-Specific Language

    KAUST Repository

    Klein, Tobias; Bruckner, Stefan; Grö ller, M. Eduard; Hadwiger, Markus; Rautek, Peter

    2016-01-01

    The use of GPUs and the massively parallel computing paradigm have become wide-spread. We describe a framework for the interactive visualization and visual analysis of the run-time behavior of massively parallel programs, especially OpenCL kernels. This facilitates understanding a program's function and structure, finding the causes of possible slowdowns, locating program bugs, and interactively exploring and visually comparing different code variants in order to improve performance and correctness. Our approach enables very specific, user-centered analysis, both in terms of the recording of the run-time behavior and the visualization itself. Instead of having to manually write instrumented code to record data, simple code annotations tell the source-to-source compiler which code instrumentation to generate automatically. The visualization part of our framework then enables the interactive analysis of kernel run-time behavior in a way that can be very specific to a particular problem or optimization goal, such as analyzing the causes of memory bank conflicts or understanding an entire parallel algorithm.
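
    The framework itself operates on OpenCL kernels through a source-to-source compiler, so its annotation syntax is not reproduced here; purely as an analogy for annotation-driven instrumentation, the hypothetical Python decorator below records per-call run times that a visualization front end could later consume:

```python
# Analogy only (not the framework's API): a decorator standing in for a code
# annotation that tells a tool to record run-time behaviour automatically.
import time
from collections import defaultdict
from functools import wraps

RUNTIME_LOG = defaultdict(list)          # function name -> list of durations (s)

def instrument(func):
    """'Annotate' a function so every call is timed and logged."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        RUNTIME_LOG[func.__name__].append(time.perf_counter() - start)
        return result
    return wrapper

@instrument
def saxpy(a, xs, ys):
    return [a * x + y for x, y in zip(xs, ys)]

for _ in range(5):
    saxpy(2.0, list(range(100000)), list(range(100000)))

for name, times in RUNTIME_LOG.items():
    print(f"{name}: {len(times)} calls, mean {sum(times) / len(times) * 1e3:.2f} ms")
```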

  9. Towards Interactive Visual Exploration of Parallel Programs using a Domain-Specific Language

    KAUST Repository

    Klein, Tobias

    2016-04-19

    The use of GPUs and the massively parallel computing paradigm have become wide-spread. We describe a framework for the interactive visualization and visual analysis of the run-time behavior of massively parallel programs, especially OpenCL kernels. This facilitates understanding a program's function and structure, finding the causes of possible slowdowns, locating program bugs, and interactively exploring and visually comparing different code variants in order to improve performance and correctness. Our approach enables very specific, user-centered analysis, both in terms of the recording of the run-time behavior and the visualization itself. Instead of having to manually write instrumented code to record data, simple code annotations tell the source-to-source compiler which code instrumentation to generate automatically. The visualization part of our framework then enables the interactive analysis of kernel run-time behavior in a way that can be very specific to a particular problem or optimization goal, such as analyzing the causes of memory bank conflicts or understanding an entire parallel algorithm.

  10. Position-dependent termination and widespread obligatory frameshifting in Euplotes translation

    Energy Technology Data Exchange (ETDEWEB)

    Lobanov, Alexei V.; Heaphy, Stephen M.; Turanov, Anton A.; Gerashchenko, Maxim V.; Pucciarelli, Sandra; Devaraj, Raghul R.; Xie, Fang; Petyuk, Vladislav A.; Smith, Richard D.; Klobutcher, Lawrence A.; Atkins, John F.; Miceli, Cristina; Hatfield, Dolph L.; Baranov, Pavel V.; Gladyshev, Vadim N.

    2016-11-21

    The ribosome can change its reading frame during translation in a process known as programmed ribosomal frameshifting. These rare events are supported by complex mRNA signals. However, we found that the ciliates Euplotes crassus and Euplotes focardii exhibit widespread frameshifting at stop codons. 47 different codons preceding stop signals resulted in either +1 or +2 frameshifts, and +1 frameshifting at AAA was the most frequent. The frameshifts showed unusual plasticity and rapid evolution, and had little influence on translation rates. The proximity of a stop codon to the 3' mRNA end, rather than its occurrence or sequence context, appeared to designate termination. Thus, a ‘stop codon’ is not a sufficient signal for translation termination, and the default function of stop codons in Euplotes is frameshifting, whereas termination is specific to certain mRNA positions and probably requires additional factors.
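
    To make the +1 frameshifting concrete, the toy translation below decodes an invented mRNA until an internal stop codon and then resumes one nucleotide downstream, mimicking the obligatory +1 events described for Euplotes (sequence and abbreviated codon table are hypothetical):

```python
# Sketch: toy +1 frameshift at an internal stop codon, in the spirit of the
# obligatory frameshifting described for Euplotes. The mRNA is invented and the
# codon table is deliberately abbreviated to the codons used in this example.
CODONS = {"AUG": "M", "AAA": "K", "UUC": "F", "GCU": "A", "UAA": "*"}

def translate(mrna, start=0):
    """Translate from 'start' until a stop codon; return (peptide, stop index)."""
    peptide, i = [], start
    while i + 3 <= len(mrna):
        aa = CODONS.get(mrna[i:i + 3], "?")
        if aa == "*":
            return "".join(peptide), i
        peptide.append(aa)
        i += 3
    return "".join(peptide), i

mrna = "AUGAAAUUCUAAAGCUUUCUAA"     # ...UUC UAA... internal stop at position 9
upstream, stop_pos = translate(mrna, start=0)
downstream, _ = translate(mrna, start=stop_pos + 1)   # resume 1 nt downstream (+1 shift)
print("upstream peptide            :", upstream)      # MKF
print("peptide after +1 frameshift :", downstream)    # KAF
```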

  11. Parallel computing in plasma physics: Nonlinear instabilities

    International Nuclear Information System (INIS)

    Pohn, E.; Kamelander, G.; Shoucri, M.

    2000-01-01

    A Vlasov-Poisson system is used for studying the time evolution of the charge separation at a spatially one- as well as two-dimensional plasma edge. Ions are advanced in time using the Vlasov equation. The whole three-dimensional velocity space is considered, leading to very time-consuming four- or five-dimensional fully kinetic simulations. In the 1D simulations electrons are assumed to behave adiabatically, i.e. they are Boltzmann-distributed, leading to a nonlinear Poisson equation. In the 2D simulations a gyro-kinetic approximation is used for the electrons. The plasma is assumed to be initially neutral. The simulations are performed on an equidistant grid. A constant time-step is used for advancing the density distribution function in time. The time evolution of the distribution function is computed using a splitting scheme. Each dimension (x, y, υx, υy, υz) of the phase space is advanced in time separately. The value of the distribution function at the next time is calculated from the value at an (in general) interstitial point at the present time (fractional shift). One-dimensional cubic-spline interpolation is used for calculating the interstitial function values. After the fractional shifts have been performed for each dimension of the phase space, a whole time-step for advancing the distribution function is finished. Afterwards the charge density is calculated, the Poisson equation is solved and the electric field is calculated before the next time-step is performed. The fractional shift method sketched above was parallelized for p processors as follows. Considering first the shifts in the y-direction, a proper parallelization strategy is to split the grid into p disjoint υz-slices, which are sub-grids, each containing a different 1/p-th part of the υz range but the whole range of all other dimensions. Each processor is responsible for performing the y-shifts on a different slice, which can be done in parallel without any communication between
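
    The fractional-shift step described above can be sketched in one dimension: each grid value at the next time is read off at a (generally interstitial) upstream point of the current profile via cubic-spline interpolation. The snippet below is a minimal, hypothetical 1D illustration of that splitting step, not the full four- or five-dimensional Vlasov solver:

```python
# Sketch: one 1D fractional-shift (semi-Lagrangian advection) step using
# periodic cubic-spline interpolation, as in the splitting scheme described above.
import numpy as np
from scipy.interpolate import CubicSpline

nx, length = 128, 2.0 * np.pi
x = np.linspace(0.0, length, nx, endpoint=False)
f = np.exp(-10.0 * (x - np.pi) ** 2)             # initial density slice along x

def fractional_shift(f, x, shift, length):
    """Advance f by one advection step: f_new(x) = f_old(x - shift), periodic."""
    x_ext = np.append(x, length)                 # close the period for the spline
    f_ext = np.append(f, f[0])
    spline = CubicSpline(x_ext, f_ext, bc_type="periodic")
    departure = (x - shift) % length             # upstream (interstitial) points
    return spline(departure)

v, dt = 1.0, 0.05
f_new = fractional_shift(f, x, v * dt, length)
print("mass before:", np.trapz(np.append(f, f[0]), np.append(x, length)))
print("mass after :", np.trapz(np.append(f_new, f_new[0]), np.append(x, length)))
print("peak moved from x =", x[np.argmax(f)], "to x =", x[np.argmax(f_new)])
```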

  12. Widespread pain: is an improved classification possible?

    Science.gov (United States)

    MacFarlane, G J; Croft, P R; Schollum, J; Silman, A J

    1996-09-01

    The classification of widespread pain, proposed by the American College of Rheumatology (ACR) for use in the clinic as a screen for fibromyalgia, as described, does not require truly widespread pain. Studies considering the epidemiology of widespread pain per se may therefore require a definition with greater face validity, which might also show enhanced associations with other physical and psychological measures. We aimed to develop a more coherent definition of widespread pain for use in epidemiological studies and to compare performance in identifying individuals with significant morbidity. A group of 172 subjects who had participated in a community-based study on the occurrence of pain were identified and categorized by their pain experience as indicated on line drawings of the body according to the ACR definition and to a new, more stringent definition that required the presence of more diffuse limb pain. A number of other clinical and psychological measures were recorded for these individuals and the association between their pain status measures and these other variables was assessed and compared. Persons satisfying the newly proposed definition for chronic widespread pain, in comparison with those who satisfied only the present ACR definition, had a significantly higher score on the General Health Questionnaire [median difference (MD) 7, 95% CI 1, 13], a higher score on the Health and Fatigue Questionnaire (MD 10, 95% CI 0, 15), and greater problems with sleep (sleep problem score MD 4, 95% CI 0, 9). Those satisfying the new definition also had a greater number of tender points on examination (MD 3, 95% CI -1, 7). The morbidity of those satisfying only the present ACR definition was closer to persons who had regional pain. A redefinition of widespread pain has produced a group of subjects whose pain is (a) likely to be more "widespread" and (b) is associated more strongly with factors such as psychological disturbance, fatigue, sleep problems, and tender points, and

  13. Reflecting on the philosophical implications of evolution

    Directory of Open Access Journals (Sweden)

    I.H. Horn

    2003-08-01

    Full Text Available Evolution as paradigm is a prescribed topic in contemporary South African education. This means that macro-evolution – the idea that life evolved progressively from inert matter to humankind’s coming into being – must form the foundation of South African education. The aim of this article is to reflect, in a spirit of respectful yet critical enquiry, on three issues with regard to macro-evolution: First, the theory of macro-evolution is placed in its historical context which indicates that although this theory owes its widespread acceptance to Charles Darwin, it did not originate with him. Second, the scientific status of the theory of macro-evolution is scrutinised. Karl Popper’s view of this theory as a metaphysical framework for research is given, accompanied by a brief discussion. Third, three evolutionary worldviews are identified and discussed.

  14. Personality disparity in chronic regional and widespread pain.

    Science.gov (United States)

    Chang, Mei-Chung; Chen, Po-Fei; Lung, For-Wey

    2017-08-01

    Chronic pain has high comorbidity with psychiatric disorders, therefore, better understanding of the relationship between chronic pain and mental illness is needed. This study aimed to investigate the pathway relationships among parental attachment, personality characteristics, alexithymic trait and mental health in patients with chronic widespread pain, those with chronic regional pain, and controls. Two hundred and thirty participants were recruited. The parental Bonding Inventory, Eysenck Personality Inventory (EPI), 20-item Toronto Alexithymia Scale (TAS-20), Chinese Health Questionnaire, and Short-Form 36 were filled out. The pathway relationships revealed that patients of mothers who were more protective were more neurotic, had more difficulty identifying feelings (DIF), worse mental health, and a higher association with chronic widespread pain. No differences were found between patients with chronic regional pain and the controls. The predisposing factors for chronic widespread pain, when compared with chronic regional pain, may be more closely related to psychiatric disorders. The pathways to chronic regional pain and chronic widespread pain differ, with neuroticism and the alexithymic DIF trait being the main factors defining chronic widespread pain. Therefore, besides therapies targeting pain symptoms, psychiatric consultation, medication and psychotherapy are also recommended for those with chronic widespread pain to alleviate their mental health conditions. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  15. Musical emotions: Functions, origins, evolution

    Science.gov (United States)

    Perlovsky, Leonid

    2010-03-01

    Theories of music origins and the role of musical emotions in the mind are reviewed. Most existing theories contradict each other, and cannot explain mechanisms or roles of musical emotions in workings of the mind, nor evolutionary reasons for music origins. Music seems to be an enigma. Nevertheless, a synthesis of cognitive science and mathematical models of the mind has been proposed describing a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The review considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. It addresses one hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture. We consider a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required the evolution of music parallel with the evolution of cultures and languages. Arguments are reviewed that the evolution of language toward becoming the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The reviewed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in

  16. Local and Nonlocal Parallel Heat Transport in General Magnetic Fields

    International Nuclear Information System (INIS)

    Castillo-Negrete, D. del; Chacon, L.

    2011-01-01

    A novel approach for the study of parallel transport in magnetized plasmas is presented. The method avoids numerical pollution issues of grid-based formulations and applies to integrable and chaotic magnetic fields with local or nonlocal parallel closures. In weakly chaotic fields, the method gives the fractal structure of the devil's staircase radial temperature profile. In fully chaotic fields, the temperature exhibits self-similar spatiotemporal evolution with a stretched-exponential scaling function for local closures and an algebraically decaying one for nonlocal closures. It is shown that, for both closures, the effective radial heat transport is incompatible with the quasilinear diffusion model.

  17. Massively Parallel Single-Molecule Manipulation Using Centrifugal Force

    Science.gov (United States)

    Wong, Wesley; Halvorsen, Ken

    2011-03-01

    Precise manipulation of single molecules has led to remarkable insights in physics, chemistry, biology, and medicine. However, two issues that have impeded the widespread adoption of these techniques are equipment cost and the laborious nature of making measurements one molecule at a time. To meet these challenges, we have developed an approach that enables massively parallel single-molecule force measurements using centrifugal force. This approach is realized in the centrifuge force microscope, an instrument in which objects in an orbiting sample are subjected to a calibration-free, macroscopically uniform force field while their micro-to-nanoscopic motions are observed. We demonstrate high-throughput single-molecule force spectroscopy with this technique by performing thousands of rupture experiments in parallel, characterizing force-dependent unbinding kinetics of an antibody-antigen pair in minutes rather than days. Currently, we are taking steps to integrate high-resolution detection, fluorescence, temperature control and a greater dynamic range in force. With significant benefits in efficiency, cost, simplicity, and versatility, single-molecule centrifugation has the potential to expand single-molecule experimentation to a wider range of researchers and experimental systems.

  18. Evidence for widespread degradation of gene control regions in hominid genomes.

    Directory of Open Access Journals (Sweden)

    Peter D Keightley

    2005-02-01

    Full Text Available Although sequences containing regulatory elements located close to protein-coding genes are often only weakly conserved during evolution, comparisons of rodent genomes have implied that these sequences are subject to some selective constraints. Evolutionary conservation is particularly apparent upstream of coding sequences and in first introns, regions that are enriched for regulatory elements. By comparing the human and chimpanzee genomes, we show here that there is almost no evidence for conservation in these regions in hominids. Furthermore, we show that gene expression is diverging more rapidly in hominids than in murids per unit of neutral sequence divergence. By combining data on polymorphism levels in human noncoding DNA and the corresponding human-chimpanzee divergence, we show that the proportion of adaptive substitutions in these regions in hominids is very low. It therefore seems likely that the lack of conservation and increased rate of gene expression divergence are caused by a reduction in the effectiveness of natural selection against deleterious mutations because of the low effective population sizes of hominids. This has resulted in the accumulation of a large number of deleterious mutations in sequences containing gene control elements and hence a widespread degradation of the genome during the evolution of humans and chimpanzees.

  19. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Science.gov (United States)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly-generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency
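
    As a rough illustration of the master-slave scheme described above, the following minimal Python sketch (not the authors' code) uses a pool of worker processes for the expensive fitness evaluations while the master performs selection, crossover, and mutation; the OneMax objective and all parameter values are placeholders standing in for a real circuit-scoring function.

      # Minimal master-slave parallel GA sketch (illustrative only, not the paper's code).
      # Workers ("slaves") evaluate fitness in parallel; the master applies the genetic operators.
      import random
      from multiprocessing import Pool

      GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 64, 40, 50, 0.01

      def fitness(genome):
          # Placeholder objective (OneMax); a real application would score a circuit here.
          return sum(genome)

      def mutate(genome):
          return [1 - g if random.random() < MUT_RATE else g for g in genome]

      def crossover(a, b):
          cut = random.randrange(1, GENOME_LEN)
          return a[:cut] + b[cut:]

      if __name__ == "__main__":
          pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
          with Pool() as workers:                      # the "slave" processes
              for gen in range(GENERATIONS):
                  scores = workers.map(fitness, pop)   # parallel fitness evaluation
                  ranked = [g for _, g in sorted(zip(scores, pop), key=lambda t: -t[0])]
                  parents = ranked[:POP_SIZE // 2]     # truncation selection on the master
                  pop = parents + [mutate(crossover(random.choice(parents),
                                                    random.choice(parents)))
                                   for _ in range(POP_SIZE - len(parents))]
          print("best fitness:", max(map(fitness, pop)))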

  20. Five Misunderstandings About Cultural Evolution.

    Science.gov (United States)

    Henrich, Joseph; Boyd, Robert; Richerson, Peter J

    2008-06-01

    Recent debates about memetics have revealed some widespread misunderstandings about Darwinian approaches to cultural evolution. Drawing from these debates, this paper disputes five common claims: (1) mental representations are rarely discrete, and therefore models that assume discrete, gene-like particles (i.e., replicators) are useless; (2) replicators are necessary for cumulative, adaptive evolution; (3) content-dependent psychological biases are the only important processes that affect the spread of cultural representations; (4) the "cultural fitness" of a mental representation can be inferred from its successful transmission; and (5) selective forces only matter if the sources of variation are random. We close by sketching the outlines of a unified evolutionary science of culture.

  1. The evolution of concepts of vestibular peripheral information processing: toward the dynamic, adaptive, parallel processing macular model

    Science.gov (United States)

    Ross, Muriel D.

    2003-01-01

    In a letter to Robert Hooke, written on 5 February, 1675, Isaac Newton wrote "If I have seen further than certain other men it is by standing upon the shoulders of giants." In his context, Newton was referring to the work of Galileo and Kepler, who preceded him. However, every field has its own giants, those men and women who went before us and, often with few tools at their disposal, uncovered the facts that enabled later researchers to advance knowledge in a particular area. This review traces the history of the evolution of views from early giants in the field of vestibular research to modern concepts of vestibular organ organization and function. Emphasis will be placed on the mammalian maculae as peripheral processors of linear accelerations acting on the head. This review shows that early, correct findings were sometimes unfortunately disregarded, impeding later investigations into the structure and function of the vestibular organs. The central themes are that the macular organs are highly complex, dynamic, adaptive, distributed parallel processors of information, and that historical references can help us to understand our own place in advancing knowledge about their complicated structure and functions.

  2. A novel role for Mc1r in the parallel evolution of depigmentation in independent populations of the cavefish Astyanax mexicanus.

    Directory of Open Access Journals (Sweden)

    Joshua B Gross

    2009-01-01

    Full Text Available The evolution of degenerate characteristics remains a poorly understood phenomenon. Only recently has the identification of mutations underlying regressive phenotypes become accessible through the use of genetic analyses. Focusing on the Mexican cave tetra Astyanax mexicanus, we describe, here, an analysis of the brown mutation, which was first described in the literature nearly 40 years ago. This phenotype causes reduced melanin content, decreased melanophore number, and brownish eyes in convergent cave forms of A. mexicanus. Crosses demonstrate non-complementation of the brown phenotype in F2 individuals derived from two independent cave populations: Pachón and the linked Yerbaniz and Japonés caves, indicating the same locus is responsible for reduced pigmentation in these fish. While the brown mutant phenotype arose prior to the fixation of albinism in Pachón cave individuals, it is unclear whether the brown mutation arose before or after the fixation of albinism in the linked Yerbaniz/Japonés caves. Using a QTL approach combined with sequence and functional analyses, we have discovered that two distinct genetic alterations in the coding sequence of the gene Mc1r cause reduced pigmentation associated with the brown mutant phenotype in these caves. Our analysis identifies a novel role for Mc1r in the evolution of degenerative phenotypes in blind Mexican cavefish. Further, the brown phenotype has arisen independently in geographically separate caves, mediated through different mutations of the same gene. This example of parallelism indicates that certain genes are frequent targets of mutation in the repeated evolution of regressive phenotypes in cave-adapted species.

  3. Cognition and the evolution of camouflage.

    Science.gov (United States)

    Skelhorn, John; Rowe, Candy

    2016-02-24

    Camouflage is one of the most widespread forms of anti-predator defence and prevents prey individuals from being detected or correctly recognized by would-be predators. Over the past decade, there has been a resurgence of interest in both the evolution of prey camouflage patterns, and in understanding animal cognition in a more ecological context. However, these fields rarely collide, and the role of cognition in the evolution of camouflage is poorly understood. Here, we review what we currently know about the role of both predator and prey cognition in the evolution of prey camouflage, outline why cognition may be an important selective pressure driving the evolution of camouflage and consider how studying the cognitive processes of animals may prove to be a useful tool to study the evolution of camouflage, and vice versa. In doing so, we highlight that we still have a lot to learn about the role of cognition in the evolution of camouflage and identify a number of avenues for future research. © 2016 The Author(s).

  4. Towards physical principles of biological evolution

    Science.gov (United States)

    Katsnelson, Mikhail I.; Wolf, Yuri I.; Koonin, Eugene V.

    2018-03-01

    Biological systems reach organizational complexity that far exceeds the complexity of any known inanimate objects. Biological entities undoubtedly obey the laws of quantum physics and statistical mechanics. However, is modern physics sufficient to adequately describe, model and explain the evolution of biological complexity? Detailed parallels have been drawn between statistical thermodynamics and the population-genetic theory of biological evolution. Based on these parallels, we outline new perspectives on biological innovation and major transitions in evolution, and introduce a biological equivalent of thermodynamic potential that reflects the innovation propensity of an evolving population. Deep analogies have been suggested to also exist between the properties of biological entities and processes, and those of frustrated states in physics, such as glasses. Such systems are characterized by frustration, whereby local states with minimal free energy conflict with the global minimum, resulting in ‘emergent phenomena’. We extend such analogies by examining frustration-type phenomena, such as conflicts between different levels of selection, in biological evolution. These frustration effects appear to drive the evolution of biological complexity. We further address evolution in multidimensional fitness landscapes from the point of view of percolation theory and suggest that percolation at a level above the critical threshold dictates the tree-like evolution of complex organisms. Taken together, these multiple connections between fundamental processes in physics and biology imply that construction of a meaningful physical theory of biological evolution might not be a futile effort. However, it is unrealistic to expect that such a theory can be created in one scoop; if it ever comes to being, this can only happen through integration of multiple physical models of evolutionary processes. Furthermore, the existing framework of theoretical physics is unlikely to suffice

  5. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Science.gov (United States)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  6. Modelling and parallel calculation of a kinetic boundary layer

    International Nuclear Information System (INIS)

    Perlat, Jean Philippe

    1998-01-01

    This research thesis addresses reliability and cost issues in the numerical simulation of flows in the transition regime. The first step was to reduce the calculation cost and memory requirements of the Monte Carlo method, which is known to provide performance and reliability for rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instructions, multiple data) machine has been used, which implements parallel calculation at different levels of parallelization. Parallelization procedures have been adapted, and results showed that parallelization by calculation-domain decomposition was far more efficient. Due to reliability issues related to the statistical nature of Monte Carlo methods, a new deterministic model was necessary to simulate gas molecules in the transition regime. New models and hyperbolic systems have therefore been studied. One is chosen which allows the thermodynamic quantities (density, average velocity, temperature, deformation tensor, heat flow) present in the Navier-Stokes equations to be determined, and the equations of evolution of these thermodynamic quantities are described for the mono-atomic case. Their numerical resolution is reported. A kinetic scheme is developed which complies with the structure of all these systems and which naturally expresses boundary conditions. The validation of the resulting 14-moment model is performed on shock problems and on Couette flows [fr]

  7. Teaching and Learning: Highlighting the Parallels between Education and Participatory Evaluation.

    Science.gov (United States)

    Vanden Berk, Eric J.; Cassata, Jennifer Coyne; Moye, Melinda J.; Yarbrough, Donald B.; Siddens, Stephanie K.

    As an evaluation team trained in educational psychology and committed to participatory evaluation and its evolution, the researchers have found the parallel between evaluator-stakeholder roles in the participatory evaluation process and educator-student roles in educational psychology theory to be important. One advantage then is that the theories…

  8. Efficient parallel implicit methods for rotary-wing aerodynamics calculations

    Science.gov (United States)

    Wissink, Andrew M.

    Euler/Navier-Stokes Computational Fluid Dynamics (CFD) methods are commonly used for prediction of the aerodynamics and aeroacoustics of modern rotary-wing aircraft. However, their widespread application to large complex problems is limited by the lack of adequate computing power. Parallel processing offers the potential for dramatic increases in computing power, but most conventional implicit solution methods are inefficient in parallel and new techniques must be adopted to realize this potential. This work proposes alternative implicit schemes for Euler/Navier-Stokes rotary-wing calculations which are robust and efficient in parallel. The first part of this work proposes an efficient parallelizable modification of the Lower Upper-Symmetric Gauss Seidel (LU-SGS) implicit operator used in the well-known Transonic Unsteady Rotor Navier Stokes (TURNS) code. The new hybrid LU-SGS scheme couples a point-relaxation approach of the Data Parallel-Lower Upper Relaxation (DP-LUR) algorithm for inter-processor communication with the Symmetric Gauss Seidel algorithm of LU-SGS for on-processor computations. With the modified operator, TURNS is implemented in parallel using the Message Passing Interface (MPI) for communication. Numerical performance and parallel efficiency are evaluated on the IBM SP2 and Thinking Machines CM-5 multi-processors for a variety of steady-state and unsteady test cases. The hybrid LU-SGS scheme maintains the numerical performance of the original LU-SGS algorithm in all cases and shows a good degree of parallel efficiency. It experiences a higher degree of robustness than DP-LUR for third-order upwind solutions. The second part of this work examines use of Krylov subspace iterative solvers for the nonlinear CFD solutions. The hybrid LU-SGS scheme is used as a parallelizable preconditioner. Two iterative methods are tested, Generalized Minimum Residual (GMRES) and Orthogonal s-Step Generalized Conjugate Residual (OSGCR). The Newton method demonstrates good

  9. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
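
    As a rough illustration of the kind of spectral technique being parallelized (not the paper's framework), the sketch below splits the pairwise-distance computation of classical multidimensional scaling across worker processes and leaves the eigendecomposition to the master; the data, sizes, and worker counts are placeholders.

      # Illustrative sketch of one "spectral" dimensionality-reduction step (classical MDS),
      # with the pairwise-distance computation split across worker processes.
      import numpy as np
      from multiprocessing import Pool

      def row_block_distances(args):
          block, data = args                       # distances from one block of rows to all points
          return np.linalg.norm(block[:, None, :] - data[None, :, :], axis=2)

      def classical_mds(data, n_components=2, n_workers=4):
          chunks = np.array_split(data, n_workers)
          with Pool(n_workers) as pool:
              D = np.vstack(pool.map(row_block_distances, [(c, data) for c in chunks]))
          n = D.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n      # double-centering of squared distances
          B = -0.5 * J @ (D ** 2) @ J
          vals, vecs = np.linalg.eigh(B)
          idx = np.argsort(vals)[::-1][:n_components]
          return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

      if __name__ == "__main__":
          X = np.random.rand(500, 50)              # placeholder high-dimensional data
          Y = classical_mds(X)                     # embedded coordinates, shape (500, 2)
          print(Y.shape)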

  10. A PARALLEL MONTE CARLO CODE FOR SIMULATING COLLISIONAL N-BODY SYSTEMS

    International Nuclear Information System (INIS)

    Pattabiraman, Bharath; Umbreit, Stefan; Liao, Wei-keng; Choudhary, Alok; Kalogera, Vassiliki; Memik, Gokhan; Rasio, Frederic A.

    2013-01-01

    We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N ∼ 10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme as well as a parallel sorting algorithm required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce along with our choice of decomposition scheme minimize communication costs and ensure optimal distribution of data and workload among the processing units. Our implementation uses the Message Passing Interface library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse with the number of stars, N, spanning three orders of magnitude from 10^5 to 10^7. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. Also, we observe good total energy conservation, within ∼0.04% throughout all simulations. We analyze the performance of the code, and demonstrate near-linear scaling of the runtime with the number of processors up to 64 processors for N = 10^5, 128 for N = 10^6, and 256 for N = 10^7. The runtime reaches saturation with the addition of processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60×, 100×, and 220×, respectively.

  11. Towards a Universal Biology: Is the Origin and Evolution of Life Predictable?

    Science.gov (United States)

    Rothschild, Lynn J.

    2017-01-01

    The origin and evolution of life seems an unpredictable oddity, based on the quirks of contingency. Celebrated by the late Stephen Jay Gould in several books, "evolution by contingency" has all the adventure of a thriller, but lacks the predictive power of the physical sciences. Not necessarily so, replied Simon Conway Morris, for convergence reassures us that certain evolutionary responses are replicable. The outcome of this debate is critical to Astrobiology. How can we understand where we came from on Earth without prophecy? Further, we cannot design a rational strategy for the search for life elsewhere - or to understand what the future will hold for life on Earth and beyond - without extrapolating from pre-biotic chemistry and evolution. There are several indirect approaches to understanding, and thus describing, what life must be. These include philosophical approaches to defining life (is there even a satisfactory definition of life?), using what we know of physics, chemistry and life to imagine alternate scenarios, using different approaches that life takes as pseudoreplicates (e.g., ribosomal vs non-ribosomal protein synthesis), and experimental approaches to understand the art of the possible. Given that: (1) life is a process based on physical components rather than simply an object; (2) life is likely based on organic carbon and needs a solvent for chemistry, most likely water; and (3) looking for convergence in terrestrial evolution we can predict certain tendencies, if not quite "laws", that provide predictive power. Biological history must obey the laws of physics and chemistry, the principles of natural selection, the constraints of an evolutionary past, genetics, and developmental biology. This amalgam creates a surprising amount of predictive power in the broad outline. Critical is the apparent prevalence of organic chemistry, and uniformity in the universe of the laws of chemistry and physics. Instructive is the widespread occurrence of

  12. Parallel computing in cluster of GPU applied to a problem of nuclear engineering

    International Nuclear Information System (INIS)

    Moraes, Sergio Ricardo S.; Heimlich, Adino; Resende, Pedro

    2013-01-01

    Cluster computing has been widely used as a low-cost alternative for parallel processing in scientific applications. With the use of the Message-Passing Interface (MPI) protocol, development became even more accessible and widespread in the scientific community. A more recent trend is the use of the Graphics Processing Unit (GPU), a powerful co-processor able to perform hundreds of instructions in parallel, reaching a processing capacity hundreds of times that of a CPU. However, a standard PC does not allow, in general, more than two GPUs. Hence, this work proposes the development and evaluation of a hybrid, low-cost parallel approach to the solution of a typical nuclear engineering problem. The idea is to use cluster parallelism technology (MPI) together with GPU programming techniques (CUDA - Compute Unified Device Architecture) to simulate neutron transport through a slab using the Monte Carlo method. Using a cluster comprising four quad-core computers with 2 GPUs each, programs have been developed using MPI and CUDA technologies. Experiments applying different configurations, from 1 to 8 GPUs, have been performed and results were compared with the sequential (non-parallel) version. A speed-up of about 2.000 times has been observed when comparing the 8-GPU with the sequential version. Results here presented are discussed and analyzed with the objective of outlining gains and possible limitations of the proposed approach. (author)
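
    For readers unfamiliar with the approach, the following minimal sketch (not the authors' code) distributes analog Monte Carlo neutron histories across MPI ranks and reduces the tally on the root process; the cross sections, slab thickness, and history counts are placeholder values, and the CUDA half of the hybrid scheme is omitted.

      # Minimal MPI Monte Carlo sketch of neutron transmission through a 1-D slab
      # (illustrative only; physics data and history counts are placeholders).
      # Run with e.g.:  mpirun -n 4 python slab_mc.py
      import numpy as np
      from mpi4py import MPI

      SIGMA_T, SIGMA_S = 1.0, 0.7        # total and scattering macroscopic cross sections (1/cm)
      WIDTH, N_HISTORIES = 5.0, 100_000  # slab thickness (cm) and histories per rank

      def simulate(n, rng):
          transmitted = 0
          for _ in range(n):
              x, mu = 0.0, 1.0                              # start at left face, moving right
              while True:
                  x += mu * rng.exponential(1.0 / SIGMA_T)  # sample free flight
                  if x < 0.0:                               # leaked back out of the left face
                      break
                  if x > WIDTH:                             # transmitted through the slab
                      transmitted += 1
                      break
                  if rng.random() < SIGMA_S / SIGMA_T:
                      mu = rng.uniform(-1.0, 1.0)           # isotropic scatter
                  else:
                      break                                 # absorbed
          return transmitted

      comm = MPI.COMM_WORLD
      rng = np.random.default_rng(seed=comm.Get_rank())
      local = simulate(N_HISTORIES, rng)
      total = comm.reduce(local, op=MPI.SUM, root=0)
      if comm.Get_rank() == 0:
          print("transmission probability:", total / (N_HISTORIES * comm.Get_size()))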

  13. Study on parallel-channel asymmetry in supercritical flow instability experiment

    International Nuclear Information System (INIS)

    Xiong Ting; Yu Junchong; Yan Xiao; Huang Yanping; Xiao Zejun; Huang Shanfang

    2013-01-01

    Due to the urgent need for experimental study of supercritical water flow instability, the parallel-channel asymmetry that determines the feasibility of such experiments was studied using experimental and numerical results for a parallel dual channel. The evolution of flow rates in the experiments was analyzed, and the steady-state characteristics as well as transient characteristics of the system were obtained with a self-developed numerical code. The results show that the asymmetry of the parallel dual channel would reduce the feasibility of experiments. The asymmetry of flow rates is caused by geometrical asymmetry. Due to the property variation characteristics of supercritical water, the flow rate asymmetry is enlarged while rising beyond the pseudo-critical point. The extent of flow rate asymmetry is affected by the bulk temperature and total flow rate; therefore the experimental feasibility can be enhanced by reducing the total flow rate. (authors)

  14. Nonlinear interaction of a parallel-flow relativistic electron beam with a plasma

    International Nuclear Information System (INIS)

    Jungwirth, K.; Koerbel, S.; Simon, P.; Vrba, P.

    1975-01-01

    Nonlinear evolution of single-mode high-frequency instabilities (ω ≈ k_∥ v_b) excited by a parallel-flow high-current relativistic electron beam in a magnetized plasma is investigated. Fairly general dimensionless equations are derived. They describe both the temporal and the spatial evolution of amplitude and phase of the fundamental wave. Numerically, the special case of excitation of the linearly most unstable mode is solved in detail assuming that the wave energy dissipation is negligible. Then the strength of interaction and the relativistic properties of the beam are fully respected by a single parameter λ. The value of λ ensuring the optimum efficiency of the wave excitation as well as the efficiency of the self-acceleration of some beam electrons at higher values of λ > 1 are determined in the case of a fully compensated relativistic beam. Finally, the effect of the return current dissipation is also included (phenomenologically) into the theoretical model, its role for the beam-plasma interaction being checked numerically. (J.U.)

  15. Factorizing the time evolution operator

    International Nuclear Information System (INIS)

    Garcia Quijas, P C; Arevalo Aguilar, L M

    2007-01-01

    There is a widespread belief in the quantum physics community, and in textbooks used to teach quantum mechanics, that it is a difficult task to apply the time evolution operator e^(-itĤ/ℏ) to an initial wavefunction. Because the Hamiltonian operator is, generally, the sum of two operators, it is not possible to apply the time evolution operator directly to an initial wavefunction ψ(x, 0), for it implies using terms like e^(Â+B̂). A possible solution is to factorize the time evolution operator and then apply successively the individual exponential operators to the initial wavefunction. However, the exponential operator does not directly factorize, i.e. e^(Â+B̂) ≠ e^(Â) e^(B̂). In this study we present a useful procedure for factorizing the time evolution operator when the argument of the exponential is a sum of two operators which obey specific commutation relations. Then, we apply the exponential operator as an evolution operator for the case of elementary unidimensional potentials, like a particle subject to a constant force and a harmonic oscillator. Also, we discuss an apparent paradox concerning the time evolution operator and non-spreading wave packets addressed previously in the literature
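
    For reference (an addition, not part of the record above), the best-known special case of such a factorization, valid when the commutator [Â, B̂] commutes with both Â and B̂ (as it does for position and momentum), is the Baker-Campbell-Hausdorff/Zassenhaus identity:

      e^{\hat{A}+\hat{B}} \;=\; e^{\hat{A}}\, e^{\hat{B}}\, e^{-\frac{1}{2}[\hat{A},\hat{B}]},
      \qquad \text{provided}\quad [\hat{A},[\hat{A},\hat{B}]] = [\hat{B},[\hat{A},\hat{B}]] = 0.

    More general commutation relations require additional Zassenhaus factors, or can be handled approximately with Trotter-type splittings such as e^{Â+B̂} ≈ (e^{Â/n} e^{B̂/n})^n for large n.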

  16. Art as A Playground for Evolution

    DEFF Research Database (Denmark)

    Beloff, Laura

    2016-01-01

    Art works which engage with the topic of human enhancement and evolution have begun appearing parallel to increased awareness about anthropogenic changes to our environment and acceleration of the speed of technological developments that impact us and our biological environment. The article...... and related topics is proposed as play activity for adults, which simultaneously experiments directly with ideas concerning evolution and human development. The author proposes that these kinds of experimental art projects support our mental adaptation to evolutionary changes....

  17. Parallel heat transport in integrable and chaotic magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Castillo-Negrete, D. del; Chacon, L. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-8071 (United States)

    2012-05-15

    The study of transport in magnetized plasmas is a problem of fundamental interest in controlled fusion, space plasmas, and astrophysics research. Three issues make this problem particularly challenging: (i) the extreme anisotropy between the parallel (i.e., along the magnetic field), χ_∥, and the perpendicular, χ_⊥, conductivities (χ_∥/χ_⊥ may exceed 10^10 in fusion plasmas); (ii) nonlocal parallel transport in the limit of small collisionality; and (iii) magnetic field line chaos, which in general complicates (and may preclude) the construction of magnetic field line coordinates. Motivated by these issues, we present a Lagrangian Green's function method to solve the local and non-local parallel transport equation applicable to integrable and chaotic magnetic fields in arbitrary geometry. The method avoids by construction the numerical pollution issues of grid-based algorithms. The potential of the approach is demonstrated with nontrivial applications to integrable (magnetic island), weakly chaotic (Devil's staircase), and fully chaotic magnetic field configurations. For the latter, numerical solutions of the parallel heat transport equation show that the effective radial transport, with local and non-local parallel closures, is non-diffusive, thus casting doubts on the applicability of quasilinear diffusion descriptions. General conditions for the existence of non-diffusive, multivalued flux-gradient relations in the temperature evolution are derived.
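
    For orientation (an addition, not part of the record above), the local form of the anisotropic heat transport equation that such solvers typically target can be written, under common simplifying assumptions (constant density, given conductivities), with b̂ the unit vector along the magnetic field and S a source term:

      \frac{\partial T}{\partial t}
        = \nabla\cdot\big[\chi_{\parallel}\,\hat{b}\,(\hat{b}\cdot\nabla T)\big]
        + \nabla\cdot\big(\chi_{\perp}\,\nabla_{\perp} T\big) + S,
      \qquad
      \nabla_{\perp} = \nabla - \hat{b}\,(\hat{b}\cdot\nabla).

    The χ_∥/χ_⊥ ratio quoted above is what makes grid discretizations of the parallel term prone to numerically polluting the much smaller perpendicular transport.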

  18. The presentation of explicit analytical solutions of a class of nonlinear evolution equations

    International Nuclear Information System (INIS)

    Feng Jinshun; Guo Mingpu; Yuan Deyou

    2009-01-01

    In this paper, we introduce a function set Ω_m. There is a conjecture that an arbitrary explicit travelling-wave analytical solution of a real constant-coefficient nonlinear evolution equation is necessarily a linear (or nonlinear) combination of the product of some elements in Ω_m. A widely applicable approach for solving a class of nonlinear evolution equations is established. The new analytical solutions to two kinds of nonlinear evolution equations are described with the aid of this conjecture.

  19. Selectivity of Nanocrystalline IrO2-Based Catalysts in Parallel Chlorine and Oxygen Evolution

    Czech Academy of Sciences Publication Activity Database

    Kuznetsova, Elizaveta; Petrykin, Valery; Sunde, S.; Krtil, Petr

    2015-01-01

    Roč. 6, č. 2 (2015), s. 198-210 ISSN 1868-2529 EU Projects: European Commission(XE) 214936 Institutional support: RVO:61388955 Keywords : iridium dioxide * oxygen evolution * chlorine evolution Subject RIV: CG - Electrochemistry Impact factor: 2.347, year: 2015

  20. Transformation and diversification in early mammal evolution.

    Science.gov (United States)

    Luo, Zhe-Xi

    2007-12-13

    Evolution of the earliest mammals shows successive episodes of diversification. Lineage-splitting in Mesozoic mammals is coupled with many independent evolutionary experiments and ecological specializations. Classic scenarios of mammalian morphological evolution tend to posit an orderly acquisition of key evolutionary innovations leading to adaptive diversification, but newly discovered fossils show that evolution of such key characters as the middle ear and the tribosphenic teeth is far more labile among Mesozoic mammals. Successive diversifications of Mesozoic mammal groups multiplied the opportunities for many dead-end lineages to iteratively evolve developmental homoplasies and convergent ecological specializations, parallel to those in modern mammal groups.

  1. Population genomics of parallel adaptation in threespine stickleback using sequenced RAD tags.

    Directory of Open Access Journals (Sweden)

    Paul A Hohenlohe

    2010-02-01

    Full Text Available Next-generation sequencing technology provides novel opportunities for gathering genome-scale sequence data in natural populations, laying the empirical foundation for the evolving field of population genomics. Here we conducted a genome scan of nucleotide diversity and differentiation in natural populations of threespine stickleback (Gasterosteus aculeatus. We used Illumina-sequenced RAD tags to identify and type over 45,000 single nucleotide polymorphisms (SNPs in each of 100 individuals from two oceanic and three freshwater populations. Overall estimates of genetic diversity and differentiation among populations confirm the biogeographic hypothesis that large panmictic oceanic populations have repeatedly given rise to phenotypically divergent freshwater populations. Genomic regions exhibiting signatures of both balancing and divergent selection were remarkably consistent across multiple, independently derived populations, indicating that replicate parallel phenotypic evolution in stickleback may be occurring through extensive, parallel genetic evolution at a genome-wide scale. Some of these genomic regions co-localize with previously identified QTL for stickleback phenotypic variation identified using laboratory mapping crosses. In addition, we have identified several novel regions showing parallel differentiation across independent populations. Annotation of these regions revealed numerous genes that are candidates for stickleback phenotypic evolution and will form the basis of future genetic analyses in this and other organisms. This study represents the first high-density SNP-based genome scan of genetic diversity and differentiation for populations of threespine stickleback in the wild. These data illustrate the complementary nature of laboratory crosses and population genomic scans by confirming the adaptive significance of previously identified genomic regions, elucidating the particular evolutionary and demographic history of such

  2. Embodied Evolution in Collective Robotics: A Review

    Directory of Open Access Journals (Sweden)

    Nicolas Bredeche

    2018-02-01

    Full Text Available This article provides an overview of evolutionary robotics techniques applied to online distributed evolution for robot collectives, namely, embodied evolution. It provides a definition of embodied evolution as well as a thorough description of the underlying concepts and mechanisms. This article also presents a comprehensive summary of research published in the field since its inception around the year 2000, providing various perspectives to identify the major trends. In particular, we identify a shift from considering embodied evolution as a parallel search method within small robot collectives (fewer than 10 robots to embodied evolution as an online distributed learning method for designing collective behaviors in swarm-like collectives. This article concludes with a discussion of applications and open questions, providing a milestone for past and an inspiration for future research.

  3. Influence of equilibrium shear flow in the parallel magnetic direction on edge localized mode crash

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y.; Xiong, Y. Y. [College of Physical Science and Technology, Sichuan University, 610064 Chengdu (China); Chen, S. Y., E-mail: sychen531@163.com [College of Physical Science and Technology, Sichuan University, 610064 Chengdu (China); Key Laboratory of High Energy Density Physics and Technology of Ministry of Education, Sichuan University, Chengdu 610064 (China); Southwestern Institute of Physics, Chengdu 610041 (China); Huang, J.; Tang, C. J. [College of Physical Science and Technology, Sichuan University, 610064 Chengdu (China); Key Laboratory of High Energy Density Physics and Technology of Ministry of Education, Sichuan University, Chengdu 610064 (China)

    2016-04-15

    The influence of the parallel shear flow on the evolution of peeling-ballooning (P-B) modes is studied with the BOUT++ four-field code in this paper. The parallel shear flow has different effects in linear simulation and nonlinear simulation. In the linear simulations, the growth rate of edge localized mode (ELM) can be increased by Kelvin-Helmholtz term, which can be caused by the parallel shear flow. In the nonlinear simulations, the results accord with the linear simulations in the linear phase. However, the ELM size is reduced by the parallel shear flow in the beginning of the turbulence phase, which is recognized as the P-B filaments' structure. Then during the turbulence phase, the ELM size is decreased by the shear flow.

  4. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  5. Constraints on the evolution of phenotypic plasticity

    DEFF Research Database (Denmark)

    Murren, Courtney J; Auld, Josh R.; Callahan, Hilary S

    2015-01-01

    Phenotypic plasticity is ubiquitous and generally regarded as a key mechanism for enabling organisms to survive in the face of environmental change. Because no organism is infinitely or ideally plastic, theory suggests that there must be limits (for example, the lack of ability to produce...... an optimal trait) to the evolution of phenotypic plasticity, or that plasticity may have inherent significant costs. Yet numerous experimental studies have not detected widespread costs. Explicitly differentiating plasticity costs from phenotype costs, we re-evaluate fundamental questions of the limits...... to the evolution of plasticity and of generalists vs specialists. We advocate for the view that relaxed selection and variable selection intensities are likely more important constraints to the evolution of plasticity than the costs of plasticity. Some forms of plasticity, such as learning, may be inherently...

  6. Spontaneous and Widespread Electricity Generation in Natural Deep-Sea Hydrothermal Fields.

    Science.gov (United States)

    Yamamoto, Masahiro; Nakamura, Ryuhei; Kasaya, Takafumi; Kumagai, Hidenori; Suzuki, Katsuhiko; Takai, Ken

    2017-05-15

    Deep-sea hydrothermal vents discharge abundant reductive energy into oxidative seawater. Herein, we demonstrated that in situ measurements of redox potentials on the surfaces of active hydrothermal mineral deposits were more negative than the surrounding seawater potential, driving electrical current generation. We also demonstrated that negative potentials in the surface of minerals were widespread in the hydrothermal fields, regardless of the proximity to hydrothermal fluid discharges. Lab experiments verified that the negative potential of the mineral surface was induced by a distant electron transfer from the hydrothermal fluid through the metallic and catalytic properties of minerals. These results indicate that electric current is spontaneously and widely generated in natural mineral deposits in deep-sea hydrothermal fields. Our discovery provides important insights into the microbial communities that are supported by extracellular electron transfer and the prebiotic chemical and metabolic evolution of the ocean hydrothermal systems. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Cognitive dissonance as an explanation of the genesis, evolution ...

    African Journals Online (AJOL)

    The steady erosion of support for this flagrant HIV denialism, together with the rise of neoliberal thinking in the ANC, would lead to the evolution of this biological denialism into a form of treatment denialism. This ideology argued against the widespread provision and use of antiretroviral treatment. Empirical evidence is ...

  8. Darwinism Extended - A survey of how the idea of cultural evolution evolved

    NARCIS (Netherlands)

    Buskes, C.J.J.

    2013-01-01

    In the past 150 years there have been many attempts to draw parallels between cultural and biological evolution. Most of these attempts were flawed due to lack of knowledge and false ideas about evolution. In recent decades these shortcomings have been cleared away, thus triggering a renewed

  9. Darwinism Extended - A Survey of How the Idea of Cultural Evolution Evolved

    NARCIS (Netherlands)

    Buskes, C.J.J.

    2013-01-01

    In the past 150 years there have been many attempts to draw parallels between cultural and biological evolution. Most of these attempts were flawed due to lack of knowledge and false ideas about evolution. In recent decades these shortcomings have been cleared away, thus triggering a renewed

  10. Chronic Widespread Pain after Motor Vehicle Collision Typically Occurs via Immediate Development and Non-Recovery: Results of an Emergency Department-Based Cohort Study

    Science.gov (United States)

    Hu, JunMei; Bortsov, Andrey V.; Ballina, Lauren; Orrey, Danielle C.; Swor, Robert A.; Peak, David; Jones, Jeffrey; Rathlev, Niels; Lee, David C.; Domeier, Robert; Hendry, Phyllis; Parry, Blair A.; McLean, Samuel A.

    2016-01-01

    Motor vehicle collision (MVC) can trigger chronic widespread pain (CWP) development in vulnerable individuals. Whether such CWP typically develops via the evolution of pain from regional to widespread or via the early development of widespread pain with non-recovery is currently unknown. We evaluated the trajectory of CWP development (American College of Rheumatology criteria) among 948 European-American individuals who presented to the emergency department (ED) for care in the early aftermath of MVC. Pain extent was assessed in the ED and 6 weeks, 6 months, and 1 year after MVC on 100%, 91%, 89%, and 91% of participants, respectively. Individuals who reported prior CWP at the time of ED evaluation (n = 53) were excluded. Trajectory modeling identified a two-group solution as optimal, with the Bayes Factor value (138) indicating strong model selection. Linear solution plots supported a non-recovery model. While the number of body regions with pain in the non-CWP group steadily declined, the number of body regions with pain in the CWP trajectory group (192/895, 22%) remained relatively constant over time. These data support the hypothesis that individuals who develop CWP after MVC develop widespread pain in the early aftermath of MVC which does not remit. PMID:26808013

  11. Experimental evolution in biofilm populations

    Science.gov (United States)

    Steenackers, Hans P.; Parijs, Ilse; Foster, Kevin R.; Vanderleyden, Jozef

    2016-01-01

    Biofilms are a major form of microbial life in which cells form dense surface-associated communities that can persist for many generations. The long life of biofilm communities means that they can be strongly shaped by evolutionary processes. Here, we review the experimental study of evolution in biofilm communities. We first provide an overview of the different experimental models used to study biofilm evolution and their associated advantages and disadvantages. We then illustrate the vast amount of diversification observed during biofilm evolution, and we discuss (i) potential ecological and evolutionary processes behind the observed diversification, (ii) recent insights into the genetics of adaptive diversification, (iii) the striking degree of parallelism between evolution experiments and real-life biofilms and (iv) potential consequences of diversification. In the second part, we discuss the insights provided by evolution experiments in how biofilm growth and structure can promote cooperative phenotypes. Overall, our analysis points to an important role of biofilm diversification and cooperation in bacterial survival and productivity. Deeper understanding of both processes is of key importance to design improved antimicrobial strategies and diagnostic techniques. PMID:26895713

  12. A PARALLEL MONTE CARLO CODE FOR SIMULATING COLLISIONAL N-BODY SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Pattabiraman, Bharath; Umbreit, Stefan; Liao, Wei-keng; Choudhary, Alok; Kalogera, Vassiliki; Memik, Gokhan; Rasio, Frederic A., E-mail: bharath@u.northwestern.edu [Center for Interdisciplinary Exploration and Research in Astrophysics, Northwestern University, Evanston, IL (United States)

    2013-02-15

    We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N ≈ 10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme as well as a parallel sorting algorithm required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce along with our choice of decomposition scheme minimize communication costs and ensure optimal distribution of data and workload among the processing units. Our implementation uses the Message Passing Interface library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse with the number of stars, N, spanning three orders of magnitude from 10^5 to 10^7. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. Also, we observe good total energy conservation, within ≲0.04% throughout all simulations. We analyze the performance of the code, and demonstrate near-linear scaling of the runtime with the number of processors up to 64 processors for N = 10^5, 128 for N = 10^6 and 256 for N = 10^7. The runtime reaches saturation with the addition of processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60×, 100×, and 220×, respectively.

  13. Modern spandrels: the roles of genetic drift, gene flow and natural selection in the evolution of parallel clines.

    Science.gov (United States)

    Santangelo, James S; Johnson, Marc T J; Ness, Rob W

    2018-05-16

    Urban environments offer the opportunity to study the role of adaptive and non-adaptive evolutionary processes on an unprecedented scale. While the presence of parallel clines in heritable phenotypic traits is often considered strong evidence for the role of natural selection, non-adaptive evolutionary processes can also generate clines, and this may be more likely when traits have a non-additive genetic basis due to epistasis. In this paper, we use spatially explicit simulations modelled according to the cyanogenesis (hydrogen cyanide, HCN) polymorphism in white clover ( Trifolium repens ) to examine the formation of phenotypic clines along urbanization gradients under varying levels of drift, gene flow and selection. HCN results from an epistatic interaction between two Mendelian-inherited loci. Our results demonstrate that the genetic architecture of this trait makes natural populations susceptible to decreases in HCN frequencies via drift. Gradients in the strength of drift across a landscape resulted in phenotypic clines with lower frequencies of HCN in strongly drifting populations, giving the misleading appearance of deterministic adaptive changes in the phenotype. Studies of heritable phenotypic change in urban populations should generate null models of phenotypic evolution based on the genetic architecture underlying focal traits prior to invoking selection's role in generating adaptive differentiation. © 2018 The Author(s).
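
    To make the drift mechanism concrete, the following minimal Wright-Fisher sketch (an illustration under stated assumptions, not the authors' spatially explicit simulations) shows how drift alone, acting on a two-locus epistatic trait in which HCN requires a dominant allele at both loci, depresses mean trait frequency more strongly in smaller populations, which is enough to produce a cline along a gradient of effective population sizes; all parameter values are placeholders.

      # Illustrative Wright-Fisher drift sketch (not the authors' simulations): HCN requires
      # at least one dominant allele at BOTH loci, so drift alone lowers its expected frequency.
      import numpy as np

      rng = np.random.default_rng(0)
      POP_SIZES = [50, 100, 500, 2000, 10000]   # proxy for a gradient in drift strength
      GENERATIONS, REPLICATES, P0 = 200, 200, 0.5

      def hcn_frequency(pA, pB):
          # Probability of carrying a dominant allele at each locus
          # (Hardy-Weinberg within each deme, loci treated as independent).
          return (1.0 - (1.0 - pA) ** 2) * (1.0 - (1.0 - pB) ** 2)

      for N in POP_SIZES:
          freqs = []
          for _ in range(REPLICATES):
              pA, pB = P0, P0
              for _ in range(GENERATIONS):          # binomial sampling of 2N gene copies
                  pA = rng.binomial(2 * N, pA) / (2 * N)
                  pB = rng.binomial(2 * N, pB) / (2 * N)
              freqs.append(hcn_frequency(pA, pB))
          print(f"N = {N:6d}  mean HCN frequency after drift: {np.mean(freqs):.3f}")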

  14. Evolution of helping and harming in heterogeneous groups.

    Science.gov (United States)

    Rodrigues, António M M; Gardner, Andy

    2013-08-01

    Social groups are often composed of individuals who differ in many respects. Theoretical studies on the evolution of helping and harming behaviors have largely focused upon genetic differences between individuals. However, nongenetic variation between group members is widespread in natural populations, and may mediate differences in individuals' social behavior. Here, we develop a framework to study how variation in individual quality mediates the evolution of unconditional and conditional social traits. We investigate the scope for the evolution of social traits that are conditional on the quality of the actor and/or recipients. We find that asymmetries in individual quality can lead to the evolution of plastic traits with different individuals expressing helping and harming traits within the same group. In this context, population viscosity can mediate the evolution of social traits, and local competition can promote both helping and harming behaviors. Furthermore, asymmetries in individual quality can lead to the evolution of competition-like traits between clonal individuals. Overall, we highlight the importance of asymmetries in individual quality, including differences in reproductive value and the ability to engage in successful social interactions, in mediating the evolution of helping and harming behaviors. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  15. High-speed detection of emergent market clustering via an unsupervised parallel genetic algorithm

    Directory of Open Access Journals (Sweden)

    Dieter Hendricks

    2016-02-01

    Full Text Available We implement a master-slave parallel genetic algorithm with a bespoke log-likelihood fitness function to identify emergent clusters within price evolutions. We use graphics processing units (GPUs to implement a parallel genetic algorithm and visualise the results using disjoint minimal spanning trees. We demonstrate that our GPU parallel genetic algorithm, implemented on a commercially available general purpose GPU, is able to recover stock clusters in sub-second speed, based on a subset of stocks in the South African market. This approach represents a pragmatic choice for low-cost, scalable parallel computing and is significantly faster than a prototype serial implementation in an optimised C-based fourth-generation programming language, although the results are not directly comparable because of compiler differences. Combined with fast online intraday correlation matrix estimation from high frequency data for cluster identification, the proposed implementation offers cost-effective, near-real-time risk assessment for financial practitioners.

  16. Evolution of calculation methods taking into account severe accidents

    International Nuclear Information System (INIS)

    L'Homme, A.; Courtaud, J.M.

    1990-12-01

    During the first decade of PWR operation in France, the calculation methods used for design and operation improved considerably. This paper gives a general analysis of the evolution of calculation methods in parallel with the evolution of the safety approach concerning PWRs. A comprehensive presentation of the principal calculation tools, as applied during the past decade, is then given. An effort is made to predict the improvements expected in the near future.

  17. Identifying hidden rate changes in the evolution of a binary morphological character: the evolution of plant habit in campanulid angiosperms.

    Science.gov (United States)

    Beaulieu, Jeremy M; O'Meara, Brian C; Donoghue, Michael J

    2013-09-01

    The growth of phylogenetic trees in scope and in size is promising from the standpoint of understanding a wide variety of evolutionary patterns and processes. With trees comprised of larger, older, and globally distributed clades, it is likely that the lability of a binary character will differ significantly among lineages, which could lead to errors in estimating transition rates and the associated inference of ancestral states. Here we develop and implement a new method for identifying different rates of evolution in a binary character along different branches of a phylogeny. We illustrate this approach by exploring the evolution of growth habit in Campanulidae, a flowering plant clade containing some 35,000 species. The distribution of woody versus herbaceous species calls into question the use of traditional models of binary character evolution. The recognition and accommodation of changes in the rate of growth form evolution in different lineages demonstrates, for the first time, a robust picture of growth form evolution across a very large, very old, and very widespread flowering plant clade.
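
    The method described above expands the usual two-state rate matrix with hidden rate classes. As a hedged illustration of the idea (not the authors' implementation), the sketch below builds a four-state generator with "slow" and "fast" rate classes for a woody/herbaceous character and exponentiates it to obtain branch transition probabilities; all rate values are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

# States ordered (woody-slow, herbaceous-slow, woody-fast, herbaceous-fast).
# q01/q10 are trait transition rates within a rate class; a is the rate of
# switching between the hidden rate classes. Values are illustrative only.
q01_s, q10_s, q01_f, q10_f, a = 0.01, 0.01, 0.5, 0.5, 0.05
Q = np.array([
    [-(q01_s + a), q01_s,         a,            0.0         ],
    [q10_s,        -(q10_s + a),  0.0,          a           ],
    [a,            0.0,           -(q01_f + a), q01_f       ],
    [0.0,          a,             q10_f,        -(q10_f + a)],
])
P = expm(Q * 10.0)   # transition probabilities along a branch of length 10
print(P.round(3))    # rows sum to 1; these feed a pruning-algorithm likelihood
```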

  18. Evolution of the carabid ground beetles.

    Science.gov (United States)

    Osawa, S; Su, Z H; Kim, C G; Okamoto, M; Tominaga, O; Imura, Y

    1999-01-01

    The phylogenetic relationships of the carabid ground beetles have been estimated by analysing a large part of the ND5 gene sequences of more than 1,000 specimens consisting of the representative species and geographic races covering most of the genera and subgenera known in the world. From the phylogenetic analyses in conjunction with the mtDNA-based dating, a scenario of the establishment of the present habitats of the respective Japanese carabids has been constructed. The carabid diversification took place ca. 40 MYA as an explosive radiation of the major genera. During evolution, occasional small or single bangs also took place, sometimes accompanied by parallel morphological evolution in phylogenetically remote as well as close lineages. The existence of silent periods, in which few morphological changes took place, has been recognized during evolution. Thus, the carabid evolution is discontinuous, alternatively having a phase of rapid morphological change and a silent phase.

  19. Geometric phases for mixed states during cyclic evolutions

    International Nuclear Information System (INIS)

    Fu Libin; Chen Jingling

    2004-01-01

    The geometric phases of cyclic evolutions for mixed states are discussed in the framework of unitary evolution. A canonical 1-form is defined whose line integral gives the geometric phase, which is gauge invariant. It reduces to the Aharonov and Anandan phase in the pure state case. Our definition is consistent with the phase shift in the proposed experiment (Sjoeqvist et al 2000 Phys. Rev. Lett. 85 2845) for a cyclic evolution if the unitary transformation satisfies the parallel transport condition. A comprehensive geometric interpretation is also given. It shows that the geometric phases for mixed states share the same geometric meaning as those for pure states.
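
    For orientation, the expressions below (reconstructed from standard references as an assumption, not quoted from the paper) show the pure-state Aharonov-Anandan limit and the interferometric mixed-state phase of Sjöqvist et al. that the canonical 1-form construction is stated to reproduce under parallel transport.

```latex
% Pure-state (Aharonov-Anandan) geometric phase for a cyclic evolution of period \tau:
\gamma_{\mathrm{AA}} \;=\; \arg\langle\psi(0)|\psi(\tau)\rangle
  \;+\; i\int_0^{\tau}\!\langle\psi(t)|\dot\psi(t)\rangle\,dt .

% Interferometric mixed-state phase for \rho(0)=\sum_k w_k |k\rangle\langle k|
% evolving unitarily, \rho(t)=U(t)\,\rho(0)\,U^{\dagger}(t):
\gamma \;=\; \arg\sum_k w_k\,\langle k|U(\tau)|k\rangle\,
  \exp\!\Big(-\!\int_0^{\tau}\!\langle k|U^{\dagger}(t)\dot U(t)|k\rangle\,dt\Big).
```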

  20. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  1. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  2. Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study

    Directory of Open Access Journals (Sweden)

    Hari Radhakrishnan

    2015-01-01

    Full Text Available This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
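
    The decisive fix reported above was replacing a sequential collective summation with a binary-tree reduction. The toy sketch below (plain Python, not coarray Fortran) shows the pairwise-combination pattern that reduces the depth of the reduction from O(P) to O(log2 P); it is illustrative only.

```python
def tree_reduce(values):
    """Binary-tree reduction of one value per 'image'. Pairwise partial sums
    reduce the reduction depth from O(P) for a sequential sum to O(log2 P),
    which is what restored scalability in the case study above."""
    vals = list(values)
    p = len(vals)
    step = 1
    while step < p:
        for i in range(0, p - step, 2 * step):
            vals[i] += vals[i + step]   # image i combines with image i+step
        step *= 2
    return vals[0]

print(tree_reduce(range(1, 33)))  # 528, identical to sum(range(1, 33))
```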

  3. Phylogeography, population structure and evolution of coral-eating butterflyfishes (Family Chaetodontidae, genus Chaetodon, subgenus Corallochaetodon)

    KAUST Repository

    Waldrop, Ellen; Hobbs, Jean-Paul A.; Randall, John E.; DiBattista, Joseph; Rocha, Luiz A.; Kosaki, Randall K.; Berumen, Michael L.; Bowen, Brian W.

    2016-01-01

    This study compares the phylogeography, population structure and evolution of four butterflyfish species in the Chaetodon subgenus Corallochaetodon, with two widespread species (Indian Ocean – C. trifasciatus and Pacific Ocean – C. lunulatus

  4. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  5. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  6. Parallel Genetic and Phenotypic Evolution of DNA Superhelicity in Experimental Populations of Escherichia coli

    DEFF Research Database (Denmark)

    Crozat, Estelle; Winkworth, Cynthia; Gaffé, Joël

    2010-01-01

    DNA supercoiling is the master function that interconnects chromosome structure and global gene transcription. This function has recently been shown to be under strong selection in Escherichia coli. During the evolution of 12 initially identical populations propagated in a defined environment …, indicate that changes in DNA superhelicity have been important in the evolution of these populations. Surprisingly, however, most of the evolved alleles we tested had either no detectable or slightly deleterious effects on fitness, despite these signatures of positive selection.

  7. Large-scale patterns of diversification in the widespread legume genus Senna and the evolutionary role of extrafloral nectaries.

    Science.gov (United States)

    Marazzi, Brigitte; Sanderson, Michael J

    2010-12-01

    Unraveling the diversification history of old, species-rich and widespread clades is difficult because of extinction, undersampling, and taxonomic uncertainty. In the context of these challenges, we investigated the timing and mode of lineage diversification in Senna (Leguminosae) to gain insights into the evolutionary role of extrafloral nectaries (EFNs). EFNs secrete nectar, attracting ants and forming ecologically important ant-plant mutualisms. In Senna, EFNs characterize one large clade (EFN clade), including 80% of its 350 species. Taxonomic accounts make Senna the largest caesalpinioid genus, but quantitative comparisons to other taxa require inferences about rates. Molecular dating analyses suggest that Senna originated in the early Eocene, and its major lineages appeared during early/mid Eocene to early Oligocene. EFNs evolved in the late Eocene, after the main radiation of ants. The EFN clade diversified faster, becoming significantly more species-rich than non-EFN clades. The shift in diversification rates associated with EFN evolution supports the hypothesis that EFNs represent a (relatively old) key innovation in Senna. EFNs may have promoted the colonization of new habitats appearing with the early uplift of the Andes. This would explain the distinctive geographic concentration of the EFN clade in South America. © 2010 The Author(s). Evolution© 2010 The Society for the Study of Evolution.

  8. Are neonicotinoid insecticides driving declines of widespread butterflies?

    Directory of Open Access Journals (Sweden)

    Andre S. Gilburn

    2015-11-01

    Full Text Available There has been widespread concern that neonicotinoid pesticides may be adversely impacting wild and managed bees for some years, but recently attention has shifted to examining broader effects they may be having on biodiversity. For example in the Netherlands, declines in insectivorous birds are positively associated with levels of neonicotinoid pollution in surface water. In England, the total abundance of widespread butterfly species declined by 58% on farmed land between 2000 and 2009 despite both a doubling in conservation spending in the UK, and predictions that climate change should benefit most species. Here we build models of the UK population indices from 1985 to 2012 for 17 widespread butterfly species that commonly occur at farmland sites. Of the factors we tested, three correlated significantly with butterfly populations. Summer temperature and the index for a species the previous year are both positively associated with butterfly indices. By contrast, the number of hectares of farmland where neonicotinoid pesticides are used is negatively associated with butterfly indices. Indices for 15 of the 17 species show negative associations with neonicotinoid usage. The declines in butterflies have largely occurred in England, where neonicotinoid usage is at its highest. In Scotland, where neonicotinoid usage is comparatively low, butterfly numbers are stable. Further research is needed urgently to show whether there is a causal link between neonicotinoid usage and the decline of widespread butterflies or whether it simply represents a proxy for other environmental factors associated with intensive agriculture.

  9. Widespread presence of human BOULE homologs among animals and conservation of their ancient reproductive function.

    Directory of Open Access Journals (Sweden)

    Chirag Shah

    2010-07-01

    Full Text Available Sex-specific traits that lead to the production of dimorphic gametes, sperm in males and eggs in females, are fundamental for sexual reproduction and accordingly widespread among animals. Yet the sex-biased genes that underlie these sex-specific traits are under strong selective pressure, and as a result of adaptive evolution they often become divergent. Indeed out of hundreds of male or female fertility genes identified in diverse organisms, only a very small number of them are implicated specifically in reproduction in more than one lineage. Few genes have exhibited a sex-biased, reproductive-specific requirement beyond a given phylum, raising the question of whether any sex-specific gametogenesis factors could be conserved and whether gametogenesis might have evolved multiple times. Here we describe a metazoan origin of a conserved human reproductive protein, BOULE, and its prevalence from primitive basal metazoans to chordates. We found that BOULE homologs are present in the genomes of representative species of each of the major lineages of metazoans and exhibit reproductive-specific expression in all species examined, with a preponderance of male-biased expression. Examination of Boule evolution within insect and mammalian lineages revealed little evidence for accelerated evolution, unlike most reproductive genes. Instead, purifying selection was the major force behind Boule evolution. Furthermore, loss of function of mammalian Boule resulted in male-specific infertility and a global arrest of sperm development remarkably similar to the phenotype in an insect boule mutation. This work demonstrates the conservation of a reproductive protein throughout eumetazoa, its predominant testis-biased expression in diverse bilaterian species, and conservation of a male gametogenic requirement in mice. This shows an ancient gametogenesis requirement for Boule among Bilateria and supports a model of a common origin of spermatogenesis.

  10. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
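
    As a concrete, heavily simplified illustration of the SENSE-type reconstruction mentioned above (Cartesian sampling, acceleration factor R = 2, ideal sensitivity maps, no regularization), each aliased pixel can be unfolded by solving a small least-squares system built from the coil sensitivities. The array shapes and scaling conventions are assumptions of this sketch, not a clinical implementation.

```python
import numpy as np

def sense_unfold_r2(aliased, sens):
    """Toy Cartesian SENSE reconstruction for acceleration factor R = 2.

    aliased : (n_coils, ny//2, nx) complex aliased coil images
    sens    : (n_coils, ny, nx)    complex coil sensitivity maps

    For R = 2, each aliased pixel is the superposition of two true pixels
    separated by half the field of view; unfolding solves a tiny
    least-squares system per pixel pair using the coil sensitivities.
    """
    n_coils, ny_half, nx = aliased.shape
    ny = 2 * ny_half
    recon = np.zeros((ny, nx), dtype=complex)
    for y in range(ny_half):
        for x in range(nx):
            # sensitivity matrix for the two overlapping locations
            S = np.stack([sens[:, y, x], sens[:, y + ny_half, x]], axis=1)  # (n_coils, 2)
            b = aliased[:, y, x]                                            # (n_coils,)
            rho, *_ = np.linalg.lstsq(S, b, rcond=None)
            recon[y, x], recon[y + ny_half, x] = rho
    return recon
```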

  11. Gas solubilities widespread applications

    CERN Document Server

    Gerrard, William

    1980-01-01

    Gas Solubilities: Widespread Applications discusses several topics concerning the various applications of gas solubilities. The first chapter of the book reviews Henr's law, while the second chapter covers the effect of temperature on gas solubility. The third chapter discusses the various gases used by Horiuti, and the following chapters evaluate the data on sulfur dioxide, chlorine data, and solubility data for hydrogen sulfide. Chapter 7 concerns itself with solubility of radon, thoron, and actinon. Chapter 8 tackles the solubilities of diborane and the gaseous hydrides of groups IV, V, and

  12. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
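
    The report's parallel-E parallel-M idea (distributing the E step over data chunks and aggregating sufficient statistics for the M step) can be sketched generically. The toy below uses a one-dimensional Gaussian mixture as a stand-in for the psychometric latent-variable model and Python multiprocessing in place of the report's implementation; everything here is an illustrative assumption.

```python
import numpy as np
from multiprocessing import Pool

def e_step_chunk(args):
    """E step on one data chunk: return sufficient statistics
    (responsibility sums and weighted first/second moments) so the master
    only has to aggregate them."""
    x, mu, var, w = args
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    resp = w * dens
    resp /= resp.sum(axis=1, keepdims=True)
    return resp.sum(axis=0), resp.T @ x, resp.T @ x**2

def parallel_em(x, k=3, iters=50, n_workers=4, seed=0):
    rng = np.random.default_rng(seed)
    mu, var, w = rng.choice(x, k), np.full(k, x.var()), np.full(k, 1.0 / k)
    chunks = np.array_split(x, n_workers)
    with Pool(n_workers) as pool:
        for _ in range(iters):
            stats = pool.map(e_step_chunk, [(c, mu, var, w) for c in chunks])
            n_k = sum(s[0] for s in stats)
            s1 = sum(s[1] for s in stats)
            s2 = sum(s[2] for s in stats)
            # M step on the aggregated statistics (could itself be split over parameter blocks)
            mu, var, w = s1 / n_k, s2 / n_k - (s1 / n_k) ** 2, n_k / n_k.sum()
    return mu, var, w
```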

  13. Parallel Evolutionary Optimization of Multibody Systems with Application to Railway Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Eberhard, Peter [University of Erlangen-Nuremberg, Institute of Applied Mechanics (Germany)], E-mail: eberhard@ltm.uni-erlangen.de; Dignath, Florian [University of Stuttgart, Institute B of Mechanics (Germany)], E-mail: fd@mechb.uni-stuttgart.de; Kuebler, Lars [University of Erlangen-Nuremberg, Institute of Applied Mechanics (Germany)], E-mail: kuebler@ltm.uni-erlangen.de

    2003-03-15

    The optimization of multibody systems usually requires many costly criteria computations since the equations of motion must be evaluated by numerical time integration for each considered design. For actively controlled or flexible multibody systems additional difficulties arise as the criteria may contain non-differentiable points or many local minima. Therefore, in this paper a stochastic evolution strategy is used in combination with parallel computing in order to reduce the computation times whilst keeping the inherent robustness. For the parallelization a master-slave approach is used in a heterogeneous workstation/PC cluster. The pool-of-tasks concept is applied in order to deal with the frequently changing workloads of different machines in the cluster. In order to analyze the performance of the parallel optimization method, the suspension of an ICE passenger coach, modeled as an elastic multibody system, is optimized simultaneously with regard to several criteria including vibration damping and a criterion related to safety against derailment. The iterative and interactive nature of a typical optimization process for technical systems is emphasized.
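
    A minimal sketch of the master-slave evaluation scheme described above, with Python multiprocessing standing in for the heterogeneous workstation/PC cluster and a cheap test function standing in for the costly multibody time integration. The (mu, lambda) evolution strategy, parameter values and per-task dispatch are illustrative assumptions, not the authors' code.

```python
import numpy as np
from multiprocessing import Pool

def criteria(design):
    """Placeholder for the costly criterion: in the application each call
    would run a multibody time integration (vibration comfort, safety
    against derailment, ...); here it is just a cheap test function."""
    x = np.asarray(design)
    return float(np.sum((x - 1.5) ** 2) + 0.1 * np.sum(np.sin(5 * x) ** 2))

def evolution_strategy(dim=6, mu=5, lam=20, sigma=0.3, generations=40, workers=4, seed=0):
    """(mu, lambda) evolution strategy with master-slave criteria evaluation.
    chunksize=1 dispatches each costly evaluation as its own task, a simple
    stand-in for the pool-of-tasks load balancing on a changing-load cluster."""
    rng = np.random.default_rng(seed)
    parents = rng.normal(size=(mu, dim))
    with Pool(workers) as pool:
        for _ in range(generations):
            offspring = [parents[rng.integers(mu)] + sigma * rng.normal(size=dim)
                         for _ in range(lam)]
            scored = pool.map(criteria, offspring, chunksize=1)
            order = np.argsort(scored)                       # minimize
            parents = np.array([offspring[i] for i in order[:mu]])
    return parents[0], criteria(parents[0])
```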

  14. Parallel Evolutionary Optimization of Multibody Systems with Application to Railway Dynamics

    International Nuclear Information System (INIS)

    Eberhard, Peter; Dignath, Florian; Kuebler, Lars

    2003-01-01

    The optimization of multibody systems usually requires many costly criteria computations since the equations of motion must be evaluated by numerical time integration for each considered design. For actively controlled or flexible multibody systems additional difficulties arise as the criteria may contain non-differentiable points or many local minima. Therefore, in this paper a stochastic evolution strategy is used in combination with parallel computing in order to reduce the computation times whilst keeping the inherent robustness. For the parallelization a master-slave approach is used in a heterogeneous workstation/PC cluster. The pool-of-tasks concept is applied in order to deal with the frequently changing workloads of different machines in the cluster. In order to analyze the performance of the parallel optimization method, the suspension of an ICE passenger coach, modeled as an elastic multibody system, is optimized simultaneously with regard to several criteria including vibration damping and a criterion related to safety against derailment. The iterative and interactive nature of a typical optimization process for technical systems is emphasized

  15. On the role of sparseness in the evolution of modularity in gene regulatory networks.

    Science.gov (United States)

    Espinosa-Soto, Carlos

    2018-05-01

    Modularity is a widespread property in biological systems. It implies that interactions occur mainly within groups of system elements. A modular arrangement facilitates adjustment of one module without perturbing the rest of the system. Therefore, modularity of developmental mechanisms is a major factor for evolvability, the potential to produce beneficial variation from random genetic change. Understanding how modularity evolves in gene regulatory networks, which create the distinct gene activity patterns that characterize different parts of an organism, is key to developmental and evolutionary biology. One hypothesis for the evolution of modules suggests that interactions between some sets of genes become maladaptive when selection favours additional gene activity patterns. The removal of such interactions by selection would result in the formation of modules. A second hypothesis suggests that modularity evolves in response to sparseness, the scarcity of interactions within a system. Here I simulate the evolution of gene regulatory networks and analyse diverse experimentally sustained networks to study the relationship between sparseness and modularity. My results suggest that sparseness alone is neither sufficient nor necessary to explain modularity in gene regulatory networks. However, sparseness amplifies the effects of forms of selection that, like selection for additional gene activity patterns, already produce an increase in modularity. The fact that new gene activity patterns arise frequently throughout evolution also supports selection for such patterns as a major factor in the evolution of modularity. That sparseness is widespread across gene regulatory networks indicates that it may have facilitated the evolution of modules in a wide variety of cases.

  16. Multi Scale Finite Element Analyses By Using SEM-EBSD Crystallographic Modeling and Parallel Computing

    International Nuclear Information System (INIS)

    Nakamachi, Eiji

    2005-01-01

    A crystallographic homogenization procedure is introduced into the conventional static-explicit and dynamic-explicit finite element formulations to develop a multi-scale (double-scale) analysis code that predicts the plastic-strain-induced texture evolution, yield loci and formability of sheet metal. The double-scale structure consists of a crystal aggregation (the microstructure) and a macroscopic elastic-plastic continuum. First, we measure crystal morphologies with an SEM-EBSD apparatus and define a unit cell of the microstructure that satisfies the periodicity condition at the real scale of the polycrystal. Next, this crystallographic homogenization FE code is applied to 3N pure-iron and 'Benchmark' aluminum A6022 polycrystal sheets. It reveals that the initial crystal orientation distribution (the texture) strongly affects the plastic-strain-induced texture, the evolution of anisotropic hardening and the sheet deformation. Since the multi-scale finite element analysis requires long computation times, a parallel computing technique using a PC cluster is developed for faster calculation. In this parallelization scheme, a dynamic workload balancing technique is introduced for quick and efficient calculations.

  17. Synchronous parallel kinetic Monte Carlo for continuum diffusion-reaction systems

    International Nuclear Information System (INIS)

    Martinez, E.; Marian, J.; Kalos, M.H.; Perlado, J.M.

    2008-01-01

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm is intended as a generalization of the standard n-fold kMC method, and is trivially implemented in parallel architectures. In its present form, the algorithm is not rigorous in the sense that boundary conflicts are ignored. We demonstrate, however, that, in their absence, or if they were correctly accounted for, our algorithm solves the same master equation as the serial method. We test the validity and parallel performance of the method by solving several pure diffusion problems (i.e. with no particle interactions) with known analytical solution. We also study diffusion-reaction systems with known asymptotic behavior and find that, for large systems with interaction radii smaller than the typical diffusion length, boundary conflicts are negligible and do not affect the global kinetic evolution, which is seen to agree with the expected analytical behavior. Our method is a controlled approximation in the sense that the error incurred by ignoring boundary conflicts can be quantified intrinsically, during the course of a simulation, and decreased arbitrarily (controlled) by modifying a few problem-dependent simulation parameters
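
    A minimal sketch of the synchronous-time idea (boundary conflicts ignored, as in the approximation above): all spatial subdomains advance with the same exponentially distributed time step set by the fastest subdomain, and slower subdomains execute null events with the complementary probability. Rates and domain count below are illustrative assumptions.

```python
import numpy as np

def synchronous_pkmc_step(domain_rates, rng):
    """One synchronous step of a parallel kMC scheme.

    Each domain d has total event rate R_d. All domains share the same time
    increment dt drawn from the fastest total rate R_max; a domain executes
    a real event with probability R_d / R_max and a 'null' event otherwise,
    which keeps every domain perfectly synchronous in time.
    """
    r_max = max(domain_rates)
    dt = -np.log(rng.random()) / r_max                        # exponential waiting time
    fire = rng.random(len(domain_rates)) < np.asarray(domain_rates) / r_max
    return dt, fire       # the caller executes one n-fold-way event in each flagged domain

rng = np.random.default_rng(0)
rates = [4.0, 2.5, 0.8, 3.1]        # total rates of four spatial subdomains
t = 0.0
for _ in range(5):
    dt, fire = synchronous_pkmc_step(rates, rng)
    t += dt
    print(f"t = {t:.3f}, domains executing real events: {np.where(fire)[0].tolist()}")
```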

  18. Thermal Non-equilibrium Consistent with Widespread Cooling

    Science.gov (United States)

    Winebarger, A.; Lionello, R.; Mikic, Z.; Linker, J.; Mok, Y.

    2014-01-01

    Time correlation analysis has been used to show widespread cooling in the solar corona; this cooling has been interpreted as a result of impulsive (nanoflare) heating. In this work, we investigate wide-spread cooling using a 3D model for a solar active region which has been heated with highly stratified heating. This type of heating drives thermal non-equilibrium solutions, meaning that though the heating is effectively steady, the density and temperature in the solution are not. We simulate the expected observations in narrowband EUV images and apply the time correlation analysis. We find that the results of this analysis are qualitatively similar to the observed data. We discuss additional diagnostics that may be applied to differentiate between these two heating scenarios.

  19. Convergent evolution of the genomes of marine mammals

    Science.gov (United States)

    Foote, Andrew D.; Liu, Yue; Thomas, Gregg W.C.; Vinař, Tomáš; Alföldi, Jessica; Deng, Jixin; Dugan, Shannon; van Elk, Cornelis E.; Hunter, Margaret; Joshi, Vandita; Khan, Ziad; Kovar, Christie; Lee, Sandra L.; Lindblad-Toh, Kerstin; Mancia, Annalaura; Nielsen, Rasmus; Qin, Xiang; Qu, Jiaxin; Raney, Brian J.; Vijay, Nagarjun; Wolf, Jochen B. W.; Hahn, Matthew W.; Muzny, Donna M.; Worley, Kim C.; Gilbert, M. Thomas P.; Gibbs, Richard A.

    2015-01-01

    Marine mammals from different mammalian orders share several phenotypic traits adapted to the aquatic environment and therefore represent a classic example of convergent evolution. To investigate convergent evolution at the genomic level, we sequenced and performed de novo assembly of the genomes of three species of marine mammals (the killer whale, walrus and manatee) from three mammalian orders that share independently evolved phenotypic adaptations to a marine existence. Our comparative genomic analyses found that convergent amino acid substitutions were widespread throughout the genome and that a subset of these substitutions were in genes evolving under positive selection and putatively associated with a marine phenotype. However, we found higher levels of convergent amino acid substitutions in a control set of terrestrial sister taxa to the marine mammals. Our results suggest that, whereas convergent molecular evolution is relatively common, adaptive molecular convergence linked to phenotypic convergence is comparatively rare.

  20. A SPECT reconstruction method for extending parallel to non-parallel geometries

    International Nuclear Information System (INIS)

    Wen Junhai; Liang Zhengrong

    2010-01-01

    Due to its simplicity, parallel-beam geometry is usually assumed for the development of image reconstruction algorithms. The established reconstruction methodologies are then extended to fan-beam, cone-beam and other non-parallel geometries for practical application. This situation occurs for quantitative SPECT (single photon emission computed tomography) imaging in inverting the attenuated Radon transform. Novikov reported an explicit parallel-beam formula for the inversion of the attenuated Radon transform in 2000. Thereafter, a formula for fan-beam geometry was reported by Bukhgeim and Kazantsev (2002 Preprint N. 99 Sobolev Institute of Mathematics). At the same time, we presented a formula for varying focal-length fan-beam geometry. Sometimes, the reconstruction formula is so implicit that we cannot obtain the explicit reconstruction formula in the non-parallel geometries. In this work, we propose a unified reconstruction framework for extending parallel-beam geometry to any non-parallel geometry using ray-driven techniques. Studies by computer simulations demonstrated the accuracy of the presented unified reconstruction framework for extending parallel-beam to non-parallel geometries in inverting the attenuated Radon transform.

  1. Parallel and orthogonal stimulus in ultradiluted neural networks

    International Nuclear Information System (INIS)

    Sobral, G. A. Jr.; Vieira, V. M.; Lyra, M. L.; Silva, C. R. da

    2006-01-01

    Extending a model due to Derrida, Gardner, and Zippelius, we have studied the recognition ability of an extremely and asymmetrically diluted version of the Hopfield model for associative memory by including the effect of a stimulus in the dynamics of the system. We obtain exact results for the dynamic evolution of the average network superposition. The stimulus field was taken to be proportional to the overlap of the system state with a particular stimulated pattern. Two situations were analyzed, namely, the external stimulus acting on the initialization pattern (parallel stimulus) and the external stimulus acting on a pattern orthogonal to the initialization one (orthogonal stimulus). In both cases, we obtained the complete phase diagram in the parameter space composed of the stimulus field, thermal noise, and network capacity. Our results show that the system improves its recognition ability for a parallel stimulus. For an orthogonal stimulus, two recognition phases emerge, with the system locking onto either the initialization pattern or the stimulated pattern. We confront our analytical results with numerical simulations for the noiseless case T=0.
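
    A small numerical companion to the analytical model (an assumption-laden sketch, not the authors' calculation): zero-temperature parallel dynamics of a strongly diluted Hopfield network with a stimulus field proportional to the overlap with a chosen pattern. The network size and dilution level are only illustrative and far from the strict extreme-dilution limit.

```python
import numpy as np

def diluted_hopfield(n=2000, c=10, p=3, lam=0.5, stim=0, steps=15, seed=0):
    rng = np.random.default_rng(seed)
    xi = rng.choice([-1, 1], size=(p, n))                       # stored patterns
    neighbors = np.array([rng.choice(n, size=c, replace=False) for _ in range(n)])
    # Hebbian couplings restricted to the diluted graph:
    # J[i, k] couples neuron i to neuron neighbors[i, k]
    J = np.einsum('pi,pik->ik', xi, xi[:, neighbors]) / c
    s = xi[0].copy()                                            # start near pattern 0
    s[rng.random(n) < 0.2] *= -1                                # 20% initial noise
    overlap_with_init = []
    for _ in range(steps):
        m_stim = np.mean(s * xi[stim])                          # overlap with stimulated pattern
        h = np.einsum('ik,ik->i', J, s[neighbors]) + lam * m_stim * xi[stim]
        s = np.where(h >= 0, 1, -1)                             # zero-temperature parallel update
        overlap_with_init.append(float(np.mean(s * xi[0])))
    return overlap_with_init

# parallel stimulus (stim = initialization pattern) vs orthogonal stimulus (a different pattern)
print(diluted_hopfield(stim=0)[-1], diluted_hopfield(stim=1)[-1])
```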

  2. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  3. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: replicated data decomposition, spatial decomposition, and force decomposition. For Monte Carlo simulations, parallel algorithms have been developed that fall into two categories: those that require a modified Markov chain and those that do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
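
    Of the three molecular dynamics decompositions listed above, spatial decomposition is the easiest to caricature: each processor owns a slab of the simulation box and needs only a halo of neighbouring atoms within the interaction cutoff. The serial Python sketch below emulates that ownership pattern with a toy repulsive force and no periodic boundaries; it is illustrative only.

```python
import numpy as np

def spatial_decomposition_forces(pos, box, n_domains, cutoff=1.0):
    """Serial emulation of spatial-decomposition MD: the box is sliced into
    n_domains slabs along x; each 'processor' computes forces only for its
    own atoms, importing a halo of width `cutoff` from neighbouring slabs.
    A toy pairwise soft repulsion stands in for the real potential; periodic
    boundaries are omitted for brevity.
    """
    forces = np.zeros_like(pos)
    width = box / n_domains
    for d in range(n_domains):
        lo, hi = d * width, (d + 1) * width
        own = np.where((pos[:, 0] >= lo) & (pos[:, 0] < hi))[0]
        halo = np.where((pos[:, 0] >= lo - cutoff) & (pos[:, 0] < hi + cutoff))[0]
        for i in own:                           # each domain works independently
            dr = pos[i] - pos[halo]
            r = np.linalg.norm(dr, axis=1)
            mask = (r > 0) & (r < cutoff)
            forces[i] += np.sum((dr[mask].T * (1.0 / r[mask] ** 2)).T, axis=0)
    return forces

rng = np.random.default_rng(0)
positions = rng.random((200, 3)) * 10.0
f = spatial_decomposition_forces(positions, box=10.0, n_domains=4)
```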

  4. A novel lineage of myoviruses infecting cyanobacteria is widespread in the oceans.

    Science.gov (United States)

    Sabehi, Gazalah; Shaulov, Lihi; Silver, David H; Yanai, Itai; Harel, Amnon; Lindell, Debbie

    2012-02-07

    Viruses infecting bacteria (phages) are thought to greatly impact microbial population dynamics as well as the genome diversity and evolution of their hosts. Here we report on the discovery of a novel lineage of tailed dsDNA phages belonging to the family Myoviridae and describe its first representative, S-TIM5, that infects the ubiquitous marine cyanobacterium, Synechococcus. The genome of this phage encodes an entirely unique set of structural proteins not found in any currently known phage, indicating that it uses lineage-specific genes for virion morphogenesis and represents a previously unknown lineage of myoviruses. Furthermore, among its distinctive collection of replication and DNA metabolism genes, it carries a mitochondrial-like DNA polymerase gene, providing strong evidence for the bacteriophage origin of the mitochondrial DNA polymerase. S-TIM5 also encodes an array of bacterial-like metabolism genes commonly found in phages infecting cyanobacteria including photosynthesis, carbon metabolism and phosphorus acquisition genes. This suggests a common gene pool and gene swapping of cyanophage-specific genes among different phage lineages despite distinct sets of structural and replication genes. All cytosines following purine nucleotides are methylated in the S-TIM5 genome, constituting a unique methylation pattern that likely protects the genome from nuclease degradation. This phage is abundant in the Red Sea and S-TIM5 gene homologs are widespread in the oceans. This unusual phage type is thus likely to be an important player in the oceans, impacting the population dynamics and evolution of their primary producing cyanobacterial hosts.

  5. Extension parallel to the rift zone during segmented fault growth: application to the evolution of the NE Atlantic

    Directory of Open Access Journals (Sweden)

    A. Bubeck

    2017-11-01

    Full Text Available The mechanical interaction of propagating normal faults is known to influence the linkage geometry of first-order faults, and the development of second-order faults and fractures, which transfer displacement within relay zones. Here we use natural examples of growth faults from two active volcanic rift zones (Koa`e, island of Hawai`i, and Krafla, northern Iceland) to illustrate the importance of horizontal-plane extension (heave) gradients, and associated vertical axis rotations, in evolving continental rift systems. Second-order extension and extensional-shear faults within the relay zones variably resolve components of regional extension, and components of extension and/or shortening parallel to the rift zone, to accommodate the inherently three-dimensional (3-D) strains associated with relay zone development and rotation. Such a configuration involves volume increase, which is accommodated at the surface by open fractures; in the subsurface this may be accommodated by veins or dikes oriented obliquely and normal to the rift axis. To consider the scalability of the effects of relay zone rotations, we compare the geometry and kinematics of fault and fracture sets in the Koa`e and Krafla rift zones with data from exhumed contemporaneous fault and dike systems developed within a > 5×10⁴ km² relay system that developed during formation of the NE Atlantic margins. Based on the findings presented here we propose a new conceptual model for the evolution of segmented continental rift basins on the NE Atlantic margins.

  6. a Predator-Prey Model Based on the Fully Parallel Cellular Automata

    Science.gov (United States)

    He, Mingfeng; Ruan, Hongbo; Yu, Changliang

    We present a predator-prey lattice model containing moveable wolves and sheep, which are characterized by Penna double bit strings. Sexual reproduction and child-care strategies are considered. To implement this model in an efficient way, we build a fully parallel cellular automaton based on a new definition of the neighborhood. We show the roles played by the initial densities of the populations, the mutation rate and the linear size of the lattice in the evolution of this model.
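
    The essential point for parallelism is that a fully synchronous update reads only the previous lattice state, so every cell (or block of cells) can be updated independently. The sketch below shows such a synchronous update for a much-simplified predator-prey lattice; the Penna bit-string ageing, sexual reproduction and child care of the model above are deliberately omitted, and all rates are invented.

```python
import numpy as np

EMPTY, SHEEP, WOLF = 0, 1, 2

def step(grid, rng, p_sheep_birth=0.25, p_wolf_starve=0.15):
    """One fully synchronous update of a toy predator-prey lattice.
    All cells are updated from the *same* previous state, which is what
    makes the scheme trivially parallel: each cell needs only its own
    neighborhood of the old grid."""
    old = grid.copy()
    # count von Neumann neighbors of each type with periodic boundaries
    neigh = {s: sum(np.roll(np.roll(old == s, dy, 0), dx, 1)
                    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)])
             for s in (SHEEP, WOLF)}
    rnd = rng.random(old.shape)
    new = old.copy()
    new[(old == EMPTY) & (neigh[SHEEP] > 0) & (rnd < p_sheep_birth)] = SHEEP   # sheep spread
    new[(old == SHEEP) & (neigh[WOLF] > 0)] = WOLF                             # predation
    new[(old == WOLF) & (neigh[SHEEP] == 0) & (rnd < p_wolf_starve)] = EMPTY   # starvation
    return new

rng = np.random.default_rng(1)
grid = rng.choice([EMPTY, SHEEP, WOLF], size=(64, 64), p=[0.5, 0.4, 0.1])
for _ in range(100):
    grid = step(grid, rng)
print({s: int((grid == s).sum()) for s in (EMPTY, SHEEP, WOLF)})
```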

  7. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
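
    Barrel-sort's key idea, as described above, is to route keys to processors by value range so that locally sorted blocks concatenate into a globally sorted sequence. The sketch below mimics that idea with Python multiprocessing; it ignores the message-passing and load-balancing issues that motivate the real algorithm.

```python
import numpy as np
from multiprocessing import Pool

def barrel_sort(keys, n_procs=4):
    """Barrel-sort-style parallel integer sort sketch: keys are binned by
    value range so that each processor receives one contiguous 'barrel',
    each barrel is sorted independently, and the results concatenate into a
    globally sorted array."""
    keys = np.asarray(keys)
    edges = np.linspace(keys.min(), keys.max() + 1, n_procs + 1)
    barrels = [keys[(keys >= lo) & (keys < hi)] for lo, hi in zip(edges[:-1], edges[1:])]
    with Pool(n_procs) as pool:
        sorted_barrels = pool.map(np.sort, barrels)
    return np.concatenate(sorted_barrels)

data = np.random.default_rng(0).integers(0, 100_000, size=100_000)
assert np.array_equal(barrel_sort(data), np.sort(data))
```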

  8. Population genomic scans suggest novel genes underlie convergent flowering time evolution in the introduced range of Arabidopsis thaliana.

    Science.gov (United States)

    Gould, Billie A; Stinchcombe, John R

    2017-01-01

    A long-standing question in evolutionary biology is whether the evolution of convergent phenotypes results from selection on the same heritable genetic components. Using whole-genome sequencing and genome scans, we tested whether the evolution of parallel longitudinal flowering time clines in the native and introduced ranges of Arabidopsis thaliana has a similar genetic basis. We found that common variants of large effect on flowering time in the native range do not appear to have been under recent strong selection in the introduced range. We identified a set of 38 new candidate genes that are putatively linked to the evolution of flowering time. A high degree of conditional neutrality of flowering time variants between the native and introduced range may preclude parallel evolution at the level of genes. Overall, neither gene pleiotropy nor available standing genetic variation appears to have restricted the evolution of flowering time to high-frequency variants from the native range or to known flowering time pathway genes. © 2016 John Wiley & Sons Ltd.

  9. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms, which identify potentially interesting events and reduce the data acquisition rate to levels manageable by the electronics. Such algorithms, being parallel in nature, can be simulated off-line using massively parallel computers. By studying collisions of heavy nuclei at ultra-relativistic energies, the PHENIX experiment intends to investigate the possible existence of a new phase of matter, the quark-gluon plasma, which is theorized to have existed in the very early stages of the evolution of the universe. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and simulate them at rates similar to the data collection rates impose enormous computational demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine- and coarse-grain approaches have been studied and evaluated. Depending on the application, the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single-instruction and multiple-instruction computers is also made, and possible applications of the single-instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  10. Cosmological evolution of p-brane networks

    International Nuclear Information System (INIS)

    Sousa, L.; Avelino, P. P.

    2011-01-01

    In this paper we derive, directly from the Nambu-Goto action, the relevant components of the acceleration of cosmological featureless p-branes, extending previous analysis based on the field theory equations in the thin-brane limit. The component of the acceleration parallel to the velocity is at the core of the velocity-dependent one-scale model for the evolution of p-brane networks. We use this model to show that, in a decelerating expanding universe in which the p-branes are relevant cosmologically, interactions cannot lead to frustration, except for fine-tuned nonrelativistic networks with a dimensionless curvature parameter k<<1. We discuss the implications of our findings for the cosmological evolution of p-brane networks.

  11. MEvoLib v1.0: the first molecular evolution library for Python.

    Science.gov (United States)

    Álvarez-Jarreta, Jorge; Ruiz-Pesini, Eduardo

    2016-10-28

    Molecular evolution studies involve many hard computational problems that are solved, in most cases, with heuristic algorithms providing nearly optimal solutions. Hence, diverse software tools exist for the different stages involved in a molecular evolution workflow. We present MEvoLib, the first molecular evolution library for Python, providing a framework to work with the different tools and methods involved in the common tasks of molecular evolution workflows. In contrast with existing bioinformatics libraries, MEvoLib focuses on the stages involved in molecular evolution studies, enclosing the set of tools with a common purpose in a single high-level interface with fast access to their frequent parameterizations. Gene clustering from partial or complete sequences has been improved with a new method that integrates accessible external information (e.g. GenBank's feature data). Moreover, MEvoLib adjusts the fetching process from NCBI databases to optimize download bandwidth usage. In addition, it has been implemented using parallelization techniques to cope even with large-scale scenarios. MEvoLib is the first Python library designed to facilitate molecular evolution research for both expert and novice users. Its unified interface for each common task comprises several tools with their most used parameterizations. It also includes a method that takes advantage of biological knowledge to improve the gene partitioning of sequence datasets. Additionally, its implementation incorporates parallelization techniques to reduce computational costs when handling very large input datasets.

  12. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far do not match the effort invested. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  13. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  14. Convergent evolution of marine mammals is associated with distinct substitutions in common genes

    Science.gov (United States)

    Zhou, Xuming; Seim, Inge; Gladyshev, Vadim N.

    2015-01-01

    Phenotypic convergence is thought to be driven by parallel substitutions coupled with natural selection at the sequence level. Multiple independent evolutionary transitions of mammals to an aquatic environment offer an opportunity to test this thesis. Here, whole genome alignment of coding sequences identified widespread parallel amino acid substitutions in marine mammals; however, the majority of these changes were not unique to these animals. Conversely, we report that candidate aquatic adaptation genes, identified by signatures of likelihood convergence and/or elevated ratio of nonsynonymous to synonymous nucleotide substitution rate, are characterized by very few parallel substitutions and exhibit distinct sequence changes in each group. Moreover, no significant positive correlation was found between likelihood convergence and positive selection in all three marine lineages. These results suggest that convergence in protein coding genes associated with aquatic lifestyle is mainly characterized by independent substitutions and relaxed negative selection. PMID:26549748

  15. Microstructure and microtexture evolutions of deformed oxide layers on a hot-rolled microalloyed steel

    International Nuclear Information System (INIS)

    Yu, Xianglong; Jiang, Zhengyi; Zhao, Jingwei; Wei, Dongbin; Zhou, Cunlong; Huang, Qingxue

    2015-01-01

    Highlights: • Microtexture development of deformed oxide layers is investigated. • Magnetite shares the {0 0 1} fibre texture with wustite. • Hematite develops the {0 0 0 1} basal fibre parallel to the oxide growth. • Stress relief and an ion-vacancy diffusion mechanism govern the magnetite seam. - Abstract: Electron backscatter diffraction (EBSD) analysis is presented to investigate the microstructure and microtexture evolution of the deformed oxide scale formed on a microalloyed steel during hot rolling and accelerated cooling. Magnetite and wustite in the oxide layers share a strong {0 0 1} and a weak {1 1 0} fibre texture parallel to the oxide growth direction. Trigonal hematite develops the {0 0 0 1} basal fibre parallel to the crystallographic plane {1 1 1} in magnetite. Taylor factor estimates have been made to elucidate the microtexture evolution. The fine-grained magnetite seam adjacent to the substrate is governed by stress relief and an ion-vacancy diffusion mechanism.

  16. Sympatric parallel diversification of major oak clades in the Americas and the origins of Mexican species diversity.

    Science.gov (United States)

    Hipp, Andrew L; Manos, Paul S; González-Rodríguez, Antonio; Hahn, Marlene; Kaproth, Matthew; McVay, John D; Avalos, Susana Valencia; Cavender-Bares, Jeannine

    2018-01-01

    Oaks (Quercus, Fagaceae) are the dominant tree genus of North America in species number and biomass, and Mexico is a global center of oak diversity. Understanding the origins of oak diversity is key to understanding biodiversity of northern temperate forests. A phylogenetic study of biogeography, niche evolution and diversification patterns in Quercus was performed using 300 samples, 146 species. Next-generation sequencing data were generated using the restriction-site associated DNA (RAD-seq) method. A time-calibrated maximum likelihood phylogeny was inferred and analyzed with bioclimatic, soils, and leaf habit data to reconstruct the biogeographic and evolutionary history of the American oaks. Our highly resolved phylogeny demonstrates sympatric parallel diversification in climatic niche, leaf habit, and diversification rates. The two major American oak clades arose in what is now the boreal zone and radiated, in parallel, from eastern North America into Mexico and Central America. Oaks adapted rapidly to niche transitions. The Mexican oaks are particularly numerous, not because Mexico is a center of origin, but because of high rates of lineage diversification associated with high rates of evolution along moisture gradients and between the evergreen and deciduous leaf habits. Sympatric parallel diversification in the oaks has shaped the diversity of North American forests. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  17. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  18. Diversification of AID/APOBEC-like deaminases in metazoa: multiplicity of clades and widespread roles in immunity.

    Science.gov (United States)

    Krishnan, Arunkumar; Iyer, Lakshminarayan M; Holland, Stephen J; Boehm, Thomas; Aravind, L

    2018-04-03

    AID/APOBEC deaminases (AADs) convert cytidine to uridine in single-stranded nucleic acids. They are involved in numerous mutagenic processes, including those underpinning vertebrate innate and adaptive immunity. Using a multipronged sequence analysis strategy, we uncover several AADs across metazoa, dictyosteliida, and algae, including multiple previously unreported vertebrate clades, and versions from urochordates, nematodes, echinoderms, arthropods, lophotrochozoans, cnidarians, and porifera. Evolutionary analysis suggests a fundamental division of AADs early in metazoan evolution into secreted deaminases (SNADs) and classical AADs, followed by diversification into several clades driven by rapid-sequence evolution, gene loss, lineage-specific expansions, and lateral transfer to various algae. Most vertebrate AADs, including AID and APOBECs1-3, diversified in the vertebrates, whereas the APOBEC4-like clade has a deeper origin in metazoa. Positional entropy analysis suggests that several AAD clades are diversifying rapidly, especially in the positions predicted to interact with the nucleic acid target motif, and with potential viral inhibitors. Further, several AADs have evolved neomorphic metal-binding inserts, especially within loops predicted to interact with the target nucleic acid. We also observe polymorphisms, driven by alternative splicing, gene loss, and possibly intergenic recombination between paralogs. We propose that biological conflicts of AADs with viruses and genomic retroelements are drivers of rapid AAD evolution, suggesting a widespread presence of mutagenesis-based immune-defense systems. Deaminases like AID represent versions "institutionalized" from the broader array of AADs pitted in such arms races for mutagenesis of self-DNA, and similar recruitment might have independently occurred elsewhere in metazoa. Copyright © 2018 the Author(s). Published by PNAS.

  19. Modeling SOL evolution during disruptions

    International Nuclear Information System (INIS)

    Rognlien, T.D.; Cohen, R.H.; Crotinger, J.A.

    1996-01-01

    We present the status of our models and transport simulations of the 2-D evolution of the scrape-off layer (SOL) during tokamak disruptions. This evolution is important for several reasons: it determines how the power from the core plasma is distributed on material surfaces, how impurities from those surfaces or from gas injection migrate back to the core region, and what properties the SOL has for carrying halo currents. We simulate this plasma in a time-dependent fashion using the SOL transport code UEDGE. This code models the SOL plasma using fluid equations for plasma density, parallel momentum (along the magnetic field), electron energy, ion energy, and neutral gas density. A multispecies model is used to follow the density of different charge states of impurities. The parallel transport is classical but with kinetic modifications; these are presently treated by flux limits, but we have initiated more sophisticated models giving the correct long mean-free-path limit. The cross-field transport is anomalous, and one of the results of this work is to determine reasonable values to characterize disruptions. Our primary focus is on the initial thermal quench phase, when most of the core energy is lost but the total current is maintained. The impact of edge currents on the MHD equilibrium will be discussed.

  20. Genome evolution in an ancient bacteria-ant symbiosis: parallel gene loss among Blochmannia spanning the origin of the ant tribe Camponotini

    Directory of Open Access Journals (Sweden)

    Laura E. Williams

    2015-04-01

    Full Text Available Stable associations between bacterial endosymbionts and insect hosts provide opportunities to explore genome evolution in the context of established mutualisms and assess the roles of selection and genetic drift across host lineages and habitats. Blochmannia, obligate endosymbionts of ants of the tribe Camponotini, have coevolved with their ant hosts for ∼40 MY. To investigate early events in Blochmannia genome evolution across this ant host tribe, we sequenced Blochmannia from two divergent host lineages, Colobopsis obliquus and Polyrhachis turneri, and compared them with four published genomes from Blochmannia of Camponotus sensu stricto. Reconstructed gene content of the last common ancestor (LCA) of these six Blochmannia genomes is reduced (690 protein coding genes), consistent with rapid gene loss soon after establishment of the symbiosis. Differential gene loss among Blochmannia lineages has affected cellular functions and metabolic pathways, including DNA replication and repair, vitamin biosynthesis and membrane proteins. Blochmannia of P. turneri (i.e., B. turneri) encodes an intact DnaA chromosomal replication initiation protein, demonstrating that loss of dnaA was not essential for establishment of the symbiosis. Based on gene content, B. obliquus and B. turneri are unable to provision hosts with riboflavin. Of the six sequenced Blochmannia, B. obliquus is the earliest diverging lineage (i.e., the sister group of other Blochmannia sampled) and encodes the fewest protein-coding genes and the most pseudogenes. We identified 55 genes involved in parallel gene loss, including glutamine synthetase, which may participate in nitrogen recycling. Pathways for biosynthesis of coenzyme A, terpenoids and riboflavin were lost in multiple lineages, suggesting relaxed selection on the pathway after inactivation of one component. Analysis of Illumina read datasets did not detect evidence of plasmids encoding missing functions, nor the presence of

  1. The evolution of tail weaponization in amniotes.

    Science.gov (United States)

    Arbour, Victoria M; Zanno, Lindsay E

    2018-01-31

    Weaponry, for the purpose of intraspecific combat or predator defence, is one of the most widespread animal adaptations, yet the selective pressures and constraints governing its phenotypic diversity and skeletal regionalization are not well understood. Here, we investigate the evolution of tail weaponry in amniotes, a rare form of weaponry that nonetheless evolved independently among a broad spectrum of life including mammals, turtles and dinosaurs. Using phylogenetic comparative methods, we test for links between morphology, ecology and behaviour in extant amniotes known to use the tail as a weapon, and in extinct taxa bearing osseous tail armaments. We find robust ecological and morphological correlates of both tail lashing behaviour and bony tail weaponry, including large body size, body armour and herbivory, suggesting these life-history parameters factor into the evolution of antipredator behaviours and tail armaments. We suggest that the evolution of tail weaponry is rare because large, armoured herbivores are uncommon in extant terrestrial faunas, as they have been throughout evolutionary history. © 2018 The Author(s).

  2. Multi-objective based on parallel vector evaluated particle swarm optimization for optimal steady-state performance of power systems

    DEFF Research Database (Denmark)

    Vlachogiannis, Ioannis (John); Lee, K Y

    2009-01-01

    In this paper, state-of-the-art extensions of particle swarm optimization (PSO) for solving multi-objective optimization problems are presented. We emphasize, among these, the co-evolution technique of the parallel vector evaluated PSO (VEPSO), analysed and applied to a multi-objective problem...

  3. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.
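
    The PPM constructs themselves are not reproduced here, so the sketch below only illustrates, generically, the two levels of parallelism the abstract describes: a cluster-level decomposition over MPI ranks and a core-level decomposition over a thread pool within each rank. It assumes mpi4py is installed and is not the PPM API.

```python
"""Generic two-level (cluster + many-core) parallel sum; not the PPM abstraction."""
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from mpi4py import MPI  # assumes an MPI installation with mpi4py bindings


def node_local_sum(chunk, n_threads=4):
    """Core-level parallelism: split this rank's chunk across a thread pool."""
    pieces = np.array_split(chunk, n_threads)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return sum(pool.map(np.sum, pieces))


if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    data = np.arange(1_000_000, dtype=np.float64)
    chunk = np.array_split(data, size)[rank]      # cluster-level decomposition
    local = node_local_sum(chunk)                 # node-level (many-core) work
    total = comm.allreduce(local, op=MPI.SUM)     # cluster-level reduction
    if rank == 0:
        print("global sum:", total)
```

    Run with, for example, `mpirun -n 4 python two_level_sum.py` (hypothetical file name); a PPM-style abstraction would hide this explicit two-level split behind unified language constructs.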

  4. Parallel selective pressures drive convergent diversification of phenotypes in pythons and boas.

    Science.gov (United States)

    Esquerré, Damien; Scott Keogh, J

    2016-07-01

    Pythons and boas are globally distributed and distantly related radiations with remarkable phenotypic and ecological diversity. We tested whether pythons, boas and their relatives have evolved convergent phenotypes when they display similar ecology. We collected geometric morphometric data on head shape for 1073 specimens representing over 80% of species. We show that these two groups display strong and widespread convergence when they occupy equivalent ecological niches and that the history of phenotypic evolution strongly matches the history of ecological diversification, suggesting that both processes are strongly coupled. These results are consistent with replicated adaptive radiation in both groups. We argue that strong selective pressures related to habitat-use have driven this convergence. Pythons and boas provide a new model system for the study of macro-evolutionary patterns of morphological and ecological evolution and they do so at a deeper level of divergence and global scale than any well-established adaptive radiation model systems. © 2016 John Wiley & Sons Ltd/CNRS.

  5. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  6. The evolution of teaching.

    Science.gov (United States)

    Fogarty, L; Strimling, P; Laland, K N

    2011-10-01

    Teaching, alongside imitation, is widely thought to underlie the success of humanity by allowing high-fidelity transmission of information, skills, and technology between individuals, facilitating both cumulative knowledge gain and normative culture. Yet, it remains a mystery why teaching should be widespread in human societies but extremely rare in other animals. We explore the evolution of teaching using simple genetic models in which a single tutor transmits adaptive information to a related pupil at a cost. Teaching is expected to evolve where its costs are outweighed by the inclusive fitness benefits that result from the tutor's relatives being more likely to acquire the valuable information. We find that teaching is not favored where the pupil can easily acquire the information on its own, or through copying others, or for difficult to learn traits, where teachers typically do not possess the information to pass on to relatives. This leads to a narrow range of traits for which teaching would be efficacious, which helps to explain the rarity of teaching in nature, its unusual distribution, and its highly specific nature. Further models that allow for cumulative cultural knowledge gain suggest that teaching evolved in humans because cumulative culture renders otherwise difficult-to-acquire valuable information available to teach. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.

  7. Ising ferromagnet: zero-temperature dynamic evolution

    International Nuclear Information System (INIS)

    Oliveira, P M C de; Newman, C M; Sidoravicious, V; Stein, D L

    2006-01-01

    The dynamic evolution at zero temperature of a uniform Ising ferromagnet on a square lattice is followed by Monte Carlo computer simulations. The system always eventually reaches a final, absorbing state, which sometimes coincides with a ground state (all spins parallel), and sometimes does not (parallel stripes of spins up and down). We initiate here the numerical study of 'chaotic time dependence' (CTD) by seeing how much information about the final state is predictable from the randomly generated quenched initial state. CTD was originally proposed to explain how nonequilibrium spin glasses could manifest an equilibrium pure state structure, but in simpler systems such as homogeneous ferromagnets it is closely related to long-term predictability and our results suggest that CTD might indeed occur in the infinite volume limit
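
    A minimal sketch of the dynamics described above, assuming single-spin-flip zero-temperature Glauber updates on a periodic square lattice: energy-lowering flips are accepted, ties are flipped with probability 1/2, and energy-raising flips are rejected. Long runs end in either the all-parallel ground state or a striped absorbing state, depending on the random initial condition and update sequence.

```python
import numpy as np

def zero_temperature_sweep(spins, rng):
    """One Monte Carlo sweep of zero-T Glauber dynamics on a periodic lattice."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nn_sum = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nn_sum        # energy change if spin (i, j) flips
        if dE < 0 or (dE == 0 and rng.random() < 0.5):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
L = 32
spins = rng.choice([-1, 1], size=(L, L))     # randomly generated quenched initial state
for _ in range(500):
    zero_temperature_sweep(spins, rng)
print("magnetization per spin after 500 sweeps:", spins.mean())
```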

  8. The elusive nature of adaptive mitochondrial DNA evolution of an Arctic lineage prone to frequent introgression

    DEFF Research Database (Denmark)

    Melo-Ferreira, Jose; Vilela, Joana; Fonseca, Miguel M.

    2014-01-01

    understood. Hares (Lepus spp.) are privileged models to study the impact of natural selection on mitogenomic evolution because 1) species are adapted to contrasting environments, including arctic, with different metabolic pressures, and 2) mtDNA introgression from arctic into temperate species is widespread...

  9. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  10. Patterns of gene flow and selection across multiple species of Acrocephalus warblers: footprints of parallel selection on the Z chromosome

    Czech Academy of Sciences Publication Activity Database

    Reifová, R.; Majerová, V.; Reif, J.; Ahola, M.; Lindholm, A.; Procházka, Petr

    2016-01-01

    Roč. 16, č. 130 (2016), s. 130 ISSN 1471-2148 Institutional support: RVO:68081766 Keywords : Adaptive radiation * Speciation * Gene flow * Parallel adaptive evolution * Z chromosome * Acrocephalus warblers Subject RIV: EG - Zoology Impact factor: 3.221, year: 2016

  11. Cognitive Function, Origin, and Evolution of Musical Emotions

    Directory of Open Access Journals (Sweden)

    Leonid Perlovsky

    2013-12-01

    Full Text Available Cognitive function of music, its origin, and evolution has been a mystery until recently. Here we discuss a theory of a fundamental function of music in cognition and culture. Music evolved in parallel with language. The evolution of language toward a semantically powerful tool required freeing from uncontrolled emotions. Knowledge evolved fast along with language. This created cognitive dissonances, contradictions among knowledge and instincts, which differentiated consciousness. To sustain evolution of language and culture, these contradictions had to be unified. Music was the mechanism of unification. Differentiated emotions are needed for resolving cognitive dissonances. As knowledge has been accumulated, contradictions multiplied and correspondingly more varied emotions had to evolve. While language differentiated psyche, music unified it. Thus the need for refined musical emotions in the process of cultural evolution is grounded in fundamental mechanisms of cognition. This is why today's human mind and cultures cannot exist without today's music.

  12. Widespread marrow necrosis during pregnancy

    International Nuclear Information System (INIS)

    Knickerbocker, W.J.; Quenville, N.F.

    1982-01-01

    Recently, a 22-year-old Caucasian female was referred to our Hospital two days post-partum. She had been feeling unwell during the last few days of her pregnancy and complained of multiple aches and pains, worst in the abdomen and lower back. Her admission platelet count was severely depressed and a bone biopsy showed extensive marrow necrosis with viable bony trabeculae. There was no evidence of vasculitis, vascular thrombosis, or malignancy. Widespread marrow necrosis in pregnancy followed by recovery, to our knowledge, has not been previously reported. (orig.)

  13. Crossing the mesoscale no-mans land via parallel kinetic Monte Carlo.

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan

    2009-10-01

    The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
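
    For readers unfamiliar with the method, the sketch below shows the core of a rejection-free kinetic Monte Carlo step: an event is selected with probability proportional to its rate and the clock advances by an exponentially distributed waiting time. It is a serial toy with made-up event names and rates, not SPPARKS code; the parallel capability described above additionally distributes the lattice and event selection across processors.

```python
import math
import random

def kmc_run(rates, n_steps, seed=42):
    """Minimal rejection-free kinetic Monte Carlo loop (illustrative only).

    rates : dict of event name -> rate (1/s); held constant here, whereas a
            real lattice model recomputes the affected rates after each event.
    """
    rng = random.Random(seed)
    events = list(rates)
    t, history = 0.0, []
    for _ in range(n_steps):
        total = sum(rates[e] for e in events)
        # choose an event with probability proportional to its rate
        r, acc = rng.random() * total, 0.0
        for e in events:
            acc += rates[e]
            if r <= acc:
                chosen = e
                break
        # advance the clock by an exponentially distributed waiting time
        t += -math.log(1.0 - rng.random()) / total
        history.append((round(t, 4), chosen))
    return history

# toy surface events with assumed rates (per second)
print(kmc_run({"adsorb": 1.0, "desorb": 0.2, "hop": 5.0}, n_steps=5))
```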

  14. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
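
    To make the mapping problem concrete, the sketch below solves a simplified version: assign a chain of m weighted modules to n processors as contiguous blocks so that the bottleneck (maximum) processor load is minimized. It uses a plain O(n·m^2) dynamic program for clarity; it illustrates the problem, not the faster O(nm log m) algorithm the paper describes.

```python
def map_chain(weights, n_procs):
    """Map a chain of modules (per-module weights) onto n_procs processors as
    contiguous blocks, minimizing the maximum (bottleneck) processor load.
    Returns (bottleneck, list of module-index blocks)."""
    m = len(weights)
    prefix = [0.0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    INF = float("inf")
    # best[k][j]: minimal bottleneck assigning the first j modules to k processors
    best = [[INF] * (m + 1) for _ in range(n_procs + 1)]
    cut = [[0] * (m + 1) for _ in range(n_procs + 1)]
    best[0][0] = 0.0
    for k in range(1, n_procs + 1):
        for j in range(1, m + 1):
            for i in range(j):
                cost = max(best[k - 1][i], prefix[j] - prefix[i])
                if cost < best[k][j]:
                    best[k][j], cut[k][j] = cost, i
    # recover the contiguous blocks from the stored cut points
    blocks, j = [], m
    for k in range(n_procs, 0, -1):
        i = cut[k][j]
        blocks.append(list(range(i, j)))
        j = i
    return best[n_procs][m], blocks[::-1]

print(map_chain([4, 2, 7, 1, 3, 5], n_procs=3))   # -> (8.0, [[0, 1], [2, 3], [4, 5]])
```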

  15. WRF simulation of a severe hailstorm over Baramati: a study into the space-time evolution

    Science.gov (United States)

    Murthy, B. S.; Latha, R.; Madhuparna, H.

    2018-04-01

    The space-time evolution of a severe hailstorm that occurred over western India, as revealed by WRF-ARW simulations, is presented. We simulated a specific event centered over Baramati (18.15°N, 74.58°E, 537 m AMSL) on March 9, 2014. A physical mechanism, proposed as a conceptual model, highlights the role of multiple convective cells organizing through outflows into a cold-frontal-type flow that, in the presence of a low over the northern Arabian Sea, propagates from NW to SE, triggering deep convection and precipitation. A `U'-shaped cold pool encircled by a converging boundary forms to the north of Baramati due to precipitation behind the moisture convergence line, with strong updrafts (~15 m s-1) leading to convective clouds extending up to 8 km in a narrow region of 30 km. The outflows from the convective clouds merge with the opposing southerly or southwesterly winds from the Arabian Sea and southerly or southeasterly winds from the Bay of Bengal, resulting in moisture convergence (maximum 80 × 10-3 g kg-1 s-1). The vertical profile of the area-averaged moisture convergence over the cold pool shows strong convergence above 850 hPa and divergence near the surface, indicating elevated convection. Radar reflectivity (50-60 dBZ) and a maximum in the vertical component of vorticity (~0.01-0.14 s-1) are observed along the convergence zone. Stratiform clouds ahead of the squall line, wind flow parallel to the squall line at 850 hPa, and nearly perpendicular flow at higher levels, as evidenced by relatively low and widespread reflectivity, suggest that the organizational mode of the squall line may be categorized as `Mixed Mode', where the northern part resembles a parallel-stratiform mode while the southern part resembles a leading-stratiform mode. Simulated rainfall (grid scale 27 km) leads the observed rainfall by 1 h, while its magnitude is about twice the observed rainfall (grid scale 100 km) derived from Kalpana-1. Thus, this study indicates that under synoptically favorable conditions

  16. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C^3P), a five year project that focused on answering the question: ``Can parallel computers be used to do large-scale scientific computations?'' As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  17. Vlasov modelling of parallel transport in a tokamak scrape-off layer

    International Nuclear Information System (INIS)

    Manfredi, G; Hirstoaga, S; Devaux, S

    2011-01-01

    A one-dimensional Vlasov-Poisson model is used to describe the parallel transport in a tokamak scrape-off layer. Thanks to a recently developed 'asymptotic-preserving' numerical scheme, it is possible to lift numerical constraints on the time step and grid spacing, which are no longer limited by, respectively, the electron plasma period and Debye length. The Vlasov approach provides a good velocity-space resolution even in regions of low density. The model is applied to the study of parallel transport during edge-localized modes, with particular emphasis on the particles and energy fluxes on the divertor plates. The numerical results are compared with analytical estimates based on a free-streaming model, with good general agreement. An interesting feature is the observation of an early electron energy flux, due to suprathermal electrons escaping the ions' attraction. In contrast, the long-time evolution is essentially quasi-neutral and dominated by the ion dynamics.

  18. Vlasov modelling of parallel transport in a tokamak scrape-off layer

    Energy Technology Data Exchange (ETDEWEB)

    Manfredi, G [Institut de Physique et Chimie des Materiaux, CNRS and Universite de Strasbourg, BP 43, F-67034 Strasbourg (France); Hirstoaga, S [INRIA Nancy Grand-Est and Institut de Recherche en Mathematiques Avancees, 7 rue Rene Descartes, F-67084 Strasbourg (France); Devaux, S, E-mail: Giovanni.Manfredi@ipcms.u-strasbg.f, E-mail: hirstoaga@math.unistra.f, E-mail: Stephane.Devaux@ccfe.ac.u [JET-EFDA, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom)

    2011-01-15

    A one-dimensional Vlasov-Poisson model is used to describe the parallel transport in a tokamak scrape-off layer. Thanks to a recently developed 'asymptotic-preserving' numerical scheme, it is possible to lift numerical constraints on the time step and grid spacing, which are no longer limited by, respectively, the electron plasma period and Debye length. The Vlasov approach provides a good velocity-space resolution even in regions of low density. The model is applied to the study of parallel transport during edge-localized modes, with particular emphasis on the particles and energy fluxes on the divertor plates. The numerical results are compared with analytical estimates based on a free-streaming model, with good general agreement. An interesting feature is the observation of an early electron energy flux, due to suprathermal electrons escaping the ions' attraction. In contrast, the long-time evolution is essentially quasi-neutral and dominated by the ion dynamics.

  19. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
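
    The sketch below illustrates the general rsync-like idea in the abstract: hash fixed-size blocks of a node's checkpoint data, compare them against the checksums of a previously stored template, and keep only the differing blocks in compressed form. Block size, hash choice, and function names are assumptions for illustration; this is not the patented method or its broadcast mechanism.

```python
import hashlib
import zlib

BLOCK = 4096  # assumed block size in bytes

def split_blocks(data, size=BLOCK):
    return [data[i:i + size] for i in range(0, len(data), size)]

def delta_checkpoint(template: bytes, current: bytes):
    """Return (block_index, compressed_block) pairs that differ from the template."""
    template_sums = [hashlib.sha1(b).digest() for b in split_blocks(template)]
    delta = []
    for idx, blk in enumerate(split_blocks(current)):
        changed = (idx >= len(template_sums)
                   or hashlib.sha1(blk).digest() != template_sums[idx])
        if changed:
            delta.append((idx, zlib.compress(blk)))   # non-lossy compression
    return delta

# toy example: a 64 KiB checkpoint with one small local change
template = bytes(64 * 1024)
current = bytearray(template)
current[8000:8004] = b"\x01\x02\x03\x04"
print("blocks to transmit:", [i for i, _ in delta_checkpoint(template, bytes(current))])
```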

  20. Phenomenon in the Evolution of Voles (Mammalia, Rodentia, Arvicolidae

    Directory of Open Access Journals (Sweden)

    Rekovets L. I.

    2017-04-01

    Full Text Available This paper presents analytical results of the study of adaptatiogenesis within the family Arvicolidae (Mammalia, Rodentia) based on morphological changes of the most functional characters of their masticatory apparatus — the dental system — through time. The main directions of morphological differentiation in the parallel evolution of the arvicolid tooth type within the Cricetidae and Arvicolidae during the late Miocene and Pliocene were identified and substantiated. It is shown that such a unique morphological structure as the arvicolid tooth type has provided a relatively high rate of evolution of voles and a wide range of adaptive radiation, as well as determined their taxonomic and ecological diversity. The optimality of the current state of this group and an evaluation of the evolutionary prospects of Arvicolidae are presented and substantiated here as a phenomenon in their evolution.

  1. Vicariance and Oceanic Barriers Drive Contemporary Genetic Structure of Widespread Mangrove Species Sonneratia alba J. Sm in the Indo-West Pacific

    Directory of Open Access Journals (Sweden)

    Alison K. S. Wee

    2017-12-01

    Full Text Available Patterns of genetic structure are essential for a comprehensive understanding of the evolution and biogeography of a species. Here, we investigated the genetic patterns of one of the most widespread and abundant mangrove species in the Indo-West Pacific, Sonneratia alba J. Sm., in order to gain insights into the ecological and evolutionary drivers of genetic structure in mangroves. We employed 11 nuclear microsatellite loci and two chloroplast regions to genotype 25 S. alba populations. Our objectives were to (1) assess the level of genetic diversity and its geographic distribution; and (2) determine the genetic structure of the populations. Our results revealed significant genetic differentiation among populations. We detected a major genetic break between Indo-Malesia and Australasia, and further population subdivision within each oceanic region in these two major clusters. The phylogeographic patterns indicated a strong influence of vicariance, oceanic barriers and geographic distance on genetic structure. In addition, we found low genetic diversity and high genetic drift at the range edge. This study advances the scope of mangrove biogeography by demonstrating a unique scenario whereby a widespread species has limited dispersal and high genetic divergence among populations.

  2. Studies of parallel algorithms for the solution of a Fokker-Planck equation

    International Nuclear Information System (INIS)

    Deck, D.; Samba, G.

    1995-11-01

    The study of laser-created plasmas often requires the use of a kinetic model rather than a hydrodynamic one. This model change occurs, for example, in the hot spot formation in an ICF experiment or during the relaxation of colliding plasmas. When the gradient scalelengths or the size of a given system are not small compared to the characteristic mean-free-path, we have to deal with non-equilibrium situations, which can be described by the distribution functions of every species in the system. We present here a numerical method in plane or spherical 1-D geometry, for the solution of a Fokker-Planck equation that describes the evolution of such functions in phase space. The size and the time scale of kinetic simulations require the use of Massively Parallel Computers (MPP). We have adopted a message-passing strategy using Parallel Virtual Machine (PVM).

  3. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  4. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  5. Strain partitioning in the footwall of the Somiedo Nappe: structural evolution of the Narcea Tectonic Window, NW Spain

    Science.gov (United States)

    Gutiérrez-Alonso, Gabriel

    1996-10-01

    The Somiedo Nappe is a major thrust unit in the Cantabrian Zone, the external foreland fold and thrust belt of the North Iberian Variscan orogen. Exposed at the Narcea Tectonic Window are Precambrian rocks below the basal decollement of the Somiedo Nappe, which exhibit a different deformation style than the overlying Paleozoic rocks above the basal decollement. During Variscan deformation, folding and widespread subhorizontal, bedding-parallel decollements were produced in the hanging wall within the Paleozoic rocks. Vertical folding, with related axial-planar cleavage at a high angle to the decollement planes, developed simultaneously in the upper Proterozoic Narcea Slates of the footwall, below the detachment. The relative magnitude of finite strain, measured in the footwall rocks, diminishes towards the foreland. These observations indicate that (1) significant deformation may occur in the footwall of foreland fold and thrust belts, (2) the shortening mechanism in the footwall may be different from that of the hanging wall, and (3) in this particular case, the partitioning of the deformation implies the existence of a deeper, blind decollement surface contemporaneous with the first stages of the foreland development, that does not crop out in the region. This implies a significant shortening in the footwall, which must be taken into account when restoration and balancing of cross-sections is attempted. A sequential diagram of the evolution of the Narcea Tectonic Window with a minimum shortening of 85 km is proposed, explaining the complete Variscan evolution of the foreland to hinterland transition in the North Iberian Variscan orogen.

  6. Interdisciplinary rehabilitation of patients with chronic widespread pain:

    DEFF Research Database (Denmark)

    Amris, Kirstine; Wæhrens, Eva E; Christensen, Robin

    2014-01-01

    This study examined the functional and psychological outcomes of a 2-week, group-based multicomponent treatment course that targeted patients with chronic widespread pain. Patients (192 included in the intention-to-treat population), all fulfilling the 1990 American College of Rheumatology...

  7. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  8. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  9. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
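
    As a small-scale illustration of the idea (not the hypercube implementation or its scattered decomposition), the sketch below sieves independent segments in parallel worker processes after computing the base primes up to the square root of N once and sharing them with every worker.

```python
from multiprocessing import Pool
import math

def sieve_small(limit):
    """Ordinary sieve up to sqrt(N); these base primes are sent to every worker."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(limit) + 1):
        if flags[p]:
            flags[p * p::p] = bytearray(len(flags[p * p::p]))
    return [i for i, f in enumerate(flags) if f]

def sieve_segment(args):
    """Sieve one segment [lo, hi) using the shared base primes."""
    lo, hi, base_primes = args
    flags = bytearray([1]) * (hi - lo)
    for p in base_primes:
        start = max(p * p, ((lo + p - 1) // p) * p)   # first multiple of p in range
        flags[start - lo::p] = bytearray(len(flags[start - lo::p]))
    return [lo + i for i, f in enumerate(flags) if f]

def parallel_sieve(n, n_workers=4, segment=100_000):
    base = sieve_small(math.isqrt(n))
    jobs = [(lo, min(lo + segment, n + 1), base) for lo in range(2, n + 1, segment)]
    with Pool(n_workers) as pool:
        return [p for seg in pool.map(sieve_segment, jobs) for p in seg]

if __name__ == "__main__":
    primes = parallel_sieve(1_000_000)
    print(len(primes), "primes up to one million")   # expect 78498
```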

  10. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  11. Analysis of ribosomal protein gene structures: implications for intron evolution.

    Directory of Open Access Journals (Sweden)

    2006-03-01

    Full Text Available Many spliceosomal introns exist in the eukaryotic nuclear genome. Despite much research, the evolution of spliceosomal introns remains poorly understood. In this paper, we tried to gain insights into intron evolution from a novel perspective by comparing the gene structures of cytoplasmic ribosomal proteins (CRPs and mitochondrial ribosomal proteins (MRPs, which are held to be of archaeal and bacterial origin, respectively. We analyzed 25 homologous pairs of CRP and MRP genes that together had a total of 527 intron positions. We found that all 12 of the intron positions shared by CRP and MRP genes resulted from parallel intron gains and none could be considered to be "conserved," i.e., descendants of the same ancestor. This was supported further by the high frequency of proto-splice sites at these shared positions; proto-splice sites are proposed to be sites for intron insertion. Although we could not definitively disprove that spliceosomal introns were already present in the last universal common ancestor, our results lend more support to the idea that introns were gained late. At least, our results show that MRP genes were intronless at the time of endosymbiosis. The parallel intron gains between CRP and MRP genes accounted for 2.3% of total intron positions, which should provide a reliable estimate for future inferences of intron evolution.

  12. ColDICE: A parallel Vlasov–Poisson solver using moving adaptive simplicial tessellation

    International Nuclear Information System (INIS)

    Sousbie, Thierry; Colombi, Stéphane

    2016-01-01

    Resolving numerically Vlasov–Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm consisting in representing the phase-space sheet with a conforming, self-adaptive simplicial tessellation of which the vertices follow the Lagrangian equations of motion. The algorithm is implemented both in six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local representation of the phase-space sheet at second order relying on additional tracers created when needed at runtime. In order to preserve in the best way the Hamiltonian nature of the system, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle in cells codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli [65–67] generalised to linear order. As preliminary tests of the code, we study in four dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a “warm” dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.
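
    The fast Fourier Poisson solve mentioned above is simple enough to sketch. The toy below solves the gravitational Poisson equation on a periodic 2-D grid for brevity (the code described in the abstract works on a regular 3-D grid); it is an illustration of the method, not ColDICE itself.

```python
import numpy as np

def poisson_fft_2d(density, box_size=1.0, G=1.0):
    """Solve nabla^2 phi = 4*pi*G*(rho - <rho>) on a periodic grid via FFT."""
    n = density.shape[0]
    delta = density - density.mean()            # periodic box: remove the mean
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                              # avoid division by zero at k = 0
    phi_hat = -4.0 * np.pi * G * np.fft.fft2(delta) / k2
    phi_hat[0, 0] = 0.0                         # fix the potential to zero mean
    return np.real(np.fft.ifft2(phi_hat))

rho = np.zeros((64, 64))
rho[32, 32] = 1.0                               # a point over-density
phi = poisson_fft_2d(rho)
print("potential minimum at:", np.unravel_index(phi.argmin(), phi.shape))
```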

  13. ColDICE: A parallel Vlasov–Poisson solver using moving adaptive simplicial tessellation

    Energy Technology Data Exchange (ETDEWEB)

    Sousbie, Thierry, E-mail: tsousbie@gmail.com [Institut d' Astrophysique de Paris, CNRS UMR 7095 and UPMC, 98bis, bd Arago, F-75014 Paris (France); Department of Physics, The University of Tokyo, Tokyo 113-0033 (Japan); Research Center for the Early Universe, School of Science, The University of Tokyo, Tokyo 113-0033 (Japan); Colombi, Stéphane, E-mail: colombi@iap.fr [Institut d' Astrophysique de Paris, CNRS UMR 7095 and UPMC, 98bis, bd Arago, F-75014 Paris (France); Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan)

    2016-09-15

    Resolving numerically Vlasov–Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm consisting in representing the phase-space sheet with a conforming, self-adaptive simplicial tessellation of which the vertices follow the Lagrangian equations of motion. The algorithm is implemented both in six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local representation of the phase-space sheet at second order relying on additional tracers created when needed at runtime. In order to preserve in the best way the Hamiltonian nature of the system, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle in cells codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli [65–67] generalised to linear order. As preliminary tests of the code, we study in four dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a “warm” dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.

  14. Origin and evolution of chromosomal sperm proteins.

    Science.gov (United States)

    Eirín-López, José M; Ausió, Juan

    2009-10-01

    In the eukaryotic cell, DNA compaction is achieved through its interaction with histones, constituting a nucleoprotein complex called chromatin. During metazoan evolution, the different structural and functional constraints imposed on the somatic and germinal cell lines led to a unique process of specialization of the sperm nuclear basic proteins (SNBPs) associated with chromatin in male germ cells. SNBPs encompass a heterogeneous group of proteins which, since their discovery in the nineteenth century, have been studied extensively in different organisms. However, the origin and controversial mechanisms driving the evolution of this group of proteins has only recently started to be understood. Here, we analyze in detail the histone hypothesis for the vertical parallel evolution of SNBPs, involving a "vertical" transition from a histone to a protamine-like and finally protamine types (H --> PL --> P), the last one of which is present in the sperm of organisms at the uppermost tips of the phylogenetic tree. In particular, the common ancestry shared by the protamine-like (PL)- and protamine (P)-types with histone H1 is discussed within the context of the diverse structural and functional constraints acting upon these proteins during bilaterian evolution.

  15. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-11-12

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint composed of a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying in dependence upon the call site statistics a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application.

  16. Evolution of symmetric reconnection layer in the presence of parallel shear flow

    Energy Technology Data Exchange (ETDEWEB)

    Lu Haoyu [Space Science Institute, School of Astronautics, Beihang University, Beijing 100191 (China); Sate Key Laboratory of Space Weather, Chinese Academy of Sciences, Beijing 100190 (China); Cao Jinbin [Space Science Institute, School of Astronautics, Beihang University, Beijing 100191 (China)

    2011-07-15

    The development of the structure of symmetric reconnection layer in the presence of a shear flow parallel to the antiparallel magnetic field component is studied by using a set of one-dimensional (1D) magnetohydrodynamic (MHD) equations. The Riemann problem is simulated through a second-order conservative TVD (total variation diminishing) scheme, in conjunction with Roe's averages for the Riemann problem. The simulation results indicate that besides the MHD shocks and expansion waves, there exist some new small-scale structures in the reconnection layer. For the case of zero initial guide magnetic field (i.e., B_y0 = 0), a pair of intermediate shock and slow shock (SS) is formed in the presence of the parallel shear flow. The critical velocity of initial shear flow V_zc is just the Alfven velocity in the inflow region. As V_z∞ increases to the value larger than V_zc, a new slow expansion wave appears in the position of SS in the case V_z∞ < V_zc, and one of the current densities drops to zero. As plasma β increases, the out-flow region is widened. For B_y0 ≠ 0, a pair of SSs and an additional pair of time-dependent intermediate shocks (TDISs) are found to be present. Similar to the case of B_y0 = 0, there exists a critical velocity of initial shear flow V_zc. The value of V_zc is, however, smaller than the Alfven velocity of the inflow region. As plasma β increases, the velocities of SS and TDIS increase, and the out-flow region is widened. However, the velocity of downstream SS increases even faster, making the distance between SS and TDIS smaller. Consequently, the interaction between SS and TDIS in the case of high plasma β influences the property of direction rotation of magnetic field across TDIS. Thereby, a wedge in the hodogram of tangential magnetic field comes into being. When β → ∞, TDISs disappear and the guide magnetic field becomes constant.

  17. Intra-arterial cis-diamminedichloroplatinum infusion treatment for widespread hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Park, Sung Il; Yang, Hee Chul; Lee, Do Yon; Shim, Yong Woon; Kim, Sang Heum; Kim, Myeong Jin; Lee, Jong Tae; Yoo, Hyung Sik

    1997-01-01

    The purpose of this study is to evaluate the therapeutic efficacy of intra-arterial infusion of Cis-diamminedichloroplatinum (C-DDP) for the treatment of hepatocellular carcinomas with widespread involvement. We retrospectively analyzed 22 patients who between July 1994 and June 1996 had undergone intra-arterial c-DDP infusion therapy for the treatment of hepatocellular carcinomas with widespread involvement. The hepatomas involved both lobes in ten, portal venous obstructions in fourteen, arterio-portal shunts in nine, and arterio-venous shunts in two. Proper hepatic artery was selected for infusion of 100 mg/BSA of C-DDP. The same procedure was repeated every 3 to 4 weeks, and the total number of infusions was 65. On the basis of WHO criteria, response was classified as complete remission, partial remission, stable, or progression of the disease. Six-month and one-year survival rates were estimated, and adverse reactions were evaluated. Although the response rate is not high, intra-arterial C-DDP infusion therapy can be used as an alternative treatment for hepatocellular carcinomas with widespread involvement; adverse reactions are tolerable. (author). 16 refs., 3 figs

  18. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of ... in the optimal O(psort(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and psort(N) is the parallel I/O complexity of sorting N elements using P processors....

  19. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collections of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  20. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
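
    As an example of one of the patterns named above, the sketch below implements a Blelloch-style exclusive prefix sum. It is written serially, but each inner slice operation touches independent element pairs, which is exactly the work a parallel implementation would distribute across threads or vector lanes. The power-of-two length restriction is a simplification for illustration.

```python
import numpy as np

def exclusive_scan(a):
    """Blelloch-style exclusive prefix sum (up-sweep then down-sweep)."""
    n = len(a)
    assert n and (n & (n - 1)) == 0, "length must be a power of two in this sketch"
    x = np.array(a, dtype=np.int64)
    # up-sweep (reduce) phase: each slice update is a set of independent pairs
    step = 1
    while step < n:
        x[2 * step - 1::2 * step] += x[step - 1::2 * step]
        step *= 2
    # down-sweep phase: propagate partial sums back down the implicit tree
    x[-1] = 0
    step = n // 2
    while step >= 1:
        right = x[2 * step - 1::2 * step].copy()
        x[2 * step - 1::2 * step] += x[step - 1::2 * step]
        x[step - 1::2 * step] = right
        step //= 2
    return x

print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))   # [ 0  3  4 11 11 15 16 22]
```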

  1. Evolution Is Linear: Debunking Life's Little Joke.

    Science.gov (United States)

    Jenner, Ronald A

    2018-01-01

    Linear depictions of the evolutionary process are ubiquitous in popular culture, but linear evolutionary imagery is strongly rejected by scientists who argue that evolution branches. This point is frequently illustrated by saying that we didn't evolve from monkeys, but that we are related to them as collateral relatives. Yet, we did evolve from monkeys, but our monkey ancestors are extinct, not extant. Influential voices, such as the late Stephen Jay Gould, have misled audiences for decades by falsely portraying the linear and branching aspects of evolution to be in conflict, and by failing to distinguish between the legitimate linearity of evolutionary descent, and the branching relationships among collateral relatives that result when lineages of ancestors diverge. The purpose of this article is to correct the widespread misplaced rejection of linear evolutionary imagery, and to re-emphasize the basic truth that the evolutionary process is fundamentally linear. © 2017 WILEY Periodicals, Inc.

  2. Evolution of DNA Methylation across Insects.

    Science.gov (United States)

    Bewick, Adam J; Vogel, Kevin J; Moore, Allen J; Schmitz, Robert J

    2017-03-01

    DNA methylation contributes to gene and transcriptional regulation in eukaryotes, and therefore has been hypothesized to facilitate the evolution of plastic traits such as sociality in insects. However, DNA methylation is sparsely studied in insects. Therefore, we documented patterns of DNA methylation across a wide diversity of insects. We predicted that underlying enzymatic machinery is concordant with patterns of DNA methylation. Finally, given the suggestion that DNA methylation facilitated social evolution in Hymenoptera, we tested the hypothesis that the DNA methylation system will be associated with presence/absence of sociality among other insect orders. We found DNA methylation to be widespread, detected in all orders examined except Diptera (flies). Whole genome bisulfite sequencing showed that orders differed in levels of DNA methylation. Hymenopteran (ants, bees, wasps and sawflies) had some of the lowest levels, including several potential losses. Blattodea (cockroaches and termites) show all possible patterns, including a potential loss of DNA methylation in a eusocial species whereas solitary species had the highest levels. Species with DNA methylation do not always possess the typical enzymatic machinery. We identified a gene duplication event in the maintenance DNA methyltransferase 1 (DNMT1) that is shared by some Hymenoptera, and paralogs have experienced divergent, nonneutral evolution. This diversity and nonneutral evolution of underlying machinery suggests alternative DNA methylation pathways may exist. Phylogenetically corrected comparisons revealed no evidence that supports evolutionary association between sociality and DNA methylation. Future functional studies will be required to advance our understanding of DNA methylation in insects. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  3. Widespread structural brain changes in OCD: a systematic review of voxel-based morphometry studies.

    Science.gov (United States)

    Piras, Federica; Piras, Fabrizio; Chiapponi, Chiara; Girardi, Paolo; Caltagirone, Carlo; Spalletta, Gianfranco

    2015-01-01

    . Morphometric changes in both "affective" and "executive" parallel the disease clinical course, being at the same time responsible for variation in symptom severity. Thus, OCD mechanisms involve a more widespread network of cerebral dysfunctions than previously thought, which may explain the heterogeneity in clinical manifestations and symptom severity. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which are referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  5. From Massively Parallel Algorithms and Fluctuating Time Horizons to Nonequilibrium Surface Growth

    International Nuclear Information System (INIS)

    Korniss, G.; Toroczkai, Z.; Novotny, M. A.; Rikvold, P. A.

    2000-01-01

    We study the asymptotic scaling properties of a massively parallel algorithm for discrete-event simulations where the discrete events are Poisson arrivals. The evolution of the simulated time horizon is analogous to a nonequilibrium surface. Monte Carlo simulations and a coarse-grained approximation indicate that the macroscopic landscape in the steady state is governed by the Edwards-Wilkinson Hamiltonian. Since the efficiency of the algorithm corresponds to the density of local minima in the associated surface, our results imply that the algorithm is asymptotically scalable. (c) 2000 The American Physical Society
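
    The update rule studied above is easy to mimic: in the toy below, each processing element advances its local simulated time by an exponentially distributed (Poisson-arrival) increment only when it is a local minimum of the one-dimensional time horizon, and the utilization of the algorithm is the average fraction of sites that advance per step. The lattice size, step count, and the tie-breaking convention are illustrative assumptions.

```python
import numpy as np

def simulate_horizon(n_pe=10_000, n_steps=2_000, seed=1):
    """Toy conservative parallel discrete-event update rule on a 1-D ring:
    a processing element advances its local virtual time by an exponential
    (Poisson-arrival) increment only when it is a local minimum of the
    simulated time horizon."""
    rng = np.random.default_rng(seed)
    tau = np.zeros(n_pe)
    utilization = []
    for _ in range(n_steps):
        left, right = np.roll(tau, 1), np.roll(tau, -1)
        active = (tau <= left) & (tau <= right)            # local minima
        tau[active] += rng.exponential(1.0, size=int(active.sum()))
        utilization.append(active.mean())
    return float(np.mean(utilization[n_steps // 2:]))      # discard the transient

# for this 1-D ring the utilization settles to a nonzero constant (about 0.25),
# which is why the algorithm is asymptotically scalable
print("steady-state utilization:", round(simulate_horizon(), 3))
```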

  6. isoenzyme analysis of five endemic and one widespread kniphofia ...

    African Journals Online (AJOL)

    ISOENZYME ANALYSIS OF FIVE ENDEMIC AND ONE WIDESPREAD ... plants. The overall mean inbreeding coefficient (F) was positive, indicating a slight deficiency in the number of ... populations, indicates rather recent speciation.

  7. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
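
    To make the kind of routine concrete, the sketch below is a very small CPU threshold-and-peak spike detector in NumPy with a robust noise estimate and a refractory period. It is not the EC-PC detector or the NPE API; a GPU version would map the same element-wise comparisons onto CUDA kernels across many channels.

```python
import numpy as np

def detect_spikes(signal, fs, thresh_sd=5.0, refractory_ms=1.0):
    """Tiny threshold-and-peak spike detector (illustrative; not EC-PC).

    A sample is a candidate if it exceeds thresh_sd robust standard deviations
    and is a local maximum; detections closer than the refractory period are merged.
    """
    sigma = np.median(np.abs(signal)) / 0.6745        # robust noise estimate
    thr = thresh_sd * sigma
    candidates = np.flatnonzero((signal[1:-1] > thr)
                                & (signal[1:-1] >= signal[:-2])
                                & (signal[1:-1] > signal[2:])) + 1
    keep, last = [], -np.inf
    min_gap = int(refractory_ms * 1e-3 * fs)
    for p in candidates:
        if p - last >= min_gap:
            keep.append(int(p))
            last = p
    return np.array(keep)

# toy trace: Gaussian noise plus three injected spikes at known positions
fs = 30_000
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, fs)
for t in (5_000, 15_000, 25_000):
    x[t] += 12.0
print(detect_spikes(x, fs))      # indices of the three injected spikes
```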

  8. Two-fluid and parallel compressibility effects in tokamak plasmas

    International Nuclear Information System (INIS)

    Sugiyama, L.E.; Park, W.

    1998-01-01

    The MHD, or single fluid, model for a plasma has long been known to provide a surprisingly good description of much of the observed nonlinear dynamics of confined plasmas, considering its simple nature compared to the complexity of the real system. On the other hand, some of the supposed agreement arises from the lack of the detailed measurements that are needed to distinguish MHD from more sophisticated models that incorporate slower time scale processes. At present, a number of factors combine to make models beyond MHD of practical interest. Computational considerations still favor fluid rather than particle models for description of the full plasma, and suggest an approach that starts from a set of fluid-like equations that extends MHD to slower time scales and more accurate parallel dynamics. This paper summarizes a set of two-fluid equations for toroidal (tokamak) geometry that has been developed and tested as the MH3D-T code [1] and some results from the model. The electrons and ions are described as separate fluids. The code and its original MHD version, MH3D [2], are the first numerical, initial value models in toroidal geometry that include the full 3D (fluid) compressibility and electromagnetic effects. Previous nonlinear MHD codes for toroidal geometry have, in practice, neglected the plasma density evolution, on the grounds that MHD plasmas are only weakly compressible and that the background density variation is weaker than the temperature variation. Analytically, the common use of toroidal plasma models based on aspect ratio expansion, such as reduced MHD, has reinforced this impression, since this ordering reduces plasma compressibility effects. For two-fluid plasmas, the density evolution cannot be neglected in principle, since it provides the basic driving energy for the diamagnetic drifts of the electrons and ions perpendicular to the magnetic field. It also strongly influences the parallel dynamics, in combination with the parallel thermal

  9. A possibility of parallel and anti-parallel diffraction measurements on ...

    Indian Academy of Sciences (India)

    However, a bent perfect crystal (BPC) monochromator at monochromatic focusing condition can provide a quite flat and equal resolution property at both parallel and anti-parallel positions and thus one can have a chance to use both sides for the diffraction experiment. From the data of the FWHM and the / measured ...

  10. Thinking critically about the occurrence of widespread participation in poor nursing care.

    Science.gov (United States)

    Roberts, Marc; Ion, Robin

    2015-04-01

    A discussion of how Arendt's work can be productively re-contextualized to provide a critical analysis of the occurrence of widespread participation in poor nursing care and what the implications of this are for the providers of nursing education. While the recent participation of nurses in healthcare failings, such as that detailed in the Francis report, has been universally condemned, there has been an absence of critical analyses in the literature that attempt to understand the occurrence of such widespread participation in poor nursing care. This is a significant omission in so far as such analyses will form an integral part of the strategy to limit the occurrence of such widespread participation of nurses in future healthcare failings. Discussion paper. Arendt's 'Eichmann in Jerusalem: A Report on the Banality of Evil' and 'Thinking and Moral Considerations: A Lecture'. In addition, a literature search was conducted and articles published in English relating to the terms care, compassion, ethics, judgement and thinking between 2004 and 2014 were included. It is anticipated that this discussion will stimulate further critical debate about the role of Arendt's work for an understanding of the occurrence of poor nursing care, and encourage additional detailed analyses of the widespread participation of nurses in healthcare failings more generally. This article provides a challenging analysis of the widespread participation of nurses in poor care and discusses the opportunities confronting the providers of nursing education in limiting future healthcare failings. © 2014 John Wiley & Sons Ltd.

  11. Ultraviolet vision may be widespread in bats

    Science.gov (United States)

    Gorresen, P. Marcos; Cryan, Paul; Dalton, David C.; Wolf, Sandy; Bonaccorso, Frank

    2015-01-01

    Insectivorous bats are well known for their abilities to find and pursue flying insect prey at close range using echolocation, but they also rely heavily on vision. For example, at night bats use vision to orient across landscapes, avoid large obstacles, and locate roosts. Although lacking sharp visual acuity, the eyes of bats evolved to function at very low levels of illumination. Recent evidence based on genetics, immunohistochemistry, and laboratory behavioral trials indicated that many bats can see ultraviolet light (UV), at least at illumination levels similar to or brighter than those before twilight. Despite this growing evidence for potentially widespread UV vision in bats, the prevalence of UV vision among bats remains unknown and has not been studied outside of the laboratory. We used a Y-maze to test whether wild-caught bats could see reflected UV light and whether such UV vision functions at the dim lighting conditions typically experienced by night-flying bats. Seven insectivorous species of bats, representing five genera and three families, showed a statistically significant ‘escape-toward-the-light’ behavior when placed in the Y-maze. Our results provide compelling evidence of widespread dim-light UV vision in bats.

  12. Historical Evolution of Spatial Abilities

    Directory of Open Access Journals (Sweden)

    A. Ardila

    1993-01-01

    Full Text Available Historical evolution and cross-cultural differences in spatial abilities are analyzed. Spatial abilities have been found to be significantly associated with the complexity of geographical conditions and survival demands. Although impaired spatial cognition is found in cases of, exclusively or predominantly, right hemisphere pathology, it is proposed that this asymmetry may depend on the degree of training in spatial abilities. It is further proposed that spatial cognition might have evolved in a parallel way with cultural evolution and environmental demands. Contemporary city humans might be using spatial abilities in some new, conceptual tasks that did not exist in prehistoric times: mathematics, reading, writing, mechanics, music, etc. Cross-cultural analysis of spatial abilities in different human groups, normalization of neuropsychological testing instruments, and clinical observations of spatial ability disturbances in people with different cultural backgrounds and various spatial requirements, are required to construct a neuropsychological theory of brain organization of spatial cognition.

  13. Parallel Evolution under Chemotherapy Pressure in 29 Breast Cancer Cell Lines Results in Dissimilar Mechanisms of Resistance

    DEFF Research Database (Denmark)

    Tegze, Balint; Szallasi, Zoltan Imre; Haltrich, Iren

    2012-01-01

    Background: Developing chemotherapy resistant cell lines can help to identify markers of resistance. Instead of using a panel of highly heterogeneous cell lines, we assumed that truly robust and convergent pattern of resistance can be identified in multiple parallel engineered derivatives of only...

  14. Parallel implementation of the PHOENIX generalized stellar atmosphere program. II. Wavelength parallelization

    International Nuclear Information System (INIS)

    Baron, E.; Hauschildt, Peter H.

    1998-01-01

    We describe an important addition to the parallel implementation of our generalized nonlocal thermodynamic equilibrium (NLTE) stellar atmosphere and radiative transfer computer program PHOENIX. In a previous paper in this series we described data and task parallel algorithms we have developed for radiative transfer, spectral line opacity, and NLTE opacity and rate calculations. These algorithms divided the work spatially or by spectral lines, that is, distributing the radial zones, individual spectral lines, or characteristic rays among different processors and employ, in addition, task parallelism for logically independent functions (such as atomic and molecular line opacities). For finite, monotonic velocity fields, the radiative transfer equation is an initial value problem in wavelength, and hence each wavelength point depends upon the previous one. However, for sophisticated NLTE models of both static and moving atmospheres needed to accurately describe, e.g., novae and supernovae, the number of wavelength points is very large (200,000 - 300,000) and hence parallelization over wavelength can lead both to considerable speedup in calculation time and the ability to make use of the aggregate memory available on massively parallel supercomputers. Here, we describe an implementation of a pipelined design for the wavelength parallelization of PHOENIX, where the necessary data from the processor working on a previous wavelength point is sent to the processor working on the succeeding wavelength point as soon as it is known. Our implementation uses a MIMD design based on a relatively small number of standard message passing interface (MPI) library calls and is fully portable between serial and parallel computers. copyright 1998 The American Astronomical Society
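
    The pipelined wavelength scheme can be pictured with a few lines of mpi4py (purely illustrative: PHOENIX itself is a Fortran/MPI code, and solve_wavelength_point below is a hypothetical stand-in for the per-wavelength radiative transfer solve); run with, e.g., mpiexec -n 4 python pipeline.py:

```python
# Pipelined wavelength parallelization sketch with mpi4py.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_wl = 16 * size                       # toy number of wavelength points
block = n_wl // size                   # contiguous block of points per rank
lo, hi = rank * block, (rank + 1) * block

def solve_wavelength_point(k, upwind_state):
    # placeholder: the real solve updates the radiation field at point k
    # using the result from point k - 1 (initial value problem in wavelength)
    return upwind_state + 1.0

# receive the upwind state from the rank that owns the previous block
state = 0.0 if rank == 0 else comm.recv(source=rank - 1, tag=0)

for k in range(lo, hi):
    state = solve_wavelength_point(k, state)

# forward the boundary state downstream as soon as it is known
if rank < size - 1:
    comm.send(state, dest=rank + 1, tag=0)

print(f"rank {rank}: wavelength points {lo}..{hi - 1}, boundary state {state}")
```

    The sketch shows a single relay pass; presumably the production pipeline keeps all ranks busy by overlapping successive sweeps (the abstract states only that each result is forwarded as soon as it is known), and the aggregate distributed memory is what makes runs with hundreds of thousands of wavelength points feasible.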

  15. Parallel Evolution of Genes and Languages in the Caucasus Region

    Science.gov (United States)

    Balanovsky, Oleg; Dibirova, Khadizhat; Dybo, Anna; Mudrak, Oleg; Frolova, Svetlana; Pocheshkhova, Elvira; Haber, Marc; Platt, Daniel; Schurr, Theodore; Haak, Wolfgang; Kuznetsova, Marina; Radzhabov, Magomed; Balaganskaya, Olga; Romanov, Alexey; Zakharova, Tatiana; Soria Hernanz, David F.; Zalloua, Pierre; Koshel, Sergey; Ruhlen, Merritt; Renfrew, Colin; Wells, R. Spencer; Tyler-Smith, Chris; Balanovska, Elena

    2012-01-01

    We analyzed 40 SNP and 19 STR Y-chromosomal markers in a large sample of 1,525 indigenous individuals from 14 populations in the Caucasus and 254 additional individuals representing potential source populations. We also employed a lexicostatistical approach to reconstruct the history of the languages of the North Caucasian family spoken by the Caucasus populations. We found a different major haplogroup to be prevalent in each of four sets of populations that occupy distinct geographic regions and belong to different linguistic branches. The haplogroup frequencies correlated with geography and, even more strongly, with language. Within haplogroups, a number of haplotype clusters were shown to be specific to individual populations and languages. The data suggested a direct origin of Caucasus male lineages from the Near East, followed by high levels of isolation, differentiation and genetic drift in situ. Comparison of genetic and linguistic reconstructions covering the last few millennia showed striking correspondences between the topology and dates of the respective gene and language trees, and with documented historical events. Overall, in the Caucasus region, unmatched levels of gene-language co-evolution occurred within geographically isolated populations, probably due to its mountainous terrain. PMID:21571925

  16. Intelligence in childhood and chronic widespread pain in middle age: the National Child Development Survey.

    Science.gov (United States)

    Gale, Catharine R; Deary, Ian J; Cooper, Cyrus; Batty, G David

    2012-12-01

    Psychological factors are thought to play a part in the aetiology of chronic widespread pain. We investigated the relationship between intelligence in childhood and risk of chronic widespread pain in adulthood in 6902 men and women from the National Child Development Survey (1958 British Birth Cohort). Participants took a test of general cognitive ability at age 11 years, and chronic widespread pain, defined according to the American College of Rheumatology criteria, was assessed at age 45 years. Risk ratios (RRs) and 95% confidence intervals (CIs) were estimated using log-binomial regression, adjusting for sex and potential confounding or mediating factors. Risk of chronic widespread pain, defined according to the American College of Rheumatology criteria, rose in a stepwise fashion as intelligence fell (significant linear trend); per standard deviation decrease in intelligence quotient, the RR of chronic widespread pain was 1.26 (95% CI 1.17-1.35). In multivariate backwards stepwise regression, lower childhood intelligence remained as an independent predictor of chronic widespread pain (RR 1.10; 95% CI 1.01-1.19), along with social class, educational attainment, body mass index, smoking status, and psychological distress. Part of the effect of lower childhood intelligence on risk of chronic widespread pain in midlife was significantly mediated through greater body mass index and more disadvantaged socioeconomic position. Men and women with higher intelligence in childhood are less likely as adults to report chronic widespread pain. Copyright © 2012 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  17. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic exponentially increases the amount of computation carried out in each generation. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.
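
    To make the estimation problem concrete, here is a deliberately simplified sketch: the classical (integer-order) Lorenz system stands in for a fractional-order system and a plain particle swarm optimizer stands in for the quantum-parallel variant, so only the way the problem is posed, minimizing a trajectory-mismatch objective over the parameter space, carries over:

```python
import numpy as np

def lorenz_traj(params, x0, dt=0.01, steps=200):
    sigma, rho, beta = params
    x = np.array(x0, dtype=float)
    out = np.empty((steps, 3))
    for i in range(steps):                        # simple Euler integration
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        out[i] = x
    return out

def objective(params, observed, x0):
    return float(np.sum((lorenz_traj(params, x0) - observed) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_params, x0 = (10.0, 28.0, 8.0 / 3.0), (1.0, 1.0, 1.0)
    observed = lorenz_traj(true_params, x0)

    lo, hi = np.array([5.0, 20.0, 1.0]), np.array([15.0, 35.0, 4.0])
    n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
    pos = rng.uniform(lo, hi, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([objective(p, observed, x0) for p in pos])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)          # keep particles in the search box
        f = np.array([objective(p, observed, x0) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    print("estimated (sigma, rho, beta):", np.round(gbest, 3))
```

    Because the system is chaotic, the mismatch surface becomes increasingly rugged as the fitting window grows, which is part of why more aggressive global optimizers such as QPPSO are of interest.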

  18. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms cluster multidimensional data by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
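
    For reference, the k-means++ seeding step being parallelized looks like this serial NumPy sketch; the D^2 distance update is the step that maps naturally onto CUDA/Thrust, OpenMP or the XMT (a reference sketch of the published algorithm, not the record's C++ code):

```python
import numpy as np

def kmeans_pp_seeds(X, k, rng=None):
    """k-means++ seeding (Arthur & Vassilvitskii, 2007): pick the first seed
    uniformly, then each next seed with probability proportional to the squared
    distance to the nearest seed chosen so far."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]                      # first seed: uniform
    d2 = np.sum((X - centers[0]) ** 2, axis=1)          # squared distance to nearest seed
    for _ in range(1, k):
        centers.append(X[rng.choice(n, p=d2 / d2.sum())])   # D^2-weighted draw
        d2 = np.minimum(d2, np.sum((X - centers[-1]) ** 2, axis=1))
    return np.stack(centers)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(m, 0.3, size=(200, 2)) for m in ((0, 0), (3, 3), (6, 0))])
    print(kmeans_pp_seeds(X, k=3))
```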

  19. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem that parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is also shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
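
    As a toy illustration of the idea shared by SENSE-type methods, the sketch below simulates a 1-D, four-coil acquisition undersampled by a factor of two and unfolds each aliased pixel pair with a small least-squares solve (the synthetic sensitivities and the 1-D 'image' are assumptions of the sketch, not of the review):

```python
import numpy as np

# Toy 1-D SENSE-style unfolding: 4 coils, undersampling factor R = 2,
# so pixel y folds onto pixel y + N/2.
rng = np.random.default_rng(0)
n_coils, n_pix, R = 4, 128, 2
half = n_pix // R

rho = rng.random(n_pix)                                   # true object profile
x = np.linspace(0.0, 1.0, n_pix)
sens = np.stack([np.exp(-((x - c) ** 2) / 0.1) for c in (0.1, 0.4, 0.6, 0.9)])

# build the aliased coil images directly from the SENSE signal model
aliased = sens[:, :half] * rho[:half] + sens[:, half:] * rho[half:]

# unfold: one small least-squares solve per aliased pixel location
recon = np.zeros(n_pix)
for y in range(half):
    S = np.stack([sens[:, y], sens[:, y + half]], axis=1)     # (n_coils, R)
    recon[[y, y + half]], *_ = np.linalg.lstsq(S, aliased[:, y], rcond=None)

print("max unfolding error:", np.max(np.abs(recon - rho)))    # ~1e-15
```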

  20. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  1. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.

  2. Influence of Paralleling Dies and Paralleling Half-Bridges on Transient Current Distribution in Multichip Power Modules

    DEFF Research Database (Denmark)

    Li, Helong; Zhou, Wei; Wang, Xiongfei

    2018-01-01

    This paper addresses the transient current distribution in the multichip half-bridge power modules, where two types of paralleling connections with different current commutation mechanisms are considered: paralleling dies and paralleling half-bridges. It reveals that with paralleling dies, both t...

  3. Time evolution of tokamak states with flow

    International Nuclear Information System (INIS)

    Kerner, W.; Weitzner, H.

    1985-12-01

    The general dissipative Braginskii single-fluid model is applied to simulate tokamak transport. An expansion with respect to ε = (ω_i τ_i)^-1, the factor by which perpendicular and parallel transport coefficients differ, yields a numerically tractable scheme. The resulting 1-1/2 D procedure requires computation of 2D toroidal equilibria with flow together with the solution of a system of ordinary 1D flux-averaged equations for the time evolution of the profiles. 13 refs

  4. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform for a high level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  5. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  6. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-29

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.

  7. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  8. Evolution Engines and Artificial Intelligence

    Science.gov (United States)

    Hemker, Andreas; Becks, Karl-Heinz

    In recent years artificial intelligence has achieved great successes, mainly in the field of expert systems and neural networks. Nevertheless, the road to truly intelligent systems is still obscured. Artificial intelligence systems with a broad range of cognitive abilities are not within sight. The limited competence of such systems (brittleness) is identified as a consequence of the top-down design process. The evolution principle of nature, on the other hand, shows an alternative and elegant way to build intelligent systems. We propose to take an evolution engine as the driving force for the bottom-up development of knowledge bases and for the optimization of the problem-solving process. A novel data analysis system for the high energy physics experiment DELPHI at CERN shows the practical relevance of this idea. The system is able to reconstruct the physical processes after the collision of particles by making use of the underlying standard model of elementary particle physics. The evolution engine acts as a global controller of a population of inference engines working on the reconstruction task. By implementing the system on the Connection Machine (Model CM-2) we take full advantage of the inherent parallelization potential of the evolutionary approach.

  9. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  10. Back to the sea twice: identifying candidate plant genes for molecular evolution to marine life

    Directory of Open Access Journals (Sweden)

    Reusch Thorsten BH

    2011-01-01

    Full Text Available Abstract Background Seagrasses are a polyphyletic group of monocotyledonous angiosperms that have adapted to a completely submerged lifestyle in marine waters. Here, we exploit two collections of expressed sequence tags (ESTs) of two wide-spread and ecologically important seagrass species, the Mediterranean seagrass Posidonia oceanica (L.) Delile and the eelgrass Zostera marina L., which have independently evolved from aquatic ancestors. This replicated, yet independent evolutionary history facilitates the identification of traits that may have evolved in parallel and are possible instrumental candidates for adaptation to a marine habitat. Results In our study, we provide the first quantitative perspective on molecular adaptations in two seagrass species. By constructing orthologous gene clusters shared between two seagrasses (Z. marina and P. oceanica) and eight distantly related terrestrial angiosperm species, 51 genes could be identified with detection of positive selection along the seagrass branches of the phylogenetic tree. Characterization of these positively selected genes using KEGG pathways and the Gene Ontology uncovered that these genes are mostly involved in translation, metabolism, and photosynthesis. Conclusions These results provide first insights into which seagrass genes have diverged from their terrestrial counterparts via an initial aquatic stage characteristic of the order and to the derived fully-marine stage characteristic of seagrasses. We discuss how adaptive changes in these processes may have contributed to the evolution towards an aquatic and marine existence.

  11. Back to the sea twice: identifying candidate plant genes for molecular evolution to marine life.

    Science.gov (United States)

    Wissler, Lothar; Codoñer, Francisco M; Gu, Jenny; Reusch, Thorsten B H; Olsen, Jeanine L; Procaccini, Gabriele; Bornberg-Bauer, Erich

    2011-01-12

    Seagrasses are a polyphyletic group of monocotyledonous angiosperms that have adapted to a completely submerged lifestyle in marine waters. Here, we exploit two collections of expressed sequence tags (ESTs) of two wide-spread and ecologically important seagrass species, the Mediterranean seagrass Posidonia oceanica (L.) Delile and the eelgrass Zostera marina L., which have independently evolved from aquatic ancestors. This replicated, yet independent evolutionary history facilitates the identification of traits that may have evolved in parallel and are possible instrumental candidates for adaptation to a marine habitat. In our study, we provide the first quantitative perspective on molecular adaptations in two seagrass species. By constructing orthologous gene clusters shared between two seagrasses (Z. marina and P. oceanica) and eight distantly related terrestrial angiosperm species, 51 genes could be identified with detection of positive selection along the seagrass branches of the phylogenetic tree. Characterization of these positively selected genes using KEGG pathways and the Gene Ontology uncovered that these genes are mostly involved in translation, metabolism, and photosynthesis. These results provide first insights into which seagrass genes have diverged from their terrestrial counterparts via an initial aquatic stage characteristic of the order and to the derived fully-marine stage characteristic of seagrasses. We discuss how adaptive changes in these processes may have contributed to the evolution towards an aquatic and marine existence.

  12. Vectorization, parallelization and porting of nuclear codes (vectorization and parallelization). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Kawai, Wataru; Nemoto, Toshiyuki; Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki; Kawasaki, Nobuo; Yatake, Yo-ichi

    2000-03-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system and the Paragon system at Center for Promotion of Computational Science and Engineering in Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this vectorization and parallelization on vector processors part, the vectorization of General Tokamak Circuit Simulation Program code GTCSP, the vectorization and parallelization of Molecular Dynamics NTV (n-particle, Temperature and Velocity) Simulation code MSP2, Eddy Current Analysis code EDDYCAL, Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2 and MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of Monte Carlo N-Particle Transport code MCNP4B2, Plasma Hydrodynamics code using Cubic Interpolated Propagation Method PHCIP and Vectorized Monte Carlo code (continuous energy model / multi-group model) MVP/GMVP on the Paragon are described. In the porting part, the porting of Monte Carlo N-Particle Transport code MCNP4B2 and Reactor Safety Analysis code RELAP5 on the AP3000 are described. (author)

  13. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest ... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts...
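
    List ranking is usually introduced through pointer jumping (Wyllie's algorithm); the NumPy sketch below shows the data-parallel round structure that PEM and GPU variants build on (a teaching sketch, not the I/O-optimal algorithm of the paper):

```python
import numpy as np

def list_rank(succ):
    """Wyllie's pointer-jumping list ranking.  succ[i] is the successor of node
    i in a linked list, with the tail pointing to itself; the result is each
    node's distance from the tail.  Every round is a fully data-parallel update
    over all nodes -- NumPy stands in for the parallel machine here."""
    succ = np.array(succ)
    rank = np.where(succ == np.arange(len(succ)), 0, 1)   # tail starts at 0
    while np.any(succ != succ[succ]):         # O(log n) rounds
        rank = rank + rank[succ]              # add the partner's rank ...
        succ = succ[succ]                     # ... then jump the pointer
    return rank

if __name__ == "__main__":
    # list 4 -> 2 -> 0 -> 3 -> 1 (tail)
    succ = [3, 1, 0, 1, 2]
    print(list_rank(succ))                    # -> [2, 0, 3, 1, 4]
```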

  14. Parallelizing Gene Expression Programming Algorithm in Enabling Large-Scale Classification

    Directory of Open Access Journals (Sweden)

    Lixiong Xu

    2017-01-01

    Full Text Available As one of the most effective function mining algorithms, the Gene Expression Programming (GEP) algorithm has been widely used in classification, pattern recognition, prediction, and other research fields. Through self-evolution, GEP is able to mine an optimal function for dealing with complicated tasks. However, in big data research, GEP suffers from low efficiency due to its long mining process. To improve the efficiency of GEP in big data research, especially for processing large-scale classification tasks, this paper presents a parallelized GEP algorithm using the MapReduce computing model. The experimental results show that the presented algorithm is scalable and efficient for processing large-scale classification tasks.
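
    The map/reduce split can be sketched with Python multiprocessing standing in for a MapReduce cluster; individuals here are plain polynomial coefficient vectors rather than real GEP chromosomes, so only the map (partial fitness per data split) and reduce (sum of partials) structure carries over:

```python
import numpy as np
from multiprocessing import Pool

def partial_fitness(args):
    """Map step: sum of squared errors of every individual on one data split."""
    population, x_chunk, y_chunk = args
    preds = np.stack([np.polyval(ind, x_chunk) for ind in population])
    return np.sum((preds - y_chunk) ** 2, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 100_000)
    y = 2.0 * x ** 2 - x + 0.5 + 0.01 * rng.standard_normal(x.size)
    population = [rng.normal(size=3) for _ in range(32)]       # candidate quadratics

    n_splits = 4
    splits = list(zip(np.array_split(x, n_splits), np.array_split(y, n_splits)))
    with Pool(n_splits) as pool:                                # map phase (parallel)
        partials = pool.map(partial_fitness,
                            [(population, xc, yc) for xc, yc in splits])
    fitness = np.sum(partials, axis=0)                          # reduce phase
    print("best candidate coefficients:", population[int(np.argmin(fitness))])
```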

  15. Evolution based on domain combinations: the case of glutaredoxins

    Directory of Open Access Journals (Sweden)

    Herrero Enrique

    2009-03-01

    Full Text Available Abstract Background Protein domains represent the basic units in the evolution of proteins. Domain duplication and shuffling by recombination and fusion, followed by divergence are the most common mechanisms in this process. Such domain fusion and recombination events are predicted to occur only once for a given multidomain architecture. However, other scenarios may be relevant in the evolution of specific proteins, such as convergent evolution of multidomain architectures. With this in mind, we study glutaredoxin (GRX) domains, because these domains of approximately one hundred amino acids are widespread in archaea, bacteria and eukaryotes and participate in fusion proteins. GRXs are responsible for the reduction of protein disulfides or glutathione-protein mixed disulfides and are involved in cellular redox regulation, although their specific roles and targets are often unclear. Results In this work we analyze the distribution and evolution of GRX proteins in archaea, bacteria and eukaryotes. We study over one thousand GRX proteins, each containing at least one GRX domain, from hundreds of different organisms and trace the origin and evolution of the GRX domain within the tree of life. Conclusion Our results suggest that single domain GRX proteins of the CGFS and CPYC classes have, each, evolved through duplication and divergence from one initial gene that was present in the last common ancestor of all organisms. Remarkably, we identify a case of convergent evolution in domain architecture that involves the GRX domain. Two independent recombination events of a TRX domain to a GRX domain are likely to have occurred, which is an exception to the dominant mechanism of domain architecture evolution.

  16. Parallel Mitogenome Sequencing Alleviates Random Rooting Effect in Phylogeography.

    Science.gov (United States)

    Hirase, Shotaro; Takeshima, Hirohiko; Nishida, Mutsumi; Iwasaki, Wataru

    2016-04-28

    Reliably rooted phylogenetic trees play irreplaceable roles in clarifying diversification in the patterns of species and populations. However, such trees are often unavailable in phylogeographic studies, particularly when the focus is on rapidly expanded populations that exhibit star-like trees. A fundamental bottleneck is known as the random rooting effect, where a distant outgroup tends to root an unrooted tree "randomly." We investigated whether parallel mitochondrial genome (mitogenome) sequencing alleviates this effect in phylogeography using a case study on the Sea of Japan lineage of the intertidal goby Chaenogobius annularis. Eighty-three C. annularis individuals were collected and their mitogenomes were determined by high-throughput and low-cost parallel sequencing. Phylogenetic analysis of these mitogenome sequences was conducted to root the Sea of Japan lineage, which has a star-like phylogeny and had not been reliably rooted. The topologies of the bootstrap trees were investigated to determine whether the use of mitogenomes alleviated the random rooting effect. The mitogenome data successfully rooted the Sea of Japan lineage by alleviating the effect, which hindered phylogenetic analysis that used specific gene sequences. The reliable rooting of the lineage led to the discovery of a novel, northern lineage that expanded during an interglacial period with high bootstrap support. Furthermore, the finding of this lineage suggested the existence of additional glacial refugia and provided a new recent calibration point that revised the divergence time estimation between the Sea of Japan and Pacific Ocean lineages. This study illustrates the effectiveness of parallel mitogenome sequencing for solving the random rooting problem in phylogeographic studies. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  17. Feynman’s clock, a new variational principle, and parallel-in-time quantum dynamics

    Science.gov (United States)

    McClean, Jarrod R.; Parkhill, John A.; Aspuru-Guzik, Alán

    2013-01-01

    We introduce a discrete-time variational principle inspired by the quantum clock originally proposed by Feynman and use it to write down quantum evolution as a ground-state eigenvalue problem. The construction allows one to apply ground-state quantum many-body theory to quantum dynamics, extending the reach of many highly developed tools from this fertile research area. Moreover, this formalism naturally leads to an algorithm to parallelize quantum simulation over time. We draw an explicit connection between previously known time-dependent variational principles and the time-embedded variational principle presented. Sample calculations are presented, applying the idea to a hydrogen molecule and the spin degrees of freedom of a model inorganic compound, demonstrating the parallel speedup of our method as well as its flexibility in applying ground-state methodologies. Finally, we take advantage of the unique perspective of this variational principle to examine the error of basis approximations in quantum dynamics. PMID:24062428
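
    For orientation, the clock construction that such approaches build on is usually written in the Feynman-Kitaev form; the following is a sketch of that standard form (with U_0 taken as the identity), not necessarily the exact operator or variational functional used in the paper:

    \[
    \mathcal{C} \;=\; \sum_{t=1}^{T} \tfrac{1}{2}\left( \mathbb{1}\otimes|t-1\rangle\langle t-1| \;+\; \mathbb{1}\otimes|t\rangle\langle t| \;-\; U_t\otimes|t\rangle\langle t-1| \;-\; U_t^{\dagger}\otimes|t-1\rangle\langle t| \right),
    \]

    and, once a penalty term pins the t = 0 clock register to the initial state \(|\psi_0\rangle\), its zero-energy ground state is the history state

    \[
    |\Psi\rangle \;=\; \frac{1}{\sqrt{T+1}} \sum_{t=0}^{T} \big(U_t U_{t-1}\cdots U_1\,|\psi_0\rangle\big)\otimes|t\rangle ,
    \]

    so the entire discrete-time evolution is encoded in a single ground-state eigenvalue problem, which is what allows ground-state machinery, and with it time-parallel decompositions, to be applied.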

  18. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Interactions between parallel channels are examined. Results of experimental research on nonstationary flow regimes in three parallel vertical channels are presented, including analysis of the phenomena and the mechanisms of parallel channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow. (author)

  19. Microwave Photonics: current challenges towards widespread application.

    Science.gov (United States)

    Capmany, José; Li, Guifang; Lim, Christina; Yao, Jianping

    2013-09-23

    Microwave Photonics, a symbiotic field of research that brings together the worlds of optics and radio frequency is currently facing several challenges in its transition from a niche to a truly widespread technology essential to support the ever-increasing values for speed, bandwidth, processing capability and dynamic range that will be required in next generation hybrid access networks. We outline these challenges, which are the subject of the contributions to this focus issue.

  20. Parallel selection on TRPV6 in human populations.

    Science.gov (United States)

    Hughes, David A; Tang, Kun; Strotmann, Rainer; Schöneberg, Torsten; Prenen, Jean; Nilius, Bernd; Stoneking, Mark

    2008-02-27

    We identified and examined a candidate gene for local directional selection in Europeans, TRPV6, and conclude that selection has acted on standing genetic variation at this locus, creating parallel soft sweep events in humans. A novel modification of the extended haplotype homozygosity (EHH) test was utilized, which compares EHH for a single allele across populations, to investigate the signature of selection at TRPV6 and neighboring linked loci in published data sets for Europeans, Asians and African-Americans, as well as in newly-obtained sequence data for additional populations. We find that all non-African populations carry a signature of selection on the same haplotype at the TRPV6 locus. The selective footprints, however, are significantly differentiated between non-African populations and estimated to be younger than an ancestral population of non-Africans. The possibility of a single selection event occurring in an ancestral population of non-Africans was tested by simulations and rejected. The putatively-selected TRPV6 haplotype contains three candidate sites for functional differences, namely derived non-synonymous substitutions C157R, M378V and M681T. Potential functional differences between the ancestral and derived TRPV6 proteins were investigated by cloning the ancestral and derived forms, transfecting cell lines, and carrying out electrophysiology experiments via patch clamp analysis. No statistically-significant differences in biophysical channel function were found, although one property of the protein, namely Ca(2+) dependent inactivation, may show functionally relevant differences between the ancestral and derived forms. Although the reason for selection on this locus remains elusive, this is the first demonstration of a widespread parallel selection event acting on standing genetic variation in humans, and highlights the utility of between population EHH statistics.
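
    The core statistic, extended haplotype homozygosity, is simple to compute; the sketch below implements the standard single-population definition (Sabeti et al. 2002) rather than the modified cross-population comparison used in the study:

```python
import numpy as np
from math import comb

def ehh(haplotypes, core_idx, core_allele):
    """Extended haplotype homozygosity: for carriers of a given core allele,
    the probability that two randomly chosen chromosomes are identical at
    every marker between the core and marker j."""
    carriers = haplotypes[haplotypes[:, core_idx] == core_allele]
    n = len(carriers)
    if n < 2:
        raise ValueError("need at least two carriers of the core allele")
    pairs = comb(n, 2)
    ehh_values = {}
    for j in range(haplotypes.shape[1]):
        lo, hi = sorted((core_idx, j))
        _, counts = np.unique(carriers[:, lo:hi + 1], axis=0, return_counts=True)
        ehh_values[j] = sum(comb(int(c), 2) for c in counts) / pairs
    return ehh_values

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    haps = rng.integers(0, 2, size=(40, 11))        # 40 chromosomes, 11 ordered markers
    haps[:20, 3:8] = 1                              # long shared haplotype around the core
    print(ehh(haps, core_idx=5, core_allele=1))     # EHH decays away from marker 5
```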

  1. Parallel selection on TRPV6 in human populations.

    Directory of Open Access Journals (Sweden)

    David A Hughes

    Full Text Available We identified and examined a candidate gene for local directional selection in Europeans, TRPV6, and conclude that selection has acted on standing genetic variation at this locus, creating parallel soft sweep events in humans. A novel modification of the extended haplotype homozygosity (EHH) test was utilized, which compares EHH for a single allele across populations, to investigate the signature of selection at TRPV6 and neighboring linked loci in published data sets for Europeans, Asians and African-Americans, as well as in newly-obtained sequence data for additional populations. We find that all non-African populations carry a signature of selection on the same haplotype at the TRPV6 locus. The selective footprints, however, are significantly differentiated between non-African populations and estimated to be younger than an ancestral population of non-Africans. The possibility of a single selection event occurring in an ancestral population of non-Africans was tested by simulations and rejected. The putatively-selected TRPV6 haplotype contains three candidate sites for functional differences, namely derived non-synonymous substitutions C157R, M378V and M681T. Potential functional differences between the ancestral and derived TRPV6 proteins were investigated by cloning the ancestral and derived forms, transfecting cell lines, and carrying out electrophysiology experiments via patch clamp analysis. No statistically-significant differences in biophysical channel function were found, although one property of the protein, namely Ca(2+) dependent inactivation, may show functionally relevant differences between the ancestral and derived forms. Although the reason for selection on this locus remains elusive, this is the first demonstration of a widespread parallel selection event acting on standing genetic variation in humans, and highlights the utility of between population EHH statistics.

  2. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  3. GPU: the biggest key processor for AI and parallel processing

    Science.gov (United States)

    Baji, Toru

    2017-07-01

    Two types of processors exist in the market: the conventional CPU and the Graphics Processing Unit (GPU). A typical CPU is composed of 1 to 8 cores, while a GPU has thousands of cores. The CPU is good for sequential processing, while the GPU is good at accelerating software with heavy parallel execution. GPUs were initially dedicated to 3D graphics. However, from 2006, when GPUs adopted general-purpose cores, it was recognized that this architecture could be used as a general-purpose massively parallel processor. NVIDIA developed a software framework, the Compute Unified Device Architecture (CUDA), that makes it possible to easily program the GPU for these applications. With CUDA, GPUs came to be used widely in workstations and supercomputers. Recently, two key technologies have been highlighted in the industry: Artificial Intelligence (AI) and autonomous driving cars. AI requires massive parallel operations to train many-layered neural networks. With CPUs alone, it was impossible to finish the training in a practical time. The latest multi-GPU system with P100 makes it possible to finish the training in a few hours. For autonomous driving cars, TOPS-class performance is required to implement perception, localization and path planning, and again SoCs with integrated GPUs will play a key role there. In this paper, the evolution of the GPU, one of the largest commercial devices requiring state-of-the-art fabrication technology, will be introduced. An overview of key GPU-demanding applications like the ones described above will also be given.

  4. Widespread plant species: natives vs. aliens in our changing world

    Science.gov (United States)

    Stohlgren, Thomas J.; Pyšek, Petr; Kartesz, John; Nishino, Misako; Pauchard, Aníbal; Winter, Marten; Pino, Joan; Richardson, David M.; Wilson, John R.U.; Murray, Brad R.; Phillips, Megan L.; Ming-yang, Li; Celesti-Grapow, Laura; Font, Xavier

    2011-01-01

    Estimates of the level of invasion for a region are traditionally based on relative numbers of native and alien species. However, alien species differ dramatically in the size of their invasive ranges. Here we present the first study to quantify the level of invasion for several regions of the world in terms of the most widely distributed plant species (natives vs. aliens). Aliens accounted for 51.3% of the 120 most widely distributed plant species in North America, 43.3% in New South Wales (Australia), 34.2% in Chile, 29.7% in Argentina, and 22.5% in the Republic of South Africa. However, Europe had only 1% of alien species among the most widespread species of the flora. Across regions, alien species relative to native species were either as well-distributed (10 comparisons) or more widely distributed (5 comparisons). These striking patterns highlight the profound contribution that widespread invasive alien plants make to floristic dominance patterns across different regions. Many of the most widespread species are alien plants, and, in particular, Europe and Asia appear as major contributors to the homogenization of the floras in the Americas. We recommend that spatial extent of invasion should be explicitly incorporated in assessments of invasibility, globalization, and risk assessments.

  5. Widespread plant species: Natives versus aliens in our changing world

    Science.gov (United States)

    Stohlgren, T.J.; Pysek, P.; Kartesz, J.; Nishino, M.; Pauchard, A.; Winter, M.; Pino, J.; Richardson, D.M.; Wilson, J.R.U.; Murray, B.R.; Phillips, M.L.; Ming-yang, L.; Celesti-Grapow, L.; Font, X.

    2011-01-01

    Estimates of the level of invasion for a region are traditionally based on relative numbers of native and alien species. However, alien species differ dramatically in the size of their invasive ranges. Here we present the first study to quantify the level of invasion for several regions of the world in terms of the most widely distributed plant species (natives vs. aliens). Aliens accounted for 51.3% of the 120 most widely distributed plant species in North America, 43.3% in New South Wales (Australia), 34.2% in Chile, 29.7% in Argentina, and 22.5% in the Republic of South Africa. However, Europe had only 1% of alien species among the most widespread species of the flora. Across regions, alien species relative to native species were either as well-distributed (10 comparisons) or more widely distributed (5 comparisons). These striking patterns highlight the profound contribution that widespread invasive alien plants make to floristic dominance patterns across different regions. Many of the most widespread species are alien plants, and, in particular, Europe and Asia appear as major contributors to the homogenization of the floras in the Americas. We recommend that spatial extent of invasion should be explicitly incorporated in assessments of invasibility, globalization, and risk assessments. © 2011 Springer Science+Business Media B.V.

  6. Distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm for deployment of wireless sensor networks

    DEFF Research Database (Denmark)

    Cao, Bin; Zhao, Jianwei; Yang, Po

    2018-01-01

    Using immune algorithms is generally a time-intensive process, especially for problems with a large number of variables. In this paper, we propose a distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm that is implemented using the message passing interface (MPI). The proposed algorithm is composed of three layers: objective, group and individual layers. First, for each objective in the multi-objective problem to be addressed, a subpopulation is used for optimization, and an archive population is used to optimize all the objectives. Second, the large... Compared with the multi-objective evolutionary algorithms the Cooperative Coevolutionary Generalized Differential Evolution 3, the Cooperative Multi-objective Differential Evolution and the Nondominated Sorting Genetic Algorithm III, the proposed algorithm addresses the deployment optimization problem efficiently and effectively.

  7. The wide-spread presence of rib-like patterns in basal shear of ice streams detected by surface data inversion

    Science.gov (United States)

    Sergienko, O. V.

    2013-12-01

    Direct observation of the basal conditions under continental-scale ice sheets is logistically impossible. A possible approach is to estimate conditions at the ice-bed interface from surface observations by means of inverse methods. Recent advances in remote and ground-based observations have made it possible to acquire a wealth of observations from the Greenland and Antarctic ice sheets. Using high-resolution data sets of ice surface and bed elevations and surface velocities, inversions for basal conditions have been performed for several ice streams in Greenland and Antarctica. The inversion results reveal the wide-spread presence of rib-like spatial structures in basal shear. The analysis of the hydraulic potential distribution shows that these rib-like structures co-locate with highs of the gradient of hydraulic potential. This suggests that subglacial water plays a role in the development and evolution of the basal shear ribs.

  8. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    ...adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...

  9. Toxin structures as evolutionary tools: Using conserved 3D folds to study the evolution of rapidly evolving peptides.

    Science.gov (United States)

    Undheim, Eivind A B; Mobli, Mehdi; King, Glenn F

    2016-06-01

    Three-dimensional (3D) structures have been used to explore the evolution of proteins for decades, yet they have rarely been utilized to study the molecular evolution of peptides. Here, we highlight areas in which 3D structures can be particularly useful for studying the molecular evolution of peptide toxins. Although we focus our discussion on animal toxins, including one of the most widespread disulfide-rich peptide folds known, the inhibitor cystine knot, our conclusions should be widely applicable to studies of the evolution of disulfide-constrained peptides. We show that conserved 3D folds can be used to identify evolutionary links and test hypotheses regarding the evolutionary origin of peptides with extremely low sequence identity; construct accurate multiple sequence alignments; and better understand the evolutionary forces that drive the molecular evolution of peptides. Also watch the video abstract. © 2016 WILEY Periodicals, Inc.

  10. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers with both shared and distributed memory are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the architecture of parallel computers, using a divide-and-conquer strategy, adjusting the algorithm structure of the program, resolving data dependences, identifying parallelizable components and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various HP parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup has been obtained
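
    The divide-and-conquer pattern can be made concrete with a toy parallel Monte Carlo sketch: each worker simulates an independent batch of photon histories through a 1-D scattering and absorbing slab, and the partial tallies are summed afterwards (the slab model, the albedo value and the use of Python multiprocessing are illustrative assumptions, not details of the codes described above):

```python
import numpy as np
from multiprocessing import Pool

SLAB_THICKNESS = 5.0      # in mean free paths
ALBEDO = 0.8              # scattering probability at each collision

def simulate_batch(args):
    n_photons, seed = args
    rng = np.random.default_rng(seed)
    transmitted = 0
    for _ in range(n_photons):
        x, mu = 0.0, 1.0                          # enter the slab moving inward
        while True:
            x += mu * rng.exponential(1.0)        # free flight to next collision
            if x >= SLAB_THICKNESS:
                transmitted += 1                  # leaked through the far side
                break
            if x < 0.0 or rng.random() > ALBEDO:
                break                             # escaped backwards or absorbed
            mu = rng.uniform(-1.0, 1.0)           # isotropic scattering
    return transmitted

if __name__ == "__main__":
    n_workers, per_worker = 4, 50_000
    with Pool(n_workers) as pool:
        tallies = pool.map(simulate_batch, [(per_worker, s) for s in range(n_workers)])
    print("transmission probability ~", sum(tallies) / (n_workers * per_worker))
```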

  11. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Parallelization is therefore needed to speed up a calculation that usually takes a long time. Graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the GPU (graphics processing unit).
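
    The bookkeeping that such a partitioner optimizes can be shown with a naive 1-D row partition of a sparse matrix-vector product (SciPy/NumPy sketch; a real hypergraph partitioner such as PaToH or hMETIS would choose the row blocks to minimize the communication volume computed below, and no GPU/CUDA code is shown):

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n, n_parts = 1000, 4
A = sp.random(n, n, density=0.01, format="csr", random_state=0)
x = rng.random(n)

row_blocks = np.array_split(np.arange(n), n_parts)     # naive contiguous cut
owner = np.empty(n, dtype=int)                         # owner[i]: part holding x[i]
for p, rows in enumerate(row_blocks):
    owner[rows] = p

y = np.zeros(n)
comm_volume = 0
for p, rows in enumerate(row_blocks):
    block = A[rows, :]                                 # this part's rows
    needed = np.unique(block.indices)                  # x entries the part reads
    comm_volume += int(np.sum(owner[needed] != p))     # entries owned elsewhere
    y[rows] = block @ x                                # local piece of the product

assert np.allclose(y, A @ x)
print("x entries exchanged between parts:", comm_volume)
```

    Choosing the row blocks so that each part mostly reads x entries it already owns is exactly the objective that the hypergraph (column-net) model captures, and it does so for rectangular and non-symmetric matrices where a plain graph model does not apply.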

  12. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  13. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes an object-oriented framework designed for parallelizing a set of related algorithms. The idea behind the system is to have a reusable framework for running several sequential algorithms in a parallel environment. The algorithms the framework can be used with have several things in common: they run in cycles, and it must be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to perform well: approximately linear speedup and low communication cost.
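
    The cycle-based master/slave pattern described above can be sketched with nothing more than a process pool. The snippet below is a hedged stand-in (standard-library Python rather than the paper's message-passing C++ framework), with a dummy work function in place of the ACO or SNF computations.

```python
# Minimal master/worker sketch of the "run in cycles, split the work" pattern.
# The work function and data are hypothetical placeholders.
from concurrent.futures import ProcessPoolExecutor

def work_unit(chunk):
    """One slave's share of a cycle: here, just a dummy partial sum."""
    return sum(i * i for i in chunk)

def split(data, n):
    k = max(1, len(data) // n)
    return [data[i:i + k] for i in range(0, len(data), k)]

if __name__ == "__main__":
    data = list(range(100_000))
    with ProcessPoolExecutor(max_workers=4) as workers:
        for cycle in range(3):                          # master loop, one map per cycle
            partials = list(workers.map(work_unit, split(data, 4)))
            print(f"cycle {cycle}: combined result = {sum(partials)}")
            # In an ACO-style application the master would now update shared
            # state (e.g. pheromone levels) before issuing the next cycle.
```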

  14. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with the equational programming language (EPL). Our approach is based on program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  15. Convergent evolution and mimicry of protein linear motifs in host-pathogen interactions.

    Science.gov (United States)

    Chemes, Lucía Beatriz; de Prat-Gay, Gonzalo; Sánchez, Ignacio Enrique

    2015-06-01

    Pathogen linear motif mimics are highly evolvable elements that facilitate rewiring of host protein interaction networks. Host linear motifs and pathogen mimics differ in sequence, leading to thermodynamic and structural differences in the resulting protein-protein interactions. Moreover, the functional output of a mimic depends on the motif and domain repertoire of the pathogen protein. Regulatory evolution mediated by linear motifs can be understood by measuring evolutionary rates, quantifying positive and negative selection and performing phylogenetic reconstructions of linear motif natural history. Convergent evolution of linear motif mimics is widespread among unrelated proteins from viral, prokaryotic and eukaryotic pathogens and can also take place within individual protein phylogenies. Statistics, biochemistry and laboratory models of infection link pathogen linear motifs to phenotypic traits such as tropism, virulence and oncogenicity. In vitro evolution experiments and analysis of natural sequences suggest that changes in linear motif composition underlie pathogen adaptation to a changing environment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Digits lost or gained? Evidence for pedal evolution in the dwarf salamander complex (Eurycea, Plethodontidae).

    Directory of Open Access Journals (Sweden)

    Trip Lamb

    Full Text Available Change in digit number, particularly digit loss, has occurred repeatedly over the evolutionary history of tetrapods. Although digit loss has been documented among distantly related species of salamanders, it is relatively uncommon in this amphibian order. For example, reduction from five to four toes appears to have evolved just three times in the morphologically and ecologically diverse family Plethodontidae. Here we report a molecular phylogenetic analysis for one of these four-toed lineages--the Eurycea quadridigitata complex (dwarf salamanders)--emphasizing relationships to other species in the genus. A multilocus phylogeny reveals that dwarf salamanders are paraphyletic with respect to a complex of five-toed, paedomorphic Eurycea from the Edwards Plateau in Texas. We use this phylogeny to examine evolution of digit number within the dwarf-Edwards Plateau clade, testing contrasting hypotheses of digit loss (parallelism) among dwarf salamanders versus digit gain (re-evolution) in the Edwards Plateau complex. Bayes factors analysis provides statistical support for a five-toed common ancestor at the dwarf-Edwards node, favoring, slightly, the parallelism hypothesis for digit loss. More importantly, our phylogenetic results pinpoint a rare event in the pedal evolution of plethodontid salamanders.

  17. Entropy in the Tangled Nature Model of evolution

    DEFF Research Database (Denmark)

    Roach, Ty N.F.; Nulton, James; Sibani, Paolo

    2017-01-01

    Applications of entropy principles to evolution and ecology are of paramount importance given the central role spatiotemporal structuring plays in both evolution and ecological succession. We obtain here a qualitative interpretation of the role of entropy in evolving ecological systems. Our...... interpretation is supported by mathematical arguments using simulation data generated by the Tangled Nature Model (TNM), a stochastic model of evolving ecologies. We define two types of configurational entropy and study their empirical time dependence obtained from the data. Both entropy measures increase...... logarithmically with time, while the entropy per individual decreases in time, in parallel with the growth of emergent structures visible from other aspects of the simulation. We discuss the biological relevance of these entropies to describe niche space and functional space of ecosystems, as well as their use...
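
    For readers who want a concrete handle on "configurational entropy", the snippet below computes the Shannon entropy of a species-abundance configuration and one simple per-individual normalization. It is a generic illustration only; the TNM paper defines its own two entropy measures, which are not reproduced here, and the abundance numbers are invented.

```python
# Generic sketch (not the TNM's exact definitions): Shannon entropy of a
# species-abundance configuration and a simple per-individual normalization.
import numpy as np

def config_entropy(abundances):
    """Shannon entropy (in nats) of the abundance distribution."""
    n = np.asarray(abundances, dtype=float)
    p = n[n > 0] / n.sum()
    return float(-(p * np.log(p)).sum())

abundances = [500, 120, 40, 12, 3]          # hypothetical snapshot of an ecology
H = config_entropy(abundances)
print("configurational entropy:", H)
print("entropy per individual:", H / sum(abundances))
```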

  18. The covert world of fish biofluorescence: a phylogenetically widespread and phenotypically variable phenomenon.

    Directory of Open Access Journals (Sweden)

    John S Sparks

    Full Text Available The discovery of fluorescent proteins has revolutionized experimental biology. Whereas the majority of fluorescent proteins have been identified from cnidarians, recently several fluorescent proteins have been isolated across the animal tree of life. Here we show that biofluorescence is not only phylogenetically widespread, but is also phenotypically variable across both cartilaginous and bony fishes, highlighting its evolutionary history and the possibility for discovery of numerous novel fluorescent proteins. Fish biofluorescence is especially common and morphologically variable in cryptically patterned coral-reef lineages. We identified 16 orders, 50 families, 105 genera, and more than 180 species of biofluorescent fishes. We have also reconstructed our current understanding of the phylogenetic distribution of biofluorescence for ray-finned fishes. The presence of yellow long-pass intraocular filters in many biofluorescent fish lineages and the substantive color vision capabilities of coral-reef fishes suggest that they are capable of detecting fluoresced light. We present species-specific emission patterns among closely related species, indicating that biofluorescence potentially functions in intraspecific communication and evidence that fluorescence can be used for camouflage. This research provides insight into the distribution, evolution, and phenotypic variability of biofluorescence in marine lineages and examines the role this variation may play.

  19. GRADSPMHD: A parallel MHD code based on the SPH formalism

    Science.gov (United States)

    Vanaverbeke, S.; Keppens, R.; Poedts, S.

    2014-03-01

    We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1-, 2-, and 3-dimensional standard benchmark tests and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: ˜30 MB for a

  20. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  1. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
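
    One of the "smart load-balancing" ideas for irregular work can be sketched in a few lines: a greedy longest-processing-time (LPT) assignment that always places the next-heaviest task on the currently lightest worker. This is a generic textbook heuristic shown for illustration, not the partitioners used in the paper, and the task costs below are made up.

```python
# Illustrative load-balancing sketch for irregular work: greedy
# longest-processing-time (LPT) assignment of uneven task costs to workers.
import heapq

def lpt_assign(costs, n_workers):
    """Return (load, worker, tasks) tuples; heaviest tasks go first to the lightest worker."""
    heap = [(0.0, w, []) for w in range(n_workers)]   # (load, worker id, task list)
    heapq.heapify(heap)
    for task, cost in sorted(enumerate(costs), key=lambda t: -t[1]):
        load, w, tasks = heapq.heappop(heap)          # lightest worker so far
        tasks.append(task)
        heapq.heappush(heap, (load + cost, w, tasks))
    return sorted(heap, key=lambda t: t[1])

costs = [9.0, 1.0, 7.5, 3.0, 3.0, 2.0, 8.0, 0.5]      # hypothetical per-task costs
for load, worker, tasks in lpt_assign(costs, 3):
    print(f"worker {worker}: tasks {tasks}, load {load}")
```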

  2. The Glasgow Parallel Reduction Machine: Programming Shared-memory Many-core Systems using Parallel Task Composition

    Directory of Open Access Journals (Sweden)

    Ashkan Tousimojarad

    2013-12-01

    Full Text Available We present the Glasgow Parallel Reduction Machine (GPRM), a novel, flexible framework for parallel task-composition-based many-core programming. We allow the programmer to structure programs into task code, written as C++ classes, and communication code, written in a restricted subset of C++ with functional semantics and parallel evaluation. In this paper we discuss the GPRM, the virtual machine framework that enables the parallel task composition approach. We focus the discussion on GPIR, the functional language used as the intermediate representation of the bytecode running on the GPRM. Using examples in this language we show the flexibility and power of our task composition framework. We demonstrate the potential using an implementation of a merge sort algorithm on a 64-core Tilera processor, as well as on a conventional Intel quad-core processor and an AMD 48-core processor system. We also compare our framework with OpenMP tasks in a parallel pointer chasing algorithm running on the Tilera processor. Our results show that the GPRM programs outperform the corresponding OpenMP codes on all test platforms, and can greatly facilitate writing of parallel programs, in particular non-data parallel algorithms such as reductions.
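
    The merge-sort experiment can be mimicked, very loosely, in a few lines of Python: sort independent chunks as parallel tasks and merge the sorted runs. This is only a hedged analogue of the task-composition idea, not the GPRM, GPIR, or the C++ task classes described in the paper.

```python
# Rough analogue of the merge-sort experiment (not the GPRM itself):
# sort independent chunks in parallel tasks, then merge the sorted runs.
from concurrent.futures import ProcessPoolExecutor
from heapq import merge
import random

def parallel_merge_sort(data, n_tasks=4):
    chunk = max(1, len(data) // n_tasks)
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ProcessPoolExecutor(max_workers=n_tasks) as pool:
        sorted_chunks = list(pool.map(sorted, chunks))   # parallel task phase
    return list(merge(*sorted_chunks))                   # sequential merge phase

if __name__ == "__main__":
    data = [random.random() for _ in range(100_000)]
    assert parallel_merge_sort(data) == sorted(data)
```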

  3. Genetic architecture underlying convergent evolution of egg-laying behavior in a seed-feeding beetle.

    Science.gov (United States)

    Fox, Charles W; Wagner, James D; Cline, Sara; Thomas, Frances Ann; Messina, Frank J

    2009-05-01

    Independent populations subjected to similar environments often exhibit convergent evolution. An unresolved question is the frequency with which such convergence reflects parallel genetic mechanisms. We examined the convergent evolution of egg-laying behavior in the seed-feeding beetle Callosobruchus maculatus. Females avoid ovipositing on seeds bearing conspecific eggs, but the degree of host discrimination varies among geographic populations. In a previous experiment, replicate lines switched from a small host to a large one evolved reduced discrimination after 40 generations. We used line crosses to determine the genetic architecture underlying this rapid response. The most parsimonious genetic models included dominance and/or epistasis for all crosses. The genetic architecture underlying reduced discrimination in two lines was not significantly different from the architecture underlying differences between geographic populations, but the architecture underlying the divergence of a third line differed from all others. We conclude that convergence of this complex trait may in some cases involve parallel genetic mechanisms.

  4. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...
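
    The space problem mentioned for naive matrix multiplication can be made concrete with a chunked ("streamed") evaluation: partial products over a block of the contraction dimension are folded into the result immediately instead of materializing all n^3 scalar products at once. The NumPy sketch below is only an illustration of that idea, not of NESL or Accelerate semantics.

```python
# Sketch of the streaming idea (not NESL/Accelerate): accumulate block-wise
# partial products instead of materializing every scalar product at once.
import numpy as np

def streamed_matmul(A, B, block=64):
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for start in range(0, k, block):
        stop = min(start + block, k)
        # Each pass touches only a `block`-wide slice of the contraction
        # dimension; its contribution is folded into C immediately.
        C += A[:, start:stop] @ B[start:stop, :]
    return C

rng = np.random.default_rng(1)
A, B = rng.standard_normal((128, 256)), rng.standard_normal((256, 96))
assert np.allclose(streamed_matmul(A, B), A @ B)
```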

  5. Modeling on Fe-Cr microstructure: evolution with Cr content

    International Nuclear Information System (INIS)

    Diaz Arroyo, D.; Perlado, J.M.; Hernandez-Mayoral, M.; Caturla, M.J.; Victoria, M.

    2007-01-01

    Full text of publication follows: The minimum energy configuration of interstitials in the Fe-Cr system, which is the base for the low activation steels being developed in the European fusion reactor materials community, is determined by magnetism. Magnetism also plays a role in the atomic configurations found with increasing Cr content. Results will be presented from a program in which the microstructure evolution produced after heavy ion irradiation in the range from room temperature to 80 K is studied as a function of the Cr content in alloys produced under well controlled conditions, i.e. from high purity elements and with adequate heat treatment. It is expected that these measurements will serve as a matrix for model validation. The first step in such a modeling sequence is being performed by modeling the evolution of displacement cascades in Fe using the Dudarev-Derlet and Mendeleev potentials for Fe and the Caro potential for Fe-Cr. It is of particular interest to study the evolution of high-energy cascades, where an attempt will be made to clarify the role of the evolution of sub-cascades. Kinetic Monte Carlo (kMC) techniques will then be used to simulate the defect evolution. A new parallel kMC code is being implemented for this purpose. (authors)
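
    Since the abstract leans on kinetic Monte Carlo, a minimal residence-time kMC loop is sketched below for orientation. The event list and rates are invented and bear no relation to the Fe-Cr defect model or to the authors' parallel code.

```python
# Minimal residence-time kinetic Monte Carlo loop (illustrative only; the
# events and rates below are made up, not the authors' defect-evolution model).
import math
import random

def kmc(rates, steps, seed=0):
    """rates: dict event -> rate. Returns (elapsed time, event counts)."""
    rng = random.Random(seed)
    events, r = list(rates), list(rates.values())
    total = sum(r)
    t, counts = 0.0, {e: 0 for e in events}
    for _ in range(steps):
        # Pick an event with probability proportional to its rate...
        x, acc, chosen = rng.random() * total, 0.0, events[-1]
        for e, rate in zip(events, r):
            acc += rate
            if x < acc:
                chosen = e
                break
        counts[chosen] += 1
        # ...and advance the clock by an exponential residence time.
        t += -math.log(1.0 - rng.random()) / total
    return t, counts

print(kmc({"vacancy_hop": 5.0, "interstitial_hop": 50.0, "recombination": 0.1}, 10_000))
```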

  6. Bioinformatics algorithm based on a parallel implementation of a machine learning approach using transducers

    International Nuclear Information System (INIS)

    Roche-Lima, Abiel; Thulasiram, Ruppa K

    2012-01-01

    Finite automata in which each transition is augmented with an output label in addition to the familiar input label are known as finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics: weighted finite-state transducers have been proposed for pairwise alignment of DNA and protein sequences as well as for developing kernels for computational biology, and machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on the computation of conditional probabilities, using techniques such as pair-database creation, normalization (with maximum-likelihood normalization) and parameter optimization (with expectation-maximization, EM). These techniques are intrinsically computationally costly, and even more so when applied to bioinformatics, because the database sizes are large. In this work, we describe a parallel implementation of an algorithm to learn conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications, such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were run with the parallel and sequential algorithms on WestGrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable: execution times are reduced considerably when the data size parameter is increased. In another experiment, the precision parameter was varied; here too, the parallel algorithm gave smaller execution times. Finally, the number of threads used to execute the parallel algorithm on the Breeze cluster was varied; speedup increases considerably when more threads are used, but converges for 16 or more threads.

  7. Tempo and mode in human evolution.

    Science.gov (United States)

    McHenry, H M

    1994-01-01

    The quickening pace of paleontological discovery is matched by rapid developments in geochronology. These new data show that the pattern of morphological change in the hominid lineage was mosaic. Adaptations essential to bipedalism appeared early, but some locomotor features changed much later. Relative to the highly derived postcrania of the earliest hominids, the craniodental complex was quite primitive (i.e., like the reconstructed last common ancestor with the African great apes). The pattern of craniodental change among successively younger species of Hominidae implies extensive parallel evolution between at least two lineages in features related to mastication. Relative brain size increased slightly among successively younger species of Australopithecus, expanded significantly with the appearance of Homo, but within early Homo remained at about half the size of Homo sapiens for almost a million years. Many apparent trends in human evolution may actually be due to the accumulation of relatively rapid shifts in successive species. PMID:8041697

  8. Reanalyzing Head et al. (2015): investigating the robustness of widespread p-hacking.

    Science.gov (United States)

    Hartgerink, Chris H J

    2017-01-01

    Head et al. (2015) provided a large collection of p-values that, from their perspective, indicates widespread statistical significance seeking (i.e., p-hacking). This paper inspects this result for robustness. Theoretically, the p-value distribution should be a smooth, decreasing function, but the distribution of reported p-values shows systematically more reported p-values for .01, .02, .03, .04, and .05 than p-values reported to three decimal places, due to apparent tendencies to round p-values to two decimal places. Head et al. (2015) correctly argue that an aggregate p-value distribution could show a bump below .05 when left-skew p-hacking occurs frequently. Moreover, the elimination of p = .045 and p = .05, as done in the original paper, is debatable. Given that eliminating p = .045 is a result of the need for symmetric bins and systematically more p-values are reported to two decimal places than to three decimal places, I did not exclude p = .045 and p = .05. I conducted Fisher's method tests on the p-values between .04 and .05; no evidence of left-skew p-hacking remains when this entire range is examined, which does not support the conclusion that left-skew p-hacking is widespread. Given the far-reaching implications of supposed widespread p-hacking throughout the sciences (Head et al., 2015), it is important that these findings are robust to data analysis choices if the conclusion is to be considered unequivocal. Although no evidence of widespread left-skew p-hacking is found in this reanalysis, this does not mean that there is no p-hacking at all. These results nuance the conclusion by Head et al. (2015), indicating that the results are not robust and that the evidence for widespread left-skew p-hacking is ambiguous at best.
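
    For reference, Fisher's method for combining p-values is available directly in SciPy; the snippet below shows the call on made-up p-values and does not reproduce the paper's data, binning choices, or conclusions.

```python
# Minimal illustration of Fisher's method for combining p-values
# (the p-values here are invented, purely for demonstration).
from scipy import stats

p_values = [0.041, 0.048, 0.012, 0.33, 0.049, 0.27]
statistic, combined_p = stats.combine_pvalues(p_values, method="fisher")
print(f"chi-square statistic = {statistic:.2f}, combined p = {combined_p:.4f}")
```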

  9. Comparative genomics and evolution of eukaryotic phospholipid biosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Lykidis, Athanasios

    2006-12-01

    Phospholipid biosynthetic enzymes produce diverse molecular structures and are often present in multiple forms encoded by different genes. This work utilizes comparative genomics and phylogenetics for exploring the distribution, structure and evolution of phospholipid biosynthetic genes and pathways in 26 eukaryotic genomes. Although the basic structure of the pathways was formed early in eukaryotic evolution, the emerging picture indicates that individual enzyme families followed unique evolutionary courses. For example, choline and ethanolamine kinases and cytidylyltransferases emerged in ancestral eukaryotes, whereas, multiple forms of the corresponding phosphatidyltransferases evolved mainly in a lineage specific manner. Furthermore, several unicellular eukaryotes maintain bacterial-type enzymes and reactions for the synthesis of phosphatidylglycerol and cardiolipin. Also, base-exchange phosphatidylserine synthases are widespread and ancestral enzymes. The multiplicity of phospholipid biosynthetic enzymes has been largely generated by gene expansion in a lineage specific manner. Thus, these observations suggest that phospholipid biosynthesis has been an actively evolving system. Finally, comparative genomic analysis indicates the existence of novel phosphatidyltransferases and provides a candidate for the uncharacterized eukaryotic phosphatidylglycerol phosphate phosphatase.

  10. Interspecific Plastome Recombination Reflects Ancient Reticulate Evolution in Picea (Pinaceae).

    Science.gov (United States)

    Sullivan, Alexis R; Schiffthaler, Bastian; Thompson, Stacey Lee; Street, Nathaniel R; Wang, Xiao-Ru

    2017-07-01

    Plastid sequences are a cornerstone in plant systematic studies and key aspects of their evolution, such as uniparental inheritance and absent recombination, are often treated as axioms. While exceptions to these assumptions can profoundly influence evolutionary inference, detecting them can require extensive sampling, abundant sequence data, and detailed testing. Using advancements in high-throughput sequencing, we analyzed the whole plastomes of 65 accessions of Picea, a genus of ∼35 coniferous forest tree species, to test for deviations from canonical plastome evolution. Using complementary hypothesis and data-driven tests, we found evidence for chimeric plastomes generated by interspecific hybridization and recombination in the clade comprising Norway spruce (P. abies) and 10 other species. Support for interspecific recombination remained after controlling for sequence saturation, positive selection, and potential alignment artifacts. These results reconcile previous conflicting plastid-based phylogenies and strengthen the mounting evidence of reticulate evolution in Picea. Given the relatively high frequency of hybridization and biparental plastid inheritance in plants, we suggest interspecific plastome recombination may be more widespread than currently appreciated and could underlie reported cases of discordant plastid phylogenies. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  11. Punctuated equilibrium in the large-scale evolution of programming languages†

    Science.gov (United States)

    Valverde, Sergi; Solé, Ricard V.

    2015-01-01

    The analogies and differences between biological and cultural evolution have been explored by evolutionary biologists, historians, engineers and linguists alike. Two well-known domains of cultural change are language and technology. Both share some traits relating the evolution of species, but technological change is very difficult to study. A major challenge in our way towards a scientific theory of technological evolution is how to properly define evolutionary trees or clades and how to weight the role played by horizontal transfer of information. Here, we study the large-scale historical development of programming languages, which have deeply marked social and technological advances in the last half century. We analyse their historical connections using network theory and reconstructed phylogenetic networks. Using both data analysis and network modelling, it is shown that their evolution is highly uneven, marked by innovation events where new languages are created out of improved combinations of different structural components belonging to previous languages. These radiation events occur in a bursty pattern and are tied to novel technological and social niches. The method can be extrapolated to other systems and consistently captures the major classes of languages and the widespread horizontal design exchanges, revealing a punctuated evolutionary path. PMID:25994298

  12. Punctuated equilibrium in the large-scale evolution of programming languages.

    Science.gov (United States)

    Valverde, Sergi; Solé, Ricard V

    2015-06-06

    The analogies and differences between biological and cultural evolution have been explored by evolutionary biologists, historians, engineers and linguists alike. Two well-known domains of cultural change are language and technology. Both share some traits relating the evolution of species, but technological change is very difficult to study. A major challenge in our way towards a scientific theory of technological evolution is how to properly define evolutionary trees or clades and how to weight the role played by horizontal transfer of information. Here, we study the large-scale historical development of programming languages, which have deeply marked social and technological advances in the last half century. We analyse their historical connections using network theory and reconstructed phylogenetic networks. Using both data analysis and network modelling, it is shown that their evolution is highly uneven, marked by innovation events where new languages are created out of improved combinations of different structural components belonging to previous languages. These radiation events occur in a bursty pattern and are tied to novel technological and social niches. The method can be extrapolated to other systems and consistently captures the major classes of languages and the widespread horizontal design exchanges, revealing a punctuated evolutionary path. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  13. The role of internal and external constructive processes in evolution

    Science.gov (United States)

    Laland, Kevin; Odling-Smee, John; Turner, Scott

    2014-01-01

    The architects of the Modern Synthesis viewed development as an unfolding of a form already latent in the genes. However, developing organisms play a far more active, constructive role in both their own development and their evolution than the Modern Synthesis proclaims. Here we outline what is meant by constructive processes in development and evolution, emphasizing how constructive development is a shared feature of many of the research developments central to the developing Extended Evolutionary Synthesis. Our article draws out the parallels between constructive physiological processes expressed internally and in the external environment (niche construction), showing how in each case they play important and not fully recognized evolutionary roles by modifying and biasing natural selection. PMID:24591574

  14. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming. Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managing ...

  15. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain critical insight into the parallel I/O ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O hardware ...

  16. Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux

    International Nuclear Information System (INIS)

    Guo Zehua; Tang Xianzhu

    2012-01-01

    In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components with one associated with the parallel thermal energy and the other the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from low to high parallel temperature region.

  17. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States)

    2014-11-01

    The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends indicate that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive amount of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today’s distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive amount of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.
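
    The worklet idea, a stateless per-element operation handed to a data-parallel map rather than a stateful filter owning a whole pipeline, can be caricatured in a few lines. The sketch below is a loose Python analogue for illustration only, not the project's actual framework or API.

```python
# Loose illustration of a stateless "worklet": a per-element operation with no
# pipeline state, applied through a data-parallel map.
from concurrent.futures import ThreadPoolExecutor
import math

def magnitude_worklet(vec):
    """Stateless per-element operation; safe to invoke from many threads."""
    x, y, z = vec
    return math.sqrt(x * x + y * y + z * z)

vectors = [(i, 2.0 * i, 3.0 * i) for i in range(10_000)]
with ThreadPoolExecutor() as pool:
    magnitudes = list(pool.map(magnitude_worklet, vectors))
print(magnitudes[:3])
```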

  18. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
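
    The speedup limitation described here can be illustrated with a toy cost model in which the per-cycle rendezvous cost grows with the number of processors while the history tracking shrinks, so that beyond some processor count the total cycle time rises again. The numbers below are invented for illustration and are not measurements from the paper.

```python
# Toy model (illustrative numbers only) of how a fixed per-cycle rendezvous
# cost caps parallel Monte Carlo speedup and can even make more processors slower.
def cycle_time(n_proc, histories_per_cycle=1e6, t_history=1e-5, t_sync_per_proc=2e-3):
    compute = histories_per_cycle * t_history / n_proc   # perfectly parallel part
    rendezvous = t_sync_per_proc * n_proc                # grows with processor count
    return compute + rendezvous

serial = cycle_time(1)
for p in (1, 8, 32, 64, 128, 256):
    print(f"{p:4d} processors: speedup {serial / cycle_time(p):6.1f}")
```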

  19. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  20. 3-D Hybrid Simulation of Quasi-Parallel Bow Shock and Its Effects on the Magnetosphere

    International Nuclear Information System (INIS)

    Lin, Y.; Wang, X.Y.

    2005-01-01

    A three-dimensional (3-D) global-scale hybrid simulation is carried out for the structure of the quasi-parallel bow shock, in particular the foreshock waves and pressure pulses. The wave evolution and interaction with the dayside magnetosphere are discussed. It is shown that diamagnetic cavities are generated in the turbulent foreshock due to the ion beam plasma interaction, and these compressional pulses lead to strong surface perturbations at the magnetopause and Alfven waves/field line resonance in the magnetosphere

  1. Infrequent widespread microsatellite instability in hepatocellular carcinomas.

    Science.gov (United States)

    Yamamoto, H; Itoh, F; Fukushima, H; Kaneto, H; Sasaki, S; Ohmura, T; Satoh, T; Karino, Y; Endo, T; Toyota, J; Imai, K

    2000-03-01

    Widespread or high-frequency microsatellite instability (MSI) due to defective DNA mismatch repair (MMR) occurs in the majority of hereditary non-polyposis colorectal cancers and a subset of sporadic malignant tumors. The incidence of MSI and the underlying DNA MMR defects have been well characterized in gastrointestinal carcinogenesis, but not in hepatocarcinogenesis. To address the issue, we analyzed 55 Japanese hepatocellular carcinomas using several indicators of DNA MMR defects, such as microsatellite analysis, loss of heterozygosity (LOH) and mutation analysis of MMR genes, methylation of the hMLH1 promoter, and frameshift mutations of mononucleotide repeat sequences within possible target genes. Mutation of the beta2-microglobulin gene, which is presumably involved in MSI-positive tumor cell escape from immune surveillance, was also examined. Some of these analyses were also carried out in 9 human liver cancer cell lines. None of the 3 quasi-monomorphic mononucleotide markers sensitive for MSI (BAT26, BAT25, and BAT34C4) presented shortened unstable alleles in any of the carcinoma, cirrhosis, chronic hepatitis tissues, or cell lines. LOH at MMR genes was infrequent (4.4–7.1%), and no mutations were detected. Neither hMLH1 hypermethylation nor frameshift mutation in the target genes was detected. No mutations were found in beta2-microglobulin. Widespread MSI due to defective DNA MMR appears to play little if any part in Japanese hepatocarcinogenesis.

  2. Parallel random number generator for inexpensive configurable hardware cells

    Science.gov (United States)

    Ackermann, J.; Tangen, U.; Bödekker, B.; Breyer, J.; Stoll, E.; McCaskill, J. S.

    2001-11-01

    A new random number generator (RNG) adapted to parallel processors has been created. This RNG can be implemented with inexpensive hardware cells. The correlation between neighboring cells is suppressed with smart connections. With such connection structures, sequences of pseudo-random numbers are produced. Numerical tests, including a self-avoiding random walk test and the simulation of the order parameter and energy of the 2D Ising model, give no evidence for correlation in the pseudo-random sequences. Because the new random number generator suppresses the correlation between neighboring cells that is usually observed in cellular automaton implementations, it is applicable for extended time simulations. It gives an immense speed-up factor if implemented directly in configurable hardware, and has recently been used for long time simulations of spatially resolved molecular evolution.
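
    A software analogue of the decorrelation goal, giving each parallel "cell" its own statistically independent stream, is shown below using NumPy's seed-spawning facility. This is only an assumed illustration of independent parallel streams, not the hardware cell design of the paper.

```python
# Software analogue of the decorrelation goal (not the hardware cell design):
# spawn statistically independent NumPy streams, one per parallel "cell".
import numpy as np

root = np.random.SeedSequence(2021)
cell_streams = [np.random.default_rng(s) for s in root.spawn(8)]

# Each cell draws from its own stream; the spawned seed sequences are
# constructed so that neighboring streams are statistically independent.
samples = np.array([rng.random(4) for rng in cell_streams])
print(samples.round(3))
```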

  3. Selective sweeps of mitochondrial DNA can drive the evolution of uniparental inheritance.

    Science.gov (United States)

    Christie, Joshua R; Beekman, Madeleine

    2017-08-01

    Although the uniparental (or maternal) inheritance of mitochondrial DNA (mtDNA) is widespread, the reasons for its evolution remain unclear. Two main hypotheses have been proposed: selection against individuals containing different mtDNAs (heteroplasmy) and selection against "selfish" mtDNA mutations. Recently, uniparental inheritance was shown to promote adaptive evolution in mtDNA, potentially providing a third hypothesis for its evolution. Here, we explore this hypothesis theoretically and ask if the accumulation of beneficial mutations provides a sufficient fitness advantage for uniparental inheritance to invade a population in which mtDNA is inherited biparentally. In a deterministic model, uniparental inheritance increases in frequency but cannot replace biparental inheritance if only a single beneficial mtDNA mutation sweeps through the population. When we allow successive selective sweeps of mtDNA, however, uniparental inheritance can replace biparental inheritance. Using a stochastic model, we show that a combination of selection and drift facilitates the fixation of uniparental inheritance (compared to a neutral trait) when there is only a single selective mtDNA sweep. When we consider multiple mtDNA sweeps in a stochastic model, uniparental inheritance becomes even more likely to replace biparental inheritance. Our findings thus suggest that selective sweeps of beneficial mtDNA haplotypes can drive the evolution of uniparental inheritance. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  4. Highly parallel translation of DNA sequences into small molecules.

    Directory of Open Access Journals (Sweden)

    Rebecca M Weisinger

    Full Text Available A large body of in vitro evolution work establishes the utility of biopolymer libraries comprising 10^10 to 10^15 distinct molecules for the discovery of nanomolar-affinity ligands to proteins. Small-molecule libraries of comparable complexity will likely provide nanomolar-affinity small-molecule ligands. Unlike biopolymers, small molecules can offer the advantages of cell permeability, low immunogenicity, metabolic stability, rapid diffusion and inexpensive mass production. It is thought that such desirable in vivo behavior is correlated with the physical properties of small molecules, specifically a limited number of hydrogen bond donors and acceptors, a defined range of hydrophobicity, and most importantly, molecular weights less than 500 Daltons. Creating a collection of 10^10 to 10^15 small molecules that meet these criteria requires the use of hundreds to thousands of diversity elements per step in a combinatorial synthesis of three to five steps. With this goal in mind, we have reported a set of mesofluidic devices that enable DNA-programmed combinatorial chemistry in a highly parallel 384-well plate format. Here, we demonstrate that these devices can translate DNA genes encoding 384 diversity elements per coding position into corresponding small-molecule gene products. This robust and efficient procedure yields small molecule-DNA conjugates suitable for in vitro evolution experiments.

  5. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  6. Vectorization, parallelization and porting of nuclear codes. Vectorization and parallelization. Progress report fiscal 1999

    Energy Technology Data Exchange (ETDEWEB)

    Adachi, Masaaki; Ogasawara, Shinobu; Kume, Etsuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishizuki, Shigeru; Nemoto, Toshiyuki; Kawasaki, Nobuo; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan); Yatake, Yo-ichi [Hitachi Ltd., Tokyo (Japan)

    2001-02-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system, the SX-4 system and the Paragon system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 18 codes in fiscal 1999. These results are reported in 3 parts, i.e., the vectorization and parallelization part on vector processors, the parallelization part on scalar processors and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this part, the vectorization of the Relativistic Molecular Orbital Calculation code RSCAT, the microscopic transport code for high-energy nuclear collisions JAM, the three-dimensional non-steady thermal-fluid analysis code STREAM, the Relativistic Density Functional Theory code RDFT and the High Speed Three-Dimensional Nodal Diffusion code MOSRA-Light on the VPP500 system and the SX-4 system is described. (author)

  7. Changes in Cis-regulatory Elements during Morphological Evolution

    Directory of Open Access Journals (Sweden)

    Yu-Lee Paul

    2012-10-01

    Full Text Available How have animals evolved new body designs (morphological evolution)? This requires explanations both for simple morphological changes, such as differences in pigmentation and hair patterns between different Drosophila populations and species, and also for more complex changes, such as differences in the forelimbs of mice and bats, and the necks of amphibians and reptiles. The genetic changes and pathways involved in these evolutionary steps require identification. Many, though not all, of these events occur by changes in cis-regulatory (enhancer) elements within developmental genes. Enhancers are modular, each affecting expression in only one or a few tissues. Therefore it is possible to add, remove or alter an enhancer without producing changes in multiple tissues, and thereby avoid widespread (pleiotropic) deleterious effects. Ideally, for a given step in morphological evolution it is necessary to identify (i) the change in phenotype, (ii) the changes in gene expression, (iii) the DNA region, enhancer or otherwise, affected, (iv) the mutation involved, (v) the nature of the transcription or other factors that bind to this site. In practice these data are incomplete for most of the published studies upon morphological evolution. Here, the investigations are categorized according to how far these analyses have proceeded.

  8. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  9. Causal evidence between monsoon and evolution of rhizomyine rodents.

    Science.gov (United States)

    López-Antoñanzas, Raquel; Knoll, Fabien; Wan, Shiming; Flynn, Lawrence J

    2015-03-11

    The modern Asian monsoonal systems are currently believed to have originated around the end of the Oligocene following a crucial step of uplift of the Tibetan-Himalayan highlands. Although monsoon possibly drove the evolution of many mammal lineages during the Neogene, no evidence thereof has been provided so far. We examined the evolutionary history of a clade of rodents, the Rhizomyinae, in conjunction with our current knowledge of monsoon fluctuations over time. The macroevolutionary dynamics of rhizomyines were analyzed within a well-constrained phylogenetic framework coupled with biogeographic and evolutionary rate studies. The evolutionary novelties developed by these rodents were surveyed in parallel with the fluctuations of the Indian monsoon so as to evaluate synchroneity and postulate causal relationships. We showed the existence of three drops in biodiversity during the evolution of rhizomyines, all of which reflected elevated extinction rates. Our results demonstrated linkage of monsoon variations with the evolution and biogeography of rhizomyines. Paradoxically, the evolution of rhizomyines was accelerated during the phases of weakening of the monsoons, not of strengthening, most probably because at those intervals forest habitats declined, which triggered extinction and progressive specialization toward a burrowing existence.

  10. Implementation and performance of parallelized elegant

    International Nuclear Information System (INIS)

    Wang, Y.; Borland, M.

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  11. Electrification and Decarbonization: Exploring U.S. Energy Use and Greenhouse Gas Emissions in Scenarios with Widespread Electrification and Power Sector Decarbonization

    Energy Technology Data Exchange (ETDEWEB)

    Steinberg, Daniel [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bielen, Dave [National Renewable Energy Lab. (NREL), Golden, CO (United States); Eichman, Josh [National Renewable Energy Lab. (NREL), Golden, CO (United States); Eurek, Kelly [National Renewable Energy Lab. (NREL), Golden, CO (United States); Logan, Jeff [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); McMillan, Colin [National Renewable Energy Lab. (NREL), Golden, CO (United States); Parker, Andrew [National Renewable Energy Lab. (NREL), Golden, CO (United States); Vimmerstedt, Laura [National Renewable Energy Lab. (NREL), Golden, CO (United States); Wilson, Eric [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-07-19

    Electrification of end-use services in the transportation, buildings, and industrial sectors coupled with decarbonization of electricity generation has been identified as one of the key pathways to achieving a low-carbon future in the United States. By lowering the carbon intensity of the electricity generation and substituting electricity for higher-emissions fossil fuels in end-use sectors, significant reductions in carbon dioxide emissions can be achieved. This report describes a preliminary analysis that examines the potential impacts of widespread electrification on the U.S. energy sector. We develop a set of exploratory scenarios under which electrification is aggressively pursued across all end-use sectors and examine the impacts of achieving these electrification levels on electricity load patterns, total fossil energy consumption, carbon dioxide emissions, and the evolution of the U.S. power system.

  12. Asymmetric evolution and domestication in allotetraploid cotton (Gossypium hirsutum L.

    Directory of Open Access Journals (Sweden)

    Lei Fang

    2017-04-01

    Full Text Available Polyploidy plays a major role in genome evolution, which corresponds to environmental changes over millions of years. The mechanisms of genome evolution, particularly during the process of domestication, are of broad interest in the fields of plant science and crop breeding. Upland cotton is derived from the hybridization and polyploidization of its ancient A and D diploid ancestors. As a result, cotton is a model for polyploid genome evolution and crop domestication. To explore the genomic mysteries of allopolyploid cotton, we investigated asymmetric evolution and domestication in the A and D subgenomes. Interestingly, more structural rearrangements have been characterized in the A subgenome than in the D subgenome. Correspondingly, more transposable elements, a greater number of lost and disrupted genes, and faster evolution have been identified in the A subgenome. In contrast, the centromeric retroelement (RT-domain-related sequence) of tetraploid cotton derived from the D subgenome progenitor was found to have invaded the A subgenome centromeres after allotetraploid formation. Although there is no genome-wide expression bias between the subgenomes, as with expression-level alterations, gene expression bias of homoeologous gene pairs is widespread and varies from tissue to tissue. Further, there are more positively selected genes for fiber yield and quality in the A subgenome and more for stress tolerance in the D subgenome, indicating asymmetric domestication. This review highlights the asymmetric subgenomic evolution and domestication of allotetraploid cotton, providing valuable genomic resources for cotton research and enhancing our understanding of the basis of many other allopolyploids.

  13. Parallelizing the spectral transform method: A comparison of alternative parallel algorithms

    International Nuclear Information System (INIS)

    Foster, I.; Worley, P.H.

    1993-01-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on the sphere and is widely used in global climate modeling. In this paper, we outline different approaches to parallelizing the method and describe experiments that we are conducting to evaluate the efficiency of these approaches on parallel computers. The experiments are conducted using a testbed code that solves the nonlinear shallow water equations on a sphere, but are designed to permit evaluation in the context of a global model. They allow us to evaluate the relative merits of the approaches as a function of problem size and number of processors. The results of this study are guiding ongoing work on PCCM2, a parallel implementation of the Community Climate Model developed at the National Center for Atmospheric Research
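
    The kind of comparison described here, efficiency as a function of problem size and processor count, can be mocked up with a generic timing harness. The sketch below uses Python's multiprocessing module purely for illustration; it is unrelated to the actual testbed or PCCM2 code. It times a fixed batch of grid work with varying worker counts and reports speedup and parallel efficiency.

```python
import time
import numpy as np
from multiprocessing import Pool

def grid_work(n):
    """Stand-in for one block of transform work on an n x n grid section."""
    a = np.random.default_rng(n).random((n, n))
    return float(np.fft.fft2(a).real.sum())

def timed_run(problem_size, workers, tasks=32):
    """Time the same batch of tasks with a given number of worker processes."""
    t0 = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(grid_work, [problem_size] * tasks)
    return time.perf_counter() - t0

if __name__ == "__main__":
    for n in (512, 1024):                      # problem size
        t1 = timed_run(n, 1)                   # serial baseline
        for p in (2, 4):                       # number of worker processes
            tp = timed_run(n, p)
            print(f"n={n:5d} p={p}: speedup={t1/tp:5.2f}  efficiency={t1/(p*tp):5.2f}")
```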

  14. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)
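
    Question (i) can be made concrete with a minimal sketch, assuming Python's standard-library multiprocessing module as the parallel substrate (nothing in the lectures prescribes this): a serial summation loop is adapted to the parallel mode by splitting it into independent partial sums that are combined at the end.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [lo, hi): one independent chunk of the original loop."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Adapt the serial loop sum(range(n)) to a parallel map followed by a reduce."""
    step = (n + workers - 1) // workers
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))   # combine the partial results

if __name__ == "__main__":
    n = 10_000_000
    assert parallel_sum(n) == n * (n - 1) // 2      # matches the serial answer
```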

  15. Parallel processing for fluid dynamics applications

    International Nuclear Information System (INIS)

    Johnson, G.M.

    1989-01-01

    The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several examples are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices

  16. A simple method for the parallel deep sequencing of full influenza A genomes

    DEFF Research Database (Denmark)

    Kampmann, Marie-Louise; Fordyce, Sarah Louise; Avila Arcos, Maria del Carmen

    2011-01-01

    Given the major threat of influenza A to human and animal health, and its ability to evolve rapidly through mutation and reassortment, tools that enable its timely characterization are necessary to help monitor its evolution and spread. For this purpose, deep sequencing can be a very valuable tool....... This study reports a comprehensive method that enables deep sequencing of the complete genomes of influenza A subtypes using the Illumina Genome Analyzer IIx (GAIIx). By using this method, the complete genomes of nine viruses were sequenced in parallel, representing the 2009 pandemic H1N1 virus, H5N1 virus...

  17. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  18. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    Full Text Available The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that have been ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples to illustrate some parallel programming approaches using the Force are also presented.
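
    The two decomposition styles mentioned, parallel looping and functional decomposition, can be contrasted with a short analogy in Python (the Force itself is a Fortran macro preprocessor, so this illustrates the concepts only, not its syntax).

```python
from multiprocessing import Pool

def body(i):
    """Loop decomposition: the same loop body applied to different indices."""
    return i * i

def task_a(n): return sum(range(n))     # functional decomposition:
def task_b(n): return max(range(n))     # unrelated tasks run side by side

if __name__ == "__main__":
    with Pool(4) as pool:
        squares = pool.map(body, range(16))            # parallel loop
        ra = pool.apply_async(task_a, (1_000_000,))    # independent functions
        rb = pool.apply_async(task_b, (1_000_000,))
        print(sum(squares), ra.get(), rb.get())
```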

  19. Crystallization, Microstructure, and Viscosity Evolutions in Lithium Aluminosilicate Glass-Ceramics

    Directory of Open Access Journals (Sweden)

    Qiang Fu

    2016-11-01

    Full Text Available Lithium aluminosilicate glass-ceramics have found widespread commercial success in areas such as consumer products, telescope mirrors, fireplace windows, etc. However, there is still much to learn regarding the fundamental mechanisms of crystallization, especially related to the evolution of viscosity as a function of the crystallization (ceramming) process. In this study, the impact of phase assemblage and microstructure on the viscosity was investigated using high temperature X-ray diffraction (HTXRD), beam bending viscometry (BBV), and transmission electron microscopy (TEM). Results from this study provide a first direct observation of viscosity evolution as a function of ceramming time and temperature. Sharp viscosity increases due to phase separation, nucleation and phase transformation are observed in the BBV measurements. A near-net shape ceramming can be achieved in TiO2-containing compositions by keeping the glass at a high viscosity (>10^9 Pa·s) throughout the whole thermal treatment.

  20. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.
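
    As a hedged illustration of the kind of I/O knowledge a conventional Unix-like interface hides, the sketch below (plain Python, not Galley's actual interface, which the record does not specify) has each worker open the same file and read only its own disjoint byte range at a computed offset instead of funnelling all data through a single reader.

```python
import os
from multiprocessing import Pool

FILENAME = "data.bin"   # hypothetical input file created below for the demo

def read_my_block(args):
    """Each worker seeks to its own offset and reads a disjoint block."""
    rank, nworkers = args
    size = os.path.getsize(FILENAME)
    block = (size + nworkers - 1) // nworkers
    offset = rank * block
    length = max(0, min(block, size - offset))
    with open(FILENAME, "rb") as f:
        f.seek(offset)
        data = f.read(length)
    return rank, len(data)

if __name__ == "__main__":
    with open(FILENAME, "wb") as f:          # create a small demo file (1 MiB)
        f.write(os.urandom(1 << 20))
    nworkers = 4
    with Pool(nworkers) as pool:
        print(pool.map(read_my_block, [(r, nworkers) for r in range(nworkers)]))
```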

  1. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
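
    For readers unfamiliar with the Fortran 90 array syntax, FORALL statement and WHERE construct that PDDP builds on, the NumPy snippet below (an analogy only; PDDP itself is a Fortran preprocessor) shows the same data-parallel style of whole-array expressions and masked assignment.

```python
import numpy as np

a = np.arange(12.0).reshape(3, 4)
b = np.ones_like(a)

# Fortran 90 array syntax:  c = a + 2.0*b        (whole-array expression)
c = a + 2.0 * b

# FORALL (i=1:3, j=1:4) d(i,j) = a(i,j)**2       (elementwise, no loop order implied)
d = a ** 2

# WHERE (c > 5.0) c = 0.0                        (masked assignment)
c = np.where(c > 5.0, 0.0, c)

print(c)
print(d)
```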

  2. Distributed Memory Parallel Computing with SEAWAT

    Science.gov (United States)

    Verkaik, J.; Huizer, S.; van Engelen, J.; Oude Essink, G.; Ram, R.; Vuik, K.

    2017-12-01

    Fresh groundwater reserves in coastal aquifers are threatened by sea-level rise, extreme weather conditions, increasing urbanization and associated groundwater extraction rates. To counteract these threats, accurate high-resolution numerical models are required to optimize the management of these precious reserves. The major model drawbacks are long run times and large memory requirements, limiting the predictive power of these models. Distributed memory parallel computing is an efficient technique for reducing run times and memory requirements, where the problem is divided over multiple processor cores. A new Parallel Krylov Solver (PKS) for SEAWAT is presented. PKS has recently been applied to MODFLOW and includes Conjugate Gradient (CG) and Biconjugate Gradient Stabilized (BiCGSTAB) linear accelerators. Both accelerators are preconditioned by an overlapping additive Schwarz preconditioner in a way that: a) subdomains are partitioned using Recursive Coordinate Bisection (RCB) load balancing, b) each subdomain uses local memory only and communicates with other subdomains by Message Passing Interface (MPI) within the linear accelerator, c) it is fully integrated in SEAWAT. Within SEAWAT, the PKS-CG solver replaces the Preconditioned Conjugate Gradient (PCG) solver for solving the variable-density groundwater flow equation and the PKS-BiCGSTAB solver replaces the Generalized Conjugate Gradient (GCG) solver for solving the advection-diffusion equation. PKS supports the third-order Total Variation Diminishing (TVD) scheme for computing advection. Benchmarks were performed on the Dutch national supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 128 cores, for a synthetic 3D Henry model (100 million cells) and the real-life Sand Engine model (~10 million cells). The Sand Engine model was used to investigate the potential effect of the long-term morphological evolution of a large sand replenishment and climate change on fresh groundwater resources
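
    To make the "Krylov accelerator plus additive Schwarz preconditioner" idea concrete, here is a serial, single-core sketch (NumPy only, not the PKS code): a hand-written conjugate gradient iteration preconditioned by non-overlapping block solves, the simplest Jacobi-like relative of the overlapping Schwarz preconditioner described above.

```python
import numpy as np

def block_precondition(r, blocks):
    """Additive, non-overlapping block solves: z_i = inv(A_ii) r_i on each subdomain."""
    z = np.zeros_like(r)
    for idx, Aii_inv in blocks:
        z[idx] = Aii_inv @ r[idx]
    return z

def pcg(A, b, blocks, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradient for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = block_precondition(r, blocks)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = block_precondition(r, blocks)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

if __name__ == "__main__":
    n, nb = 64, 4                        # problem size and number of "subdomains"
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))  # 1-D Laplacian as a stand-in for the flow matrix
    b = np.ones(n)
    step = n // nb
    blocks = [(slice(i, i + step), np.linalg.inv(A[i:i + step, i:i + step]))
              for i in range(0, n, step)]
    x = pcg(A, b, blocks)
    print(np.linalg.norm(A @ x - b))     # residual norm, should be near zero
```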

  3. Design considerations for parallel graphics libraries

    Science.gov (United States)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  4. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities...... for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler’s ability to generate loop-parallel code. We use this compilation system to modify two sequential...... benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...

  5. Karyotype evolution in Phalaris (Poaceae): The role of reductional dysploidy, polyploidy and chromosome alteration in a wide-spread and diverse genus.

    Science.gov (United States)

    Winterfeld, Grit; Becher, Hannes; Voshell, Stephanie; Hilu, Khidir; Röser, Martin

    2018-01-01

    Karyotype characteristics can provide valuable information on genome evolution and speciation, in particular in taxa with varying basic chromosome numbers and ploidy levels. Due to its worldwide distribution, remarkable variability in morphological traits and the fact that ploidy change plays a key role in its evolution, the canary grass genus Phalaris (Poaceae) is an excellent study system to investigate the role of chromosomal changes in species diversification and expansion. Phalaris comprises diploid species with two basic chromosome numbers of x = 6 and 7 as well as polyploids based on x = 7. To identify distinct karyotype structures and to trace chromosome evolution within the genus, we apply fluorescence in situ hybridisation (FISH) of 5S and 45S rDNA probes in four diploid and four tetraploid Phalaris species of both basic numbers. The data agree with a dysploid reduction from x = 7 to x = 6 as the result of reciprocal translocations between three chromosomes of an ancestor with a diploid chromosome complement of 2n = 14. We recognize three different genomes in the genus: (1) the exclusively Mediterranean genome A based on x = 6, (2) the cosmopolitan genome B based on x = 7 and (3) a genome C based on x = 7 and with a distribution in the Mediterranean and the Middle East. Both auto- and allopolyploidy of genomes B and C are suggested for the formation of tetraploids. The chromosomal divergence observed in Phalaris can be explained by the occurrence of dysploidy, the emergence of three different genomes, and the chromosome rearrangements accompanied by karyotype change and polyploidization. Mapping the recognized karyotypes on the existing phylogenetic tree suggests that genomes A and C are restricted to sections Phalaris and Bulbophalaris, respectively, while genome B occurs across all taxa with x = 7.

  6. Karyotype evolution in Phalaris (Poaceae): The role of reductional dysploidy, polyploidy and chromosome alteration in a wide-spread and diverse genus.

    Directory of Open Access Journals (Sweden)

    Grit Winterfeld

    Full Text Available Karyotype characteristics can provide valuable information on genome evolution and speciation, in particular in taxa with varying basic chromosome numbers and ploidy levels. Due to its worldwide distribution, remarkable variability in morphological traits and the fact that ploidy change plays a key role in its evolution, the canary grass genus Phalaris (Poaceae) is an excellent study system to investigate the role of chromosomal changes in species diversification and expansion. Phalaris comprises diploid species with two basic chromosome numbers of x = 6 and 7 as well as polyploids based on x = 7. To identify distinct karyotype structures and to trace chromosome evolution within the genus, we apply fluorescence in situ hybridisation (FISH) of 5S and 45S rDNA probes in four diploid and four tetraploid Phalaris species of both basic numbers. The data agree with a dysploid reduction from x = 7 to x = 6 as the result of reciprocal translocations between three chromosomes of an ancestor with a diploid chromosome complement of 2n = 14. We recognize three different genomes in the genus: (1) the exclusively Mediterranean genome A based on x = 6, (2) the cosmopolitan genome B based on x = 7 and (3) a genome C based on x = 7 and with a distribution in the Mediterranean and the Middle East. Both auto- and allopolyploidy of genomes B and C are suggested for the formation of tetraploids. The chromosomal divergence observed in Phalaris can be explained by the occurrence of dysploidy, the emergence of three different genomes, and the chromosome rearrangements accompanied by karyotype change and polyploidization. Mapping the recognized karyotypes on the existing phylogenetic tree suggests that genomes A and C are restricted to sections Phalaris and Bulbophalaris, respectively, while genome B occurs across all taxa with x = 7.

  7. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  8. Structural parallels between terrestrial microbialites and Martian sediments: are all cases of `Pareidolia'?

    Science.gov (United States)

    Rizzo, Vincenzo; Cantasano, Nicola

    2017-10-01

    The study analyses possible parallels between known microbialite structures and a set of similar settings selected by a systematic investigation of the wide record of images shot by NASA rovers. Terrestrial cases involve structures due both to bio-mineralization processes and to those induced by bacterial metabolism, which occur in a dimensional field larger than 0.1 mm, at micro, meso and macro scales. The study highlights the occurrence in Martian sediments of widespread structures such as microspherules, often organized into higher-order settings. Such structures also occur in terrestrial stromatolites in a great variety of `Microscopic Induced Sedimentary Structures', such as voids, gas domes and layer deformations of microbial mats. We present a suite of analogies compelling enough (i.e. at different scales of morphological, structural and conceptual relevance) to make the case that the similarities between Martian sediment structures and terrestrial microbialites are not all cases of `Pareidolia'.

  9. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  10. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  11. Evolution of morphological and climatic adaptations in Veronica L. (Plantaginaceae

    Directory of Open Access Journals (Sweden)

    Jian-Cheng Wang

    2016-08-01

    Full Text Available Perennials and annuals apply different strategies to adapt to adverse environments, based on ‘tolerance’ and ‘avoidance’, respectively. To understand lifespan evolution and its impact on plant adaptability, we carried out a comparative study of perennials and annuals in the genus Veronica from a phylogenetic perspective. The results showed that the ancestors of the genus Veronica were likely perennial plants. The annual life history of Veronica has evolved multiple times, and subtrees with more annual species have a higher substitution rate. Annuals can adapt to more xeric habitats than perennials. This indicates that annuals are more drought-resistant than their perennial relatives. Due to adaptation to similar selective pressures, parallel evolution occurs in morphological characters among annual species of Veronica.

  12. Giant hub Src and Syk tyrosine kinase thermodynamic profiles recapitulate evolution

    Science.gov (United States)

    Phillips, J. C.

    2017-10-01

    Thermodynamic scaling theory, previously applied mainly to small proteins, here analyzes quantitative evolution of the titled functional network giant hub enzymes. The broad domain structure identified homologically is confirmed hydropathically using amino acid sequences only. The most surprising results concern the evolution of the tyrosine kinase globular surface roughness from avians to mammals, which is first order, compared to the evolution within mammals from rodents to humans, which is second order. The mystery of the unique amide terminal region of proto oncogene tyrosine protein kinase is resolved by the discovery there of a rare hydroneutral septad targeting cluster, which is paralleled by an equally rare octad catalytic cluster in tyrosine kinase in humans and a few other species (cat and dog). These results, which go far towards explaining why these proteins are among the largest giant hubs in protein interaction networks, use no adjustable parameters.

  13. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies of 95.1% by the vector parallel calculation on 16 processors with a 1152x1152 grid and 88.6% by the scalar parallel calculation on 100 processors with an 800x800 grid are obtained. The performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors up to 100 processors. We also analyze the scalability in keeping the available memory size of one processor element at maximum. Our performance model predicts that the execution time of the vector parallel code increases about 3% on 500 processors. Although the 1-D domain decomposition method has in general a drawback in the interprocessor communication, the vector parallel LB code is still suitable for the large scale and/or high resolution simulations. (author)
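
    The communication pattern behind a 1-D domain decomposition is a halo (ghost-row) exchange with the two neighbouring subdomains. The sketch below assumes the mpi4py package and NumPy, neither of which is used by the original VPP500/Paragon codes, and shows only that exchange step.

```python
# Minimal halo exchange for a 1-D domain decomposition (run with: mpirun -n 4 python script.py).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx_local, ny = 32, 128                       # slab of rows owned by this rank
f = np.zeros((nx_local + 2, ny))             # +2 ghost rows (one above, one below)
f[1:-1, :] = rank                            # dummy interior data

up   = (rank + 1) % size                     # periodic neighbours
down = (rank - 1) % size

# Send my last interior row up, receive my lower ghost row from below.
comm.Sendrecv(sendbuf=f[-2, :].copy(), dest=up,
              recvbuf=f[0, :], source=down)
# Send my first interior row down, receive my upper ghost row from above.
comm.Sendrecv(sendbuf=f[1, :].copy(), dest=down,
              recvbuf=f[-1, :], source=up)

print(rank, f[0, 0], f[-1, 0])               # ghost rows now hold neighbour data
```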

  14. Parallelization of 2-D lattice Boltzmann codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies of 95.1% by the vector parallel calculation on 16 processors with a 1152x1152 grid and 88.6% by the scalar parallel calculation on 100 processors with an 800x800 grid are obtained. The performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors up to 100 processors. We also analyze the scalability in keeping the available memory size of one processor element at maximum. Our performance model predicts that the execution time of the vector parallel code increases about 3% on 500 processors. Although the 1-D domain decomposition method has in general a drawback in the interprocessor communication, the vector parallel LB code is still suitable for the large scale and/or high resolution simulations. (author).

  15. Explorations of the implementation of a parallel IDW interpolation algorithm in a Linux cluster-based parallel GIS

    Science.gov (United States)

    Huang, Fang; Liu, Dingsheng; Tan, Xicheng; Wang, Jian; Chen, Yunping; He, Binbin

    2011-04-01

    To design and implement an open-source parallel GIS (OP-GIS) based on a Linux cluster, the parallel inverse distance weighting (IDW) interpolation algorithm has been chosen as an example to explore the working model and the principle of algorithm parallel pattern (APP), one of the parallelization patterns for OP-GIS. Based on an analysis of the serial IDW interpolation algorithm of GRASS GIS, this paper has proposed and designed a specific parallel IDW interpolation algorithm, incorporating both single process, multiple data (SPMD) and master/slave (M/S) programming modes. The main steps of the parallel IDW interpolation algorithm are: (1) the master node packages the related information, and then broadcasts it to the slave nodes; (2) each node calculates its assigned data extent along one row using the serial algorithm; (3) the master node gathers the data from all nodes; and (4) iterations continue until all rows have been processed, after which the results are outputted. According to the experiments performed in the course of this work, the parallel IDW interpolation algorithm can attain an efficiency greater than 0.93 compared with similar algorithms, which indicates that the parallel algorithm can greatly reduce processing time and maximize speed and performance.
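
    The four master/slave steps listed above can be mimicked in a few lines of Python (a multiprocessing stand-in, not the GRASS-based OP-GIS implementation): the "master" hands the sample points to the pool, each "slave" interpolates one output row, and the master gathers the rows in order.

```python
import numpy as np
from functools import partial
from multiprocessing import Pool

def idw_row(row, xs, ys, zs, ncols, power=2.0):
    """Interpolate one output row by inverse distance weighting."""
    out = np.empty(ncols)
    for col in range(ncols):
        d2 = (xs - col) ** 2 + (ys - row) ** 2
        if d2.min() == 0.0:                       # exactly on a sample point
            out[col] = zs[d2.argmin()]
        else:
            w = 1.0 / d2 ** (power / 2.0)
            out[col] = (w * zs).sum() / w.sum()
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs, ys = rng.uniform(0, 100, 50), rng.uniform(0, 100, 50)   # sample locations
    zs = rng.uniform(0, 10, 50)                                 # sample values
    nrows = ncols = 100
    work = partial(idw_row, xs=xs, ys=ys, zs=zs, ncols=ncols)   # "broadcast" the samples
    with Pool(4) as pool:                                       # workers each take rows
        grid = np.vstack(pool.map(work, range(nrows)))          # master gathers in order
    print(grid.shape)
```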

  16. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
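
    Two of the issues named here, load balancing and reproducibility, can be illustrated with a toy Monte Carlo estimate (plain Python/NumPy, unrelated to the actual reactor codes): the total history count is split evenly across workers, and each worker draws from an independently spawned, deterministic random stream, so the result does not depend on scheduling order.

```python
import numpy as np
from multiprocessing import Pool

def worker(args):
    """Estimate pi/4 from n_histories samples using an independent RNG stream."""
    seed_seq, n_histories = args
    rng = np.random.default_rng(seed_seq)
    x, y = rng.random(n_histories), rng.random(n_histories)
    return np.count_nonzero(x * x + y * y < 1.0)

if __name__ == "__main__":
    n_total, n_workers = 4_000_000, 8
    streams = np.random.SeedSequence(12345).spawn(n_workers)   # reproducible streams
    chunk = n_total // n_workers                                # even load balance
    with Pool(n_workers) as pool:
        hits = pool.map(worker, [(s, chunk) for s in streams])
    print(4.0 * sum(hits) / (chunk * n_workers))                # ~3.1416, same every run
```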

  17. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSC library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSC during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSC framework.
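
    A Jacobian-free Newton-Krylov solve can be sketched with SciPy's newton_krylov routine (a toy analogue of the NKS solvers above; the Schwarz preconditioning, domain decomposition and parallelism are omitted): the Krylov inner iteration touches the Jacobian only through matrix-vector products approximated from finite differences of the residual.

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Residual of a 1-D nonlinear reaction-diffusion problem: -u'' + u**3 = 1, u=0 at ends."""
    n = u.size
    h = 1.0 / (n + 1)
    r = u ** 3 - 1.0
    r[0]    += (2 * u[0] - u[1]) / h**2
    r[-1]   += (2 * u[-1] - u[-2]) / h**2
    r[1:-1] += (2 * u[1:-1] - u[2:] - u[:-2]) / h**2
    return r

u0 = np.zeros(100)
u = newton_krylov(residual, u0, f_tol=1e-8)   # Krylov inner iterations, FD matvecs
print(np.abs(residual(u)).max())
```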

  18. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    textabstractIn the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  19. Evolution of the salivary apyrases of blood-feeding arthropods.

    Science.gov (United States)

    Hughes, Austin L

    2013-09-15

    Phylogenetic analyses of three families of arthropod apyrases were used to reconstruct the evolutionary relationships of salivary-expressed apyrases, which have an anti-coagulant function in blood-feeding arthropods. Members of the 5'-nucleotidase family were recruited for salivary expression in blood-feeding species at least five separate times in the history of arthropods, while members of the Cimex-type apyrase family have been recruited at least twice. In spite of these independent events of recruitment for salivary function, neither of these families showed evidence of convergent amino acid sequence evolution in salivary-expressed members. On the contrary, in the 5'-nucleotidase family, salivary-expressed proteins conserved ancestral amino acid residues to a significantly greater extent than related proteins without salivary function, implying parallel evolution by conservation of ancestral characters. This unusual pattern of sequence evolution suggests the hypothesis that purifying selection favoring conservation of ancestral residues is particularly strong in salivary-expressed members of the 5'-nucleotidase family of arthropods because of constraints arising from expression within the vertebrate host. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Detecting regular sound changes in linguistics as events of concerted evolution.

    Science.gov (United States)

    Hruschka, Daniel J; Branford, Simon; Smith, Eric D; Wilkins, Jon; Meade, Andrew; Pagel, Mark; Bhattacharya, Tanmoy

    2015-01-05

    Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  1. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and relative to scalar calculations it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively on the CRAY C-90

  2. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  3. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....

  4. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....

  5. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  6. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  7. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  8. A widespread allergic reaction to black tattoo ink caused by laser treatment.

    Science.gov (United States)

    Bernstein, Eric F

    2015-02-01

    This is the first reported case of a local and widespread reaction to black tattoo ink in a 39-year-old woman, induced by Q-switched laser treatment. A 39-year-old woman was treated with the Q-switched Nd:YAG laser for removal of a decorative tattoo on her lower back. Subsequent to laser treatment, a severe, widespread allergic reaction developed within and surrounding the treated tattoo. Reactions to tattoo ink should be considered, in addition to reactions to topical antibiotics or wound dressings, following laser treatment of tattoos. © 2015 Wiley Periodicals, Inc.

  9. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly ease parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a break-through in parallel programming technique.

  10. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The greater processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
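
    The component that parallelizes most naturally in a Davidon-Fletcher-Powell-style quasi-Newton method is the finite-difference gradient, since each function evaluation is independent. The sketch below uses Python multiprocessing rather than transputers, and an assumed test objective (Rosenbrock); neither comes from the paper.

```python
import numpy as np
from multiprocessing import Pool

def f(x):
    """Assumed test objective (Rosenbrock); the paper's problems are not specified."""
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

def partial_derivative(args):
    """Forward-difference derivative of f along one coordinate."""
    x, i, h = args
    e = np.zeros_like(x)
    e[i] = h
    return (f(x + e) - f(x)) / h

def parallel_gradient(x, h=1e-6, workers=4):
    """Evaluate the n independent difference quotients concurrently."""
    with Pool(workers) as pool:
        return np.array(pool.map(partial_derivative, [(x, i, h) for i in range(x.size)]))

if __name__ == "__main__":
    x = np.full(8, 1.2)
    print(parallel_gradient(x))   # compare with the analytic gradient if desired
```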

  11. Programming massively parallel processors a hands-on approach

    CERN Document Server

    Kirk, David B

    2010-01-01

    Programming Massively Parallel Processors discusses basic concepts about parallel programming and GPU architecture. "Massively parallel" refers to the use of a large number of processors to perform a set of computations in a coordinated parallel way. The book details various techniques for constructing parallel programs. It also discusses the development process, performance level, floating-point format, parallel patterns, and dynamic parallelism. The book serves as a teaching guide where parallel programming is the main topic of the course. It builds on the basics of C programming for CUDA, a parallel programming environment that is supported on NVIDIA GPUs. Composed of 12 chapters, the book begins with basic information about the GPU as a parallel computer source. It also explains the main concepts of CUDA, data parallelism, and the importance of memory access efficiency using CUDA. The target audience of the book is graduate and undergraduate students from all science and engineering disciplines who ...

  12. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  13. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  14. Evolution of Lower Brachyceran Flies (Diptera) and Their Adaptive Radiation with Angiosperms

    Directory of Open Access Journals (Sweden)

    Bo Wang

    2017-04-01

    Full Text Available The Diptera (true flies) is one of the most species-abundant orders of Insecta, and it is also among the most important flower-visiting insects. Dipteran fossils are abundant in the Mesozoic, especially in the Late Jurassic and Early Cretaceous. Here, we review the fossil record and early evolution of some Mesozoic lower brachyceran flies together with new records in Burmese amber, including Tabanidae, Nemestrinidae, Bombyliidae, Eremochaetidae, and Zhangsolvidae. The fossil records reveal that some flower-visiting groups had diversified during the mid-Cretaceous, consistent with the rise of angiosperms to widespread floristic dominance. These brachyceran groups played an important role in the origin of co-evolutionary relationships with basal angiosperms. Moreover, the rise of angiosperms not only improved the diversity of flower-visiting flies, but also advanced the turnover and evolution of other specialized flies.

  15. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R; Ratterman, Joseph D; Smith, Brian E

    2014-11-11

    Endpoint-based parallel data processing with non-blocking collective instructions in a PAMI of a parallel computer is disclosed. The PAMI is composed of data communications endpoints, each including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task. The compute nodes are coupled for data communications through the PAMI. The parallel application establishes a data communications geometry specifying a set of endpoints that are used in collective operations of the PAMI by associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.

  16. Insights into hominid evolution from the gorilla genome sequence

    Science.gov (United States)

    Scally, Aylwyn; Dutheil, Julien Y.; Hillier, LaDeana W.; Jordan, Greg E.; Goodhead, Ian; Herrero, Javier; Hobolth, Asger; Lappalainen, Tuuli; Mailund, Thomas; Marques-Bonet, Tomas; McCarthy, Shane; Montgomery, Stephen H.; Schwalie, Petra C.; Tang, Y. Amy; Ward, Michelle C.; Xue, Yali; Yngvadottir, Bryndis; Alkan, Can; Andersen, Lars N.; Ayub, Qasim; Ball, Edward V.; Beal, Kathryn; Bradley, Brenda J.; Chen, Yuan; Clee, Chris M.; Fitzgerald, Stephen; Graves, Tina A.; Gu, Yong; Heath, Paul; Heger, Andreas; Karakoc, Emre; Kolb-Kokocinski, Anja; Laird, Gavin K.; Lunter, Gerton; Meader, Stephen; Mort, Matthew; Mullikin, James C.; Munch, Kasper; O’Connor, Timothy D.; Phillips, Andrew D.; Prado-Martinez, Javier; Rogers, Anthony S.; Sajjadian, Saba; Schmidt, Dominic; Shaw, Katy; Simpson, Jared T.; Stenson, Peter D.; Turner, Daniel J.; Vigilant, Linda; Vilella, Albert J.; Whitener, Weldon; Zhu, Baoli; Cooper, David N.; de Jong, Pieter; Dermitzakis, Emmanouil T.; Eichler, Evan E.; Flicek, Paul; Goldman, Nick; Mundy, Nicholas I.; Ning, Zemin; Odom, Duncan T.; Ponting, Chris P.; Quail, Michael A.; Ryder, Oliver A.; Searle, Stephen M.; Warren, Wesley C.; Wilson, Richard K.; Schierup, Mikkel H.; Rogers, Jane; Tyler-Smith, Chris; Durbin, Richard

    2012-01-01

    Summary Gorillas are humans’ closest living relatives after chimpanzees, and are of comparable importance for the study of human origins and evolution. Here we present the assembly and analysis of a genome sequence for the western lowland gorilla, and compare the whole genomes of all extant great ape genera. We propose a synthesis of genetic and fossil evidence consistent with placing the human-chimpanzee and human-chimpanzee-gorilla speciation events at approximately 6 and 10 million years ago (Mya). In 30% of the genome, gorilla is closer to human or chimpanzee than the latter are to each other; this is rarer around coding genes, indicating pervasive selection throughout great ape evolution, and has functional consequences in gene expression. A comparison of protein coding genes reveals approximately 500 genes showing accelerated evolution on each of the gorilla, human and chimpanzee lineages, and evidence for parallel acceleration, particularly of genes involved in hearing. We also compare the western and eastern gorilla species, estimating an average sequence divergence time 1.75 million years ago, but with evidence for more recent genetic exchange and a population bottleneck in the eastern species. The use of the genome sequence in these and future analyses will promote a deeper understanding of great ape biology and evolution. PMID:22398555

  17. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of this research is tooling to support the development of parallel programs in C/C++. Methods and software that automate the process of designing parallel applications are proposed.

  18. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

  19. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges, scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and with reasonable fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.
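
    As a rough illustration of the kind of coarse model such a simulator can be built on, the sketch below estimates the completion time of a read striped round-robin over a configurable number of storage servers. The bandwidth, latency, and striping parameters are invented for the example and are not taken from the paper.

```python
# Hedged sketch: a toy striped-read model, not the simulator described in the paper.
def read_time(file_size_mb, stripe_mb, n_servers, bw_mb_s=100.0, latency_s=0.001):
    """Estimate the time to read a striped file when each server serves its stripes serially."""
    n_stripes = -(-file_size_mb // stripe_mb)         # ceiling division
    per_server = [0.0] * n_servers
    for i in range(int(n_stripes)):
        per_server[i % n_servers] += latency_s + stripe_mb / bw_mb_s
    return max(per_server)                            # the request completes when the slowest server finishes

for servers in (1, 4, 16, 64):
    print(f"{servers:>3} servers: {read_time(1024, 4, servers):7.3f} s")
```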

  20. The early thermal evolution of Mars

    Science.gov (United States)

    Bhatia, G. K.; Sahijpal, S.

    2016-01-01

    Hf-W isotopic systematics of Martian meteorites have provided evidence for the early accretion and rapid core formation of Mars. We present the results of numerical simulations performed to study the early thermal evolution and planetary-scale differentiation of Mars. The simulations are confined to the initial 50 Myr of the formation of the solar system. The accretion energy produced during the growth of Mars and the decay energy of the short-lived radionuclides 26Al and 60Fe and the long-lived nuclides 40K, 235U, 238U, and 232Th are incorporated as the heat sources for the thermal evolution of Mars. During the core-mantle differentiation of Mars, the molten metallic blobs were numerically moved toward the center using Stokes' law, with a descent velocity that depends on the local acceleration due to gravity. Apart from the accretion and radioactive heat energies, the gravitational energy produced during the differentiation of Mars and the associated heat transfer is also parametrically incorporated in the present work to assess its contribution to the early thermal evolution of Mars. We conclude that the accretion energy alone cannot produce widespread melting and differentiation of Mars even with an efficient consumption of the accretion energy. This makes 26Al the prime source for the heating and planetary-scale differentiation of Mars. We demonstrate a rapid accretion and core-mantle differentiation of Mars within the initial ~1.5 Myr. This is consistent with the chronological records of Martian meteorites.
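
    The Stokes' law descent mentioned above reduces to a one-line formula. The sketch below evaluates it for an illustrative iron blob; the radius, density contrast, melt viscosity, and Martian gravity value are assumptions chosen only to show the calculation, not parameters from the simulations.

```python
# Hedged sketch: Stokes settling velocity v = 2 * delta_rho * g * r^2 / (9 * mu).
def stokes_velocity(radius_m, delta_rho_kg_m3, viscosity_pa_s, g_m_s2=3.7):
    """Terminal descent velocity of a small sphere sinking through a viscous medium."""
    return 2.0 * delta_rho_kg_m3 * g_m_s2 * radius_m**2 / (9.0 * viscosity_pa_s)

# e.g. a 1 cm metallic blob, ~4000 kg/m^3 denser than the silicate melt, in a 1 Pa s magma
v = stokes_velocity(radius_m=0.01, delta_rho_kg_m3=4000.0, viscosity_pa_s=1.0)
print(f"descent velocity ~ {v:.2f} m/s")
```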

  1. Professional Parallel Programming with C# Master Parallel Extensions with NET 4

    CERN Document Server

    Hillar, Gastón

    2010-01-01

    Expert guidance for those programming today's dual-core processor PCs. As PC processors grow from one or two to as many as eight cores, there is an urgent need for programmers to master concurrent programming. This book dives deep into the latest technologies available to programmers for creating professional parallel applications using C#, .NET 4, and Visual Studio 2010. The book covers task-based programming, coordination data structures, PLINQ, thread pools, the asynchronous programming model, and more. It also teaches other parallel programming techniques, such as SIMD and vectorization.

  2. Parallelization for first principles electronic state calculation program

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Oguchi, Tamio.

    1997-03-01

    In this report we study the parallelization of a first-principles electronic state calculation program. The target machines are the NEC SX-4 for shared-memory parallelization and the FUJITSU VPP300 for distributed-memory parallelization. The features of each parallel machine are surveyed, and parallelization methods suitable for each are proposed. It is shown that a speedup of 1.60 is achieved with 2-CPU parallelization on the SX-4 and a speedup of 4.97 with 12-PE parallelization on the VPP300. (author)
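
    For reference, those figures correspond to parallel efficiencies (speedup divided by processor count) of roughly 80% and 41%; a minimal check:

```python
# Quick arithmetic check of the reported speedups.
for machine, speedup, nproc in [("NEC SX-4", 1.60, 2), ("FUJITSU VPP300", 4.97, 12)]:
    print(f"{machine}: efficiency = {speedup / nproc:.0%}")
```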

  3. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    The work in the field of parallel processing has developed around several numerical Monte Carlo simulations addressing basic and applied problems of nuclear and particle physics. For applications using the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking from low energies down to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field concerns simulations for nuclear medicine applications such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness) and methods for dosimetric calculations. These calculations are particularly suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Further work in the same area covers the simulation of electron channelling in crystals and of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for monitoring natural and artificial radioactivity in the environment.

  4. Neoclassical parallel flow calculation in the presence of external parallel momentum sources in Heliotron J

    Energy Technology Data Exchange (ETDEWEB)

    Nishioka, K.; Nakamura, Y. [Graduate School of Energy Science, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Nishimura, S. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Lee, H. Y. [Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); Kobayashi, S.; Mizuuchi, T.; Nagasaki, K.; Okada, H.; Minami, T.; Kado, S.; Yamamoto, S.; Ohshima, S.; Konoshima, S.; Sano, F. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan)

    2016-03-15

    A moment approach for calculating neoclassical transport in non-axisymmetric torus plasmas composed of multiple ion species is extended to include external parallel momentum sources due to unbalanced tangential neutral beam injections (NBIs). The momentum sources included in the parallel momentum balance are calculated from the collision operators of background particles with fast ions. This method is applied to clarify the physical mechanism of the neoclassical parallel ion flows and the multi-ion-species effect on them in Heliotron J NBI plasmas. It is found that the parallel ion flow can be determined by the balance between the parallel viscosity and the external momentum source in the region where the external source is much larger than the thermodynamic-force-driven source in collisional plasmas. This is because the friction between C{sup 6+} and D{sup +} prevents a large difference between the C{sup 6+} and D{sup +} flow velocities in such plasmas. The C{sup 6+} flow velocities, which are measured by the charge exchange recombination spectroscopy system, are numerically evaluated with this method. It is shown that the experimentally measured C{sup 6+} impurity flow velocities do not clearly contradict the neoclassical estimates, and the dependence of the parallel flow velocities on the magnetic field ripples is consistent between the two.

  5. Reduce, reuse, and recycle: developmental evolution of trait diversification.

    Science.gov (United States)

    Preston, Jill C; Hileman, Lena C; Cubas, Pilar

    2011-03-01

    A major focus of evolutionary developmental (evo-devo) studies is to determine the genetic basis of variation in organismal form and function, both of which are fundamental to biological diversification. Pioneering work on metazoan and flowering plant systems has revealed conserved sets of genes that underlie the bauplan of organisms derived from a common ancestor. However, the extent to which variation in the developmental genetic toolkit mirrors variation at the phenotypic level is an active area of research. Here we explore evidence from the angiosperm evo-devo literature supporting the frugal use of genes and genetic pathways in the evolution of developmental patterning. In particular, these examples highlight the importance of genetic pleiotropy in different developmental modules, thus reducing the number of genes required in growth and development, and the reuse of particular genes in the parallel evolution of ecologically important traits.

  6. Structural Properties of G,T-Parallel Duplexes

    Directory of Open Access Journals (Sweden)

    Anna Aviñó

    2010-01-01

    Full Text Available The structure of G,T-parallel-stranded duplexes of DNA carrying similar amounts of adenine and guanine residues is studied by means of molecular dynamics (MD) simulations and UV and CD spectroscopies. In addition, the impact of the substitution of adenine by 8-aminoadenine and of guanine by 8-aminoguanine is analyzed. The presence of 8-aminoadenine and 8-aminoguanine stabilizes the parallel duplex structure. Binding of these oligonucleotides to their target polypyrimidine sequences to form the corresponding G,T-parallel triplex was not observed. Instead, when unmodified parallel-stranded duplexes were mixed with their polypyrimidine target, an interstrand Watson-Crick duplex was formed. As predicted by theoretical calculations, parallel-stranded duplexes carrying 8-aminopurines did not bind to their target. The preference for the parallel duplex over the Watson-Crick antiparallel duplex is attributed to the strong stabilization of the parallel duplex produced by the 8-aminopurines. Theoretical studies show that the isomorphism of the triads is crucial for the stability of the parallel triplex.

  7. Study of three-dimensional Rayleigh–Taylor instability in compressible fluids through level set method and parallel computation

    International Nuclear Information System (INIS)

    Li, X.L.

    1993-01-01

    Computation of three-dimensional (3-D) Rayleigh–Taylor instability in compressible fluids is performed on a MIMD computer. A second-order TVD scheme is applied with a fully parallelized algorithm to the 3-D Euler equations. The computational program is implemented for a 3-D study of bubble evolution in the Rayleigh–Taylor instability with varying bubble aspect ratio and for large-scale simulation of a 3-D random fluid interface. The numerical solution is compared with the experimental results by Taylor

  8. High-speed parallel solution of the neutron diffusion equation with the hierarchical domain decomposition boundary element method incorporating parallel communications

    International Nuclear Information System (INIS)

    Tsuji, Masashi; Chiba, Gou

    2000-01-01

    A hierarchical domain decomposition boundary element method (HDD-BEM) for solving the multiregion neutron diffusion equation (NDE) has been fully parallelized, both for numerical computations and for data communications, to accomplish high parallel efficiency on distributed-memory message-passing parallel computers. Data exchanges between node processors that are repeated during the iteration processes of HDD-BEM are implemented without any intervention of the host processor that was used to supervise parallel processing in the conventional parallelized HDD-BEM (P-HDD-BEM). Thus, the parallel processing can be executed with only cooperative operations of node processors. The communication overhead was the dominant time-consuming part in the conventional P-HDD-BEM, and the parallelization efficiency decreased steeply with the increase of the number of processors. With parallel data communication, the efficiency is affected only by the number of boundary elements assigned to decomposed subregions, and the communication overhead can be drastically reduced. This feature can be particularly advantageous in the analysis of three-dimensional problems where a large number of processors are required. The proposed P-HDD-BEM offers a promising solution to the problem of deteriorating parallel efficiency and opens a new path to parallel computation of NDEs on distributed-memory message-passing parallel computers. (author)

  9. Phylogeny and adaptive evolution of the brain-development gene microcephalin (MCPH1) in cetaceans

    Directory of Open Access Journals (Sweden)

    Montgomery Stephen H

    2011-04-01

    Full Text Available Abstract Background Representatives of Cetacea have the greatest absolute brain size among animals, and the largest relative brain size aside from humans. Despite this, genes implicated in the evolution of large brain size in primates have yet to be surveyed in cetaceans. Results We sequenced ~1240 basepairs of the brain development gene microcephalin (MCPH1) in 38 cetacean species. Alignments of these data and a published complete sequence from Tursiops truncatus with primate MCPH1 were utilized in phylogenetic analyses and to estimate ω (rate of nonsynonymous substitution/rate of synonymous substitution) using site and branch models of molecular evolution. We also tested the hypothesis that selection on MCPH1 was correlated with brain size in cetaceans using a continuous regression analysis that accounted for phylogenetic history. Our analyses revealed widespread signals of adaptive evolution in the MCPH1 of Cetacea and in other subclades of Mammalia; however, there was not a significant positive association between ω and brain size within Cetacea. Conclusion In conjunction with a recent study of Primates, we find no evidence to support an association between MCPH1 evolution and the evolution of brain size in highly encephalized mammalian species. Our finding of significant positive selection in MCPH1 may be linked to other functions of the gene.
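
    The ω statistic used above is simply a ratio of normalized substitution rates. The sketch below shows the arithmetic on made-up counts (they are not MCPH1 data); the site and branch models in the paper estimate these quantities by maximum likelihood rather than by direct counting.

```python
# Hedged sketch: omega = dN/dS from illustrative substitution and site counts.
def omega(nonsyn_subs, nonsyn_sites, syn_subs, syn_sites):
    dn = nonsyn_subs / nonsyn_sites   # nonsynonymous substitutions per nonsynonymous site
    ds = syn_subs / syn_sites         # synonymous substitutions per synonymous site
    return dn / ds

print(omega(nonsyn_subs=12, nonsyn_sites=900, syn_subs=10, syn_sites=300))  # 0.4 -> purifying selection
# omega > 1 for a branch or site class is the signal of adaptive evolution tested in the paper.
```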

  10. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  11. Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.

    Science.gov (United States)

    Bhandarkar, S M; Chirravuri, S; Arnold, J

    1996-01-01

    Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048 processor MasPar MP-2 system which is an SIMD 2-D toroidal mesh architecture whereas the MIMD algorithms are implemented on an 8 processor Intel iPSC/860 which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs the best. For a coarse-grained MIMD architecture with high synchronization overhead such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.
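
    The "multiple independent searches" strategy that performed best on the MIMD machine is easy to sketch with present-day multiprocessing. The toy objective, cooling schedule, and perturbation move below are illustrative stand-ins for the clone-ordering cost and the problem-specific annealing heuristics described in the paper.

```python
# Hedged sketch: parallel simulated annealing as multiple independent searches.
import math
import random
from multiprocessing import Pool

def cost(x):
    return x * x + 10.0 * math.sin(3.0 * x)          # toy objective with local minima

def anneal(seed, steps=20000, t0=5.0):
    rng = random.Random(seed)
    x = rng.uniform(-10.0, 10.0)
    best = x
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9            # linear cooling schedule
        cand = x + rng.gauss(0.0, 0.5)               # perturbation move
        delta = cost(cand) - cost(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if cost(x) < cost(best):
            best = x
    return best

if __name__ == "__main__":
    with Pool(4) as pool:                            # four independent annealing runs in parallel
        results = pool.map(anneal, range(4))
    print(min(results, key=cost))                    # keep the best solution found by any search
```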

  12. On synchronous parallel computations with independent probabilistic choice

    International Nuclear Information System (INIS)

    Reif, J.H.

    1984-01-01

    This paper introduces probabilistic choice to synchronous parallel machine models, in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time-, space-, and processor-bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time-bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity

  13. Subjects with Knee Osteoarthritis Exhibit Widespread Hyperalgesia to Pressure and Cold.

    Directory of Open Access Journals (Sweden)

    Penny Moss

    Full Text Available Hyperalgesia to mechanical and thermal stimuli is characteristic of a range of disorders such as tennis elbow, whiplash and fibromyalgia. This study evaluated the presence of local and widespread mechanical and thermal hyperalgesia in individuals with knee osteoarthritis, compared to healthy control subjects. Twenty-three subjects with knee osteoarthritis and 23 healthy controls, matched for age, gender and body mass index, were recruited for the study. Volunteers with any additional chronic pain conditions were excluded. Pain thresholds to pressure, cold and heat were tested at the knee, ipsilateral heel and ipsilateral elbow, in randomized order, using standardised methodology. Significant between-groups differences for pressure pain and cold pain thresholds were found, with osteoarthritic subjects demonstrating significantly increased sensitivity to both pressure (p = .018) and cold (p = .003) stimuli compared with controls. A similar pattern of results extended to the pain-free ipsilateral ankle and elbow, indicating widespread pressure and cold hyperalgesia. No significant differences were found between groups for heat pain threshold, although correlations showed that subjects with greater sensitivity to pressure pain were also likely to be more sensitive to both cold pain and heat pain. This study found widespread hyperalgesia to pressure and cold in subjects with painful knee osteoarthritis, suggesting that altered nociceptive system processing may play a role in ongoing arthritic pain for some patients.

  14. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade but was re-enabled by the introduction of parallelism in processors. Multicore frameworks along with graphical processing units have broadly empowered parallelism, and compilers are being updated to address the resulting challenges of synchronization and threading. Appropriate program and algorithm classification gives software engineers greater opportunity for effective parallelization. In the present work we investigate existing species for the classification of algorithms; related work on classification is discussed along with a comparison of the issues that challenge classification. A set of algorithms is chosen whose structure matches different issues and which perform a given task. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant data that is not captured by the original species of algorithms. We implemented the new theory in the tool, enabling automatic characterization of program code.

  15. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
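
    A table like the one described can be generated mechanically from the parallel-resistance formula 1/R = 1/R1 + 1/R2; the value range below is an arbitrary choice for illustration.

```python
# Hedged sketch: list resistor pairs whose parallel combination is a whole number.
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)       # 1/R_total = 1/R1 + 1/R2

for r1 in range(1, 25):
    for r2 in range(r1, 25):
        total = parallel(r1, r2)
        if total == int(total):
            print(f"{r1:>2} ohm || {r2:>2} ohm = {int(total)} ohm")
```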

  16. The role of internal and external constructive processes in evolution.

    Science.gov (United States)

    Laland, Kevin; Odling-Smee, John; Turner, Scott

    2014-06-01

    The architects of the Modern Synthesis viewed development as an unfolding of a form already latent in the genes. However, developing organisms play a far more active, constructive role in both their own development and their evolution than the Modern Synthesis proclaims. Here we outline what is meant by constructive processes in development and evolution, emphasizing how constructive development is a shared feature of many of the research developments central to the developing Extended Evolutionary Synthesis. Our article draws out the parallels between constructive physiological processes expressed internally and in the external environment (niche construction), showing how in each case they play important and not fully recognized evolutionary roles by modifying and biasing natural selection. © 2014 The Authors. The Journal of Physiology © 2014 The Physiological Society.

  17. Parallelization methods study of thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Gaudart, Catherine

    2000-01-01

    The variety of parallelization methods and machines leads to a wide selection for programmers. In this study we suggest, in an industrial context, some solutions based on the experience acquired with different parallelization methods. The study concerns several scientific codes which simulate a large variety of thermal-hydraulics phenomena. A bibliography on parallelization methods and a first analysis of the codes showed the difficulty of applying our process to the whole set of applications under study. It was therefore necessary to identify and extract a representative part of these applications and parallelization methods. The linear solver part of the codes emerged as the natural choice; several parallelization methods had already been used on this particular part. From these developments one can estimate the work required for a non-specialist programmer to parallelize an application, and the impact of the development constraints. The parallelization methods tested are the numerical library PETSc, the parallelizer PAF, the language HPF, the formalism PEI and the communication libraries MPI and PVM. In order to test several methods on different applications while minimizing modifications to the codes, a tool called SPS (Server of Parallel Solvers) has been developed. We describe the constraints on code optimization in an industrial context, present the solutions provided by the SPS tool, show the development of the linear solver part with the tested parallelization methods and finally compare the results against the imposed criteria. (author)

  18. Near neutrality: leading edge of the neutral theory of molecular evolution.

    Science.gov (United States)

    Hughes, Austin L

    2008-01-01

    The nearly neutral theory represents a development of Kimura's neutral theory of molecular evolution that makes testable predictions that go beyond a mere null model. Recent evidence has strongly supported several of these predictions, including the prediction that slightly deleterious variants will accumulate in a species that has undergone a severe bottleneck or in cases where recombination is reduced or absent. Because bottlenecks often occur in speciation and slightly deleterious mutations in coding regions will usually be nonsynonymous, we should expect that the ratio of nonsynonymous to synonymous fixed differences between species should often exceed the ratio of nonsynonymous to synonymous polymorphisms within species. Many data support this prediction, although they have often been wrongly interpreted as evidence for positive Darwinian selection. The use of conceptually flawed tests for positive selection has become widespread in recent years, seriously harming the quest for an understanding of genome evolution. When properly analyzed, many (probably most) claimed cases of positive selection will turn out to involve the fixation of slightly deleterious mutations by genetic drift in bottlenecked populations. Slightly deleterious variants are a transient feature of evolution in the long term, but they have substantially affected contemporary species, including our own.
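
    The comparison at the heart of that prediction is the McDonald-Kreitman contrast between divergence and polymorphism ratios. The sketch below shows the arithmetic on made-up counts; real analyses would of course use observed site counts and appropriate significance tests.

```python
# Hedged sketch: divergence vs. polymorphism ratios of nonsynonymous to synonymous changes.
def mk_ratios(dn, ds, pn, ps):
    divergence_ratio = dn / ds        # fixed differences between species
    polymorphism_ratio = pn / ps      # segregating variants within species
    return divergence_ratio, polymorphism_ratio

div, poly = mk_ratios(dn=20, ds=40, pn=10, ps=40)
print(div, poly)   # 0.5 vs 0.25: an excess of nonsynonymous fixations, often read as positive
                   # selection but, as argued above, also producible by drift after a bottleneck
```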

  19. Multi-area market clearing in wind-integrated interconnected power systems: A fast parallel decentralized method

    International Nuclear Information System (INIS)

    Doostizadeh, Meysam; Aminifar, Farrokh; Lesani, Hamid; Ghasemi, Hassan

    2016-01-01

    Highlights: • A parallel-decentralized multi-area energy & reserve clearance model is proposed. • A fictitious area and joint variables coordinate & parallelize area market models. • Adjustable intervals of random variables compromise optimality and robustness. • The stochastic nature of the problem is tackled in an efficient deterministic manner. • The model is compact and applicable to multi-area real-scale systems. - Abstract: The growing evolution of regional electricity markets and the proliferation of wind power underline the importance of coordinated operation of interconnected regional power systems. This paper develops a parallel, decentralized methodology for multi-area energy and reserve clearance under wind power uncertainty. Preserving the independence of regional markets while fully exploiting the advantages of interconnection is a salient feature of the new model. Additionally, the parallel procedure clears regional markets simultaneously to accelerate the solution, particularly in large-scale systems. In order to achieve the optimal solution in a distributed fashion, augmented Lagrangian relaxation along with the alternating direction method of multipliers is applied. Wind power intermittency and uncertainty are tackled through an interval optimization approach. In contrast to conventional wisdom, adjustable intervals, as subsets of conventional predefined intervals, are introduced here to trade off the cost and conservatism of the solution. The confidence level approach is employed to accommodate the stochastic nature of wind power in a computationally efficient deterministic manner. The effectiveness and robustness of the proposed method are evaluated through several case studies on a two-area 6-bus system and the modified three-area IEEE 118-bus test system.
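
    The decentralized coordination idea (augmented Lagrangian relaxation with the alternating direction method of multipliers) can be sketched on a toy problem: two "areas" with quadratic local costs agreeing on a shared tie-line variable. The cost functions, penalty parameter, and iteration count below are illustrative assumptions, not the market model of the paper.

```python
# Hedged sketch: consensus ADMM for two areas sharing one coupling variable.
def solve_area(a, b, z, u, rho):
    # Local update: argmin_x a*(x - b)^2 + (rho/2)*(x - z + u)^2, closed form for quadratics.
    return (2.0 * a * b + rho * (z - u)) / (2.0 * a + rho)

a = [1.0, 3.0]            # local cost curvatures
b = [4.0, 1.0]            # each area's locally preferred tie-line flow
rho = 1.0
x, u, z = [0.0, 0.0], [0.0, 0.0], 0.0

for _ in range(50):
    x = [solve_area(a[i], b[i], z, u[i], rho) for i in range(2)]   # area subproblems (parallelizable)
    z = sum(x[i] + u[i] for i in range(2)) / 2.0                   # consensus on the shared variable
    u = [u[i] + x[i] - z for i in range(2)]                        # dual (price) updates

print(round(z, 3))        # ~1.75, the joint optimum of a1*(x-b1)^2 + a2*(x-b2)^2
```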

  20. Simulation Exploration through Immersive Parallel Planes

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Smith, Steve [Los Alamos Visualization Associates

    2017-05-25

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
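
    For readers unfamiliar with the flat starting point that the paper generalizes, the snippet below draws an ordinary two-dimensional parallel-coordinates plot with pandas; the small data frame is invented purely for illustration.

```python
# Hedged sketch: a conventional 2-D parallel-coordinates plot (the non-immersive baseline).
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

df = pd.DataFrame({
    "scenario":  ["baseline", "baseline", "high_demand", "high_demand"],
    "cost":      [1.0, 1.2, 2.3, 2.1],
    "output":    [0.8, 0.9, 1.5, 1.4],
    "emissions": [0.5, 0.6, 1.1, 1.0],
})
parallel_coordinates(df, class_column="scenario")   # one polyline per observation
plt.show()
```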

  1. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a relatively new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, low self-weight/load ratio, good dynamic behavior and easy control; hence its range of application domains keeps extending. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limits of link length is introduced. This paper analyses the position workspace and orientation workspace of a six-degree-of-freedom parallel robot. The results show that changing the lengths of the branches of the parallel mechanism is the main means of increasing or decreasing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but does change its position.
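
    The boundary-searching idea — sample candidate poses, solve the inverse kinematics for the limb lengths, and keep poses whose limbs stay within their stroke limits — can be sketched on a simpler planar 3-RPR mechanism. The geometry and limits below are invented for illustration and are not the six-degree-of-freedom robot analysed in the paper.

```python
# Hedged sketch: Monte Carlo estimate of a constant-orientation workspace for a planar 3-RPR robot.
import math
import random

BASE = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.8)]        # fixed base joints
PLAT = [(-0.2, -0.1), (0.2, -0.1), (0.0, 0.2)]     # platform joints in the platform frame
L_MIN, L_MAX = 0.5, 1.6                            # actuator stroke limits

def leg_lengths(x, y, phi):
    """Inverse kinematics: limb lengths for platform pose (x, y, phi)."""
    c, s = math.cos(phi), math.sin(phi)
    return [math.hypot(x + c * px - s * py - bx, y + s * px + c * py - by)
            for (bx, by), (px, py) in zip(BASE, PLAT)]

samples, inside = 20000, 0
for _ in range(samples):
    x, y = random.uniform(-1.0, 3.0), random.uniform(-1.0, 3.0)
    if all(L_MIN <= l <= L_MAX for l in leg_lengths(x, y, phi=0.0)):
        inside += 1
print(f"~{inside / samples:.1%} of the sampled region lies in the constant-orientation workspace")
```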

  2. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  4. Climate change and forest fires synergistically drive widespread melt events of the Greenland Ice Sheet.

    Science.gov (United States)

    Keegan, Kaitlin M; Albert, Mary R; McConnell, Joseph R; Baker, Ian

    2014-06-03

    In July 2012, over 97% of the Greenland Ice Sheet experienced surface melt, the first widespread melt during the era of satellite remote sensing. Analysis of six Greenland shallow firn cores from the dry snow region confirms that the most recent prior widespread melt occurred in 1889. A firn core from the center of the ice sheet demonstrated that exceptionally warm temperatures combined with black carbon sediments from Northern Hemisphere forest fires reduced albedo below a critical threshold in the dry snow region, and caused the melting events in both 1889 and 2012. We use these data to project the frequency of widespread melt into the year 2100. Since Arctic temperatures and the frequency of forest fires are both expected to rise with climate change, our results suggest that widespread melt events on the Greenland Ice Sheet may begin to occur almost annually by the end of century. These events are likely to alter the surface mass balance of the ice sheet, leaving the surface susceptible to further melting.

  5. Increasing phylogenetic resolution at low taxonomic levels using massively parallel sequencing of chloroplast genomes

    Directory of Open Access Journals (Sweden)

    Cronn Richard

    2009-12-01

    Full Text Available Abstract Background Molecular evolutionary studies share the common goal of elucidating historical relationships, and the common challenge of adequately sampling taxa and characters. Particularly at low taxonomic levels, recent divergence, rapid radiations, and conservative genome evolution yield limited sequence variation, and dense taxon sampling is often desirable. Recent advances in massively parallel sequencing make it possible to rapidly obtain large amounts of sequence data, and multiplexing makes extensive sampling of megabase sequences feasible. Is it possible to efficiently apply massively parallel sequencing to increase phylogenetic resolution at low taxonomic levels? Results We reconstruct the infrageneric phylogeny of Pinus from 37 nearly-complete chloroplast genomes (average 109 kilobases each of an approximately 120 kilobase genome) generated using multiplexed massively parallel sequencing. 30/33 ingroup nodes resolved with ≥ 95% bootstrap support; this is a substantial improvement relative to prior studies, and shows massively parallel sequencing-based strategies can produce sufficient high quality sequence to reach support levels originally proposed for the phylogenetic bootstrap. Resampling simulations show that at least the entire plastome is necessary to fully resolve Pinus, particularly in rapidly radiating clades. Meta-analysis of 99 published infrageneric phylogenies shows that whole plastome analysis should provide similar gains across a range of plant genera. A disproportionate amount of phylogenetic information resides in two loci (ycf1, ycf2), highlighting their unusual evolutionary properties. Conclusion Plastome sequencing is now an efficient option for increasing phylogenetic resolution at lower taxonomic levels in plant phylogenetic and population genetic analyses. With continuing improvements in sequencing capacity, the strategies herein should revolutionize efforts requiring dense taxon and character sampling

  6. Collectively loading an application in a parallel computer

    Science.gov (United States)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.; Mundy, Michael B.

    2016-01-05

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
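
    The load pattern itself — one selected leader reads the application image from storage and broadcasts it to the other nodes in the subset — can be sketched with a generic message-passing library. The snippet below uses mpi4py rather than the control system described in the patent, and the file name is a placeholder.

```python
# Hedged sketch: leader-based collective load expressed with MPI, not the patented control system.
from mpi4py import MPI

comm = MPI.COMM_WORLD                  # stands in for the selected subset of compute nodes
leader = 0                             # the rank chosen as job leader

if comm.Get_rank() == leader:
    with open("application.bin", "rb") as f:     # only the leader touches the file system
        image = f.read()
else:
    image = None

image = comm.bcast(image, root=leader)           # every node now holds the application image
print(comm.Get_rank(), len(image), "bytes")
```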

  7. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  8. Nothing in the History of Spanish "Anis" Makes Sense, Except in the Light of Evolution

    Science.gov (United States)

    Delgado, Juan Antonio; Palma, Ricardo Luis

    2011-01-01

    We describe, discuss and illustrate a metaphoric parallel between the history of the most famous Spanish liqueur, "Anis del Mono" ("Anis" of the Monkey), and the evolution of living organisms in the light of Darwinian theory and other biological hypotheses published subsequent to Charles Darwin's "Origin of Species." Also, we report the use of a…

  9. Pteros 2.0: Evolution of the fast parallel molecular analysis library for C++ and python.

    Science.gov (United States)

    Yesylevskyy, Semen O

    2015-07-15

    Pteros is a high-performance open-source library for molecular modeling and analysis of molecular dynamics trajectories. Starting from version 2.0, Pteros is available for the C++ and Python programming languages with very similar interfaces. This makes it suitable both for writing complex reusable programs in C++ and for simple interactive scripts in Python. The new version improves the facilities for asynchronous trajectory reading and parallel execution of analysis tasks by introducing analysis plugins, which can be written in either C++ or Python in a completely uniform way. The high level of abstraction provided by analysis plugins greatly simplifies prototyping and implementation of complex analysis algorithms. Pteros is available for free under the Artistic License from http://sourceforge.net/projects/pteros/. © 2015 Wiley Periodicals, Inc.

  10. Parallel-In-Time For Moving Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  11. Integrated Task And Data Parallel Programming: Language Design

    Science.gov (United States)

    Grimshaw, Andrew S.; West, Emily A.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities During the fall I collaborated

  12. Performance of the Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  13. Collateral damage: rapid exposure-induced evolution of pesticide resistance leads to increased susceptibility to parasites.

    Science.gov (United States)

    Jansen, Mieke; Stoks, Robby; Coors, Anja; van Doorslaer, Wendy; de Meester, Luc

    2011-09-01

    Although natural populations may evolve resistance to anthropogenic stressors such as pollutants, this evolved resistance may carry costs. Using an experimental evolution approach, we exposed different Daphnia magna populations in outdoor containers to the carbamate pesticide carbaryl and control conditions, and assessed the resulting populations for both their resistance to carbaryl as well as their susceptibility to infection by the widespread bacterial microparasite Pasteuria ramosa. Our results show that carbaryl selection led to rapid evolution of carbaryl resistance with seemingly no cost when assessed in a benign environment. However, carbaryl-resistant populations were more susceptible to parasite infection than control populations. Exposure to both stressors reveals a synergistic effect on sterilization rate by P. ramosa, but this synergism did not evolve under pesticide selection. Assessing costs of rapid adaptive evolution to anthropogenic stress in a semi-natural context may be crucial to avoid too optimistic predictions for the fitness of the evolving populations. © 2011 The Author(s).

  14. Widespread Wolbachia infection in terrestrial isopods and other crustaceans

    Directory of Open Access Journals (Sweden)

    Richard Cordaux

    2012-03-01

    Full Text Available Wolbachia bacteria are obligate intracellular alpha-Proteobacteria of arthropods and nematodes. Although widespread among isopod crustaceans, they have seldom been found in non-isopod crustacean species. Here, we report Wolbachia infection in fourteen new crustacean species. Our results extend the range of Wolbachia infections in terrestrial isopods and amphipods (class Malacostraca). We report the occurrence of two different Wolbachia strains in two host species (a terrestrial isopod and an amphipod). Moreover, the discovery of Wolbachia in the goose barnacle Lepas anatifera (subclass Thecostraca) establishes Wolbachia infection in class Maxillopoda. The new bacterial strains are closely related to B-supergroup Wolbachia strains previously reported from crustacean hosts. Our results suggest that Wolbachia infection may be much more widespread in crustaceans than previously thought. The presence of related Wolbachia strains in highly divergent crustacean hosts suggests that Wolbachia endosymbionts can naturally adapt to a wide range of crustacean hosts. Given the ability of isopod Wolbachia strains to induce feminization of genetic males or cytoplasmic incompatibility, we speculate that manipulation of crustacean-borne Wolbachia bacteria might represent potential tools for controlling crustacean species of commercial interest and crustacean or insect disease vectors.

  15. Unified Singularity Modeling and Reconfiguration of 3rTPS Metamorphic Parallel Mechanisms with Parallel Constraint Screws

    Directory of Open Access Journals (Sweden)

    Yufeng Zhuang

    2015-01-01

    Full Text Available This paper presents a unified singularity modeling and reconfiguration analysis of variable topologies of a class of metamorphic parallel mechanisms with parallel constraint screws. The new parallel mechanisms consist of three reconfigurable rTPS limbs that have two working phases stemming from the reconfigurable Hooke (rT) joint. While one phase has full mobility, the other supplies a constraint force to the platform. Based on these, the platform constraint screw systems show that the new metamorphic parallel mechanisms have four topologies obtained by altering the limb phases, with mobility changing among 1R2T (one rotation with two translations), 2R2T, and 3R2T, and mobility 6. Geometric conditions of the mechanism design are investigated, with some special topologies illustrated considering the limb arrangement. Following this and the actuation scheme analysis, a unified Jacobian matrix is formed using screw theory to include the change between geometric constraints and actuation constraints in the topology reconfiguration. Various singular configurations are identified by analyzing screw dependency in the Jacobian matrix. The work in this paper provides a basis for singularity-free workspace analysis and optimal design of this class of metamorphic parallel mechanisms with parallel constraint screws, which show simple geometric constraints with potentially simple kinematics and dynamics properties.

  16. Comparative genomics reveals conservative evolution of the xylem transcriptome in vascular plants.

    Science.gov (United States)

    Li, Xinguo; Wu, Harry X; Southerton, Simon G

    2010-06-21

    Wood is a valuable natural resource and a major carbon sink. Wood formation is an important developmental process in vascular plants which played a crucial role in plant evolution. Although genes involved in xylem formation have been investigated, the molecular mechanisms of xylem evolution are not well understood. We use comparative genomics to examine evolution of the xylem transcriptome to gain insights into xylem evolution. The xylem transcriptome is highly conserved in conifers, but considerably divergent in angiosperms. The functional domains of genes in the xylem transcriptome are moderately to highly conserved in vascular plants, suggesting the existence of a common ancestral xylem transcriptome. Compared to the total transcriptome derived from a range of tissues, the xylem transcriptome is relatively conserved in vascular plants. Of the xylem transcriptome, cell wall genes, ancestral xylem genes, known proteins and transcription factors are relatively more conserved in vascular plants. A total of 527 putative xylem orthologs were identified, which are unevenly distributed across the Arabidopsis chromosomes with eight hot spots observed. Phylogenetic analysis revealed that evolution of the xylem transcriptome has paralleled plant evolution. We also identified 274 conifer-specific xylem unigenes, all of which are of unknown function. These xylem orthologs and conifer-specific unigenes are likely to have played a crucial role in xylem evolution. Conifers have highly conserved xylem transcriptomes, while angiosperm xylem transcriptomes are relatively diversified. Vascular plants share a common ancestral xylem transcriptome. The xylem transcriptomes of vascular plants are more conserved than the total transcriptomes. Evolution of the xylem transcriptome has largely followed the trend of plant evolution.

  17. Comparative phylogeography of two widespread magpies

    DEFF Research Database (Denmark)

    Zhang, Ruiying; Song, Gang; Qu, Yanhua

    2012-01-01

    Historical geological events and climatic changes are believed to have played important roles in shaping the current distribution of species. However, sympatric species may have responded in different ways to such climatic fluctuations. Here we compared genetic structures of two corvid species......, the Azure-winged Magpie Cyanopica cyanus and the Eurasian Magpie Pica pica, both widespread but with different habitat dependence and some aspects of breeding behavior. Three mitochondrial genes and two nuclear introns were used to examine their co-distributed populations in East China and the Iberian...... Peninsula. Both species showed deep divergences between these two regions that were dated to the late Pliocene/early Pleistocene. In the East Chinese clade of C. cyanus, populations were subdivided between Northeast China and Central China, probably since the early to mid-Pleistocene, and the Central...

  18. Vitamin D inadequacy is widespread in Tunisian active boys and is ...

    African Journals Online (AJOL)

    Vitamin D inadequacy is widespread in Tunisian active boys and is related to diet but not to adiposity or insulin resistance. Ikram Bezrati, Mohamed Kacem Ben Fradj, Nejmeddine Ouerghi, Moncef Feki, Anis Chaouachi, Naziha Kaabachi ...

  19. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    Science.gov (United States)

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto-calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions. PMID:22345529
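
    The core of the iterative soft-thresholding step, including the cross-channel joint sparsity, is compact enough to sketch directly. The array shapes and threshold below are illustrative; the full reconstruction also interleaves the SPIRiT consistency projections, which are omitted here.

```python
# Hedged sketch: joint soft-thresholding of multi-channel wavelet coefficients.
import numpy as np

def joint_soft_threshold(coeffs, lam):
    """Shrink coefficients using the root-sum-of-squares magnitude across channels (axis 0)."""
    rss = np.sqrt(np.sum(np.abs(coeffs) ** 2, axis=0, keepdims=True))
    scale = np.maximum(rss - lam, 0.0) / np.maximum(rss, 1e-12)
    return coeffs * scale

coeffs = np.random.randn(8, 64, 64) + 1j * np.random.randn(8, 64, 64)   # 8 channels of coefficients
thresholded = joint_soft_threshold(coeffs, lam=2.0)
print(np.count_nonzero(np.abs(thresholded)), "of", coeffs.size, "coefficients survive")
```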

  20. Parallel and non-parallel laminar mixed convection flow in an inclined tube: The effect of the boundary conditions

    International Nuclear Information System (INIS)

    Barletta, A.

    2008-01-01

    The necessary condition for the onset of parallel flow in the fully developed region of an inclined duct is applied to the case of a circular tube. Parallel flow in inclined ducts is an uncommon regime, since in most cases buoyancy tends to produce the onset of secondary flow. The present study shows how proper thermal boundary conditions may preserve the parallel flow regime. Mixed convection flow is studied for a special non-axisymmetric thermal boundary condition that, with a proper choice of a switch parameter, may be compatible with parallel flow. More precisely, a circumferentially variable heat flux distribution is prescribed on the tube wall, expressed as a sinusoidal function of the azimuthal coordinate θ with period 2π. A π/2 rotation in the position of the maximum heat flux, achieved by setting the switch parameter, may or may not allow the existence of parallel flow. Two cases are considered, corresponding to parallel and non-parallel flow. In the first case, the governing balance equations allow a simple analytical solution. On the contrary, in the second case, the local balance equations are solved numerically by employing a finite element method

  1. Prevalence of widespread pain and associations with work status: a population study

    Directory of Open Access Journals (Sweden)

    Henriksson KG

    2008-07-01

    Full Text Available Abstract Background This population study, based on a representative sample from a Swedish county, investigates the prevalence, duration, and determinants of widespread pain (WSP) in the population using two constructs, and estimates how WSP affects work status. In addition, this study investigates the relationship of widespread pain to pain intensity, gender, age, income, work status, citizenship, civil status, urban residence, and health care seeking. Methods A cross-sectional survey using a postal questionnaire was sent to a representative sample (n = 9952) of the target population (284,073 people, 18–74 years) in a county (Östergötland) in southern Sweden. The questionnaire was mailed and followed by two postal reminders when necessary. Results The participation rate was 76.7% (n = 7637); the non-participants were on average younger, had lower incomes, and were more often male. Women had higher prevalences of pain in 10 different predetermined anatomical regions. WSP was generally chronic (90–94%), and depending on the definition of WSP its prevalence varied between 4.8% and 7.4% in the population. Women had a significantly higher prevalence of WSP than men, and the age effect appeared to be stronger in women than in men. WSP was a significant negative factor for work status in the community and in the group with chronic pain, together with age 50–64 years, low annual income, and non-Nordic citizenship. Chronic pain, but not the spreading of pain, was related to health care seeking in the population. Conclusion This study confirms earlier studies that report high prevalences of widespread pain in the population, especially among females and with increasing age. Widespread pain is associated with prominent effects on work status.

  2. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chip. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately…

  3. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2009-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chips. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately…

  4. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
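    Parallel statistics engines of this kind typically follow a map-then-merge pattern: each process computes partial aggregates over its block of the series, and the aggregates are combined into the global statistic. The sketch below (a simplified stand-in, not the VTK engine) shows that pattern for the lag-1 autocorrelation; the block decomposition and the aggregate layout are choices made for the example.

    ```python
    import numpy as np

    def block_stats(block):
        """Partial aggregates for one contiguous block of the series."""
        b = np.asarray(block, dtype=float)
        return {
            "n": b.size,
            "s1": b.sum(),                      # sum of x_i
            "s2": np.dot(b, b),                 # sum of x_i^2
            "cross": np.dot(b[:-1], b[1:]),     # sum of x_i * x_{i+1} inside block
            "first": b[0],
            "last": b[-1],
        }

    def merge_lag1_autocorr(stats_list):
        """Combine per-block aggregates into the lag-1 autocorrelation."""
        n  = sum(s["n"] for s in stats_list)
        s1 = sum(s["s1"] for s in stats_list)
        s2 = sum(s["s2"] for s in stats_list)
        # Within-block adjacent products plus the products straddling block boundaries.
        cross = sum(s["cross"] for s in stats_list)
        cross += sum(a["last"] * b["first"] for a, b in zip(stats_list, stats_list[1:]))
        m = s1 / n
        x0, xn = stats_list[0]["first"], stats_list[-1]["last"]
        num = cross - m * (s1 - xn) - m * (s1 - x0) + (n - 1) * m * m
        den = s2 - n * m * m
        return num / den

    # Sanity check against a direct computation.
    rng = np.random.default_rng(1)
    x = rng.standard_normal(10_000)
    blocks = np.array_split(x, 8)               # stand-in for 8 parallel workers
    r1 = merge_lag1_autocorr([block_stats(b) for b in blocks])
    xc = x - x.mean()
    assert abs(r1 - np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc)) < 1e-10
    ```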

  5. Conformal pure radiation with parallel rays

    International Nuclear Information System (INIS)

    Leistner, Thomas; Nurowski, Paweł

    2012-01-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves. (paper)
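    Restated in symbols, as a paraphrase of the definition quoted above rather than the authors' notation: a pure radiation metric with parallel rays is a pseudo-Riemannian metric on M with a null line bundle K ⊂ TM such that

    ```latex
    \nabla_X\,\Gamma(\mathcal{K}) \subseteq \Gamma(\mathcal{K}) \quad \text{for all vector fields } X,
    \qquad
    \operatorname{Ric}(V,\,\cdot\,) = 0 \quad \text{for all } V \perp \mathcal{K}.
    ```

    The first condition says the line bundle is parallel; the second says the Ricci tensor vanishes on the orthogonal complement of K.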

  6. The origin of widespread species in a poor dispersing lineage (diving beetle genus Deronectes)

    Directory of Open Access Journals (Sweden)

    David García-Vázquez

    2016-09-01

    Full Text Available In most lineages, most species have restricted geographic ranges, with only few reaching widespread distributions. How these widespread species reached their current ranges is an intriguing biogeographic and evolutionary question, especially in groups known to be poor dispersers. We reconstructed the biogeographic and temporal origin of the widespread species in a lineage with particularly poor dispersal capabilities, the diving beetle genus Deronectes (Dytiscidae). Most of the ca. 60 described species of Deronectes have narrow ranges in the Mediterranean area, with only four species with widespread European distributions. We sequenced four mitochondrial and two nuclear genes of 297 specimens of 109 different populations covering the entire distribution of the four lineages of Deronectes, including widespread species. Using Bayesian probabilities with an a priori evolutionary rate, we performed (1) a global phylogeny/phylogeography to estimate the relationships of the main lineages within each group and root them, and (2) demographic analyses of the best population coalescent model for each species group, including a reconstruction of the geographical history estimated from the distribution of the sampled localities. We also selected 56 specimens to test for the presence of Wolbachia, a maternally transmitted parasite that can alter the patterns of mtDNA variability. All species of the four studied groups originated in the southern Mediterranean peninsulas and were estimated to be of Pleistocene origin. In three of the four widespread species, the central and northern European populations were nested within those in the northern areas of the Anatolian, Balkan and Iberian peninsulas respectively, suggesting a range expansion at the edge of the southern refugia. In the Mediterranean peninsulas the widespread European species were replaced by vicariant taxa of recent origin. The fourth species (D. moestus) was proven to be a composite of unrecognised…

  7. Why Africa matters: evolution of Old World Salvia (Lamiaceae) in Africa.

    Science.gov (United States)

    Will, Maria; Claßen-Bockhoff, Regine

    2014-07-01

    Salvia is the largest genus in Lamiaceae and it has recently been found to be non-monophyletic. Molecular data on Old World Salvia are largely lacking. In this study, we present data concerning Salvia in Africa. The focus is on the colonization of the continent, character evolution and the switch of pollination systems in the genus. Maximum likelihood and Bayesian inference were used for phylogenetic reconstruction. Analyses were based on two nuclear markers [internal transcribed spacer (ITS) and external transcribed spacer (ETS)] and one plastid marker (rpl32-trnL). Sequence data were generated for 41 of the 62 African taxa (66 %). Mesquite was used to reconstruct ancestral character states for distribution, life form, calyx shape, stamen type and pollination syndrome. Salvia in Africa is non-monophyletic. Each of the five major regions in Africa, except Madagascar, was colonized at least twice, and floristic links between North African, south-west Asian and European species are strongly supported. The large radiation in Sub-Saharan Africa (23 species) can be traced back to dispersal from North Africa via East Africa to the Cape Region. Adaptation to bird pollination in southern Africa and Madagascar reflects parallel evolution. The phenotypic diversity in African Salvia is associated with repeated introductions to the continent. Many important evolutionary processes, such as colonization, adaptation, parallelism and character transformation, are reflected in this comparatively small group. The data presented in this study can help to understand the evolution of Salvia sensu lato and other large genera. © The Author 2014. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  9. A task parallel implementation of fast multipole methods

    KAUST Repository

    Taura, Kenjiro

    2012-11-01

    This paper describes a task parallel implementation of ExaFMM, an open source implementation of fast multipole methods (FMM), using the lightweight task parallel library MassiveThreads. Although there have been many attempts at parallelizing FMM, experience has almost exclusively been limited to formulations based on flat, homogeneous parallel loops. FMM in fact contains operations that cannot be readily expressed in such conventional but restrictive models. We show that task parallelism, or parallel recursion in particular, allows us to parallelize all operations of FMM naturally and scalably. Moreover, it allows us to parallelize a "mutual interaction" for force/potential evaluation, which is roughly twice as efficient as a more conventional, unidirectional force/potential evaluation. The net result is an open source FMM that is clearly among the fastest single-node implementations, including those on GPUs; with a million particles on a 32-core 2.20 GHz Sandy Bridge node, it completes a single time step including tree construction and force/potential evaluation in 65 milliseconds. The study clearly showcases both the programmability and performance benefits of flexible parallel constructs over more monolithic parallel loops. © 2012 IEEE.
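    The central claim above is that tree-structured computations such as FMM are more naturally expressed as parallel recursion than as flat parallel loops. The following sketch is a generic illustration of that idea in Python, not ExaFMM or MassiveThreads: it descends a tree serially to a cutoff depth and then evaluates the remaining subtrees as independent tasks in a process pool. The Node type, the work function and the cutoff are invented for the example.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        value: float
        children: List["Node"] = field(default_factory=list)

    def subtree_work(node: Node) -> float:
        """Serial recursion over one subtree (stands in for an FMM tree pass)."""
        return node.value + sum(subtree_work(c) for c in node.children)

    def collect_subtrees(node: Node, depth: int, cutoff: int, out: list) -> float:
        """Descend serially; below the cutoff, hand whole subtrees out as tasks."""
        if depth >= cutoff or not node.children:
            out.append(node)
            return 0.0
        return node.value + sum(collect_subtrees(c, depth + 1, cutoff, out)
                                for c in node.children)

    def parallel_tree_sum(root: Node, cutoff: int = 2, workers: int = 4) -> float:
        tasks: List[Node] = []
        top = collect_subtrees(root, 0, cutoff, tasks)
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return top + sum(pool.map(subtree_work, tasks))

    if __name__ == "__main__":
        # Build a small complete tree of depth 4 with branching factor 4.
        def build(depth: int) -> Node:
            n = Node(1.0)
            if depth > 0:
                n.children = [build(depth - 1) for _ in range(4)]
            return n
        root = build(4)
        assert parallel_tree_sum(root) == subtree_work(root)
    ```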

  10. Beyond R0 Maximisation: On Pathogen Evolution and Environmental Dimensions.

    Science.gov (United States)

    Lion, Sébastien; Metz, Johan A J

    2018-06-01

    A widespread tenet is that evolution of pathogens maximises their basic reproduction ratio, R0. The breakdown of this principle is typically discussed as an exception. Here, we argue that a radically different stance is needed, based on evolutionarily stable strategy (ESS) arguments that take account of the 'dimension of the environmental feedback loop'. The R0 maximisation paradigm requires this feedback loop to be one-dimensional, which notably excludes pathogen diversification. By contrast, almost all realistic ecological ingredients of host-pathogen interactions (density-dependent mortality, multiple infections, limited cross-immunity, multiple transmission routes, host heterogeneity, and spatial structure) will lead to multidimensional feedbacks. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    Second derivative parallel block backward differentiation type formulas for stiff ODEs. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  12. Puzzles in modern biology. IV. Neurodegeneration, localized origin and widespread decay [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Steven A. Frank

    2016-10-01

    Full Text Available The motor neuron disease amyotrophic lateral sclerosis (ALS) typically begins with localized muscle weakness. Progressive, widespread paralysis often follows over a few years. Does the disease begin with local changes in a small piece of neural tissue and then spread? Or does neural decay happen independently across diverse spatial locations? The distinction matters, because local initiation may arise by local changes in a tissue microenvironment, by somatic mutation, or by various epigenetic or regulatory fluctuations in a few cells. A local trigger must be coupled with a mechanism for spread. By contrast, independent decay across spatial locations cannot begin by a local change, but must depend on some global predisposition or spatially distributed change that leads to approximately synchronous decay. This article outlines the conceptual frame by which one contrasts local triggers and spread versus parallel spatially distributed decay. Various neurodegenerative diseases differ in their mechanistic details, but all can usefully be understood as falling along a continuum of interacting local and global processes. Cancer provides an example of disease progression by local triggers and spatial spread, setting a conceptual basis for clarifying puzzles in neurodegeneration. Heart disease also has crucial interactions between global processes, such as circulating lipid levels, and local processes in the development of atherosclerotic plaques. The distinction between local and global processes helps to understand these various age-related diseases.

  13. Biodiversity Meets Neuroscience: From the Sequencing Ship (Ship-Seq) to Deciphering Parallel Evolution of Neural Systems in Omic's Era.

    Science.gov (United States)

    Moroz, Leonid L

    2015-12-01

    The origins of neural systems and centralized brains are among the major transitions in evolution. These events might have occurred more than once over 570–600 million years. The convergent evolution of neural circuits is evident from a diversity of unique adaptive strategies implemented by ctenophores, cnidarians, acoels, molluscs, and basal deuterostomes. But further integration of biodiversity research and neuroscience is required to decipher the critical events leading to the development of complex integrative and cognitive functions. Here, we outline reference species and interdisciplinary approaches in reconstructing the evolution of nervous systems. In the "omic" era, it is now possible to establish fully functional genomics laboratories aboard oceanic ships and perform sequencing and real-time analyses of data at any oceanic location (named here Ship-Seq). In doing so, fragile, rare, cryptic, and planktonic organisms, or even entire marine ecosystems, are becoming directly accessible to experimental and physiological analyses by modern analytical tools. Thus, we are now in a position to take full advantage of the countless "experiments" Nature has performed for us in the course of 3.5 billion years of biological evolution. Together with progress in computational and comparative genomics, evolutionary neuroscience, proteomics and developmental biology, a surprising new picture is emerging that reveals the many ways in which nervous systems evolved. As a result, this symposium provides a unique opportunity to revisit old questions about the origins of biological complexity. © The Author 2015. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  14. Evolution, brain, and the nature of language.

    Science.gov (United States)

    Berwick, Robert C; Friederici, Angela D; Chomsky, Noam; Bolhuis, Johan J

    2013-02-01

    Language serves as a cornerstone for human cognition, yet much about its evolution remains puzzling. Recent research on this question parallels Darwin's attempt to explain both the unity of all species and their diversity. What has emerged from this research is that the unified nature of human language arises from a shared, species-specific computational ability. This ability has identifiable correlates in the brain and has remained fixed since the origin of language approximately 100 thousand years ago. Although songbirds share with humans a vocal imitation learning ability, with a similar underlying neural organization, language is uniquely human. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Parallelization of quantum molecular dynamics simulation code

    International Nuclear Information System (INIS)

    Kato, Kaori; Kunugi, Tomoaki; Shibahara, Masahiko; Kotake, Susumu

    1998-02-01

    A quantum molecular dynamics simulation code has been developed at the Kansai Research Establishment for the analysis of the thermalization of photon energies in molecules or materials. The simulation code is parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up has been obtained on both parallel computers by distributing groups of particles over the processor units. By distributing the work not only by particle group but also by the fine-grained calculations performed for each particle, high parallelization performance is achieved on the Intel Paragon XP/S75. (author)

  16. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding times and the effectiveness of parallelization.

  17. Differences Between Distributed and Parallel Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  18. Parallel processing from applications to systems

    CERN Document Server

    Moldovan, Dan I

    1993-01-01

    This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla…

  19. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear elliptic partial differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.
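    For orientation, a minimal serial V-cycle for the 1D Poisson problem -u'' = f with homogeneous Dirichlet boundaries is sketched below (a textbook illustration, not the specific algorithms surveyed by Chan and Tuminaro): weighted-Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation. Parallel multigrid then amounts to distributing the grids of this recursion across processors, which is where the mapping and load-imbalance issues mentioned above arise.

    ```python
    import numpy as np

    def smooth(u, f, h, sweeps=3, omega=2/3):
        """Weighted-Jacobi sweeps for -u'' = f with zero Dirichlet boundaries."""
        for _ in range(sweeps):
            u_pad = np.pad(u, 1)                       # zero boundary values
            u_jac = 0.5 * (u_pad[:-2] + u_pad[2:] + h * h * f)
            u = (1 - omega) * u + omega * u_jac
        return u

    def residual(u, f, h):
        u_pad = np.pad(u, 1)
        return f - (2 * u - u_pad[:-2] - u_pad[2:]) / (h * h)

    def restrict(r):
        """Full weighting: coarse point j <- fine points 2j, 2j+1, 2j+2."""
        return 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])

    def prolong(e_coarse, n_fine):
        """Linear interpolation from the coarse grid back to the fine grid."""
        e = np.zeros(n_fine)
        e[1::2] = e_coarse                             # coincident points
        e_pad = np.pad(e_coarse, 1)
        e[0::2] = 0.5 * (e_pad[:-1] + e_pad[1:])       # in-between points
        return e

    def v_cycle(u, f, h):
        n = u.size
        if n <= 3:                                     # coarsest grid: just smooth
            return smooth(u, f, h, sweeps=50)
        u = smooth(u, f, h)                            # pre-smoothing
        r_coarse = restrict(residual(u, f, h))
        e_coarse = v_cycle(np.zeros_like(r_coarse), r_coarse, 2 * h)
        u = u + prolong(e_coarse, n)                   # coarse-grid correction
        return smooth(u, f, h)                         # post-smoothing

    # Solve -u'' = pi^2 sin(pi x) on (0, 1); the exact solution is sin(pi x).
    n = 2**7 - 1
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, f, h)
    print(np.max(np.abs(u - np.sin(np.pi * x))))       # error at discretisation level
    ```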

  20. Reanalyzing Head et al. (2015): investigating the robustness of widespread p-hacking

    Directory of Open Access Journals (Sweden)

    Chris H.J. Hartgerink

    2017-03-01

    Full Text Available Head et al. (2015) provided a large collection of p-values that, from their perspective, indicates widespread statistical significance seeking (i.e., p-hacking). This paper inspects this result for robustness. Theoretically, the p-value distribution should be a smooth, decreasing function, but the distribution of reported p-values shows systematically more reported p-values for .01, .02, .03, .04, and .05 than p-values reported to three decimal places, due to apparent tendencies to round p-values to two decimal places. Head et al. (2015) correctly argue that an aggregate p-value distribution could show a bump below .05 when left-skew p-hacking occurs frequently. Moreover, the elimination of p = .045 and p = .05, as done in the original paper, is debatable. Given that eliminating p = .045 is a result of the need for symmetric bins and systematically more p-values are reported to two decimal places than to three decimal places, I did not exclude p = .045 and p = .05. I conducted Fisher's method for .04 < p < .05 and reanalyzed the data by adjusting the bin selection to .03875 < p ≤ .04 versus .04875 < p ≤ .05. Results of the reanalysis indicate that no evidence for left-skew p-hacking remains when we look at the entire range between .04 < p < .05 or when we inspect the second decimal. Taking into account reporting tendencies when selecting the bins to compare is especially important because this dataset does not allow for the recalculation of the p-values. Moreover, inspecting the bins that include two-decimal reported p-values potentially increases sensitivity if strategic rounding down of p-values as a form of p-hacking is widespread. Given the far-reaching implications of supposed widespread p-hacking throughout the sciences (Head et al., 2015), it is important that these findings are robust to data analysis choices if the conclusion is to be considered unequivocal. Although no evidence of widespread left-skew p…
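    The heart of the argument is a count of reported p-values in two narrow bins just under .05. The sketch below is a caricature of that bin-count ("bump") test, not Head et al.'s or Hartgerink's actual analysis pipeline: it bins a set of p-values using the adjusted edges discussed above and applies a one-sided binomial test for an excess in the upper bin. The synthetic p-values and the SciPy dependency are assumptions of the example.

    ```python
    import numpy as np
    from scipy import stats

    def bump_test(p_values, lower_bin, upper_bin):
        """Binomial test for an excess of p-values in the upper of two bins.

        lower_bin, upper_bin : (lo, hi] intervals, e.g. (0.03875, 0.04] and
                               (0.04875, 0.05] as in the reanalysis above.
        Under a smoothly decreasing p-curve the upper bin should not hold an
        excess, so a one-sided test against a 50/50 split is used.
        """
        p = np.asarray(p_values)
        n_lower = int(np.sum((p > lower_bin[0]) & (p <= lower_bin[1])))
        n_upper = int(np.sum((p > upper_bin[0]) & (p <= upper_bin[1])))
        result = stats.binomtest(n_upper, n_lower + n_upper, p=0.5,
                                 alternative="greater")
        return n_lower, n_upper, result.pvalue

    # Toy data: p-values drawn from a smoothly decreasing distribution (no p-hacking).
    rng = np.random.default_rng(42)
    fake_p = rng.beta(0.5, 3.0, size=200_000)
    print(bump_test(fake_p, (0.03875, 0.04), (0.04875, 0.05)))
    ```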

  1. Parallel computing by Monte Carlo codes MVP/GMVP

    International Nuclear Information System (INIS)

    Nagaya, Yasunobu; Nakagawa, Masayuki; Mori, Takamasa

    2001-01-01

    General-purpose Monte Carlo codes MVP/GMVP are well vectorized and thus enable us to perform high-speed Monte Carlo calculations. In order to achieve further speedups, we parallelized the codes on different types of parallel computing platforms or by using the standard parallelization library MPI. The platforms used for the benchmark calculations are a distributed-memory vector-parallel computer (Fujitsu VPP500), a distributed-memory massively parallel computer (Intel Paragon) and distributed-memory scalar-parallel computers (Hitachi SR2201, IBM SP2). As is generally the case, linear speedup could be obtained for large-scale problems, but parallelization efficiency decreased as the batch size per processing element (PE) became smaller. It was also found that the statistical uncertainty for assembly powers was less than 0.1% in a PWR full-core calculation with more than 10 million histories, which took about 1.5 hours with massively parallel computing. (author)

  2. Rapid and Parallel Adaptive Evolution of the Visual System of Neotropical Midas Cichlid Fishes.

    Science.gov (United States)

    Torres-Dowdall, Julián; Pierotti, Michele E R; Härer, Andreas; Karagic, Nidal; Woltering, Joost M; Henning, Frederico; Elmer, Kathryn R; Meyer, Axel

    2017-10-01

    Midas cichlid fish are a Central American species flock containing 13 described species that has been dated to only a few thousand years old, a historical timescale infrequently associated with speciation. Their radiation involved the colonization of several clear water crater lakes from two turbid great lakes. Therefore, Midas cichlids have been subjected to widely varying photic conditions during their radiation. Being a primary signal relay for information from the environment to the organism, the visual system is under continuing selective pressure and a prime organ system for accumulating adaptive changes during speciation, particularly in the case of dramatic shifts in photic conditions. Here, we characterize the full visual system of Midas cichlids at organismal and genetic levels, to determine what types of adaptive changes evolved within the short time span of their radiation. We show that Midas cichlids have a diverse visual system with unexpectedly high intra- and interspecific variation in color vision sensitivity and lens transmittance. Midas cichlid populations in the clear crater lakes have convergently evolved visual sensitivities shifted toward shorter wavelengths compared with the ancestral populations from the turbid great lakes. This divergence in sensitivity is driven by changes in chromophore usage, differential opsin expression, opsin coexpression, and to a lesser degree by opsin coding sequence variation. The visual system of Midas cichlids has the evolutionary capacity to rapidly integrate multiple adaptations to changing light environments. Our data may indicate that, in early stages of divergence, changes in opsin regulation could precede changes in opsin coding sequence evolution. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. The parallel processing of EGS4 code on distributed memory scalar parallel computer: Intel Paragon XP/S15-256

    Energy Technology Data Exchange (ETDEWEB)

    Takemiya, Hiroshi; Ohta, Hirofumi; Honma, Ichirou

    1996-03-01

    The parallelization of the electromagnetic cascade Monte Carlo simulation code EGS4 on the distributed-memory scalar parallel computer Intel Paragon XP/S15-256 is described. EGS4 has the feature that the calculation time for one incident particle differs greatly from particle to particle because of the dynamic generation of secondary particles and the different behavior of each particle. Granularity for parallel processing, the parallel programming model and the algorithm for parallel random number generation are discussed, and two methods, which allocate particles either dynamically or statically, are used to realize high-speed parallel processing of this code. Among the four problems chosen for performance evaluation, speedup factors of nearly 100 were attained for three problems with 128 processors. It was found that when both the calculation time for each incident particle and its dispersion are large, it is preferable to use the dynamic particle allocation method, which can average the load across processors. It was also found that when they are small, it is preferable to use the static particle allocation method, which reduces the communication overhead. Moreover, it is pointed out that to obtain accurate results, it is necessary to use double precision variables in the EGS4 code. Finally, the workflow of program parallelization is analyzed, and tools for program parallelization are discussed in light of the experience gained from the EGS4 parallelization. (author).
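    The dynamic-versus-static trade-off described above can be mimicked with a generic work-distribution sketch (this is not the EGS4 parallelization itself): histories with strongly varying cost are either pre-assigned to workers in large fixed chunks, or pulled from a shared pool in small batches as workers become idle. The cost model and batch sizes are invented for the illustration.

    ```python
    import time
    import random
    from multiprocessing import Pool

    def simulate_history(seed):
        """Stand-in for tracking one incident particle; cost varies strongly."""
        rng = random.Random(seed)
        n_secondaries = rng.randint(1, 2000)          # highly variable workload
        acc = 0.0
        for i in range(n_secondaries):
            acc += (i * 0.5) ** 0.5
        return acc

    def run_static(seeds, workers=4):
        """Static allocation: each worker gets one large pre-assigned chunk."""
        with Pool(workers) as pool:
            chunk = (len(seeds) + workers - 1) // workers
            return sum(pool.map(simulate_history, seeds, chunksize=chunk))

    def run_dynamic(seeds, workers=4, batch=16):
        """Dynamic allocation: workers pull small batches as they become idle."""
        with Pool(workers) as pool:
            return sum(pool.imap_unordered(simulate_history, seeds, chunksize=batch))

    if __name__ == "__main__":
        seeds = list(range(10_000))
        for label, runner in [("static", run_static), ("dynamic", run_dynamic)]:
            t0 = time.perf_counter()
            total = runner(seeds)
            print(f"{label:7s} total={total:.3e} wall={time.perf_counter() - t0:.2f}s")
    ```

    The static variant minimizes scheduling and communication overhead, while the dynamic variant evens out the load when per-history costs vary widely, mirroring the conclusion reported above.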

  4. Space station evolution: Planning for the future

    Science.gov (United States)

    Diaz, Alphonso V.; Askins, Barbara S.

    1987-06-01

    The need for permanently manned presence in space has been recognized by the United States and its international partners for many years. The development of this capability was delayed due to the concurrent recognition that reusable earth-to-orbit transportation was also needed and should be developed first. While the decision to go ahead with a permanently manned Space Station was on hold, requirements for the use of the Station were accumulating as ground-based research and the data from unmanned spacecraft sparked the imagination of both scientists and entrepreneurs. Thus, by the time of the Space Station implementation decision in the early 1980's, a variety of disciplines, with a variety of requirements, needed to be accommodated on one Space Station. Additional future requirements could be forecast for advanced missions that were still in the early planning stages. The logical response was the development of a multi-purpose Space Station with the ability to evolve on-orbit to new capabilities as required by user needs and national or international decisions, i.e., to build an evolutionary Space Station. Planning for evolution is conducted in parallel with the design and development of the baseline Space Station. Evolution planning is a strategic management process to facilitate change and protect future decisions. The objective is not to forecast the future, but to understand the future options and the implications of these on today's decisions. The major actions required now are: (1) the incorporation of evolution provisions (hooks and scars) in the baseline Space Station; and (2) the initiation of an evolution advanced development program.

  5. Space station evolution: Planning for the future

    Science.gov (United States)

    Diaz, Alphonso V.; Askins, Barbara S.

    1987-01-01

    The need for permanently manned presence in space has been recognized by the United States and its international partners for many years. The development of this capability was delayed due to the concurrent recognition that reusable earth-to-orbit transportation was also needed and should be developed first. While the decision to go ahead with a permanently manned Space Station was on hold, requirements for the use of the Station were accumulating as ground-based research and the data from unmanned spacecraft sparked the imagination of both scientists and entrepreneurs. Thus, by the time of the Space Station implementation decision in the early 1980's, a variety of disciplines, with a variety of requirements, needed to be accommodated on one Space Station. Additional future requirements could be forecast for advanced missions that were still in the early planning stages. The logical response was the development of a multi-purpose Space Station with the ability to evolve on-orbit to new capabilities as required by user needs and national or international decisions, i.e., to build an evolutionary Space Station. Planning for evolution is conducted in parallel with the design and development of the baseline Space Station. Evolution planning is a strategic management process to facilitate change and protect future decisions. The objective is not to forecast the future, but to understand the future options and the implications of these on today's decisions. The major actions required now are: (1) the incorporation of evolution provisions (hooks and scars) in the baseline Space Station; and (2) the initiation of an evolution advanced development program.

  6. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism-flattening … -processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work…
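    One way to see what the parallelism-flattening transformation referred to above does, independently of NESL or the streaming semantics proposed in the paper, is the segmented representation of a nested sequence: the nested data become one flat array plus a segment descriptor, and a nested reduction becomes a single flat segmented reduction. The NumPy sketch below is our illustration, not the paper's implementation.

    ```python
    import numpy as np

    # Nested input; the goal is [sum(xs) for xs in nested], computed flat.
    nested = [[1.0, 2.0, 3.0], [4.0], [], [5.0, 6.0]]

    # Flattened representation: one flat data array plus segment lengths.
    flat = np.array([x for xs in nested for x in xs])
    seg_lengths = np.array([len(xs) for xs in nested])

    # Segmented sum over the flat array (empty segments yield 0).
    starts = np.concatenate(([0], np.cumsum(seg_lengths)[:-1]))
    sums = np.zeros(len(nested))
    nonempty = seg_lengths > 0
    sums[nonempty] = np.add.reduceat(flat, starts[nonempty])

    print(sums)        # -> [ 6.  4.  0. 11.]
    ```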

  7. Evolution of the Pseudomonas aeruginosa mutational resistome in an international Cystic Fibrosis clone

    DEFF Research Database (Denmark)

    López-Causapé, Carla; Madsen Sommer, Lea Mette; Cabot, Gabriel

    2017-01-01

    … and resistome of a widespread clone (CC274), in isolates from two highly distant countries, Australia and Spain, covering an 18-year period. The coexistence of two divergent CC274 clonal lineages was revealed, but without evident geographical barrier; phylogenetic reconstructions and mutational resistome … for the first time that high-level aminoglycoside resistance in CF is likely driven by mutations in fusA1/fusA2, coding for elongation factor G. Altogether, our results provide valuable information for understanding the evolution of the mutational resistome of CF P. aeruginosa.

  8. Massively Parallel Computing: A Sandia Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant break-throughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  9. Parallel Algorithms for the Exascale Era

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory]

    2016-10-19

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
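    On the reproducibility of global sums mentioned above: floating-point addition is not associative, so a sum whose combination order depends on the number of ranks can change from run to run. One standard remedy (a generic illustration, not necessarily the algorithm developed in the LANL projects) is to form partial sums over fixed-size blocks, independent of the rank count, and combine them with a fixed reduction tree, so the bits of the result do not depend on how many workers participated. A sketch:

    ```python
    import numpy as np

    def block_sums(x, block=4096):
        """Per-block partial sums; the block size is fixed, not tied to rank count."""
        return [float(np.sum(x[i:i + block])) for i in range(0, len(x), block)]

    def pairwise(vals):
        """Deterministic pairwise reduction tree over the ordered block sums."""
        vals = list(vals)
        while len(vals) > 1:
            paired = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
            if len(vals) % 2:
                paired.append(vals[-1])
            vals = paired
        return vals[0]

    rng = np.random.default_rng(7)
    x = rng.standard_normal(1_000_000)

    # Naive scheme: each rank sums its slice, the root adds the slice sums.
    # The result typically differs in the last bits as the rank count changes.
    for ranks in (2, 3, 8):
        slices = np.array_split(x, ranks)
        print(ranks, repr(sum(float(np.sum(s)) for s in slices)))

    # Reproducible scheme: fixed blocks plus a fixed reduction tree. Whichever
    # rank computes which blocks, the combination order (and the bits) is the same.
    print(repr(pairwise(block_sums(x))))
    ```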

  10. Morphology and behaviour: functional links in development and evolution

    Science.gov (United States)

    Bertossa, Rinaldo C.

    2011-01-01

    Development and evolution of animal behaviour and morphology are frequently addressed independently, as reflected in the dichotomy of disciplines dedicated to their study distinguishing object of study (morphology versus behaviour) and perspective (ultimate versus proximate). Although traits are known to develop and evolve semi-independently, they are matched together in development and evolution to produce a unique functional phenotype. Here I highlight similarities shared by both traits, such as the decisive role played by the environment for their ontogeny. Considering the widespread developmental and functional entanglement between both traits, many cases of adaptive evolution are better understood when proximate and ultimate explanations are integrated. A field integrating these perspectives is evolutionary developmental biology (evo-devo), which studies the developmental basis of phenotypic diversity. Ultimate aspects in evo-devo studies—which have mostly focused on morphological traits—could become more apparent when behaviour, ‘the integrator of form and function’, is integrated into the same framework of analysis. Integrating a trait such as behaviour at a different level in the biological hierarchy will help to better understand not only how behavioural diversity is produced, but also how levels are connected to produce functional phenotypes and how these evolve. A possible framework to accommodate and compare form and function at different levels of the biological hierarchy is outlined. At the end, some methodological issues are discussed. PMID:21690124

  11. A spring forward for hominin evolution in East Africa.

    Science.gov (United States)

    Cuthbert, Mark O; Ashley, Gail M

    2014-01-01

    Groundwater is essential to modern human survival during drought periods. There is also growing geological evidence of springs associated with stone tools and hominin fossils in the East African Rift System (EARS) during a critical period for hominin evolution (from 1.8 Ma). However, it is not known how vulnerable these springs may have been to climate variability and whether groundwater availability may have played a part in human evolution. Recent interdisciplinary research at Olduvai Gorge, Tanzania, has documented climate fluctuations attributable to astronomic forcing and the presence of paleosprings directly associated with archaeological sites. Using palaeogeological reconstruction and groundwater modelling of the Olduvai Gorge paleo-catchment, we show how spring discharge was likely linked to East African climate variability on annual to Milankovitch-cycle timescales. On decadal to centennial timescales, spring flow would have been relatively invariant, providing good water resource resilience through long droughts. Over multi-millennial periods, modelled spring flows lag groundwater recharge by 100s to 1000s of years. The lag creates long buffer periods allowing hominins to adapt to new habitats as potable surface water from rivers or lakes became increasingly scarce. Localised groundwater systems are likely to have been widespread within the EARS, providing refugia and intense competition during dry periods, thus being an important factor in natural selection and evolution, as well as a vital resource during hominin dispersal within and out of Africa.

  12. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

    Full Text Available The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding times and the effectiveness of parallelization.

  13. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed-memory highly parallel computer. Parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique to map histories to processors dynamically and to map the control process to a fixed processor was applied. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)

  14. The relative frequencies of causes of widespread ground-glass opacity: A retrospective cohort

    International Nuclear Information System (INIS)

    Hewitt, Michael G.; Miller, Wallace T.; Reilly, Thomas J.; Simpson, Scott

    2014-01-01

    Highlights: • The most common cause of widespread ground-glass opacities is hydrostatic pulmonary edema. • Associated findings such as air-trapping and centrilobular nodules are highly specific for hypersensitivity pneumonitis. • The clinical setting (outpatient versus inpatient) will alter the order of the differential diagnosis. - Abstract: Purpose: The purpose of our study was to determine the relative frequencies of causes of widespread ground-glass opacity (GGO) in an unselected, consecutive patient population and to identify any associated imaging findings that can narrow or reorganize the differential. Materials and methods: The study was approved by the center's IRB and is HIPAA compliant. Cases with widespread GGO in the radiology report were identified by searching the Radiology Information System. Medical records and CT scan examinations were reviewed for the causes of widespread GGO. Associations between a less dominant imaging finding and a particular diagnosis were analyzed with the chi-square test. Our study group consisted of 234 examinations, with 124 women and 110 men and a mean age of 53.7 years. Results: A cause was established in 204 (87.2%) cases. Hydrostatic pulmonary edema was most common with 131 cases (56%). Interstitial lung diseases (ILD) were the next most common, most often hypersensitivity pneumonitis (HP) (n = 12, 5%) and connective tissue disease related ILD (n = 7, 3%). Infection accounted for 5% (12 cases). A few miscellaneous diseases accounted for 5 cases (2.1%). The combination of septal thickening and pleural effusions had a specificity of 0.91 for hydrostatic pulmonary edema (P < .001), while centrilobular nodules and air trapping had a specificity of 1.0 for HP. In 24 (10.2%) patients, increased opacification from expiration was incorrectly interpreted as representing widespread ground-glass opacity. The relative frequency of disease dramatically changed according to the setting. In the inpatient setting, diffuse…

  15. The relative frequencies of causes of widespread ground-glass opacity: A retrospective cohort

    Energy Technology Data Exchange (ETDEWEB)

    Hewitt, Michael G., E-mail: Mike_hewitt@me.com; Miller, Wallace T., E-mail: Wallace.miller@uphs.upenn.edu; Reilly, Thomas J., E-mail: thomasjreilly@comcast.net; Simpson, Scott, E-mail: Simpson80@gmail.com

    2014-10-15

    Highlights: • The most common cause of widespread ground-glass opacities is hydrostatic pulmonary edema. • Associated findings such as air-trapping and centrilobular nodules are highly specific for hypersensitivity pneumonitis. • The clinical setting (outpatient versus inpatient) will alter the order of the differential diagnosis. - Abstract: Purpose: The purpose of our study was to determine the relative frequencies of causes of widespread ground-glass opacity (GGO) in an unselected, consecutive patient population and to identify any associated imaging findings that can narrow or reorganize the differential. Materials and methods: The study was approved by the center's IRB and is HIPAA compliant. Cases with widespread GGO in the radiology report were identified by searching the Radiology Information System. Medical records and CT scan examinations were reviewed for the causes of widespread GGO. Associations between a less dominant imaging finding and a particular diagnosis were analyzed with the chi-square test. Our study group consisted of 234 examinations, with 124 women and 110 men and a mean age of 53.7 years. Results: A cause was established in 204 (87.2%) cases. Hydrostatic pulmonary edema was most common with 131 cases (56%). Interstitial lung diseases (ILD) were the next most common, most often hypersensitivity pneumonitis (HP) (n = 12, 5%) and connective tissue disease related ILD (n = 7, 3%). Infection accounted for 5% (12 cases). A few miscellaneous diseases accounted for 5 cases (2.1%). The combination of septal thickening and pleural effusions had a specificity of 0.91 for hydrostatic pulmonary edema (P < .001), while centrilobular nodules and air trapping had a specificity of 1.0 for HP. In 24 (10.2%) patients, increased opacification from expiration was incorrectly interpreted as representing widespread ground-glass opacity. The relative frequency of disease dramatically changed according to the setting. In the inpatient setting, diffuse…

  16. Performance Analysis of Parallel Mathematical Subroutine library PARCEL

    International Nuclear Information System (INIS)

    Yamada, Susumu; Shimizu, Futoshi; Kobayashi, Kenichi; Kaburaki, Hideo; Kishida, Norio

    2000-01-01

    The parallel mathematical subroutine library PARCEL (Parallel Computing Elements) has been developed by the Japan Atomic Energy Research Institute for easy use of typical parallelized mathematical codes in application problems on distributed parallel computers. PARCEL includes routines for linear equations, eigenvalue problems, pseudo-random number generation, and fast Fourier transforms. It is shown that the performance results for the linear-equation routines exhibit good parallelization efficiency on vector, as well as scalar, parallel computers. A comparison of the efficiency results with the PETSc (Portable, Extensible Toolkit for Scientific Computation) library is also reported. (author)

  17. Applications of the parallel computing system using network

    International Nuclear Information System (INIS)

    Ido, Shunji; Hasebe, Hiroki

    1994-01-01

    Parallel programming is applied to multiple processors connected by Ethernet. Data exchanges between tasks located in each processing element are realized in two ways. One is the socket interface, a standard library on recent UNIX operating systems. The other is the network connection software named Parallel Virtual Machine (PVM), free software developed by ORNL that allows many workstations connected to a network to be used as a parallel computer. This paper discusses the availability of parallel computing using networked UNIX workstations and compares it with specialized parallel systems (Transputer and iPSC/860) in a Monte Carlo simulation, which generally shows a high parallelization ratio. (author)

  18. He²⁺ HEATING VIA PARAMETRIC INSTABILITIES OF PARALLEL PROPAGATING ALFVÉN WAVES WITH AN INCOHERENT SPECTRUM

    Energy Technology Data Exchange (ETDEWEB)

    He, Peng; Gao, Xinliang; Lu, Quanming; Wang, Shui, E-mail: gaoxl@mail.ustc.edu.cn [CAS Key Laboratory of Geospace Environment, Department of Geophysics and Planetary Science, University of Science and Technology of China, Hefei 230026 (China)]

    2016-08-10

    The preferential heating of heavy ions in the solar corona and solar wind has been a long-standing hot topic. In this paper we use a one-dimensional hybrid simulation model to investigate the heating of He²⁺ particles during the parametric instabilities of parallel propagating Alfvén waves with an incoherent spectrum. The evolution of the parametric instabilities has two stages and involves the heavy ion heating during the entire evolution. In the first stage, the density fluctuations are generated by the modulation of the pump Alfvén waves with a spectrum, which then results in rapid coupling with the pump Alfvén waves and the cascade of the magnetic fluctuations. In the second stage, each pump Alfvén wave decays into a forward density mode and a backward daughter Alfvén mode, which is similar to that of a monochromatic pump Alfvén wave. In both stages the perpendicular heating of He²⁺ particles occurs. This is caused by the cyclotron resonance between He²⁺ particles and the high-frequency magnetic fluctuations, whereas the Landau resonance between He²⁺ particles and the density fluctuations leads to the parallel heating of He²⁺ particles. The influence of the drift velocity between the protons and the He²⁺ particles on the heating of He²⁺ particles is also discussed in this paper.

  19. Development of imaging and reconstruction algorithms on parallel processing architectures for applications in non-destructive testing

    International Nuclear Information System (INIS)

    Pedron, Antoine

    2013-01-01

    This thesis work is placed between the scientific domain of ultrasound non-destructive testing and algorithm-architecture adequation (the matching of algorithms to architectures). Ultrasound non-destructive testing includes a group of analysis techniques used in science and industry to evaluate the properties of a material, component, or system without causing damage. In order to characterise possible defects, determining their position, size and shape, imaging and reconstruction tools have been developed at CEA-LIST, within the CIVA software platform. The evolution of acquisition sensors implies a continuous growth of datasets, and consequently more and more computing power is needed to maintain interactive reconstructions. General-purpose processors (GPP) evolving towards parallelism and emerging architectures such as GPUs offer large acceleration possibilities that can be applied to these algorithms. The main goal of the thesis is to evaluate the acceleration that can be obtained for two reconstruction algorithms on these architectures. The two algorithms differ in their parallelization scheme. The first one can be properly parallelized on GPPs, whereas on GPUs an intensive use of atomic instructions is required. Within the second algorithm, parallelism is easier to express, but loop ordering on GPPs, as well as thread scheduling and good use of shared memory on GPUs, are necessary in order to obtain efficient results. Different APIs and libraries, such as OpenMP, CUDA and OpenCL, are evaluated through chosen benchmarks. An integration of both algorithms into the CIVA software platform is proposed, and different issues related to code maintenance and durability are discussed. (author) [fr]

  20. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost-effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests.

  1. Bayer image parallel decoding based on GPU

    Science.gov (United States)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectrical tracking systems, Bayer images are traditionally decoded on the CPU. However, this is too slow when the images become large, for example 2K×2K×16 bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU), which supports the CUDA architecture. The decoding procedure can be divided into three parts: the first is a serial part, the second is a task-parallelism part, and the last is a data-parallelism part including inverse quantization, the inverse discrete wavelet transform (IDWT), as well as an image post-processing part. To reduce the execution time, the task-parallelism part is optimized by OpenMP techniques. The data-parallelism part can improve its efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16 bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it can achieve a 3 to 5 times speed increase compared to the serial CPU method.
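    The IDWT speed-up mentioned above rests on separability: a 2D wavelet transform factors into independent 1D transforms along columns and then rows, and each 1D line can be handled by its own GPU thread or thread block. The sketch below shows that structure for a single-level inverse Haar transform in NumPy; it illustrates the separability argument only and is not the paper's CUDA kernel or its particular wavelet.

    ```python
    import numpy as np

    S = np.sqrt(2.0)

    def fwd_haar(x, axis):
        """Single-level forward Haar along one axis (orthonormal convention)."""
        x = np.moveaxis(x, axis, -1)
        a = (x[..., 0::2] + x[..., 1::2]) / S
        d = (x[..., 0::2] - x[..., 1::2]) / S
        return np.moveaxis(a, -1, axis), np.moveaxis(d, -1, axis)

    def inv_haar(a, d, axis):
        """Single-level inverse Haar along one axis; every line is independent."""
        a = np.moveaxis(a, axis, -1)
        d = np.moveaxis(d, axis, -1)
        out = np.empty(a.shape[:-1] + (2 * a.shape[-1],), dtype=a.dtype)
        out[..., 0::2] = (a + d) / S
        out[..., 1::2] = (a - d) / S
        return np.moveaxis(out, -1, axis)

    def inv_haar_2d(ll, lh, hl, hh):
        """2D inverse as two sets of independent 1D passes: columns, then rows.

        All columns can be reconstructed in parallel, then all rows: this is
        the 1D decomposition that maps naturally onto GPU threads.
        """
        low = inv_haar(ll, lh, axis=0)       # recombine vertical sub-bands, per column
        high = inv_haar(hl, hh, axis=0)
        return inv_haar(low, high, axis=1)   # recombine horizontal sub-bands, per row

    # Round-trip check against the matching forward transform.
    rng = np.random.default_rng(3)
    img = rng.standard_normal((8, 8))
    lo, hi = fwd_haar(img, axis=1)           # rows first
    ll, lh = fwd_haar(lo, axis=0)            # then columns
    hl, hh = fwd_haar(hi, axis=0)
    assert np.allclose(inv_haar_2d(ll, lh, hl, hh), img)
    ```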

  2. Refinement of Parallel and Reactive Programs

    OpenAIRE

    Back, R. J. R.

    1992-01-01

    We show how to apply the refinement calculus to stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement c...

  3. International arrivals: widespread bioinvasions in European Seas.

    Science.gov (United States)

    Galil, B S; Marchini, A; Occhipinti-Ambrogi, A; Minchin, D; Narščius, A; Ojaveer, H; Olenin, S

    2014-04-01

    The European Union lacks a comprehensive framework to address the threats posed by the introduction and spread of marine non-indigenous species (NIS). Current efforts are fragmented and suffer substantial gaps in coverage. In this paper we identify and discuss issues relating to the assessment of spatial and temporal patterns of introductions in European Seas (ES), based on a scientifically validated information system of aquatic non-indigenous and cryptogenic species, AquaNIS. While recognizing the limitations of the existing data, we extract information that can be used to assess the relative risk of introductions for different taxonomic groups, geographic regions and likely vectors. The dataset comprises 879 multicellular NIS. We applied a country-based approach to assess patterns of NIS richness in ES, and identify the principal introduction routes and vectors, the most widespread NIS and their spatial and temporal spread patterns. Between 1970 and 2013, the number of recorded NIS has grown by 86, 173 and 204% in the Baltic, Western European margin and the Mediterranean, respectively; 52 of the 879 NIS were recorded in 10 or more countries, and 25 NIS first recorded in European seas since 1990 have since been reported in five or more countries. Our results highlight the ever-rising role of shipping (commercial and recreational) as a vector for the widespread and recently spread NIS. The Suez Canal, a corridor unique to the Mediterranean, is responsible for the increased introduction of new thermophilic NIS into this warming sea. The 2020 goal of the EU Biodiversity Strategy concerning marine Invasive Alien Species may not be fully attainable. The setting of a new target date should be accompanied by scientifically robust, sensible and pragmatic plans to minimize introductions of marine NIS and to study those present.

  4. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network based MIMD processors. The parallelism is implemented using standard UNIX™ tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a "nearly realistic" lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs

  5. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  6. A Tutorial on Parallel and Concurrent Programming in Haskell

    Science.gov (United States)

    Peyton Jones, Simon; Singh, Satnam

    This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs which allows programmers to use rich data types in data parallel programs which are automatically transformed into flat data parallel versions for efficient execution on multi-core processors.

  7. The origin and evolution of the sexes: Novel insights from a distant eukaryotic lineage.

    Science.gov (United States)

    Mignerot, Laure; Coelho, Susana M

    2016-01-01

    Sexual reproduction is an extraordinarily widespread phenomenon that assures the production of new genetic combinations in nearly all eukaryotic lineages. Although the core features of sexual reproduction (meiosis and syngamy) are highly conserved, the control mechanisms that determine whether an individual is male or female are remarkably labile across eukaryotes. In genetically controlled sexual systems, gender is determined by sex chromosomes, which have emerged independently and repeatedly during evolution. Sex chromosomes have been studied in only a handful of classical model organisms, and empirical knowledge on the origin and evolution of the sexes is still surprisingly incomplete. With the advent of next-generation sequencing, the taxonomic breadth of model systems has been rapidly expanding, bringing new ideas and fresh views on this fundamental aspect of biology. This mini-review provides a quick state of the art of how the remarkable richness of the sexual characteristics of the brown algae is helping to increase our knowledge about the evolution of sex determination. Copyright © 2016 Académie des sciences. Published by Elsevier SAS. All rights reserved.

  8. GRAVIDY, a GPU modular, parallel direct-summation N-body integrator: dynamics with softening

    Science.gov (United States)

    Maureira-Fredes, Cristián; Amaro-Seoane, Pau

    2018-01-01

    A wide variety of outstanding problems in astrophysics involve the motion of a large number of particles under the force of gravity. These include the global evolution of globular clusters, tidal disruptions of stars by a massive black hole, the formation of protoplanets and sources of gravitational radiation. The direct-summation of N gravitational forces is a complex problem with no analytical solution and can only be tackled with approximations and numerical methods. To this end, the Hermite scheme is a widely used integration method. With different numerical techniques and special-purpose hardware, it can be used to speed up the calculations. But these methods tend to be computationally slow and cumbersome to work with. We present a new graphics processing unit (GPU), direct-summation N-body integrator written from scratch and based on this scheme, which includes relativistic corrections for sources of gravitational radiation. GRAVIDY has high modularity, allowing users to readily introduce new physics, it exploits available computational resources and will be maintained by regular updates. GRAVIDY can be used in parallel on multiple CPUs and GPUs, with a considerable speed-up benefit. The single-GPU version is between one and two orders of magnitude faster than the single-CPU version. A test run using four GPUs in parallel shows a speed-up factor of about 3 as compared to the single-GPU version. The conception and design of this first release is aimed at users with access to traditional parallel CPU clusters or computational nodes with one or a few GPU cards.
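
    A minimal sketch of the direct-summation step with Plummer softening that such an integrator evaluates for every particle (a NumPy toy, not the GRAVIDY code; the value of G, the masses and the softening length eps are purely illustrative):

        # Toy O(N^2) direct summation of softened gravitational accelerations.
        import numpy as np

        def accelerations(pos, mass, G=1.0, eps=1e-3):
            """pos: (N, 3) positions, mass: (N,) masses; returns (N, 3) accelerations."""
            dx = pos[None, :, :] - pos[:, None, :]          # pairwise separation vectors
            r2 = (dx ** 2).sum(axis=-1) + eps ** 2          # softened squared distances
            np.fill_diagonal(r2, np.inf)                    # exclude self-interaction
            inv_r3 = r2 ** -1.5
            return G * (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

        rng = np.random.default_rng(0)
        pos = rng.normal(size=(256, 3))
        mass = np.full(256, 1.0 / 256)
        acc = accelerations(pos, mass)
        print("max |a| =", np.abs(acc).max())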

  9. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…
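
    A hedged sketch of the "multiple servers, single client" model mentioned above: a client farms independent work units out to several HTTP worker servers and combines the partial results. The server URLs and the /compute endpoint are hypothetical; any stateless worker service returning JSON would fit.

        # Toy client that spreads work units over several HTTP worker servers.
        # The URLs and the "/compute?n=..." interface are purely illustrative.
        import json
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        SERVERS = ["http://worker1:8080", "http://worker2:8080", "http://worker3:8080"]

        def run_unit(args):
            server, unit = args
            with urllib.request.urlopen(f"{server}/compute?n={unit}", timeout=30) as resp:
                return json.load(resp)["partial_sum"]       # assumed JSON reply field

        def scatter_gather(units):
            # Round-robin the work units over the available servers.
            jobs = [(SERVERS[i % len(SERVERS)], u) for i, u in enumerate(units)]
            with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
                return sum(pool.map(run_unit, jobs))

        if __name__ == "__main__":
            print("combined result:", scatter_gather(range(12)))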

  10. Chemical Evolution of Ozone and Its Precursors in Asian Pacific Rim Outflow During TRACE-P

    Science.gov (United States)

    Hamlin, A.; Crawford, J.; Olson, J.; Pippin, M.; Avery, M.; Sachse, G.; Barrick, J.; Blake, D.; Tan, D.; Sandholm, S.; Kondo, Y.; Singh, H.; Eisele, F.; Zondlo, M.; Flocke, F.; Talbot, R.

    2002-12-01

    During NASA's GTE/TRACE-P (Transport and Chemical Evolution over the Pacific) mission, a widespread stagnant pollution layer was observed between 2 and 4 km over the central Pacific. In this region, high levels of O3 (70 ppbv), CO (210 ppbv), and NOx (130 pptv) were observed. Back trajectories suggest this airmass had been rapidly transported from the Asian coast near the Yellow Sea to the central Pacific where it underwent subsidence. The chemical evolution of ozone and its precursors for this airmass is examined using Lagrangian photochemical box model calculations. Simulations are conducted along trajectories which intersect the flight path where predicted mixing ratios are compared to measurements. An analysis of the photochemical processes controlling the cycling of nitrogen oxides and ozone production and destruction during transport will be presented.

  11. An Efficient Parallel Multi-Scale Segmentation Method for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Haiyan Gu

    2018-04-01

    Remote sensing (RS) image segmentation is an essential step in geographic object-based image analysis (GEOBIA) to ultimately derive “meaningful objects”. While many segmentation methods exist, most of them are not efficient for large data sets. Thus, the goal of this research is to develop an efficient parallel multi-scale segmentation method for RS imagery by combining graph theory and the fractal net evolution approach (FNEA). Specifically, a minimum spanning tree (MST) algorithm in graph theory is proposed to be combined with a minimum heterogeneity rule (MHR) algorithm that is used in FNEA. The MST algorithm is used for the initial segmentation while the MHR algorithm is used for object merging. An efficient implementation of the segmentation strategy is presented using data partition and the “reverse searching-forward processing” chain based on message passing interface (MPI) parallel technology. Segmentation results of the proposed method using images from multiple sensors (airborne, SPECIM AISA EAGLE II, WorldView-2, RADARSAT-2) and different selected landscapes (residential/industrial, residential/agriculture) covering four test sites indicated its efficiency in accuracy and speed. We conclude that the proposed method is applicable and efficient for the segmentation of a variety of RS imagery (airborne optical, satellite optical, SAR, high-spectral), while the accuracy is comparable with that of the FNEA method.
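
    To make the graph-theoretic step concrete, the following sketch (plain Python, not the authors' MPI implementation) builds a minimum spanning tree over a 4-connected pixel graph with Kruskal's algorithm and union-find; cutting the heaviest MST edges would then give the initial segments that the merging stage refines.

        # Kruskal MST over a 4-connected pixel graph; edge weight = intensity difference.
        def find(parent, x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        def pixel_mst(image):
            h, w = len(image), len(image[0])
            idx = lambda r, c: r * w + c
            edges = []                                       # (weight, node a, node b)
            for r in range(h):
                for c in range(w):
                    if c + 1 < w:
                        edges.append((abs(image[r][c] - image[r][c + 1]), idx(r, c), idx(r, c + 1)))
                    if r + 1 < h:
                        edges.append((abs(image[r][c] - image[r + 1][c]), idx(r, c), idx(r + 1, c)))
            parent = list(range(h * w))
            mst = []
            for wgt, a, b in sorted(edges):                  # Kruskal: lightest edges first
                ra, rb = find(parent, a), find(parent, b)
                if ra != rb:                                 # keep edge only if it joins two trees
                    parent[ra] = rb
                    mst.append((wgt, a, b))
            return mst                                       # h*w - 1 edges for a connected image

        tile = [[10, 11, 50], [12, 10, 52], [90, 91, 51]]
        print(pixel_mst(tile))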

  12. Current distribution characteristics of superconducting parallel circuits

    International Nuclear Information System (INIS)

    Mori, K.; Suzuki, Y.; Hara, N.; Kitamura, M.; Tominaka, T.

    1994-01-01

    In order to increase the current carrying capacity of the current path of a superconducting magnet system, parallel circuits such as insulated multi-strand cables or parallel persistent current switches (PCS) are used. In superconducting parallel circuits of an insulated multi-strand cable or a parallel persistent current switch (PCS), the current distribution during the current sweep, the persistent mode, and the quench process was investigated. In order to measure the current distribution, two methods were used. (1) Each strand was surrounded with a pure iron core with an air gap. In the air gap, a Hall probe was located. The accuracy of this method was deteriorated by the magnetic hysteresis of the iron. (2) A Rogowski coil without iron was used for the current measurement of each path in a 4-parallel PCS. As a result, it was shown that the current distribution characteristics of a parallel PCS are very similar to those of an insulated multi-strand cable for the quench process

  13. The evolution of an ancient technology.

    Science.gov (United States)

    Buckley, Christopher D; Boudot, Eric

    2017-05-01

    We investigate pattern and process in the transmission of traditional weaving cultures in East and Southeast Asia. Our investigation covers a range of scales, from the experiences of individual weavers ('micro') to the broad-scale patterns of loom technologies across the region ('macro'). Using published sources, we build an empirical model of cultural transmission (encompassing individual weavers, the household and the community), focussing on where cultural information resides and how it is replicated and how transmission errors are detected and eliminated. We compare this model with macro-level outcomes in the form of a new dataset of weaving loom technologies across a broad area of East and Southeast Asia. The lineages of technologies that we have uncovered display evidence for branching, hybridization (reticulation), stasis in some lineages, rapid change in others and the coexistence of both simple and complex forms. There are some striking parallels with biological evolution and information theory. There is sufficient detail and resolution in our findings to enable us to begin to critique theoretical models and assumptions that have been produced during the last few decades to describe the evolution of culture.

  14. Alike but different: the evolution of the Tubifex tubifex species complex (Annelida, Clitellata) through polyploidization.

    Science.gov (United States)

    Marotta, Roberto; Crottini, Angelica; Raimondi, Elena; Fondello, Cristina; Ferraguti, Marco

    2014-04-02

    Tubifex tubifex is a widespread annelid characterized by considerable variability in its taxonomic characteristics and by a mixed reproductive strategy, with both parthenogenesis and biparental reproduction. In a molecular phylogenetic analysis, we detected substantial genetic variability among sympatric Tubifex spp. from the Lambro River (Milano, Italy), which we suggested comprise several cryptic species. To gain insights into the evolutionary events that generated this differentiation, we performed a cytogenetic analysis in parallel with a molecular assay. Approximately 80 cocoons of T. tubifex and T. blanchardi were collected and dissected. For each cocoon, we sequenced a fragment of the 16S rRNA from half of the sibling embryos and karyotyped the other half. To generate a robust phylogeny enabling the reconstruction of the evolutionary processes shaping the diversity of these sympatric lineages, we complemented our original 16S rRNA gene sequences with additional COI sequences. The chromosome number distribution was consistent with the presence of at least six sympatric euploid chromosome complements (one diploid, one triploid, three tetraploids and one hexaploid), as confirmed by a FISH assay performed with an homologous 18S rDNA probe. All the worms with 2n = 50 chromosomes belonged to an already identified sibling species of T. tubifex, T. blanchardi. The six euploid sets were coherently arranged in the phylogeny, with each lineage grouping specimens with the same chromosome complement. These results are compatible with the hypothesis that multiple polyploidization events, possibly enhanced by parthenogenesis, may have driven the evolution of the T. tubifex species complex.

  15. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  16. 6th International Parallel Tools Workshop

    CERN Document Server

    Brinkmann, Steffen; Gracia, José; Resch, Michael; Nagel, Wolfgang

    2013-01-01

    The latest advances in the High Performance Computing hardware have significantly raised the level of available compute performance. At the same time, the growing hardware capabilities of modern supercomputing architectures have caused an increasing complexity of the parallel application development. Despite numerous efforts to improve and simplify parallel programming, there is still a lot of manual debugging and tuning work required. This process is supported by special software tools, facilitating debugging, performance analysis, and optimization and thus making a major contribution to the development of robust and efficient parallel software. This book introduces a selection of the tools, which were presented and discussed at the 6th International Parallel Tools Workshop, held in Stuttgart, Germany, 25-26 September 2012.

  17. Angular parallelization of a curvilinear Sn transport theory method

    International Nuclear Information System (INIS)

    Haghighat, A.

    1991-01-01

    In this paper a parallel algorithm for angular domain decomposition (or parallelization) of an r-dependent spherical Sn transport theory method is derived. The parallel formulation is incorporated into TWOTRAN-II using the IBM Parallel Fortran compiler and implemented on an IBM 3090/400 (with four processors). The behavior of the parallel algorithm for different physical problems is studied, and it is concluded that the parallel algorithm behaves differently in the presence of a fission source as opposed to the absence of a fission source; this is attributed to the relative contributions of the source and the angular redistribution terms in the Sn algorithm. Further, the parallel performance of the algorithm is measured for various problem sizes and different combinations of angular subdomains or processors. Poor parallel efficiencies between ∼35% and 50% are achieved in situations where the relative difference of parallel to serial iterations is ∼50%. High parallel efficiencies between ∼60% and 90% are obtained in situations where the relative difference of parallel to serial iterations is <35%
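
    For reference, the efficiencies quoted above follow the usual definitions (stated here generically, not taken from the paper): with serial time T_1 and parallel time T_p on p processors,

        S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p} = \frac{T_1}{p\,T_p},

    so that, all else being equal, a relative increase \Delta I = (I_p - I_1)/I_1 in the number of iterations needed by the parallel run caps the attainable efficiency at roughly E(p) \lesssim 1/(1 + \Delta I).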

  18. Magnetohydrodynamics: Parallel computation of the dynamics of thermonuclear and astrophysical plasmas. 1. Annual report of massively parallel computing pilot project 93MPR05

    International Nuclear Information System (INIS)

    1994-08-01

    This is the first annual report of the MPP pilot project 93MPR05. In this pilot project four research groups with different, complementary backgrounds collaborate with the aim to develop new algorithms and codes to simulate the magnetohydrodynamics of thermonuclear and astrophysical plasmas on massively parallel machines. The expected speed-up is required to simulate the dynamics of the hot plasmas of interest which are characterized by very large magnetic Reynolds numbers and, hence, require high spatial and temporal resolutions (for details see section 1). The four research groups that collaborated to produce the results reported here are: The MHD group of Prof. Dr. J.P. Goedbloed at the FOM-Institute for Plasma Physics 'Rijnhuizen' in Nieuwegein, the group of Prof. Dr. H. van der Vorst at the Mathematics Institute of Utrecht University, the group of Prof. Dr. A.G. Hearn at the Astronomical Institute of Utrecht University, and the group of Dr. Ir. H.J.J. te Riele at the CWI in Amsterdam. The full project team met frequently during this first project year to discuss progress reports, current problems, etc. (see section 2). The main results of the first project year are: - Proof of the scalability of typical linear and nonlinear MHD codes - development and testing of a parallel version of the Arnoldi algorithm - development and testing of alternative methods for solving large non-Hermitian eigenvalue problems - porting of the 3D nonlinear semi-implicit time evolution code HERA to an MPP system. The steps that were scheduled to reach these intended results are given in section 3. (orig./WL)

  19. Magnetohydrodynamics: Parallel computation of the dynamics of thermonuclear and astrophysical plasmas. 1. Annual report of massively parallel computing pilot project 93MPR05

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-08-01

    This is the first annual report of the MPP pilot project 93MPR05. In this pilot project four research groups with different, complementary backgrounds collaborate with the aim to develop new algorithms and codes to simulate the magnetohydrodynamics of thermonuclear and astrophysical plasmas on massively parallel machines. The expected speed-up is required to simulate the dynamics of the hot plasmas of interest which are characterized by very large magnetic Reynolds numbers and, hence, require high spatial and temporal resolutions (for details see section 1). The four research groups that collaborated to produce the results reported here are: The MHD group of Prof. Dr. J.P. Goedbloed at the FOM-Institute for Plasma Physics 'Rijnhuizen' in Nieuwegein, the group of Prof. Dr. H. van der Vorst at the Mathematics Institute of Utrecht University, the group of Prof. Dr. A.G. Hearn at the Astronomical Institute of Utrecht University, and the group of Dr. Ir. H.J.J. te Riele at the CWI in Amsterdam. The full project team met frequently during this first project year to discuss progress reports, current problems, etc. (see section 2). The main results of the first project year are: - Proof of the scalability of typical linear and nonlinear MHD codes - development and testing of a parallel version of the Arnoldi algorithm - development and testing of alternative methods for solving large non-Hermitian eigenvalue problems - porting of the 3D nonlinear semi-implicit time evolution code HERA to an MPP system. The steps that were scheduled to reach these intended results are given in section 3. (orig./WL).

  20. Combining Compile-Time and Run-Time Parallelization

    Directory of Open Access Journals (Sweden)

    Sungdo Moon

    1999-01-01

    This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high‐quality compile‐time analysis with low‐cost run‐time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler’s automatic parallelization system. We present results of measurements on programs from two benchmark suites – SPECFP95 and NAS sample benchmarks – which identify inherently parallel loops in these programs that are missed by the compiler. We characterize remaining parallelization opportunities, and find that most of the loops require run‐time testing, analysis of control flow, or some combination of the two. We present a new compile‐time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed to not only improve the results of compile‐time parallelization, but also to produce low‐cost, directed run‐time tests that allow the system to defer binding of parallelization until run‐time when safety cannot be proven statically. We call this approach predicated array data‐flow analysis. We augment array data‐flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data‐flow values. Predicated array data‐flow analysis allows the compiler to derive “optimistic” data‐flow values guarded by predicates; these predicates can be used to derive a run‐time test guaranteeing the safety of parallelization.
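
    A minimal sketch of deferring the parallelization decision to run time (illustrative Python, not the SUIF implementation): a cheap predicate derived from the analysis is evaluated on the actual inputs, and only if it proves the iterations independent is the loop executed in parallel.

        # Run-time test guarding parallel execution of a loop a[idx[i]] += b[i].
        # If the index array has no repeated targets, iterations are independent.
        from concurrent.futures import ThreadPoolExecutor

        def runtime_safe(idx):
            return len(set(idx)) == len(idx)        # the "predicate": no write-write conflicts

        def update(a, b, idx):
            if runtime_safe(idx):                   # safe: run iterations concurrently
                def body(i):
                    a[idx[i]] += b[i]
                with ThreadPoolExecutor() as pool:
                    list(pool.map(body, range(len(idx))))
            else:                                   # fall back to the serial loop
                for i in range(len(idx)):
                    a[idx[i]] += b[i]
            return a

        print(update([0.0] * 4, [1, 2, 3], [0, 2, 3]))   # parallel path
        print(update([0.0] * 4, [1, 2, 3], [0, 0, 3]))   # serial fallback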

  1. Parallelization characteristics of the DeCART code

    International Nuclear Information System (INIS)

    Cho, J. Y.; Joo, H. G.; Kim, H. Y.; Lee, C. C.; Chang, M. H.; Zee, S. Q.

    2003-12-01

    This report describes the parallelization characteristics of the DeCART code and examines its parallel performance. Parallel computing algorithms are implemented in DeCART to reduce the tremendous computational burden and memory requirement involved in the three-dimensional whole core transport calculation. In the parallelization of the DeCART code, the axial domain decomposition is first realized by using MPI (Message Passing Interface), and then the azimuthal angle domain decomposition by using either MPI or OpenMP. When using the MPI for both the axial and the angle domain decomposition, the concept of MPI grouping is employed for convenient communication in each communication world. For the parallel computation, most of all the computing modules except for the thermal hydraulic module are parallelized. These parallelized computing modules include the MOC ray tracing, CMFD, NEM, region-wise cross section preparation and cell homogenization modules. For the distributed allocation, most of all the MOC and CMFD/NEM variables are allocated only for the assigned planes, which reduces the required memory by a ratio of the number of the assigned planes to the number of all planes. The parallel performance of the DeCART code is evaluated by solving two problems, a rodded variation of the C5G7 MOX three-dimensional benchmark problem and a simplified three-dimensional SMART PWR core problem. In the aspect of parallel performance, the DeCART code shows a good speedup of about 40.1 and 22.4 in the ray tracing module and about 37.3 and 20.2 in the total computing time when using 48 CPUs on the IBM Regatta and 24 CPUs on the LINUX cluster, respectively. In the comparison between the MPI and OpenMP, OpenMP shows a somewhat better performance than MPI. Therefore, it is concluded that the first priority in the parallel computation of the DeCART code is in the axial domain decomposition by using MPI, and then in the angular domain using OpenMP, and finally the angular
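
    To illustrate the bookkeeping behind such a two-level decomposition (a hedged toy, not DeCART code), the sketch below assigns contiguous blocks of axial planes to MPI-like ranks and blocks of azimuthal angles to threads, mirroring the MPI-then-OpenMP hierarchy and the per-rank memory saving described above.

        # Toy two-level domain decomposition: planes -> ranks, angles -> threads.
        def block_partition(n_items, n_parts, part):
            """Contiguous block of item indices owned by `part` out of `n_parts`."""
            base, extra = divmod(n_items, n_parts)
            start = part * base + min(part, extra)
            size = base + (1 if part < extra else 0)
            return list(range(start, start + size))

        n_planes, n_angles = 20, 16
        n_ranks, n_threads = 4, 4

        for rank in range(n_ranks):
            planes = block_partition(n_planes, n_ranks, rank)          # MPI-style axial decomposition
            for thread in range(n_threads):
                angles = block_partition(n_angles, n_threads, thread)  # OpenMP-style angle decomposition
                # Each (rank, thread) pair sweeps only its planes x angles subdomain,
                # so per-rank memory scales with len(planes)/n_planes of the full problem.
            print(f"rank {rank}: planes {planes[0]}..{planes[-1]} "
                  f"({len(planes)} of {n_planes})")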

  2. PSHED: a simplified approach to developing parallel programs

    International Nuclear Information System (INIS)

    Mahajan, S.M.; Ramesh, K.; Rajesh, K.; Somani, A.; Goel, M.

    1992-01-01

    This paper presents a simplified approach in the form of a tree-structured computational model for parallel application programs. An attempt is made to provide a standard user interface to execute programs on BARC Parallel Processing System (BPPS), a scalable distributed memory multiprocessor. The interface package called PSHED provides a basic framework for representing and executing parallel programs on different parallel architectures. The PSHED package incorporates concepts from a broad range of previous research in programming environments and parallel computations. (author). 6 refs

  3. Programming cells by multiplex genome engineering and accelerated evolution.

    Science.gov (United States)

    Wang, Harris H; Isaacs, Farren J; Carr, Peter A; Sun, Zachary Z; Xu, George; Forest, Craig R; Church, George M

    2009-08-13

    The breadth of genomic diversity found among organisms in nature allows populations to adapt to diverse environments. However, genomic diversity is difficult to generate in the laboratory and new phenotypes do not easily arise on practical timescales. Although in vitro and directed evolution methods have created genetic variants with usefully altered phenotypes, these methods are limited to laborious and serial manipulation of single genes and are not used for parallel and continuous directed evolution of gene networks or genomes. Here, we describe multiplex automated genome engineering (MAGE) for large-scale programming and evolution of cells. MAGE simultaneously targets many locations on the chromosome for modification in a single cell or across a population of cells, thus producing combinatorial genomic diversity. Because the process is cyclical and scalable, we constructed prototype devices that automate the MAGE technology to facilitate rapid and continuous generation of a diverse set of genetic changes (mismatches, insertions, deletions). We applied MAGE to optimize the 1-deoxy-D-xylulose-5-phosphate (DXP) biosynthesis pathway in Escherichia coli to overproduce the industrially important isoprenoid lycopene. Twenty-four genetic components in the DXP pathway were modified simultaneously using a complex pool of synthetic DNA, creating over 4.3 billion combinatorial genomic variants per day. We isolated variants with more than fivefold increase in lycopene production within 3 days, a significant improvement over existing metabolic engineering techniques. Our multiplex approach embraces engineering in the context of evolution by expediting the design and evolution of organisms with new and improved properties.

  4. Parallel evolutionary computation in bioinformatics applications.

    Science.gov (United States)

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle its inherent complexity (e.g. NP-hard problems) and also demand increased computational efforts. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of its efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of parallelism related modules allows the user to easily configure its environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
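
    The core pattern of such a library, parallel evaluation of a population's fitness, can be sketched in a few lines (hypothetical objective function; Python is used here for brevity, whereas ParJECoLi itself is Java-based):

        # Toy generational EA with process-parallel fitness evaluation.
        import random
        from concurrent.futures import ProcessPoolExecutor

        def fitness(x):                       # placeholder objective: maximise -sum(x_i^2)
            return -sum(v * v for v in x)

        def evolve(pop_size=40, dim=8, generations=20, seed=1):
            rng = random.Random(seed)
            pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
            with ProcessPoolExecutor() as pool:
                for _ in range(generations):
                    fits = list(pool.map(fitness, pop))            # evaluated in parallel
                    ranked = [x for _, x in sorted(zip(fits, pop), reverse=True)]
                    parents = ranked[: pop_size // 2]              # truncation selection
                    pop = [[g + rng.gauss(0, 0.1) for g in rng.choice(parents)]  # mutation
                           for _ in range(pop_size)]
            return max(pop, key=fitness)

        if __name__ == "__main__":
            best = evolve()
            print("best fitness:", round(fitness(best), 4))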

  5. Bayesian Network Constraint-Based Structure Learning Algorithms: Parallel and Optimized Implementations in the bnlearn R Package

    Directory of Open Access Journals (Sweden)

    Marco Scutari

    2017-03-01

    It is well known in the literature that the problem of learning the structure of Bayesian networks is very hard to tackle: Its computational complexity is super-exponential in the number of nodes in the worst case and polynomial in most real-world scenarios. Efficient implementations of score-based structure learning benefit from past and current research in optimization theory, which can be adapted to the task by using the network score as the objective function to maximize. This is not true for approaches based on conditional independence tests, called constraint-based learning algorithms. The only optimization in widespread use, backtracking, leverages the symmetries implied by the definitions of neighborhood and Markov blanket. In this paper we illustrate how backtracking is implemented in recent versions of the bnlearn R package, and how it degrades the stability of Bayesian network structure learning for little gain in terms of speed. As an alternative, we describe a software architecture and framework that can be used to parallelize constraint-based structure learning algorithms (also implemented in bnlearn) and we demonstrate its performance using four reference networks and two real-world data sets from genetics and systems biology. We show that on modern multi-core or multiprocessor hardware parallel implementations are preferable over backtracking, which was developed when single-processor machines were the norm.
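
    As a rough, language-agnostic illustration of what gets parallelized in constraint-based learning (the real implementation is the bnlearn R package), the sketch below farms the pairwise marginal independence tests of a first PC-style pass out to a process pool; the correlation-threshold test is a stand-in for a proper statistical conditional independence test.

        # Parallel pairwise (marginal) independence screening, the first step of
        # PC-style constraint-based structure learning. Toy test: |correlation| < cutoff.
        from concurrent.futures import ProcessPoolExecutor
        from itertools import combinations
        import numpy as np

        def independent(args, cutoff=0.05):
            x, y = args
            r = abs(np.corrcoef(x, y)[0, 1])
            return r < cutoff                      # True -> drop the candidate edge x - y

        def skeleton_pass(data):
            """data: (n_samples, n_vars); returns candidate undirected edges kept."""
            n_vars = data.shape[1]
            pairs = list(combinations(range(n_vars), 2))
            with ProcessPoolExecutor() as pool:
                results = pool.map(independent, [(data[:, i], data[:, j]) for i, j in pairs])
            return [pair for pair, indep in zip(pairs, results) if not indep]

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            a = rng.normal(size=1000)
            data = np.column_stack([a, a + 0.1 * rng.normal(size=1000), rng.normal(size=1000)])
            print("kept edges:", skeleton_pass(data))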

  6. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    The problem of parallel import is an urgent question today. Legalization of parallel import in Russia is expedient. This statement is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimize them.

  7. Added value of second biopsy target in screen-detected widespread suspicious breast calcifications.

    Science.gov (United States)

    Falkner, Nathalie M; Hince, Dana; Porter, Gareth; Dessauvagie, Ben; Jeganathan, Sanjay; Bulsara, Max; Lo, Glen

    2018-06-01

    There is controversy on the optimal work-up of screen-detected widespread breast calcifications: whether to biopsy a single target or multiple targets. This study evaluates agreement between multiple biopsy targets within the same screen-detected widespread (≥25 mm) breast calcification to determine if the second biopsy adds value. Retrospective observational study of women screened in a statewide general population risk breast cancer mammographic screening program from 2009 to 2016. Screening episodes recalled for widespread calcifications where further views indicated biopsy, and two or more separate target areas were sampled within the same lesion were included. Percentage agreement and Cohen's Kappa were calculated. A total of 293317 women were screened during 761124 separate episodes with recalls for widespread calcifications in 2355 episodes. In 171 women, a second target was biopsied within the same lesion. In 149 (86%) cases, the second target biopsy result agreed with the first biopsy (κ = 0.6768). Agreement increased with increasing mammography score (85%, 86% and 92% for score 3, 4 and 5 lesions). Same day multiple biopsied lesions were three times more likely to yield concordant results compared to post-hoc second target biopsy cases. While a single target biopsy is sufficient to discriminate a benign vs. malignant diagnosis in most cases, in 14% there is added value in performing a second target biopsy. Biopsies performed prospectively are more likely to yield concordant results compared to post-hoc second target biopsy cases, suggesting a single prospective biopsy may be sufficient when results are radiological-pathological concordant; discordance still requires repeat sampling. © 2018 The Royal Australian and New Zealand College of Radiologists.
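
    For readers unfamiliar with the agreement statistics quoted above, the sketch below shows how percentage agreement and Cohen's kappa are computed from paired first/second biopsy outcomes; the counts used are made up for illustration and are not the study data.

        # Percentage agreement and Cohen's kappa for two paired categorical readings.
        from collections import Counter

        def cohens_kappa(pairs):
            n = len(pairs)
            categories = {c for pair in pairs for c in pair}
            observed = sum(a == b for a, b in pairs) / n                        # raw agreement p_o
            first = Counter(a for a, _ in pairs)
            second = Counter(b for _, b in pairs)
            expected = sum(first[c] * second[c] for c in categories) / n ** 2   # chance agreement p_e
            return observed, (observed - expected) / (1 - expected)

        # Hypothetical first-biopsy / second-biopsy results.
        pairs = [("benign", "benign")] * 80 + [("malignant", "malignant")] * 12 \
              + [("benign", "malignant")] * 5 + [("malignant", "benign")] * 3
        p_o, kappa = cohens_kappa(pairs)
        print(f"agreement = {p_o:.0%}, kappa = {kappa:.3f}")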

  8. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The results of the comparison of parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead
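
    The abstract does not reproduce the overhead model itself; as a hedged placeholder, models of this type typically take the form

        T_p(P) = \frac{T_{\mathrm{comp}}}{P} + T_{\mathrm{ovh}}(P), \qquad
        E(P) \approx \frac{T_{\mathrm{comp}}}{T_{\mathrm{comp}} + P\,T_{\mathrm{ovh}}(P)},

    so identifying and reducing the dominant contributions to T_ovh(P) is what motivates the new algorithm described above.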

  9. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The results of the comparison of parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead

  10. Study of the microstructural evolution and rheological behavior by semisolid compression between parallel plate of the alloy A356 solidified under a continuously rotating magnetic field

    International Nuclear Information System (INIS)

    Leiva L, Ricardo; Sanchez V, Cristian; Mannheim C, Rodolfo; Bustos C, Oscar

    2004-01-01

    This work presents a study of the rheological behavior of the alloy A356 in the semisolid state, with and without continuous magnetic agitation during its solidification. The evaluation was performed using a parallel plate compression rheometer with digital recording of position and time data. The microstructural evolution was also studied at the start and end of the semisolid compression test. The procedure involved tests of short cylinders extracted from billets with a non-dendritic microstructure cast under a continuously rotating magnetic field. These pieces were tested at different solid fractions, at constant loads and at constant deformation velocities. When the test is carried out at a constant load, the equation that governs the rheological behavior of the material in the semisolid state can be determined, following a two-parameter Ostwald-de-Waele power law. When the test is done at a constant deformation speed, the flow behavior of the material during the semisolid shaping process can be described. The results obtained show that the morphology of the phases present in the microstructure is highly relevant to its rheological behavior. A globular coalesced rosette to rosette type microstructure was found to have the typical behavior of a fluid when shaped in a semisolid state, but a cast dendritic structure did not behave this way. Also, the Arrhenius-type dependence of viscosity on temperature was established (CW)
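
    For reference, the two-parameter Ostwald-de-Waele (power-law) model invoked above relates shear stress and apparent viscosity to the shear rate as

        \sigma = K\,\dot{\gamma}^{\,n}, \qquad \eta = \frac{\sigma}{\dot{\gamma}} = K\,\dot{\gamma}^{\,n-1},

    where K is the consistency index and n the flow-behaviour index (n < 1 for the shear-thinning behaviour typical of semisolid slurries); the parameter values fitted for A356 are those reported in the study and are not reproduced here.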

  11. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  12. Reanalyzing Head et al.: Investigating the robustness of widespread p-hacking

    NARCIS (Netherlands)

    Hartgerink, C.H.J.

    2017-01-01

    Head et al. (2015) provided a large collection of p-values that, from their perspective, indicates widespread statistical significance seeking (i.e., p-hacking). This paper inspects this result for robustness. Theoretically, the p-value distribution should be a smooth, decreasing function, but the

  13. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
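
    A heavily simplified, hedged sketch of the parallel-walker idea (a one-dimensional toy energy ladder instead of the 2D Ising model, and Python threads instead of GPU threads, so the point is the structure rather than the speed): independent walkers sample with the current weights, their histograms are merged, and the multicanonical weights are updated from the combined histogram.

        # Toy parallel multicanonical iteration: K independent walkers share weight updates.
        import math
        import random
        from concurrent.futures import ThreadPoolExecutor

        STATES = range(-20, 21)                       # "energy" levels of a toy system

        def run_walker(args):
            seed, steps, log_w = args
            rng = random.Random(seed)
            e = rng.choice(list(STATES))
            hist = {s: 0 for s in STATES}
            for _ in range(steps):
                e_new = max(min(e + rng.choice((-1, 1)), 20), -20)
                # Metropolis acceptance with multicanonical weights W(E) = exp(log_w[E]).
                if math.log(rng.random() + 1e-300) < log_w[e_new] - log_w[e]:
                    e = e_new
                hist[e] += 1
            return hist

        def multicanonical(iterations=10, walkers=8, steps=5000):
            log_w = {s: 0.0 for s in STATES}          # start from flat weights
            for it in range(iterations):
                args = [(it * walkers + k, steps, log_w) for k in range(walkers)]
                with ThreadPoolExecutor(max_workers=walkers) as pool:
                    hists = list(pool.map(run_walker, args))
                merged = {s: sum(h[s] for h in hists) for s in STATES}
                # Weight update: suppress over-visited states so the histogram flattens.
                log_w = {s: log_w[s] - math.log(merged[s] + 1) for s in STATES}
            return log_w

        if __name__ == "__main__":
            w = multicanonical()
            print("flat-histogram weights estimated for", len(w), "states")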

  14. Parallel thermal radiation transport in two dimensions

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R.P.; Ball, S.R.

    2003-01-01

    This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state of the art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massive parallel processors) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  15. Parallel thermal radiation transport in two dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R.P.; Ball, S.R. [AWE Aldermaston (United Kingdom)

    2003-07-01

    This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state of the art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massive parallel processors) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  16. Parallel processing for artificial intelligence 1

    CERN Document Server

    Kanal, LN; Kumar, V; Suttner, CB

    1994-01-01

    Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering and data mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discus

  17. Prevalence and risk factors of vitamin D deficiency in patients with widespread musculoskeletal pain

    Directory of Open Access Journals (Sweden)

    Muharrem Çidem

    2013-12-01

    Objective: Vitamin D deficiency is a common health problem worldwide. Vitamin D deficiency in adults has been associated with proximal muscle weakness, skeletal mineralization defect, and an increased risk of falling. Patients with vitamin D deficiency commonly complain of widespread pain in the body. The aim of this study was to examine the prevalence and risk factors of 25-hydroxyvitamin D deficiency in patients complaining of widespread musculoskeletal pain. Methods: In this cross-sectional study, 8457 patients with widespread musculoskeletal pain (7772 females, 685 males), aged 46.7 years (range 20-100), were included. Serum 25-hydroxyvitamin D was measured with an ELISA method. Patients were classified into two groups: (1) patients with vitamin D deficiency (<20 ng/ml) and (2) patients without vitamin D deficiency (>20 ng/ml). Results: The prevalence of vitamin D deficiency was found to be 71.7%. A binary logistic regression model showed that a low 25(OH) Vit D level was associated with gender, age and the month in which 25(OH) hypovitaminosis was determined. The risk of low 25(OH) Vit D was found to be 2.15 times higher in female patients, 1.52 times higher in March and 1.55 times higher in April. Conclusion: This study indicates that vitamin D deficiency should be taken into consideration in patients with widespread musculoskeletal pain, and precautions such as sunbathing during summer should be recommended to patients at risk of vitamin D deficiency. J Clin Exp Invest 2013; 4(4): 48-491
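
    The "times higher" figures above are odds ratios from the binary logistic regression; schematically (a generic statement of the model, not the fitted coefficients),

        \operatorname{logit} p = \ln\frac{p}{1-p} = \beta_0 + \beta_1\,\mathrm{sex} + \beta_2\,\mathrm{age} + \sum_m \gamma_m\,\mathrm{month}_m, \qquad \mathrm{OR}_j = e^{\beta_j},

    so, for example, the reported risk for female patients corresponds approximately to e^{\beta_1} \approx 2.15.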

  18. Comparison of parallel viscosity with neoclassical theory

    International Nuclear Information System (INIS)

    Ida, K.; Nakajima, N.

    1996-04-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for the plasma heated with tangential NBI in the CHS heliotron/torsatron device to estimate parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s). (author)

  19. Adapting algorithms to massively parallel hardware

    CERN Document Server

    Sioulas, Panagiotis

    2016-01-01

    In the recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming in which applications are programmed to exploit the power provided by multi-cores. Usually there is gain in terms of the time-to-solution and the memory footprint. Specifically, this trend has sparked an interest towards massively parallel systems that can provide a large number of processors, and possibly computing nodes, as in the GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.
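
    Of the two problems mentioned, the k-d tree search is the easier one to sketch; the toy below builds a 2-D tree and answers a single nearest-neighbour query serially (a massively parallel version would launch many such queries concurrently, e.g. one per GPU thread):

        # Minimal k-d tree build and nearest-neighbour search (2-D points).
        import math

        def build(points, depth=0):
            if not points:
                return None
            axis = depth % 2
            points = sorted(points, key=lambda p: p[axis])
            mid = len(points) // 2
            return {"point": points[mid], "axis": axis,
                    "left": build(points[:mid], depth + 1),
                    "right": build(points[mid + 1:], depth + 1)}

        def nearest(node, query, best=None):
            if node is None:
                return best
            p, axis = node["point"], node["axis"]
            if best is None or math.dist(query, p) < math.dist(query, best):
                best = p
            near, far = (node["left"], node["right"]) if query[axis] < p[axis] \
                        else (node["right"], node["left"])
            best = nearest(near, query, best)
            # Only descend the far side if the splitting plane is closer than the best hit.
            if abs(query[axis] - p[axis]) < math.dist(query, best):
                best = nearest(far, query, best)
            return best

        pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
        tree = build(pts)
        print(nearest(tree, (9, 2)))       # -> (8, 1)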

  20. Implementing Shared Memory Parallelism in MCBEND

    Directory of Open Access Journals (Sweden)

    Bird Adam

    2017-01-01

    MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To more effectively utilise parallel hardware OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.

  1. Biodiversity Meets Neuroscience: From the Sequencing Ship (Ship-Seq) to Deciphering Parallel Evolution of Neural Systems in Omic’s Era

    Science.gov (United States)

    Moroz, Leonid L.

    2015-01-01

    The origins of neural systems and centralized brains are one of the major transitions in evolution. These events might occur more than once over 570–600 million years. The convergent evolution of neural circuits is evident from a diversity of unique adaptive strategies implemented by ctenophores, cnidarians, acoels, molluscs, and basal deuterostomes. But, further integration of biodiversity research and neuroscience is required to decipher critical events leading to development of complex integrative and cognitive functions. Here, we outline reference species and interdisciplinary approaches in reconstructing the evolution of nervous systems. In the “omic” era, it is now possible to establish fully functional genomics laboratories aboard oceanic ships and perform sequencing and real-time analyses of data at any oceanic location (named here as Ship-Seq). In doing so, fragile, rare, cryptic, and planktonic organisms, or even entire marine ecosystems, are becoming accessible directly to experimental and physiological analyses by modern analytical tools. Thus, we are now in a position to take full advantage of countless “experiments” Nature performed for us in the course of 3.5 billion years of biological evolution. Together with progress in computational and comparative genomics, evolutionary neuroscience, proteomic and developmental biology, a new surprising picture is emerging that reveals many ways of how nervous systems evolved. As a result, this symposium provides a unique opportunity to revisit old questions about the origins of biological complexity. PMID:26163680

  2. Parallel Task Processing on a Multicore Platform in a PC-based Control System for Parallel Kinematics

    Directory of Open Access Journals (Sweden)

    Harald Michalik

    2009-02-01

    Multicore platforms have one physical processor chip with multiple cores interconnected via a chip-level bus. Because they deliver greater computing power through concurrency and offer greater system density, multicore platforms are well suited to address the performance bottleneck encountered in PC-based control systems for parallel kinematic robots with heavy CPU load. Heavy-load control tasks are generated by new control approaches that include features like singularity prediction, structure control algorithms, vision data integration and similar tasks. In this paper we introduce the parallel task scheduling extension of a communication architecture specially tailored for the development of PC-based control of parallel kinematics. The scheduling is specially designed for processing on a multicore platform. It breaks down the serial task processing of the robot control cycle and extends it with parallel task processing paths in order to enhance the overall control performance.

  3. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  4. Evolution of the metazoan mitochondrial replicase.

    Science.gov (United States)

    Oliveira, Marcos T; Haukka, Jani; Kaguni, Laurie S

    2015-03-03

    The large number of complete mitochondrial DNA (mtDNA) sequences available for metazoan species makes it a good system for studying genome diversity, although little is known about the mechanisms that promote and/or are correlated with the evolution of this organellar genome. By investigating the molecular evolutionary history of the catalytic and accessory subunits of the mtDNA polymerase, pol γ, we sought to develop mechanistic insight into its function that might impact genome structure by exploring the relationships between DNA replication and animal mitochondrial genome diversity. We identified three evolutionary patterns among metazoan pol γs. First, a trend toward stabilization of both sequence and structure occurred in vertebrates, with both subunits evolving distinctly from those of other animal groups, and acquiring at least four novel structural elements, the most important of which is the HLH-3β (helix-loop-helix, 3 β-sheets) domain that allows the accessory subunit to homodimerize. Second, both subunits of arthropods and tunicates have become shorter and evolved approximately twice as rapidly as their vertebrate homologs. And third, nematodes have lost the gene for the accessory subunit, which was accompanied by the loss of its interacting domain in the catalytic subunit of pol γ, and they show the highest rate of molecular evolution among all animal taxa. These findings correlate well with the mtDNA genomic features of each group described above, and with their modes of DNA replication, although a substantive amount of biochemical work is needed to draw conclusive links regarding the latter. Describing the parallels between evolution of pol γ and metazoan mtDNA architecture may also help in understanding the processes that lead to mitochondrial dysfunction and to human disease-related phenotypes. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  5. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-05-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. 6 figs

  6. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-01-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. (orig.)

  7. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    Science.gov (United States)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described.

  8. Generation and evolution of anisotropic turbulence and related energy transfer in drifting proton-alpha plasmas

    Science.gov (United States)

    Maneva, Y. G.; Poedts, S.

    2018-05-01

    The power spectra of magnetic field fluctuations in the solar wind typically follow a power-law dependence with respect to the observed frequencies and wave-numbers. The background magnetic field often influences the plasma properties, setting a preferential direction for plasma heating and acceleration. At the same time the evolution of the solar-wind turbulence at the ion and electron scales is influenced by the plasma properties through local micro-instabilities and wave-particle interactions. The solar-wind-plasma temperature and the solar-wind turbulence at sub- and super-ion scales simultaneously show anisotropic features, with different components and fluctuation power in parallel with and perpendicular to the orientation of the background magnetic field. The ratio between the power of the magnetic field fluctuations in parallel and perpendicular direction at the ion scales may vary with the heliospheric distance and depends on various parameters, including the local wave properties and nonthermal plasma features, such as temperature anisotropies and relative drift speeds. In this work we have performed two-and-a-half-dimensional hybrid simulations to study the generation and evolution of anisotropic turbulence in a drifting multi-ion species plasma. We investigate the evolution of the turbulent spectral slopes along and across the background magnetic field for the cases of initially isotropic and anisotropic turbulence. Finally, we show the effect of the various turbulent spectra for the local ion heating in the solar wind.

  9. Researching the Parallel Process in Supervision and Psychotherapy

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard

    Reflects upon how to do process research in supervision and in the parallel process. A single case study is presented illustrating how a study on parallel process can be carried out.

  10. Diversity of two widespread Indo-Pacific demosponge species revisited

    OpenAIRE

    Erpenbeck, D.; Aryasari, R.; Benning, S.; Debitus, Cécile; Kaltenbacher, E.; Al-Aidaroos, A. M.; Schupp, P.; Hall, K.; Hooper, J. N. A.; Voigt, O.; de Voogd, N. J.; Worheide, G.

    2017-01-01

The Indo-Pacific is the world's largest marine biogeographic region, covering the tropical and subtropical waters from the Red Sea in the Western Indian Ocean to Easter Island in the Pacific. It is characterized by a vast degree of biogeographic connectivity, particularly in its marine realm. So far, the use of molecular tools has rejected the presence of cosmopolitan or very widespread sponge species in several cases, supporting hypotheses of a higher level of endemism among marine invertebra...

  11. The Electrical Resistivity and Acoustic Emission Response Law and Damage Evolution of Limestone in Brazilian Split Test

    Directory of Open Access Journals (Sweden)

    Xinji Xu

    2016-01-01

Full Text Available The Brazilian split test was performed on two groups of limestone samples with loading directions perpendicular and parallel to the bedding plane, and the response laws of the electrical resistivity and acoustic emission (AE) under the two loading modes were obtained. The test results showed that the Brazilian split tests with loading perpendicular and parallel to the bedding gave markedly different results and anisotropic characteristics. On the basis of the response laws of the electrical resistivity and AE, the damage variables based on the electrical resistivity and AE properties were modified, and the evolution laws of the damage variables in the Brazilian split test with different loading directions were obtained. It was found that the damage evolution laws varied with the loading direction. Specifically, in the time-varying curve of the damage variable with loading perpendicular to the bedding, the damage variable based on electrical resistivity properties showed an obvious damage weakening stage, while that based on AE properties showed an abrupt increase under low load.

  12. The Future Evolution of the Fast TracKer Processing Unit

    CERN Document Server

    Gentsos, Christos; The ATLAS collaboration; Magalotti, Daniel; Bertolucci, Federico; Citraro, Saverio; Kordas, Kostantinos; Nikolaidis, Spyridon

    2016-01-01

Real time tracking is a key ingredient for online event selection at hadron colliders. The Silicon Vertex Tracker at the CDF experiment and the Fast Tracker (FTK) at ATLAS are two successful examples of the importance of dedicated hardware to reconstruct full events at hadron machines. We present the future evolution of this technology, for applications in the High Luminosity runs at the Large Hadron Collider (HL-LHC). Data processing speed is achieved with custom VLSI pattern recognition and linearized track fitting executed inside modern FPGAs, exploiting deep pipelining, extensive parallelism, and efficient use of available resources. In the current system, one large FPGA executes track fitting at full resolution within low-resolution candidate tracks found by a set of custom ASIC devices, called Associative Memories (AM chips). The FTK dual structure, based on the cooperation of VLSI AM and programmable FPGAs, is maintained, but we plan to increase the FPGA parallelism by associating one FPGA to each AM c...

  13. The Future Evolution of the Fast TracKer Processing Unit

    CERN Document Server

    Gentsos, Christos; The ATLAS collaboration; Magalotti, Daniel; Bertolucci, Federico; Citraro, Saverio; Kordas, Kostantinos; Nikolaidis, Spyridon

    2015-01-01

Real time tracking is a key ingredient for online event selection at hadron colliders. The Silicon Vertex Tracker at the CDF experiment and the Fast Tracker (FTK) at ATLAS are two successful examples of the importance of dedicated hardware to reconstruct full events at hadron machines. We present the future evolution of this technology, for applications in the High Luminosity runs at the Large Hadron Collider (HL-LHC). Data processing speed is achieved with custom VLSI pattern recognition and linearized track fitting executed inside modern FPGAs, exploiting deep pipelining, extensive parallelism, and efficient use of available resources. In the current system, one large FPGA executes track fitting at full resolution within low-resolution candidate tracks found by a set of custom ASIC devices, called Associative Memories (AM chips). The FTK dual structure, based on the cooperation of VLSI AM and programmable FPGAs, is maintained, but we plan to increase the FPGA parallelism by associating one FPGA to each AM c...
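    The "linearized track fitting" referred to in these two records is, generically, a matrix-vector operation applied to hit coordinates with constants computed offline. The sketch below illustrates that idea only; the shapes and the zero-valued constants are placeholders, not the FTK firmware configuration.

```python
import numpy as np

def linearized_track_fit(hit_coords, C, q):
    """Linearized fit: estimate track (helix) parameters p from the vector
    of hit coordinates x as p = C @ x + q, with C and q precomputed offline
    from training tracks. The values below are placeholders, not FTK constants."""
    x = np.asarray(hit_coords, dtype=float)
    return C @ x + q

# illustrative shapes only: 5 track parameters from 12 hit coordinates
C = np.zeros((5, 12))
q = np.zeros(5)
params = linearized_track_fit(np.random.rand(12), C, q)
```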

  14. Development of parallel/serial program analyzing tool

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Nagao, Saichi; Takigawa, Yoshio; Kumakura, Toshimasa

    1999-03-01

Japan Atomic Energy Research Institute has been developing 'KMtool', a parallel/serial program analyzing tool, in order to promote the parallelization of science and engineering computation programs. KMtool analyzes the performance of programs written in FORTRAN77 with MPI, and it reduces the effort required for parallelization. This paper describes the development purpose, design, utilization and evaluation of KMtool. (author)

  15. Simulation Exploration through Immersive Parallel Planes: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny; Smith, Steve

    2016-03-01

We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest, and a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
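    As a rough illustration of the underlying parallel-coordinates mapping (each observation becomes a polyline over the dimension axes) and of brushing, here is a minimal sketch with synthetic data; it is not the immersive system described above, and all names, data and thresholds are assumptions.

```python
import numpy as np

def to_polyline(observation, mins, maxs):
    """Map one multivariate observation to polyline vertices: dimension i
    sits at x = i, and the min-max normalized value gives the height."""
    norm = (observation - mins) / (maxs - mins)
    return [(i, v) for i, v in enumerate(norm)]

data = np.random.rand(100, 6)                 # synthetic: 100 observations, 6 dims
mins, maxs = data.min(axis=0), data.max(axis=0)
polylines = [to_polyline(row, mins, maxs) for row in data]

# 'brushing': keep only observations whose value on dimension 2 lies in a band
brushed = [pl for pl, row in zip(polylines, data) if 0.4 <= row[2] <= 0.6]
```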

  16. Chronic widespread pain in spondyloarthritis

    Directory of Open Access Journals (Sweden)

    F. Atzeni

    2014-06-01

    Full Text Available The pain associated with spondyloarthritis (SpA can be intense, persistent and disabling. It frequently has a multifactorial, simultaneously central and peripheral origin, and may be due to currently active inflammation, or joint damage and tissue destruction arising from a previous inflammatory condition. Inflammatory pain symptoms can be reduced by non-steroidal anti-inflammatory drugs, but many patients continue to experience moderate pain due to alterations in the mechanisms that regulate central pain, as in the case of the chronic widespread pain (CWP that characterises fibromyalgia (FM. The importance of distinguishing SpA and FM is underlined by the fact that SpA is currently treated with costly drugs such as tumour necrosis factor (TNF inhibitors, and direct costs are higher in patients with concomitant CWP or FM than in those with FM or SpA alone. Optimal treatment needs to take into account symptoms such as fatigue, mood, sleep, and the overall quality of life, and is based on the use of tricyclic antidepressants or selective serotonin reuptake inhibitors such as fluoxetine, rather than adjustments in the dose of anti-TNF agents or disease-modifying drugs.

  17. Parallel programming practical aspects, models and current limitations

    CERN Document Server

    Tarkov, Mikhail S

    2014-01-01

Parallel programming is designed for the use of parallel computer systems for solving time-consuming problems that cannot be solved on a sequential computer in a reasonable time. These problems can be divided into two classes: (1) processing of large data arrays, including processing of images and signals in real time; and (2) simulation of complex physical processes and chemical reactions. For each of these classes, prospective methods are designed for solving problems. For data processing, one of the most promising technologies is the use of artificial neural networks, while the particle-in-cell method and cellular automata are very useful for simulation. Problems of the scalability of parallel algorithms and of the transfer of existing parallel programs to future parallel computers are very acute now. An important task is to optimize the use of the equipment (including the CPU cache) of parallel computers. Along with parallelizing information processing, it is essential to ensure the processing reliability by the relevant organization ...

  18. Online transition matrix identification of the state evolution model for the extended Kalman filter in electrical impedance tomography

    International Nuclear Information System (INIS)

    Moura, Fernando S; Aya, Julio C C; Lima, Raul G; Fleury, Agenor T

    2008-01-01

One of the objectives of electrical impedance tomography is to estimate the electrical resistivity distribution in a domain based only on boundary electrical potential measurements caused by an electrical current distribution imposed on the boundary. In biomedical applications, the random walk model is frequently used as the evolution model and, under these conditions, poor tracking ability of the Extended Kalman Filter (EKF) is observed. An analytically developed evolution model is not feasible at this moment. The present work investigates the possibility of identifying the evolution model in parallel with the EKF and updating the evolution model with a certain periodicity. The evolution model is identified using the history of resistivity distributions obtained by a sensitivity-matrix-based algorithm. To numerically identify the linear evolution model, the Ibrahim Time Domain Method, normally used to identify the transition matrix in structural dynamics, is employed. The investigation was performed by numerical simulations of a time-varying domain with the addition of noise. Numerical difficulties in computing the transition matrix were resolved using Tikhonov regularization. The EKF numerical simulations suggest that the tracking ability is significantly improved.
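    The record does not give the identification formulas, so the following is only a generic illustration of estimating a linear evolution (transition) matrix from a state history with Tikhonov regularization; it stands in for, and is not, the Ibrahim Time Domain Method used by the authors. All names and the regularization weight are assumptions.

```python
import numpy as np

def identify_transition_matrix(states, lam=1e-3):
    """Fit A in x_{k+1} ~= A x_k from a state history (columns are states),
    using Tikhonov-regularized least squares:
    A = X1 X0^T (X0 X0^T + lam I)^{-1}."""
    X0 = states[:, :-1]          # x_0 ... x_{K-1}
    X1 = states[:, 1:]           # x_1 ... x_K
    n = X0.shape[0]
    return X1 @ X0.T @ np.linalg.inv(X0 @ X0.T + lam * np.eye(n))

# usage sketch: feed the history of resistivity estimates, then hand A to the EKF
# A = identify_transition_matrix(resistivity_history)
```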

  19. Parallelization of Subchannel Analysis Code MATRA

    International Nuclear Information System (INIS)

    Kim, Seongjin; Hwang, Daehyun; Kwon, Hyouk

    2014-01-01

A stand-alone calculation with the MATRA code already takes considerable computing time for thermal margin calculations, and much more time is needed to solve whole-core pin-by-pin problems. In addition, improving the computation speed of the MATRA code is strongly required to satisfy the overall performance of multi-physics coupling calculations. Therefore, a parallel approach to improve and optimize the computing performance of the MATRA code is proposed and verified in this study. The parallel algorithm is embodied in the MATRA code using the MPI communication method, and modification of the previous code structure was minimized. The improvement is confirmed by comparing the results between the single- and multiple-processor algorithms. The speedup and efficiency are also evaluated when increasing the number of processors. The parallel algorithm was implemented in the subchannel code MATRA using MPI. The performance of the parallel algorithm was verified by comparing the results with those from MATRA with a single processor. It is also noticed that the performance of the MATRA code was greatly improved by implementing the parallel algorithm for the 1/8 core and whole core problems
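    The speedup and efficiency evaluated in this record are conventionally defined as S = T1/Tp and E = S/p; a trivial sketch with purely illustrative timings (not MATRA measurements):

```python
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    """Strong-scaling metrics: speedup S = T1 / Tp, efficiency E = S / p."""
    s = t_serial / t_parallel
    return s, s / n_procs

# purely illustrative timings, not MATRA measurements
s, e = speedup_and_efficiency(t_serial=1200.0, t_parallel=180.0, n_procs=8)
print(f"speedup = {s:.2f}, efficiency = {e:.1%}")   # speedup = 6.67, efficiency = 83.3%
```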

  20. Broadcasting a message in a parallel computer

    Science.gov (United States)

    Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN

    2011-08-02

Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
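    A path-based broadcast of the kind described in this patent record can be mimicked with point-to-point messages, each node forwarding the payload to its successor on the path. The mpi4py sketch below uses rank order as a stand-in for the Hamiltonian path and is an illustration, not the patented method.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Stand-in for the Hamiltonian path: visit ranks in numerical order.
message = "payload from the logical root" if rank == 0 else None

if rank > 0:                       # receive from the predecessor on the path
    message = comm.recv(source=rank - 1, tag=0)
if rank < size - 1:                # forward to the successor on the path
    comm.send(message, dest=rank + 1, tag=0)

print(f"rank {rank} has: {message}")
```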

  1. Modeling evolution of crosstalk in noisy signal transduction networks

    Science.gov (United States)

    Tareen, Ammar; Wingreen, Ned S.; Mukhopadhyay, Ranjan

    2018-02-01

    Signal transduction networks can form highly interconnected systems within cells due to crosstalk between constituent pathways. To better understand the evolutionary design principles underlying such networks, we study the evolution of crosstalk for two parallel signaling pathways that arise via gene duplication. We use a sequence-based evolutionary algorithm and evolve the network based on two physically motivated fitness functions related to information transmission. We find that one fitness function leads to a high degree of crosstalk while the other leads to pathway specificity. Our results offer insights on the relationship between network architecture and information transmission for noisy biomolecular networks.
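    The sequence-based evolutionary algorithm is described only at a high level; as a generic illustration of a mutate-and-select loop, here is a minimal sketch in which a toy fitness function stands in for the information-transmission measures used in the paper. Every name and parameter here is an assumption for illustration.

```python
import random

def evolve(genome, fitness, n_generations=1000, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Simple sequence-based hill-climbing evolution: propose one point
    mutation per generation and accept it if fitness does not decrease."""
    best, best_fit = genome, fitness(genome)
    for _ in range(n_generations):
        pos = random.randrange(len(best))
        mutant = best[:pos] + random.choice(alphabet) + best[pos + 1:]
        f = fitness(mutant)
        if f >= best_fit:
            best, best_fit = mutant, f
    return best, best_fit

# placeholder fitness: the paper scores information transmission instead
toy_fitness = lambda seq: seq.count("K")   # purely illustrative
print(evolve("ACDKACDK", toy_fitness, n_generations=100))
```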

  2. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.
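    For readers who want a concrete starting point for the book's topic, a minimal example of data parallelism with the standard-library multiprocessing module follows; the workload function is a made-up stand-in for an expensive, independent task.

```python
from multiprocessing import Pool

def simulate(seed):
    """Stand-in for an expensive, independent unit of work."""
    total = 0
    for i in range(100_000):
        total += (seed * i) % 97
    return total

if __name__ == "__main__":
    with Pool(processes=4) as pool:          # run four worker processes
        results = pool.map(simulate, range(16))
    print(results)
```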

  3. Phase space simulation of collisionless stellar systems on the massively parallel processor

    International Nuclear Information System (INIS)

    White, R.L.

    1987-01-01

A numerical technique for solving the collisionless Boltzmann equation describing the time evolution of a self-gravitating fluid in phase space was implemented on the Massively Parallel Processor (MPP). The code performs calculations for a two-dimensional phase space grid (with one space and one velocity dimension). Some results from calculations are presented. The execution speed of the code is comparable to the speed of a single processor of a Cray-XMP. Advantages and disadvantages of the MPP architecture for this type of problem are discussed. The nearest-neighbor connectivity of the MPP array does not pose a significant obstacle. Future MPP-like machines should have much more local memory and easier access to staging memory and disks in order to be effective for this type of problem
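    The abstract does not spell out the numerical scheme, so the following is only a generic sketch of evolving a distribution function on a two-dimensional (x, v) phase-space grid with a split drift/kick semi-Lagrangian update; the grid layout and function names are assumptions, not the MPP code.

```python
import numpy as np

def advect_phase_space(f, x, v, accel, dt):
    """One split step for df/dt + v df/dx + a df/dv = 0 on a 2-D (x, v) grid,
    semi-Lagrangian with linear interpolation: drift in x, then kick in v."""
    # drift: for each velocity v_j, new f(x) = old f(x - v_j*dt)
    for j, vj in enumerate(v):
        f[:, j] = np.interp(x - vj * dt, x, f[:, j])
    # kick: for each position x_i, new f(v) = old f(v - a_i*dt)
    for i, ai in enumerate(accel):
        f[i, :] = np.interp(v - ai * dt, v, f[i, :])
    return f

# f has shape (len(x), len(v)); accel is the acceleration evaluated at each x
```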

  4. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  5. The evolution of CHROMOMETHYLASES and gene body DNA methylation in plants.

    Science.gov (United States)

    Bewick, Adam J; Niederhuth, Chad E; Ji, Lexiang; Rohr, Nicholas A; Griffin, Patrick T; Leebens-Mack, Jim; Schmitz, Robert J

    2017-05-01

The evolution of gene body methylation (gbM), its origins, and its functional consequences are poorly understood. By pairing the largest collection of transcriptomes (>1000) and methylomes (77) across Viridiplantae, we provide novel insights into the evolution of gbM and its relationship to CHROMOMETHYLASE (CMT) proteins. CMTs are evolutionarily conserved DNA methyltransferases in Viridiplantae. Duplication events gave rise to what are now referred to as CMT1, 2 and 3. Independent losses of CMT1, 2, and 3 in eudicots, of CMT2 and ZMET in monocots and monocots/commelinids, variation in copy number, and non-neutral evolution suggest overlapping or fluid functional evolution of this gene family. DNA methylation within genes is widespread and is found in all major taxonomic groups of Viridiplantae investigated. Genes enriched with methylated CGs (mCG) were also identified in species sister to angiosperms. The proportion of genes and DNA methylation patterns associated with gbM are restricted to angiosperms with a functional CMT3 or ortholog. However, mCG-enriched genes in the gymnosperm Pinus taeda shared some similarities with gbM genes in Amborella trichopoda. Additionally, gymnosperms and ferns share a CMT homolog closely related to CMT2 and 3. Hence, the dependency of gbM on a CMT most likely extends to all angiosperms and possibly gymnosperms and ferns. The resulting gene family phylogeny of CMT transcripts from the most diverse sampling of plants to date redefines our understanding of CMT evolution and its evolutionary consequences on DNA methylation. Future functional tests of homologous and paralogous CMTs will uncover novel roles and consequences for the epigenome.

  6. Parallel Algorithms for Groebner-Basis Reduction

    Science.gov (United States)

    1987-09-25

Technical report: Parallel Algorithms for Groebner-Basis Reduction, from the project "Productivity Engineering in the UNIX Environment".

  7. A possibility of parallel and anti-parallel diffraction measurements on ...

    Indian Academy of Sciences (India)

... the resolution property of the other one, the anti-parallel position, is very poor. ... in a wide angular region using the BPC monochromator at the MF condition by showing ... and N Nimura, Proceedings of the 7th World Conference on Neutron Radiography.

  8. Research in Parallel Algorithms and Software for Computational Aerosciences

    Science.gov (United States)

    Domel, Neal D.

    1996-01-01

Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  9. A language for data-parallel and task parallel programming dedicated to multi-SIMD computers. Contributions to hydrodynamic simulation with lattice gases

    International Nuclear Information System (INIS)

    Pic, Marc Michel

    1995-01-01

Parallel programming covers task-parallelism and data-parallelism, and many problems need both. Multi-SIMD computers allow a hierarchical approach to these parallelisms. The T++ language, based on C++, is designed to exploit Multi-SIMD computers using a programming paradigm that extends array programming to task management. Our language introduces arrays of independent tasks executed separately (MIMD) on subsets of processors with identical behaviour (SIMD), in order to express the hierarchical inclusion of data-parallelism in task-parallelism. To manipulate tasks and data in a symmetrical way, we propose meta-operations which have the same behaviour on task arrays and on data arrays. We explain how to implement this language on our parallel computer SYMPHONIE in order to benefit from the locally shared memory, the hardware virtualization, and the multiple communication networks. We also analyse a typical application of such an architecture. Finite element schemes for fluid mechanics need powerful parallel computers and require substantial floating-point capability. Lattice gases are an alternative to such simulations. Boolean lattice gases are simple, stable and modular and need no floating-point computation, but they include numerical noise. Boltzmann lattice gases offer high computational precision but need floating-point arithmetic and are only locally stable. We propose a new scheme, called multi-bit, which keeps the advantages of each Boolean model to which it is applied, with high numerical precision and reduced noise. Experiments on viscosity, physical behaviour, noise reduction and spurious invariants are shown, and implementation techniques for parallel Multi-SIMD computers are detailed. (author) [fr]

  10. A task parallel implementation of fast multipole methods

    KAUST Repository

    Taura, Kenjiro; Nakashima, Jun; Yokota, Rio; Maruyama, Naoya

    2012-01-01

    This paper describes a task parallel implementation of ExaFMM, an open source implementation of fast multipole methods (FMM), using a lightweight task parallel library MassiveThreads. Although there have been many attempts on parallelizing FMM

  11. Optimisation of a parallel ocean general circulation model

    OpenAIRE

    M. I. Beare; D. P. Stevens

    1997-01-01

This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by...

  12. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (parallelization). Progress report fiscal 1996

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Hideo; Kawai, Wataru; Nemoto, Toshiyuki [Fujitsu Ltd., Tokyo (Japan); and others

    1997-12-01

Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system at the Center for Promotion of Computational Science and Engineering in the Japan Atomic Energy Research Institute. These results are reported in three parts, i.e., the vectorization part, the parallelization part and the porting part. In this report, we describe the parallelization. In the parallelization part, the parallelization of the 2-dimensional relativistic electromagnetic particle code EM2D, the cylindrical direct numerical simulation code CYLDNS and the molecular dynamics code DGR, which simulates radiation damage in diamond crystals, is described. In the vectorization part, the vectorization of the two- and three-dimensional discrete ordinates simulation code DORT-TORT, the gas dynamics analysis code FLOWGR and the relativistic Boltzmann-Uehling-Uhlenbeck simulation code RBUU is described. Then, in the porting part, the porting of the reactor safety analysis codes RELAP5/MOD3.2 and RELAP5/MOD3.2.1.2, the nuclear data processing system NJOY and the 2-D multigroup discrete ordinates transport code TWOTRAN-II is described. A survey for the porting of the command-driven interactive data analysis plotting program IPLOT is also described. (author)

  13. Configuration affects parallel stent grafting results.

    Science.gov (United States)

    Tanious, Adam; Wooster, Mathew; Armstrong, Paul A; Zwiebel, Bruce; Grundy, Shane; Back, Martin R; Shames, Murray L

    2018-05-01

    A number of adjunctive "off-the-shelf" procedures have been described to treat complex aortic diseases. Our goal was to evaluate parallel stent graft configurations and to determine an optimal formula for these procedures. This is a retrospective review of all patients at a single medical center treated with parallel stent grafts from January 2010 to September 2015. Outcomes were evaluated on the basis of parallel graft orientation, type, and main body device. Primary end points included parallel stent graft compromise and overall endovascular aneurysm repair (EVAR) compromise. There were 78 patients treated with a total of 144 parallel stents for a variety of pathologic processes. There was a significant correlation between main body oversizing and snorkel compromise (P = .0195) and overall procedural complication (P = .0019) but not with endoleak rates. Patients were organized into the following oversizing groups for further analysis: 0% to 10%, 10% to 20%, and >20%. Those oversized into the 0% to 10% group had the highest rate of overall EVAR complication (73%; P = .0003). There were no significant correlations between any one particular configuration and overall procedural complication. There was also no significant correlation between total number of parallel stents employed and overall complication. Composite EVAR configuration had no significant correlation with individual snorkel compromise, endoleak, or overall EVAR or procedural complication. The configuration most prone to individual snorkel compromise and overall EVAR complication was a four-stent configuration with two stents in an antegrade position and two stents in a retrograde position (60% complication rate). The configuration most prone to endoleak was one or two stents in retrograde position (33% endoleak rate), followed by three stents in an all-antegrade position (25%). There was a significant correlation between individual stent configuration and stent compromise (P = .0385), with 31

  14. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    International Nuclear Information System (INIS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-01-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines

  15. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    Science.gov (United States)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
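    The parallel appeal of polynomial smoothing noted in these two records is that it relies only on matrix-vector products, which have no sequential dependence between unknowns. A minimal sketch of a standard Chebyshev smoother for a symmetric positive definite system follows; the eigenvalue bounds and polynomial degree are caller-supplied assumptions, and this is not the authors' implementation.

```python
import numpy as np

def chebyshev_smooth(A, b, x, lam_min, lam_max, degree=4):
    """Degree-k Chebyshev smoother for SPD A, using only mat-vecs (no
    sequential dependencies between unknowns, unlike Gauss-Seidel).
    lam_min/lam_max bound the eigenvalues the smoother should target."""
    theta = 0.5 * (lam_max + lam_min)
    delta = 0.5 * (lam_max - lam_min)
    sigma = theta / delta
    rho = 1.0 / sigma
    r = b - A @ x
    d = r / theta
    for _ in range(degree):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# usage sketch (eigenvalue bounds would normally come from a few power iterations)
# x = chebyshev_smooth(A, b, np.zeros_like(b), lam_min=0.1 * lam_max, lam_max=lam_max)
```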

  16. Scalable parallel prefix solvers for discrete ordinates transport

    International Nuclear Information System (INIS)

    Pautz, S.; Pandya, T.; Adams, M.

    2009-01-01

The well-known 'sweep' algorithm for inverting the streaming-plus-collision term in first-order deterministic radiation transport calculations has some desirable numerical properties. However, it suffers from parallel scaling issues caused by a lack of concurrency. The maximum degree of concurrency, and thus the maximum parallelism, grows more slowly than the problem size for sweeps-based solvers. We investigate a new class of parallel algorithms that involves recasting the streaming-plus-collision problem in prefix form and solving via cyclic reduction. This method, although computationally more expensive at low levels of parallelism than the sweep algorithm, offers better theoretical scalability properties. Previous work has demonstrated this approach for one-dimensional calculations; we show how to extend it to multidimensional calculations. Notably, for multiple dimensions it appears that this approach is limited to long-characteristics discretizations; other discretizations cannot be cast in prefix form. We implement two variants of the algorithm within the radlib/SCEPTRE transport code library at Sandia National Laboratories and show results on two different massively parallel systems. Both the 'forward' and 'symmetric' solvers behave similarly, scaling well to larger degrees of parallelism than sweeps-based solvers. We do observe some issues at the highest levels of parallelism (relative to the system size) and discuss possible causes. We conclude that this approach shows good potential for future parallel systems, but the parallel scalability will depend heavily on the architecture of the communication networks of these systems. (authors)
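    The "prefix form" mentioned in this record builds on a parallel prefix (scan) primitive. As a generic illustration of why a scan has logarithmic depth, here is a Hillis-Steele inclusive prefix sum written sequentially in NumPy; each strided update inside a round is independent and is what would run concurrently on a parallel machine. This is an illustration of the primitive only, not the transport solver itself.

```python
import numpy as np

def inclusive_scan(a):
    """Hillis-Steele inclusive prefix sum: ceil(log2 n) rounds; within a
    round all element updates are independent, hence fully parallel."""
    x = np.asarray(a, dtype=float).copy()
    stride = 1
    while stride < len(x):
        # all updates in this round are independent of one another
        x[stride:] = x[stride:] + x[:-stride]
        stride *= 2
    return x

print(inclusive_scan([1, 2, 3, 4, 5]))   # [ 1.  3.  6. 10. 15.]
```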

  17. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from the discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
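    The conjugate gradient iteration referred to in this record is short enough to sketch in full; in a domain-decomposition setting, the matrix-vector product and the inner products are where communication occurs. The sketch below is a generic unpreconditioned CG on a random SPD test system, not the code described in the report.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Unpreconditioned CG for SPD A. In a domain-decomposition setting the
    A @ p product and the dot products are the communication points."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# small self-check on a random SPD system (illustrative only)
n = 50
M = np.random.rand(n, n)
A = M @ M.T + n * np.eye(n)
b = np.random.rand(n)
assert np.allclose(A @ conjugate_gradient(A, b), b, atol=1e-6)
```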

  18. Xyce parallel electronic simulator : users' guide.

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is

  19. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  20. Parallelization and automatic data distribution for nuclear reactor simulations

    International Nuclear Information System (INIS)

    Liebrock, L.M.

    1997-01-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed