WorldWideScience

Sample records for core parallels climate

  1. Exploitation of Parallelism in Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Baer, F.; Tribbia, J.J.; Williamson, D.L.

    1999-03-01

    The US Department of Energy (DOE), through its CHAMMP initiative, hopes to develop the capability to make meaningful regional climate forecasts on time scales exceeding a decade, such capability to be based on numerical prediction type models. We propose research to contribute to each of the specific items enumerated in the CHAMMP announcement (Notice 91-3); i.e., to consider theoretical limits to prediction of climate and climate change on appropriate time scales, to develop new mathematical techniques to utilize massively parallel processors (MPP), to actually utilize MPPs as a research tool, and to develop improved representations of some processes essential to climate prediction. In particular, our goals are to: (1) Reconfigure the prediction equations such that the time iteration process can be compressed by use of MPP architecture, and to develop appropriate algorithms. (2) Develop local subgrid scale models which can provide time and space dependent parameterization for a state-of-the-art climate model to minimize the scale resolution necessary for a climate model, and to utilize MPP capability to simultaneously integrate those subgrid models and their statistics. (3) Capitalize on the MPP architecture to study the inherent ensemble nature of the climate problem. By careful choice of initial states, many realizations of the climate system can be determined concurrently and more realistic assessments of the climate prediction can be made in a realistic time frame. To explore these initiatives, we will exploit all available computing technology, and in particular MPP machines. We anticipate that significant improvements in modeling of climate on the decadal and longer time scales for regional space scales will result from our efforts.

  2. ParCAT: Parallel Climate Analysis Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Steed, Chad A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Ricciuto, Daniel M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Thornton, Peter E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Wehner, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]; Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2013-01-01

    Climate science is employing increasingly complex models and simulations to analyze the past and predict the future of Earth's climate. This growth in complexity is creating a widening gap between the data being produced and the ability to analyze the datasets. Parallel computing tools are necessary to analyze, compare, and interpret the simulation data. The Parallel Climate Analysis Toolkit (ParCAT) provides basic tools to efficiently use parallel computing techniques to make analysis of these datasets manageable. The toolkit provides the ability to compute spatio-temporal means, differences between runs or differences between averages of runs, and histograms of the values in a data set. ParCAT is implemented as a command-line utility written in C. This allows for easy integration in other tools and allows for use in scripts. This also makes it possible to run ParCAT on many platforms from laptops to supercomputers. ParCAT outputs NetCDF files so it is compatible with existing utilities such as Panoply and UV-CDAT. This paper describes ParCAT and presents results from some example runs on the Titan system at ORNL.
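
    The abstract describes ParCAT as an MPI-based C utility; its actual interfaces are not reproduced here. The sketch below is a minimal, hypothetical illustration of the kind of operation it parallelizes: a spatio-temporal mean computed by reducing per-rank partial sums (the synthetic slab stands in for data ParCAT would read from NetCDF).

        /* Sketch: parallel spatio-temporal mean in the style of ParCAT.
           Each MPI rank sums its local slab of a (time x lat x lon)
           field; MPI_Allreduce combines the partial sums. Illustrative
           names and sizes, not ParCAT's actual API. */
        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, nranks;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            long n = 1000000;                     /* points per rank */
            double *slab = malloc(n * sizeof *slab);
            for (long i = 0; i < n; i++)          /* placeholder data */
                slab[i] = (double)rank;

            double local = 0.0, total = 0.0;
            for (long i = 0; i < n; i++)
                local += slab[i];
            MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM,
                          MPI_COMM_WORLD);

            if (rank == 0)
                printf("mean = %f\n", total / ((double)n * nranks));
            free(slab);
            MPI_Finalize();
            return 0;
        }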

  3. Improved Parallel Apriori Algorithm for Multi-cores

    Directory of Open Access Journals (Sweden)

    Swati Rustogi

    2017-04-01

    Full Text Available Apriori is one of the most popular data mining algorithms, used for mining hidden relationships in large data. With parallelism, a large data set can be mined in a smaller amount of time. Apart from costly distributed systems, a computer supporting a multi-core environment can be used to apply parallelism. In this paper an improved Apriori algorithm for the multi-core environment is proposed. The main contributions of this paper are: (1) an efficient Apriori algorithm that applies data parallelism in a multi-core environment by reducing the time taken to count the frequency of candidate itemsets; (2) an evaluation of the proposed algorithm's performance for multiple cores on the basis of speedup; (3) a comparison of the proposed algorithm with other such parallel algorithms, showing an improvement of more than 15% in preliminary experiments. A sketch of the parallel counting step appears below.
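
    The abstract does not give the algorithm itself; the following OpenMP sketch shows one plausible form of the data-parallel support-counting step it describes. All sizes, data layouts and names are hypothetical.

        /* Sketch: data-parallel candidate support counting for Apriori.
           Transactions are split across cores; each thread counts in a
           private array and the counts are merged once at the end. */
        #include <omp.h>
        #include <stdio.h>
        #include <string.h>

        #define NTRANS 10000
        #define NITEMS 64
        #define NCAND  256
        #define K      3                     /* itemset size */

        int  transactions[NTRANS][NITEMS];   /* 0/1 item presence */
        int  candidates[NCAND][K];           /* item ids per candidate */
        long support[NCAND];

        void count_support(void)
        {
            memset(support, 0, sizeof support);
            #pragma omp parallel
            {
                long local[NCAND] = {0};
                #pragma omp for schedule(static)
                for (int t = 0; t < NTRANS; t++)
                    for (int c = 0; c < NCAND; c++) {
                        int hit = 1;
                        for (int k = 0; k < K; k++)
                            if (!transactions[t][candidates[c][k]]) {
                                hit = 0;
                                break;
                            }
                        local[c] += hit;
                    }
                #pragma omp critical
                for (int c = 0; c < NCAND; c++)
                    support[c] += local[c];
            }
        }

        int main(void)
        {
            for (int t = 0; t < NTRANS; t++)     /* synthetic input */
                for (int i = 0; i < NITEMS; i++)
                    transactions[t][i] = ((t * 31 + i * 7) % 3 == 0);
            for (int c = 0; c < NCAND; c++)
                for (int k = 0; k < K; k++)
                    candidates[c][k] = (c + 11 * k) % NITEMS;
            count_support();
            printf("support of candidate 0: %ld\n", support[0]);
            return 0;
        }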

  4. Developing Parallel Application on Multi-core Mobile Phone

    Directory of Open Access Journals (Sweden)

    Dhuha Basheer Abdullah

    2013-12-01

    Full Text Available One cannot imagine daily life today without mobile devices such as mobile phones or PDAs. They tend to become one's mobile computer, offering all the features one might need on the way. As a result, devices are less expensive and include a huge number of high-end technological components, which also makes them attractive for scientific research. Today multi-core mobile phones are taking all the attention. Relying on the principles of task and data parallelism, we propose in this paper a real-time mobile lane departure warning system (M-LDWS) based on a carefully designed parallel programming framework on a quad-core mobile phone, and show how to increase the utilization of processors to achieve an improvement in the system's runtime.

  5. TECA: A Parallel Toolkit for Extreme Climate Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Prabhat, Mr; Ruebel, Oliver; Byna, Surendra; Wu, Kesheng; Li, Fuyu; Wehner, Michael; Bethel, E. Wes

    2012-03-12

    We present TECA, a parallel toolkit for detecting extreme events in large climate datasets. Modern climate datasets expose parallelism across a number of dimensions: spatial locations, timesteps and ensemble members. We design TECA to exploit these modes of parallelism and demonstrate a prototype implementation for detecting and tracking three classes of extreme events: tropical cyclones, extra-tropical cyclones and atmospheric rivers. We process a modern TB-sized CAM5 simulation dataset with TECA, and demonstrate good runtime performance for the three case studies.

  6. Parallel Algorithm Core: A Novel IPSec Algorithm Engine for Both Exploiting Parallelism and Improving Scalability

    Institute of Scientific and Technical Information of China (English)

    Dong-Nian Cheng; Yu-Xiang Hu; Cai-Xia Liu

    2008-01-01

    To deal with the challenges of both computational complexity and algorithm scalability posed by the design of an IPSec engine, we develop a parallel algorithm core, called PAC, for use in an IPSec engine, which can meet the requirements of both exploiting the parallelism present in IPSec packets and offering scalability in both the scale and the types of cryptographic algorithms. With three kinds of parallelism and two kinds of transparency defined, a novel hierarchy of the specifically designed parallel structure for PAC is presented, followed by corresponding mechanisms. The scalability of PAC is examined with a simulation. For the purpose of performance evaluation, a Quasi Birth-and-Death (QBD) process is then established to model a simplified version of the proposed PAC. Performance of PAC in terms of two representative measures, throughput and mean packet waiting time, is numerically investigated. A comparison study is done on a simulation basis. Conclusions are finally drawn to provide a helpful guideline for both the design and implementation of our proposal.

  7. Parallel Access of Out-Of-Core Dense Extendible Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow J; Rotem, Doron

    2007-07-26

    Datasets used in scientific and engineering applications are often modeled as dense multi-dimensional arrays. For very large datasets, the corresponding array models are typically stored out-of-core as array files. The array elements are mapped onto linear consecutive locations that correspond to the linear ordering of the multi-dimensional indices. Two conventional mappings used are the row-major order and the column-major order of multi-dimensional arrays. Such conventional mappings of dense array files highly limit the performance of applications and the extendibility of the dataset. Firstly, an array file that is organized in, say, row-major order causes applications that subsequently access the data in column-major order to have abysmal performance. Secondly, any subsequent expansion of the array file is limited to only one dimension. Expansions of such out-of-core conventional arrays along arbitrary dimensions require storage reorganization that can be very expensive. We present a solution for storing out-of-core dense extendible arrays that resolves these two limitations. The method uses a mapping function F*(), together with information maintained in axial vectors, to compute the linear address of an extendible array element when passed its k-dimensional index. We also give the inverse function, F*⁻¹(), for deriving the k-dimensional index when given the linear address. We show how the mapping function, in combination with MPI-IO and a parallel file system, allows for the growth of the extendible array without reorganization and no significant performance degradation of applications accessing elements in any desired order. We give methods for reading and writing sub-arrays into and out of parallel applications that run on a cluster of workstations. The axial vectors are replicated and maintained in each node that accesses sub-array elements.
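
    The abstract's F*() and F*⁻¹() are defined with axial vectors so the array can grow along any dimension; those details are in the paper, not here. As a point of reference, this sketch shows the fixed-shape, row-major address pair that F*() generalizes.

        /* Sketch: k-dimensional index <-> linear address for a fixed
           row-major array. The paper's F*() extends this with axial
           vectors so the array file can be extended along arbitrary
           dimensions without reorganization; that logic is omitted. */
        #include <stdio.h>

        long linear_address(int k, const long idx[], const long dims[])
        {
            long addr = 0;
            for (int d = 0; d < k; d++)
                addr = addr * dims[d] + idx[d];
            return addr;
        }

        void k_index(int k, long addr, const long dims[], long idx[])
        {
            for (int d = k - 1; d >= 0; d--) {
                idx[d] = addr % dims[d];
                addr /= dims[d];
            }
        }

        int main(void)
        {
            long dims[3] = {4, 5, 6}, idx[3] = {2, 3, 4}, back[3];
            long a = linear_address(3, idx, dims);
            k_index(3, a, dims, back);
            printf("addr=%ld -> (%ld,%ld,%ld)\n", a, back[0], back[1], back[2]);
            return 0;
        }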

  8. Parallel community climate model: Description and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others]

    1996-07-15

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.
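
    As a minimal illustration of the patch decomposition described above (not PCCM2's actual code), the following MPI sketch builds a two-dimensional processor mesh with a Cartesian communicator and assigns each processor the origin of its latitude-longitude patch; grid sizes are hypothetical.

        /* Sketch: geographic patch decomposition with an MPI Cartesian
           topology, in the spirit of PCCM2. Physics would then be
           computed independently on each processor's patch. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int nprocs;
            MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

            int dims[2] = {0, 0};            /* let MPI factor the mesh */
            MPI_Dims_create(nprocs, 2, dims);

            int periods[2] = {0, 1};         /* periodic in longitude */
            MPI_Comm cart;
            MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

            int rank, coords[2];
            MPI_Comm_rank(cart, &rank);
            MPI_Cart_coords(cart, rank, 2, coords);

            const int NLAT = 64, NLON = 128; /* hypothetical global grid */
            int lat0 = coords[0] * NLAT / dims[0];
            int lon0 = coords[1] * NLON / dims[1];
            printf("rank %d owns the patch starting at (lat %d, lon %d)\n",
                   rank, lat0, lon0);

            MPI_Comm_free(&cart);
            MPI_Finalize();
            return 0;
        }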

  9. ParCAT: A Parallel Climate Analysis Toolkit

    Science.gov (United States)

    Haugen, B.; Smith, B.; Steed, C.; Ricciuto, D. M.; Thornton, P. E.; Shipman, G.

    2012-12-01

    Climate science has employed increasingly complex models and simulations to analyze the past and predict the future of our climate. The size and dimensionality of climate simulation data has been growing with the complexity of the models. This growth in data is creating a widening gap between the data being produced and the tools necessary to analyze large, high dimensional data sets. With single-run data sets growing into the 10s, 100s and even 1000s of gigabytes, parallel computing tools are becoming a necessity in order to analyze and compare climate simulation data. The Parallel Climate Analysis Toolkit (ParCAT) provides basic tools that efficiently use parallel computing techniques to narrow the gap between data set size and analysis tools. ParCAT was created as a collaborative effort between climate scientists and computer scientists in order to provide efficient parallel implementations of the computing tools that are of use to climate scientists. Some of the basic functionalities included in the toolkit are the ability to compute spatio-temporal means and variances, differences between two runs and histograms of the values in a data set. ParCAT is designed to facilitate the "heavy lifting" that is required for large, multidimensional data sets. The toolkit does not focus on performing the final visualizations and presentation of results but rather on reducing large data sets to smaller, more manageable summaries. The output from ParCAT is provided in commonly used file formats (NetCDF, CSV, ASCII) to allow for simple integration with other tools. The toolkit is currently implemented as a command line utility, but will likely also provide a C library for developers interested in tighter software integration. Elements of the toolkit are already being incorporated into projects such as UV-CDAT and CMDX. There is also an effort underway to implement portions of the CCSM Land Model Diagnostics package using ParCAT in conjunction with Python and gnuplot.

  10. Parallelizing Climate Data Management System, version 3 (CDMS3)

    Science.gov (United States)

    Nadeau, D.; Williams, D. N.; Painter, J.; Doutriaux, C.

    2015-12-01

    The Climate Data Management System is an object-oriented data management system, specialized for organizing multidimensional, gridded data used in climate analyses of observations and simulations. The basic unit of computation in CDMS3 is the variable, which consists of a multidimensional array representing climate information in four dimensions: time, pressure level, latitude, and longitude. As models become more precise in their computations, the volume of data generated becomes larger and more difficult to handle within the limits of available computational resources. Models today can produce data at time frequencies of one, three, or six hours, with spatial footprints close to those of the satellite data used to drive the models. The time scientists need to analyze the data and retrieve useful information is becoming unmanageable. Parallelizing libraries such as CDMS3 would ease the burden of working with such big datasets. Multiple parallelization approaches are possible. The most obvious is embarrassingly (pleasingly) parallel programming, where each compute node processes one file at a time. A more challenging approach is to send a piece of the data to each node for computation, with each node saving its results in the right place in a shared file as a slab of data; this is possible with the Hierarchical Data Format 5 (HDF5) library using the Message Passing Interface (MPI). A final approach is the use of the Open Multi-Processing API (OpenMP), where a master thread is split into multiple threads for different sections of the main code. Each method has its advantages and disadvantages. This poster brings to light the benefits of each of these methods and seeks an optimal solution for computing climate data analyses in an efficient fashion using one or a mixture of these parallelized methods. A sketch of the first approach follows.
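
    Of the three approaches named in the abstract, the first is the simplest to sketch. The hypothetical MPI program below stripes a list of files across ranks, one file per node at a time; process_file() stands in for whatever per-file analysis the library would perform.

        /* Sketch: "pleasingly parallel" file-at-a-time processing.
           Rank r handles files r, r+nranks, r+2*nranks, ... from the
           command line. Illustrative only, not CDMS3 code. */
        #include <mpi.h>
        #include <stdio.h>

        static void process_file(const char *path)
        {
            printf("processing %s\n", path);  /* placeholder analysis */
        }

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, nranks;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            for (int i = 1 + rank; i < argc; i += nranks)
                process_file(argv[i]);

            MPI_Finalize();
            return 0;
        }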

  11. Exploitation of parallelism in climate models. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Baer, Ferdinand; Tribbia, Joseph J.; Williamson, David L.

    2001-02-05

    This final report includes details on the research accomplished by the grant entitled 'Exploitation of Parallelism in Climate Models' to the University of Maryland. The purpose of the grant was to shed light on (a) how to reconfigure the atmospheric prediction equations such that the time iteration process could be compressed by use of MPP architecture; (b) how to develop local subgrid scale models which can provide time and space dependent parameterization for a state-of-the-art climate model to minimize the scale resolution necessary for a climate model, and to utilize MPP capability to simultaneously integrate those subgrid models and their statistics; and (c) how to capitalize on the MPP architecture to study the inherent ensemble nature of the climate problem. In the process of addressing these issues, we created parallel algorithms with spectral accuracy; we developed a process for concurrent climate simulations; we established suitable model reconstructions to speed up computation; we identified and tested optimum realization statistics; we undertook a number of parameterization studies to better understand model physics; and we studied the impact of subgrid scale motions and their parameterization in atmospheric models.

  12. Parallel Performance of MPI Sorting Algorithms on Dual-Core Processor Windows-Based Systems

    CERN Document Server

    Elnashar, Alaa Ismail

    2011-01-01

    Message Passing Interface (MPI) is widely used to implement parallel programs. Although Windows-based architectures provide the facilities of parallel execution and multi-threading, little attention has been focused on using MPI on these platforms. In this paper we use a dual-core Windows-based platform to study the effect of the number of parallel processes and the number of cores on the performance of three parallel MPI implementations of some sorting algorithms.

  13. Load-balancing algorithms for the parallel community climate model

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.T.; Toonen, B.R.

    1995-01-01

    Implementations of climate models on scalable parallel computer systems can suffer from load imbalances resulting from temporal and spatial variations in the amount of computation required for physical parameterizations such as solar radiation and convective adjustment. We have developed specialized techniques for correcting such imbalances. These techniques are incorporated in a general-purpose, programmable load-balancing library that allows the mapping of computation to processors to be specified as a series of maps generated by a programmer-supplied load-balancing module. The communication required to move from one map to another is performed automatically by the library, without programmer intervention. In this paper, we describe the load-balancing problem and the techniques that we have developed to solve it. We also describe specific load-balancing algorithms that we have developed for PCCM2, a scalable parallel implementation of the Community Climate Model, and present experimental results that demonstrate the effectiveness of these algorithms on parallel computers. The load-balancing library developed in this work is available for use in other climate models.
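
    The library takes maps from a programmer-supplied balancing module; the module itself is not shown in the abstract. A hypothetical sketch of one such module follows: given measured per-column costs, it builds a map with the longest-processing-time greedy heuristic.

        /* Sketch: producing a load-balancing "map" (column -> processor)
           from measured costs, via the longest-processing-time heuristic.
           The library described above would then move data to match. */
        #include <stdio.h>

        #define NCOLS  16
        #define NPROCS 4

        void make_map(const double cost[NCOLS], int map[NCOLS])
        {
            double load[NPROCS] = {0};
            int done[NCOLS] = {0};
            for (int step = 0; step < NCOLS; step++) {
                int c = -1;                   /* costliest unassigned */
                for (int i = 0; i < NCOLS; i++)
                    if (!done[i] && (c < 0 || cost[i] > cost[c]))
                        c = i;
                done[c] = 1;
                int p = 0;                    /* least-loaded processor */
                for (int q = 1; q < NPROCS; q++)
                    if (load[q] < load[p])
                        p = q;
                map[c] = p;
                load[p] += cost[c];
            }
        }

        int main(void)
        {
            double cost[NCOLS];
            int map[NCOLS];
            for (int i = 0; i < NCOLS; i++)
                cost[i] = 1.0 + (i % 5);      /* stand-in for timings */
            make_map(cost, map);
            for (int i = 0; i < NCOLS; i++)
                printf("column %2d -> processor %d\n", i, map[i]);
            return 0;
        }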

  14. Parallel VLSI design for the fast 3-D DWT core algorithm

    Institute of Scientific and Technical Information of China (English)

    WEI Benjie; LIU Mingye; ZHOU Yihua; CHENG Baodong

    2007-01-01

    By studying the core algorithm of a three-dimensional discrete wavelet transform (3-D DWT) in depth, this paper divides it into three one-dimensional discrete wavelet transforms (1-D DWTs). Based on a software implementation of the 3-D DWT, a parallel very large-scale integration (VLSI) architecture design is produced. It needs three dual-port random-access memories (RAM) to store the temporary results and transpose the matrix, and builds up a pipeline model composed of the three 1-D DWTs. In the design, a finite state machine (FSM) is used to control the flow. Compared with the serial mode, the experimental results of the post-synthesis simulation show that the design method is correct and effective. It can increase the processing speed by about 66%, work at 59 MHz, and meet the real-time needs of the video encoder.
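
    The paper's pipeline is hardware (VLSI); as a software analogue of the decomposition it describes, this sketch applies a one-level 1-D transform along each of the three axes in turn, with a Haar filter standing in for the actual filter bank.

        /* Sketch: 3-D DWT as three 1-D passes (x, then y, then z).
           One-level Haar transform used for illustration only. */
        #include <stdio.h>

        #define N 8                        /* per-axis length, even */
        static double vol[N][N][N];

        static void haar1d(double *x, int n, int stride)
        {
            double tmp[N];
            for (int i = 0; i < n / 2; i++) {
                tmp[i]       = (x[2*i*stride] + x[(2*i+1)*stride]) / 2.0;
                tmp[n/2 + i] = (x[2*i*stride] - x[(2*i+1)*stride]) / 2.0;
            }
            for (int i = 0; i < n; i++)
                x[i * stride] = tmp[i];
        }

        static void dwt3d(void)
        {
            for (int j = 0; j < N; j++)        /* 1-D DWT along x */
                for (int k = 0; k < N; k++)
                    haar1d(&vol[0][j][k], N, N * N);
            for (int i = 0; i < N; i++)        /* 1-D DWT along y */
                for (int k = 0; k < N; k++)
                    haar1d(&vol[i][0][k], N, N);
            for (int i = 0; i < N; i++)        /* 1-D DWT along z */
                for (int j = 0; j < N; j++)
                    haar1d(&vol[i][j][0], N, 1);
        }

        int main(void)
        {
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    for (int k = 0; k < N; k++)
                        vol[i][j][k] = i + j + k;   /* synthetic volume */
            dwt3d();
            printf("LLL coefficient: %f\n", vol[0][0][0]);
            return 0;
        }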

  15. MPI-hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2010-03-20

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.
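
    The renderer itself is far more elaborate; the sketch below shows only the hybrid execution structure under study, with MPI ranks across nodes and OpenMP threads across the cores within each node. The loop body is a placeholder for per-ray work.

        /* Sketch: hybrid MPI+OpenMP skeleton. Distributed memory between
           ranks, shared memory among threads inside a rank. */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int provided;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            const int NRAYS = 1 << 20;       /* rays owned by this rank */
            double local = 0.0;

            #pragma omp parallel for reduction(+:local) schedule(dynamic, 1024)
            for (int r = 0; r < NRAYS; r++)
                local += (double)(r % 7);    /* placeholder ray integral */

            double total = 0.0;
            MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                       MPI_COMM_WORLD);
            if (rank == 0)
                printf("composited checksum: %f\n", total);

            MPI_Finalize();
            return 0;
        }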

  16. Tomograph: Highlighting query parallelism in a multi-core system

    NARCIS (Netherlands)

    Gawade, M.M.; Kersten, M.L.

    2013-01-01

    Query parallelism improves serial query execution performance by orders of magnitude. Getting optimal performance from an already parallelized query plan is however difficult due to its dependency on run time factors such as correct operator scheduling, memory pressure, disk io performance, and oper

  17. Hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    Science.gov (United States)

    Howison, M.; Bethel, E. W.; Childs, H.

    2011-10-01

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering - a staple visualization algorithm - on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today, as well as processors capable of running hundreds of concurrent threads (GPUs), we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.

  18. Parallel Likelihood Function Evaluation on Heterogeneous Many-core Systems

    CERN Document Server

    Jarp, Sverre; Leduc, Julien; Nowak, Andrzej; Sneen Lindal, Yngve

    2011-01-01

    This paper describes a parallel implementation that allows the evaluation of the likelihood function for data analysis methods to run cooperatively on heterogeneous computational devices (i.e., CPU and GPU) belonging to a single computational node. The implementation is able to split and balance the workload needed for the evaluation of the likelihood function into corresponding sub-workloads to be executed in parallel on each computational device. The CPU parallelization is implemented using OpenMP, while the GPU implementation is based on OpenCL. A comparison of the performance of these implementations for different configurations and different hardware systems is reported. Tests are based on a real data analysis carried out in the high energy physics community.
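
    The paper's code is not reproduced in the abstract. As a sketch of the CPU half of such a split, the OpenMP fragment below sums a negative log-likelihood over a fraction of the events, with that fraction acting as the CPU/GPU balance knob; the Gaussian model and all names are illustrative.

        /* Sketch: CPU-side partial negative log-likelihood with OpenMP.
           A share `cpu_share` of the events is evaluated here; the rest
           would be dispatched to the GPU (OpenCL) in the system above. */
        #include <math.h>
        #include <omp.h>
        #include <stdio.h>

        #define LOG_SQRT_2PI 0.9189385332046727

        double nll_cpu(const double *x, long n, double mu, double sigma)
        {
            double s = 0.0;
            #pragma omp parallel for reduction(+:s)
            for (long i = 0; i < n; i++) {
                double z = (x[i] - mu) / sigma;
                s += 0.5 * z * z + log(sigma) + LOG_SQRT_2PI;
            }
            return s;
        }

        int main(void)
        {
            enum { N = 1 << 20 };
            static double x[N];
            for (long i = 0; i < N; i++)
                x[i] = 0.1 * (i % 100);          /* synthetic events */

            double cpu_share = 0.4;              /* tuned balance point */
            long n_cpu = (long)(cpu_share * N);  /* remainder -> GPU */
            printf("NLL(CPU part) = %f over %ld events\n",
                   nll_cpu(x, n_cpu, 5.0, 2.0), n_cpu);
            return 0;
        }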

  19. Ice core melt features in relation to Antarctic coastal climate

    NARCIS (Netherlands)

    Kaczmarska, M.; Isaksson, E.; Karlöf, L.; Brandt, O.; Winther, J.G.; van de Wal, R.S.W.; van den Broeke, M.R.; Johnsen, S.J.

    2006-01-01

    Measurement of light intensity transmission was carried out on an ice core S100 from coastal Dronning Maud Land (DML). Ice lenses were observed in digital pictures of the core and recorded as peaks in the light transmittance record. The frequency of ice layer occurrence was compared with climate pro

  1. Scalable Parallelization of Skyline Computation for Multi-core Processors

    DEFF Research Database (Denmark)

    Chester, Sean; Sidlauskas, Darius; Assent, Ira

    2015-01-01

    The skyline is an important query operator for multi-criteria decision making. It reduces a dataset to only those points that offer optimal trade-offs of dimensions. In general, it is very expensive to compute. Recently, multi-core CPU algorithms have been proposed to accelerate the computation o...

  2. POTENTIAL: A Highly Adaptive Core of Parallel Database System

    Institute of Scientific and Technical Information of China (English)

    文继荣; 陈红; 王珊

    2000-01-01

    POTENTIAL is a virtual database machine based on general computing platforms, especially parallel computing platforms. It provides a complete solution to high-performance database systems by a 'virtual processor + virtual data bus + virtual memory' architecture. Virtual processors manage all CPU resources in the system, on which various operations are run. The virtual data bus is responsible for the management of data transmission between associated operations, which forms the hinges of the entire system. Virtual memory provides efficient data storage and buffering mechanisms that conform to data reference behaviors in database systems. The architecture of POTENTIAL is very clear and has many good features, including high efficiency, high scalability, high extensibility, high portability, etc.

  3. Parallel Algorithms for Medical Informatics on Data-Parallel Many-Core Processors

    OpenAIRE

    Moazeni, Maryam

    2013-01-01

    The extensive use of medical monitoring devices has resulted in the generation of tremendous amounts of data. Storage, retrieval, and analysis of such data require platforms that can scale with data growth and adapt to the various behavior of the analysis and processing algorithms. In recent years, many-core processors and more specifically many-core Graphical Processing Units (GPUs) have become one of the most promising platforms for high performance processing of data, due to the massive pa...

  4. First results from core-edge parallel composition in the FACETS project

    Energy Technology Data Exchange (ETDEWEB)

    Cary, John R. [Tech-X Corporation]; Candy, Jeff [General Atomics]; Cohen, Ronald H. [Lawrence Livermore National Laboratory (LLNL)]; Krasheninnikov, Sergei [University of California, San Diego]; McCune, Douglas [Princeton Plasma Physics Laboratory (PPPL)]; Estep, Donald J [Colorado State University, Fort Collins]; Larson, Jay [Argonne National Laboratory (ANL)]; Malony, Allen [University of Oregon]; Pankin, A. [Lehigh University, Bethlehem, PA]; Worley, Patrick H [ORNL]; Carlsson, Johann [Tech-X Corporation]; Hakim, A H [Tech-X Corporation]; Hamill, P [Tech-X Corporation]; Kruger, Scott [Tech-X Corporation]; Miah, Mahmood [Tech-X Corporation]; Muzsala, S [Tech-X Corporation]; Pletzer, Alexander [Tech-X Corporation]; Shasharina, Svetlana [Tech-X Corporation]; Wade-Stein, D [Tech-X Corporation]; Wang, N [Tech-X Corporation]; Balay, Satish [Argonne National Laboratory (ANL)]; McInnes, Lois [Argonne National Laboratory (ANL)]; Zhang, Hong [Argonne National Laboratory (ANL)]; Casper, T. A. [Lawrence Livermore National Laboratory (LLNL)]; Diachin, Lori [Lawrence Livermore National Laboratory (LLNL)]; Epperly, Thomas [Lawrence Livermore National Laboratory (LLNL)]; Rognlien, T. D. [Lawrence Livermore National Laboratory (LLNL)]; Fahey, Mark R [ORNL]; Cobb, John W [ORNL]; Morris, A [University of Oregon]; Shende, Sameer [University of Oregon]; Hammett, Greg [Princeton Plasma Physics Laboratory (PPPL)]; Indireshkumar, K [Tech-X Corporation]; Stotler, D. [Princeton Plasma Physics Laboratory (PPPL)]; Pigarov, A [University of California, San Diego]

    2008-01-01

    FACETS (Framework Application for Core-Edge Transport Simulations), now in its second year, has achieved its first coupled core-edge transport simulations. In the process, a number of accompanying accomplishments were achieved. These include a new parallel core component, a new wall component, improvements in edge and source components, and the framework for coupling all of this together. These accomplishments were a result of an interdisciplinary collaboration among computational physicists, computer scientists, and applied mathematicians on the team.

  5. First results from core-edge parallel composition in the FACETS project

    Science.gov (United States)

    Cary, J. R.; Candy, J.; Cohen, R. H.; Krasheninnikov, S.; McCune, D. C.; Estep, D. J.; Larson, J.; Malony, A. D.; Pankin, A.; Worley, P. H.; Carlsson, J. A.; Hakim, A. H.; Hamill, P.; Kruger, S.; Miah, M.; Muzsala, S.; Pletzer, A.; Shasharina, S.; Wade-Stein, D.; Wang, N.; Balay, S.; McInnes, L.; Zhang, H.; Casper, T.; Diachin, L.; Epperly, T.; Rognlien, T. D.; Fahey, M. R.; Cobb, J.; Morris, A.; Shende, S.; Hammett, G. W.; Indireshkumar, K.; Stotler, D.; Pigarov, A. Y.

    2008-07-01

    FACETS (Framework Application for Core-Edge Transport Simulations), now in its second year, has achieved its first coupled core-edge transport simulations. In the process, a number of accompanying accomplishments were achieved. These include a new parallel core component, a new wall component, improvements in edge and source components, and the framework for coupling all of this together. These accomplishments were a result of an interdisciplinary collaboration among computational physicists, computer scientists, and applied mathematicians on the team.

  6. Hybrid Parallelism for Volume Rendering on Large, Multi- and Many-core Systems

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2011-01-01

    With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.

  7. Parallelization of a three-dimensional whole core transport code DeCART

    Energy Technology Data Exchange (ETDEWEB)

    Jin Young, Cho; Han Gyu, Joo; Ha Yong, Kim; Moon-Hee, Chang [Korea Atomic Energy Research Institute, Yuseong-gu, Daejon (Korea, Republic of)]

    2003-07-01

    Parallelization of the DeCART (deterministic core analysis based on ray tracing) code is presented that reduces the tremendous computing time and memory required in three-dimensional whole-core transport calculations. The parallelization employs the concept of MPI grouping and an MPI/OpenMP mixed scheme as well. Since most of the computing time and memory are used in the MOC (method of characteristics) and multi-group CMFD (coarse mesh finite difference) calculations in DeCART, variables and subroutines related to these two modules are the primary targets for parallelization. Specifically, the ray tracing module was parallelized using a planar domain decomposition scheme and an angular domain decomposition scheme. The parallel performance of the DeCART code is evaluated by solving a rodded variation of the C5G7MOX three-dimensional benchmark problem and a simplified three-dimensional SMART PWR core problem. In the C5G7MOX problem with 24 CPUs, a maximum speedup of 21 is obtained on an IBM Regatta machine and 22 on a Linux cluster in the MOC kernel, which indicates good parallel performance of the DeCART code. In the simplified SMART problem, the memory requirement of about 11 GB in the single-processor case reduces to 940 MB with 24 processors, which means that the DeCART code can now solve large core problems with affordable Linux clusters. (authors)

  8. Ice Core Records of Recent Northwest Greenland Climate

    Science.gov (United States)

    Osterberg, E. C.; Wong, G. J.; Ferris, D.; Lutz, E.; Howley, J. A.; Kelly, M. A.; Axford, Y.; Hawley, R. L.

    2014-12-01

    Meteorological station data from NW Greenland indicate a 3°C temperature rise since 1990, with most of the warming occurring in fall and winter. According to remote sensing data, the NW Greenland ice sheet (GIS) and coastal ice caps are responding with ice mass loss and margin retreat, but the cryosphere's response to previous climate variability is poorly constrained in this region. We are developing multi-proxy records (lake sediment cores, ice cores, glacial geologic data, glaciological models) of Holocene climate change and cryospheric response in NW Greenland to improve projections of future ice loss and sea level rise in a warming climate. As part of our efforts to develop a millennial-length ice core paleoclimate record from the Thule region, we collected and analyzed snow pit samples and short firn cores (up to 21 m) from the coastal region of the GIS (2Barrel site; 76.9317° N, 63.1467° W, 1685 m el.) and the summit of North Ice Cap (76.938° N, 67.671° W, 1273 m el.) in 2011, 2012 and 2014. The 2Barrel ice core record has statistically significant relationships with regional spring and fall Baffin Bay sea ice extent, summertime temperature, and annual precipitation. Here we evaluate relationships between the 2014 North Ice Cap firn core glaciochemical record and climate variability from regional instrumental stations and reanalysis datasets. We compare the coastal North Ice Cap record to more inland records from 2Barrel, Camp Century and NEEM to evaluate spatial and elevational gradients in recent NW Greenland climate change.

  9. Multi-Core DSP Based Parallel Architecture for FMCW SAR Real-Time Imaging

    Directory of Open Access Journals (Sweden)

    C. F. Gu

    2015-12-01

    Full Text Available This paper presents an efficient parallel processing architecture using a multi-core Digital Signal Processor (DSP) to improve the capability of real-time imaging for Frequency Modulated Continuous Wave Synthetic Aperture Radar (FMCW SAR). With the application of the proposed processing architecture, the imaging algorithm is modularized, and each module is efficiently realized by the proposed processing architecture. In each module, the data processing of the different cores is executed in parallel, and the data transmission and data processing of each core are synchronously carried out, so that the processing time for SAR imaging is reduced significantly. Specifically, the time of the corner-turning operation, which is very time-consuming, can be neglected in the computationally intensive case. The proposed parallel architecture is applied to a compact Ku-band FMCW SAR prototype to achieve real-time imagery with 34 cm × 51 cm (range × azimuth) resolution.

  10. Efficient Parallelization of Short-Range Molecular Dynamics Simulations on Many-Core Systems

    CERN Document Server

    Meyer, R

    2013-01-01

    This article describes an algorithm for the parallelization of molecular-dynamics simulations with short-range forces on many-core systems with shared memory. The algorithm is designed to achieve high parallel speedups for strongly inhomogeneous systems like nanodevices or nanostructured materials. In the proposed scheme the calculation of the forces and the generation of neighbor lists is divided into small tasks. The tasks are then executed by a thread pool according to a dependent task schedule. This schedule is constructed in such a way that a particle is never accessed by two threads at the same time. Results from benchmark simulations show that the described algorithm achieves excellent parallel speedups above 80% per processor core for different kinds of systems and all numbers of cores. For inhomogeneous systems the speedups are strongly superior to those obtained with spatial decomposition.

  11. Climatic signals from 76 shallow firn cores in Dronning Maud Land, East Antarctica

    Directory of Open Access Journals (Sweden)

    S. Altnau

    2014-12-01

    Full Text Available The spatial and temporal distribution of surface mass balance (SMB) and δ18O were investigated in the first comprehensive study of a set of 76 firn cores retrieved by various expeditions during the past three decades in Dronning Maud Land, East Antarctica. The large number of cores was used to calculate stacked records of SMB and δ18O, which considerably increased the signal-to-noise ratio compared to earlier studies and facilitated the detection of climatic signals. Considerable differences between cores from the interior plateau and the coastal cores were found. The δ18O records of both the plateau and the ice shelf cores exhibit a slight positive trend over the second half of the 20th century. In the corresponding period, the SMB has a negative trend in the ice shelf cores, but increases on the plateau. Comparison with meteorological data from Neumayer Station revealed that for the ice shelf regions atmospheric dynamic effects are more important than thermodynamics, while on the plateau the temporal variations of SMB and δ18O occur mostly in parallel and thus can be explained by thermodynamic effects. The Southern Annular Mode (SAM) exhibits a positive trend since the mid-1960s, which is assumed to lead to a cooling of East Antarctica. This is not confirmed by the firn core data in our data set. Changes in the atmospheric circulation that result in a changed seasonal distribution of precipitation/accumulation could partly explain the observed features in the ice shelf cores.
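
    Stacking is the step that raises the signal-to-noise ratio here: averaging N records that share a signal but carry roughly independent noise of variance sigma^2 leaves the signal intact while cutting the noise variance to sigma^2/N. A schematic sketch with hypothetical array sizes:

        /* Sketch: stacking annual records from many cores into one mean
           record per year. Real data would have gaps and dating errors
           that this illustration ignores. */
        #include <stdio.h>

        #define NCORES 76
        #define NYEARS 50

        static double records[NCORES][NYEARS];  /* SMB or d18O per core */

        static void stack(double out[NYEARS])
        {
            for (int y = 0; y < NYEARS; y++) {
                double sum = 0.0;
                for (int c = 0; c < NCORES; c++)
                    sum += records[c][y];
                out[y] = sum / NCORES;
            }
        }

        int main(void)
        {
            for (int c = 0; c < NCORES; c++)     /* synthetic input */
                for (int y = 0; y < NYEARS; y++)
                    records[c][y] = y + 0.1 * ((c * 7 + y) % 5);
            double stacked[NYEARS];
            stack(stacked);
            printf("stacked value, year 0: %f\n", stacked[0]);
            return 0;
        }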

  12. Adaptive data-driven parallelization of multi-view video coding on multi-core processor

    Institute of Scientific and Technical Information of China (English)

    PANG Yi; HU WeiDong; SUN LiFeng; YANG ShiQiang

    2009-01-01

    Multi-view video coding (MVC) comprises rich 3D information and is widely used in new visual media, such as 3DTV and free viewpoint TV (FTV). However, even with mainstream computer manufacturers migrating to multi-core processors, the huge computational requirements of MVC currently prohibit its wide use in consumer markets. In this paper, we demonstrate the design and implementation of the first parallel MVC system on the Cell Broadband Engine™ processor, a state-of-the-art multi-core processor. We propose a task-dispatching algorithm which is adaptive and data-driven at the frame level for MVC, and implement a parallel multi-view video decoder with a modified H.264/AVC codec on a real machine. This approach provides scalable speedup (up to 16 times on sixteen cores) through proper local store management, utilization of code locality and SIMD improvement. Decoding speed, speedup and the utilization rate of the cores are reported in the experimental results.

  13. Streamline Integration using MPI-Hybrid Parallelism on a Large Multi-Core Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Camp, David; Garth, Christoph; Childs, Hank; Pugmire, Dave; Joy, Kenneth I.

    2010-11-01

    Streamline computation in a very large vector field data set represents a significant challenge due to the non-local and data-dependent nature of streamline integration. In this paper, we conduct a study of the performance characteristics of hybrid parallel programming and execution as applied to streamline integration on a large, multi-core platform. With multi-core processors now prevalent in clusters and supercomputers, there is a need to understand the impact of these hybrid systems in order to make the best implementation choice. We use two MPI-based distribution approaches based on established parallelization paradigms, parallelize-over-seeds and parallelize-over-blocks, and present a novel MPI-hybrid algorithm for each approach to compute streamlines. Our findings indicate that the work sharing between cores in the proposed MPI-hybrid parallel implementation results in much improved performance and consumes less communication and I/O bandwidth than a traditional, non-hybrid distributed implementation.

  14. Alpine ice cores and ground penetrating radar: combined investigations for glaciological and climatic interpretations of a cold Alpine ice body

    Energy Technology Data Exchange (ETDEWEB)

    Eisen, Olaf; Nixdorf, Uwe [Alfred-Wegener-Inst. fuer Polar- und Meeresforschung, Bremerhaven (Germany)]; Keck, Lothar; Wagenbach, Dietmar [Univ. Heidelberg (Germany). Inst. fuer Umweltphysik]

    2003-11-01

    Accurate interpretation of ice cores as climate archives requires detailed knowledge of their past and present geophysical environment. Different techniques facilitate the determination and reconstruction of the glaciological settings surrounding the drilling location. During the ALPCLIM project, two ice cores containing long-term climate information were retrieved from Colle Gnifetti, Swiss-Italian Alps. Here, we investigate the potential of ground penetrating radar (GPR) surveys, in conjunction with ice core data, to obtain information about the internal structure of the cold Alpine ice body and improve climatic interpretations. Three drill sites are connected by GPR profiles, running parallel and perpendicular to the flow line, thus yielding a three-dimensional picture of the subsurface and enabling the tracking of internal reflection horizons between the locations. As the observed reflections are of isochronic origin, they permit the transfer of age-depth relations between the ice cores. The accuracy of the GPR results is estimated by comparison of transferred timescales with original core datings, independent information from an older ice core, and, based on glaciological surface data, findings from flow modeling. Our study demonstrates that GPR is a mandatory tool for Alpine ice core studies, as it permits mapping of major transitions in physical-chemical properties, transfer of age-depth relations between sites, correlation of signals in core records for interpretation, and establishment of a detailed picture of the flow regime surrounding the climate archive.

  15. Performance modeling and analysis of parallel Gaussian elimination on multi-core computers

    Directory of Open Access Journals (Sweden)

    Fadi N. Sibai

    2014-01-01

    Full Text Available Gaussian elimination is used in many applications and in particular in the solution of systems of linear equations. This paper presents mathematical performance models and analysis of four parallel Gaussian elimination methods (precisely, the Original method and the new Meet in the Middle (MiM) algorithms and their variants with SIMD vectorization) on multi-core systems. Analytical performance models of the four methods are formulated and presented, followed by evaluations of these models with modern multi-core systems' operation latencies. Our results reveal that the four methods generally exhibit good performance scaling with increasing matrix size and number of cores. SIMD vectorization only makes a large difference in performance for a low number of cores. For a large matrix size (n ⩾ 16K), the performance difference between the MiM and Original methods falls from 16× with four cores to 4× with 16K cores. The efficiencies of all four methods are low with 1K cores or more, stressing a major problem of multi-core systems, where the network-on-chip and memory latencies are too high in relation to basic arithmetic operations. Thus Gaussian elimination can greatly benefit from the resources of multi-core systems, but higher performance gains can be achieved if multi-core systems can be designed with lower memory-operation, synchronization, and interconnect-communication latencies, requirements of utmost importance and challenge in the exascale computing age.
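
    The four analytical models are developed in the paper itself; for reference, this is a minimal OpenMP sketch of the row-parallel elimination loop of the kind the baseline ("Original") method parallelizes. No pivoting, and illustrative sizes.

        /* Sketch: OpenMP Gaussian elimination on an augmented matrix
           [A | b]; rows below the pivot are updated in parallel. */
        #include <stdio.h>

        #define N 512
        static double a[N][N + 1];

        static void eliminate(void)
        {
            for (int k = 0; k < N; k++) {
                #pragma omp parallel for schedule(static)
                for (int i = k + 1; i < N; i++) {
                    double f = a[i][k] / a[k][k];
                    for (int j = k; j <= N; j++)
                        a[i][j] -= f * a[k][j];
                }
            }
        }

        int main(void)
        {
            for (int i = 0; i < N; i++) {        /* diagonally dominant */
                for (int j = 0; j < N; j++)
                    a[i][j] = (i == j) ? 2.0 * N : 1.0;
                a[i][N] = 1.0;
            }
            eliminate();
            printf("x[N-1] = %f\n", a[N - 1][N] / a[N - 1][N - 1]);
            return 0;
        }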

  16. Earth's Climate History from Glaciers and Ice Cores

    Science.gov (United States)

    Thompson, Lonnie

    2013-03-01

    Glaciers serve both as recorders and early indicators of climate change. Over the past 35 years our research team has recovered climatic and environmental histories from ice cores drilled in both Polar Regions and from low- to mid-latitude, high-elevation ice fields. Those ice-core-derived proxy records extending back 25,000 years have made it possible to compare glacial stage conditions in the Tropics with those in the Polar Regions. High-resolution records of δ18O (in part a temperature proxy) demonstrate that the current warming at high elevations in the mid- to lower latitudes is unprecedented for the last two millennia, although at many sites the early Holocene was warmer than today. Remarkable similarities between changes in the highland and coastal cultures of Peru and regional climate variability, especially precipitation, imply a strong connection between prehistoric human activities and regional climate. Ice cores retrieved from shrinking glaciers around the world confirm their continuous existence for periods ranging from hundreds to thousands of years, suggesting that current climatological conditions in those regions today are different from those under which these ice fields originated and have been sustained. The ongoing widespread melting of high-elevation glaciers and ice caps, particularly in low to middle latitudes, provides strong evidence that a large-scale, pervasive and, in some cases, rapid change in Earth's climate system is underway. Observations of glacier shrinkage during the 20th and 21st centuries girdle the globe from the South American Andes, the Himalayas, Kilimanjaro (Tanzania, Africa) and glaciers near Puncak Jaya, Indonesia (New Guinea). The history and fate of these ice caps, told through the adventure, beauty and the scientific evidence from some of the world's most remote mountain tops, provide a global perspective for contemporary climate. NSF Paleoclimate Program

  17. Par@Graph – a parallel toolbox for the construction and analysis of large complex climate networks

    Directory of Open Access Journals (Sweden)

    H. Ihshaish

    2015-01-01

    Full Text Available In this paper, we present Par@Graph, a software toolbox to reconstruct and analyze complex climate networks having a large number of nodes (up to at least O(10⁶)) and of edges (up to at least O(10¹²)). The key innovation is an efficient set of parallel software tools designed to leverage the inherent hybrid parallelism in distributed-memory clusters of multi-core machines. The performance of the toolbox is illustrated through networks derived from sea surface height (SSH) data of a global high-resolution ocean model. Less than 8 min are needed on 90 Intel Xeon E5-4650 processors to construct a climate network, including the preprocessing and the correlation of 3 × 10⁵ SSH time series, resulting in a weighted graph with the same number of vertices and about 3 × 10⁶ edges. In less than 5 min on 30 processors, the resulting graph's degree centrality, strength, connected components, eigenvector centrality, entropy and clustering coefficient metrics were obtained. These results indicate that a complete cycle to construct and analyze a large-scale climate network can be completed in under 13 min. Par@Graph therefore facilitates the application of climate network analysis on high-resolution observations and model results, by enabling fast network construction from the calculation of statistical similarities between climate time series. It also enables network analysis at unprecedented scales on a variety of different sizes of input data sets.
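
    Par@Graph's distributed implementation is much larger; the sketch below shows only the construction idea on one shared-memory node: an edge joins two grid points when the Pearson correlation of their time series exceeds a threshold, with the pair loop parallelized by OpenMP. Sizes and threshold are hypothetical.

        /* Sketch: climate-network construction from time-series
           correlations, thresholded, with pairs examined in parallel. */
        #include <math.h>
        #include <omp.h>
        #include <stdio.h>

        #define NODES 512
        #define T     128

        static double series[NODES][T];

        static double pearson(const double *x, const double *y)
        {
            double mx = 0, my = 0, sxx = 0, syy = 0, sxy = 0;
            for (int t = 0; t < T; t++) { mx += x[t]; my += y[t]; }
            mx /= T; my /= T;
            for (int t = 0; t < T; t++) {
                sxx += (x[t] - mx) * (x[t] - mx);
                syy += (y[t] - my) * (y[t] - my);
                sxy += (x[t] - mx) * (y[t] - my);
            }
            return sxy / sqrt(sxx * syy);
        }

        int main(void)
        {
            for (int i = 0; i < NODES; i++)      /* synthetic series */
                for (int t = 0; t < T; t++)
                    series[i][t] = sin(0.1 * t * ((i % 7) + 1));

            long edges = 0;
            #pragma omp parallel for reduction(+:edges) schedule(dynamic)
            for (int i = 0; i < NODES; i++)
                for (int j = i + 1; j < NODES; j++)
                    if (fabs(pearson(series[i], series[j])) > 0.9)
                        edges++;
            printf("edges above threshold: %ld\n", edges);
            return 0;
        }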

  18. A global database with parallel measurements to study non-climatic changes

    Science.gov (United States)

    Venema, Victor; Auchmann, Renate; Aguilar, Enric

    2015-04-01

    In this work we introduce the rationale behind the ongoing compilation of a parallel measurements database, under the umbrella of the International Surface Temperatures Initiative (ISTI) and with the support of the World Meteorological Organization. We intend this database to become instrumental for a better understanding of inhomogeneities affecting the evaluation of long-term changes in daily climate data. Long instrumental climate records are usually affected by non-climatic changes, due to, e.g., relocations and changes in instrumentation, instrument height or data collection and manipulation procedures. These so-called inhomogeneities distort the climate signal and can hamper the assessment of trends and variability. Thus, to study climatic changes we need to accurately distinguish non-climatic and climatic signals. The most direct way to study the influence of non-climatic changes on the distribution and to understand the reasons for these biases is the analysis of parallel measurements representing the old and new situation (in terms of, e.g., instruments or location). According to the limited number of available studies and our understanding of the causes of inhomogeneity, we expect that they will have a strong impact on the tails of the distribution of temperatures and most likely of other climate elements. Our ability to statistically homogenize daily data will be increased by systematically studying different causes of inhomogeneity replicated through parallel measurements. Current studies of non-climatic changes using parallel data are limited to local and regional case studies. However, the effect of specific transitions depends on the local climate, and the most interesting climatic questions are about the systematic large-scale biases produced by transitions that occurred in many regions. Important potentially biasing transitions are the adoption of Stevenson screens, efforts to reduce undercatchment of precipitation or the move to automatic weather

  19. Data Parallel Bin-Based Indexing for Answering Queries on Multi-Core Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Gosink, Luke; Wu, Kesheng; Bethel, E. Wes; Owens, John D.; Joy, Kenneth I.

    2009-06-02

    The multi-core trend in CPUs and general purpose graphics processing units (GPUs) offers new opportunities for the database community. The increase of cores at exponential rates is likely to affect virtually every server and client in the coming decade, and presents database management systems with a huge, compelling disruption that will radically change how processing is done. This paper presents a new parallel indexing data structure for answering queries that takes full advantage of the increasing thread-level parallelism emerging in multi-core architectures. In our approach, our Data Parallel Bin-based Index Strategy (DP-BIS) first bins the base data, and then partitions and stores the values in each bin as a separate, bin-based data cluster. In answering a query, the procedures for examining the bin numbers and the bin-based data clusters offer the maximum possible level of concurrency; each record is evaluated by a single thread and all threads are processed simultaneously in parallel. We implement and demonstrate the effectiveness of DP-BIS on two multi-core architectures: a multi-core CPU and a GPU. The concurrency afforded by DP-BIS allows us to fully utilize the thread-level parallelism provided by each architecture--for example, our GPU-based DP-BIS implementation simultaneously evaluates over 12,000 records with an equivalent number of concurrently executing threads. In comparing DP-BIS's performance across these architectures, we show that the GPU-based DP-BIS implementation requires significantly less computation time to answer a query than the CPU-based implementation. We also demonstrate in our analysis that DP-BIS provides better overall performance than the commonly utilized CPU- and GPU-based projection index. Finally, due to data encoding, we show that DP-BIS accesses significantly smaller amounts of data than index strategies that operate solely on a column's base data; this smaller data footprint is critical for parallel processors.
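
    DP-BIS itself stores bin-based data clusters and runs on GPUs; the much-reduced sketch below shows only the bin-index idea on the CPU: bins fully inside a range query are answered from counts alone, and only boundary bins are scanned, one record per thread. All sizes are hypothetical.

        /* Sketch: range query over a bin-based index. Interior bins are
           counted outright; boundary bins are scanned in parallel. */
        #include <stdio.h>

        #define NREC  (1 << 20)
        #define NBINS 64

        static float data[NREC];
        static int   bin_of[NREC];
        static long  bin_count[NBINS];

        int main(void)
        {
            for (long i = 0; i < NREC; i++)     /* synthetic values in [0,1) */
                data[i] = (float)(i % 1000) / 1000.0f;

            for (long i = 0; i < NREC; i++) {   /* build equal-width bins */
                bin_of[i] = (int)(data[i] * NBINS);
                bin_count[bin_of[i]]++;
            }

            float lo = 0.237f, hi = 0.712f;     /* query: lo <= v < hi */
            int blo = (int)(lo * NBINS), bhi = (int)(hi * NBINS);
            long hits = 0;
            for (int b = blo + 1; b < bhi; b++) /* interior: counts only */
                hits += bin_count[b];
            #pragma omp parallel for reduction(+:hits)
            for (long i = 0; i < NREC; i++)     /* boundary bins: scan */
                if ((bin_of[i] == blo || bin_of[i] == bhi) &&
                    data[i] >= lo && data[i] < hi)
                    hits++;
            printf("hits = %ld\n", hits);
            return 0;
        }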

  20. Multi-core and Many-core Shared-memory Parallel Raycasting Volume Rendering Optimization and Tuning

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark

    2012-01-31

    Given the computing industry trend of increasing processing capacity by adding more cores to a chip, the focus of this work is tuning the performance of a staple visualization algorithm, raycasting volume rendering, for shared-memory parallelism on multi-core CPUs and many-core GPUs. Our approach is to vary tunable algorithmic settings, along with known algorithmic optimizations and two different memory layouts, and measure performance in terms of absolute runtime and L2 memory cache misses. Our results indicate there is a wide variation in runtime performance on all platforms, as much as 254% for the tunable parameters we test on multi-core CPUs and 265% on many-core GPUs, and the optimal configurations vary across platforms, often in a non-obvious way. For example, our results indicate the optimal configurations on the GPU occur at a crossover point between those that maintain good cache utilization and those that saturate computational throughput. This result is likely to be extremely difficult to predict with an empirical performance model for this particular algorithm because it has an unstructured memory access pattern that varies locally for individual rays and globally for the selected viewpoint. Our results also show that optimal parameters on modern architectures are markedly different from those in previous studies run on older architectures. And, given the dramatic performance variation across platforms for both optimal algorithm settings and performance results, there is a clear benefit for production visualization and analysis codes to adopt a strategy for performance optimization through auto-tuning. These benefits will likely become more pronounced in the future as the number of cores per chip and the cost of moving data through the memory hierarchy both increase.

  1. Scalable High-Performance Parallel Design for Network Intrusion Detection Systems on Many-Core Processors

    OpenAIRE

    Jiang, Hayang; Xie, Gaogang; Salamatian, Kavé; Mathy, Laurent

    2013-01-01

    Network Intrusion Detection Systems (NIDSes) face significant challenges coming from the relentless network link speed growth and the increasing complexity of threats. Both hardware-accelerated and parallel software-based NIDS solutions, based on commodity multi-core and GPU processors, have been proposed to overcome these challenges. ...

  2. Parallel processing architecture for H.264 deblocking filter on multi-core platforms

    Science.gov (United States)

    Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao

    2012-03-01

    Massively parallel computing (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high-resolution and high-quality video compression technologies such as H.264. Such solutions not only provide exceptional quality but also efficiency, low power, and low latency, previously unattainable in software-based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve low-latency, low-power, and real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in the H.264 encoder/decoder poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up for higher quality requirements, such as 10-bit pixel depth or a 4:2:2 chroma format, often reduces the throughput of a parallel architecture designed for a lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder can be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit-depths and better chroma subsampling patterns like YUV 4:2:2 or 4:4:4. Low-power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programming model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions. This work describes a scalable parallel architecture for an H.264 compliant deblocking

  3. Parallel computing of discrete element method on multi-core processors

    Institute of Scientific and Technical Information of China (English)

    Yusuke Shigeto; Mikio Sakai

    2011-01-01

    This paper describes parallel simulation techniques for the discrete element method (DEM) on multi-core processors. Recently, multi-core CPU and GPU processors have attracted much attention in accelerating computer simulations in various fields. We propose a new algorithm for multi-thread parallel computation of DEM, which makes effective use of the available memory and accelerates the computation. This study shows that memory usage is drastically reduced by using this algorithm. To show the practical use of DEM in industry, a large-scale powder system is simulated with a complicated drive unit. We compared the performance of the simulation between the latest GPU and CPU processors with optimized programs for each processor. The results show that the difference in performance is not substantial when using either GPUs or CPUs with a multi-thread parallel algorithm. In addition, the DEM algorithm is shown to have high scalability in a multi-thread parallel computation on a CPU.

  4. A global database with parallel measurements to study non-climatic changes

    Science.gov (United States)

    Venema, Victor; Auchmann, Renate; Aguilar, Enric; Auer, Ingeborg; Azorin-Molina, Cesar; Brandsma, Theo; Brunetti, Michele; Dienst, Manuel; Domonkos, Peter; Gilabert, Alba; Lindén, Jenny; Milewska, Ewa; Nordli, Øyvind; Prohom, Marc; Rennie, Jared; Stepanek, Petr; Trewin, Blair; Vincent, Lucie; Willett, Kate; Wolff, Mareile

    2016-04-01

    In this work we introduce the rationale behind the ongoing compilation of a parallel measurements database, in the framework of the International Surface Temperatures Initiative (ISTI) and with the support of the World Meteorological Organization. We intend this database to become instrumental for a better understanding of inhomogeneities affecting the evaluation of long-term changes in daily climate data. Long instrumental climate records are usually affected by non-climatic changes due to, e.g., (i) station relocations, (ii) instrument height changes, (iii) instrumentation changes, (iv) observing environment changes, and (v) different sampling intervals or data collection procedures, among others. These so-called inhomogeneities distort the climate signal and can hamper the assessment of long-term trends and variability of climate. Thus, to study climatic changes, we need to accurately distinguish between non-climatic and climatic signals. The most direct way to study the influence of non-climatic changes on the distribution and to understand the reasons for these biases is the analysis of parallel measurements representing the old and new situation (in terms of, e.g., instruments, location, or radiation shields). According to the limited number of available studies and our understanding of the causes of inhomogeneity, we expect that these inhomogeneities will have a strong impact on the tails of the distribution of air temperatures and most likely of other climate elements. Our ability to statistically homogenize daily data will be increased by systematically studying different causes of inhomogeneity replicated through parallel measurements. Current studies of non-climatic changes using parallel data are limited to local and regional case studies. However, the effect of specific transitions depends on the local climate, and the most interesting climatic questions are about the systematic large-scale biases produced by transitions that occurred in many regions. Important

  5. Climatic signals from 76 shallow firn cores in Dronning Maud Land, East Antarctica

    Directory of Open Access Journals (Sweden)

    S. Altnau

    2015-05-01

    The spatial and temporal distribution of surface mass balance (SMB) and δ18O were investigated in the first comprehensive study of a set of 76 firn cores retrieved by various expeditions during the past 3 decades in Dronning Maud Land, East Antarctica. The large number of cores was used to calculate stacked records of SMB and δ18O, which considerably increased the signal-to-noise ratio compared to earlier studies and facilitated the detection of climatic signals. Considerable differences between cores from the interior plateau and the coastal cores were found. The δ18O of both the plateau and the ice shelf cores exhibit a slight positive trend over the second half of the 20th century. In the corresponding period, the SMB has a negative trend in the ice shelf cores, but increases on the plateau. Comparison with meteorological data from Neumayer Station revealed that for the ice shelf regions, atmospheric dynamic effects are more important than thermodynamic ones, while on the plateau the temporal variations of SMB and δ18O occur mostly in parallel and thus can be explained by thermodynamic effects. The Southern Annular Mode (SAM) has exhibited a positive trend since the mid-1960s, which is assumed to lead to a cooling of East Antarctica. This is not confirmed by the firn core data in our data set. Changes in the atmospheric circulation that result in a changed seasonal distribution of precipitation/accumulation could partly explain the observed features in the ice shelf cores.

  6. Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs

    Directory of Open Access Journals (Sweden)

    Vaughn Matthew

    2010-11-01

    Background Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories based on the data structures which they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However, with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm has been given for this problem, where n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). Results In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it easy to extend even to the out-of-core model, in which case it has an optimal I/O complexity of Θ((n/B) log(n/B)/log(M/B)) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster, both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. Conclusions The bi
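
    A minimal sketch of the sorting-based idea (an illustration, not the paper's implementation): every k-mer contributes one edge between the canonical forms of its (k-1)-length prefix and suffix, and a sort-plus-scan deduplicates the edge list without enumerating the Σ possible extensions of each node. The sort is the step the paper performs in parallel or out of core.

        # Bi-directed de Bruijn edges via sorting; nodes are stored canonically
        # (the lexicographic minimum of a string and its reverse complement).
        COMP = str.maketrans("ACGT", "TGCA")

        def canonical(s):
            rc = s.translate(COMP)[::-1]
            return min(s, rc)

        def debruijn_edges(reads, k):
            edges = []
            for read in reads:
                for i in range(len(read) - k + 1):
                    kmer = read[i:i + k]
                    edges.append((canonical(kmer[:-1]), canonical(kmer[1:])))
            edges.sort()                       # parallel/out-of-core in the paper
            return [e for j, e in enumerate(edges) if j == 0 or e != edges[j - 1]]

        print(debruijn_edges(["ACGTAC", "GTACGT"], 4))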

  7. Recent North West Greenland climate variability documented by NEEM shallow ice cores

    Science.gov (United States)

    Masson-Delmotte, Valérie; Steen-Larsen, Hans-Christian; Popp, Trevor; Vinther, Bo; Oerter, Hans; Ortega, Pablo; White, Jim; Orsi, Anais; Falourd, Sonia; Minster, Benedicte; Jouzel, Jean; Landais, Amaelle; Risi, Camille; Werner, Martin; Swingedouw, Didier; Fettweis, Xavier; Gallée, Hubert; Sveinbjornsdottir, Arny; Gudlaugsdottir, Hera; Box, Jason

    2014-05-01

    Short water stable isotope records obtained from NEEM ice cores (North West Greenland) have been shown to be sensitive to NW Greenland temperature variations and sea-ice extent in the Baffin Bay area (Steen-Larsen et al., JGR, 2011), with maximum snowfall deposition during summer, therefore providing information complementary to other Greenland ice core records. At the NEEM deep drilling camp, several snow pits and shallow ice cores have been retrieved and analysed at high resolution (seasonal to annual) for water stable isotopes using mass spectrometry and laser instruments in order to document recent climate variability, complementing and facilitating the interpretation of the long records obtained from the deep ice core, which extends back to the last interglacial period (NEEM, Nature, 2013). The different pits and shallow ice core records allow us to document the signal-to-noise ratio and to produce a robust stack back to 1750, and up to 2011. The stack record of annual mean δ18O depicts a recent isotopic enrichment in parallel with the Greenland warming inferred from coastal weather stations, and shows that many features of decadal variations are in fact well captured by the low resolution profiles measured along the deep ice core. Recent variations can therefore be compared to long-term trends and centennial variations of the last Holocene, documented at about 5 year resolution. For the past decades to centuries, the NEEM isotopic records are compared with estimations and simulations of local temperature for different seasons, results from NEEM borehole temperature inversions, δ18O records from other Greenland ice cores, large scale modes of variability (NAO and AMO), and with simulations from atmospheric general circulation models equipped with water stable isotopes.

  8. Ice core and climate reanalysis analogs to predict Antarctic and Southern Hemisphere climate changes

    Science.gov (United States)

    Mayewski, P. A.; Carleton, A. M.; Birkel, S. D.; Dixon, D.; Kurbatov, A. V.; Korotkikh, E.; McConnell, J.; Curran, M.; Cole-Dai, J.; Jiang, S.; Plummer, C.; Vance, T.; Maasch, K. A.; Sneed, S. B.; Handley, M.

    2017-01-01

    A primary goal of the SCAR (Scientific Committee on Antarctic Research) initiated AntClim21 (Antarctic Climate in the 21st Century) Scientific Research Programme is to develop analogs for understanding past, present and future climates for the Antarctic and Southern Hemisphere. In this contribution to AntClim21 we provide a framework for achieving this goal that includes: a description of basic climate parameters; a comparison of existing climate reanalyses; and ice core sodium records as proxies for the frequencies of marine air mass intrusion spanning the past ∼2000 years. The resulting analog examples include: natural variability; a continuation of the current trend in Antarctic and Southern Ocean climate, characterized by some regions of warming and some of cooling at the surface of the Southern Ocean; Antarctic ozone healing; a generally warming climate; and separate increases in the meridional and zonal winds. We emphasize changes in atmospheric circulation because the atmosphere rapidly transports heat, moisture, momentum, and pollutants throughout the middle to high latitudes. In addition, atmospheric circulation interacts with temporal variations (synoptic to monthly scales, inter-annual, decadal, etc.) of sea ice extent and concentration. We also investigate associations between Antarctic atmospheric circulation features, notably the Amundsen Sea Low (ASL), and primary climate teleconnections including the SAM (Southern Annular Mode), ENSO (El Niño Southern Oscillation), the Pacific Decadal Oscillation (PDO), the AMO (Atlantic Multidecadal Oscillation), and solar irradiance variations.

  9. Parallel structures for disaster risk reduction and climate change adaptation in Southern Africa

    Directory of Open Access Journals (Sweden)

    Per Becker

    2013-01-01

    During the last decade, the interest of the international community in the concepts of disaster risk reduction and climate change adaptation has been growing immensely. Even though an increasing number of scholars seem to view these concepts as two sides of the same coin (at least when not considering the potentially positive effects of climate change), in practice the two concepts have developed in parallel rather than in an integrated manner when it comes to policy, rhetoric and funding opportunities amongst international organisations and donors. This study investigates the extent of the creation of parallel structures for disaster risk reduction and climate change adaptation in the Southern African Development Community (SADC) region. The chosen methodology for the study is a comparative case study, and the data are collected through focus groups and content analysis of documentary sources, as well as interviews with key informants. The results indicate that parallel structures for disaster risk reduction and climate change adaptation have been established in all but one of the studied countries. The qualitative interviews performed in some of the countries indicate that stakeholders in disaster risk reduction view this duplication of structures as unfortunate, inefficient and a fertile setup for conflict over resources for the implementation of similar activities. Additional research is called for in order to study the concrete effects of having these parallel structures as a foundation for advocacy for more efficient future disaster risk reduction and climate change adaptation.

  10. Evidence for parallel adaptation to climate across the natural range of Arabidopsis thaliana.

    Science.gov (United States)

    Stearns, Frank W; Fenster, Charles B

    2013-07-01

    How organisms adapt to different climate habitats is a key question in evolutionary ecology and biological conservation. Species distributions are often determined by climate suitability. Consequently, the anthropogenic impact on Earth's climate is of key concern to conservation efforts because of our relatively poor understanding of the ability of populations to track and evolve in response to climate change. Here, we investigate the ability of Arabidopsis thaliana to occupy climate space by quantifying the extent to which different climate regimes are accessible to different A. thaliana genotypes, using publicly available data from a large-scale genotyping project and from a worldwide climate database. The genetic distance calculated from 149 single-nucleotide polymorphisms (SNPs) among 60 lineages of A. thaliana was compared to the corresponding climate distance among collection localities calculated from nine different climatic factors. A. thaliana was found to be highly labile when adapting to novel climate space, suggesting that populations may experience few constraints when adapting to changing climates. Our results also provide evidence of parallel or convergent evolution at the molecular level, supporting recent generalizations regarding the genetics of adaptation.

  11. COLLABORATIVE RESEARCH: Parallel Analysis Tools and New Visualization Techniques for Ultra-Large Climate Data Set

    Energy Technology Data Exchange (ETDEWEB)

    Middleton, Don [Co-PI]; Haley, Mary

    2014-12-10

    ParVis was a project funded under LAB 10-05: “Earth System Modeling: Advanced Scientific Visualization of Ultra-Large Climate Data Sets”. Argonne was the lead lab with partners at PNNL, SNL, NCAR and UC-Davis. This report covers progress from January 1st, 2013 through Dec 1st, 2014. Two previous reports covered the period from Summer, 2010, through September 2011 and October 2011 through December 2012, respectively. While the project was originally planned to end on April 30, 2013, personnel and priority changes allowed many of the institutions to continue work through FY14 using existing funds. A primary focus of ParVis was introducing parallelism to climate model analysis to greatly reduce the time-to-visualization for ultra-large climate data sets. Work in the first two years was conducted on two tracks with different time horizons: one track to provide immediate help to climate scientists already struggling to apply their analysis to existing large data sets and another focused on building a new data-parallel library and tool for climate analysis and visualization that will give the field a platform for performing analysis and visualization on ultra-large datasets for the foreseeable future. In the final 2 years of the project, we focused mostly on the new data-parallel library and associated tools for climate analysis and visualization.

  12. Parallel analysis tools and new visualization techniques for ultra-large climate data set

    Energy Technology Data Exchange (ETDEWEB)

    Middleton, Don [National Center for Atmospheric Research, Boulder, CO (United States); Haley, Mary [National Center for Atmospheric Research, Boulder, CO (United States)

    2014-12-10

    ParVis was a project funded under LAB 10-05: “Earth System Modeling: Advanced Scientific Visualization of Ultra-Large Climate Data Sets”. Argonne was the lead lab with partners at PNNL, SNL, NCAR and UC-Davis. This report covers progress from January 1st, 2013 through Dec 1st, 2014. Two previous reports covered the period from Summer, 2010, through September 2011 and October 2011 through December 2012, respectively. While the project was originally planned to end on April 30, 2013, personnel and priority changes allowed many of the institutions to continue work through FY14 using existing funds. A primary focus of ParVis was introducing parallelism to climate model analysis to greatly reduce the time-to-visualization for ultra-large climate data sets. Work in the first two years was conducted on two tracks with different time horizons: one track to provide immediate help to climate scientists already struggling to apply their analysis to existing large data sets and another focused on building a new data-parallel library and tool for climate analysis and visualization that will give the field a platform for performing analysis and visualization on ultra-large datasets for the foreseeable future. In the final 2 years of the project, we focused mostly on the new data-parallel library and associated tools for climate analysis and visualization.

  13. DYNAMICO, an atmospheric dynamical core for high-performance climate modeling

    Science.gov (United States)

    Dubos, Thomas; Meurdesoif, Yann; Spiga, Aymeric; Millour, Ehouarn; Fita, Lluis; Hourdin, Frédéric; Kageyama, Masa; Traore, Abdoul-Khadre; Guerlet, Sandrine; Polcher, Jan

    2017-04-01

    Institut Pierre Simon Laplace has developed a very scalable atmospheric dynamical core, DYNAMICO, based on energy-conserving finite-difference/finite-volume numerics on a quasi-uniform icosahedral-hexagonal mesh. Scalability is achieved by combining hybrid MPI/OpenMP parallelism with asynchronous I/O. This dynamical core has been coupled to radiative transfer physics tailored to the atmosphere of Saturn, allowing unprecedented simulations of the climate of this giant planet. For terrestrial climate studies DYNAMICO is being integrated into the IPSL Earth System Model IPSL-CM. Preliminary aquaplanet and AMIP-style simulations yield reasonable results when compared to outputs from IPSL-CM5. The observed performance suggests that an order of magnitude may be gained with respect to IPSL-CM CMIP5 simulations, either in the duration of simulations or in their resolution. Longer simulations would be of interest for the study of paleoclimate, while higher resolution could improve certain aspects of the modeled climate such as extreme events, as will be explored in the HighResMIP project. Following IPSL's strategic vision of building a unified global-regional modelling system, a fully compressible, non-hydrostatic prototype of DYNAMICO has been developed, enabling future convection-resolving simulations. Work supported by ANR project "HEAT", grant number CE23_2014_HEAT. Dubos, T., Dubey, S., Tort, M., Mittal, R., Meurdesoif, Y., and Hourdin, F.: DYNAMICO-1.0, an icosahedral hydrostatic dynamical core designed for consistency and versatility, Geosci. Model Dev., 8, 3131-3150, doi:10.5194/gmd-8-3131-2015, 2015.

  14. Par@Graph - a parallel toolbox for the construction and analysis of large complex climate networks

    NARCIS (Netherlands)

    Tantet, A.J.J.

    2015-01-01

    In this paper, we present Par@Graph, a software toolbox to reconstruct and analyze complex climate networks having a large number of nodes (up to at least 10^6) and edges (up to at least 10^12). The key innovation is an efficient set of parallel software tools designed to leverage the inherited hybrid
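
    As a toy illustration of the kind of computation being parallelized (assuming the usual correlation-based network construction; Par@Graph's actual interface is not shown): correlate every pair of grid-point time series and link the nodes whose correlation magnitude exceeds a threshold.

        # Climate-network construction sketch: nodes are grid points, and an
        # edge links two nodes whose time series correlate above a threshold.
        import numpy as np

        def climate_network(series, threshold=0.5):
            # series: (n_nodes, n_times) array of anomaly time series
            corr = np.corrcoef(series)
            adj = np.abs(corr) >= threshold
            np.fill_diagonal(adj, False)
            return adj

        rng = np.random.default_rng(0)
        adj = climate_network(rng.standard_normal((10, 200)))
        print("edges:", int(adj.sum()) // 2)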

  15. A Multi-Core Parallelization Strategy for Statistical Significance Testing in Learning Classifier Systems.

    Science.gov (United States)

    Rudd, James; Moore, Jason H; Urbanowicz, Ryan J

    2013-11-01

    Permutation-based statistics for evaluating the significance of class prediction, predictive attributes, and patterns of association have only appeared within the learning classifier system (LCS) literature since 2012. While still not widely utilized by the LCS research community, formal evaluations of test statistic confidence are imperative for large and complex real-world applications such as genetic epidemiology, where it is standard practice to quantify the likelihood that a seemingly meaningful statistic could have been obtained purely by chance. LCS algorithms are relatively computationally expensive on their own. The compounding requirements for generating permutation-based statistics may be a limiting factor for some researchers interested in applying LCS algorithms to real-world problems. Technology has made LCS parallelization strategies more accessible and thus more popular in recent years. In the present study we examine the benefits of externally parallelizing a series of independent LCS runs such that permutation testing with cross validation becomes more feasible to complete on a single multi-core workstation. We test our Python implementation of this strategy in the context of a simulated complex genetic epidemiological data mining problem. Our evaluations indicate that as long as the number of concurrent processes does not exceed the number of CPU cores, the speedup achieved is approximately linear.
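
    A minimal sketch of the external-parallelization strategy described above, assuming each (permutation, fold) pair is one independent run; run_lcs() is a hypothetical stub for a real LCS training run, and the pool is capped at the CPU core count, matching the observation that speedup stays roughly linear up to that point.

        # One process per core; each job is an independent (permutation, fold)
        # run, so no changes to the learning algorithm itself are needed.
        import os
        from concurrent.futures import ProcessPoolExecutor
        from itertools import product

        def run_lcs(args):
            permutation, fold = args           # stand-in for one LCS run
            return (permutation, fold, 0.0)    # e.g., a test statistic

        if __name__ == "__main__":
            jobs = list(product(range(1000), range(10)))  # permutations x folds
            with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
                results = list(pool.map(run_lcs, jobs, chunksize=64))
            print(len(results), "runs completed")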

  16. Cpl6: The New Extensible, High-Performance Parallel Coupler for the Community Climate System Model

    Energy Technology Data Exchange (ETDEWEB)

    Craig, Anthony P.; Jacob, Robert L.; Kauffman, Brian; Bettge, Tom; Larson, Jay; Ong, Everest; Ding, Chris; He, Yun

    2005-03-24

    Coupled climate models are large, multiphysics applications designed to simulate the Earth's climate and predict the response of the climate to any changes in the forcing or boundary conditions. The Community Climate System Model (CCSM) is a widely used state-of-the-art climate model that has released several versions to the climate community over the past ten years. Like many climate models, CCSM employs a coupler, a functional unit that coordinates the exchange of data between parts of the climate system such as the atmosphere and ocean. This paper describes the new coupler, cpl6, contained in the latest version of CCSM, CCSM3. Cpl6 introduces distributed-memory parallelism to the coupler, a class library for important coupler functions, and a standardized interface for component models. Cpl6 is implemented entirely in Fortran90 and uses the Model Coupling Toolkit as the base for most of its classes. Cpl6 gives improved performance over previous versions and scales well on multiple platforms.

  17. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core 2 Quad Q6600 CPU and a GeForce 8800GT GPU, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus one CPU core, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
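
    The following sketch shows one plausible form of load-prediction scheduling (an illustration of the idea, not the authors' algorithm): split each step's workload between the two devices in proportion to the throughput each achieved on the previous step. do_cpu() and do_gpu() are stand-ins, and they run sequentially here for brevity where a real scheduler would overlap them.

        import time

        def do_cpu(n): time.sleep(n * 2e-6)    # pretend CPU processing rate
        def do_gpu(n): time.sleep(n * 5e-7)    # pretend GPU processing rate

        def simulate(steps, work=100_000):
            share = 0.5                        # initial CPU share of the work
            for _ in range(steps):
                n_cpu = int(work * share)
                t0 = time.perf_counter(); do_cpu(n_cpu)
                t_cpu = time.perf_counter() - t0
                t0 = time.perf_counter(); do_gpu(work - n_cpu)
                t_gpu = time.perf_counter() - t0
                r_cpu = n_cpu / max(t_cpu, 1e-9)          # measured throughput
                r_gpu = (work - n_cpu) / max(t_gpu, 1e-9)
                share = r_cpu / (r_cpu + r_gpu)           # predict next split
            return share

        print("converged CPU share:", round(simulate(10), 3))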

  18. Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs

    Science.gov (United States)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; Masciovecchio, Mario; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2017-08-01

    For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPUs), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward understanding these processors and the new developments to port the Kalman filter to NVIDIA GPUs.
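
    For orientation, a single linear Kalman filter step (predict plus measurement update) is only a few matrix operations, as in the numpy sketch below; track building applies many such updates independently per candidate track, which is what makes the problem amenable to multi-core and GPU parallelism. The constant-velocity model shown is illustrative only.

        import numpy as np

        def kalman_step(x, P, F, Q, H, R, z):
            x = F @ x                          # predict state
            P = F @ P @ F.T + Q                # predict covariance
            S = H @ P @ H.T + R                # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
            x = x + K @ (z - H @ x)            # update with measurement z
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        # toy model: state (position, velocity), measure position only
        F = np.array([[1.0, 1.0], [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])
        x, P = np.zeros(2), np.eye(2)
        x, P = kalman_step(x, P, F, 0.01 * np.eye(2), H,
                           np.array([[0.25]]), np.array([1.0]))
        print(x)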

  19. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    Energy Technology Data Exchange (ETDEWEB)

    Courau, T.; Plagne, L.; Ponicot, A. [EDF R and D, 1, Avenue du General de Gaulle, 92141 Clamart Cedex (France); Sjoden, G. [Nuclear and Radiological Engineering, Georgia Inst. of Technology, Atlanta, GA 30332 (United States)

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three-dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  20. Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs

    Directory of Open Access Journals (Sweden)

    Cerati Giuseppe

    2017-01-01

    For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPUs), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward understanding these processors and the new developments to port the Kalman filter to NVIDIA GPUs.

  1. Parallel scripting for improved performance and productivity in climate model postprocessing, integration, and analysis.

    Science.gov (United States)

    Wilde, M.; Mickelson, S. A.; Jacob, R. L.; Zamboni, L.; Elliott, J.; Yan, E.

    2012-12-01

    Climate models continually increase both in their resolution and structural complexity, resulting in multi-terabyte model outputs. This volume of data overwhelms the current model processing procedures that are used to derive climate averages, perform analysis, produce visualizations, and integrate climate models with other datasets. We describe here the application of a new programming model - implicitly parallel functional dataflow scripting - for expressing the processing steps needed to post-process, analyze, integrate, and visualize the output of climate models. This programming model, implemented in the Swift parallel scripting language, provides a many-fold speedup of processing while reducing the amount of manual effort involved. It is characterized by: implicit, pervasive parallelism, enabling scientists to leverage diverse parallel resources with reduced programming complexity; abstraction of computing location and resource types, and automation of high-performance data transport; a compact, uniform representation for the processing protocols and procedures of a research group or community, under which virtually all existing software tools and languages can be coordinated; and tracking of the provenance of derived data objects, providing a means for diagnostic interrogation and assessment of computational results. We report here on four model-analysis and/or data integration applications of this approach: 1) re-coding in Swift of the community-standard diagnostic packages used to post-process data from the Community Atmosphere Model and the Parallel Ocean Program, which has resulted in valuable speedups in model analysis for these heavily used procedures; 2) processing of model output from HiRAM, the GFDL global HIgh Resolution Atmospheric Model, automating and parallelizing post-processing steps that have in the past been both labor-intensive and computationally demanding. Swift automatically processed 50 HiRAM realizations comprising over 50TB of model
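
    The dataflow style can be approximated with plain Python futures, as in this minimal sketch (Swift's syntax, provenance tracking, and site abstraction are not shown): every per-file task is independent, so submitting them all at once lets the files be processed concurrently, and gathering the results acts as the dataflow join; monthly_mean() and the file names are hypothetical.

        from concurrent.futures import ProcessPoolExecutor

        def monthly_mean(path):                # hypothetical per-file step
            return (path, 0.0)                 # e.g., a spatial mean

        if __name__ == "__main__":
            paths = [f"run_{i:03d}.nc" for i in range(48)]  # hypothetical output
            with ProcessPoolExecutor() as pool:
                futures = [pool.submit(monthly_mean, p) for p in paths]
                means = [f.result() for f in futures]       # dataflow join
            print(len(means), "files reduced")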

  2. Asymmetry-aware load balancing for parallel applications in single-ISA multi-core systems

    Institute of Scientific and Technical Information of China (English)

    Eunsung KIM; Hyeonsang EOM; Heon Y. YEOM

    2012-01-01

    Contemporary operating systems for single-ISA (instruction set architecture) multi-core systems attempt to distribute tasks equally among all the CPUs. This approach works relatively well when there is no difference in CPU capability. However, there are cases in which CPU capability differs from one another. For instance, static capability asymmetry results from the advent of new asymmetric hardware, and dynamic capability asymmetry comes from noise outside the operating system (OS), caused by networking or I/O handling. These asymmetries can make it hard for the OS scheduler to evenly distribute the tasks, resulting in less efficient load balancing. In this paper, we propose a user-level load balancer for parallel applications, called the 'capability balancer', which recognizes the difference in CPU capability and makes subtasks share the entire CPU capability fairly. The balancer can coexist with the existing kernel-level load balancer without degrading the behavior of the kernel balancer. The capability balancer can fairly distribute CPU capability to tasks with very little overhead. For real workloads like the NAS Parallel Benchmark (NPB), we have accomplished speedups of up to 9.8% and 8.5% in dynamic and static asymmetries, respectively. We have also experienced speedups of 13.3% for dynamic asymmetry and 24.1% for static asymmetry in a competitive environment. The impacts of our task selection policies, FIFO (first in, first out) and cache, were compared. The use of the cache policy led to a speedup of 5.3% in overall execution time and a decrease of 4.7% in the overall cache miss count, compared with the FIFO policy, which is used by default.
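
    The heart of a capability-aware split can be sketched in a few lines (an illustration of the concept, not the paper's balancer): each worker receives a share of the iteration space proportional to its measured capability, so a core slowed by OS noise or weaker hardware is given less work.

        def proportional_shares(total_items, capabilities):
            cap_sum = sum(capabilities)
            shares = [int(total_items * c / cap_sum) for c in capabilities]
            shares[-1] += total_items - sum(shares)   # hand remainder to last
            return shares

        # four cores, one running at half speed because of OS noise
        print(proportional_shares(1000, [1.0, 1.0, 1.0, 0.5]))
        # -> [285, 285, 285, 145]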

  3. Whole planet coupling between climate, mantle, and core: Implications for rocky planet evolution

    Science.gov (United States)

    Foley, Bradford J.; Driscoll, Peter E.

    2016-05-01

    Earth's climate, mantle, and core interact over geologic time scales. Climate influences whether plate tectonics can take place on a planet, with cool climates being favorable for plate tectonics because they enhance stresses in the lithosphere, suppress plate boundary annealing, and promote hydration and weakening of the lithosphere. Plate tectonics plays a vital role in the long-term carbon cycle, which helps to maintain a temperate climate. Plate tectonics provides long-term cooling of the core, which is vital for generating a magnetic field, and the magnetic field is capable of shielding atmospheric volatiles from the solar wind. Coupling between climate, mantle, and core can potentially explain the divergent evolution of Earth and Venus. As Venus lies too close to the sun for liquid water to exist, there is no long-term carbon cycle and thus an extremely hot climate. Therefore, plate tectonics cannot operate and a long-lived core dynamo cannot be sustained due to insufficient core cooling. On planets within the habitable zone where liquid water is possible, a wide range of evolutionary scenarios can take place depending on initial atmospheric composition, bulk volatile content, or the timing of when plate tectonics initiates, among other factors. Many of these evolutionary trajectories would render the planet uninhabitable. However, there is still significant uncertainty over the nature of the coupling between climate, mantle, and core. Future work is needed to constrain potential evolutionary scenarios and the likelihood of an Earth-like evolution.

  4. Parallel Processing Performance on Multi-Core PC Cluster Distributing Communication Load to Multiple Paths

    Science.gov (United States)

    Fukunaga, Takafumi

    Due to the advent of powerful multi-core PC clusters, the computation performance of each node has increased dramatically, and this trend will continue in the future. On the other hand, the use of powerful network systems (Myrinet, Infiniband, etc.) is expensive and tends to increase the difficulty of programming and degrade portability, because they need dedicated libraries and protocol stacks. This paper proposes a relatively simple method to improve bandwidth-oriented parallel applications by improving the communication performance without the above dedicated hardware, libraries, protocol stacks and IEEE802.3ad (LACP). Although there are similarities between this proposal and IEEE802.3ad with respect to using multiple Ethernet ports, the proposal performs equal to or better than IEEE802.3ad without LACP switches and drivers. Moreover, while the performance of LACP is influenced by the environment (MAC addresses, IP addresses, etc.) because its distribution algorithm uses these parameters, the proposed method shows the same effect regardless of them.

  5. Climatic variations in the past 140 ka recorded in core RM, east Qinghai-Xizang Plateau

    Institute of Scientific and Technical Information of China (English)

    吴敬禄; 王苏民; 潘红玺; 夏威岚

    1997-01-01

    The sequences of climatic evolution are reconstructed by the analyses of δ13C and δ18O of carbonate from core RM in the Zoige Basin since 140 ka B.P. During the Last Glaciation there existed at least seven warm climatic fluctuations and five cold events correlated with the records of ice core and deep sea, and during the preceding last interglacial period there were two cold climatic variations coinciding with the record of ice core GRIP. These results depict climatic instability in the east Qinghai-Xizang Plateau over the last interglacial period. In addition, the environmental proxies of the carbonate content and pigments indicate results similar to the stable isotope record from core RM.

  6. Efficient Parallel Global Optimization for High Resolution Hydrologic and Climate Impact Models

    Science.gov (United States)

    Shoemaker, C. A.; Mueller, J.; Pang, M.

    2013-12-01

    High resolution hydrologic models are typically computationally expensive, requiring many minutes or perhaps hours for one simulation. Optimization can be used with these models for parameter estimation or for analyzing management alternatives. However, optimization of these computationally expensive simulations requires algorithms that can obtain accurate answers with relatively few simulations to avoid infeasibly long computation times. We have developed a number of efficient parallel algorithms and software codes for optimization of expensive problems with multiple local minima. This is open source software we are distributing; it runs in Matlab and Python, and has been run on the Yellowstone supercomputer. The talk will briefly discuss the characteristics of the problem (e.g. the presence of integer as well as continuous variables, the number of dimensions, the availability of parallel/grid computing, the number of simulations that can be allowed to find a solution, etc.) that determine which algorithms are most appropriate for each type of problem. A major application of this optimization software is parameter estimation for nonlinear hydrologic models, including contaminant transport in the subsurface (e.g. for groundwater remediation or multi-phase flow for carbon sequestration), nutrient transport in watersheds, and climate models. We will present results for carbon sequestration plume monitoring (multi-phase, multi-constituent), for groundwater remediation, and for the CLM climate model. The carbon sequestration example is based on the Frio CO2 field site, and the groundwater example is for a 50,000 acre remediation site (with the model requiring about 1 hour per simulation). Parallel speed-ups are excellent in most cases, and our serial and parallel algorithms tend to outperform alternative methods on complex, computationally expensive simulations that have multiple local minima.
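
    A hedged sketch of the batch-parallel pattern such algorithms share (the surrogate-model proposal step is elided and replaced with random perturbation of the incumbent; simulation() stands in for an expensive model run): propose a batch of candidate parameter vectors each iteration and evaluate them concurrently.

        import os
        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def simulation(x):                     # stand-in for an hour-long run
            return float(np.sum((np.asarray(x) - 0.3) ** 2))

        def optimize(dim=4, iters=10, batch=None, seed=0):
            rng = np.random.default_rng(seed)
            batch = batch or os.cpu_count()
            best_x, best_f = rng.uniform(0, 1, dim), np.inf
            with ProcessPoolExecutor() as pool:
                for _ in range(iters):
                    cand = [np.clip(best_x + rng.normal(0, 0.2, dim), 0, 1)
                            for _ in range(batch)]
                    for x, f in zip(cand, pool.map(simulation, cand)):
                        if f < best_f:
                            best_x, best_f = x, f
            return best_x, best_f

        if __name__ == "__main__":
            x, f = optimize()
            print("best objective:", round(f, 4))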

  7. State dependence of climatic instability over the past 720,000 years from Antarctic ice cores and climate modeling

    Science.gov (United States)

    Kawamura, Kenji; Abe-Ouchi, Ayako; Motoyama, Hideaki; Ageta, Yutaka; Aoki, Shuji; Azuma, Nobuhiko; Fujii, Yoshiyuki; Fujita, Koji; Fujita, Shuji; Fukui, Kotaro; Furukawa, Teruo; Furusaki, Atsushi; Goto-Azuma, Kumiko; Greve, Ralf; Hirabayashi, Motohiro; Hondoh, Takeo; Hori, Akira; Horikawa, Shinichiro; Horiuchi, Kazuho; Igarashi, Makoto; Iizuka, Yoshinori; Kameda, Takao; Kanda, Hiroshi; Kohno, Mika; Kuramoto, Takayuki; Matsushi, Yuki; Miyahara, Morihiro; Miyake, Takayuki; Miyamoto, Atsushi; Nagashima, Yasuo; Nakayama, Yoshiki; Nakazawa, Takakiyo; Nakazawa, Fumio; Nishio, Fumihiko; Obinata, Ichio; Ohgaito, Rumi; Oka, Akira; Okuno, Jun’ichi; Okuyama, Junichi; Oyabu, Ikumi; Parrenin, Frédéric; Pattyn, Frank; Saito, Fuyuki; Saito, Takashi; Saito, Takeshi; Sakurai, Toshimitsu; Sasa, Kimikazu; Seddik, Hakime; Shibata, Yasuyuki; Shinbori, Kunio; Suzuki, Keisuke; Suzuki, Toshitaka; Takahashi, Akiyoshi; Takahashi, Kunio; Takahashi, Shuhei; Takata, Morimasa; Tanaka, Yoichi; Uemura, Ryu; Watanabe, Genta; Watanabe, Okitsugu; Yamasaki, Tetsuhide; Yokoyama, Kotaro; Yoshimori, Masakazu; Yoshimoto, Takayasu

    2017-01-01

    Climatic variabilities on millennial and longer time scales with a bipolar seesaw pattern have been documented in paleoclimatic records, but their frequencies, relationships with mean climatic state, and mechanisms remain unclear. Understanding the processes and sensitivities that underlie these changes will underpin better understanding of the climate system and projections of its future change. We investigate the long-term characteristics of climatic variability using a new ice-core record from Dome Fuji, East Antarctica, combined with an existing long record from the Dome C ice core. Antarctic warming events over the past 720,000 years are most frequent when the Antarctic temperature is slightly below average on orbital time scales, equivalent to an intermediate climate during glacial periods, whereas interglacial and fully glaciated climates are unfavourable for a millennial-scale bipolar seesaw. Numerical experiments using a fully coupled atmosphere-ocean general circulation model with freshwater hosing in the northern North Atlantic showed that climate becomes most unstable in intermediate glacial conditions associated with large changes in sea ice and the Atlantic Meridional Overturning Circulation. Model sensitivity experiments suggest that the prerequisite for the most frequent climate instability with bipolar seesaw pattern during the late Pleistocene era is associated with reduced atmospheric CO2 concentration via global cooling and sea ice formation in the North Atlantic, in addition to extended Northern Hemisphere ice sheets. PMID:28246631

  8. Simulation of the world ocean climate with a massively parallel numerical model

    Science.gov (United States)

    Ushakov, K. V.; Ibrayev, R. A.; Kalmykov, V. V.

    2015-07-01

    The INM-IO numerical World Ocean model is verified through the calculation of the model ocean climate. The numerical experiment was conducted for a period of 500 years following the CORE-I protocol. We analyze some basic elements of the large-scale ocean circulation and local and integral characteristics of the model solution. The model limitations and ways they are overcome are described. The results generally fit the level of leading models. This experiment is a necessary step preceding the transition to high-resolution diagnostic and prognostic calculations of the state of the World Ocean and its individual basins.

  9. Evidence for general instability of past climate from a 250-KYR ice-core record

    DEFF Research Database (Denmark)

    Johnsen, Sigfus Johann; Clausen, Henrik Brink; Dahl-Jensen, Dorthe

    1993-01-01

    Recent results1,2 from two ice cores drilled in central Greenland have revealed large, abrupt climate changes of at least regional extent during the late stages of the last glaciation, suggesting that climate in the North Atlantic region is able to reorganize itself rapidly, perhaps even within a few decades. Here we present a detailed stable-isotope record for the full length of the Greenland Ice-core Project Summit ice core, extending over the past 250 kyr according to a calculated timescale. We find that climate instability was not confined to the last glaciation, but appears also to have been marked during the last interglacial (as explored more fully in a companion paper3) and during the previous Saale-Holstein glacial cycle. This is in contrast with the extreme stability of the Holocene, suggesting that recent climate stability may be the exception rather than the rule. The last...

  10. Climate systems modeling on massively parallel processing computers at Lawrence Livermore National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Wehner, W.F.; Mirin, A.A.; Bolstad, J.H. [and others]

    1996-09-01

    A comprehensive climate system model is under development at Lawrence Livermore National Laboratory. The basis for this model is a consistent coupling of multiple complex subsystem models, each describing a major component of the Earth's climate. Among these are general circulation models of the atmosphere and ocean, a dynamic and thermodynamic sea ice model, and models of the chemical processes occurring in the air, sea water, and near-surface land. The computational resources necessary to carry out simulations at adequate spatial resolutions for durations of climatic time scales exceed those currently available. Distributed memory massively parallel processing (MPP) computers promise to affordably scale to the computational rates required by directing large numbers of relatively inexpensive processors onto a single problem. We have developed a suite of routines designed to exploit current generation MPP architectures via domain and functional decomposition strategies. These message passing techniques have been implemented in each of the component models and in their coupling interfaces. Production runs of the atmospheric and oceanic components performed on the National Environmental Supercomputing Center (NESC) Cray T3D are described.

  11. Ice core reconstruction of Antarctic climate change and implications

    OpenAIRE

    Mayewski, Paul Andrew

    2012-01-01

    Antarctica is the Earth’s largest environmental library for ice cores. Examples of the scientific findings of the 21-nation consortium called the International Trans Antarctic Scientific Expedition (ITASE), under the auspices of the Scientific Committee on Antarctic Research (SCAR), are presented, with special emphasis on the value of these records in reconstructing atmospheric circulation over Antarctica and the Southern Ocean.

  12. Early Holocene climate oscillations recorded in three Greenland ice cores

    DEFF Research Database (Denmark)

    Rasmussen, Sune Olander; Vinther, Bo Møllesøe; Clausen, Henrik Brink;

    2007-01-01

    A new ice core chronology for the Greenland DYE-3, GRIP, and NGRIP ice cores has been constructed, making it possible to compare the δ18O and accumulation signals recorded in the three cores on an almost annual scale throughout the Holocene. We here introduce the new time scale and investigate δ18O and accumulation anomalies that are common to the three cores in the Early Holocene (7.9–11.7 ka before present). Three time periods with significant and synchronous anomalies in the δ18O and accumulation signals stand out: the well-known 8.2 ka event, an event of shorter duration but of almost similar amplitude around 9.3 ka before present, and the Preboreal Oscillation during the first centuries of the Holocene. For each of these sections, we present a δ18O anomaly curve and a common accumulation signal that represents regional changes in the accumulation rate over the Greenland ice cap.

  13. Asian Ice Core Array (AICA): Climate and Environmental Reconstruction of Asia

    Science.gov (United States)

    Grigholm, B.; Mayewski, P. A.; Aizen, V.; Kang, S.; Kaspari, S.; Maasch, K. A.

    2008-12-01

    The large landmass and relief of the Asian continent has a substantial influence on global atmospheric circulation and the regional climate that supports ~2.5 billion people. Recent changes in climate and environmental conditions may lead to desertification and affect water resources, possibly resulting in serious consequences for humans and ecosystems. To put recent changes into context, it is first necessary to have an understanding of past climate and environmental variability. However, instrumental records of climate and environmental variability over the region are sparse and temporally limited. Fortunately, ice cores from high elevation mountain glaciers in Asia can be used to reconstruct atmospheric chemistry and past climate variability spanning seasonal to millennial time scales. The goal of the Asian Ice Core Array (AICA) is to enhance the spatial and temporal understanding of physical and chemical climate variability, establish a baseline for assessing modern climate variability in the context of human activity, and contribute to the prediction of climate variability in Asia. Highly resolved ice core reconstructions of past climate (e.g. atmospheric circulation, temperature, precipitation, and atmospheric chemistry) will utilize continuous, co-registered, and multi-parameter measurements of major ions, trace elements, and stable isotopes (along with selected sections for radionuclide analysis). AICA sites include cores from the Himalayas, Pamir, Tien Shan, Altai, and the Tibetan Plateau. An overview of the AICA project will be presented, in addition to some early results of AICA including reconstructions of the behavior of the summer South Asian monsoon over the Himalayas and the identification of a potential teleconnection between the central Tibetan Plateau and the Pacific Decadal Oscillation (PDO).

  14. GTfold: Enabling parallel RNA secondary structure prediction on multi-core desktops

    DEFF Research Database (Denmark)

    Swenson, M Shel; Anderson, Joshua; Ash, Andrew

    2012-01-01

    Accurate and efficient RNA secondary structure prediction remains an important open problem in computational molecular biology. Historically, advances in computing technology have enabled faster and more accurate RNA secondary structure predictions. Previous parallelized prediction programs achie...

  15. Parallel computation of a dam-break flow model using OpenMP on a multi-core computer

    Science.gov (United States)

    Zhang, Shanghong; Xia, Zhongxi; Yuan, Rui; Jiang, Xiaoming

    2014-05-01

    High-performance calculations are of great importance to the simulation of dam-break events, as discontinuous solutions and accelerated speed are key factors in the process of dam-break flow modeling. In this study, Roe's approximate Riemann solver within the finite volume method is adopted to solve the interface flux of grid cells and accurately simulate the discontinuous flow, and shared memory technology (OpenMP) is used to realize parallel computing. Because an explicit discrete technique is used to solve the governing equations, and there is no correlation between grid calculations within a single time step, the parallel dam-break model can be easily realized by adding OpenMP instructions to the loop structure of the grid calculations. The performance of the model is analyzed using six computing cores and four different grid division schemes for the Pangtoupao flood storage area in China. The results show that the parallel computing improves precision and increases the simulation speed of the dam-break flow: the simulation of a 320 h flood process can be completed within 1.6 h on a 16-core computer, a speedup factor of 8.64×. Further analysis reveals that the models involving a larger number of calculations exhibit greater efficiency and a higher rate of acceleration. At the same time, the model has good extendibility, as the speedup increases with the number of processor cores. The parallel model based on OpenMP can make full use of multi-core processors, making it possible to simulate dam-break flows in large-scale watersheds on a single computer.
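
    The reason the grid loop parallelizes so cleanly is that an explicit scheme makes each cell's new value depend only on the previous step's state, so iterations carry no cross-dependency. The numpy sketch below shows the pattern for a 1D upwind stand-in (the paper's Roe fluxes, 2D grids, and OpenMP directives are elided).

        import numpy as np

        def step(u, c=1.0, dt=0.4, dx=1.0):
            flux = c * u                       # upwind flux for c > 0
            un = u.copy()
            un[1:] -= dt / dx * (flux[1:] - flux[:-1])  # independent per cell
            return un

        u = np.zeros(100)
        u[10:20] = 1.0
        for _ in range(50):
            u = step(u)
        print("total mass:", round(float(u.sum()), 3))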

  16. Direct North-South synchronization of abrupt climate change record in ice cores using beryllium 10

    Directory of Open Access Journals (Sweden)

    G. M. Raisbeck

    2007-05-01

    A new, decadally resolved record of the 10Be peak at 41 kyr from the EPICA Dome C ice core (Antarctica) is used to match it with the same peak in the GRIP ice core (Greenland). This permits a direct synchronisation of the climatic variations around 41 kyr BP, independent of uncertainties related to the ice age-gas age difference in ice cores. Dansgaard-Oeschger event 10 is in the period of best synchronisation and is found to be coeval with an Antarctic temperature maximum. Simulations using a thermal bipolar seesaw model agree reasonably well with the observed relative climate chronology in these two cores. They also reproduce three Antarctic warming events between A1 and A2.

  17. Direct north-south synchronization of abrupt climate change record in ice cores using Beryllium 10

    Directory of Open Access Journals (Sweden)

    G. M. Raisbeck

    2007-09-01

    A new, decadally resolved record of the 10Be peak at 41 kyr from the EPICA Dome C ice core (Antarctica) is used to match it with the same peak in the GRIP ice core (Greenland). This permits a direct synchronisation of the climatic variations around this time period, independent of uncertainties related to the ice age-gas age difference in ice cores. Dansgaard-Oeschger event 10 is in the period of best synchronisation and is found to be coeval with an Antarctic temperature maximum. Simulations using a thermal bipolar seesaw model agree reasonably well with the observed relative climate chronology in these two cores. They also reproduce three Antarctic warming events observed between A1 and A2.

  18. GRAPES: a software for parallel searching on biological graphs targeting multi-core architectures.

    Directory of Open Access Journals (Sweden)

    Rosalba Giugno

    Biological applications, from genomics to ecology, deal with graphs that represent the structure of interactions. Analyzing such data requires searching for subgraphs in collections of graphs. This task is computationally expensive. Even though multicore architectures, from commodity computers to more advanced symmetric multiprocessing (SMP) systems, offer scalable computing power, currently published software implementations for indexing and graph matching are fundamentally sequential. As a consequence, such software implementations (i) do not fully exploit available parallel computing power and (ii) do not scale with respect to the size of graphs in the database. We present GRAPES, software for parallel searching on databases of large biological graphs. GRAPES implements a parallel version of well-established graph searching algorithms, and introduces new strategies which naturally lead to a faster parallel searching system, especially for large graphs. GRAPES decomposes graphs into subcomponents that can be efficiently searched in parallel. We show the performance of GRAPES on representative biological datasets containing antiviral chemical compounds, DNA, RNA, proteins, protein contact maps and protein interaction networks.
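
    A toy version of the decomposition idea (GRAPES's real indexing and matching are far more sophisticated): split the target graph into connected components and hand each one to a separate process; match() is a placeholder for the per-component subgraph search.

        from concurrent.futures import ProcessPoolExecutor

        def components(adj):
            seen, comps = set(), []
            for start in adj:
                if start in seen:
                    continue
                comp, stack = [], [start]
                while stack:
                    v = stack.pop()
                    if v in seen:
                        continue
                    seen.add(v)
                    comp.append(v)
                    stack.extend(adj[v])
                comps.append(comp)
            return comps

        def match(comp):                       # placeholder pattern test
            return len(comp) >= 3

        if __name__ == "__main__":
            adj = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4], 6: []}
            with ProcessPoolExecutor() as pool:
                hits = list(pool.map(match, components(adj)))
            print("components with a match:", sum(hits))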

  19. Climatic indicators in an ice core from the Yukon [abstract]

    OpenAIRE

    Holdsworth, G.; Fogarasi, S.; Krouse, H. R.; Nosal, M.

    1988-01-01

    EXTRACT (SEE PDF FOR FULL ABSTRACT): Stable isotope data obtained from snow and ice cores retrieved from an altitude of 5340 m on Mt. Logan (60°30'N; 140°36'W) indicate that "isotopic seasons" are not generally in phase with calendar seasons. The former are phase lagged with respect to the latter by up to several months and appear to be correlated with SSTs and ocean heat transfer curves and/or the position of the Aleutian low, rather than with air temperature or the temperature difference...

  20. An Efficient Parallel SAT Solver Exploiting Multi-Core Environments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The hundreds of stream cores in the latest graphics processors (GPUs), and the possibility to execute non-graphics computations on them, open unprecedented levels of...

  2. High performance parallelism pearls 2 multicore and many-core programming approaches

    CERN Document Server

    Jeffers, Jim

    2015-01-01

    High Performance Parallelism Pearls Volume 2 offers another set of examples that demonstrate how to leverage parallelism. Similar to Volume 1, the techniques included here explain how to use processors and coprocessors with the same programming model, illustrating the most effective ways to combine Xeon Phi coprocessors with Xeon and other multicore processors. The book includes examples of successful programming efforts, drawn from across industries and domains such as biomed, genetics, finance, manufacturing, imaging, and more. Each chapter in this edited work includes detailed explanations of t

  3. Microorganisms in the Malan ice core and their relation to climatic and environmental changes

    Science.gov (United States)

    Yao, Tandong; Xiang, Shurong; Zhang, Xiaojun; Wang, Ninglian; Wang, Youqing

    2006-03-01

    A 102-m-long ice core retrieved from the Malan Ice Cap on the Tibetan Plateau provides us with a historical record of the microorganisms trapped in the ice. The microorganisms in the Malan ice core are identified as α-, β-, and γ-Proteobacteria, and the LGC, HGC, and CFB groups, by means of 16S rRNA sequence analysis and physiological characteristics, while the eukaryotes in the ice core are mainly composed of Chlamydomonas sp. and Pseudochlorella sp. based on the phylogenetic examination of the 18S rRNA gene. The microbial populations show observable differences at different depths in the ice core, reflecting the effects of climatic and environmental changes on the distribution of the microorganisms in the glacier. Examination of the Malan ice core shows four general periods of microbial concentration, which correspond to four phases of temperature revealed by δ18O values in the core. Observations also indicate that microorganism concentrations tend to be negatively correlated with temperature at a relatively long timescale and, to some extent, positively correlated with mineral concentrations. The present study demonstrates that more microorganisms are associated with colder periods while fewer microorganisms are associated with warm periods, which provides us with a new proxy for the reconstruction of past climatic and environmental changes by means of ice core analysis.

  4. The Principalship: Essential Core Competencies for Instructional Leadership and Its Impact on School Climate

    Science.gov (United States)

    Ross, Dorrell J.; Cozzens, Jeffry A.

    2016-01-01

    The purpose of this quantitative study was to investigate teachers' perceptions of principals' leadership behaviors influencing the schools' climate according to Green's (2010) ideologies of the 13 core competencies within the four dimensions of principal leadership. Data from the "Leadership Behavior Inventory" (Green, 2014) suggest 314…

  5. ISP: an optimal out-of-core image-set processing streaming architecture for parallel heterogeneous systems.

    Science.gov (United States)

    Ha, Linh Khanh; Krüger, Jens; Dihl Comba, João Luiz; Silva, Cláudio T; Joshi, Sarang

    2012-06-01

    Image population analysis is the class of statistical methods that plays a central role in understanding the development, evolution, and disease of a population. However, these techniques often require excessive computational power and memory that are compounded with a large number of volumetric inputs. Restricted access to supercomputing power limits their influence in general research and practical applications. In this paper we introduce ISP, an Image-Set Processing streaming framework that harnesses the processing power of commodity heterogeneous CPU/GPU systems and attempts to solve this computational problem. In ISP, we introduce specially designed streaming algorithms and data structures that provide an optimal solution for out-of-core multi-image processing problems both in terms of memory usage and computational efficiency. ISP makes use of the asynchronous execution mechanism supported by parallel heterogeneous systems to efficiently hide the inherent latency of the processing pipeline of out-of-core approaches. Consequently, with computationally intensive problems, the ISP out-of-core solution can achieve the same performance as the in-core solution. We demonstrate the efficiency of the ISP framework on synthetic and real datasets.
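
    The latency-hiding idea is easy to illustrate in miniature. Below is a hedged sketch of double-buffered out-of-core streaming, assuming volumes stored as .npy files (the file layout and the function names load_volume and streaming_mean are invented for the example, not ISP's API): the next volume is prefetched asynchronously while the current one is processed, so I/O overlaps with compute.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def load_volume(path):
            return np.load(path)   # one image volume; stands in for slow disk I/O

        def streaming_mean(paths):
            total, count = 0.0, 0
            with ThreadPoolExecutor(max_workers=1) as io:
                pending = io.submit(load_volume, paths[0])
                for nxt in list(paths[1:]) + [None]:
                    volume = pending.result()                 # wait for prefetch
                    if nxt is not None:
                        pending = io.submit(load_volume, nxt) # overlap I/O
                    total += float(volume.sum())              # "compute" stage
                    count += volume.size
            return total / count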

  6. Parallel DC3 Algorithm for Suffix Array Construction on Many-Core Accelerators

    KAUST Repository

    Liao, Gang

    2015-05-01

    In bioinformatics applications, suffix arrays are widely used in DNA sequence alignment, in the initial exact-match phase of heuristic algorithms. With the exponential growth and availability of data, using many-core accelerators, like GPUs, to optimize existing algorithms is very common. We present a new implementation of suffix array construction on the GPU. As a result, suffix array construction on the GPU achieves around 10x speedup on standard large data sets, which contain more than 100 million characters. The approach is simple, fast and scalable, and can easily be extended to multi-core processors and even heterogeneous architectures. © 2015 IEEE.
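
    DC3 itself is intricate; as a hedged reference point (useful, for example, when validating a parallel implementation), the sketch below builds a suffix array by direct sorting of suffixes, which is simple but asymptotically far slower than DC3's linear time.

        # Naive baseline, not DC3: sort suffix start positions lexicographically.
        # Slicing makes this O(n^2 log n) in the worst case; fine for validation.
        def suffix_array(text):
            return sorted(range(len(text)), key=lambda i: text[i:])

        assert suffix_array("banana") == [5, 3, 1, 0, 4, 2]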

  7. Long memory effect of past climate change in Vostok ice core records

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Yuuki, E-mail: yyama@ed.yama.tus.ac.jp [Department of Mechanical Engineering, Tokyo University of Science, Yamaguchi (Japan); Kitahara, Naoki [Department of Electronics and Computer Science, Tokyo University of Science, Yamaguchi (Japan); Kano, Makoto [Department of Mechanical Engineering, Tokyo University of Science, Yamaguchi (Japan)

    2012-03-20

    Time series analysis of Vostok ice core data has been carried out to understand palaeoclimate change from a stochastic perspective. The Vostok ice core provides proxy data for the palaeoclimate, in which local temperature and precipitation rate, moisture source conditions, wind strength and aerosol fluxes of marine, volcanic, terrestrial, cosmogenic and anthropogenic origin are indirectly stored. Palaeoclimate data have both a periodic feature and a stochastic feature. For the proxy data, spectrum analysis and detrended fluctuation analysis (DFA) were conducted to characterize the periodicity and scaling property (long memory effect) of climate change. The spectrum analysis indicates that periodicities corresponding to the Milankovitch cycles exist in past climate change. DFA clarified that the time variability of the scaling exponent (Hurst exponent) is associated with abrupt warming in the past climate.
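
    DFA has a compact standard form. The sketch below (first-order DFA on a synthetic series, with illustrative window sizes rather than the paper's) shows how the scaling exponent is estimated as the slope of log F(n) against log n.

        import numpy as np

        # First-order detrended fluctuation analysis (DFA-1): integrate the
        # series, detrend within windows of size n, and measure the RMS
        # fluctuation F(n); the slope of log F vs log n is the Hurst-like
        # scaling exponent discussed above.
        def dfa(x, window_sizes):
            profile = np.cumsum(x - np.mean(x))
            fluctuations = []
            for n in window_sizes:
                m = len(profile) // n
                segments = profile[:m * n].reshape(m, n)
                t = np.arange(n)
                resid = [seg - np.polyval(np.polyfit(t, seg, 1), t)
                         for seg in segments]
                fluctuations.append(np.sqrt(np.mean(np.square(resid))))
            return np.array(fluctuations)

        sizes = np.array([8, 16, 32, 64, 128])
        series = np.random.randn(4096)     # stand-in for the proxy record
        alpha = np.polyfit(np.log(sizes), np.log(dfa(series, sizes)), 1)[0]
        print("scaling exponent:", alpha)  # ~0.5 for uncorrelated noise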

  8. POLLiCE (POLLen in the iCE): climate history from Adamello ice cores

    Science.gov (United States)

    Cristofori, Antonella; Festi, Daniela; Maggi, Valter; Casarotto, Christian; Bertoni, Elena; Vernesi, Cristiano

    2017-04-01

    Glaciers can be viewed as the most complete and effective archives of past climate and environment, severely threatened by climate change. These threats are particularly dramatic across the European Alps. The Adamello glacier is the largest (16.4 km2) and deepest (270 m) Italian glacier. We aim at estimating biodiversity changes over the last centuries in relation to climate and human activities in the Adamello catchment area. We therefore recently launched the POLLiCE project (pollice.fmach.it), specifically targeting the biological component (e.g. pollen, leaves, plant remains) trapped in ice cores. Classical morphological pollen analysis will be accompanied by DNA metabarcoding. This approach has the potential to provide a detailed taxonomical identification (at least to genus level), thus circumventing the limitations of microscopic analysis such as time-consuming procedures and shared features of pollen grains among different taxa. Moreover, ice cores are subjected to chemical and physical analyses (stable isotopes, ions, hyperspectral imaging, etc.) for stratigraphic and climatic determination of seasonality. A pilot drilling was conducted in March 2015, and the resulting 5 m core has been analysed in terms of pollen spectrum, stable isotopes and ions in order to demonstrate the feasibility of the study. The first encouraging results showed that even in this superficial core a stratigraphy is evident, with indication of seasonality as highlighted by both pollen taxa and stable isotopes. Finally, DNA has been successfully extracted and amplified with specific DNA barcodes. A medium drilling was performed in April 2016 with the extraction of a 45 m ice core. The analysis of this core constitutes the subject of a specific research project, CALICE*, just funded by the Euregio Science Fund (IPN57). The entire depth, 270 m, of the Adamello glacier is scheduled to be drilled in winter 2018 to secure the unique memory archived by the ice. * See EGU2017 poster by Festi et al

  9. The Crystal Structures of the N-terminal Photosensory Core Module of Agrobacterium Phytochrome Agp1 as Parallel and Anti-parallel Dimers.

    Science.gov (United States)

    Nagano, Soshichiro; Scheerer, Patrick; Zubow, Kristina; Michael, Norbert; Inomata, Katsuhiko; Lamparter, Tilman; Krauß, Norbert

    2016-09-23

    Agp1 is a canonical biliverdin-binding bacteriophytochrome from the soil bacterium Agrobacterium fabrum that acts as a light-regulated histidine kinase. Crystal structures of the photosensory core modules (PCMs) of homologous phytochromes have provided a consistent picture of the structural changes that these proteins undergo during photoconversion between the parent red light-absorbing state (Pr) and the far-red light-absorbing state (Pfr). These changes include secondary structure rearrangements in the so-called tongue of the phytochrome-specific (PHY) domain and structural rearrangements within the long α-helix that connects the cGMP-specific phosphodiesterase, adenylyl cyclase, and FhlA (GAF) and the PHY domains. We present the crystal structures of the PCM of Agp1 at 2.70 Å resolution and of a surface-engineered mutant of this PCM at 1.85 Å resolution in the dark-adapted Pr states. Whereas in the mutant structure the dimer subunits are in anti-parallel orientation, the wild-type structure contains parallel subunits. The relative orientations between the PAS-GAF bidomain and the PHY domain are different in the two structures, due to movement involving two hinge regions in the GAF-PHY connecting α-helix and the tongue, indicating pronounced structural flexibility that may give rise to a dynamic Pr state. The resolution of the mutant structure enabled us to detect a sterically strained conformation of the chromophore at ring A that we attribute to the tight interaction with Pro-461 of the conserved PRXSF motif in the tongue. Based on this observation and on data from mutants where residues in the tongue region were replaced by alanine, we discuss the crucial roles of those residues in Pr-to-Pfr photoconversion.

  10. Change of bacterial community in the Malan Ice Core and its relation to climate and environment

    Institute of Scientific and Technical Information of China (English)

    XIANG Shurong; YAO Tandong; AN Lizhe; LI Zhen; WU Guangjian; WANG Youqing; XU Baiqing; WANG Junxia

    2004-01-01

    In order to understand the relationship between the community structure of bacteria in ice cores and the past climate and environment, we initiated the study of the microorganisms in three selected ice samples from the Malan ice core drilled from the Tibetan Plateau. The 16S ribosomal DNA (rDNA) molecules were directly amplified from the melt water samples, and three 16S rDNA clone libraries were established. Among 94 positive clones, eleven clones with unique restriction patterns were used for partial sequencing and compared with eight reported sequences from the same ice core. The phylotypes were divided into 5 groups: alpha, beta and gamma proteobacteria, CFB, and other eubacteria. Among them, there were many "typical Malan glacial bacteria" pertaining to psychrophiles and new bacteria found in the ice core. At a longer time scale, the concentration distribution of "typical Malan glacial bacteria" with depth showed a negative correlation with temperature variations and was coincident with the dirty layers. This implies the influence of temperature on the microbial record through its impact on the concentrations of the "typical Malan glacial bacteria". In addition, the nutrient content of the ice was another important factor controlling the distribution of the microbial population in the ice core section. Moreover, the result displayed an apparent layered distribution of the bacterial community in the ice core section, which reflected the microbial response to the past climatic and environmental conditions at the time of deposition.

  11. Parallel numerical simulation of the thermal convection in the Earth's outer core on the cubed-sphere

    Science.gov (United States)

    Yin, Liang; Yang, Chao; Ma, Shi-Zhuang; Huang, Ji-Zu; Cai, Ying

    2017-06-01

    Numerical simulations of the thermal convection in a rotating spherical shell play an important role in dynamo simulations. In this paper, we present a highly scalable parallel solver for the Earth's outer core convection. The solver uses a second-order cell-centred finite volume spatial discretization based on the cubed-sphere grid instead of the traditional latitude-longitude grid to avoid grid singularity and enhance the parallel scalability. A second-order approximate factorization method combined with the Crank-Nicolson scheme is employed for splitting temporal integration. The two resultant sparse linear systems are solved by the Krylov subspace iterative method pre-conditioned with a restricted additive Schwarz pre-conditioner based on the domain decomposition of the cubed-sphere. The numerical results are in good agreement with the benchmark solutions and more accurate than the existing finite volume results. The solvers scale to over 10 000 processor cores with nearly linear scalability on the Sunway TaihuLight supercomputer.
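
    As a hedged illustration of the time integration named here, the sketch below advances a 1-D diffusion equation by one Crank-Nicolson step; the actual solver is 3-D, cell-centred finite volume on the cubed sphere, with Krylov iteration and Schwarz preconditioning rather than the dense solve used in this toy.

        import numpy as np

        # One Crank-Nicolson step for u_t = kappa * u_xx on a 1-D grid with
        # fixed (Dirichlet) boundaries: solve (I - rL) u_new = (I + rL) u_old.
        def crank_nicolson_step(u, kappa, dt, dx):
            n = len(u)
            r = kappa * dt / (2.0 * dx**2)
            A = (1 + 2 * r) * np.eye(n)
            B = (1 - 2 * r) * np.eye(n)
            for i in range(n - 1):
                A[i, i + 1] = A[i + 1, i] = -r
                B[i, i + 1] = B[i + 1, i] = r
            return np.linalg.solve(A, B @ u)

        u = np.sin(np.linspace(0, np.pi, 101))   # toy initial temperature profile
        u = crank_nicolson_step(u, kappa=1e-3, dt=0.1, dx=0.01)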

  12. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    Science.gov (United States)

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating near-shore tsunami waves from the Tohoku 2011 event, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing - we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of each of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capabilities of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, in which the finest grid distance of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel time of the tsunami waves. © 2011 IEEE.
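
    GeoClaw's parallelization is OpenMP in Fortran; as a hedged analogue of the same pattern, the sketch below distributes independent grid patches of one AMR level across worker processes. The patch contents and the update stencil are placeholders, not GeoClaw's shallow-water solver, and ghost-cell exchange between patches is omitted.

        import numpy as np
        from multiprocessing import Pool

        # Toy explicit update applied independently to each patch of one level.
        def advance_patch(patch):
            u = patch.copy()
            u[1:-1] = patch[1:-1] + 0.25 * (patch[2:] - 2 * patch[1:-1] + patch[:-2])
            return u

        if __name__ == "__main__":
            level_patches = [np.random.rand(4096) for _ in range(64)]
            with Pool(processes=8) as pool:      # one worker per core
                level_patches = pool.map(advance_patch, level_patches)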

  13. Hominin Sites and Paleolakes Drilling Project. Chew Bahir, southern Ethiopia: How to get from three tonnes of sediment core to > 500 ka of continuous climate history?

    Science.gov (United States)

    Foerster, Verena; Asrat, Asfawossen; Cohen, Andrew S.; Gromig, Raphael; Günter, Christina; Junginger, Annett; Lamb, Henry F.; Schaebitz, Frank; Trauth, Martin H.

    2016-04-01

    In search of the environmental context of the evolution and dispersal of Homo sapiens and our close relatives within and beyond the African continent, the ICDP-funded Hominin Sites and Paleolakes Drilling Project (HSPDP) has recently cored five fluvio-lacustrine archives of climate change in East Africa. The sediment cores collected in Ethiopia and Kenya are expected to provide valuable insights into East African environmental variability during the last ~3.5 Ma. The tectonically-bound Chew Bahir basin in the southern Ethiopian rift is one of the five sites within HSPDP, located in close proximity to the Lower Omo River valley, the site of the oldest known fossils of anatomically modern humans. In late 2014, the two cores (279 and 266 m long, respectively; HSPDP-CHB14-2A and 2B) were recovered, amounting to nearly three tonnes of mostly calcareous clays and silts. Deciphering an environmental record from multiple records from the source region of modern humans could eventually allow us to reconstruct the pronounced variations in moisture availability during the transition into the Middle Stone Age, and their implications for the origin and dispersal of Homo sapiens. Here we present the first results of our analysis of the Chew Bahir cores. Following the HSPDP protocols, the two parallel Chew Bahir sediment cores have been merged into a single, 280 m long and nearly continuous (>90%) composite core on the basis of a high-resolution MSCL data set (e.g., magnetic susceptibility, gamma ray density, color intensity transects, core photographs). Based on the obvious cyclicities in the MSCL, correlated with orbital cycles, the time interval covered by our sediment archive of climate change is inferred to span the last 500-600 kyrs. Combining our first results from the long cores with the results from the accomplished pre-study of short cores taken in 2009/10 along a NW-SE transect across the basin (Foerster et al., 2012, Trauth et al., 2015), we have developed a hypothesis

  14. Ice cores record significant 1940s Antarctic warmth related to tropical climate variability.

    Science.gov (United States)

    Schneider, David P; Steig, Eric J

    2008-08-26

    Although the 20th Century warming of global climate is well known, climate change in the high-latitude Southern Hemisphere (SH), especially in the first half of the century, remains poorly documented. We present a composite of water stable isotope data from high-resolution ice cores from the West Antarctic Ice Sheet. This record, representative of West Antarctic surface temperature, shows extreme positive anomalies in the 1936-45 decade that are significant in the context of the background 20th Century warming trend. We interpret these anomalies--previously undocumented in the high-latitude SH--as indicative of strong teleconnections in part driven by the major 1939-42 El Niño. These anomalies are coherent with tropical sea-surface temperature, mean SH air temperature, and North Pacific sea-level pressure, underscoring the sensitivity of West Antarctica's climate, and potentially its ice sheet, to large-scale changes in the global climate.

  15. Carbonaceous aerosol tracers in ice-cores record multi-decadal climate oscillations.

    Science.gov (United States)

    Seki, Osamu; Kawamura, Kimitaka; Bendle, James A P; Izawa, Yusuke; Suzuki, Ikuko; Shiraiwa, Takayuki; Fujii, Yoshiyuki

    2015-09-28

    Carbonaceous aerosols influence the climate via direct and indirect effects on the radiative balance. However, the factors controlling the emissions, transport and role of carbonaceous aerosols in the climate system are highly uncertain. Here we investigate organic tracers in ice cores from Greenland and Kamchatka and find that, throughout the period covered by the records (1550 to 2000 CE), the concentrations and composition of biomass burning, soil bacterial and plant wax tracers correspond to Arctic and regional temperatures as well as the warm season Arctic Oscillation (AO) over multi-decadal time-scales. Specifically, order of magnitude decreases (increases) in abundances of ice-core organic tracers, likely representing significant decreases (increases) in the atmospheric loading of carbonaceous aerosols, occur during colder (warmer) phases in the high-latitude Northern Hemisphere. This raises questions about causality and possible carbonaceous aerosol feedback mechanisms. Our work opens new avenues for ice core research. Translating concentrations of organic tracers (μg/kg-ice or TOC) from ice cores into estimates of the atmospheric loading of carbonaceous aerosols (μg/m³), combined with new model constraints on the strength and sign of climate forcing by carbonaceous aerosols, should be a priority for future research.

  16. Past temperature reconstructions from deep ice cores: relevance for future climate change

    Directory of Open Access Journals (Sweden)

    V. Masson-Delmotte

    2006-01-01

    Full Text Available Ice cores provide unique archives of past climate and environmental changes based only on physical processes. Quantitative temperature reconstructions are essential for the comparison between ice core records and climate models. We give an overview of the methods that have been developed to reconstruct past local temperatures from deep ice cores and highlight several points that are relevant for future climate change. We first analyse the long term fluctuations of temperature as depicted in the long Antarctic record from EPICA Dome C. The long term imprint of obliquity changes in the EPICA Dome C record is highlighted and compared to simulations conducted with the ECBILT-CLIO intermediate complexity climate model. We discuss the comparison between the current interglacial period and the long interglacial corresponding to marine isotopic stage 11, ~400 kyr BP. Previous studies had focused on the role of precession and the thresholds required to induce glacial inceptions. We suggest that, due to the low eccentricity configuration of MIS 11 and the Holocene, the effect of precession on the incoming solar radiation is damped and that changes in obliquity must be taken into account. The EPICA Dome C alignment of terminations I and VI published in 2004 corresponds to a phasing of the obliquity signals. A conjunction of low obliquity and minimum northern hemisphere summer insolation is not found in the next tens of thousands of years, supporting the idea of an unusually long interglacial ahead. As a second point relevant for future climate change, we discuss the magnitude and rate of change of past temperatures reconstructed from Greenland (NorthGRIP and Antarctic (Dome C ice cores. Past episodes of temperatures above the present-day values by up to 5°C are recorded at both locations during the penultimate interglacial period. The rate of polar warming simulated by coupled climate models forced by a CO2 increase of 1% per year is compared to ice-core

  17. The ice-core record - Climate sensitivity and future greenhouse warming

    Science.gov (United States)

    Lorius, C.; Raynaud, D.; Jouzel, J.; Hansen, J.; Le Treut, H.

    1990-01-01

    The prediction of future greenhouse-gas warming depends critically on the sensitivity of the Earth's climate to increasing atmospheric concentrations of these gases. Data from cores drilled in polar ice sheets show a remarkable correlation between past glacial-interglacial temperature changes and the inferred atmospheric concentration of gases such as carbon dioxide and methane. These and other palaeoclimate data are used to assess the role of greenhouse gases in explaining past global climate change, and the validity of models predicting the effect of increasing concentrations of such gases in the atmosphere.

  18. Abrupt climatic changes on the Tibetan Plateau during the Last Ice Age——Comparative study of the Guliya ice core with the Greenland GRIP ice core

    Institute of Scientific and Technical Information of China (English)

    姚檀栋

    1999-01-01

    Based on a comparative study of the Guliya ice core with the Greenland GRIP ice core, the abrupt climatic changes on the Tibetan Plateau during the Last Ice Age have been examined. The major stadial-interstadial events and 7 warm events (Brørup, Odderade, Oerel, Glinde, Hengelo, Denekamp, Bølling) are consistent in the two ice cores. However, there are some unique features in the Guliya ice core records. The transition from warm to cold periods in the Guliya ice core is faster than that in the Greenland GRIP ice core. The magnitude of the climatic changes in the Guliya ice core is also larger than that in the Greenland GRIP ice core. Another significant feature of the Guliya ice core records is that there is a series of cycles of about 200 a from 18 to 35 ka BP. 22 warm events and 20 cold events with a fluctuation magnitude of 7°C have been distinguished. The number of warm and cold events with a fluctuation magnitude within 3°C is as high as 100. It is speculated that the abrupt climatic changes in different

  19. FastFlow: Efficient Parallel Streaming Applications on Multi-core

    CERN Document Server

    Aldinucci, Marco; Meneghin, Massimiliano

    2009-01-01

    Shared memory multiprocessors have come back to popularity thanks to the rapid spread of commodity multi-core architectures. As ever, shared memory programs are fairly easy to write and quite hard to optimise; providing multi-core programmers with optimising tools and programming frameworks is today's challenge. Few efforts have been made to support effective streaming applications on these architectures. In this paper we introduce FastFlow, a low-level programming framework based on lock-free queues explicitly designed to support high-level languages for streaming applications. We compare FastFlow with state-of-the-art programming frameworks such as Cilk, OpenMP, and Intel TBB. We experimentally demonstrate that FastFlow is always more efficient than all of them on a set of micro-benchmarks and on a real-world application; the speedup edge of FastFlow over the other solutions can be substantial for fine-grained tasks, for example +35% on OpenMP, +226% on Cilk, +96% on TBB for the alignment of protein P01111 against UniP...
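
    FastFlow is a C++ framework built on lock-free queues; purely to illustrate the stage-to-stage streaming structure it supports, here is a hedged two-stage producer/consumer pipeline using Python's (locking) queue as a stand-in for FastFlow's lock-free channels.

        import queue
        import threading

        # Two pipeline stages connected by a bounded queue; FastFlow replaces
        # the lock-based queue below with lock-free single-producer/
        # single-consumer queues to keep fine-grained tasks cheap.
        channel = queue.Queue(maxsize=1024)

        def producer(n_items):
            for i in range(n_items):
                channel.put(i)
            channel.put(None)                   # end-of-stream marker

        def consumer(results):
            while (item := channel.get()) is not None:
                results.append(item * item)     # per-item stage work

        results = []
        stages = [threading.Thread(target=producer, args=(10_000,)),
                  threading.Thread(target=consumer, args=(results,))]
        for s in stages: s.start()
        for s in stages: s.join()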

  20. Annually resolved ice core records of tropical climate variability over the past ~1800 years.

    Science.gov (United States)

    Thompson, L G; Mosley-Thompson, E; Davis, M E; Zagorodnov, V S; Howat, I M; Mikhalenko, V N; Lin, P-N

    2013-05-24

    Ice cores from low latitudes can provide a wealth of unique information about past climate in the tropics, but they are difficult to recover and few exist. Here, we report annually resolved ice core records from the Quelccaya ice cap (5670 meters above sea level) in Peru that extend back ~1800 years and provide a high-resolution record of climate variability there. Oxygen isotopic ratios (δ(18)O) are linked to sea surface temperatures in the tropical eastern Pacific, whereas concentrations of ammonium and nitrate document the dominant role played by the migration of the Intertropical Convergence Zone in the region of the tropical Andes. Quelccaya continues to retreat and thin. Radiocarbon dates on wetland plants exposed along its retreating margins indicate that it has not been smaller for at least six millennia.

  1. Climatic variations since the Little Ice Age recorded in the Guliya Ice Core

    Institute of Scientific and Technical Information of China (English)

    姚檀栋; 焦克勤; 田立德; 杨志红; 施维林; Lonnie G. Thompson

    1996-01-01

    The climatic variations since the Little Ice Age recorded in the Guliya ice core are discussed based on the glacial δ18O and accumulation records in the core. Several obvious climate fluctuation events since 1570 can be observed in the records. In the past 400 years, the 17th and 19th centuries were relatively cool periods with less precipitation, and the 18th and 20th centuries were relatively warm periods with high precipitation. The study has also revealed a close relationship between temperature and precipitation on the plateau: warming corresponds to high precipitation and cooling corresponds to less precipitation, which is related to the influence of the monsoon on this region.

  2. Multi-Core Parallel Gradual Pattern Mining Based on Multi-Precision Fuzzy Orderings

    Directory of Open Access Journals (Sweden)

    Federico Del Razo Lopez

    2013-11-01

    Full Text Available Gradual patterns aim at describing co-variations of data such as "the higher the size, the higher the weight". In recent years, such patterns have been studied more and more from the data mining point of view. The extraction of such patterns relies on efficient and smart orderings that can be built among data: for instance, when ordering the data with respect to size, the data are then also ordered with respect to weight. However, in many application domains, it is hardly possible to consider that data values are crisply ordered. When considering gene expression, it is not true from the biological point of view that Gene 1 is more expressed than Gene 2 if their levels of expression differ only in the tenth decimal place. We thus consider fuzzy orderings and the fuzzy gamma rank correlation. In this paper, we address two major problems related to this framework: (i) the high memory consumption and (ii) the precision, representation and efficient storage of the fuzzy concordance degrees versus the loss or gain of computing power. For this purpose, we consider multi-precision matrices represented using sparse matrices coupled with parallel algorithms. Experimental results show the interest of our proposal.
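
    For intuition, a hedged sketch of the crisp (non-fuzzy) case: the support of a gradual pattern "the higher A, the higher B" can be read off the fraction of concordant object pairs. The paper's fuzzy orderings replace the 0/1 comparisons below with concordance degrees in [0, 1], stored in multi-precision sparse matrices; the data values here are invented.

        import numpy as np

        # Crisp concordance: fraction of pairs (i, j) on which both
        # attributes strictly increase together.
        def gradual_support(a, b):
            inc_a = a[:, None] < a[None, :]
            inc_b = b[:, None] < b[None, :]
            n = len(a)
            return (inc_a & inc_b).sum() / (n * (n - 1) / 2)

        size = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        weight = np.array([2.0, 2.9, 3.1, 4.5, 4.4])
        print(gradual_support(size, weight))  # near 1 => "higher size, higher weight"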

  3. Global Climate Change: Valuable Insights from Concordant and Discordant Ice Core Histories

    Science.gov (United States)

    Mosley-Thompson, E.; Thompson, L. G.; Porter, S. E.; Goodwin, B. P.; Wilson, A. B.

    2014-12-01

    Earth's ice cover is responding to the ongoing large-scale warming driven in part by anthropogenic forces. The highest tropical and subtropical ice fields are dramatically shrinking and/or thinning and unique climate histories archived therein are now threatened, compromised or lost. Many ice fields in higher latitudes are also experiencing and recording climate system changes although these are often manifested in less evident and spectacular ways. The Antarctic Peninsula (AP) has experienced a rapid, widespread and dramatic warming over the last 60 years. Carefully selected ice fields in the AP allow reconstruction of long histories of key climatic variables. As more proxy climate records are recovered it is clear they reflect a combination of expected and unexpected responses to seemingly similar climate forcings. Recently acquired temperature and precipitation histories from the Bruce Plateau are examined within the context provided by other cores recently collected in the AP. Understanding the differences and similarities among these records provides a better understanding of the forces driving climate variability in the AP over the last century. The Arctic is also rapidly warming. The δ18O records from the Bona-Churchill and Mount Logan ice cores from southeast Alaska and southwest Yukon Territory, respectively, do not record this strong warming. The Aleutian Low strongly influences moisture transport to this geographically complex region, yet its interannual variability is preserved differently in these cores located just 110 km apart. Mount Logan is very sensitive to multi-decadal to multi-centennial climate shifts in the tropical Pacific while low frequency variability on Bona-Churchill is more strongly connected to Western Arctic sea ice extent. There is a natural tendency to focus more strongly on commonalities among records, particularly on regional scales. However, it is also important to investigate seemingly poorly correlated records, particularly

  4. Climate variation since the Last Interglaciation recorded in the Guliya ice core

    Institute of Scientific and Technical Information of China (English)

    姚檀栋; L.G.Thompson; 施雅风; 秦大河; 焦克勤; 杨志红; 田立德; E.M.Thompson

    1997-01-01

    The climatic and environmental variations since the Last Interglaciation are reconstructed based on the study of the upper 268 m of the 309-m-long Guliya ice core. Five stages can be distinguished since the Last Interglaciation from the δ18O record in the Guliya ice core: Stage 1 (Deglaciation), Stage 2 (the Last Glacial Maximum), Stage 3 (interstadial), Stage 4 (interstadial in the early glacial maximum) and Stage 5 (the Last Interglaciation). Stage 5 can be divided further into 5 substages: a, b, c, d, e. The δ18O record in the Guliya ice core clearly indicates the close correlation between temperature variation on the Tibetan Plateau and solar activity. The study indicates that solar activity is a main forcing of the climatic variation on the Tibetan Plateau. Through a comparison of the ice core record in Guliya with those in Greenland and the Antarctic, it can be found that the variation of large temperature variation events in different parts of the world is generally the same, b

  5. Marine sediment cores database for the Mediterranean Basin: a tool for past climatic and environmental studies

    Science.gov (United States)

    Alberico, I.; Giliberti, I.; Insinga, D. D.; Petrosino, P.; Vallefuoco, M.; Lirer, F.; Bonomo, S.; Cascella, A.; Anzalone, E.; Barra, R.; Marsella, E.; Ferraro, L.

    2017-06-01

    Paleoclimatic data are essential for fingerprinting the climate of the Earth before the advent of modern recording instruments. They enable us to recognize past climatic events and predict future trends. Within this framework, a conceptual and logical model was drawn up to physically implement a paleoclimatic database named WDB-Paleo that includes the paleoclimatic proxy data of marine sediment cores of the Mediterranean Basin. Twenty entities were defined to record four main categories of data: a) the features of oceanographic cruises and cores (metadata); b) the presence/absence of paleoclimatic proxies pulled from about 200 scientific papers; c) the quantitative analyses of planktonic and benthic foraminifera, pollen, calcareous nannoplankton, magnetic susceptibility, stable isotopes and radionuclide values of about 14 cores recovered by the Institute for Coastal Marine Environment (IAMC) of the Italian National Research Council (CNR) in the framework of several past research projects; d) specific entities recording quantitative data on δ18O, AMS 14C (Accelerator Mass Spectrometry) and tephra layers available in scientific papers. Published data concerning paleoclimatic proxies in the Mediterranean Basin are recorded for only 400 out of 6000 cores retrieved in the area, and they show a very irregular geographical distribution. Moreover, data availability decreases when a constrained time interval is investigated or more than one proxy is required. We present three applications of WDB-Paleo for the Younger Dryas (YD) paleoclimatic event at the Mediterranean scale and point out the potential of this tool for integrated stratigraphy studies.
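
    As a hedged illustration of how two of the twenty entities might be set up (all table and column names below are invented for the example; the actual WDB-Paleo schema is only summarized in this abstract), a core-metadata entity and a proxy-measurement entity could look like this.

        import sqlite3

        # Illustrative only: one metadata entity for cores, one entity for
        # quantitative proxy measurements keyed to core and depth.
        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE core (
            core_id       TEXT PRIMARY KEY,
            cruise        TEXT,
            lat REAL, lon REAL,
            water_depth_m REAL
        );
        CREATE TABLE proxy_measurement (
            core_id  TEXT REFERENCES core(core_id),
            depth_cm REAL,
            proxy    TEXT,   -- e.g. 'd18O', 'AMS 14C', 'tephra'
            value    REAL
        );
        """)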

  6. A Parallel and Concurrent Implementation of Lin-Kernighan Heuristic (LKH-2) for Solving Traveling Salesman Problem for Multi-Core Processors using SPC3 Programming Model

    Directory of Open Access Journals (Sweden)

    Muhammad Ali Ismail

    2011-08-01

    Full Text Available With the arrival of multi-core processors, every processor now has built-in parallel computational power, which can be fully utilized only if the program in execution is written accordingly. This study is part of on-going research on the design of a new parallel programming model for multi-core processors. In this paper we present a combined parallel and concurrent implementation of the Lin-Kernighan heuristic (LKH-2) for solving the Travelling Salesman Problem (TSP) using a newly developed parallel programming model, SPC3 PM, for general-purpose multi-core processors. This implementation is found to be very simple, highly efficient and scalable, and less time consuming compared to existing serial LKH-2 implementations in a multi-core processing environment. We have tested our parallel implementation of LKH-2 with medium and large TSP instances from TSPLIB. For all these tests, our proposed approach has shown much improved performance and scalability.
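
    LKH-2 itself is far more sophisticated; as a hedged miniature of the same idea (scoring many candidate local-search moves concurrently on a multi-core machine), the sketch below evaluates 2-opt moves for a toy TSP instance in parallel. It is not Lin-Kernighan and not SPC3 PM, just an illustration of move-level parallelism.

        import math
        from multiprocessing import Pool

        # Toy parallel local search: each worker scores one 2-opt move.
        def tour_length(tour, pts):
            return sum(math.dist(pts[tour[k]], pts[tour[(k + 1) % len(tour)]])
                       for k in range(len(tour)))

        def score_move(args):
            tour, pts, i, j = args
            candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
            return tour_length(tour, pts) - tour_length(candidate, pts), i, j

        if __name__ == "__main__":
            pts = [(0, 0), (2, 1), (0, 1), (1, 0), (2, 0), (1, 1)]
            tour = list(range(len(pts)))
            moves = [(tour, pts, i, j)
                     for i in range(1, len(pts) - 1)
                     for j in range(i + 2, len(pts) + 1)]
            with Pool(processes=4) as pool:
                gain, i, j = max(pool.map(score_move, moves))
            if gain > 0:                        # apply the best improving move
                tour = tour[:i] + tour[i:j][::-1] + tour[j:]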

  7. Millennial and sub-millennial scale climatic variations recorded in polar ice cores over the last glacial period

    National Research Council Canada - National Science Library

    Capron, E; Landais, A; Chappellaz, J; Schilt, A; Buiron, D; Dahl-Jensen, D; Johnsen, S. J; Jouzel, J; Lemieux-Dudon, B; Loulergue, L; Leuenberger, M; Masson-Delmotte, V; Meyer, H; Oerter, H; Stenni, B

    2010-01-01

    Since its discovery in Greenland ice cores, the millennial scale climatic variability of the last glacial period has been increasingly documented at all latitudes with studies focusing mainly on Marine Isotopic Stage 3 (MIS 3...

  8. 12-core x 3-mode Dense Space Division Multiplexed Transmission over 40 km Employing Multi-carrier Signals with Parallel MIMO Equalization

    DEFF Research Database (Denmark)

    Mizuno, T.; Kobayashi, T.; Takara, H.;

    2014-01-01

    We demonstrate dense SDM transmission of 20-WDM multi-carrier PDM-32QAM signals over a 40-km 12-core x 3-mode fiber with 247.9-b/s/Hz spectral efficiency. Parallel MIMO equalization enables 21-ns DMD compensation with 61 TDE taps per subcarrier.

  9. Recent climate tendencies on an East Antarctic ice shelf inferred from a shallow firn core network

    Science.gov (United States)

    Schlosser, E; Anschütz, H; Divine, D; Martma, T; Sinisalo, A; Altnau, S; Isaksson, E

    2014-01-01

    Nearly three decades of stable isotope ratios and surface mass balance (SMB) data from eight shallow firn cores retrieved at Fimbul Ice Shelf, East Antarctica, in the Austral summers 2009–2011 have been investigated. An additional longer core drilled in 2000/2001 extends the series back to the early eighteenth century. Isotope ratios and SMB from the stacked record of all cores were also related to instrumental temperature data from Neumayer Station on Ekström Ice Shelf. Since the second half of the twentieth century, the SMB shows a statistically significant negative trend, whereas the δ18O of the cores shows a significant positive trend. No trend is found in air temperature at the nearest suitable weather station, Neumayer (available since 1981). This does not correspond to the statistically significant positive trend in the Southern Annular Mode (SAM) index, which is usually associated with a cooling of East Antarctica. SAM index and SMB are negatively correlated, which might be explained by a decrease in meridional exchange of energy and moisture leading to lower precipitation amounts. Future monitoring of climate change on the sensitive Antarctic ice shelves is necessary to assess its consequences for sea level change. Key points: mass balance and stable oxygen isotope ratios from shallow firn cores; decreasing trend in surface mass balance, no trend in stable isotopes; negative correlation between SAM and SMB. PMID: 25821663

  10. North Pacific Climate Variability in Ice Core Accumulation Records From Eclipse Icefield, Yukon, Canada

    Science.gov (United States)

    Yalcin, K.; Wake, C. P.; Kreutz, K. J.

    2005-12-01

    Three annually dated ice cores from Eclipse Icefield, Yukon, Canada, provide records of net accumulation spanning the last 100 to 500 years. The ice cores were dated by annual layer counting verified by reference horizons provided by radioactive fallout and volcanic eruptions. Annual layers become progressively thinner with depth in the Eclipse ice cores, requiring reconstruction of original annual layer thicknesses by correcting for ice creep. An empirical approach was used that is based on the observed layer thicknesses from annual layer counting of the Eclipse ice cores. Accumulation records are highly reproducible, with 73% of the signal shared between the three cores. The accumulation time series shows considerable decadal scale variability that can be related to climate regimes that characterize the North Pacific. For example, periods of high accumulation are noted from 1470-1500, 1540-1560, and 1925-1975. Periods of low accumulation are observed between 1500-1540, 1680-1780, and 1875-1925. The strongest multi-year drop in accumulation is seen between 1979 and 1984, although there are isolated years with lower accumulation. This drop in accumulation is possibly related to the 1977 regime shift in the Pacific Decadal Oscillation. However, PDO regime shifts are not always reflected in the accumulation time series, implying a non-linear response or modulation by other modes of climate variability such as ENSO. It is noteworthy that the Eclipse accumulation time series is out of phase with the accumulation time series from nearby Mount Logan on all time scales, for reasons yet to be investigated.

  11. Objective identification of climate states from Greenland ice cores for the last glacial period

    Directory of Open Access Journals (Sweden)

    D. J. Peavoy

    2010-06-01

    Full Text Available We present statistical methods to systematically determine climate regimes for the last glacial period using three temperature proxy records from Greenland: measurements of δ18O from the Greenland Ice Sheet Project 2 (GISP2), the Greenland Ice Core Project (GRIP) and the North Greenland Ice Core Project (NGRIP). By using Bayesian model comparison methods we find that, in two out of three data sets, a model with 3 states is very strongly supported. We interpret these states as corresponding to: a gradual cooling regime due to iceberg influx in the North Atlantic, sudden temperature decrease due to increased freshwater influx following ice sheet collapse, and the Dansgaard-Oeschger events associated with sudden rebound temperature increase after the thermohaline circulation recovers its full flux. We find that these models are far superior to those that differentiate between states based on absolute temperature differences only, which questions the appropriateness of defining stadial and interstadial climate states. We investigate the recurrence properties of these climate regimes and find that the only significant periodicity is within the Greenland Ice Sheet Project 2 data at 1450 years, in agreement with previous studies.
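
    As a hedged stand-in for the model-selection step described above (the paper performs Bayesian model comparison on regime-switching models, not the BIC-scored Gaussian mixture shown here, and the data below are synthetic), the sketch illustrates how a 3-state description can be preferred over 2 or 4 states.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Synthetic stand-in isotope series: three overlapping "climate states".
        rng = np.random.default_rng(0)
        d18o = np.concatenate([rng.normal(-42, 1, 2000),
                               rng.normal(-40, 1, 1500),
                               rng.normal(-37, 1, 500)]).reshape(-1, 1)

        for k in (2, 3, 4):
            gm = GaussianMixture(n_components=k, random_state=0).fit(d18o)
            print(k, "states, BIC =", gm.bic(d18o))  # lower BIC => preferred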

  12. Non-climatic signal in ice core records: lessons from Antarctic mega-dunes

    Directory of Open Access Journals (Sweden)

    A. Ekaykin

    2015-12-01

    Full Text Available We present the results of glaciological investigations in the mega-dune area located 30 km to the east of Vostok Station (central East Antarctica) carried out during the 58th, 59th and 60th Russian Antarctic Expeditions (January 2013–January 2015). Snow accumulation rate and isotope content (δD, δ18O and δ17O) were measured along the 2 km profile across the mega-dune ridge, accompanied by precise GPS altitude measurements and a GPR survey. It is shown that the spatial variability of snow accumulation and isotope content covaries with the surface slope. The accumulation rate regularly changes by one order of magnitude within the distance −1. The full cycle of the dune drift is thus about 410 years. Since the spatial anomalies of snow accumulation and isotopic composition are supposed to drift with the dune, an ice core drilled in the mega-dune area would exhibit a non-climatic 410-year cycle in these two parameters. We simulated a vertical profile of snow isotopic composition with such a non-climatic variability, using the data on the dune size and velocity. This artificial profile is then compared with the real vertical profile of snow isotopic composition obtained from a core drilled in the mega-dune area. We note that the two profiles are very similar. The obtained results are discussed in terms of the interpretation of data obtained from ice cores drilled beyond the mega-dune areas.

  13. Meteoric 10Be in Lake Cores as a Measure of Climatic and Erosional Change

    Science.gov (United States)

    Jensen, R. E.; Dixon, J. L.

    2015-12-01

    Utilization of meteoric 10Be as a paleoenvironmental proxy has the potential to offer new insights into paleoprecipitation records and paleoclimate models, as well as into long-term variations in erosion with climate. The delivery of meteoric 10Be to the surface varies with precipitation, and its strong adsorption to sediment has already proven useful in studies of erosion. Thus, it is likely that meteoric 10Be concentrations in lake sediments vary under both changing climate and changing sediment influx. Assessment of the relative importance of these changes requires the comparison of 10Be concentrations in well-dated lake cores with independent paleoenvironmental proxies, including oxygen isotope, pollen, and charcoal records, as well as variation in the geochemical composition of the sediments. Blacktail Pond details 15,000 years of climatic change in the Yellowstone region. We develop a new model framework for predicting meteoric 10Be concentrations with depth in the core, based on sedimentation rates of both lake-derived and terrigenous sediments and changes in the flux of meteoric 10Be with precipitation. Titanium concentrations and previously determined 10Be concentrations in wind-derived loess provide proxies for the changing delivery of 10Be to the lake by terrigenous sources. We use existing paleoenvironmental data obtained from this core and the surrounding region to develop models for changing rainfall across the region and predict meteoric 10Be delivery to the lake by precipitation. Based on a suite of ~10 models, sedimentation rate is the primary control on meteoric 10Be in the Blacktail Pond core unless terrestrial input is very high, as it was post-glacially in the early Holocene when the lake experienced a high influx of loess and terrigenous sediments. We used these models to inform sample selection for 10Be analysis along the Blacktail Pond core. Core sediments are processed for meteoric 10Be analysis using sequential digestions and standard extraction procedures.

  14. Real-time parallel implementation of Pulse-Doppler radar signal processing chain on a massively parallel machine based on multi-core DSP and Serial RapidIO interconnect

    Science.gov (United States)

    Klilou, Abdessamad; Belkouch, Said; Elleaume, Philippe; Le Gall, Philippe; Bourzeix, François; Hassani, Moha M'Rabet

    2014-12-01

    Pulse-Doppler radars require high computing power. In this paper, a massively parallel machine has been developed to implement a Pulse-Doppler radar signal processing chain in real time. The proposed machine consists of two C6678 digital signal processors (DSPs), each with eight DSP cores, interconnected by a Serial RapidIO (SRIO) bus. In this study, each individual core is considered as the basic processing element; hence, the proposed parallel machine contains 16 processing elements. A straightforward model was adopted to distribute the Pulse-Doppler radar signal processing chain. This model provides low latency, but communication inefficiency limits system performance. This paper proposes several optimizations that greatly reduce the inter-processor communication of the straightforward model and improve the parallel efficiency of the system. A use case of the Pulse-Doppler radar signal processing chain has been used to illustrate and validate the proposed mapping model. Experimental results show that the parallel efficiency of the proposed parallel machine is about 90%.
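
    For orientation, here is a hedged sketch of the two core stages of such a chain (pulse compression by matched filtering, then a Doppler FFT across pulses); the paper's contribution is mapping stages like these onto 16 DSP cores over SRIO, not the signal processing itself, and the waveform and echo data below are synthetic.

        import numpy as np

        # Range-Doppler processing: matched-filter each received pulse, then
        # FFT along the pulse axis to separate targets by Doppler shift.
        def range_doppler_map(echoes, waveform):
            # echoes: complex (n_pulses, n_range); waveform: transmitted pulse
            taps = np.conj(waveform[::-1])          # matched filter taps
            compressed = np.array([np.convolve(p, taps, mode="same")
                                   for p in echoes])
            return np.fft.fftshift(np.fft.fft(compressed, axis=0), axes=0)

        n_pulses, n_range = 64, 256
        waveform = np.exp(1j * np.pi * np.linspace(-1, 1, 16) ** 2)  # toy chirp
        echoes = (np.random.randn(n_pulses, n_range)
                  + 1j * np.random.randn(n_pulses, n_range))
        rd_map = range_doppler_map(echoes, waveform)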

  15. Recent climate tendencies on an East Antarctic ice shelf inferred from a shallow firn core network.

    Science.gov (United States)

    Schlosser, E; Anschütz, H; Divine, D; Martma, T; Sinisalo, A; Altnau, S; Isaksson, E

    2014-06-16

    Nearly three decades of stable isotope ratios and surface mass balance (SMB) data from eight shallow firn cores retrieved at Fimbul Ice Shelf, East Antarctica, in the Austral summers 2009-2011 have been investigated. An additional longer core drilled in 2000/2001 extends the series back to the early eighteenth century. Isotope ratios and SMB from the stacked record of all cores were also related to instrumental temperature data from Neumayer Station on Ekström Ice Shelf. Since the second half of the twentieth century, the SMB shows a statistically significant negative trend, whereas the δ(18)O of the cores shows a significant positive trend. No trend is found in air temperature at the nearest suitable weather station, Neumayer (available since 1981). This does not correspond to the statistically significant positive trend in Southern Annular Mode (SAM) index, which is usually associated with a cooling of East Antarctica. SAM index and SMB are negatively correlated, which might be explained by a decrease in meridional exchange of energy and moisture leading to lower precipitation amounts. Future monitoring of climate change on the sensitive Antarctic ice shelves is necessary to assess its consequences for sea level change.

  16. Chemical signals of past climate and environment from polar ice cores and firn air.

    Science.gov (United States)

    Wolff, Eric W

    2012-10-07

    Chemical and isotopic records obtained from polar ice cores have provided some of the most iconic datasets in Earth system science. Here, I discuss how the different records are formed in the ice sheets, emphasising in particular the contrast between chemistry held in the snow/ice phase, and that which is trapped in air bubbles. Air diffusing slowly through the upper firn layers of the ice sheet can also be sampled in large volumes to give more recent historical information on atmospheric composition. The chemical and geophysical issues that have to be solved to interpret ice core data in terms of atmospheric composition and emission changes are also highlighted. Ice cores and firn air have provided particularly strong evidence about recent changes (last few decades to centuries), including otherwise inaccessible data on increases in compounds that are active as greenhouse gases or as agents of stratospheric ozone depletion. On longer timescales (up to 800,000 years in Antarctica), ice cores reveal major changes in biogeochemical cycling, which acted as feedbacks on the very major changes in climate between glacial and interglacial periods.

  17. 10Be climate fingerprints during the Eemian in the NEEM ice core, Greenland

    Science.gov (United States)

    Sturevik-Storm, Anna; Aldahan, Ala; Possnert, Göran; Berggren, Ann-Marie; Muscheler, Raimund; Dahl-Jensen, Dorthe; Vinther, Bo M.; Usoskin, Ilya

    2014-09-01

    Several deep Greenland ice cores have been retrieved; however, capturing the Eemian period has been problematic due to stratigraphic disturbances in the ice. The new Greenland deep ice core from the NEEM site (77.45°N, 51.06°W, 2450 m a.s.l.) recovered a relatively complete Eemian record. Here we discuss the cosmogenic 10Be isotope record from this core. The results show Eemian average 10Be concentrations about 0.7 times lower than in the Holocene, which suggests a warmer climate and approximately 65-90% higher precipitation in Northern Greenland compared to today. Effects of shorter solar variations on 10Be concentration are smoothed out due to the coarse time resolution, but the occurrence of a solar maximum at 115.26-115.36 kyr BP is proposed. Relatively high 10Be concentrations are found in the basal ice sections of the core, which may originate from the glacial-interglacial transition and relate to a geomagnetic excursion about 200 kyr BP.

  18. Climatic changes on orbital and sub-orbital time scale recorded by the Guliya ice core in Tibetan Plateau

    Institute of Scientific and Technical Information of China (English)

    姚檀栋; 徐柏青; 蒲健辰

    2001-01-01

    Based on ice core records from the Tibetan Plateau and Greenland, the features and possible causes of climatic changes on orbital and sub-orbital time scales were discussed. Orbital time scale climatic change recorded in ice cores from the Tibetan Plateau typically leads that from polar regions, which indicates that climatic change on the Tibetan Plateau might occur earlier than in polar regions. Solar radiation change is a major factor dominating climatic change on the orbital time scale. However, climatic events on the sub-orbital time scale occurred later on the Tibetan Plateau than in the Arctic region, indicating a different mechanism. For example, the Younger Dryas and Heinrich events took place earlier in the Greenland ice core record than in the Guliya ice core record. It is reasonable to propose the hypothesis that these climatic events were possibly affected by the Laurentide Ice Sheet. Therefore, ice sheets are critically important to climatic change on the sub-orbital time scale during some ice ages.

  19. Towards understanding North Pacific climate variability with instrumental and ice core records

    Science.gov (United States)

    Kelsey, Eric P.

    Reconstructing climate variability prior to the instrumental era is critical to advance our understanding of the Earth's climate system. Although many paleoclimate records from the North Atlantic basin have been studied, relatively few paleoclimate records have been recovered in the North Pacific leaving a gap in our knowledge concerning North Pacific climate variability. The Eclipse and Mount Logan Prospector-Russell ice cores are favorably located in the St. Elias Mountains, Yukon, Canada to document North Pacific climate variability over the late Holocene. Detailed analysis reveals a consistent relationship of surface air temperature (SAT) anomalies associated with extreme Arctic Oscillation (AO) and Pacific-North America (PNA) index values, and a consistent relationship of North Pacific sea level pressure (SLP) anomalies associated with extreme Mt. Logan annual [Na+] and Eclipse cold season accumulation values. Spatial SAT anomaly patterns are most consistent for AO and PNA index values ≥1.5 and ≤-1.5 during the period 1872-2010. The highest and lowest ˜10% of Eclipse warm and cold season stable isotopes are associated with distinct atmospheric circulation patterns. The most-fractionated isotope values occur with a weaker Aleutian Low, and the least-fractionated isotope values occur with an amplification of the Aleutian Low and northwestern North American ridge. The assumption of stationarity between ice core records and sea-level pressure was tested for the Eclipse cold season accumulation and Mt. Logan annual sodium concentration records for 1872-2001. A stationary relationship was found for ≥95% of years when Mt. Logan sodium concentrations were ≤1.32 microg/L, with positive SLP anomalies in the eastern North Pacific. This high frequency supports the use of low sodium values at Mt. Logan for a reconstruction of SLP prior to 1872. Negative SLP anomalies in the North Pacific occurred for extreme high sodium concentration years and positive SLP

  20. Third Pole Glaciers and Ice Core Records of Past, Present and Future Climate

    Science.gov (United States)

    Thompson, L. G.; Yao, T.; Mosley-Thompson, E. S.; Davis, M. E.

    2011-12-01

    Ice core histories collected over the last two decades from across the Tibetan Plateau and the Himalaya demonstrate the climatic complexity and diversity of the Third Pole (TP) region. Proxy climate records spanning more than 500,000 years have been recovered from the Guliya ice cap in the far northwestern Kunlun Shan, which is dominated by westerly air flow over the Eurasian land mass. Shorter records (central TP, and also in the Himalaya to the south where a monsoonal climate regime dominates and the annual accumulation is high. The Himalayan ice fields are sensitive to fluctuations in the intensity of the South Asian Monsoon and are affected by the rising temperatures in the region. We compare the recent climatic changes to earlier distinctive epochs such as the Medieval Climate Anomaly (~950-1250 AD), the early Holocene "Hypsithermal" (~5 to 9 kyr BP) and the Eemian (~114-130 kyr BP). The Eemian, the most recent period when Earth was significantly warmer than today, can serve in part as an analog for the coming greenhouse world. One thousand-year records of δ18O variations from four of these ice fields illustrate the effect of the recent warming across the TP. Mean values for much of the 20th century (AD 1938 to 1987) are compared with those for the prior nine centuries (AD 1000 to 1937). The greatest recent enrichment occurs at the highest elevation site (Dasuopu in the Himalaya), presumably where the greatest warming is occurring. These trends are consistent with instrumental temperature records collected since the 1950s across the TP as well as with IPCC (2007) model predictions of a nearly two-fold vertical amplification of temperatures in the Tropics. A fifth ice field, Naimona'nyi (6100 m a.s.l.), is not included in the study as recent melting at the top of the glacier has obliterated the upper 40 to 50 years of the record. Evidence confirming this will be presented along with recent mass balance measurements indicating that no net accumulation occurs on

  1. A stratigraphic framework for naming and robust correlation of abrupt climatic changes during the last glacial period based on three synchronized Greenland ice core records

    Science.gov (United States)

    Rasmussen, Sune O.

    2014-05-01

    Due to their outstanding resolution and well-constrained chronologies, Greenland ice core records have long been used as a master record of past climatic changes during the last interglacial-glacial cycle in the North Atlantic region. As part of the INTIMATE (INtegration of Ice-core, MArine and TErrestrial records) project, protocols have been proposed to ensure consistent and robust correlation between different records of past climate. A key element of these protocols has been the formal definition of numbered Greenland Stadials (GS) and Greenland Interstadials (GI) within the past glacial period as the Greenland expressions of the characteristic Dansgaard-Oeschger events that represent cold and warm phases of the North Atlantic region, respectively. Using a recent synchronization of the NGRIP, GRIP, and GISP2 ice cores that allows the parallel analysis of all three records on a common time scale, we here present an extension of the GS/GI stratigraphic template to the entire glacial period. This is based on a combination of isotope ratios (δ18O, reflecting mainly local temperature) and calcium concentrations (reflecting mainly atmospheric dust loading). In addition to the well-known sequence of Dansgaard-Oeschger events that were first defined and numbered in the ice core records more than two decades ago, a number of short-lived climatic oscillations have been identified in the three synchronized records. Some of these events have been observed in other studies, but we here propose a consistent scheme for discriminating and naming all the significant climatic events of the last glacial period that are represented in the Greenland ice cores. This is a key step aimed at promoting unambiguous comparison and correlation between different proxy records, as well as a more secure basis for investigating the dynamics and fundamental causes of these climatic perturbations. The work presented is under review for publication in Quaternary Science Reviews. Author team: S

  2. Research of Methods for General Multi-Core Parallel Debugging%通用多核并行调试方法研究

    Institute of Scientific and Technical Information of China (English)

    王敬宇

    2009-01-01

    Multi-core architectures make parallel programming even harder, and high productivity in parallel software development cannot be achieved with current manual debugging techniques. To support the development of efficient multi-core parallel debugging tools, this paper analyzes the problems faced by currently available debugging methods and presents a progressive debugging approach organized by parallel granularity, which can make full use of accumulated experience in parallel programming and allows debugging techniques to be refined incrementally.

  3. Is there a connection between Earth's core and climate at multidecadal time scales?

    Science.gov (United States)

    Lambert, Sébastien; Marcus, Steven; de Viron, Olivier

    2017-04-01

    The length-of-day (LOD) undergoes multidecadal variations of several milliseconds (ms) attributed to changes in the fluid outer core angular momentum. These variations resemble a quasi-periodic oscillation with a duration of 60 to 70 years, although the periodicity (and its exact length) is debatable because of the relatively short observational time span and the lower quality of the observations before the 20th century. Interestingly, similar variations show up in various measured or reconstructed climate indices including the sea surface (SST) and surface air (SAT) temperatures. It has been shown in several studies that LOD variations lead SST and SAT variations by a few years. No clear scenario has been proposed so far to explain the link between external, astronomical forcing (e.g., the solar wind), Earth's rotation (core-driven torsional oscillations), and Earth's surface processes (climate variations) at these time scales. Accumulating evidence, however, suggests that the centrifugal tides generated by multidecadal LOD variations act as a 'valve' controlling the transfer of thermal energy from the lithosphere to the surface via geothermal fluxes. This hypothesis is supported by recent studies reporting significant correlations between tidal and rotational excitation and seafloor and surface volcanism. In this study, we extend recent works by us and other independent authors by re-assessing the correlations between multidecadal LOD, climate indices, solar and magnetic activity, as well as gridded data including SST, SAT, and cloud cover. We pay special attention to the time lags: when a significant correlation is found, the value of the lag may help to discriminate between various possible scenarios. We locate some 'hot spots', particularly in the Atlantic Ocean and along the trajectory of the upper branch of the Atlantic meridional overturning circulation (AMOC), where the 70-yr oscillation is strongly marked. In addition, we discuss the possibility for centrifugal

  4. Non-climatic signal in ice core records: lessons from Antarctic megadunes

    Science.gov (United States)

    Ekaykin, Alexey; Eberlein, Lutz; Lipenkov, Vladimir; Popov, Sergey; Scheinert, Mirko; Schröder, Ludwig; Turkeev, Alexey

    2016-06-01

    We present the results of glaciological investigations in the megadune area located 30 km to the east of Vostok Station (central East Antarctica), carried out during the 58th, 59th and 60th Russian Antarctic Expeditions (January 2013-2015). Snow accumulation rate and isotope content (δD, δ18O and δ17O) were measured along a 2 km profile across the megadune ridge, accompanied by precise GPS altitude measurements and a ground-penetrating radar (GPR) survey. It is shown that the spatial variability of snow accumulation and isotope content covaries with the surface slope. The accumulation rate regularly changes by an order of magnitude along the profile, and the isotope content shows a negative correlation with the snow accumulation. Analysing dxs / δD and 17O-excess / δD slopes (where dxs = δD - 8 · δ18O and 17O-excess = ln(δ17O / 1000 + 1) - 0.528 · ln(δ18O / 1000 + 1)), we conclude that the spatial variability of the snow isotopic composition in the megadune area could be explained by post-depositional snow modifications. Using the GPR data, we estimated the apparent dune drift velocity (4.6 ± 1.1 m yr-1). The full cycle of the dune drift is thus about 410 years. Since the spatial anomalies of snow accumulation and isotopic composition are supposed to drift with the dune, a core drilled in the megadune area would exhibit this non-climatic 410-year cycle in both parameters. We simulated a vertical profile of snow isotopic composition with such a non-climatic variability, using the data on the dune size and velocity. This artificial profile was then compared with the real vertical profile of snow isotopic composition obtained from a core drilled in the megadune area. We note that the two profiles are very similar. The obtained results are discussed in terms of the interpretation of data obtained from ice cores drilled beyond the megadune areas.
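
    As a quick consistency check on these numbers (a back-of-the-envelope sketch; the dune wavelength λ is inferred here from the reported values rather than stated in the abstract), the drift velocity and the length of the non-climatic cycle are related by

        T_cycle = λ / v ≈ 1900 m / (4.6 m yr-1) ≈ 410 yr,

    i.e. the ~410-year cycle corresponds to a dune wavelength of roughly 1.9 km, of the same order as the 2 km survey profile.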

  5. Human and climate impacts on Holocene fire activity recorded in polar and mountain ice cores

    Science.gov (United States)

    Kehrwald, Natalie; Zennaro, Piero; Kirchgeorg, Torben; Li, Quanlian; Wang, Ninglian; Power, Mitchell; Zangrando, Roberta; Gabrielli, Paolo; Thompson, Lonnie; Gambaro, Andrea; Barbante, Carlo

    2014-05-01

    Fire is one of the major drivers of biogeochemical change on local to hemispheric scales, emitting greenhouse gases, altering atmospheric chemistry, and changing primary productivity. Levoglucosan (1,6-anhydro-β-D-glucopyranose) is a specific molecular marker that is only produced by cellulose burning at temperatures > 300°C, comprises a major component of smoke plumes, and can be transported over distances > 1000 km. Levoglucosan is deposited on and archived in glaciers over glacial-interglacial cycles, providing pyrochemical evidence for exploring interactions between fire, climate and human activity. Ice cores provide records of past biomass burning from regions of the world with limited paleofire data, including polar and low-latitude, high-altitude regions. Here, we present Holocene fire activity records from the NEEM, Greenland (77° 27'N; 51° 3'W; 2454 masl), EPICA Dome C, Antarctica (75° 06'S; 123° 21'E; 3233 masl), Kilimanjaro, Tanzania (3° 05'S, 21.2° E, 5893 masl) and Muztagh, China (87.17° E; 36.35° N; 5780 masl) ice cores. The NEEM ice core reflects boreal fire activity from both North American and Eurasian sources. Temperature is the dominant control of NEEM levoglucosan flux over decadal to millennial time scales, while droughts influence fire activity over sub-decadal timescales. Our results demonstrate the prominence of Siberian fire sources during intense multiannual droughts. Unlike the NEEM core, which incorporates the largest land masses in the world as potential fire sources, EPICA Dome C is located far from any possible fire source. However, EPICA Dome C levoglucosan concentrations are consistently above detection limits and demonstrate a substantial 1000-fold increase in fire activity beginning approximately 800 years ago. This significant and sustained increase coincides with Maori arrival and dispersal in New Zealand, augmented by later European arrival in Australia. The EPICA Dome C levoglucosan profile is

  6. Mt. Logan Ice Core Record of North Pacific Holocene Climate Variability

    Science.gov (United States)

    Osterberg, E. C.; Mayewski, P. A.; Fisher, D. A.; Kreutz, K. J.; Handley, M. J.; Sneed, S. B.

    2006-12-01

    A >12,000-year-long, continuous, high-resolution (sub-annual to multi-decadal) ice core record from the summit plateau (5300 m asl) of Mt. Logan, Yukon, Canada, reveals large, abrupt fluctuations in North Pacific climate throughout the Holocene with a 1-2 ky periodicity. Co-registered major ion, trace element and stable isotope time series reveal a strong inverse relationship between precipitation δ18O and atmospheric seasalt and dust concentrations over multi-decadal to millennial periods. Stable isotope fluctuations at Mt. Logan represent changes in moisture source region between dominantly cold North Pacific waters (more zonal circulation; enriched stable isotope values) and warmer subtropical waters (more meridional circulation; depleted stable isotope values). Consequently, Holocene millennial-scale stable isotope fluctuations in the Mt. Logan core have a larger amplitude (6-9‰ for δ18O) than those found in Greenland and Canadian Arctic ice core records (e.g. 2-3‰ for GISP2 δ18O). Over the instrumental period (1948-1998), higher Mt. Logan dust concentrations are strongly associated with enhanced springtime cyclonic activity over East Asian desert source regions, while Mt. Logan seasalt aerosol concentrations are related to the wintertime strength of the Aleutian Low pressure center (r < -0.45, p < 0.001). We use these calibrated proxy relationships to propose a conceptual model of North Pacific atmospheric circulation during the Holocene.

  7. MC64-ClustalWP2: a highly-parallel hybrid strategy to align multiple sequences in many-core architectures.

    Directory of Open Access Journals (Sweden)

    David Díaz

    We have developed MC64-ClustalWP2 as a new implementation of the Clustal W algorithm, integrating a novel parallelization strategy and significantly increasing the performance when aligning long sequences on architectures with many cores. It must be stressed that in such a process the detailed analysis of both the software and hardware features and peculiarities is of paramount importance for revealing key points to exploit and optimize the full potential of parallelism in many-core CPU systems. The new parallelization approach focuses on the most time-consuming stages of this algorithm. In particular, the so-called progressive alignment has drastically improved in performance, due to a fine-grained approach in which the forward and backward loops were unrolled and parallelized. Another key approach has been the implementation of the new algorithm on a hybrid-computing system, integrating both an Intel Xeon multi-core CPU and a Tilera Tile64 many-core card. A comparison with other Clustal W implementations reveals the high performance of the new algorithm and strategy on many-core CPU architectures, in a scenario where the sequences to align are relatively long (more than 10 kb) and, hence, many-core GPU hardware cannot be used. Thus, MC64-ClustalWP2 runs multiple alignments more than 18x faster than the original Clustal W algorithm, and more than 7x faster than the best x86 parallel implementation to date, and is publicly available through a web service. Besides, these developments have been deployed on cost-effective personal computers and should be useful for life-science researchers, including the identification of identities and differences for mutation/polymorphism analyses, biodiversity and evolutionary studies and for the development of molecular markers for paternity testing, germplasm management and protection, to assist breeding, illegal traffic control, fraud prevention and for the protection of the intellectual property (identification
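
    The fine-grained parallelization of the progressive-alignment loops mentioned above can be illustrated with a wavefront (anti-diagonal) scheme: all cells on one anti-diagonal of a dynamic-programming scoring matrix are mutually independent and can be filled concurrently. The C/OpenMP sketch below applies this idea to a Needleman-Wunsch-style matrix; it is a minimal illustration of the technique under arbitrary scoring constants, not the MC64-ClustalWP2 implementation.

    /* Wavefront-parallel fill of a Needleman-Wunsch-style scoring matrix.
     * Cells on one anti-diagonal are independent, so each diagonal is
     * filled in parallel while diagonals are processed in order.
     * Build: cc -fopenmp wavefront.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MATCH 1
    #define MISMATCH -1
    #define GAP -2

    static int max3(int a, int b, int c) {
        int m = a > b ? a : b;
        return m > c ? m : c;
    }

    int main(void) {
        const char *s1 = "GATTACA", *s2 = "GCATGCA";
        int n = (int)strlen(s1), m = (int)strlen(s2);
        int *H = malloc((size_t)(n + 1) * (m + 1) * sizeof(int));

        for (int i = 0; i <= n; i++) H[i * (m + 1)] = i * GAP;  /* first column */
        for (int j = 0; j <= m; j++) H[j] = j * GAP;            /* first row */

        for (int d = 2; d <= n + m; d++) {        /* anti-diagonal index */
            int lo = d - m < 1 ? 1 : d - m;
            int hi = d - 1 > n ? n : d - 1;
            #pragma omp parallel for
            for (int i = lo; i <= hi; i++) {
                int j = d - i;
                int sub = (s1[i - 1] == s2[j - 1]) ? MATCH : MISMATCH;
                H[i * (m + 1) + j] = max3(H[(i - 1) * (m + 1) + j - 1] + sub,
                                          H[(i - 1) * (m + 1) + j] + GAP,
                                          H[i * (m + 1) + j - 1] + GAP);
            }
        }
        printf("alignment score: %d\n", H[n * (m + 1) + m]);
        free(H);
        return 0;
    }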

  8. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael; Wu, Kesheng

    2010-09-30

    Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of cloud computing systems and has revealed a number of weaknesses in current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure: as long as a single VM is running it can make progress, whereas the whole MPI analysis job fails as soon as one MPI node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.

  9. Characterization of rapid climate changes through isotope analyses of ice and entrapped air in the NEEM ice core

    DEFF Research Database (Denmark)

    Guillevic, Myriam

    Greenland ice cores have revealed the occurrence of rapid climatic instabilities during the last glacial period, known as Dansgaard-Oeschger (DO) events, while marine cores from the North Atlantic have evidenced layers of ice-rafted debris deposited by iceberg melt, caused by the collapse...... four Greenland deep ice cores (GRIP, GISP2, NGRIP and NEEM) are investigated over a series of Dansgaard–Oeschger events (DO 8, 9 and 10). Combined with firn modeling, δ15N data allow us to quantify abrupt temperature increases for each drill site (1σ = 0.6°C for NEEM, GRIP and GISP2, 1.5°C for NGRIP...

  10. The Role of the Tropics in Last Glacial Abrupt Climate Change from a West Antarctic Ice Core

    Science.gov (United States)

    Jones, T. R.; White, J. W. C.; Steig, E. J.; Cuffey, K. M.; Vaughn, B. H.; Morris, V. A.; Vasileios, G.; Markle, B. R.; Schoenemann, S. W.

    2014-12-01

    Debate exists as to whether last glacial abrupt climate changes in Greenland, and associated changes in Antarctica, had a high-latitude or tropical trigger. An ultra high-resolution water isotope record from the West Antarctic Ice Sheet Divide (WAIS Divide) Ice Core Project has been developed with three key water isotope parameters that offer insight into this debate: δD, δ18O, and deuterium excess (dxs). δD and δ18O are a proxy for local temperature and regional atmospheric circulation, while dxs is primarily a proxy for sea surface temperature at the ice core's moisture source(s) (relative humidity and wind speed also play a role). We build on past studies that show West Antarctic climate is modulated by El Niño Southern Oscillation (ENSO) teleconnection mechanisms, which originate in the equatorial Pacific Ocean, to infer how past ENSO changes may have influenced abrupt climate change. Using frequency analysis of the water isotope data, we can reconstruct the amplitude of ENSO-scale climate oscillations in the 2-15 year range within temporal windows as low as 100 years. Our analysis uses a back diffusion model that estimates initial amplitudes before decay in the firn column. We combine δD, δ18O, and dxs frequency analysis to evaluate how climate variability at WAIS Divide is influenced by tropical climate forcing. Our results should ultimately offer insight into the role of the tropics in abrupt climate change.

  11. Horde: A framework for parallel programming on multi-core clusters%Horde:面向多核集群的并行编程框架

    Institute of Scientific and Technical Information of China (English)

    薛巍; 张凯; 陈康

    2011-01-01

    Parallel programs can fully exploit hardware computing capability and improve performance, but writing parallel programs for multi-core cluster environments is complex. This paper presents Horde, a parallel programming framework for multi-core clusters. Horde provides a set of easy-to-use message-passing interfaces and an event-driven programming model that help programmers express the parallelism latent in their algorithms and decouple the computational decomposition from the underlying hardware structure, thus reducing the complexity of writing parallel programs and allowing flexible mapping onto clusters with different underlying architectures while maintaining good performance. Horde also provides an effective task-object migration mechanism, which is the key technology for dynamic load balancing and online fault tolerance. Experiments on a 128-core cluster demonstrate that Horde executes parallel programs efficiently and achieves effective task-object migration.
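
    The event-driven, message-passing style described above can be illustrated with a toy dispatcher: handlers are registered per message type, and an event loop pops queued messages and invokes the matching handler, with tasks posting follow-up messages. This is a minimal single-process sketch with invented names, not the Horde API; in a real multi-core cluster framework the queue would be fed by inter-node message passing and drained by per-core worker threads.

    /* Toy event-driven message dispatcher (single process, invented names). */
    #include <stdio.h>

    typedef struct { int type; int payload; } Msg;
    typedef void (*Handler)(Msg);

    #define QCAP 64
    static Msg queue[QCAP];
    static int head = 0, tail = 0;
    static Handler handlers[2];              /* one handler per message type */

    static void post(int type, int payload) {
        if (tail < QCAP) {
            queue[tail].type = type;
            queue[tail].payload = payload;
            tail++;
        }
    }

    /* a task object reacting to messages; it may post follow-up events */
    static void on_compute(Msg m) {
        printf("computing block %d\n", m.payload);
        if (m.payload < 3) post(0, m.payload + 1);
        else               post(1, m.payload + 1);
    }

    static void on_done(Msg m) { printf("all %d blocks done\n", m.payload); }

    int main(void) {
        handlers[0] = on_compute;            /* register handlers by type */
        handlers[1] = on_done;
        post(0, 0);
        while (head < tail) {                /* event loop: pop and dispatch */
            Msg m = queue[head++];
            handlers[m.type](m);
        }
        return 0;
    }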

  12. Parallel and lineage-specific molecular adaptation to climate in boreal black spruce.

    Science.gov (United States)

    Prunier, Julien; Gérardi, Sébastien; Laroche, Jérôme; Beaulieu, Jean; Bousquet, Jean

    2012-09-01

    In response to selective pressure, adaptation may follow different genetic pathways throughout the natural range of a species due to historical differentiation in standing genetic variation. Using 41 populations of black spruce (Picea mariana), the objectives of this study were to identify adaptive genetic polymorphisms related to temperature and precipitation variation across the transcontinental range of the species, and to evaluate the potential influence of historical events on their geographic distribution. Population structure was first inferred using 50 control nuclear markers. Then, 47 candidate gene SNPs identified in previous genome scans were tested for relationships with climatic factors using an F(ST)-based outlier method and regressions between allele frequencies and climatic variations. Two main intraspecific lineages related to glacial vicariance were detected at the transcontinental scale. Within-lineage analyses of allele frequencies allowed the identification of 23 candidate SNPs significantly related to precipitation and/or temperature variation, among which seven were common to both lineages, eight were specific to the eastern lineage and eight were specific to the western lineage. The implication of these candidate SNPs in adaptive processes was further supported by gene functional annotations. Multiple lines of evidence indicated that the occurrence of lineage-specific adaptive SNPs was better explained by selection acting on historically differentiated gene pools than by differential selection due to heterogeneity of interacting environmental factors and pleiotropic effects. Taken together, these findings suggest that standing genetic variation of potentially adaptive nature has been modified by historical events, hence affecting the outcome of recent selection and leading to different adaptive routes between intraspecific lineages. © 2012 Blackwell Publishing Ltd.

  13. CORE

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Hansen, Jonas; Hundebøll, Martin

    2013-01-01

    different flows. Instead of keeping these approaches separate, we propose a protocol (CORE) that brings together these coding mechanisms. Our protocol uses random linear network coding (RLNC) for intra-session coding but allows nodes in the network to set up inter-session coding regions where flows...... intersect. Routes for unicast sessions are agnostic to other sessions and set up beforehand; CORE will then discover and exploit intersecting routes. Our approach allows the inter-session regions to leverage RLNC to compensate for losses or failures in the overhearing or transmitting process. Thus, we...... increase the benefits of XORing by exploiting the underlying RLNC structure of individual flows. This goes beyond providing additional reliability to each individual session and beyond exploiting coding opportunistically. Our numerical results show that CORE outperforms both forwarding and COPE...

  15. Climatic and environmental changes over the last millennium recorded in the Malan ice core from the northern Tibetan Plateau

    Institute of Scientific and Technical Information of China (English)

    WANG Ninglian; YAO Tandong; PU Jianchen; ZHANG Yongliang; SUN Weizhen

    2006-01-01

    In this paper, climatic and environmental changes since 1129 A.D. were reconstructed based on the Malan ice core from Hol Xil, the northern Tibetan Plateau. The record of δ18O in the Malan ice core indicated that the warm-season air temperature variations displayed a general increasing trend, that the 20th-century warming was within the range of natural climate variability, and that over the entire study period the warmest century was the 17th century, with the warmest decade being the 1610s. The "Medieval Warm Epoch" and "Little Ice Age" were also reflected in the ice core record. The dust ratio in the Malan ice core is a good proxy for dust event frequency. The 870-year record of the dust ratio showed that dust events occurred much more frequently in the 19th century. Comparing the variations of δ18O and the dust ratio, a strong negative correlation was found between them on time scales of 10-100 years. Analyses of all the climatic records from ice cores and tree rings from the northern Tibetan Plateau revealed that dust events were more frequent in cold and dry periods than in warm and wet periods.

  16. Characterization of rapid climate changes through isotope analyses of ice and entrapped air in the NEEM ice core

    DEFF Research Database (Denmark)

    Guillevic, Myriam

    Greenland ice cores have revealed the occurrence of rapid climatic instabilities during the last glacial period, known as Dansgaard-Oeschger (DO) events, while marine cores from the North Atlantic have evidenced layers of ice-rafted debris deposited by iceberg melt, caused by the collapse...... of Northern Hemisphere ice sheets, known as Heinrich events. The imprint of DO and Heinrich events is also recorded at mid to low latitudes in different archives of the Northern Hemisphere. A detailed multi-proxy study of the sequence of these rapid instabilities is essential for understanding the climate...... mechanisms at play. Recent analytical developments have made it possible to measure new paleoclimate proxies in Greenland ice cores. In this thesis we first contribute to these analytical developments by measuring the new innovative parameter 17O-excess at LSCE (Laboratoire des Sciences du Climat et de l...

  17. Parallels among the ``music scores'' of solar cycles, space weather and Earth's climate

    Science.gov (United States)

    Kolláth, Zoltán; Oláh, Katalin; van Driel-Gesztelyi, Lidia

    2012-07-01

    Solar variability and its effects on the physical variability of our (space) environment produces complex signals. In the indicators of solar activity at least four independent cyclic components can be identified, all of them with temporal variations in their timescales. Time-frequency distributions (see Kolláth & Oláh 2009) are perfect tools to disclose the ``music scores'' in these complex time series. Special features in the time-frequency distributions, like frequency splitting, or modulations on different timescales provide clues, which can reveal similar trends among different indices like sunspot numbers, interplanetary magnetic field strength in the Earth's neighborhood and climate data. On the pseudo-Wigner Distribution (PWD) the frequency splitting of all the three main components (the Gleissberg and Schwabe cycles, and an ~5.5 year signal originating from cycle asymmetry, i.e. the Waldmeier effect) can be identified as a ``bubble'' shaped structure after 1950. The same frequency splitting feature can also be found in the heliospheric magnetic field data and the microwave radio flux.

  18. Hybrid MPI/OpenMP parallelization of the explicit Volterra integral equation solver for multi-core computer architectures

    KAUST Repository

    Al Jarro, Ahmed

    2011-08-01

    A hybrid MPI/OpenMP scheme for efficiently parallelizing the explicit marching-on-in-time (MOT)-based solution of the time-domain volume (Volterra) integral equation (TD-VIE) is presented. The proposed scheme equally distributes tested field values and operations pertinent to the computation of tested fields among the nodes using the MPI standard; while the source field values are stored in all nodes. Within each node, OpenMP standard is used to further accelerate the computation of the tested fields. Numerical results demonstrate that the proposed parallelization scheme scales well for problems involving three million or more spatial discretization elements. © 2011 IEEE.
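
    The hybrid pattern this abstract describes (tested-field work partitioned across MPI ranks while source values are replicated on every node, with OpenMP threads sharing the per-rank loop) has the following general shape in C. The interaction kernel and problem sizes are placeholders, not the TD-VIE operator; build with something like mpicc -fopenmp.

    /* Hybrid MPI/OpenMP skeleton: ranks own disjoint blocks of "tested"
     * points, every rank holds all "source" values, and OpenMP threads
     * share the per-rank loop. The kernel is a placeholder. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NTEST 100000
    #define NSRC  1000

    int main(int argc, char **argv) {
        int provided, rank, size;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double *src = malloc(NSRC * sizeof(double));
        for (int s = 0; s < NSRC; s++) src[s] = 1.0 / (s + 1);  /* replicated sources */

        int lo = rank * NTEST / size, hi = (rank + 1) * NTEST / size;
        double local = 0.0;

        /* threads accumulate contributions of all sources to this
         * rank's share of the tested points */
        #pragma omp parallel for reduction(+:local)
        for (int t = lo; t < hi; t++)
            for (int s = 0; s < NSRC; s++)
                local += src[s] / (t + s + 1.0);                /* placeholder kernel */

        double total;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("checksum = %f\n", total);
        free(src);
        MPI_Finalize();
        return 0;
    }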

  19. 多核平台并行单源最短路径算法%Parallel Single-source Shortest Path Algorithm on Multi-core Platform

    Institute of Scientific and Technical Information of China (English)

    黄跃峰; 钟耳顺

    2012-01-01

    A multi-thread parallel Single-Source Shortest Path (SSSP) algorithm for multi-core platforms is proposed. It sorts vertices into buckets and uses a parallel strategy similar to that of the Δ-Stepping algorithm: slave threads relax the edges of the same bucket in parallel, while the master thread searches the buckets in sequence. Experimental results show that the algorithm solves SSSP on the USA road network in about 4 s, achieving a higher speedup than a serial algorithm implemented with the same code.
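
    A minimal sketch of this bucket-based scheme in C with OpenMP: the master loop scans buckets of width Δ in increasing order, and the edge relaxations of the current bucket's vertices are shared among threads. To keep the sketch short, bucket membership is recomputed from the distance array and updates are serialized with a critical section; a real Δ-Stepping implementation keeps explicit bucket lists and treats light and heavy edges separately.

    /* Simplified Delta-stepping-style SSSP on a small demo graph. */
    #include <stdio.h>
    #include <limits.h>

    #define NV 6
    #define DELTA 3
    #define INF INT_MAX

    static const int w[NV][NV] = {      /* adjacency matrix, 0 = no edge */
        {0, 2, 9, 0, 0, 0},
        {0, 0, 4, 3, 0, 0},
        {0, 0, 0, 0, 1, 0},
        {0, 0, 1, 0, 6, 7},
        {0, 0, 0, 0, 0, 2},
        {0, 0, 0, 0, 0, 0},
    };

    int main(void) {
        int dist[NV], settled[NV] = {0};
        for (int v = 0; v < NV; v++) dist[v] = INF;
        dist[0] = 0;                    /* source vertex */

        for (int b = 0; ; b++) {        /* master thread: buckets in order */
            int active, remaining = 0;
            do {                        /* drain bucket b (it may refill) */
                active = 0;
                for (int u = 0; u < NV; u++) {
                    if (settled[u] || dist[u] == INF || dist[u] / DELTA != b)
                        continue;
                    settled[u] = 1;
                    active = 1;
                    #pragma omp parallel for   /* slave threads relax edges */
                    for (int v = 0; v < NV; v++) {
                        if (w[u][v] == 0) continue;
                        int nd = dist[u] + w[u][v];
                        #pragma omp critical
                        if (nd < dist[v]) { dist[v] = nd; settled[v] = 0; }
                    }
                }
            } while (active);
            for (int v = 0; v < NV; v++)
                if (!settled[v] && dist[v] != INF) remaining = 1;
            if (!remaining) break;      /* no reachable vertex left */
        }
        for (int v = 0; v < NV; v++) printf("dist[%d] = %d\n", v, dist[v]);
        return 0;
    }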

  20. SWIFT: Using task-based parallelism, fully asynchronous communication, and graph partition-based domain decomposition for strong scaling on more than 100,000 cores

    CERN Document Server

    Schaller, Matthieu; Chalk, Aidan B G; Draper, Peter W

    2016-01-01

    We present a new open-source cosmological code, called SWIFT, designed to solve the equations of hydrodynamics using a particle-based approach (Smooth Particle Hydrodynamics) on hybrid shared/distributed-memory architectures. SWIFT was designed from the bottom up to provide excellent strong scaling on both commodity clusters (Tier-2 systems) and Top100 supercomputers (Tier-0 systems), without relying on architecture-specific features or specialized accelerator hardware. This performance is due to three main computational approaches: (1) Task-based parallelism for shared-memory parallelism, which provides fine-grained load balancing and thus strong scaling on large numbers of cores. (2) Graph-based domain decomposition, which uses the task graph to decompose the simulation domain such that the work, rather than just the data (as in most partitioning schemes), is equally distributed across all nodes. (3) Fully dynamic and asynchronous communication, in which communication is modelled as just anot...

  1. Experiences Using Hybrid MPI/OpenMP in the Real World: Parallelization of a 3D CFD Solver for Multi-Core Node Clusters

    Directory of Open Access Journals (Sweden)

    Gabriele Jost

    2010-01-01

    Today most systems in high-performance computing (HPC) feature a hierarchical hardware design: shared-memory nodes with several multi-core CPUs are connected via a network infrastructure. When parallelizing an application for these architectures it seems natural to employ a hierarchical programming model such as combining MPI and OpenMP. Nevertheless, there is the general lore that pure MPI outperforms the hybrid MPI/OpenMP approach. In this paper, we describe the hybrid MPI/OpenMP parallelization of IR3D (Incompressible Realistic 3-D code), a full-scale real-world application, which simulates the environmental effects on the evolution of vortices trailing behind control surfaces of underwater vehicles. We discuss performance, scalability and limitations of the pure MPI version of the code on a variety of hardware platforms and show how the hybrid approach can help to overcome certain limitations.

  2. Climate Driven Changes in the Formation Pathways of Atmospheric Sulfate: A Comparison from Bipolar Ice Core Records

    Science.gov (United States)

    Geng, L.; Alexander, B.

    2013-12-01

    Atmospheric sulfate aerosol affects radiative forcing of the atmosphere and thus climate. The formation pathways of sulfate, through gas-phase or aqueous phase oxidation of SO2, have implications for climate forcing because only sulfate produced in the gas-phase can nucleate new aerosol particles. Thus, constraining the formation pathways of sulfate in different climates is important to assess its climate impact. O-17 excess of sulfate (Δ17O(SO42-)) can be used to distinguish the formation pathways of atmospheric sulfate. Δ17O(SO42-) measured from an Antarctic (Vostok) ice core covering a full climate cycle suggested that gas-phase oxidation was more important in the last glacial period than that in the interglacial periods before and after, though its cause was not fully understood. We present new results of Δ17O(SO42-) measured from a Greenland (GISP2) ice core covering the last glacial period. Compared to the Vostok results, the GISP2 results display a similar Δ17O(SO42-) - temperature/climate relationship, but with much smaller Δ17O(SO42-) values in preindustrial Holocene (PIH). This difference seen in PIH is likely because aqueous-phase oxidation of SO2 by H2O2 is more important in the Northern Hemisphere than in the Southern Hemisphere, due to differences in cloud pH and oxidant abundances. Results from a new chemistry-climate model (ICECAP) suggest that the enhanced gas-phase oxidation in the glacial period in both hemispheres is due to 1) increased tropospheric OH production in mid- to high latitudes caused by enhanced UV-B radiation originating from reduced stratospheric ozone abundance and higher surface albedos over land and sea ice, and 2) reduced cloud fraction in the glacial climate. Implications for the global sulfur budget will be discussed.
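
    For reference, the O-17 excess used here follows the standard linear definition as the deviation of δ17O from the mass-dependent fractionation line (the logarithmic form is equivalent to first order):

        Δ17O(SO42-) ≈ δ17O - 0.52 · δ18O.

    Mass-dependent processes leave Δ17O near zero, so sulfate formed by gas-phase SO2 + OH oxidation carries little anomaly, whereas aqueous-phase oxidation by O3 or H2O2 transfers part of those oxidants' positive anomalies; this is what makes Δ17O(SO42-) a tracer of the formation pathway.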

  3. Coordinating earth observation data validation for RE-analysis for CLIMAte ServiceS - CORE-CLIMAX

    Science.gov (United States)

    Su, Zhongbo; Timmermans, Wim; Zeng, Yijian; Timmermans, Joris

    2014-05-01

    The purpose of the CORE-CLIMAX project is to coordinate the identification of available physical measurements which can be reconciled with previously existing data records to form long time series. In this way the project contributes to monitoring the climate system, detecting and attributing climate change, and assessing the impacts of, and supporting adaptation to, climate variability and change. As such the project will help to substantiate how COPERNICUS observations and products can contribute to climate change analyses, by establishing the extent to which COPERNICUS observations complement existing Climate Data Records. Since reanalyses are important for improving and synthesizing historical climate records, and for providing regional detail in a global context necessary for policy development and implementation, CORE-CLIMAX will identify the integration of Essential Climate Variables (ECVs) into the reanalysis chain by proposing a feedback mechanism ensuring that the results of the re-analysis process are appropriately reflected in updates of the ECVs. Together with inter-comparing different reanalyses, CORE-CLIMAX will eventually contribute to establishing a truly coupled European gridded re-analysis which incorporates full exchanges and interactions between atmosphere, ocean and land, including the hydrological cycle. One of the major objectives of the CORE-CLIMAX project is the identification of the capabilities of ongoing activities, contributing to the formulation of the Copernicus climate service (http://www.copernicus.eu/) and laying the observational basis for service activities. The project consortium has therefore developed the System Maturity Matrix (SMM): a metric to analyze the so-called maturity of the ECV production systems considering the scientific, engineering, information

  4. Holocene climate change in Newfoundland reconstructed using oxygen isotope analysis of lake sediment cores

    Science.gov (United States)

    Finkenbinder, Matthew S.; Abbott, Mark B.; Steinman, Byron A.

    2016-08-01

    Carbonate minerals that precipitate from open-basin lakes can provide archives of past variations in the oxygen isotopic composition of precipitation (δ18Oppt). Holocene δ18Oppt records from the circum-North Atlantic region exhibit large fluctuations during times of rapid ice sheet deglaciation, followed by more stable conditions when interglacial boundary conditions were achieved. However, the timing, magnitude, and climatic controls on century to millennial-scale variations in δ18Oppt in northeastern North America are unclear principally because of a dearth of paleo-proxy data. Here we present a lacustrine sediment oxygen isotope (δ18O) record spanning 10,200 to 1200 calendar years before present (cal yr BP) from Cheeseman Lake, a small, alkaline, hydrologically open lake basin located in west-central Newfoundland, Canada. Stable isotope data from regional lakes, rivers, and precipitation indicate that Cheeseman Lake water δ18O values are consistent with the isotopic composition of inflowing meteoric water. In light of the open-basin hydrology and relatively short water residence time of the lake, we interpret down-core variations in calcite oxygen isotope (δ18Ocal) values to primarily reflect changes in δ18Oppt and atmospheric temperature, although other factors such as changes in the seasonality of precipitation may be a minor influence. We conducted a series of climate sensitivity simulations with a lake hydrologic and isotope mass balance model to investigate theoretical lake water δ18O responses to climate change. Results from these experiments suggest that Cheeseman Lake δ18O values are primarily controlled by temperature and to a much lesser extent, the seasonality of precipitation. Increasing and more positive δ18Ocal values between 10,200 and 8000 cal yr BP are interpreted to reflect the waning influence of the Laurentide Ice Sheet on atmospheric circulation, warming temperatures, and rapidly changing surface ocean δ18O from the input of

  5. An ice core record of near-synchronous global climate changes at the Bølling transition

    Science.gov (United States)

    Rosen, Julia L.; Brook, Edward J.; Severinghaus, Jeffrey P.; Blunier, Thomas; Mitchell, Logan E.; Lee, James E.; Edwards, Jon S.; Gkinis, Vasileios

    2014-06-01

    The abrupt warming that initiated the Bølling-Allerød interstadial was the penultimate warming in a series of climate variations known as Dansgaard-Oeschger events. Despite the clear expression of this transition in numerous palaeoclimate records, the relative timing of climate shifts in different regions of the world and their causes are subject to debate. Here we explore the phasing of global climate change at the onset of the Bølling-Allerød using air preserved in bubbles in the North Greenland Eemian ice core. Specifically, we measured methane concentrations, which act as a proxy for low-latitude climate, and the 15N/14N ratio of N2, which reflects Greenland surface temperature, over the same interval of time. We use an atmospheric box model and a firn air model to account for potential uncertainties in the data, and find that changes in Greenland temperature and atmospheric methane emissions at the Bølling onset occurred essentially synchronously, with temperature leading by 4.5 years. We cannot exclude the possibility that tropical climate could lag changing methane concentrations by up to several decades, if the initial methane rise came from boreal sources alone. However, because even boreal methane-producing regions lie far from Greenland, we conclude that the mechanism that drove abrupt change at this time must be capable of rapidly transmitting climate changes across the globe.

  6. Sensitivity of interglacial Greenland temperature and δ18O: ice core data, orbital and increased CO2 climate simulations

    Directory of Open Access Journals (Sweden)

    D. Swingedouw

    2011-09-01

    The sensitivity of interglacial Greenland temperature to orbital and CO2 forcing is investigated using the NorthGRIP ice core data and coupled ocean-atmosphere IPSL-CM4 model simulations. These simulations were conducted in response to different interglacial orbital configurations, and to increased CO2 concentrations. These different forcings cause very distinct simulated seasonal and latitudinal temperature and water cycle changes, limiting the analogies between the last interglacial and future climate. However, the IPSL-CM4 model shows similar magnitudes of Arctic summer warming and climate feedbacks in response to 2 × CO2 and orbital forcing of the last interglacial period (126 000 years ago). The IPSL-CM4 model produces a remarkably linear relationship between TOA incoming summer solar radiation and simulated changes in summer and annual mean central Greenland temperature. This contrasts with the stable isotope record from the Greenland ice cores, showing a multi-millennial lagged response to summer insolation. During the early part of interglacials, the observed lags may be explained by ice sheet-ocean feedbacks linked with changes in ice sheet elevation and the impact of meltwater on ocean circulation, as investigated with sensitivity studies. A quantitative comparison between ice core data and climate simulations requires the stability of the stable isotope - temperature relationship to be explored. Atmospheric simulations including water stable isotopes have been conducted with the LMDZiso model under different boundary conditions. This set of simulations allows calculation of a temporal Greenland isotope-temperature slope (0.3-0.4‰ per °C) during warmer-than-present Arctic climates, in response to increased CO2, increased ocean temperature and orbital forcing. This temporal slope appears half as large as the modern spatial gradient and is consistent with other ice core estimates. It may, however, be model-dependent, as indicated by
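
    The stakes of the slope choice are easy to quantify (illustrative numbers; only the quoted slope range comes from the study). A reconstruction converts an isotopic anomaly to temperature as ΔT ≈ Δδ18O / α, so for an example anomaly of +2‰:

        ΔT ≈ 2‰ / (0.35‰ per °C) ≈ 5.7°C

    with the temporal slope α ≈ 0.35‰ per °C, versus about 2.9°C with a spatial gradient twice as large; applying the modern spatial slope where the temporal one is appropriate would therefore understate past Greenland warmth by roughly a factor of two.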

  7. Archive of Geosample Data and Information from the Ohio State University Byrd Polar and Climate Research Center (BPCRC) Sediment Core Repository

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Byrd Polar and Climate Research Center (BPCRC) Sediment Core Repository operated by the Ohio State University is a partner in the Index to Marine and Lacustrine...

  8. Dust-climate couplings over the past 800,000 years from the EPICA Dome C ice core.

    Science.gov (United States)

    Lambert, F; Delmonte, B; Petit, J R; Bigler, M; Kaufmann, P R; Hutterli, M A; Stocker, T F; Ruth, U; Steffensen, J P; Maggi, V

    2008-04-01

    Dust can affect the radiative balance of the atmosphere by absorbing or reflecting incoming solar radiation; it can also be a source of micronutrients, such as iron, to the ocean. It has been suggested that production, transport and deposition of dust is influenced by climatic changes on glacial-interglacial timescales. Here we present a high-resolution record of aeolian dust from the EPICA Dome C ice core in East Antarctica, which provides an undisturbed climate sequence over the past eight climatic cycles. We find that there is a significant correlation between dust flux and temperature records during glacial periods that is absent during interglacial periods. Our data suggest that dust flux is increasingly correlated with Antarctic temperature as the climate becomes colder. We interpret this as progressive coupling of the climates of Antarctic and lower latitudes. Limited changes in glacial-interglacial atmospheric transport time suggest that the sources and lifetime of dust are the main factors controlling the high glacial dust input. We propose that the observed approximately 25-fold increase in glacial dust flux over all eight glacial periods can be attributed to a strengthening of South American dust sources, together with a longer lifetime for atmospheric dust particles in the upper troposphere resulting from a reduced hydrological cycle during the ice ages.

  9. Parallel Structure Based on Multi-Core Computing for Radar System Simulation%基于多核计算的雷达并行仿真结构

    Institute of Scientific and Technical Information of China (English)

    王磊; 卢显良; 陈明燕; 张伟; 张顺生

    2014-01-01

    To address the bottleneck of slow software simulation of the echo generation and signal processing stages under a sequential simulation architecture, a multi-data-link computing model based on a shared-memory multi-core processor is proposed; simulation efficiency is improved by running multiple data links in parallel. Because radar events within the same scheduling interval are mutually independent, the model is described in terms of data division, task allocation, time synchronization, and load monitoring and measurement. A Pentium(R) Dual-Core E5200 CPU with 2 GB of memory was used to test a target scene with 20 batches. Simulation results demonstrate that, compared with serial simulation, the average data-frame processing time of the parallel model decreases by 37.5% and the frame-processing speedup curve shows good acceleration characteristics, greatly reducing radar system simulation time.
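
    The quoted 37.5% reduction in average frame-processing time translates directly into a speedup (simple arithmetic on the abstract's own figure):

        S = T_serial / T_parallel = 1 / (1 - 0.375) = 1.6,

    i.e. roughly 80% parallel efficiency on the dual-core test CPU.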

  10. Climatic and insolation control on the high-resolution total air content in the NGRIP ice core

    Science.gov (United States)

    Eicher, Olivier; Baumgartner, Matthias; Schilt, Adrian; Schmitt, Jochen; Schwander, Jakob; Stocker, Thomas F.; Fischer, Hubertus

    2016-10-01

    Because the total air content (TAC) of polar ice is directly affected by the atmospheric pressure and temperature, its record in polar ice cores was initially considered as a proxy for past ice sheet elevation changes. However, the Antarctic ice core TAC record is known to also contain an insolation signature, although the underlying physical mechanisms are still a matter of debate. Here we present a high-resolution TAC record over the whole North Greenland Ice Core Project ice core, covering the last 120 000 years, which independently supports an insolation signature in Greenland. Wavelet analysis reveals a clear precession and obliquity signal similar to previous findings on Antarctic TAC, with a different insolation history. In our high-resolution record we also find a decrease of 4-6 % (4-5 mL kg-1) in TAC as a response to Dansgaard-Oeschger events (DO events). TAC starts to decrease in parallel to increasing Greenland surface temperature and slightly before CH4 reacts to the warming but also shows a two-step decline that lasts for several centuries into the warm interstadial. The TAC response is larger than expected considering only changes in air density by local temperature and atmospheric pressure as a driver, pointing to a transient firnification response caused by the accumulation-induced increase in the load on the firn at bubble close-off, while temperature changes deeper in the firn are still small.

  11. Performance and advantages of a soft-core based parallel architecture for energy peak detection in the calorimeter Level 0 trigger for the NA62 experiment at CERN

    Science.gov (United States)

    Ammendola, R.; Barbanera, M.; Bizzarri, M.; Bonaiuto, V.; Ceccucci, A.; Checcucci, B.; De Simone, N.; Fantechi, R.; Federici, L.; Fucci, A.; Lupi, M.; Paoluzzi, G.; Papi, A.; Piccini, M.; Ryjov, V.; Salamon, A.; Salina, G.; Sargeni, F.; Venditti, S.

    2017-03-01

    The NA62 experiment at the CERN SPS has started its data-taking. Its aim is to measure the branching ratio of the ultra-rare decay K+ → π+ν ν̅. In this context, rejecting the background is a crucial task. One of the main backgrounds to the measurement is the K+ → π+π0 decay. In the 1-8.5 mrad decay region this background is rejected by the calorimetric trigger processor (Cal-L0). In this work we present the performance of a soft-core based parallel architecture built on FPGAs for energy peak reconstruction, as an alternative to an implementation based entirely on the VHDL language.

  12. Apparent climate-mediated loss and fragmentation of core habitat of the American pika in the Northern Sierra Nevada, California, USA

    Science.gov (United States)

    Joseph A. E. Stewart; David H. Wright; Katherine A. Heckman; Robert Guralnick

    2017-01-01

    Contemporary climate change has been widely documented as the apparent cause of range contraction at the edges of many species' distributions, but documentation of climate change as a cause of extirpation and fragmentation of the interior of a species' core habitat has been lacking. Here, we report the extirpation of the American pika (Ochotona princeps...

  13. Two ice-core delta O-18 records from Svalbard illustrating climate and sea-ice variability over the last 400 years

    NARCIS (Netherlands)

    Isaksson, E.; Kohler, J.; Pohjola, V.; Moore, J.; Igarashi, M.; Karlöf, L.; Martma, T.; Meijer, H.; Motoyama, H.; Vaikmäe, R.; van de Wal, R. S. W.

    2005-01-01

    Ice cores from the relatively low-lying ice caps in Svalbard have not been widely exploited in climatic studies owing to uncertainties about the effect of meltwater percolation. However, results from two new Svalbard ice cores, at Lomonosovfonna and Austfonna, have shown that with careful site selection

  14. Isotopic and chemical analyses of a temperate firn core from a Chinese alpine glacier and its regional climatic significance

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Mt. Yulong is the southernmost currently glacier-covered area in Eurasia, including China. There are 19 sub-tropical temperate glaciers on the mountain, controlled by the south-western monsoon climate. In the summer of 1999, a firn core, 10.10 m long, extending down to glacier ice, was recovered in the accumulation area of the largest glacier, Baishui No. 1. Periodic variations of climatic signals above 7.8 m depth were apparent, and net accumulation of four years was identified by the annual oscillations of isotopic and ionic composition. The boundaries of annual accumulation were confirmed by higher values of electrical conductivity and pH, and by dirty refreezing ice layers at the levels of summer surfaces. Calculated mean annual net accumulation from 1994/1995 to 1997/1998 was about 900 mm water equivalent. The amplitude of isotopic variations in the profile decreased with increasing depth, and isotopic homogenization occurred below 7.8 m as a result of meltwater percolation. Variations of δ18O above 7.8 m showed an approximate correlation with the winter climatic trend at Li Jiang Station, 25 km away. Concentrations of Ca2+ and Mg2+ were much higher than those of Na+ and K+, indicating that the air masses for precipitation were mainly from a continental source, and that the core material accumulated during the winter period. The close correspondence of Cl- and Na+ indicated their common origin. Very low concentrations of SO42- and NO3- suggest that pollution caused by human activities is quite low in the area. The mean annual net accumulation in the core and the estimated ablation indicate that the average annual precipitation above the glacier's equilibrium line is 2400-3150 mm, but this needs to be confirmed by long-term observation of mass balance.

  15. Millennial and sub-millennial scale climatic variations recorded in polar ice cores over the last glacial period

    DEFF Research Database (Denmark)

    Capron, E.; Landais, A.; Chappellaz, J.

    2010-01-01

    Since its discovery in Greenland ice cores, the millennial-scale climatic variability of the last glacial period has been increasingly documented at all latitudes, with studies focusing mainly on Marine Isotopic Stage 3 (MIS 3; 28-60 thousand years before present, hereafter ka) and characterized...... a succession of abrupt events associated with long Greenland InterStadial phases (GIS), enabling us to highlight a sub-millennial scale climatic variability depicted by (i) short-lived and abrupt warming events preceding some GIS (precursor-type events) and (ii) abrupt warming events at the end of some GIS...... (rebound-type events). The occurrence of these sub-millennial scale events is suggested to be driven by the insolation at high northern latitudes together with the internal forcing of ice sheets. Thanks to a recent NorthGRIP-EPICA Dronning Maud Land (EDML) common timescale over MIS 5, the bipolar sequence...

  16. Climatic records in a firn core from an Alpine temperate glacier on Mt. Yulong, southeastern part of the Tibetan Plateau

    Institute of Scientific and Technical Information of China (English)

    He Yuanqing; Yao Tandong; Cheng Guodong; Yang Meixue

    2001-01-01

    Mt. Yulong is the southernmost glacier-covered area in Eurasia, including China. There are 19 sub-tropical temperate glaciers on the mountain, controlled by the southwestern monsoon climate. In the summer of 1999, a firn core, 10.10 m long, extending down to glacier ice, was recovered in the accumulation area of the largest glacier, Baishui No. 1. Periodic variations of climatic signals above 7.8 m depth were apparent, and net accumulation of four years was identified by the annual oscillations of isotopic and ionic composition. The boundaries of annual accumulation were confirmed by higher values of electrical conductivity and pH, and by dirty refreezing ice layers at the levels of summer surfaces.

  17. Dispersion characteristics of THz surface plasmons in nonlinear graphene-based parallel-plate waveguide with Kerr-type core dielectric

    Science.gov (United States)

    Yarmoghaddam, Elahe; Rakheja, Shaloo

    2017-08-01

    We theoretically model the dispersion characteristics of surface plasmons in a graphene-based parallel-plate waveguide geometry using nonlinear Kerr-type core (inter-plate) dielectric. The optical nonlinearity of graphene in the terahertz band under high light intensity is specifically included in the analysis. By solving Maxwell's equations and applying appropriate boundary conditions, we show that the waveguide supports four guided plasmon modes, each of which can be categorized as either symmetric or anti-symmetric based on the electric field distribution in the structure. Of the four guided modes, two modes are similar in characteristics to the modes obtained in the structure with linear graphene coating, while the two new modes have distinct characteristics as a result of the nonlinearity of graphene. We note that the group velocity of one of the plasmon modes acquires a negative value under high light intensity. Additionally, the optical nonlinearity of the core dielectric leads to a significant enhancement in the localization length of various plasmon modes. The description of the intra-band optical conductivity of graphene incorporates effects of carrier scatterings due to charged impurities, resonant scatterers, and acoustic phonons at 300 K. The proposed structure offers flexibility to tune the waveguide characteristics and the mode index by changing light intensity and electrochemical potential in graphene for reconfigurable plasmonic devices.

  18. The Research of Multi-Core Parallel Program Evaluation%多核并行程序评测相关技术研究

    Institute of Scientific and Technical Information of China (English)

    龚溪东

    2011-01-01

    In competition settings, multi-core parallel program evaluation improves on traditional single-process evaluation, achieving higher efficiency while keeping the accuracy of the results within the required bounds. Taking into account the hardware environment and the current system load, the judging machine itself dynamically decides whether to release a child process from the process pool to participate in the evaluation, thereby making full use of multi-core computing resources and improving evaluation efficiency while preserving accuracy.
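
    The load-aware dispatch policy described above can be sketched in a few lines of POSIX C: before forking a child from the pool to judge the next submission, the parent checks the 1-minute load average against the number of online cores and reaps finished children while the host is saturated. This is an illustrative sketch with invented names, not the paper's evaluator (which also weighs accuracy constraints); getloadavg is a BSD extension available on Linux and macOS.

    /* Load-aware parallel dispatch of evaluation jobs (sketch). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define NJOBS 8

    static void evaluate(int job) {   /* placeholder: compile/run one submission */
        printf("pid %d evaluating submission %d\n", (int)getpid(), job);
        sleep(1);
    }

    int main(void) {
        long ncores = sysconf(_SC_NPROCESSORS_ONLN);
        int running = 0;
        for (int job = 0; job < NJOBS; job++) {
            double load[1];
            getloadavg(load, 1);
            /* spawn a new child only when there is spare capacity */
            while (running > 0 && (load[0] >= (double)ncores || running >= ncores)) {
                wait(NULL);
                running--;
                getloadavg(load, 1);
            }
            pid_t pid = fork();
            if (pid == 0) { evaluate(job); _exit(0); }
            running++;
        }
        while (running-- > 0) wait(NULL);   /* drain remaining children */
        return 0;
    }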

  19. A stochastic nonlinear oscillator model for glacial millennial-scale climate transitions derived from ice-core data

    Directory of Open Access Journals (Sweden)

    F. Kwasniok

    2012-11-01

    A stochastic Duffing-type oscillator model, i.e. noise-driven motion with inertia in a potential landscape, is considered for glacial millennial-scale climate transitions. The potential and noise parameters are estimated from a Greenland ice-core record using a nonlinear Kalman filter. For the period from 60 to 20 ky before present, a bistable potential with a deep well corresponding to a cold stadial state and a shallow well corresponding to a warm interstadial state is found. The system is in the strongly dissipative regime and can be very well approximated by an effective one-dimensional Langevin equation.
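
    As a generic sketch of this model class (parameter values below are illustrative, not the values estimated from the ice core), the fitted system is noise-driven motion with inertia, x'' = -γ x' - U'(x) + σ ξ(t), in a double-well potential U(x) = x⁴/4 - x²/2; it can be integrated with the Euler-Maruyama method:

    /* Euler-Maruyama integration of a noise-driven bistable oscillator. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    static double gauss(void) {               /* Box-Muller standard normal */
        double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
        return sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
    }

    int main(void) {
        double x = -1.0, v = 0.0;             /* start in the "stadial" well */
        const double dt = 0.01, gam = 1.0, sigma = 0.45;  /* illustrative */
        for (long n = 0; n < 200000; n++) {
            double force = -(x * x * x - x);  /* -U'(x); wells at x = +/-1 */
            v += (-gam * v + force) * dt + sigma * sqrt(dt) * gauss();
            x += v * dt;
            if (n % 1000 == 0) printf("%g %g\n", n * dt, x);
        }
        return 0;                             /* trajectory shows well-to-well jumps */
    }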

  20. The record of Miocene climatic events in AND-2A drill core (Antarctica): Insights from provenance analyses of basement clasts

    Science.gov (United States)

    Sandroni, Sonia; Talarico, Franco M.

    2011-01-01

    This paper presents the results of a detailed quantitative provenance investigation of gravel-size clasts occurring within the late Early to Late Miocene sedimentary glacimarine section recovered for the first time by the AND-2A core in the SW sector of the Ross Sea (southern McMurdo Sound, Antarctica). This period of time is of crucial interest, as it includes two of the major Cenozoic events in the global climatic evolution: the mid-Miocene climatic optimum and the middle Miocene climate transition. Petrographical and mineral chemistry data on basement clasts make it possible to identify two diagnostic clast assemblages, which clearly suggest two specific sectors of southern Victoria Land as the most likely sources: the Mulock-Skelton glacier and the Koettlitz-Blue glacier regions. Distribution patterns reveal strong fluctuations of the detritus source areas throughout the investigated core interval, variations which can be interpreted as the direct result of an evolving McMurdo Sound paleogeography during the late Early to Late Miocene. Consistently with sedimentological studies, gravel-fraction clast distribution patterns clearly testify that the Antarctic ice sheet experienced a dramatic contraction at ca. 17.35 ± 0.14 Ma (likely correlated with the onset of the climatic optimum), and show that the variations of the paleoenvironmental drivers characterising this period were able to exert deep transformations of the Antarctic ice sheet; they also reveal the methodology to be a powerful tool for the reconstruction of paleo-glacial-flow directions and paleogeographic scenarios.

  1. An extended climate archive from the Eastern Alpine ice coring site of Mt Ortles

    Science.gov (United States)

    Dreossi, Giuliano; Carturan, Luca; De Blasi, Fabrizio; Gabrielli, Paolo; Spolaor, Andrea; Gabrieli, Jacopo; Barbante, Carlo; Seppi, Roberto; Dinale, Roberto; Zanoner, Thomas; Stenni, Barbara; Dalla Fontana, Giancarlo; Thompson, Lonnie G.

    2016-04-01

    The oxygen and hydrogen stable isotope content of ice cores has been extensively used for temperature reconstruction. The most elevated glaciers of the Alpine area have been utilized for ice coring for more than four decades, but the scarcity of drilling projects in the Eastern Alps and of isotopic records covering a long time period for the entire Alpine region suggest that the paleoclimatic potential of this mountain area is still largely unexploited. In autumn 2011 four deep cores were drilled on Mt Ortles, South Tyrol, Italy, at 3859 m a.s.l. An extensive reconstructed temperature record for the Ortles summit, based on the surrounding meteorological station data, is available for the last 150 years, while an automatic weather station operated from 2011 to 2015 in the proximity of the drilling site. A preliminary age scale has been used to date the two cores for which the isotopic record is available (cores #1 and #2); an Ortles stacked record was created and compared with the temperature data and with other Alpine isotope records. The comparison among different ice core locations shows some similarities in the observed fluctuations, despite the considerable distance between the sites and the substantial geographical variability of temperature, precipitation and moisture source patterns characterizing the Alps.

  2. Comparison of St. Elias Ice Core Accumulation Records and Their Relationships to Climate Indices

    Science.gov (United States)

    Yalcin, K.; Osterberg, E.; Mayewski, P.; Wake, C.; Kreutz, K.; Holdsworth, G.

    2006-12-01

    Recently recovered ice cores from the St. Elias Mountains (Yukon), spanning an elevation range from three kilometers (Eclipse Icefield) to more than five kilometers (Mount Logan), offer a unique three-dimensional view of paleoclimate and environmental change in the North Pacific region. The record of net accumulation, as deduced from the reconstruction of observed annual layer thicknesses in these cores, offers a direct view of moisture flux at various altitudes in the St. Elias. However, a potentially large uncertainty in the representativeness of ice core accumulation records exists due to spatial variability in snow accumulation rates. The availability of multiple cores allows us to address this issue. Accumulation records from Eclipse (three cores) are highly reproducible, with 78% of the signal shared between the three cores. The proportion of shared signal between accumulation records from the Logan plateau (Prospector-Russell Col and Northwest Col) is lower (52%). In this work we compare the Eclipse and Logan accumulation records to each other to understand the spatial variability in net accumulation over time at different altitudes. The possible influence of dating errors on these results is explored using leads and lags of 1-2 years. We also compare our accumulation records to indices of atmospheric circulation (e.g., the strength of the Aleutian Low, the Pacific Decadal Oscillation, the El Niño-Southern Oscillation, the Arctic Oscillation) to quantify relationships between snow accumulation and large-scale atmospheric circulation features on time scales of variability ranging from years to centuries.

  3. Palynological evidence for vegetational and climatic changes from the HQ deep drilling core in Yunnan Province, China

    Institute of Scientific and Technical Information of China (English)

    XIAO XiaYun; SHEN Ji; WANG SuMin; XIAO HaiFeng; TONG GuoBang

    2007-01-01

    The high-resolution pollen study of a 737.72-m-long lake sediment core in the Heqing Basin of Yunnan Province shows that the vegetation and climate of the mountains around the Heqing Basin went through six obvious changes since 2.780 Ma B.P. Namely, Pinus forest occupied most mountains around the studied area and the structure of the vertical vegetational belt was simple between 2.780 and 2.729 Ma B.P., reflecting a relatively warm and dry climate. During 2.729-2.608 Ma B.P., the areas of cold-temperate conifer forest (CTCF) and Tsuga forest increased and the structure of the vertical vegetational belt became clear. Because the percentages of tropical and subtropical elements growing in low-altitude regions increased widely, we speculate that the increase of CTCF and Tsuga forest areas mainly resulted from strong uplift of the mountains, which provided upward expansion space and growing conditions for these plants. Thus, the climate of the low-altitude regions around the basin was relatively warm and humid. Between 2.608 and 1.553 Ma B.P., Pinus forest occupied most mountains around the studied area and the forest line of CTCF rose, which reflects a moderately warm-dry climate on the whole. During 1.553-0.876 Ma B.P., the structure of the vertical vegetational belt in the mountains around the studied area became complicated and the amplitude of vegetational belts shifting up and down enlarged, which implies that the amplitude of climatic change increased, the climatic associational features were more complex, and the climate was moderately cold during the majority of the stage. During 0.876-0.252 Ma B.P., all vertical vegetational belts existing at present occurred in the mountains around the studied area. The elements of each belt were more abundant and complex than earlier. At different periods in the stage the vertical vegetational belts expanded or shrank, alternating with each other. The amplitude of vegetational belts shifting up and down was the maximum in the whole section. This change

  5. Optimization of high-resolution continuous flow analysis for transient climate signals in ice cores.

    Science.gov (United States)

    Bigler, Matthias; Svensson, Anders; Kettner, Ernesto; Vallelonga, Paul; Nielsen, Maibritt E; Steffensen, Jørgen Peder

    2011-05-15

    Over the past two decades, continuous flow analysis (CFA) systems have been refined and widely used to measure aerosol constituents in polar and alpine ice cores at very high depth resolution. Here we present a newly designed system consisting of sodium, ammonium, dust particle, and electrolytic meltwater conductivity detection modules. The system is optimized for high-resolution determination of transient signals in thin layers of deep polar ice cores. Based on standard measurements and by comparing sections of early Holocene and glacial ice from Greenland, we find that the new system features a depth resolution in the ice of a few millimeters, which is considerably better than other CFA systems. Thus, the new system can resolve ice strata down to 10 mm thickness and has the potential to identify annual layers in both Greenland and Antarctic ice cores throughout the last glacial cycle.

  7. High-resolution Greenland Ice Core data show abrupt climate change happens in few years

    DEFF Research Database (Denmark)

    Steffensen, Jørgen Peder; Andersen, Katrine Krogh; Bigler, Matthias

    2008-01-01

    The last two abrupt warmings at the onset of our present warm interglacial period, interrupted by the Younger Dryas cooling event, were investigated at high temporal resolution from the North Greenland Ice Core Project ice core. The deuterium excess, a proxy of Greenland precipitation moisture......, reflecting the wetting of Asian deserts. A northern shift of the Intertropical Convergence Zone could be the trigger of these abrupt shifts of Northern Hemisphere atmospheric circulation, resulting in changes of 2 to 4 kelvin in Greenland moisture source temperature from one year to the next....

  9. Whole Planet Coupling from Climate to Core: Implications for the Evolution of Rocky Planets and their Prospects for Habitability

    Science.gov (United States)

    Foley, B. J.; Driscoll, P. E.

    2015-12-01

    Many factors have conspired to make Earth a home to complex life. Earth has abundant water due to a combination of factors, including orbital distance and the climate-regulating feedbacks of the long-term carbon cycle. Earth has plate tectonics, which is crucial for maintaining long-term carbon cycling and may have been an important energy source for the origin of life in seafloor hydrothermal systems. Earth also has a strong magnetic field that shields the atmosphere from the solar wind and the surface from high-energy particles. Synthesizing recent work on these topics shows that water, a temperate climate, plate tectonics, and a strong magnetic field are linked together through a series of negative feedbacks that stabilize the system over geologic timescales. Although the physical mechanism behind plate tectonics on Earth is still poorly understood, climate is thought to be important. In particular, temperate surface temperatures are likely necessary for plate tectonics because they allow for liquid water, which may be capable of significantly lowering lithospheric strength, increasing convective stresses in the lithosphere, and enhancing the effectiveness of "damage" processes such as grain-size reduction. Likewise, plate tectonics is probably crucial for maintaining a temperate climate on Earth through its role in facilitating the long-term carbon cycle, which regulates atmospheric CO2 levels. Therefore, the coupling between plate tectonics and climate is a feedback that is likely of first-order importance for the evolution of rocky planets. Finally, plate tectonics is thought to be important for driving the geodynamo: plate tectonics efficiently cools the mantle, leading to vigorous thermo-chemical convection in the outer core and dynamo action, whereas without plate tectonics inefficient mantle cooling beneath a stagnant lid may prevent a long-lived magnetic field. As the magnetic field shields a planet's atmosphere from the solar wind, the magnetic field may be important

  10. Climatic Cycles and Gradients of the El Niño Core Region in North Peru

    Directory of Open Access Journals (Sweden)

    Rütger Rollenbeck

    2015-01-01

    Climatic processes in northern Peru are evaluated from surface observations, independently of modelling studies. The region is characterized by regular oscillations, but episodic El Niño events introduce strong disturbances. Conceptual models based on observations, remote sensing data, and output of regional climate models are compared with data from a new station network. The results show regular oscillations of all climate variables on the annual and daily time scales. The daily cycle is probably associated with thermotidal forcings, causing gravity waves to emanate from the Andes Cordillera. The main factors are the interaction of large-scale pressure systems such as the Southeast Pacific High and the Intertropical Convergence Zone (ITCZ). There are also regional factors: an extended sea-breeze system, the barrier effect of the Andes, additional energy input from elevated radiation absorption at the mountain slopes, local wind systems, and variations of the sea surface temperature. At the coast, a low-level jet works as a thermodynamic energy sink, suppressing deep convection and supporting the aridity. These patterns are found in most of the station data, and the processes of this climate can generally be confirmed. The overturning of this stable system with the onset of El Niño conditions is possibly caused by disruptions of the regional circulation.

  11. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell, Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a must-know skill for developers, yet where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore processors and leverage their power in your programs. Sharing hands-on case studies and real-world examples, the

  12. GI-core polymer parallel optical waveguide with high-loss, carbon-black-doped cladding for extra low inter-channel crosstalk.

    Science.gov (United States)

    Uno, Hisashi; Ishigure, Takaaki

    2011-05-23

    Graded-index (GI) polymer parallel optical waveguides with a high-absorption, carbon-black-doped cladding are fabricated using the preform method in order to reduce the inter-channel crosstalk. The waveguides exhibit a lower inter-channel crosstalk than conventionally clad waveguides (~ -33.7 dB) while maintaining a low propagation loss (0.029 dB/cm). We characterize waveguides with different concentrations of carbon black in order to confirm the concentration (i.e., the absorption loss) required to keep the inter-channel crosstalk low enough. In addition, carbon-black-doped waveguides are fabricated directly on a substrate by means of a soft-lithography method. Crosstalk is sufficiently decreased despite the high scattering loss of the core material, while the insertion loss is not increased. Furthermore, we fabricate a waveguide with a high-scattering-loss cladding to confirm the origin of the low crosstalk in carbon-black-doped waveguides. We confirm that high scattering loss of the cladding is not as effective for crosstalk reduction as high absorption loss of the cladding.

  13. Crystal Structure of Jun a 1, the Major Cedar Pollen Allergen from Juniperus ashei, Reveals a Parallel β-Helical Core*

    Science.gov (United States)

    Czerwinski, Edmund W.; Midoro-Horiuti, Terumi; White, Mark A.; Brooks, Edward G.; Goldblum, Randall M.

    2008-01-01

    Pollen from cedar and cypress trees is a major cause of seasonal hypersensitivity in humans in several regions of the Northern Hemisphere. We report the first crystal structure of a cedar allergen, Jun a 1, from the pollen of the mountain cedar Juniperus ashei (Cupressaceae). The core of the structure consists primarily of a parallel β-helix, which is nearly identical to that found in the pectin/pectate lyases from several plant pathogenic microorganisms. Four IgE epitopes mapped to the surface of the protein are accessible to the solvent. The conserved vWiDH sequence is covered by the first 30 residues of the N terminus. The potential reactive arginine, analogous to the pectin/pectate lyase reaction site, is accessible to the solvent, but the substrate binding groove is blocked by a histidine-aspartate salt bridge, a glutamine, and an α-helix, all of which are unique to Jun a 1. These observations suggest that steric hindrance in Jun a 1 precludes enzyme activity. The overall results suggest that it is the structure of Jun a 1 that makes it a potent allergen. PMID:15539389

  14. Research and Application of a Multi-Core Image Processing Parallel Design Scheme

    Institute of Scientific and Technical Information of China (English)

    王成良; 谢克家; 刘昕

    2011-01-01

    In a multi-core computing environment, parallel image processing algorithms can greatly improve processing speed. However, existing parallel designs focus on specific algorithms such as edge detection and image projection and do not form a universal design scheme, which makes the approach difficult to extend. Based on an in-depth study of the parallel processing mechanisms of image algorithms and the features of multi-core architectures, this paper proposes an image processing parallel design scheme for multi-core computing environments with five steps: analysis, modeling, mapping, debugging and performance evaluation, and testing and release. Taking the design of a parallel image Fourier transform algorithm as an example, the effectiveness of the scheme is verified in single-core, dual-core, quad-core and eight-core computing environments. Experimental results show that the proposed multi-core parallel design scheme has good scalability and can extend the application space of image processing.
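
    As an illustration of the mapping step, here is a minimal OpenMP sketch of the kind of parallel Fourier-transform kernel the paper uses as its running example (the image size, scheduling choice and timing scaffolding are assumptions of the sketch, not taken from the paper); the rows of a separable 2-D DFT are independent, so the row loop parallelises directly:

```c
/* Build: cc -O2 -fopenmp dft_rows.c -lm */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define W 256
#define H 256

/* One pass of a separable 2-D DFT: independent 1-D DFTs over rows.
 * Rows share no data, so the loop needs no locking. */
static void dft_rows(const double *in, double *re, double *im)
{
    #pragma omp parallel for schedule(static)
    for (int y = 0; y < H; y++) {
        for (int k = 0; k < W; k++) {
            double sr = 0.0, si = 0.0;
            for (int x = 0; x < W; x++) {
                double ang = -2.0 * M_PI * k * x / W;
                sr += in[y * W + x] * cos(ang);
                si += in[y * W + x] * sin(ang);
            }
            re[y * W + k] = sr;
            im[y * W + k] = si;
        }
    }
}

int main(void)
{
    double *img = malloc(sizeof(double) * W * H);
    double *re  = malloc(sizeof(double) * W * H);
    double *im  = malloc(sizeof(double) * W * H);
    for (int i = 0; i < W * H; i++) img[i] = (double)(i % 17);  /* synthetic image */

    double t0 = omp_get_wtime();
    dft_rows(img, re, im);
    printf("row pass on %d threads: %.3f s\n",
           omp_get_max_threads(), omp_get_wtime() - t0);
    free(img); free(re); free(im);
    return 0;
}
```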

  15. Ice cores

    DEFF Research Database (Denmark)

    Svensson, Anders

    2014-01-01

    Ice cores from Antarctica, from Greenland, and from a number of smaller glaciers around the world yield a wealth of information on past climates and environments. Ice cores offer unique records on past temperatures, atmospheric composition (including greenhouse gases), volcanism, solar activity, dustiness, and biomass burning, among others. In Antarctica, ice cores extend back more than 800,000 years before present (Jouzel et al. 2007), whereas Greenland ice cores cover the last 130,000 years.

  17. Changes in Deep Ocean Circulation During Times of High Climate Variability from Nd Isotopes in South Atlantic Cores

    Science.gov (United States)

    Piotrowski, A. M.; Goldstein, S. L.; Hemming, S. R.; Zylberberg, D. R.

    2003-12-01

    The transition between marine isotope stages (MIS) 5a and 4 appears in the stacked benthic foraminiferal d18O SPECMAP record as a gradual increase in ice volume. In contrast, the transition occurs in the Greenland ice core d18O records with two well-developed interstadial events (I19 and I20), which are the first Dansgaard-Oeschger events of the last ice age. The MIS 5b/5a transition appears as a much more rapid warming in both the Greenland ice and benthic d18O records. Recent work (Lehmann et al. 2002, Chapman et al. 1999) indicates that climate variability in MIS 5, as indicated in the Greenland ice record, was closely interconnected with iceberg discharges, surface temperature changes, and deep ocean circulation in the North Atlantic. In order to determine the response of deep ocean circulation to climate changes from late in MIS 5 to full glacial MIS 4, we have measured Nd isotope ratios from the Fe-Mn portion of core TNO57-21 from the Cape Basin in the South Atlantic. Nd isotopes, unlike nutrient water mass proxies, are not affected by biological fractionation, and reflect the strength of the North Atlantic Deep Water (NADW) signal in the seawater above the core site. Results from cores TNO57-21 and RC11-83 (also from the Cape Basin) indicate that the NADW export to the Southern Ocean has varied on time scales reflecting glacial-interglacial cycles through MIS 4 (Rutberg et al. 2000) and during interstadial events through MIS 3 (Piotrowski et al., Fall AGU), and was stronger and weaker during warmer and colder Northern Hemisphere climate intervals, respectively. The extension of the Nd isotope record to MIS 5a and 5b indicates an increased NADW signal during MIS 5; therefore the long-term pattern of strong and weak NADW export during warm and cold periods persists beyond the last ice age. The Nd isotope pattern during MIS 4 through 5b generally corresponds to the benthic foraminiferal d13C record from Cape Basin cores (Ninnemann et al. 1999), indicating that the

  18. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of available processor cores compared to its sequential counterpart, thereby taking full advantage of multicore parallelism. The parallel buffer tree is a search tree data structure that supports the batched parallel processing of a sequence of N insertions, deletions, membership queries, and range queries
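
    The parallel version is involved, but the buffering idea it extends from Arge's sequential structure fits in a short sketch: internal nodes queue incoming operations in a buffer and push them one level down only when a full buffer-load has accumulated, amortising the cost of moving data. The in-memory toy below is sequential, not the PEM version, and its fixed tree shape, fan-out, capacities and names are all invented for the example:

```c
#include <stdio.h>
#include <stdlib.h>

#define FANOUT  4
#define BUFCAP  8      /* plays the role of the block size B in the I/O model */
#define LEAFCAP 64

typedef struct { int key; int is_delete; } Op;

typedef struct Node {
    int is_leaf;
    int splits[FANOUT - 1];        /* routing keys: child i takes key < splits[i] */
    struct Node *child[FANOUT];
    Op  buf[BUFCAP];               /* lazily buffered, un-applied operations */
    int nbuf;
    int keys[LEAFCAP], nkeys;      /* naive leaf storage */
} Node;

static void flush(Node *n);

/* Operations are not applied eagerly; they queue in the node's buffer and
 * move down one level only when a whole buffer-load has gathered. */
static void push_op(Node *n, Op op)
{
    if (n->is_leaf) {
        if (op.is_delete) {
            for (int i = 0; i < n->nkeys; i++)
                if (n->keys[i] == op.key) { n->keys[i] = n->keys[--n->nkeys]; break; }
        } else if (n->nkeys < LEAFCAP) {
            n->keys[n->nkeys++] = op.key;
        }
        return;
    }
    n->buf[n->nbuf++] = op;
    if (n->nbuf == BUFCAP) flush(n);
}

static void flush(Node *n)
{
    for (int i = 0; i < n->nbuf; i++) {     /* route each buffered op to its child */
        int c = 0;
        while (c < FANOUT - 1 && n->buf[i].key >= n->splits[c]) c++;
        push_op(n->child[c], n->buf[i]);
    }
    n->nbuf = 0;
}

int main(void)
{
    Node root = { .is_leaf = 0, .splits = {25, 50, 75} };
    Node leaves[FANOUT] = {{ .is_leaf = 1 }, { .is_leaf = 1 },
                           { .is_leaf = 1 }, { .is_leaf = 1 }};
    for (int i = 0; i < FANOUT; i++) root.child[i] = &leaves[i];

    for (int k = 0; k < 100; k++) push_op(&root, (Op){ k % 100, 0 });
    flush(&root);                           /* drain pending ops before inspecting */
    for (int i = 0; i < FANOUT; i++)
        printf("leaf %d holds %d keys\n", i, leaves[i].nkeys);
    return 0;
}
```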

  19. Paleo-Climate and Glaciological Reconstruction in Central Asia through the Collection and Analysis of Ice Cores and Instrumental Data from the Tien Shan

    Energy Technology Data Exchange (ETDEWEB)

    Vladimir Aizen; Donald Bren; Karl Kreutz; Cameron Wake

    2001-05-30

    While the majority of ice core investigations have been undertaken in the polar regions, a few ice cores recovered from carefully selected high-altitude, mid-to-low-latitude glaciers have also provided valuable records of climate variability in these regions. A regional array of high-resolution, multi-parameter ice core records developed from temperate and tropical regions of the globe can be used to document regional climate and environmental change in the latitudes which are home to the vast majority of the Earth's human population. In addition, these records can be directly compared with ice core records available from the polar regions and can therefore expand our understanding of the inter-hemispheric dynamics of past climate changes. The main objectives of our paleoclimate research in the Tien Shan mountains of central Asia combine the development of detailed paleoenvironmental records via the physical and chemical analysis of ice cores with the analysis of modern meteorological and hydrological data. The first step in this research was the collection of ice cores from the accumulation zone of the Inylchek Glacier and the collection of meteorological data from a variety of stations throughout the Tien Shan. The research effort described in this report was part of a collaborative effort with the United States Geological Survey's (USGS) Global Environmental Research Program, which began studying radionuclide deposition in mid-latitude glaciers in 1995.

  20. Multi-scale analysis on last millennium climate variations in Greenland by its ice core oxygen isotope

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The empirical mode decomposition method is used for analyzing the paleoclimate proxy δ18O from the Greenland GISP2 ice core. The results show that the millennial climate change trends in Greenland record the Medieval Warm Period (MWP) from 860 AD to 1350 AD, lasting about 490 years, and the Little Ice Age (LIA) from 1350 AD to 1920 AD, lasting about 570 years. Within these events, secondary cooling-warming variations occurred. The multi-scale oscillations have quasi-periods of 3, 6.5, 12, 24, 49, 96, 213 and 468 years, and are affected not only by ENSO but also by solar activity. The oscillations of the intrinsic mode functions IMF7 and IMF8, and their trend, change markedly at 1350 AD, which is considered the key stage of the transformation between the MWP and the LIA. The results give more detailed changes, and their stages, of millennial climate change in the high-latitude areas of the Northern Hemisphere.
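
    In the standard notation of empirical mode decomposition (a generic statement of the method, not taken from the paper), the proxy series is written as a finite sum of intrinsic mode functions plus a residual trend:

```latex
\delta^{18}\mathrm{O}(t) \;=\; \sum_{j=1}^{n} \mathrm{IMF}_j(t) \;+\; r_n(t)
```

    Each IMF_j carries one of the quasi-periods listed above (from about 3 years up to about 468 years), while the residual r_n(t) holds the millennial trend on which the MWP-to-LIA transition around 1350 AD appears.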

  1. Two Extreme Climate Events of the Last 1000 Years Recorded in Himalayan and Andean Ice Cores: Impacts on Humans

    Science.gov (United States)

    Thompson, L. G.; Mosley-Thompson, E. S.; Davis, M. E.; Kenny, D. V.; Lin, P.

    2013-12-01

    In the last few decades numerous studies have linked pandemic influenza, cholera, malaria, and viral pneumonia, as well as droughts, famines and global crises, to the El Niño-Southern Oscillation (ENSO). Two annually resolved ice core records, one from Dasuopu Glacier in the Himalaya and one from the Quelccaya Ice Cap in the tropical Peruvian Andes, provide an opportunity to investigate these relationships on opposite sides of the Pacific Basin for the last 1000 years. The Dasuopu record provides an annual history from 1440 to 1997 CE and a decadally resolved record from 1000 to 1440 CE, while the Quelccaya ice core provides annual resolution over the last 1000 years. Major ENSO events are often recorded in the oxygen isotope, insoluble dust, and chemical records from these cores. Here we investigate outbreaks of diseases, famines and global crises during two of the largest events recorded in the chemistry of these cores, marked by particularly large peaks in the concentrations of chloride (Cl-) and fluoride (F-). One event is centered on 1789 to 1800 CE and the second begins abruptly in 1345 and tapers off after 1360 CE. These Cl- and F- peaks represent major droughts and reflect the abundance of continental atmospheric dust, derived in part from dried lake beds in drought-stricken regions upwind of the core sites. For Dasuopu the likely sources are in India, while for Quelccaya the sources would be the Andean Altiplano. Both regions are subject to drought conditions during the El Niño phase of the ENSO cycle. These two events persist longer (10 to 15 years) than today's typical ENSO events in the Pacific Ocean Basin. The 1789 to 1800 CE event was associated with a very strong El Niño event and was coincident with the Boji Bara famine, resulting from extended droughts that led to over 600,000 deaths in central India by 1792. Similarly extensive droughts are documented in Central and South America. Likewise, the 1345 to 1360 CE event, although poorly documented

  2. Stable water isotopes of precipitation and firn cores from the northern Antarctic Peninsula region as a proxy for climate reconstruction

    Directory of Open Access Journals (Sweden)

    F. Fernandoy

    2012-03-01

    In order to investigate the climate variability in the northern Antarctic Peninsula region, this paper focuses on the relationship between the stable isotope content of precipitation and firn and the main meteorological variables (air temperature, relative humidity, sea surface temperature, and sea ice extent). Between 2008 and 2010, we collected precipitation samples and retrieved firn cores from several key sites in this region. We conclude that the deuterium excess oscillation represents a robust indicator of the meteorological variability on a seasonal to sub-seasonal scale. Low absolute deuterium excess values and the synchronous variation of both deuterium excess and air temperature imply that the evaporation of moisture occurs in the adjacent Southern Ocean. The δ18O-air temperature relationship is complicated and significant only at a (multi)seasonal scale. Backward trajectory calculations show that air parcels arriving at the region during precipitation events predominantly originate over the South Pacific Ocean and the Bellingshausen Sea. These investigations will serve as a calibration for ongoing and future research in the area, suggesting that appropriate locations for future ice core research are located above 600 m a.s.l. We selected the Plateau Laclavere, Antarctic Peninsula, as the most promising site for a deeper drilling campaign.

  3. Accumulation reconstruction and water isotope analysis for 1736-1997 of an ice core from the Ushkovsky volcano, Kamchatka, and their relationships to North Pacific climate records

    Science.gov (United States)

    Sato, T.; Shiraiwa, T.; Greve, R.; Seddik, H.; Edelmann, E.; Zwinger, T.

    2014-02-01

    An ice core was retrieved in June 1998 from the Gorshkov crater glacier at the top of the Ushkovsky volcano in central Kamchatka. This ice core is one of only two recovered from Kamchatka so far, thus filling a gap in the regional instrumental climate network. Hydrogen isotope (δD) analyses and past accumulation reconstructions were conducted for the top 140.7 m of the core, spanning 1736-1997. Two accumulation reconstruction methods were developed and applied with the Salamatin and the Elmer/Ice firn-ice dynamics models, revealing a slightly increasing or nearly stable trend, respectively. Wavelet analysis shows that the ice core records have significant decadal and multi-decadal variabilities at different times. Around 1880 the multi-decadal variability of δD was lost and its average value increased by 6‰. The multi-decadal variability of reconstructed accumulation rates changed at around 1850. Reconstructed accumulation variations agree with the ages of moraines in Kamchatka. Ice core signals were significantly correlated with North Pacific sea surface temperature (SST) and surface (2 m) air temperature. δD correlates with the North Pacific Gyre Oscillation (NPGO) index after the climate regime shift in 1976/1977, but not before. Our findings therefore imply that the ice core record contains a variety of information on local, regional and large-scale climate variability in the North Pacific region. Understanding all the detailed mechanisms behind the time-dependent connections between these climate patterns is challenging and requires further efforts towards multi-proxy analysis and climate modelling.

  4. Event layers in the Japanese Lake Suigetsu 'SG06' sediment core: description, interpretation and climatic implications

    Science.gov (United States)

    Schlolaut, Gordon; Brauer, Achim; Marshall, Michael H.; Nakagawa, Takeshi; Staff, Richard A.; Bronk Ramsey, Christopher; Lamb, Henry F.; Bryant, Charlotte L.; Naumann, Rudolf; Dulski, Peter; Brock, Fiona; Yokoyama, Yusuke; Tada, Ryuji; Haraguchi, Tsuyoshi

    2014-01-01

    Event layers in lake sediments are indicators of past extreme events, mostly the results of floods or earthquakes. Detailed characterisation of the layers allows the discrimination of the sedimentation processes involved, such as surface runoff, landslides or subaqueous slope failures. These processes can then be interpreted in terms of their triggering mechanisms. Here we present a 40 ka event layer chronology from Lake Suigetsu, Japan. The event layers were characterised using a multi-proxy approach, employing light microscopy and μXRF for microfacies analysis. The vast majority of event layers in Lake Suigetsu was produced by flood events (362 out of 369), allowing the construction of the first long-term, quantitative (with respect to recurrence) and well dated flood chronology from the region. The flood layer frequency shows a high variability over the last 40 ka, and it appears that extreme precipitation events were decoupled from the average long-term precipitation. For instance, the flood layer frequency is highest in the Glacial at around 25 ka BP, at which time Japan was experiencing a generally cold and dry climate. Other cold episodes, such as Heinrich Event 1 or the Late Glacial stadial, show a low flood layer frequency. Both observations together exclude a simple, straightforward relationship with average precipitation and temperature. We argue that, especially during Glacial times, changes in typhoon genesis/typhoon tracks are the most likely control on the flood layer frequency, rather than changes in the monsoon front or snow melts. Spectral analysis of the flood chronology revealed periodic variations on centennial and millennial time scales, with 220 yr, 450 yr and a 2000 yr cyclicity most pronounced. However, the flood layer frequency appears to have not only been influenced by climate changes, but also by changes in erosion rates due to, for instance, earthquakes.

  5. Ice Cores

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Records of past temperature, precipitation, atmospheric trace gases, and other aspects of climate and environment derived from ice cores drilled on glaciers and ice...

  6. SG06, a fully continuous and varved sediment core from Lake Suigetsu, Japan: stratigraphy and potential for improving the radiocarbon calibration model and understanding of late Quaternary climate changes

    Science.gov (United States)

    Nakagawa, Takeshi; Gotanda, Katsuya; Haraguchi, Tsuyoshi; Danhara, Toru; Yonenobu, Hitoshi; Brauer, Achim; Yokoyama, Yusuke; Tada, Ryuji; Takemura, Keiji; Staff, Richard A.; Payne, Rebecca; Bronk Ramsey, Christopher; Bryant, Charlotte; Brock, Fiona; Schlolaut, Gordon; Marshall, Michael; Tarasov, Pavel; Lamb, Henry; Suigetsu 2006 Project Members

    2012-03-01

    The high potential of the varved sediments of Lake Suigetsu, central Japan, to provide a purely terrestrial radiocarbon calibration model and a chronology of palaeoclimatic changes has been widely recognised for the last two decades. However, this potential has not been fully realised since the only available long sediment core from the lake ('SG93') was extracted from a single bore hole and was therefore interrupted by gaps of unknown duration between successive core sections. In the summer of 2006, a new sediment core ('SG06') was recovered from the lake. Four separate boreholes were drilled and the parallel sets of cores recovered were found to overlap completely, without gaps between segments. This new record provides the ability to test existing atmospheric radiocarbon calibration models, as well as to assess the scale of inter-regional leads and lags in palaeoclimatic changes over the last Glacial-Interglacial cycle. Multi-disciplinary analyses from SG06 are still ongoing, but a reliable description of the sedimentary sequence needs to be provided to the wider science community before major outputs from the project are released, thereby allowing fully-informed critical evaluation of all subsequent releases of data based on the SG06 record. In this paper, we report key litho-stratigraphic information concerning the SG06 sediment core, highlighting changes in the clarity of annual laminations (varves) with depth, and possible implications for the mechanism of the climate change. We also discuss the potential of the SG06 record to meet the fundamental goals of the INQUA-INTIMATE project.

  7. Modelling the regional climate and isotopic composition of Svalbard precipitation using REMOiso : a comparison with available GNIP and ice core data

    NARCIS (Netherlands)

    Divine, D. V.; Sjolte, J.; Isaksson, E.; Meijer, H. A. J.; van de Wal, R. S. W.; Martma, T.; Pohjola, V.; Sturm, C.; Godtliebsen, F.

    2011-01-01

    Simulations of a regional (approx. 50 km resolution) circulation model REMOiso with embedded stable water isotope module covering the period 1958-2001 are compared with the two instrumental climate and four isotope series (d18O) from western Svalbard. We examine the data from ice cores drilled on Sv

  8. Resolving climate change in the period 15-23 ka in Greenland ice cores: A new application of spectral trend analysis

    NARCIS (Netherlands)

    de Jong, M.G.G.; Nio, D.S.; Böhm, A.R.; Seijmonsbergen, H.C.; de Graaff, L.W.S.

    2009-01-01

    Northern Hemisphere climate history through and following the Last Glacial Maximum is recorded in detail in ice cores from Greenland. However, the period between Greenland Interstadials 1 and 2 (15-23 ka), i.e. the period of deglaciation following the last major glaciation, has been difficult to res

  11. Multi-Core Parallel Programming Based on Communication on the Win32 Platform

    Institute of Scientific and Technical Information of China (English)

    李青; 徐璐娜

    2014-01-01

    With the development of computer hardware, multi-core parallel computing appears ever more frequently in computer software and its application fields. Current multi-core programming adopts thread-level parallel models; existing multi-threaded parallel programming models mainly comprise thread libraries, directive models and tasking models. This paper proposes a communication-based method, similar to the MPI parallel programming model, for parallel programming on the Win32 platform, and implements the MTI parallel programming model on this basis. Execution results for several typical tests written with MTI show that MTI is effective and easy to use.
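
    The record does not reproduce the MTI interface itself, so the sketch below only illustrates the general approach it describes: MPI-style blocking send/receive between Win32 threads, built from a critical section and condition variables (the Chan type and all names are invented for the example):

```c
/* Build (MinGW): gcc chan.c -o chan  (condition variables require Vista or later) */
#include <windows.h>
#include <stdio.h>

typedef struct {               /* one-slot blocking channel between two threads */
    CRITICAL_SECTION cs;
    CONDITION_VARIABLE can_send, can_recv;
    int value, full;
} Chan;

static void chan_init(Chan *c)
{
    InitializeCriticalSection(&c->cs);
    InitializeConditionVariable(&c->can_send);
    InitializeConditionVariable(&c->can_recv);
    c->full = 0;
}

static void chan_send(Chan *c, int v)      /* blocks until the slot is free */
{
    EnterCriticalSection(&c->cs);
    while (c->full) SleepConditionVariableCS(&c->can_send, &c->cs, INFINITE);
    c->value = v; c->full = 1;
    LeaveCriticalSection(&c->cs);
    WakeConditionVariable(&c->can_recv);
}

static int chan_recv(Chan *c)              /* blocks until a value arrives */
{
    EnterCriticalSection(&c->cs);
    while (!c->full) SleepConditionVariableCS(&c->can_recv, &c->cs, INFINITE);
    int v = c->value; c->full = 0;
    LeaveCriticalSection(&c->cs);
    WakeConditionVariable(&c->can_send);
    return v;
}

static Chan ch;

static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    for (int i = 0; i < 5; i++)
        printf("worker got %d\n", chan_recv(&ch));
    return 0;
}

int main(void)
{
    chan_init(&ch);
    HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    for (int i = 0; i < 5; i++) chan_send(&ch, i * i);  /* MPI_Send-like calls */
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}
```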

  12. Correlation between high-resolution climate records from a Nanjing stalagmite and GRIP ice core during the last glaciation

    Institute of Scientific and Technical Information of China (English)

    WANG Yongjin

    2001-01-01

  13. Unveiling exceptional Baltic bog ecohydrology, autogenic succession and climate change during the last 2000 years in CE Europe using replicate cores, multi-proxy data and functional traits of testate amoebae

    Science.gov (United States)

    Gałka, Mariusz; Tobolski, Kazimierz; Lamentowicz, Łukasz; Ersek, Vasile; Jassey, Vincent E. J.; van der Knaap, Willem O.; Lamentowicz, Mariusz

    2017-01-01

    We present the results of high-resolution, multi-proxy palaeoecological investigations of two parallel peat cores from the Baltic raised bog Mechacz Wielki in NE Poland. We aim to evaluate the role of regional climate and of the autogenic processes of the raised bog itself in driving the vegetation and hydrology dynamics. Based on partly synchronous changes in Sphagnum communities in the two study cores, we suggest that extrinsic factors (climate) played an important role as a driver of mire development during the bog stage (500-2012 CE). Using a testate amoeba transfer function, we found exceptionally stable hydrological conditions during the last 2000 years, with a relatively high water table and a lack of local fire events, that allowed rapid peat accumulation (2.75 mm/year) in the bog. Further, the strong correlation between pH and the community-weighted mean of testate amoeba traits suggests that variables other than water-table depth play a role in driving microbial properties under stable hydrological conditions. There is a difference in the hydrological dynamics of bogs between NW and NE Poland until ca 1500 CE, after which the water table reconstructions show more similarities. Our results illustrate how various functional traits relate to different environmental variables in a range of trophic and hydrological scenarios on long time scales. Moreover, our data suggest a common regional climatic forcing in Mechacz Wielki, Gązwa and Kontolanrahka. Though it may still be too early to attempt a regional summary of wetness change in the southern Baltic region, this study is a further step towards understanding the long-term peatland palaeohydrology of NE Europe.

  14. Roosevelt Island Climate Evolution Project (RICE): A 65 Kyr ice core record of black carbon aerosol deposition to the Ross Ice Shelf, West Antarctica.

    Science.gov (United States)

    Edwards, Ross; Bertler, Nancy; Tuohy, Andrea; Neff, Peter; Proemse, Bernedette; Wang, Feiteng; Goodwin, Ian; Hogan, Chad

    2015-04-01

    Emitted by fires, black carbon (rBC) aerosols perturb the atmosphere's physical and chemical properties and are climatically active. Sedimentary charcoal and other paleo-fire records suggest that rBC emissions have varied significantly in the past due to human activity and climate variability. However, few paleo rBC records exist to constrain reconstructions of the past rBC atmospheric distribution and its climate interaction. As part of the international Roosevelt Island Climate Evolution (RICE) project, we have developed an Antarctic rBC ice core record spanning the past ~65 kyr. The RICE deep ice core was drilled from the Roosevelt Island ice dome in West Antarctica from 2011 to 2013. The high-depth-resolution (~1 cm) record was developed using a single-particle intracavity laser-induced incandescence soot photometer (SP2) coupled to an ice core melter system. The rBC record displays sub-annual variability consistent with both austral dry-season and summer biomass burning. The record exhibits significant decadal- to millennial-scale variability consistent with known changes in climate. Glacial rBC concentrations were much lower than Holocene concentrations, with the exception of several periods of abrupt increases in rBC. The transition from glacial to interglacial rBC concentrations occurred over a much longer time than for other ice core climate proxies such as water isotopes. The protracted increase in rBC during the transition may reflect Southern Hemisphere ecosystem and fire-regime changes in response to hydroclimate and human activity.

  15. Invariants for Parallel Mapping

    Institute of Scientific and Technical Information of China (English)

    YIN Yajun; WU Jiye; FAN Qinshan; HUANG Kezhi

    2009-01-01

    This paper analyzes the geometric quantities that remain unchanged during parallel mapping (i.e., mapping from a reference curved surface to a parallel surface with identical normal direction). The second gradient operator, the second class of integral theorems, the Gauss-curvature-based integral theorems, and the core property of parallel mapping are used to derive a series of parallel mapping invariants or geometrically conserved quantities. These include not only local mapping invariants but also global mapping invariants, found to exist both in a curved surface and along curves on the curved surface. The parallel mapping invariants are used to identify important transformations between the reference surface and parallel surfaces. These mapping invariants and transformations have potential applications in geometry, physics, biomechanics, and mechanics, in which various dynamic processes occur along or between parallel surfaces.
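
    In standard surface-theory notation (an assumption of this note, not quoted from the paper), the parallel mapping in question sends each point of the reference surface a fixed distance h along its unit normal:

```latex
\mathbf{r}^{*}(u,v) \;=\; \mathbf{r}(u,v) + h\,\mathbf{n}(u,v)
```

    Quantities computed from r that coincide with the corresponding quantities computed from r* are precisely the parallel-mapping invariants the paper derives; h measures the offset between the two surfaces.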

  16. Sporopollen analysis of Core B10 in the southern Yellow Sea and the reflected characteristics of climate changes

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Eight sporopollen zones have been divided based on the results of high-resolution sporopollen analysis of Core B10 in the southern Yellow Sea. Based on these results, along with 14C dates and the sub-bottom profiling data, climatic and environmental changes since the last stage of the late Pleistocene are discussed. The main conclusions are as follows: (1) the vegetation evolved in the sequence coniferous forest-grassland containing broad-leaved trees → coniferous and broad-leaved mixed forest → coniferous and broad-leaved mixed forest-grassland dominated by coniferous trees → coniferous and broad-leaved mixed forest-grassland containing evergreen broad-leaved trees → coniferous and broad-leaved mixed forest-grassland dominated by broad-leaved trees → deciduous broad-leaved forest-meadow containing evergreen broad-leaved trees → coniferous and broad-leaved mixed forest-grassland dominated by broad-leaved trees → coniferous and broad-leaved mixed forest containing evergreen broad-leaved trees; (2) eight stages of climate change are identified: cold and dry, temperate and wet, cold and dry, warm and dry, temperate and wet, hot and dry, temperate and dry, then warm and dry; (3) the sedimentary environment developed from land, to littoral zone, to land again, then to shore-neritic zone; and (4) the Yellow Sea Warm Current formed during the early Holocene rather than the Atlantic stage.

  18. Historical Associations of Molecular Measurements of Escherichia coli and Enterococci to Anthropogenic Activities and Climate Variables in Freshwater Sediment Cores.

    Science.gov (United States)

    Brooks, Yolanda M; Baustian, Melissa M; Baskaran, Mark; Ostrom, Nathaniel E; Rose, Joan B

    2016-07-05

    This study investigated the long-term associations of anthropogenic variables (sedimentary P, C, and N concentrations, and human population in the watershed) and climatic variables (air temperature and river discharge) with Escherichia coli uidA and enterococci 23S rRNA concentrations in sediment cores from Anchor Bay (AB) in Lake St. Clair, and near the mouth of the Clinton River (CR), Michigan. Calendar year was estimated from the vertical abundances of (137)Cs. The AB and CR cores spanned c.1760-2012 and c.1895-2012, respectively. There were steady-state concentrations of enterococci in AB during c.1760-c.1860 and c.1910-c.2003 at ~0.1 × 10(5) and ~2.0 × 10(5) cell equivalents (CE) per g-dry wt, respectively. Enterococci concentrations in CR increased toward the present day, and ranged from ~0.03 × 10(5) to 9.9 × 10(5) CE/g-dry wt. The E. coli concentrations in CR and AB increased toward the present day, and ranged from 0.14 × 10(7) to 1.7 × 10(7) CE/g-dry wt, and 1.8 × 10(6) to 8.5 × 10(6) CE/g-dry wt, respectively. Enterococci were associated with population and river discharge, while E. coli was associated with population, air temperature, and N and C concentrations (p < 0.05). Sediments retain records of the abundance of fecal indicator bacteria, and offer a way to evaluate responses to increased population, nutrient loading, and environmental policies.

  19. Hybrid Parallel Computation of Multi-Group Particle Transport Equations on Multi-Core Cluster Systems

    Institute of Scientific and Technical Information of China (English)

    迟利华; 刘杰; 龚春叶; 徐涵; 蒋杰; 胡庆丰

    2009-01-01

    The parallel performance of solving the multi-group particle transport equations on unstructured meshes is analysed. Adapting to the characteristics of multi-core cluster systems, this paper designs a hybrid MPI/OpenMP parallel code. The spatial mesh is partitioned by domain decomposition, with MPI message passing used between multi-core CPU nodes; whenever an MPI process reaches the energy-group computations, it forks several OpenMP threads, which then compute simultaneously within the same multi-core node. Using the hybrid MPI/OpenMP code, we solve a 2D multi-group particle transport equation on a cluster of multi-core CPU nodes; the results show that the code matches the hardware structure of multi-core cluster systems well, has good scalability, and scales to 1024 CPU cores.
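
    The code itself is not given in the record, so the following is only a structural sketch of the scheme as described: spatial cells are distributed across MPI ranks, and each rank parallelises the energy-group loop with OpenMP threads. The mesh size, group count, and the "sweep" body are placeholders invented for the example:

```c
/* Build: mpicc -fopenmp hybrid.c -o hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define NGROUPS 32

int main(int argc, char **argv)
{
    int provided, rank, size;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int ncells = 100000 / size;            /* spatial domain decomposition */
    double *flux = calloc((size_t)ncells * NGROUPS, sizeof(double));

    /* Energy groups are independent within a spatial sweep, so each MPI
     * process forks OpenMP threads over the group index. */
    #pragma omp parallel for schedule(static)
    for (int g = 0; g < NGROUPS; g++)
        for (int c = 0; c < ncells; c++)
            flux[(size_t)g * ncells + c] = 1.0 / (1.0 + g + c % 7);  /* stand-in for the transport sweep */

    /* Funneled threading model: only the master thread communicates. */
    double local = 0.0, global = 0.0;
    for (int i = 0; i < ncells * NGROUPS; i++) local += flux[i];
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0) printf("global flux sum: %g\n", global);
    free(flux);
    MPI_Finalize();
    return 0;
}
```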

  20. Influence of climatic teleconnections on the temporal isotopic variability as recorded in a firn core from the coastal Dronning Maud Land, East Antarctica

    Indian Academy of Sciences (India)

    Sushant S Naik; Meloth Thamban; A Rajakumar; Witty D’Souza; C M Laluraj; Arun Chaturvedi

    2010-02-01

    Ice and firn core studies provide one of the most valuable tools for understanding the past climate change. In order to evaluate the temporal isotopic variability recorded in ice and its relevance to environmental changes, stable isotopes of oxygen and hydrogen were studied in a firn core from coastal Dronning Maud Land, East Antarctica. The annual δ18O profile of the core shows a close relation to the El Niño Southern Oscillation (ENSO) variability. The ENSO indices show significant correlation with the surface air temperatures and δ18O values of this region during the austral summer season and support an additional influence related to the Southern Annular Mode (SAM). The correlation between the combined ENSO-SAM index and the summer δ18O record seems to have been caused through an atmospheric mechanism. Snow accumulation in this region illustrates a decreasing trend with opposite relationships with δ18O data and surface air temperature prior and subsequent to the year 1997. A reorganization of the local water cycle is further indicated by the deuterium excess data showing a shift around 1997, consistent with a change in evaporation conditions. The present study thus illustrates the utility of ice-core studies in the reconstruction of past climate change and suggests possible influence of climatic teleconnections on the snow accumulation rates and isotopic profiles of snow in the coastal regions of east Antarctica.

  1. The carbon dioxide content in ice cores - climatic curves of carbon dioxide. Zu den CO2-Klimakurven aus Eisbohrkernen

    Energy Technology Data Exchange (ETDEWEB)

    Heyke, H.E.

    1992-05-01

    The 'greenhouse effect', which implies a surface temperature of 15 deg C as against -18 deg C, owes about 80% of its effect to water (clouds and gaseous phase) and about 10% to carbon dioxide, besides other components. Whereas water is largely unaccounted for, carbon dioxide has been postulated as the main cause of the anticipated climatic catastrophe. The carbon dioxide concentration in the atmosphere has presently risen to levels that seem to leave all previous figures far behind. The reference point is the concentration of carbon dioxide in air bubbles trapped in Antarctic and Greenland ice cores dated to as much as 160 000 years ago, which show much lower values than at present. A review of the most relevant publications indicates that many basic laws of chemistry seem to have been left largely unconsidered and that experimental errors have made the results rather doubtful. Appropriate arguments are presented. The investigations considered should be repeated under improved and more careful conditions. (orig.).

  2. Research on Parallel On-Line Analysis Processing Algorithms on Multi-Core CPUs

    Institute of Scientific and Technical Information of China (English)

    周国亮; 王桂兰; 朱永利

    2013-01-01

    Computer hardware technology has developed greatly, notably large memories and multi-core CPUs, but algorithm efficiency has not improved with the hardware. The fundamental reasons are insufficient utilization of the CPU cache and the limitations of single-threaded programming. In the field of OLAP (on-line analysis processing), data cube computation is an important and time-consuming operation, and improving its performance is a difficult research problem in this field. Based on the characteristics of multi-core CPUs, this paper proposes two parallel algorithms, MT-Multi-Way (multi-threading multi-way) and MT-BUC (multi-threading bottom-up computation), which use data partitioning and multi-thread cooperation. Both algorithms avoid cache contention between threads and keep the load balanced, and so obtain near-linear speedup. Building on these algorithms, the paper suggests a unified framework for cube computation on multi-core CPUs, covering how to partition data and how to restructure recursive programs on multi-core CPUs to guide the parallelization of cube computation.
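
    A minimal sketch, assuming C++17 with OpenMP, of the cache-conscious aggregation idea the abstract relies on: each thread accumulates into its own cache-line-aligned partials, so threads never contend for shared cache lines; PaddedCounts and the eight group-by buckets are invented for illustration:

        #include <omp.h>
        #include <vector>
        #include <cstdio>

        struct alignas(64) PaddedCounts {            // 64-byte alignment avoids false sharing
            long counts[8] = {0};                    // 8 hypothetical group-by buckets
        };

        int main() {
            const int n = 1'000'000;
            std::vector<int> keys(n);
            for (int i = 0; i < n; ++i) keys[i] = i % 8;   // toy fact-table key column

            int nthreads = omp_get_max_threads();
            std::vector<PaddedCounts> partial(nthreads);

            #pragma omp parallel
            {
                PaddedCounts& mine = partial[omp_get_thread_num()];
                #pragma omp for
                for (int i = 0; i < n; ++i)
                    ++mine.counts[keys[i]];          // thread-local write: no contention
            }

            long totals[8] = {0};                    // serial merge of the small partials
            for (const auto& p : partial)
                for (int b = 0; b < 8; ++b) totals[b] += p.counts[b];

            for (int b = 0; b < 8; ++b) std::printf("bucket %d: %ld\n", b, totals[b]);
            return 0;
        }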

  3. Climate change and the daily press: Italy vs. USA parallel analysis; Stampa e cambiamento climatico: un confronto internazionale

    Energy Technology Data Exchange (ETDEWEB)

    Borrelli, G.; Mazzotta, V. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dip. Ambiente; Falconi, C.; Grossi, R.; Farabollini, F.

    1996-06-01

    Among the activities of ENEA (Italian National Agency for New Technologies, Energy and the Environment), one deals with the analysis and strategies of environmental information. A survey was carried out of the coverage, in four daily newspapers, of an issue belonging to this area (global climate change). The newspapers involved are two Italian ones, 'La Repubblica' and 'Il Corriere della Sera', and two North American ones, the 'New York Times' and the 'Washington Post'. The purpose of the work was to gauge the qualitative and quantitative level of awareness of the Italian press via a comparison with the North American press, known for its sensitivity to and care with environmental issues. The articles analyzed break down as follows: 319 for the 'New York Times', 309 for the 'Washington Post', 146 for the 'Corriere della Sera', and 81 for 'La Repubblica'. The period covered by the analysis spans from 1989, the year in which organization of the 1992 Rio Conference began, to December 1994, deadline date for the submission of national

  4. Deterministic Replay for Parallel Programs in Multi-Core Processors

    Institute of Scientific and Technical Information of China (English)

    高岚; 王锐; 钱德沛

    2013-01-01

    Deterministic replay of parallel programs on multi-core processors is an effective means of debugging parallel programs and is important for parallel programming. However, owing to the difficulty of tackling unsynchronized accesses to shared memory in multiprocessors, research on deterministic replay still faces challenges on several fronts, debugging parallel programs remains hard, and industrial-strength deterministic replay has not yet emerged, which has seriously hindered the spread and development of parallel programs on multi-core architectures. This paper analyzes the key sources of non-determinism that make deterministic replay difficult to achieve on multi-core processors and summarizes the metrics used to evaluate replay schemes. After surveying recent research on deterministic multi-core replay, it introduces the proposed deterministic replay schemes for parallel programs, examines the characteristics of software-only and hardware-assisted schemes, analyzes current research, and on that basis discusses future research trends and application prospects for deterministic replay of parallel programs on multi-core architectures.
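
    A toy sketch of the order-based record/replay idea, assuming C++ threads: during recording, the program logs which thread acquired a shared lock at each step; during replay, a turn check forces the identical acquisition order, making the nondeterministic interleaving repeatable. The scheme and all names are illustrative, not any system from the surveyed literature:

        #include <condition_variable>
        #include <mutex>
        #include <thread>
        #include <vector>
        #include <cstdio>

        std::mutex m;
        std::condition_variable cv;
        std::vector<int> log_order;          // filled during recording
        size_t replay_pos = 0;               // index of the next logged owner
        int counter = 0;

        void worker_record(int id, int steps) {
            for (int s = 0; s < steps; ++s) {
                std::lock_guard<std::mutex> g(m);
                log_order.push_back(id);     // record who got the lock
                ++counter;
            }
        }

        void worker_replay(int id, int steps) {
            for (int s = 0; s < steps; ++s) {
                std::unique_lock<std::mutex> g(m);
                cv.wait(g, [&] { return log_order[replay_pos] == id; });  // wait for our turn
                ++replay_pos;
                ++counter;
                cv.notify_all();
            }
        }

        int main() {
            { std::thread a(worker_record, 0, 3), b(worker_record, 1, 3); a.join(); b.join(); }
            std::printf("recorded order:");
            for (int id : log_order) std::printf(" %d", id);
            std::printf("\n");

            counter = 0;
            { std::thread a(worker_replay, 0, 3), b(worker_replay, 1, 3); a.join(); b.join(); }
            std::printf("replayed %zu acquisitions in the logged order\n", replay_pos);
            return 0;
        }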

  5. Transient climate simulations of the deglaciation 21-9 thousand years before present (version 1) - PMIP4 Core experiment design and boundary conditions

    Science.gov (United States)

    Ivanovic, Ruza F.; Gregoire, Lauren J.; Kageyama, Masa; Roche, Didier M.; Valdes, Paul J.; Burke, Andrea; Drummond, Rosemarie; Peltier, W. Richard; Tarasov, Lev

    2016-07-01

    The last deglaciation, which marked the transition between the last glacial and present interglacial periods, was punctuated by a series of rapid (centennial and decadal) climate changes. Numerical climate models are useful for investigating mechanisms that underpin the climate change events, especially now that some of the complex models can be run for multiple millennia. We have set up a Paleoclimate Modelling Intercomparison Project (PMIP) working group to coordinate efforts to run transient simulations of the last deglaciation, and to facilitate the dissemination of expertise between modellers and those engaged with reconstructing the climate of the last 21 000 years. Here, we present the design of a coordinated Core experiment over the period 21-9 thousand years before present (ka) with time-varying orbital forcing, greenhouse gases, ice sheets and other geographical changes. A choice of two ice sheet reconstructions is given, and we make recommendations for prescribing ice meltwater (or not) in the Core experiment. Additional focussed simulations will also be coordinated on an ad hoc basis by the working group, for example to investigate more thoroughly the effect of ice meltwater on climate system evolution, and to examine the uncertainty in other forcings. Some of these focussed simulations will target shorter durations around specific events in order to understand them in more detail and allow for the more computationally expensive models to take part.

  6. Badlands: A parallel basin and landscape dynamics model

    Directory of Open Access Journals (Sweden)

    T. Salles

    2016-01-01

    Over more than three decades, a number of numerical landscape evolution models (LEMs) have been developed to study the combined effects of climate, sea level, tectonics and sediments on Earth surface dynamics. Most of them are written in efficient programming languages, but often cannot be used on parallel architectures. Here, I present a LEM which ports a common core of accepted physical principles governing landscape evolution into a distributed-memory parallel environment. Badlands (an acronym for BAsin anD LANdscape DynamicS) is an open-source, flexible, TIN-based landscape evolution model, built to simulate topography development at various space and time scales.

  7. TBB Task-Oriented Hybrid Parallel Programming Model for Multi-Core Clusters

    Institute of Scientific and Technical Information of China (English)

    顾慧; 郑晓薇; 张建强; 吴华平

    2011-01-01

    A hybrid parallel programming model suited to multi-core clusters is constructed. The model combines two modes: shared-memory, task-oriented TBB programming and message-passing MPI programming. Drawing on the advantages of both, it achieves two-level parallelism, mapping processes to processing nodes and, within each process, threads to processor cores. Compared with programs written in a single programming mode, algorithms using this hybrid model not only reduce execution time and obtain better speedup and efficiency, but also improve cluster performance markedly.
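
    A minimal sketch of the two levels, assuming an MPI environment and the oneTBB library; the sum-of-squares workload is a stand-in for real per-node work (build roughly as: mpicxx two_level.cpp -ltbb):

        #include <mpi.h>
        #include <tbb/parallel_reduce.h>
        #include <tbb/blocked_range.h>
        #include <vector>
        #include <functional>
        #include <cstdio>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, nprocs;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

            const size_t n_global = 1 << 24;
            const size_t n_local = n_global / nprocs;    // block assigned to this rank
            std::vector<double> x(n_local, 1.0);

            // Level 2: TBB tasks spread this rank's share over its cores.
            double local = tbb::parallel_reduce(
                tbb::blocked_range<size_t>(0, n_local), 0.0,
                [&](const tbb::blocked_range<size_t>& r, double acc) {
                    for (size_t i = r.begin(); i != r.end(); ++i) acc += x[i] * x[i];
                    return acc;
                },
                std::plus<double>());

            // Level 1: message passing combines the per-node results.
            double global = 0.0;
            MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0) std::printf("global sum of squares: %g\n", global);

            MPI_Finalize();
            return 0;
        }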

  8. New insights on the anatomy of abrupt climate changes based on high-resolution ice core records from NorthGRIP (Greenland)

    Science.gov (United States)

    Capron, E.; Rasmussen, S.; Popp, T. J.; Vaughn, B. H.; Gkinis, V.; Erhardt, T.; Fischer, H.; Blunier, T.; Landais, A.; Pedro, J. B.; Steffensen, J. P.; Svensson, A.; Vinther, B.

    2016-12-01

    The millennial-scale succession of Greenland Stadials (GS) and Greenland Interstadials (GI) illustrates the Greenland expression of the well-known sequence of Dansgaard-Oeschger (DO) events, within which we observe additional climate variations of decadal to centennial-scale duration. Various paradigms, mostly based on interactions between the cryosphere and the ocean, have been proposed to explain the existence and evolution of DO events. Annual to decadal scale records of environmental and climatic regional changes over the rapid transitions are needed to assess whether climate model outputs based on a particular mechanism are consistent with the observed spatial pattern and temporal phasing. Here we present new multiannual resolution stable water isotope measurements (ice δ18O and δD) and annually resolved ion chemistry records from the NorthGRIP ice core. Because these tracers imprint the signatures of different parts of the Northern Hemisphere climate system, we can map the anatomy - the spatial and temporal signature of climate and environmental changes - associated with abrupt transitions (from GS to GI and vice versa) occurring during Marine Isotopic Stage (MIS) 4. We determine via a statistical approach the timing and duration of the transitions, along with the amplitude of the local and regional changes associated with each Greenland warming and cooling phase. We quantify similarities and differences in the sequences of events through a comparison with results obtained for MIS 3 abrupt transitions and results from the NEEM ice core for selected transitions. The anatomy of abrupt climate changes appears to be different from one event to the next, suggesting that the mechanisms at play are not identical for all of them. We discuss the possible influence of (1) the Heinrich Stadials (i.e. GS during which a Heinrich Event occurred) and of (2) the long-term evolution of the climate system on the different decadal to centennial-scale sequences of events.

  9. The impact of glacier retreat from the Ross Sea on local climate: Characterization of mineral dust in the Taylor Dome ice core, East Antarctica

    Science.gov (United States)

    Aarons, S. M.; Aciego, S. M.; Gabrielli, P.; Delmonte, B.; Koornneef, J. M.; Wegner, A.; Blakowski, M. A.

    2016-06-01

    Recent declines in ice shelf and sea ice extent experienced in polar regions highlight the importance of evaluating variations in local weather patterns in response to climate change. Airborne mineral particles (dust) transported through the atmosphere and deposited on ice sheets and glaciers in Antarctica and Greenland can provide a robust set of tools for resolving the evolution of climatic systems through time. Here we present the first high time resolution radiogenic isotope (strontium and neodymium) data for Holocene dust in a coastal East Antarctic ice core, accompanied by rare earth element composition, dust concentration, and particle size distribution during the last deglaciation. We aim to use these combined ice core data to determine dust provenance, with variations indicative of shifts in either dust production, sources, and/or transport pathways. We analyzed a series of 17 samples from the Taylor Dome (77°47′47″S, 158°43′26″E) ice core, 113-391 m in depth from 1.1-31.4 ka. Radiogenic isotopic and rare earth element compositions of dust during the last glacial period are in good agreement with previously measured East Antarctic ice core dust records. In contrast, the Holocene dust dataset displays a broad range in isotopic and rare earth element compositions, suggesting a shift from long-range transported dust to a more variable, local input that may be linked to the retreat of the Ross Ice Shelf during the last deglaciation. Observed changes in the dust cycle inferred from a coastal East Antarctic ice core can thus be used to infer an evolving local climate.

  10. Isotopic and hydrologic responses of small, closed lakes to climate variability: Comparison of measured and modeled lake level and sediment core oxygen isotope records

    Science.gov (United States)

    Steinman, Byron A.; Abbott, Mark B.; Nelson, Daniel B.; Stansell, Nathan D.; Finney, Bruce P.; Bain, Daniel J.; Rosenmeier, Michael F.

    2013-03-01

    Simulations conducted using a coupled lake-catchment isotope mass balance model forced with continuous precipitation, temperature, and relative humidity data successfully reproduce (within uncertainty limits) long-term (i.e., multidecadal) trends in reconstructed lake surface elevations and sediment core oxygen isotope (δ18O) values at Castor Lake and Scanlon Lake, north-central Washington. Error inherent in sediment core dating methods and uncertainty in climate data contribute to differences in model reconstructed and measured short-term (i.e., sub-decadal) sediment (i.e., endogenic and/or biogenic carbonate) δ18O values, suggesting that model isotopic performance over sub-decadal time periods cannot be successfully investigated without better constrained climate data and sediment core chronologies. Model reconstructions of past lake surface elevations are consistent with estimates obtained from aerial photography. Simulation results suggest that precipitation is the strongest control on lake isotopic and hydrologic dynamics, with secondary influence by temperature and relative humidity. This model validation exercise demonstrates that lake-catchment oxygen isotope mass balance models forced with instrumental climate data can reproduce lake hydrologic and isotopic variability over multidecadal (or longer) timescales, and therefore, that such models could potentially be used for quantitative investigations of paleo-lake responses to hydroclimatic change.

  11. The Chew Bahir Drilling Project (HSPDP). Deciphering climate information from the Chew Bahir sediment cores: Towards a continuous half-million year climate record near the Omo-Turkana key palaeoanthropological site

    Science.gov (United States)

    Foerster, Verena E.; Asrat, Asfawossen; Chapot, Melissa S.; Cohen, Andrew S.; Dean, Jonathan R.; Deino, Alan; Günter, Christina; Junginger, Annett; Lamb, Henry F.; Leng, Melanie J.; Roberts, Helen M.; Schaebitz, Frank; Trauth, Martin H.

    2017-04-01

    As a contribution towards an enhanced understanding of human-climate interactions, the Hominin Sites and Paleolakes Drilling Project (HSPDP) has successfully completed coring five dominantly lacustrine archives of climate change during the last 3.5 Ma in East Africa. All five sites in Ethiopia and Kenya are adjacent to key paleoanthropological research areas encompassing diverse milestones in human evolution, dispersal episodes, and technological innovation. The 280 m-long Chew Bahir sediment records, recovered from a tectonically-bound basin in the southern Ethiopian rift in late 2014, cover the past 550 ka of environmental history, a time period that includes the transition to the Middle Stone Age, and the origin and dispersal of modern Homo sapiens. Deciphering climate information from lake sediments is challenging, due to the complex relationship between climate parameters and sediment composition. We present the first results of our efforts to develop a reliable climate-proxy tool box for Chew Bahir by deconvolving the relationship between the sedimentological and geochemical composition of the sediments and strongly climate-controlled processes in the basin, such as incongruent weathering, transportation and authigenic mineral alteration. Combining the first results from the long cores with those from a pilot study of short cores taken in 2009/10 along a NW-SE transect of the basin, we have developed a hypothesis linking climate forcing and paleoenvironmental signal-formation processes in the basin. X-ray diffraction analysis of the first sample sets from the long Chew Bahir record reveals processes similar to those recognized in the uppermost 20 m during the project's pilot study: the diagenetic illitization of smectites during episodes of higher alkalinity and salinity in the closed-basin lake, induced by a drier climate. The precise time resolution, largely continuous record and (eventually) a detailed understanding of site-specific proxy formation

  12. Heterogeneous Parallel Computing

    OpenAIRE

    2013-01-01

    With processor core counts doubling every 18-24 months and penetrating all markets from high-end servers in supercomputers to desktops and laptops down to even mobile phones, we sit at the dawn of a world of ubiquitous parallelism, one where extracting performance via parallelism is paramount. That is, the "free lunch" to better performance, where programmers could rely on substantial increases in single-threaded performance to improve software, is over. The burden falls on developers to expl...

  13. Mount Logan Ice Core Evidence for Secular Changes in the Climate of the North Pacific Following the End of the Little Ice Age

    Science.gov (United States)

    Moore, K.; Alverson, K.; Holdsworth, G.

    2003-12-01

    The relatively short length of most instrumental climate datasets restricts the study of the variability and trends that exist in the climate system. This is particularly true of the atmosphere, for which high-quality, spatially dense data exist only since the late 1940s. With these data, the Pacific North America pattern (PNA) has been identified as one of the dominant modes of variability in the atmosphere that plays an important role in the climate of North America. This pattern consists of alternating regions of high and low geopotential height anomalies in the middle and upper troposphere arching from the tropical Pacific to North America. It is thought to be the result of a standing Rossby wave pattern forced by the upper-atmospheric convergence associated with the descending branch of the regional Hadley circulation. We describe the climate signal contained in a 301-year ice core record from a high-elevation site on Mount Logan in the Yukon. This record has a statistically significant and accelerating positive trend in snow accumulation from the middle of the 19th century, the end of the Little Ice Age. As we show, this record contains an expression of the Pacific North America (PNA) teleconnection as well as the regional Hadley and Walker circulations in the Pacific. We argue that the positive trend in snow accumulation in the ice core reflects secular changes in the intensities of these circulations that have been ongoing since the end of the Little Ice Age.

  14. Climatic Seesaws Across The North Pacific As Revealed By High-Mountain Ice Cores Drilled At Kamchatka And Wrangell-St. Elias Mountains

    Science.gov (United States)

    Shiraiwa, T.; Goto-Azuma, K.; Kanamori, S.; Matoba, S.; Benson, C. S.; Muravyev, Y. D.; Salamatin, A. N.

    2004-12-01

    We drilled ca. 210-m deep ice cores at Mt. Ushkovsky (Kamchatka, 1998), King Col of Mt. Logan (2002) and Mt. Wrangell (2004). Thanks to accumulation rates as high as 2 m per year at these mountains, the ice cores are expected to unveil climate and atmospheric changes in the northern North Pacific over the last several centuries. The reconstructed annual accumulation-rate time series for Mt. Ushkovsky shows, for example, decadal to interdecadal oscillations closely correlated with the Pacific Decadal Oscillation (PDO). Comparison of the reconstructed accumulation rates from Ushkovsky with those from our two Wrangell-St. Elias ice cores suggests that the PDO played an important role in determining precipitation on both sides of the northern North Pacific: during the last two centuries, positive PDO brought high precipitation to Pacific North America and negative PDO brought high precipitation to Kamchatka. Besides the significance of the climate proxy signals, the physical properties of the ice cores and the related glaciological features of the three mountains reveal the unique character of glaciers developing on high mountains with complicated topographies and high accumulation rates. Careful treatment of the dynamic behavior of these high-mountain glaciers is shown to be indispensable for precise reconstruction of past accumulation time series.

  15. A Concurrent Collections Based Approach for Parallel Data Compression on Multi-Core Systems

    Institute of Scientific and Technical Information of China (English)

    乔峰

    2015-01-01

    Multi-core computer systems are now mainstream, and software developers need to design multi-core solutions with increased parallelism. Using a multi-core system effectively is a major challenge, and it is difficult to develop multi-threaded applications with an operating system's built-in native thread programming model. Multi-core programming models such as Intel's TBB, ArBB and Cilk are available, and Intel has proposed a simple and effective multi-core programming model named Concurrent Collections (CnC), a declarative parallel language that allows application developers to express a parallel application as a collection of high-level computations. In this paper, we describe how to use this programming model to implement a high-performance parallel data compression application and compare it against existing approaches. On a platform with two 3.16 GHz Xeon X5460 processors (8 hardware threads), the parallelized solution exceeded the performance of serial code by up to 8x. Compared with alternative parallelized implementations based on OpenMP, TBB and Cilk, the Concurrent Collections approach gained a further 5%-10% in performance.
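
    The abstract shows no CnC code, so the sketch below uses plain C++ tasks (std::async) to convey the underlying decomposition: independent blocks are compressed in parallel and the results concatenated in block order; toy_compress is a hypothetical run-length stand-in for a real compressor such as zlib:

        #include <future>
        #include <string>
        #include <vector>
        #include <cstdio>

        // Hypothetical stand-in: run-length encodes a block.
        std::string toy_compress(const std::string& block) {
            std::string out;
            for (size_t i = 0; i < block.size();) {
                size_t j = i;
                while (j < block.size() && block[j] == block[i]) ++j;
                out += block[i];
                out += std::to_string(j - i);
                i = j;
            }
            return out;
        }

        int main() {
            std::string data(1 << 20, 'a');              // toy input
            const size_t n_blocks = 8;
            const size_t block_size = data.size() / n_blocks;

            // One task per block; tasks are independent, so they run on all cores.
            std::vector<std::future<std::string>> tasks;
            for (size_t b = 0; b < n_blocks; ++b)
                tasks.push_back(std::async(std::launch::async, toy_compress,
                                           data.substr(b * block_size, block_size)));

            std::string compressed;                      // concatenate in original order
            for (auto& t : tasks) compressed += t.get();

            std::printf("in: %zu bytes, out: %zu bytes\n", data.size(), compressed.size());
            return 0;
        }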

  16. Ice core records of monoterpene- and isoprene-SOA tracers from Aurora Peak in Alaska since 1660s: Implication for climate change variability in the North Pacific Rim

    Science.gov (United States)

    Pokhrel, Ambarish; Kawamura, Kimitaka; Ono, Kaori; Seki, Osamu; Fu, Pingqing; Matoba, Sumio; Shiraiwa, Takayuki

    2016-04-01

    Monoterpene and isoprene secondary organic aerosol (SOA) tracers are reported for the first time in an Alaskan ice core to better understand the biological source strength before and after the industrial revolution in the Northern Hemisphere. We found significantly high concentrations of monoterpene- and isoprene-SOA tracers (e.g., pinic, pinonic, and 2-methylglyceric acids, 2-methylthreitol and 2-methylerythritol) in the ice core, which show historical trends with good correlation to each other since the 1660s. They show positive correlations with sugar compounds (e.g., mannitol, fructose, glucose, inositol and sucrose), and anti-correlations with α-dicarbonyls (glyoxal and methylglyoxal) and fatty acids (e.g., C18:1) in the same ice core. These results suggest similar sources and transport pathways for monoterpene- and isoprene-SOA tracers. In addition, we found that concentrations of C5-alkene triols (e.g., 3-methyl-2,3,4-trihydroxy-1-butene, cis-2-methyl-1,3,4-trihydroxy-1-butene and trans-2-methyl-1,3,4-trihydroxy-1-butene) in the ice core have increased after the Great Pacific Climate Shift (late 1970s). They show positive correlations with α-dicarbonyls and fatty acids (e.g., C18:1) in the ice core, suggesting that enhanced oceanic emissions of biogenic organic compounds through the marine boundary layer are recorded in the ice core from Alaska. Photochemical oxidation processes for these monoterpene- and isoprene-/sesquiterpene-SOA tracers are suggested to be linked with the periodicity of multi-decadal climate oscillations and the retreat of sea ice in the Northern Hemisphere.

  17. Sensitivity of interglacial Greenland temperature and δ18O to orbital and CO2 forcing: climate simulations and ice core data

    Directory of Open Access Journals (Sweden)

    J. Sjolte

    2011-05-01

    The sensitivity of interglacial Greenland temperature to orbital and CO2 forcing is investigated using the NorthGRIP ice core data and coupled ocean-atmosphere IPSL-CM4 model simulations. These simulations were conducted in response to different interglacial orbital configurations and to increased CO2 concentrations. These different forcings cause very distinct simulated seasonal and latitudinal temperature and water-cycle changes, limiting the analogies between the last interglacial and future climate. However, the IPSL-CM4 model shows similar magnitudes of Arctic summer warming and climate feedbacks in response to 2 × CO2 and to the orbital forcing of the last interglacial period (126 000 yr ago). The IPSL model produces a remarkably linear relationship between top-of-atmosphere incoming summer solar radiation and simulated changes in summer and annual mean central Greenland temperature. This contrasts with the stable isotope record from the Greenland ice cores, which shows a multi-millennial lagged response to summer insolation. During the early part of interglacials, the observed lags may be explained by ice sheet-ocean feedbacks linked with changes in ice sheet elevation and the impact of meltwater on ocean circulation, as investigated with sensitivity studies. A quantitative comparison between ice core data and climate simulations requires exploring the stability of the stable isotope-temperature relationship. Atmospheric simulations including water stable isotopes have been conducted with the LMDZiso model under different boundary conditions. This set of simulations allows calculation of a temporal Greenland isotope-temperature slope (0.3-0.4 ‰ per °C) during warmer-than-present Arctic climates, in response to increased CO2, increased ocean temperature and orbital forcing. This temporal slope appears twice as small as the modern spatial gradient and is consistent with other ice core estimates. A preliminary comparison with other model

  18. Design of an electromagnetic-field FDTD multi-core parallel program based on OpenMP

    Institute of Scientific and Technical Information of China (English)

    吕忠亭; 张玉强; 崔巍

    2013-01-01

    The design of multi-core parallel FDTD programs for electromagnetic fields based on OpenMP is discussed, with the aim of achieving better performance gains when the method is applied to more sophisticated algorithms. A one-dimensional electromagnetic-field FDTD problem is taken as the example, and its calculation method and process are briefly described. In a Fortran environment, parallelization is achieved with OpenMP in a fine-grained way: only the loop portion is computed in parallel. The parallel method was then verified in a three-dimensional transient-field FDTD program for electric dipole radiation. The parallel algorithm achieved a better speedup and higher efficiency than other parallel FDTD algorithms. The results indicate that the OpenMP-based electromagnetic-field FDTD parallel algorithm has very good speedup and efficiency.
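
    The paper parallelizes Fortran loops; for consistency with the other sketches, here is the same fine-grained, loop-only OpenMP pattern applied to a toy 1-D FDTD update in C++ (grid size, source and the Courant factor of 0.5 are illustrative, not the paper's setup):

        #include <omp.h>
        #include <vector>
        #include <cstdio>

        int main() {
            const int nx = 1 << 20;
            const int nsteps = 200;
            std::vector<double> ex(nx, 0.0), hy(nx, 0.0);

            for (int t = 0; t < nsteps; ++t) {
                ex[nx / 2] += 1.0;                       // hard source at the midpoint

                #pragma omp parallel for                 // H update: each i is independent
                for (int i = 0; i < nx - 1; ++i)
                    hy[i] += 0.5 * (ex[i + 1] - ex[i]);

                #pragma omp parallel for                 // E update: each i is independent
                for (int i = 1; i < nx; ++i)
                    ex[i] += 0.5 * (hy[i] - hy[i - 1]);
            }

            std::printf("ex at source after %d steps: %g\n", nsteps, ex[nx / 2]);
            return 0;
        }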

  19. Application and Realization Research of Parallel Computing on Multi-Core Platforms

    Institute of Scientific and Technical Information of China (English)

    秦书茂; 叶海建

    2013-01-01

    Multi-core CPUs have become the standard configuration of PCs. To exploit the performance of the PC fully and increase the running speed of application software, this paper studies how to implement parallel computing on multi-core CPUs and applies it to the computation of a virtual normal-boundary model for shallow-water flow velocity measurement. Tests show that the model solves about 1.4 times and 2.4 times faster with dual-core and quad-core parallel computing, respectively, than on a single core; the more cores, the larger the factor.
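
    A minimal way to reproduce that kind of measurement, assuming OpenMP: time the same loop with one thread and with all cores and report the ratio; the sine-sum workload merely stands in for the flow-velocity model computation:

        #include <omp.h>
        #include <cmath>
        #include <cstdio>

        double work(int n) {
            double s = 0.0;
            #pragma omp parallel for reduction(+ : s)
            for (int i = 0; i < n; ++i)
                s += std::sin(i * 1e-6);
            return s;
        }

        int main() {
            const int n = 50'000'000;

            omp_set_num_threads(1);
            double t0 = omp_get_wtime();
            double r1 = work(n);
            double serial = omp_get_wtime() - t0;

            omp_set_num_threads(omp_get_num_procs());
            t0 = omp_get_wtime();
            double r2 = work(n);
            double parallel = omp_get_wtime() - t0;

            std::printf("serial %.3fs, parallel %.3fs, speedup %.2fx (checks: %g %g)\n",
                        serial, parallel, serial / parallel, r1, r2);
            return 0;
        }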

  20. Transient climate simulations of the deglaciation 21–9 thousand years before present; PMIP4 Core experiment design and boundary conditions

    Directory of Open Access Journals (Sweden)

    R. F. Ivanovic

    2015-10-01

    The last deglaciation, which marked the transition between the last glacial and present interglacial periods, was punctuated by a series of rapid (centennial and decadal) climate changes. Numerical climate models are useful for investigating mechanisms that underpin the events, especially now that some of the complex models can be run for multiple millennia. We have set up a Paleoclimate Modelling Intercomparison Project (PMIP) working group to coordinate efforts to run transient simulations of the last deglaciation, and to facilitate the dissemination of expertise between modellers and those engaged with reconstructing the climate of the last 21 thousand years. Here, we present the design of a coordinated Core simulation over the period 21–9 thousand years before present (ka) with time-varying orbital forcing, greenhouse gases, ice sheets, and other geographical changes. A choice of two ice sheet reconstructions is given, but no ice sheet or iceberg meltwater should be prescribed in the Core simulation. Additional focussed simulations will also be coordinated on an ad-hoc basis by the working group, for example to investigate the effect of ice sheet and iceberg meltwater, and the uncertainty in other forcings. Some of these focussed simulations will focus on shorter durations around specific events to allow the more computationally expensive models to take part.

  1. Parallel program performance optimization for data-intensive applications on multi-core clusters

    Institute of Scientific and Technical Information of China (English)

    黄华林; 钟诚

    2012-01-01

    Using a dynamic data-task-block scheduling policy and an all-locking technique on heterogeneous multi-core clusters, this paper presents a hybrid multiprocess/multithread parallel programming mechanism for data-intensive applications, which efficiently uses data in main memory and dynamically schedules data blocks within the available shared L2 cache, and it presents strategies and techniques for optimizing the performance of parallel data-intensive applications. Experiments on heterogeneous clusters of multi-core computers, solving multi-keyword search over random sequences in parallel, show that the proposed multi-core parallel programming mechanism and performance optimization methods are feasible and efficient.
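
    One thread-level analogue of the dynamic block scheduling described above, assuming OpenMP: schedule(dynamic, 1) lets idle threads grab the next unprocessed block, balancing the load when blocks take unequal time; the keyword-search blocks are toy data, not the paper's workload:

        #include <omp.h>
        #include <vector>
        #include <cstdio>

        int main() {
            const int n_blocks = 64;
            const int block_len = 100'000;
            std::vector<std::vector<int>> blocks(n_blocks, std::vector<int>(block_len));
            for (int b = 0; b < n_blocks; ++b)
                for (int i = 0; i < block_len; ++i)
                    blocks[b][i] = (b * 31 + i * 7) % 1000;        // pseudo-random fill

            const int keyword = 42;
            long hits = 0;

            // dynamic,1: each thread fetches one block at a time as it becomes free.
            #pragma omp parallel for schedule(dynamic, 1) reduction(+ : hits)
            for (int b = 0; b < n_blocks; ++b)
                for (int i = 0; i < block_len; ++i)
                    if (blocks[b][i] == keyword) ++hits;

            std::printf("occurrences of %d: %ld\n", keyword, hits);
            return 0;
        }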

  2. Application of LBM on a Multi-Core Parallel Programming Model

    Institute of Scientific and Technical Information of China (English)

    李彬彬; 李青

    2011-01-01

    The LBGK (Lattice Bhatnagar-Gross-Krook) model is not only new ground in the theory and application of the LBM (Lattice Boltzmann Method) but also a novel numerical method well suited to massively parallel processing. Through its thread management, MTI (Multi-Thread Interface) provides two main methods for parallel coding on multi-core processors: data parallelism based on cache blocking, and task scheduling with work stealing. MTI offers a convenient and efficient interface for development in multi-core environments, greatly reducing the burden on developers. An LBGK model for pattern formation is implemented with MTI, and the numerical results show that MTI is efficient and easy to use.
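
    The MTI library itself is not shown in the abstract, so this sketch uses plain OpenMP to stand in for its data-parallel interface; it implements a minimal D1Q3 LBGK collision-and-streaming step with illustrative parameters (tau, lattice size, perturbation):

        #include <omp.h>
        #include <array>
        #include <vector>
        #include <cstdio>

        int main() {
            const int nx = 4096;
            const double tau = 0.8;                      // relaxation time
            const std::array<double, 3> w = {2.0 / 3, 1.0 / 6, 1.0 / 6};  // D1Q3 weights
            const std::array<int, 3> c = {0, 1, -1};     // lattice velocities

            // f[i][x]: distribution for direction i at site x; start at rest, rho = 1.
            std::vector<std::vector<double>> f(3, std::vector<double>(nx));
            for (int i = 0; i < 3; ++i)
                for (int x = 0; x < nx; ++x) f[i][x] = w[i];
            f[1][nx / 2] += 0.1;                         // small perturbation

            auto fnew = f;
            for (int t = 0; t < 100; ++t) {
                // Each site's collision is independent; streaming targets are disjoint.
                #pragma omp parallel for
                for (int x = 0; x < nx; ++x) {
                    double rho = f[0][x] + f[1][x] + f[2][x];
                    double u = (f[1][x] - f[2][x]) / rho;
                    for (int i = 0; i < 3; ++i) {
                        double cu = c[i] * u;
                        double feq = w[i] * rho * (1.0 + 3.0 * cu + 4.5 * cu * cu - 1.5 * u * u);
                        double fpost = f[i][x] - (f[i][x] - feq) / tau;   // BGK collision
                        fnew[i][(x + c[i] + nx) % nx] = fpost;            // periodic streaming
                    }
                }
                f.swap(fnew);
            }

            double rho_mid = f[0][nx / 2] + f[1][nx / 2] + f[2][nx / 2];
            std::printf("density at midpoint after 100 steps: %g\n", rho_mid);
            return 0;
        }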

  3. Record of Volcanism Since 7000 B.C. from the GISP2 Greenland Ice Core and Implications for the Volcano-Climate System.

    Science.gov (United States)

    Zielinski, G A; Mayewski, P A; Meeker, L D; Whitlow, S; Twickler, M S; Morrison, M; Meese, D A; Gow, A J; Alley, R B

    1994-05-13

    Sulfate concentrations from continuous biyearly sampling of the GISP2 Greenland ice core provide a record of potential climate-forcing volcanism since 7000 B.C. Although 85 percent of the events recorded over the last 2000 years were matched to documented volcanic eruptions, only about 30 percent of the events from 1 to 7000 B.C. were matched to such events. Several historic eruptions may have been greater sulfur producers than previously thought. There are three times as many events from 5000 to 7000 B.C. as over the last two millennia with sulfate deposition equal to or up to five times that of the largest known historical eruptions. This increased volcanism in the early Holocene may have contributed to climatic cooling.

  4. Link of volcanic activity and climate change in Altai studied in the ice core from Belukha Mountain

    OpenAIRE

    N. S. Malygina; T. V. Barlyaeva; T. S. Papina

    2013-01-01

    In the present research we discuss the role of volcanic activity in the Altai thermal regime. We analyse sulfate and temperature data reconstructed from a natural paleoarchive, an ice core from the saddle of Belukha Mountain. Sulfate ice-core reconstructions can serve as volcanic markers. Both the sulfate and the temperature reconstructions cover the last 750 years. As characteristics of volcanic activity we consider the Volcanic Explosivity Index (VEI), the Dust Veil Index (DVI) and the Ice core v...

  5. Transient climate simulations of the deglaciation 21-9 thousand years before present; PMIP4 Core experiment design and boundary conditions

    Science.gov (United States)

    Ivanovic, Ruza; Gregoire, Lauren; Kageyama, Masa; Roche, Didier; Valdes, Paul; Burke, Andrea; Drummond, Rosemarie; Peltier, W. Richard; Tarasov, Lev

    2016-04-01

    The last deglaciation, which marked the transition between the last glacial and present interglacial periods, was punctuated by a series of rapid (centennial and decadal) climate changes. Numerical climate models are useful for investigating mechanisms that underpin the events, especially now that some of the complex models can be run for multiple millennia. We have set up a Paleoclimate Modelling Intercomparison Project (PMIP) working group to coordinate efforts to run transient simulations of the last deglaciation, and to facilitate the dissemination of expertise between modellers and those engaged with reconstructing the climate of the last 21 thousand years. Here, we present the design of a coordinated Core simulation over the period 21-9 thousand years before present (ka) with time varying orbital forcing, greenhouse gases, ice sheets, and other geographical changes. A choice of two ice sheet reconstructions is given. Additional focussed simulations will also be coordinated on an ad-hoc basis by the working group, for example to investigate the effect of ice sheet and iceberg meltwater, and the uncertainty in other forcings. Some of these focussed simulations will concentrate on shorter durations around specific events to allow the more computationally expensive models to take part. Ivanovic, R. F., Gregoire, L. J., Kageyama, M., Roche, D. M., Valdes, P. J., Burke, A., Drummond, R., Peltier, W. R., and Tarasov, L.: Transient climate simulations of the deglaciation 21-9 thousand years before present; PMIP4 Core experiment design and boundary conditions, Geosci. Model Dev. Discuss., 8, 9045-9102, doi:10.5194/gmdd-8-9045-2015, 2015.

  6. Climatic Variability in the Davis Sea Sector (East Antarctica) Over the Past 250 Years Based on the 105 km Ice Core Geochemical Data

    Science.gov (United States)

    Vladimirova, Diana; Ekaykin, Alexey

    2014-05-01

    In this study we present a reconstruction of air temperature and snow accumulation rate in the Davis Sea sector of East Antarctica over the past 250 years, based on geochemical investigations of the ice core from the 105 km borehole (105 km inland from Mirny Station) drilled in 1988. The core was dated by counting annual layers in the stable water isotope (δD and δ18O) profiles and by using an absolute date marker (the Tambora volcano layer). Accumulation values were deduced from the thickness of the layers multiplied by the core density. The isotope content was converted into air temperature by comparison with instrumental meteorological data from Mirny Station. The reconstructed temperature series demonstrates a 0.5°C warming over the last 250 years. At the same time, the snow accumulation rate has been decreasing at least since the middle of the XIXth century. The climatic characteristics demonstrate cyclic variability with periods of 6, 9, 19, 32 and about 120 years. Interestingly, within the 19-year cycle the temperature and isotope content are negatively related, which could be explained by a zonal shift of the moisture source area. Based on the sodium concentrations and deuterium excess values in the ice core, we infer an increased sea ice extent in the XIXth century compared to the present day.

  7. Accumulation reconstruction and water isotope analysis for 1735–1997 of an ice core from the Ushkovsky volcano, Kamchatka, and their relationships to North Pacific climate records

    Directory of Open Access Journals (Sweden)

    T. Sato

    2013-04-01

    To investigate past climate change in the Northwest Pacific region, an ice core was retrieved in June 1998 from the Gorshkov crater glacier at the top of the Ushkovsky volcano, in central Kamchatka. Hydrogen isotope (δD) analysis and past accumulation reconstructions were conducted to a depth of 140.7 m, dated to 1735. Two accumulation reconstruction methods were applied, with the Salamatin and the Elmer/Ice ice flow models. Reconstructed accumulation rates and δD were significantly correlated with North Pacific surface temperature. This, and a significant correlation of δD with the North Pacific Gyre Oscillation (NPGO) index, implies that NPGO data are contained in this record. Wavelet analysis shows that the ice core records have significant multi-decadal power spectra up to the late 19th century. The multi-decadal periods of reconstructed accumulation rates change at around 1850 in the same way as do Northeast Pacific ice core and tree ring records. The loss of multi-decadal-scale power in the δD spectrum and the 6‰ increase in its average value occurred around 1880. Thus the core record confirms that the periodicity of precipitation for the entire North Pacific changed between the end of the Little Ice Age and the present due to changes in conditions in the North Pacific Ocean.

  8. Long-term Records of Pacific Salmon Abundance From Sediment Core Analysis: Relationships to Past Climatic Change, and Implications for the Future

    Science.gov (United States)

    Finney, B.

    2002-12-01

    The response of Pacific salmon to future climatic change is uncertain, but will have large impacts on the economy, culture and ecology of the North Pacific Rim. Relationships between sockeye salmon populations and climatic change can be determined by analyzing sediment cores from lakes where sockeye return to spawn. Sockeye salmon return to their natal lake system to spawn and subsequently die following 2-3 years of feeding in the North Pacific Ocean. Sockeye salmon abundance can be reconstructed from stable nitrogen isotope analysis of lake sediment cores, as returning sockeye transport significant quantities of N, relatively enriched in N-15, from the ocean to freshwater systems. Temporal changes in the input of salmon-derived N, and hence salmon abundance, can be quantified through downcore analysis of N isotopes. Reconstructions of sockeye salmon abundance from lakes in several regions of Alaska show similar temporal patterns, with variability occurring on decadal to millennial timescales. Over the past 2000 years, shifts in sockeye salmon abundance far exceed the historical decadal-scale variability. A decline occurred from about 100 BC - 800 AD, but salmon were consistently more abundant 1200 - 1900 AD. Declines since 1900 AD coincide with the period of extensive commercial fishing. Correspondence between these records and paleoclimatic data suggests that changes in salmon abundance are related to large-scale climatic changes over the North Pacific. For example, the increase in salmon abundance ca. 1200 AD corresponds to a period of glacial advance in southern Alaska, and a shift to drier conditions in western North America. Although the regionally coherent patterns in reconstructed salmon abundance are consistent with the hypothesis that climate is an important driver, the relationships do not always follow patterns observed in the 20th century. A main feature of recorded climate variability in this region is the alternation between multi-decade periods of

  9. ADVANCES AT A GLANCE IN PARALLEL COMPUTING

    Directory of Open Access Journals (Sweden)

    RAJKUMAR SHARMA

    2014-07-01

    In the history of the computational world, sequential uni-processor computers were exploited for years to solve scientific and business problems. To satisfy the demands of compute- and data-hungry applications, it was observed that better response times could be achieved only through parallelism. Large computational problems were partitioned and solved by using multiple CPUs in parallel. Computing performance was further improved by adopting multi-core architecture, which provides hardware parallelism through the use of multiple cores. Efficient resource utilization of a parallel computing environment by using software and hardware parallelism is a major research challenge. Present hardware technologies give algorithm developers the freedom to control and manage resources through software code, such as threads-to-cores mapping in recent multi-core processors. In this paper, a survey is presented from the beginning of parallel computing up to the use of present state-of-the-art multi-core processors.

  10. Ice Core Investigations

    Science.gov (United States)

    Krim, Jessica; Brody, Michael

    2008-01-01

    What can glaciers tell us about volcanoes and atmospheric conditions? How does this information relate to our understanding of climate change? Ice Core Investigations is an original and innovative activity that explores these types of questions. It brings together popular science issues such as research, climate change, ice core drilling, and air…

  12. Core biopsy needle versus standard aspiration needle for endoscopic ultrasound-guided sampling of solid pancreatic masses: a randomized parallel-group study.

    Science.gov (United States)

    Lee, Yun Nah; Moon, Jong Ho; Kim, Hee Kyung; Choi, Hyun Jong; Choi, Moon Han; Kim, Dong Choon; Lee, Tae Hoon; Cha, Sang-Woo; Cho, Young Deok; Park, Sang-Heum

    2014-12-01

    An endoscopic ultrasound (EUS)-guided fine needle biopsy (EUS-FNB) device using a core biopsy needle was developed to improve diagnostic accuracy by simultaneously obtaining cytological aspirates and histological core samples. We prospectively compared the diagnostic accuracy of EUS-FNB with standard EUS-guided fine needle aspiration (EUS-FNA) in patients with solid pancreatic masses. Between January 2012 and May 2013, consecutive patients with solid pancreatic masses were prospectively enrolled and randomized to undergo EUS-FNB using a core biopsy needle or EUS-FNA using a standard aspiration needle at a single tertiary center. The specimen was analyzed by onsite cytology, Papanicolaou-stain cytology, and histology. The main outcome measure was diagnostic accuracy for malignancy. The secondary outcome measures were: the median number of passes required to establish a diagnosis, the proportion of patients in whom the diagnosis was established with each pass, and complication rates. The overall accuracy of combining onsite cytology with Papanicolaou-stain cytology and histology was not significantly different for the FNB (n = 58) and FNA (n = 58) groups (98.3 % [95 %CI 94.9 % - 100 %] vs. 94.8 % [95 %CI 91.9 % - 100 %]; P = 0.671). Compared with FNA, FNB required a significantly lower median number of needle passes to establish a diagnosis (1.0 vs. 2.0; P < 0.001). On subgroup analysis of 111 patients with malignant lesions, the proportion of patients in whom malignancy was diagnosed on the first pass was significantly greater in the FNB group (72.7 % vs. 37.5 %; P < 0.001). The overall accuracy of FNB and FNA in patients with solid pancreatic masses was comparable; however, fewer passes were required to establish the diagnosis of malignancy using FNB. This study was registered on the UMIN Clinical Trial Registry (UMIN000014057).

  13. Causal Chains Arising from Climate Change in Mountain Regions: the Core Program of the Mountain Research Initiative

    Science.gov (United States)

    Greenwood, G. B.

    2014-12-01

    Mountains are a widespread terrestrial feature, covering from 12 to 24 percent of the world's terrestrial surface, depending on the definition. Topographic relief is central to the definition of mountains, to the benefits and costs accruing to society, and to the cascade of changes expected from climate change. Mountains capture and store water, particularly important in arid regions and, in all areas, for energy production. In temperate and boreal regions, mountains have a great range in population densities, from empty to urban, while tropical mountains are often densely settled and farmed. Mountain regions contain a wide range of habitats, important for biodiversity and for primary, secondary and tertiary sectors of the economy. Climate change interacts with this relief and consequent diversity. Elevation itself may accentuate warming (elevation-dependent warming) in some mountain regions. Even average warming starts complex chains of causality that reverberate through the diverse social ecological mountain systems, affecting both the highlands and adjacent lowlands. A single feature of climate change, such as a higher snow line, affects the climate through albedo, the water cycle through changes in the timing of release, water quality through the weathering of newly exposed material, geomorphology through enhanced erosion, plant communities through changes in climatic water balance, and animal and human communities through changes in habitat conditions and resource availability. Understanding these causal chains presents a particular interdisciplinary challenge to researchers, from assessing the existence and magnitude of elevation-dependent warming and monitoring the full suite of changes in social ecological systems under climate change, to understanding how social ecological systems respond through individual and institutional behavior, with repercussions on the long-term sustainability of these systems.

  14. Interannual to decadal scale North Pacific climate dynamics during the last millennium from Eclipse Icefield (St. Elias Mountains) ice core stable isotope records

    Science.gov (United States)

    Kreutz, K. J.; Wake, C.; Yalcin, K.; Vogan, N.; Introne, D.; Fisher, D.; Osterberg, E.

    2006-12-01

    A 345-meter ice core recovered from the St. Elias Mountains, Yukon Territory, Canada during 2002 has been continuously analyzed for stable hydrogen isotopes (δD) and is used to interpret changes in the North Pacific hydrologic cycle and climate variability over the past 1000 years. Given the high annual snow accumulation rate at the site (1.5 meters/year), the record is of high (subannual) resolution, annually dated back to 1450 AD and dated with ice-flow models prior to 1450 AD. Five-year averaged isotope data over the past millennium display a classic Little Ice Age (LIA)/Medieval Climate Anomaly (MCA) pattern: lower isotope ratios during the LIA, and higher isotope ratios during the MCA. Using the simple isotope/temperature relationship typically applied to ice core data, the Eclipse record may indicate lower regional temperatures and enhanced temperature variability during the period 1250 to 1700 AD. However, isotope data from an ice core recovered near the summit of Mt. Logan are clearly related to different hydrologic regimes. Regardless of the scaling used on the Eclipse isotope data, a distinct drop in isotope ratios occurs just prior to 1200 AD and may correspond with changes observed in tropical coral records. We suggest that fundamental changes in teleconnection and/or ENSO/PDO dynamics between the high and low latitudes in the Pacific may be responsible for the 13th century event. Based on the 1000-year record at 5-year resolution, as well as annual isotope data for the past 550 years, the 20th century is not anomalous with respect to previous time periods.

  15. Age of the Mt. Ortles ice cores, the Tyrolean Iceman and glaciation of the highest summit of South Tyrol since the Northern Hemisphere Climatic Optimum

    Science.gov (United States)

    Gabrielli, Paolo; Barbante, Carlo; Bertagna, Giuliano; Bertó, Michele; Binder, Daniel; Carton, Alberto; Carturan, Luca; Cazorzi, Federico; Cozzi, Giulio; Dalla Fontana, Giancarlo; Davis, Mary; De Blasi, Fabrizio; Dinale, Roberto; Dragà, Gianfranco; Dreossi, Giuliano; Festi, Daniela; Frezzotti, Massimo; Gabrieli, Jacopo; Galos, Stephan P.; Ginot, Patrick; Heidenwolf, Petra; Jenk, Theo M.; Kehrwald, Natalie; Kenny, Donald; Magand, Olivier; Mair, Volkmar; Mikhalenko, Vladimir; Lin, Ping Nan; Oeggl, Klaus; Piffer, Gianni; Rinaldi, Mirko; Schotterer, Ulrich; Schwikowski, Margit; Seppi, Roberto; Spolaor, Andrea; Stenni, Barbara; Tonidandel, David; Uglietti, Chiara; Zagorodnov, Victor; Zanoner, Thomas; Zennaro, Piero

    2016-11-01

    In 2011 four ice cores were extracted from the summit of Alto dell'Ortles (3859 m), the highest glacier of South Tyrol in the Italian Alps. This drilling site is located only 37 km southwest from where the Tyrolean Iceman, ~5.3 kyrs old, was discovered emerging from the ablating ice field of Tisenjoch (3210 m, near the Italian-Austrian border) in 1991. The excellent preservation of this mummy suggested that the Tyrolean Iceman was continuously embedded in prehistoric ice and that additional ancient ice was likely preserved elsewhere in South Tyrol. Dating of the ice cores from Alto dell'Ortles based on 210Pb, tritium, beta activity and 14C determinations, combined with an empirical model (COPRA), provides evidence for a chronologically ordered ice stratigraphy from the modern glacier surface down to the bottom ice layers with an age of ~7 kyrs, which confirms the hypothesis. Our results indicate that the drilling site has continuously been glaciated on frozen bedrock since ~7 kyrs BP. Absence of older ice on the highest glacier of South Tyrol is consistent with the removal of basal ice from bedrock during the Northern Hemisphere Climatic Optimum (6-9 kyrs BP), the warmest interval in the European Alps during the Holocene. Borehole inclinometric measurements of the current glacier flow combined with surface ground-penetrating radar (GPR) measurements indicate that, due to the sustained atmospheric warming since the 1980s, an acceleration of the flow of the Alto dell'Ortles glacier has just recently begun. Given the stratigraphic-chronological continuity of the Mt. Ortles cores over millennia, it can be argued that this behaviour has been unprecedented at this location since the Northern Hemisphere Climatic Optimum.

  16. Research on Parallel Real-Time Scheduling and Memory Allocation Algorithms on Multi-Core Platforms

    Institute of Scientific and Technical Information of China (English)

    周本海; 乔建忠; 林树宽

    2012-01-01

    With their low power consumption and high performance, multi-core processors have come to dominate the market. For parallel real-time scheduling on multi-core platforms, a scheduling algorithm combining local and global EDF is proposed, in which the execution budgets, deadline partitioning and task migration times are decided by the proposed CPU bandwidth reservation server. A memory allocation method is also presented that manages memory resources for parallel real-time tasks effectively. Experimental results show that the proposed scheduling algorithm has a higher scheduling success rate; in addition, under memory contention, the memory allocation algorithm preserves the real-time behavior of tasks and the stability of the system.
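
    A toy illustration of the EDF rule at the heart of such schedulers (always run the ready task with the earliest absolute deadline); the task set is invented, and the paper's bandwidth-reservation server and migration logic are not modeled:

        #include <algorithm>
        #include <vector>
        #include <cstdio>

        struct Task {
            const char* name;
            double deadline;        // absolute deadline
            double remaining;       // remaining execution time
        };

        int main() {
            std::vector<Task> ready = {
                {"A", 10.0, 2.0}, {"B", 6.0, 1.0}, {"C", 8.0, 3.0},
            };

            double now = 0.0;
            while (!ready.empty()) {
                // EDF rule: pick the earliest deadline among ready tasks.
                auto it = std::min_element(ready.begin(), ready.end(),
                    [](const Task& a, const Task& b) { return a.deadline < b.deadline; });
                now += it->remaining;                // run the task to completion (toy)
                std::printf("%s finishes at t=%.1f (deadline %.1f)%s\n", it->name, now,
                            it->deadline, now <= it->deadline ? "" : "  ** MISSED **");
                ready.erase(it);
            }
            return 0;
        }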

  17. Real-time Multitasking Macro Parallel Software Architecture Based on Single-core DSP

    Institute of Scientific and Technical Information of China (English)

    周敬东; 黄云朋; 周明刚; 李敏慧; 程钗

    2015-01-01

    Most existing single-core embedded systems use a serial task-execution mode: while a complex task is executing, the system cannot respond quickly and effectively to other tasks. A real-time multitasking, macroscopically parallel software architecture based on a single-core DSP is designed by splitting complex tasks to bound the execution time of any single task, reducing the use of blocking delay routines, and scheduling tasks by polling task flags. Tests on a real application system show that task response in a complex multitasking system is fast, that the architecture meets the system's real-time requirements, and that it keeps the system running stably while tasks execute in parallel at the macroscopic level.
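
    A minimal sketch of the flag-polling main loop described above, written in Python for readability (a real implementation would be C on the DSP, with the flags set by interrupt service routines; all names here are hypothetical):

        # A long task is split into short steps; between steps the loop
        # polls task flags and services any task whose flag is raised,
        # so no single task can monopolise the processor.
        def long_task_steps(n):
            for step in range(n):
                yield step                      # one bounded slice of work

        flags = {"uart_rx": False, "adc_done": True}   # raised by ISRs in practice

        def service(name):
            print("servicing", name)
            flags[name] = False

        for step in long_task_steps(5):
            print("long task, step", step)
            for name in list(flags):
                if flags[name]:                 # poll task flags between steps
                    service(name)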

  18. Climate Change in the North Pacific Region Over the Last Three Centuries as Expressed in an Ice Core From Mount Logan

    Science.gov (United States)

    Moore, K.; Holdsworth, G.; Alverson, K.

    2002-12-01

    The relatively short length of most instrumental climate datasets restricts the study of variability that exists in the climate system. This is particularly true regarding the atmosphere where high quality spatially dense data exists only since the late 1940s. With this data, the Pacific North America pattern (PNA) has been identified as one of the dominant modes of variability in the atmosphere. The PNA is related to an inter-decadal mode of climate variability known as the Pacific Decadal Oscillation (PDO). The PDO has been shown to influence marine productivity in the North Pacific as well as modulating the impact of the El Nino-Southern Oscillation in North America and Australia. Here we present an updated 301-year ice core record from Mount Logan in northwestern North America that shows a statistically significant and accelerating positive trend in snow accumulation from the middle of the 19th century that appears to be associated with secular changes in the PNA and PDO. A manifestation of this trend has been a warming over northwestern North America both at the surface and throughout the lower atmosphere.

  19. Plants assemble species specific bacterial communities from common core taxa in three arcto-alpine climate zones

    NARCIS (Netherlands)

    Kumar, Manoj; Brader, Guenter; Sessitsch, Angela; Maki, Anita; van Elsas, Jan D.; Nissinen, Riitta

    2017-01-01

    Evidence for the pivotal role of plant-associated bacteria in plant health and productivity has accumulated rapidly in recent years. However, key questions related to what drives plant bacteriomes remain unanswered, among which is the impact of climate zones on plant-associated microbiota. This is

  20. The application of multi-core parallel computing using Python language in cross-matching of massive catalogues

    Institute of Scientific and Technical Information of China (English)

    裴彤; 张彦霞; 彭南博; 赵永恒

    2011-01-01

    As astronomical data grow rapidly, cross-matching between huge catalogues containing millions or billions of celestial objects has become a research hotspot. In this paper, we present a parallel cross-match program written in Python that is able to make full use of multi-core processors. We explain why the Python programming language was chosen and how parallel computing is implemented with it. The HTM sky-splitting scheme is used to partition the catalogues. Experimental results show that our program is several times faster than previous routines, giving it a significant performance advantage, and lays a good foundation for further multi-band statistical research and data mining.
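
    The pattern the paper describes (partition the sky, then cross-match the partitions in parallel) can be sketched as below. A simple declination-zone split stands in for the HTM index used by the authors, and all names are illustrative; objects near zone boundaries, which a real matcher must handle, are ignored here.

        import math
        from multiprocessing import Pool

        def to_zones(catalogue, n_zones):
            # bin objects by declination; HTM trixels play this role in the paper
            zones = [[] for _ in range(n_zones)]
            for ra, dec in catalogue:
                idx = min(int((dec + 90.0) / 180.0 * n_zones), n_zones - 1)
                zones[idx].append((ra, dec))
            return zones

        def match_zone(args):
            zone_a, zone_b, radius = args       # brute force within one zone
            return [(p, q) for p in zone_a for q in zone_b
                    if math.hypot(p[0] - q[0], p[1] - q[1]) < radius]

        if __name__ == "__main__":
            cat_a = [(10.0, 5.0), (20.0, -30.0)]
            cat_b = [(10.0003, 5.0002), (200.0, 60.0)]
            pairs = list(zip(to_zones(cat_a, 8), to_zones(cat_b, 8)))
            with Pool() as pool:                # one worker per zone batch
                hits = pool.map(match_zone, [(a, b, 1e-3) for a, b in pairs])
            print([m for zone in hits for m in zone])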

  1. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  2. ‘Building Core Knowledge - Reconstructing Earth History’: Transforming Undergraduate Instruction by Bringing Ocean Drilling Science on Earth History and Global Climate Change into the Classroom (Invited)

    Science.gov (United States)

    St. John, K.; Leckie, R. M.; Jones, M. H.; Pound, K. S.; Pyle, E.; Krissek, L. A.

    2009-12-01

    This NSF-funded, Phase 1 CCLI project effectively integrates scientific ocean drilling data and research (DSDP-ODP-IODP-ANDRILL) with education. We have developed, and are currently testing, a suite of data-rich, inquiry-based classroom learning materials based on sediment core archives. These materials are suitable for use in introductory geoscience courses that serve general education students, early geoscience majors, and pre-service teachers. 'Science made accessible' is the essence of this goal. Our team consists of research and education specialists from institutions ranging from R1 research to public liberal arts to community college. We address relevant and timely 'Big Ideas' with foundational geoscience concepts and climate change case studies, as well as transferable skills valued in professional settings. The exercises are divided into separate but inter-related modules including: introduction to cores, seafloor sediments, microfossils and biostratigraphy, paleomagnetism and magnetostratigraphy, climate rhythms, oxygen-isotope changes in the Cenozoic, past Arctic and Antarctic climates, drill site selection, interpreting Arctic and Antarctic sediment cores, onset of Northern Hemisphere glaciation, onset of Antarctic glaciation, and the Paleocene-Eocene Thermal Maximum. Each module has several parts, and each is designed to be used in the classroom, laboratory, or assigned as homework. All exercises utilize authentic data. Students work with scientific uncertainty, practice quantitative and problem-solving skills, and expand their basic geologic and geographic knowledge. Students have the opportunity to work individually and in groups, evaluate real-world problems, and formulate hypotheses. Initial exercises in each module are useful to introduce a topic, gauge prior knowledge, and flag possible areas of student misconception. Comprehensive instructor guides provide essential background information, detailed answer keys, and alternative implementation

  3. A 60-year ice-core record of regional climate from Adélie Land, coastal Antarctica

    Science.gov (United States)

    Goursaud, Sentia; Masson-Delmotte, Valérie; Favier, Vincent; Preunkert, Susanne; Fily, Michel; Gallée, Hubert; Jourdain, Bruno; Legrand, Michel; Magand, Olivier; Minster, Bénédicte; Werner, Martin

    2017-02-01

    A 22.4 m-long shallow firn core was extracted during the 2006/2007 field season from coastal Adélie Land. Annual layer counting based on subannual analyses of δ18O and major chemical components was combined with 5 reference years associated with nuclear tests and non-retreat of summer sea ice to build the initial ice-core chronology (1946-2006), stressing uncertain counting for 8 years. We focus here on the resulting δ18O and accumulation records. With an average value of 21.8 ± 6.9 cm w.e. yr-1, local accumulation shows multi-decadal variations peaking in the 1980s, but no long-term trend. Similar results are obtained for δ18O, also characterised by a remarkably low and variable amplitude of the seasonal cycle. The ice-core records are compared with regional records of temperature, stake area accumulation measurements and variations in sea-ice extent, and with outputs from two models nudged to ERA (European Reanalysis) atmospheric reanalyses: the high-resolution atmospheric general circulation model (AGCM) ECHAM5-wiso (European Centre Hamburg model), which includes stable water isotopes, and the regional atmospheric model Modèle Atmosphérique Régional (MAR). A significant linear correlation is identified between decadal variations in δ18O and regional temperature. No significant relationship appears with regional sea-ice extent. A weak but significant correlation appears with Dumont d'Urville wind speed, increasing after 1979. The model-data comparison highlights the inadequacy of ECHAM5-wiso simulations prior to 1979, possibly due to the lack of data assimilation to constrain atmospheric reanalyses. Systematic biases are identified in the ECHAM5-wiso simulation, such as an overestimation of the mean accumulation rate and its interannual variability, a strong cold bias and an underestimation of the mean δ18O value and its interannual variability. As a result, relationships between simulated δ18O and temperature are weaker than observed. Such systematic

  4. Link of volcanic activity and climate change in Altai studied in the ice core from Belukha Mountain

    Directory of Open Access Journals (Sweden)

    N. S. Malygina

    2013-01-01

    In the present research we discuss the role of volcanic activity in the thermal regime of Altai. We analyse sulfate and temperature data reconstructed from a natural paleoarchive, an ice core from the saddle of Belukha Mountain; sulfate reconstructions from ice cores can serve as volcanic markers. Both the sulfate and the temperature reconstructions cover the last 750 years. As characteristics of volcanic activity we consider the Volcanic Explosivity Index (VEI), the Dust Veil Index (DVI) and the Ice-core Volcanic Index (IVI). The analysis was carried out using wavelet analysis together with wavelet cross-coherence and phase analysis. We conclude that the observed increases in the VEI, DVI and IVI indexes generally correspond to decreases in temperature and increases in sulfate concentration, which confirms the dependence of the Altai thermal regime on volcanic activity. In the period 1750-1850, however, the temperature changes lag the changes in volcanic activity; we suggest that this may be due to the superposition of solar and volcanic influences on the thermal regime of Altai.

  5. Parallel biocomputing

    Directory of Open Access Journals (Sweden)

    Witte John S

    2011-03-01

    Background: With the advent of high-throughput genomics and high-resolution imaging techniques, there is a growing necessity in biology and medicine for parallel computing, and with the low cost of computing, it is now cost-effective for even small labs or individuals to build their own personal computation cluster. Methods: Here we briefly describe how to use commodity hardware to build a low-cost, high-performance compute cluster, and provide an in-depth example and sample code for parallel execution of R jobs using MOSIX, a mature extension of the Linux kernel for parallel computing. A similar process can be used with other cluster platform software. Results: As a statistical genetics example, we use our cluster to run a simulated eQTL experiment. Because eQTL is computationally intensive and conceptually easy to parallelize, like many statistics/genetics applications, parallel execution with MOSIX gives a linear speedup in analysis time with little additional effort. Conclusions: We have used MOSIX to run a wide variety of software programs in parallel with good results. The limitations and benefits of using MOSIX are discussed and compared to other platforms.
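
    The eQTL example is embarrassingly parallel: each simulated dataset is analysed independently, so replicates can simply be farmed out to workers. The authors do this with R jobs under MOSIX; the same pattern, sketched here in Python with a hypothetical per-replicate analysis standing in for the eQTL scan:

        import random
        from multiprocessing import Pool

        def one_replicate(seed):
            # stand-in for analysing one simulated eQTL dataset
            rng = random.Random(seed)
            return max(rng.gauss(0.0, 1.0) for _ in range(10_000))

        if __name__ == "__main__":
            with Pool() as pool:                 # replicates run concurrently
                stats = pool.map(one_replicate, range(100))
            print("max statistic over replicates: %.2f" % max(stats))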

  6. Mass Spectrometry Data Collection in Parallel at Multiple Core Facilities Operating TripleTOF 5600 and Orbitrap Elite/Velos Pro/Q Exactive Mass Spectrometers

    Science.gov (United States)

    Jones, K.; Kim, K.; Patel, B.; Kelsen, S.; Braverman, A.; Swinton, D.; Gafken, P.; Jones, L.; Lane, W.; Neveu, J.; Leung, H.; Shaffer, S.; Leszyk, J.; Stanley, B.; Fox, T.; Stanley, A.; Yeung, Anthony

    2013-01-01

    Proteomic research can benefit from simultaneous access to multiple cutting-edge mass spectrometers. Eighteen core facilities responded to our investigators seeking service through the ABRF Discussion Forum. Five of the facilities selected completed four plasma proteomics experiments as routine fee-for-service work. Each biological experiment entailed an iTRAQ 4-plex proteome comparison of immunodepleted plasma provided as 30 labeled-peptide fractions. Identical samples were analyzed by two AB SCIEX TripleTOF 5600 and three Thermo Orbitrap (Elite/Velos Pro/Q Exactive) instruments. 480 LC-MS/MS runs delivered >250 GB of data over two months. We compare herein routine service analyses of three peptide fractions of different peptide abundance. Data files from each instrument were studied to develop optimal analysis parameters to compare with default parameters in Mascot Distiller 2.4, ProteinPilot 4.5 beta, AB Sciex MS Data Converter 1.3 beta, and Proteome Discoverer 1.3. Peak-picking for the TripleTOFs was best with ProteinPilot 4.5 beta, while Mascot Distiller and Proteome Discoverer were comparable for the Orbitraps. We compared protein identification and quantitation in the SwissProt 2012_07 database by Mascot Server 2.4.01 versus ProteinPilot. With all search methods, up to twofold more proteins were identified using the Q Exactive than the other instruments. The Q Exactive also excelled in the number of unique significant peptide ion sequences. However, software-dependent impacts on subsequent interpretation, due to peptide modifications, can be critical. These findings may have special implications for iTRAQ plasma proteomics. For the low-abundance peptide ions, the slope of the dynamic-range drop-off in the plasma proteome is uniquely sharp compared with cell lysates. Our study provides data for testable improvements in the operation of these mass spectrometers. More importantly, we have demonstrated a new, affordable, expedient workflow for investigators to perform proteomic experiments through the ABRF

  7. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    Science.gov (United States)

    McGalliard, James

    2008-01-01

    This viewgraph presentation details the science and systems environments that the NASA High-End Computing program serves, including a discussion of the workload involved in global climate modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the cubed-sphere configuration; results for these tests are also shown.

  8. The May 25-27 2005 Mount Logan Storm: Implications for the reconstruction of the climate signal contained in Gulf of Alaska Ice Cores

    Science.gov (United States)

    Moore, K.; Holdsworth, G.

    2006-12-01

    In late May 2005, 3 climbers were immobilized at 5400 m on Mount Logan, Canada's highest mountain, by the high impact weather associated with an extratropical cyclone over the Gulf of Alaska. Rescue operations were hindered by the high winds, cold temperatures, and heavy snowfall associated with the storm. Ultimately, the climbers were rescued after the weather cleared. Just prior to the storm, two automated weather stations had been deployed on the mountain as part of a research program aimed at interpreting the climate signal contained in summit ice cores. These data provide a unique and hitherto unobtainable record of the high elevation meteorological conditions associated with a severe extratropical cyclone. In this talk, data from these weather stations along with surface and sounding data from the nearby town of Yakutat Alaska, satellite imagery and the NCEP reanalysis are used to characterize the synoptic-scale conditions associated with this storm. Particular emphasis is placed on the water vapor transport associated with this storm. The authors show that during this event, subtropical moisture was transported northwards towards the Mount Logan region. The magnitude of this transport into the Gulf of Alaska was exceeded only 1% of the time during the months of May and June over the period 1948-2005. As a result, the magnitude of the precipitable water field in the Gulf of Alaska region attained values usually found in the tropics. An atmospheric moisture budget analysis indicates that most of the moisture advected into the Mount Logan region was pre-existing water vapor already in the subtropical atmosphere and was not water vapor evaporated from the surface during the evolution of the storm. Implications of this moisture source for our understanding of the water isotopic climate signal in the Mount Logan ice cores will be discussed.

  9. The orbital scale evolution of regional climate recorded in a long sediment core from Heqing, China

    Institute of Scientific and Technical Information of China (English)

    SHEN Ji; XIAO HaiFeng; WANG SuMin; AN ZhiSheng; QIANG XiaoKe; XIAO XiaYun

    2007-01-01

    Based on the analysis of carbonate content and loss on ignition for a long sediment core (737 m in length) drilled in Heqing, the orbital-scale evolution of the Southwest Monsoon is revealed by using overlapped spectral analysis and filter methods. It is shown that the obliquity cycle and precession cycle are the key factors for the Southwest Monsoon evolution and that the change of the global ice volume and the uplift of the Qinghai-Tibetan Plateau also impose great influences on it.

  10. Accelerating Climate Simulations Through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with Infiniband), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approx. 10% network overhead.

  11. Climate and hydrology of the last interglaciation (MIS 5) in Owens Basin, California: Isotopic and geochemical evidence from core OL-92

    Science.gov (United States)

    Li, H.-C.; Bischoff, J.L.; Ku, T.-L.; Zhu, Z.-Y.

    2004-01-01

    δ18O, δ13C, total organic carbon, total inorganic carbon, and acid-leachable Li, Mg and Sr concentrations on 443 samples from 32 to 83 m depth in Owens Lake core OL-92 were analyzed to study the climatic and hydrological conditions between 60 and 155 ka with a resolution of ~200 a. The multi-proxy data show that Owens Lake overflowed during wet/cold conditions of marine isotope stages (MIS) 4, 5b and 6, and was closed during the dry/warm conditions of MIS 5a, c and e. The lake partially overflowed during MIS 5d. Our age model places the MIS 4/5 boundary at ca 72.5 ka and the MIS 5/6 boundary (Termination II) at ca 140 ka, agreeing with the Devils Hole chronology. The diametrical precipitation intensities between the Great Basin (cold/wet) and eastern China (cold/dry) on Milankovitch time scales imply a climatic teleconnection across the Pacific. It also probably reflects the effect of high-latitude ice sheets on the southward shifts of both the summer monsoon frontal zone in eastern Asia and the polar jet stream in western North America during glacial periods. © 2003 Elsevier Ltd. All rights reserved.

  12. Decadal climatic variations recorded in Guliya ice core and comparison with the historical documentary data from East China during the last 2000 years

    Institute of Scientific and Technical Information of China (English)

    施雅风; 姚檀栋; 杨保

    1999-01-01

    The high-resolution records of δ18O and snow accumulation variations from the Guliya ice core provide valuable data for research on climatic variations at a decadal resolution during the past 2000 years in China. Based on the ice core data, five periods have been distinguished: the warm and wet period before 270 AD, the cold and dry period between 280 and 970 AD, the moderate and dry period between 970 and 1510 AD, the well-defined "Little Ice Age" with drastic cold-warm fluctuations between 1510 and 1930 AD, and the warming period since 1930 AD. According to the combination of temperature and precipitation, cold events (55 times) surpass warm ones (26 times), and dry events (55 times) surpass wet ones (45 times). Cold-wet events (14 times) are fewer than cold-dry ones (16 times), while warm-wet events (10 times) are more than warm-dry ones (4 times). If the difference of 2‰ in δ18O (corresponding to 3 K in temperature) between two or three adjacent decades is taken as the criterion, the abrupt chan

  13. Application of sediment core modelling to understanding climates of the past: An example from glacial-interglacial changes in Southern Ocean silica cycling

    Directory of Open Access Journals (Sweden)

    A. Ridgwell

    2006-12-01

    Paleoceanographic evidence from the Southern Ocean reveals an apparent stark meridional divide in biogeochemical dynamics associated with the glacial-interglacial cycles of the late Neogene. South of the present-day position of the Antarctic Polar Front, biogenic opal is generally much more abundant in sediments during interglacials than during glacials. To the north, an anti-phased relationship is observed, with maximum opal abundance instead occurring during glacials. This antagonistic response of sedimentary properties is an important model-validation target for testing hypotheses of glacial-interglacial change, particularly with respect to understanding the causes of the variability in atmospheric CO2. Here, I illustrate a time-dependent modelling approach to understanding past climatic change by means of the generation of synthetic sediment core records. I find that a close match between model-predicted and observed down-core changes in sedimentary opal content is achieved when changes in seasonal sea-ice extent are imposed, suggesting that the cryosphere is probably the primary driver of the striking features exhibited by the paleoceanographic record of this region.

  14. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities … for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler's ability to generate loop-parallel code. We use this compilation system to modify two sequential … benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should …

  15. Waves in the core and mechanical core-mantle interactions

    DEFF Research Database (Denmark)

    Jault, D.; Finlay, Chris

    2015-01-01

    the motions in the direction parallel to the Earth's rotation axis. This property accounts for the significance of the core-mantle topography. In addition, the stiffening of the fluid in the direction parallel to the rotation axis gives rise to a magnetic diffusion layer attached to the core-mantle boundary, which would

  16. Vegetation and Climate Changes in Patagonia (46°S) during the Last 20 kyr cal. BP from South East Pacific MD 07 3088 Core

    Science.gov (United States)

    Montade, V.; Combourieu Nebout, N.; Siani, G.; Michel, E.; Kissel, C.; Carel, M.; Mulsow, S.

    2010-12-01

    The Chilean Patagonia (41°S to 56°S), crossed by the Andes from north to south, represents a critical topographic constraint on atmospheric and oceanic systems, and is the only continental landmass intercepting the entire Southern Hemisphere Westerly wind (SHW) belt. Southern Chile is therefore a key area for studying paleoclimate changes and for understanding the synoptic-scale ocean-atmosphere circulation systems of the mid to high southern latitudes. However, several questions remain partly unsolved: Were there abrupt reversal events during the last glacial-interglacial transition (LGIT)? Was there a shift or an intensification of the SHW? When did the Holocene begin? What are the inter- or intra-hemispheric climatic links? To address these questions, we present a detailed pollen record from the deep-sea core MD 07 3088 (46°04'S, 76°05'W, 1536 m) near the Taitao Peninsula, taken during the "Pachiderme" cruise (MD 159) within the IMAGES (International MArine Global changES) program (Kissel et al., 2007). The age model (Siani et al., in press) is based upon stable oxygen isotopes of the planktonic foraminifer G. bulloides, coupled to ten AMS 14C measurements performed on planktonic foraminifera and four tephrochronological markers attributed to the Hudson volcano (Haberle and Lumley, 1998). The pollen record expresses vegetation changes, and thus climate variations, during the last 20 kyr cal. BP. Several vegetation phases are observed during the LGIT and the Holocene onset. Before 18 kyr, low diversity and pollen influx reflect reduced vegetation due to the extent of the Patagonian Ice Cap and cold temperatures. From 17.5 to 14.5 kyr, increases in diversity and pollen influx mark vegetation development linked to melting of the ice cap and rising temperatures. From 14.5 to 12 kyr, the development of Astelia illustrates the extension of the Magellanic moorland and humid conditions linked to the SHW. After 11.5 kyr, forest diversification marks the Holocene onset

  17. Long-term vegetation, climate and ocean dynamics inferred from a 73,500 years old marine sediment core (GeoB2107-3) off southern Brazil

    Science.gov (United States)

    Gu, Fang; Zonneveld, Karin A. F.; Chiessi, Cristiano M.; Arz, Helge W.; Pätzold, Jürgen; Behling, Hermann

    2017-09-01

    Long-term changes in vegetation and climate of southern Brazil, as well as ocean dynamics of the adjacent South Atlantic, were studied by analyses of pollen, spores and organic-walled dinoflagellate cysts (dinocysts) in marine sediment core GeoB2107-3, collected offshore southern Brazil and covering the last 73.5 cal kyr BP. The pollen record indicates that grasslands were much more frequent in the landscapes of southern Brazil during the last glacial period than during the late Holocene, reflecting relatively colder and/or less humid climatic conditions. Patches of forest occurred in the lowlands and probably also on the exposed continental shelf, which was mainly covered by salt marshes. Interestingly, drought-susceptible Araucaria trees were frequent in the highlands (with an abundance similar to that of the late Holocene) until 65 cal kyr BP, but were rare during the following glacial period. Atlantic rainforest was present in the northern lowlands of southern Brazil during the recorded last glacial period, but was strongly reduced from 38.5 until 13.0 cal kyr BP. The reduction was probably controlled by colder and/or less humid climatic conditions. Atlantic rainforest expanded to the south from the Lateglacial period onward, while Araucaria forests advanced in the highlands only during the late Holocene. Dinocyst data indicate that the Brazil Current (BC), with its warm, salty and nutrient-poor waters, influenced the study area throughout the investigated period. However, variations in the proportion of dinocyst taxa indicating an eutrophic environment reflect the input of nutrients transported mainly by the Brazilian Coastal Current (BCC) and partly discharged by the Rio Itajaí (the major river closest to the core site). This was strongly related to changes in sea level. A stronger influence of the BCC with nutrient-rich waters occurred during Marine Isotope Stage (MIS) 4 and in particular during the late MIS 3 and MIS 2 under low sea level. Evidence of Nothofagus pollen

  18. Climate and Low Latitude Water Cycle Variations Over the Last 300 ka Using Ice Core Records and iLOVECLIM Integration

    Science.gov (United States)

    Extier, Thomas; Landais, Amaelle; Roche, Didier; Bréant, Camille; Bazin, Lucie; Prie, Frederic; François, Louis

    2017-04-01

    The Quaternary is characterized by a succession of glacial and interglacial periods recorded in various climatic archives from high to low latitudes. Antarctic ice cores provide high-latitude climate reconstructions from water isotopes as well as global proxy records such as greenhouse gas concentrations. Among global tracers, the δ18O of O2, or δ18Oatm, is a rather complex tracer that reflects global variations in the low-latitude water cycle and vegetation changes. The last two terminations (TI, 20-11 ka, and TII, 136-128 ka) are already well documented and display a high-resolution δ18Oatm signal with large-amplitude changes, whereas the changes are smaller and poorly documented for TIII (around 245 ka). Here we present new δ18Oatm data over the last 300 ka from the Dome C ice core in order to compare the δ18Oatm dynamics over the last three terminations. The new high-resolution δ18Oatm data covering Termination III confirm the smaller amplitude of δ18Oatm changes compared to TI and TII. Moreover, the δ18Oatm changes of TIII appear to be divided into several steps. The δ18Oatm trapped in Dome C ice also shows strong similarity with the 65°N summer insolation and the precession signal on orbital timescales, as well as with the δ18Ocalcite measured in Asian speleothems, suggesting a link with monsoon dynamics. However, the quantitative interpretation of δ18Oatm is limited by our knowledge of past oxygen fluxes. We present here a first step toward a more quantitative interpretation of δ18Oatm variations through the use of the iLOVECLIM intermediate-complexity model with a new vegetation module, CARAIB (Warnant et al., 1994; Otto et al., 2002; Laurent et al., 2008; Dury et al., 2011). By considering more plant functional types (PFTs) and more accurate biosphere productivity variations than the previous module, CARAIB will be helpful to quantify the impact of biosphere changes on δ18Oatm.

  19. PARALLEL STABILIZATION

    Institute of Scientific and Technical Information of China (English)

    J.L.LIONS

    1999-01-01

    A new algorithm for the stabilization of (possibly turbulent, chaotic) distributed systems, governed by linear or non-linear systems of equations, is presented. The SPA (Stabilization Parallel Algorithm) is based on a systematic parallel decomposition of the problem (related to arbitrarily overlapping decompositions of domains) and on a penalty argument. SPA is presented here for the case of linear parabolic equations, with distributed or boundary control. It extends to practically all linear and non-linear evolution equations, as will be presented in several other publications.

  20. Analyzing Tropical Waves Using the Parallel Ensemble Empirical Mode Decomposition Method: Preliminary Results from Hurricane Sandy

    Science.gov (United States)

    Shen, Bo-Wen; Cheung, Samson; Li, Jui-Lin F.; Wu, Yu-ling

    2013-01-01

    In this study, we discuss the performance of the parallel ensemble empirical mode decomposition (EMD) in the analysis of tropical waves that are associated with tropical cyclone (TC) formation. To efficiently analyze high-resolution, global, multi-dimensional data sets, we first implement multilevel parallelism in the ensemble EMD (EEMD) and obtain a parallel speedup of 720 using 200 eight-core processors. We then apply the parallel EEMD (PEEMD) to extract the intrinsic mode functions (IMFs) from preselected data sets that represent (1) idealized tropical waves and (2) large-scale environmental flows associated with Hurricane Sandy (2012). Results indicate that the PEEMD is efficient and effective in revealing the major wave characteristics of the data, such as wavelengths and periods, by sifting out the dominant (wave) components. This approach has potential for hurricane climate studies that examine the statistical relationship between tropical waves and TC formation.
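
    The outer level of the parallelism described above is easy to picture: each EEMD ensemble member adds an independent white-noise realisation to the signal and is decomposed on its own, so members map directly onto worker processes. A hedged sketch, with emd() left as a placeholder for a real decomposition routine:

        import random
        from multiprocessing import Pool

        def emd(signal):
            return [signal]              # placeholder: a real EMD returns several IMFs

        def member(args):
            signal, seed, noise_amp = args
            rng = random.Random(seed)
            noisy = [x + noise_amp * rng.gauss(0.0, 1.0) for x in signal]
            return emd(noisy)

        def eemd(signal, n_members, noise_amp=0.2):
            with Pool() as pool:         # ensemble members are independent
                runs = pool.map(member, [(signal, s, noise_amp) for s in range(n_members)])
            # average corresponding IMFs over the ensemble
            return [[sum(run[i][j] for run in runs) / n_members
                     for j in range(len(signal))]
                    for i in range(len(runs[0]))]

        if __name__ == "__main__":
            sig = [random.gauss(0.0, 1.0) for _ in range(256)]
            print(len(eemd(sig, n_members=8)[0]))   # 256 samples in the first IMF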

  1. Late Quaternary vegetational and climate dynamics in northeastern Brazil, inferences from marine core GeoB 3104-1

    Science.gov (United States)

    Behling, Hermann; W. Arz, Helge; Pätzold, Jürgen; Wefer, Gerold

    2000-06-01

    Late Quaternary paleoenvironments of northeastern (NE) Brazil have been studied by pollen analysis of marine sediment. The studied core GeoB 3104-1 (3°40' S, 37°43' W, 767 m b.s.l.), from the upper continental slope off NE Brazil, is 517 cm long and >42,000 14C yr BP old. Chronological control was obtained by 12 radiocarbon (AMS) dates from individuals of the foraminiferal species Globigerinoides sacculifer. Modern pollen analogs were obtained from 15 river, lake and forest soil surface samples from NE Brazil. Marine pollen data indicate the predominance of semi-arid caatinga vegetation in NE Brazil during the recorded period between >42,000 and 8500 14C yr BP. The increased fluvial input of terrigenous material, with high concentrations of pollen and especially fern spores, into the marine deposits at about 40,000, 33,000 and 24,000 14C yr BP and between 15,500 and 11,800 14C yr BP indicates short-term periods of strong rainfall on the NE Brazilian continent. The expansion of mountain, floodplain and gallery forests characterizes the interval between 15,500 and 11,800 14C yr BP as the wettest recorded period in NE Brazil, which allowed floristic exchanges between the Atlantic rain forest and the Amazonian rain forest, and vice versa. The paleodata from core GeoB 3104-1 confirm the generally dry pre-Last Glacial Maximum (LGM) and LGM conditions and the change to wet Lateglacial environments in tropical South America. The annual movement of the intertropical convergence zone over NE Brazil, the strong influence of Antarctic cold fronts, and changes of the high-pressure cell over the southern Atlantic may explain the very wet Lateglacial period in NE Brazil. The documented NE Brazilian short-term signals correlate with the Dansgaard-Oeschger cycles and Heinrich events documented in the Northern Hemisphere and suggest strong teleconnections.

  2. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  3. Parallel Worlds

    DEFF Research Database (Denmark)

    Steno, Anne Mia

    2013-01-01

    as a symbol of something else, for instance as a way of handling uncertainty in difficult times, magical practice should also be seen as an emic concept. In this context, understanding the existence of two parallel universes, the profane and the magic, is important because the witches’ movements across...

  4. The paradox of cooling streams in a warming world: regional climate trends do not parallel variable local trends in stream temperature in the Pacific continental United States

    Science.gov (United States)

    Arismendi, Ivan; Johnson, Sherri; Dunham, Jason B.; Haggerty, Roy; Hockman-Wert, David

    2012-01-01

    Temperature is a fundamentally important driver of ecosystem processes in streams. Recent warming of terrestrial climates around the globe has motivated concern about consequent increases in stream temperature. More specifically, observed trends of increasing air temperature and declining stream flow are widely believed to result in corresponding increases in stream temperature. Here, we examined the evidence for this using long-term stream temperature data from minimally and highly human-impacted sites located across the Pacific continental United States. Based on hypothesized climate impacts, we predicted that we should find warming trends in the maximum, mean and minimum temperatures, as well as increasing variability over time. These predictions were not fully realized. Warming trends were most prevalent in a small subset of locations with longer time series beginning in the 1950s. More recent series of observations (1987-2009) exhibited fewer warming trends and more cooling trends in both minimally and highly human-influenced systems. Trends in variability were much less evident, regardless of the length of time series. Based on these findings, we conclude that our perspective of climate impacts on stream temperatures is clouded considerably by a lack of long-term data on minimally impacted streams, and biased spatio-temporal representation of existing time series. Overall our results highlight the need to develop more mechanistic, process-based understanding of linkages between climate change, other human impacts and stream temperature, and to deploy sensor networks that will provide better information on trends in stream temperatures in the future.

  5. Can the solar cycle and climate synchronize the snowshoe hare cycle in Canada? Evidence from tree rings and ice cores.

    Science.gov (United States)

    Sinclair, A R; Gosline, J M; Holdsworth, G; Krebs, C J; Boutin, S; Smith, J N; Boonstra, R; Dale, M

    1993-02-01

    Dark marks in the rings of white spruce less than 50 yr old in Yukon, Canada, are correlated with the number of stems browsed by snowshoe hares. The frequency of these marks is positively correlated with the density of hares in the same region. The frequency of marks in trees germinating between 1751 and 1983 is positively correlated with the hare fur records of the Hudson Bay Company. Both tree marks and hare numbers are correlated with sunspot numbers, and there is a 10-yr periodicity in the correlograms. Phase analysis shows that tree marks and sunspot numbers have periods of nearly constant phase difference during the years 1751-1787, 1838-1870, and 1948 to the present, and these periods coincide with those of high sunspot maxima. The nearly constant phase relations between the annual net snow accumulation on Mount Logan and (1) tree mark ratios, (2) hare fur records before about 1895, and (3) sunspot number during periods of high amplitude in the cycles suggest there is a solar cycle-climate-hare population and tree mark link. We suggest four ways of testing this hypothesis.

  6. From School of Rock to Building Core Knowledge: Teaching about Cenozoic climate change with data and case studies from the primary literature

    Science.gov (United States)

    Leckie, R. M.; St John, K. K.; Jones, M. H.; Pound, K. S.; Krissek, L. A.; Peart, L. W.

    2011-12-01

    The School of Rock (SoR) began in 2005 as a pilot geoscience professional development program for K-12 teachers and informal educators aboard the JOIDES Resolution (JR). Since then, the highly successful SoR program, sponsored by the Consortium for Ocean Leadership's Deep Earth Academy, has conducted on-shore professional development at the Integrated Ocean Drilling Program (IODP) core repository in College Station, TX, and on the JR. The success of the SoR program stems from the natural synergy that develops between research scientists and educators when their combined pedagogical skills and scientific knowledge are used to uncover a wealth of scientific ocean drilling discoveries and research findings. Educators are challenged with authentic inquiry based on sediment archives; these lessons from the past are then made transferable to the general public and to classrooms through the creation of age-appropriate, student-active learning materials (http://www.oceanleadership.org/education/deep-earth-academy/educators/classroom-activities/). This 'science made accessible' approach was the basis for a successful NSF Course, Curriculum, and Laboratory Improvement (CCLI) proposal to develop teaching materials for use at the college level. Our Building Core Knowledge project resulted in a series of 14 linked, yet independent, inquiry-based exercise modules around the theme of Reconstructing Earth's Climate History. All of the exercises build upon authentic data from peer-reviewed scientific publications. These multiple-part modules cover fundamental paleoclimate principles, tools and proxies, and Cenozoic case studies. It is important to teach students how we know what we know. For example, paleoclimate records must be systematically described, ages must be determined, and indirect evidence (i.e., proxies) of past climate must be analyzed. Much like the work of a detective, geoscientists and paleoclimatologists reconstruct what happened in the past, and when and how it

  7. Parallel clustering with CFinder

    CERN Document Server

    Pollner, Peter; Vicsek, Tamas; 10.1142/S0129626412400014

    2012-01-01

    The amount of available data about complex systems is increasing every year, as measurements of larger and larger systems are collected and recorded. A natural representation of such data is given by networks, whose size follows that of the original system. The current trend of multiple cores in computing infrastructures calls for parallel reimplementations of earlier methods. Here we present the grid version of CFinder, which can locate overlapping communities in directed, weighted or undirected networks based on the clique percolation method (CPM). We show that the computation of the communities can be distributed among several CPUs or computers. Although switching to the parallel version does not necessarily lead to a gain in computing time, it definitely makes the community structure of extremely large networks accessible.
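
    The distribution idea can be sketched with a toy k-clique search: each worker enumerates only the cliques anchored at the vertices it owns, so the expensive enumeration spreads across CPUs. This is a brute-force illustration of the work-splitting pattern only, not the CPM algorithm of CFinder itself; all names are invented.

        from itertools import combinations
        from multiprocessing import Pool

        EDGES = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)}
        NODES, K = range(5), 3

        def linked(u, v):
            return (u, v) in EDGES or (v, u) in EDGES

        def cliques_anchored(anchor):
            # enumerate only the k-cliques whose smallest vertex is `anchor`
            return [c for c in combinations([n for n in NODES if n >= anchor], K)
                    if c[0] == anchor and all(linked(u, v) for u, v in combinations(c, 2))]

        if __name__ == "__main__":
            with Pool() as pool:             # anchors shared across workers
                found = pool.map(cliques_anchored, NODES)
            print([c for sub in found for c in sub])   # [(0, 1, 2), (1, 2, 3)]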

  8. Remarks on parallel computations in MATLAB environment

    Science.gov (United States)

    Opalska, Katarzyna; Opalski, Leszek

    2013-10-01

    The paper attempts to summarize the author's investigation of the parallel computation capability of the MATLAB environment in solving large systems of ordinary differential equations (ODEs). Two MATLAB versions were tested with two parallelization techniques: one using multiple processor cores, the other CUDA-compatible graphics processing units (GPUs). A set of parameterized test problems was specially designed to expose the different capabilities and limitations of the tested variants of the parallel computation environment. The presented results clearly illustrate the superiority of the newer MATLAB version and the elapsed-time advantage of GPU-parallelized computations for high-dimensional problems over multiple processor cores (with a speed-up factor strongly dependent on the problem structure).

  9. Recent changes in north-west Greenland climate documented by NEEM shallow ice core data and simulations, and implications for past-temperature reconstructions

    Science.gov (United States)

    Masson-Delmotte, V.; Steen-Larsen, H. C.; Ortega, P.; Swingedouw, D.; Popp, T.; Vinther, B. M.; Oerter, H.; Sveinbjornsdottir, A. E.; Gudlaugsdottir, H.; Box, J. E.; Falourd, S.; Fettweis, X.; Gallée, H.; Garnier, E.; Gkinis, V.; Jouzel, J.; Landais, A.; Minster, B.; Paradis, N.; Orsi, A.; Risi, C.; Werner, M.; White, J. W. C.

    2015-08-01

    Combined records of snow accumulation rate, δ18O and deuterium excess were produced from several shallow ice cores and snow pits at NEEM (North Greenland Eemian Ice Drilling), covering the period from 1724 to 2007. They are used to investigate recent climate variability and characterise the isotope-temperature relationship. We find that NEEM records are only weakly affected by inter-annual changes in the North Atlantic Oscillation. Decadal δ18O and accumulation variability is related to North Atlantic sea surface temperature and is enhanced at the beginning of the 19th century. No long-term trend is observed in the accumulation record. By contrast, NEEM δ18O shows multidecadal increasing trends in the late 19th century and since the 1980s. The strongest annual positive δ18O values are recorded at NEEM in 1928 and 2010, while maximum accumulation occurs in 1933. The last decade is the most enriched in δ18O (warmest), while the 11-year periods with the strongest depletion (coldest) are depicted at NEEM in 1815-1825 and 1836-1846, which are also the driest 11-year periods. The NEEM accumulation and δ18O records are strongly correlated with outputs from atmospheric models, nudged to atmospheric reanalyses. Best performance is observed for ERA reanalyses. Gridded temperature reconstructions, instrumental data and model outputs at NEEM are used to estimate the multidecadal accumulation-temperature and δ18O-temperature relationships for the strong warming period in 1979-2007. The accumulation sensitivity to temperature is estimated at 11 ± 2 % °C-1 and the δ18O-temperature slope at 1.1 ± 0.2 ‰ °C-1, about twice as large as previously used to estimate last interglacial temperature change from the bottom part of the NEEM deep ice core.
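
    As a worked example of how such a calibration converts an isotopic anomaly into a temperature anomaly (a back-of-envelope use of the published slope, not the authors' full reconstruction):

        \Delta T \approx \frac{\Delta\delta^{18}\mathrm{O}}{1.1\ \text{\textperthousand}\,{}^{\circ}\mathrm{C}^{-1}},
        \qquad
        \Delta\delta^{18}\mathrm{O} = 2.2\ \text{\textperthousand}
        \;\Rightarrow\;
        \Delta T \approx 2\ {}^{\circ}\mathrm{C}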

  10. Recent changes in north-west Greenland climate documented by NEEM shallow ice core data and simulations, and implications for past temperature reconstructions

    Science.gov (United States)

    Masson-Delmotte, V.; Steen-Larsen, H. C.; Ortega, P.; Swingedouw, D.; Popp, T.; Vinther, B. M.; Oerter, H.; Sveinbjornsdottir, A. E.; Gudlaugsdottir, H.; Box, J. E.; Falourd, S.; Fettweis, X.; Gallée, H.; Garnier, E.; Jouzel, J.; Landais, A.; Minster, B.; Paradis, N.; Orsi, A.; Risi, C.; Werner, M.; White, J. W. C.

    2015-01-01

    Combined records of snow accumulation rate, δ18O and deuterium excess were produced from several shallow ice cores and snow pits at NEEM (north-west Greenland), covering the period from 1724 to 2007. They are used to investigate recent climate variability and characterize the isotope-temperature relationship. We find that NEEM records are only weakly affected by inter-annual changes in the North Atlantic Oscillation. Decadal δ18O and accumulation variability is related to North Atlantic SST, and enhanced at the beginning of the 19th century. No long-term trend is observed in the accumulation record. By contrast, NEEM δ18O shows multi-decadal increasing trends in the late 19th century and since the 1980s. The strongest annual positive δ18O anomaly values are recorded at NEEM in 1928 and 2010, while maximum accumulation occurs in 1933. The last decade is the most enriched in δ18O (warmest), while the 11-year periods with the strongest depletion (coldest) are depicted at NEEM in 1815-1825 and 1836-1846, which are also the driest 11-year periods. The NEEM accumulation and δ18O records are strongly correlated with outputs from atmospheric models, nudged to atmospheric reanalyses. Best performance is observed for ERA reanalyses. Gridded temperature reconstructions, instrumental data and model outputs at NEEM are used to estimate the multi-decadal accumulation-temperature and δ18O-temperature relationships for the strong warming period in 1979-2007. The accumulation sensitivity to temperature is estimated at 11 ± 2% °C-1 and the δ18O-temperature slope at 1.1 ± 0.2‰ °C-1, about twice as large as previously used to estimate last interglacial temperature change from the bottom part of the NEEM deep ice core.

  11. Performing a local reduction operation on a parallel computer

    Science.gov (United States)

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
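
    The copy-then-reduce interleaving reads more clearly in code. Below is a hedged Python sketch in which two threads stand in for the two reduction cores; buffer sizes, the chunk size and all names are illustrative, and on the real machine the copies and partial reductions proceed concurrently on separate cores.

        import threading

        CHUNK = 4
        buf_a = list(range(0, 16))           # reduction core 0's input buffer
        buf_b = list(range(100, 116))        # reduction core 1's input buffer
        interleaved = [0] * (len(buf_a) + len(buf_b))
        partial = [0, 0]

        def copy_interleaved(core, src):
            # core 0 fills the even chunks, core 1 the odd chunks
            for c in range(len(src) // CHUNK):
                dst = (2 * c + core) * CHUNK
                interleaved[dst:dst + CHUNK] = src[c * CHUNK:(c + 1) * CHUNK]

        def reduce_alternate(core):
            # each core locally reduces every other chunk of the shared buffer
            for c in range(core, len(interleaved) // CHUNK, 2):
                partial[core] += sum(interleaved[c * CHUNK:(c + 1) * CHUNK])

        copiers = [threading.Thread(target=copy_interleaved, args=(i, b))
                   for i, b in enumerate((buf_a, buf_b))]
        for t in copiers: t.start()
        for t in copiers: t.join()

        reducers = [threading.Thread(target=reduce_alternate, args=(i,)) for i in (0, 1)]
        for t in reducers: t.start()
        for t in reducers: t.join()

        print(sum(partial) == sum(buf_a) + sum(buf_b))   # True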

  12. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  13. Fast parallel event reconstruction

    CERN Document Server

    CERN. Geneva

    2010-01-01

    On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...
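
    The quoted speedups come from packing several tracks into one SIMD register and fitting them together. NumPy gives the same structure-of-arrays flavour in Python: one vectorised update step is applied to all tracks at once instead of looping per track. A toy one-dimensional Kalman update under that assumption, not the actual track-fit code:

        import numpy as np

        n_tracks = 10_000
        x = np.zeros(n_tracks)               # state estimate, one entry per track
        p = np.ones(n_tracks)                # state variance per track
        r = 0.25                             # measurement variance
        meas = np.random.normal(1.0, 0.5, n_tracks)

        k = p / (p + r)                      # Kalman gain for every track at once
        x += k * (meas - x)                  # single vectorised update step
        p *= (1.0 - k)

        print(round(x.mean(), 2), round(p.mean(), 2))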

  14. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
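
    The source database dropped this record's displayed equations ("Equation Presented."). In the standard butterfly-algorithm setting that the abstract describes, the evaluated transform is a kernel sum of the form below; this is a reconstruction of the generic form, not a quotation of the paper:

        f(x) \;=\; \sum_{y \in Y} K(x, y)\, g(y), \qquad x \in X,

    with O(N^d) points in each of X and Y, and K approximately low-rank on admissible pairs of subdomains.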

  15. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia's Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university's Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship. 'Parallel Lines' is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James' short stories have been published in various journals and anthologies.

  16. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph-processing algorithmic paradigms (level-synchronous, asynchronous and coarse-grained) and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.
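
    Of the three paradigms named, level-synchronous is the easiest to picture: vertices are processed one breadth-first frontier at a time, with a barrier between levels so that each frontier can be expanded in parallel. A sequential toy sketch of the paradigm (not stapl's API; graph and names are invented):

        GRAPH = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}

        def bfs_levels(source):
            level, frontier = {source: 0}, [source]
            while frontier:
                next_frontier = []           # expanded in parallel in stapl
                for u in frontier:
                    for v in GRAPH[u]:
                        if v not in level:
                            level[v] = level[u] + 1
                            next_frontier.append(v)
                frontier = next_frontier     # implicit barrier between levels
            return level

        print(bfs_levels(0))                 # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}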

  17. Parallel Programming with Matrix Distributed Processing

    CERN Document Server

    Di Pierro, Massimo

    2005-01-01

    Matrix Distributed Processing (MDP) is a C++ library for fast development of efficient parallel algorithms. It constitutes the core of FermiQCD. MDP enables programmers to focus on algorithms, while parallelization is dealt with automatically and transparently. Here we present a brief overview of MDP and examples of applications in computer science (cellular automata), engineering (PDE solvers) and physics (the Ising model).

  18. FAMOUS, faster: using parallel computing techniques to accelerate the FAMOUS/HadCM3 climate model with a focus on the radiative transfer algorithm

    Directory of Open Access Journals (Sweden)

    P. Hanappe

    2011-09-01

    We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations.

    The modified algorithm runs more than 50 times faster on the CELL's Synergistic Processing Element than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times as compared to the original code on the main CPU. Because the radiation code takes more than 60 % of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
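
    A rough sketch of the column-scheduling idea described above, assuming a hypothetical per-column kernel radiation_column: Python's thread pool stands in for the paper's task queue, and grouping four columns per work item mirrors the SIMD packing (the real code is C with platform-specific vector instructions).

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def radiation_column(column):
            # Stand-in for the per-column radiative transfer computation.
            return np.sqrt(np.abs(column)).sum()

        def radiation_step(columns, workers=8, pack=4):
            # Group four air columns per work item, echoing the SIMD packing,
            # and let a pool of worker threads drain the queue of packs.
            packs = [columns[i:i + pack] for i in range(0, len(columns), pack)]
            with ThreadPoolExecutor(max_workers=workers) as pool:
                done = pool.map(lambda p: [radiation_column(c) for c in p], packs)
            return [r for group in done for r in group]

        grid = [np.random.rand(60) for _ in range(1024)]   # 1024 columns, 60 levels
        fluxes = radiation_step(grid)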

  19. FAMOUS, faster: using parallel computing techniques to accelerate the FAMOUS/HadCM3 climate model with a focus on the radiative transfer algorithm

    Directory of Open Access Journals (Sweden)

    P. Hanappe

    2011-06-01

    Full Text Available We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. A task queue and a thread pool are used to distribute the computation to several processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations.

    The modified algorithm runs more than 50 times faster on the CELL's Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster and on graphics processors, using OpenCL, more than 2.5 times faster, as compared to the original code. Because the radiation code takes more than 60 % of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach.

  20. CO sub 2 -climate relationship as deduced from the Vostok ice core: a re-examination based on new measurements and on a re-evaluation of the air dating

    Energy Technology Data Exchange (ETDEWEB)

    Barnola, J.M.; Pimienta, P.; Raynaud, D. (Laboratoire de Glaciologie et Geophysique et de l' Environment, Cedex (FR)); Korotkevich, Y.S. (Arctic and Antarctic Research Inst., Leningrad (SU))

    1991-01-01

    Interpretation of the past CO2 variations recorded in polar ice during the large climatic transitions requires an accurate determination of the air-ice age difference. For the Vostok core, the age differences resulting from different assumptions on the firn densification process are compared and a new procedure is proposed to date the air trapped in this core. The penultimate deglaciation is studied on the basis of this new air dating and new CO2 measurements. These measurements and results obtained on other ice cores indicate that at the beginning of the deglaciations, the CO2 increase either is in phase with the Antarctic temperature or lags it by less than about 1000 years, whereas it clearly lags the temperature at the onset of the last glaciation. (orig.) (21 refs., 3 figs., 1 tab.).

  1. Parallel plasma fluid turbulence calculations

    Energy Technology Data Exchange (ETDEWEB)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-12-31

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.

  2. Parallelizing More Loops with Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    2012-01-01

    The performance of many parallel applications relies not on instruction-level parallelism but on loop-level parallelism. Unfortunately, automatic parallelization of loops is a fragile process; many different obstacles affect or prevent it in practice. To address this predicament we developed...... an interactive compilation feedback system that guides programmers in iteratively modifying their application source code. This helps leverage the compiler’s ability to generate loop-parallel code. We employ our system to modify two sequential benchmarks dealing with image processing and edge detection......, resulting in scalable parallelized code that runs up to 8.3 times faster on an eight-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should be combined...

  3. Study on Parallel Computing

    Institute of Scientific and Technical Information of China (English)

    Guo-Liang Chen; Guang-Zhong Sun; Yun-Quan Zhang; Ze-Yao Mo

    2006-01-01

    In this paper, we present a general survey on parallel computing. The main contents include the parallel computer system, which is the hardware platform of parallel computing; the parallel algorithm, which is its theoretical base; and parallel programming, which is its software support. After that, we also introduce some parallel applications and enabling technologies. We argue that parallel computing research should form an integrated methodology of "architecture - algorithm - programming - application". Only in this way can parallel computing research develop continuously and become more realistic.

  4. Ice core profiles of saturated fatty acids (C12:0-C30:0) and oleic acid (C18:1) from southern Alaska since 1734 AD: A link to climate change in the Northern Hemisphere

    Science.gov (United States)

    Pokhrel, Ambarish; Kawamura, Kimitaka; Seki, Osamu; Matoba, Sumio; Shiraiwa, Takayuki

    2015-01-01

    An ice core drilled at Aurora Peak in southeast Alaska was analyzed for a homologous series of straight-chain fatty acids (C12:0-C30:0) including the unsaturated fatty acid oleic acid, using gas chromatography (GC/FID) and GC/mass spectrometry (GC/MS). Molecular distributions of fatty acids are characterized by an even-carbon-number predominance with a peak at palmitic acid (C16:0, av. 20.3 ± SD. 29.8 ng/g-ice) followed by oleic acid (C18:1, 19.6 ± 38.6 ng/g-ice) and myristic acid (C14:0, 15.3 ± 21.9 ng/g-ice). The historical trends of short-chain fatty acids, together with correlation analysis with inorganic ions and organic tracers, suggest that short-chain fatty acids (except for C12:0 and C15:0) were mainly derived from sea surface microlayers through a bubble-bursting mechanism and transported over the glacier through the atmosphere. This atmospheric transport process is suggested to be linked with the Kamchatka ice core δD record from Northeast Asia and the Greenland Temperature Anomaly (GTA). In contrast, long-chain fatty acids (C20:0-C30:0) originate from terrestrial higher plants, soil organic matter and dusts, which are also linked with the GTA. Hence, this study suggests that Alaskan fatty acids are strongly influenced by the Pacific Decadal Oscillation/North Pacific Gyre Oscillation and/or extratropical North Pacific surface climate and the Arctic Oscillation. We also found that the decadal-scale variability of C18:1/C18:0 ratios in the Aurora Peak ice core correlates with the Kamchatka ice core δD, which reflects climate oscillations in the North Pacific. This study suggests that photochemical aging of organic aerosols could be controlled by climate periodicity.

  5. Parallel transposition of sparse data structures

    DEFF Research Database (Denmark)

    Wang, Hao; Liu, Weifeng; Hou, Kaixi

    2016-01-01

    Many applications in computational sciences and social sciences exploit sparsity and connectivity of acquired data. Even though many parallel sparse primitives such as sparse matrix-vector (SpMV) multiplication have been extensively studied, some other important building blocks, e.g., parallel...... transposition for sparse matrices and graphs, have not received the attention they deserve. In this paper, we first identify that the transposition operation can be a bottleneck of some fundamental sparse matrix and graph algorithms. Then, we revisit the performance and scalability of parallel transposition...... approaches on x86-based multi-core and many-core processors. Based on the insights obtained, we propose two new parallel transposition algorithms: ScanTrans and MergeTrans. The experimental results show that our ScanTrans method achieves an average of 2.8-fold (up to 6.2-fold) speedup over the parallel...
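
    The scan-then-scatter pattern behind algorithms of this kind can be sketched serially as a CSR-to-CSC conversion. This is a simplified illustration only, not the ScanTrans or MergeTrans implementation itself (which parallelizes both passes across cores).

        import numpy as np

        def csr_to_csc(n_cols, row_ptr, col_idx, vals):
            # Pass 1: histogram the column indices (parallel over nonzeros
            # in ScanTrans); an exclusive prefix scan of the counts then
            # yields the CSC column pointers.
            counts = np.bincount(col_idx, minlength=n_cols)
            col_ptr = np.zeros(n_cols + 1, dtype=np.int64)
            np.cumsum(counts, out=col_ptr[1:])
            # Pass 2: scatter each nonzero into its slot in the transposed
            # layout, advancing a per-column cursor.
            cursor = col_ptr[:-1].copy()
            row_idx = np.empty_like(col_idx)
            t_vals = np.empty_like(vals)
            for r in range(len(row_ptr) - 1):
                for k in range(row_ptr[r], row_ptr[r + 1]):
                    dst = cursor[col_idx[k]]
                    row_idx[dst] = r
                    t_vals[dst] = vals[k]
                    cursor[col_idx[k]] += 1
            return col_ptr, row_idx, t_vals

        # 2x3 example: [[1, 0, 2], [0, 3, 0]]
        col_ptr, row_idx, t_vals = csr_to_csc(
            3, np.array([0, 2, 3]), np.array([0, 2, 1]), np.array([1.0, 2.0, 3.0]))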

  6. Parallel Algorithms for the Exascale Era

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-19

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
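
    On the reproducibility of global sums: floating-point addition is not associative, so the same values summed in a different order (as happens when partial sums arrive from varying numbers of processes) can give different results. The sketch below illustrates the effect, with compensated (Kahan) summation as one common mitigation; it illustrates the problem class, not the specific technique developed at LANL.

        import random

        def kahan_sum(values):
            # Compensated summation: the correction term c recovers low-order
            # bits lost when adding a small value to a large running total,
            # making the result far less sensitive to summation order.
            total, c = 0.0, 0.0
            for v in values:
                y = v - c
                t = total + y
                c = (t - total) - y
                total = t
            return total

        data = [random.uniform(-1, 1) * 10 ** random.randint(0, 12)
                for _ in range(10000)]
        shuffled = random.sample(data, len(data))
        print(sum(data) == sum(shuffled))              # often False
        print(kahan_sum(data) == kahan_sum(shuffled))  # much more likely True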

  7. Dense SDM (12-core × 3-mode) transmission over 527 km with 33.2-ns mode-dispersion employing low-complexity parallel MIMO frequency-domain equalization

    DEFF Research Database (Denmark)

    Shibahara, K.; Mizuno, T.; Takara, H.;

    We demonstrate 12-core × 3-mode dense SDM transmission over 527 km graded-index multi-core few-mode fiber without mode-dispersion management. Employing low baud rate multi-carrier signal and frequency-domain equalization enables 33.2-ns DMD compensation with low computational complexity. © 2015 OSA...

  8. Dense SDM (12-Core × 3-Mode) Transmission Over 527 km With 33.2-ns Mode-Dispersion Employing Low-Complexity Parallel MIMO Frequency-Domain Equalization

    DEFF Research Database (Denmark)

    Shibahara, Kohki; Lee, Doohwan; Kobayashi, Takayuki;

    2016-01-01

    as intercore crosstalk. Mode dependent loss/gain effect was also mitigated by employing both a ring-core FM erbium-doped fiber amplifier and a free-space optics type gain equalizer. By combining these advanced techniques together, we finally demonstrate 12-core × 3-mode dense SDM transmission over 527-km GI MC...

  9. Identification of a core-periphery structure among participants of a business climate survey. An investigation based on the ZEW survey data

    Science.gov (United States)

    Stolzenburg, U.; Lux, T.

    2011-12-01

    Processes of social opinion formation might be dominated by a set of closely connected agents who constitute the cohesive 'core' of a network and have a higher influence on the overall outcome of the process than those agents in the more sparsely connected 'periphery'. Here we explore whether such a perspective could shed light on the dynamics of a well known economic sentiment index. To this end, we hypothesize that the respondents of the survey under investigation form a core-periphery network, and we identify those agents that define the core (in a discrete setting) or the proximity of each agent to the core (in a continuous setting). As it turns out, there is significant correlation between the so identified cores of different survey questions. Both the discrete and the continuous cores allow an almost perfect replication of the original series with a reduced data set of core members or weighted entries according to core proximity. Using a monthly time series on industrial production in Germany, we also compared experts' predictions with the real economic development. The core members identified in the discrete setting showed significantly better prediction capabilities than those agents assigned to the periphery of the network.

  10. FastQuery: A Parallel Indexing System for Scientific Data

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Prabhat,

    2011-07-29

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies such as FastBit can significantly improve accesses to these datasets by augmenting the user data with indexes and other secondary information. However, a challenge is that the indexes assume the relational data model but the scientific data generally follows the array data model. To match the two data models, we design a generic mapping mechanism and implement an efficient input and output interface for reading and writing the data and their corresponding indexes. To take advantage of the emerging many-core architectures, we also develop a parallel strategy for indexing using threading technology. This approach complements our on-going MPI-based parallelization efforts. We demonstrate the flexibility of our software by applying it to two of the most commonly used scientific data formats, HDF5 and NetCDF. We present two case studies using data from a particle accelerator model and a global climate model. We also conducted a detailed performance study using these scientific datasets. The results show that FastQuery speeds up the query time by a factor of 2.5x to 50x, and it reduces the indexing time by a factor of 16 on 24 cores.
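
    The core idea, indexing the data so queries touch bitmaps instead of raw values, can be illustrated with plain boolean arrays. FastBit's actual compressed bitmap indexes and FastQuery's HDF5/NetCDF interface are far more elaborate, so the names and structure below are illustrative only.

        import numpy as np

        def build_bitmap_index(data, bin_edges):
            # One bitmap per value bin: bit i is set if element i falls in
            # that bin. FastBit uses compressed bitmaps; plain boolean
            # arrays are used here purely to show the idea.
            bins = np.digitize(data, bin_edges)
            return {b: bins == b for b in np.unique(bins)}

        def query(index, bin_ids):
            # Answer "value in selected bins" by OR-ing bitmaps, avoiding
            # a scan of the raw data.
            hit = np.zeros(len(next(iter(index.values()))), dtype=bool)
            for b in bin_ids:
                if b in index:
                    hit |= index[b]
            return np.flatnonzero(hit)

        temps = np.random.default_rng(1).normal(15.0, 8.0, size=1_000_000)
        idx = build_bitmap_index(temps, bin_edges=np.arange(-20, 50, 5))
        hot = query(idx, bin_ids=[12, 13])   # bins covering roughly 35 to 45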

  11. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2009-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chips. This means that parallel processing is required in application areas that traditionally have not used...... parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...... available parallelism and further extraction of parallelism is limited by small data sets and a relatively high parallelization overhead. Load balance is difficult to obtain due to the limited parallelism and made worse by non-uniform memory latency. Three parallel OpenMP implementations of the application...

  12. New Methodologies for Parallel Architecture

    Institute of Scientific and Technical Information of China (English)

    Dong-Rui Fan; Xiao-Wei Li; Guo-Jie Li

    2011-01-01

    Moore's law continues to grant computer architects ever more transistors in the foreseeable future, and parallelism is the key to continued performance scaling in modern microprocessors. In this paper, the achievements in our research project on parallel architecture, which is supported by the National Basic Research 973 Program of China, are systematically presented. The innovative approaches and techniques to solve the significant problems in parallel architecture design are summarized, including architecture-level optimization, compiler- and language-supported technologies, reliability, power-performance efficient design, test and verification challenges, and platform building. Two prototype chips, a multi-heavy-core Godson-3 and a many-light-core Godson-T, are described to demonstrate the highly scalable and reconfigurable parallel architecture designs. We also present some of our achievements appearing in ISCA, MICRO, ISSCC, HPCA, PLDI, PACT, IJCAI, Hot Chips, DATE, IEEE Trans. VLSI, IEEE Micro, IEEE Trans. Computers, etc.

  13. Practical scalability assessment for parallel scientific numerical applications

    CERN Document Server

    Perlin, Natalie; Kirtman, Ben P

    2016-01-01

    The concept of scalability analysis of numerical parallel applications has been revisited, with specific goals defined for the performance estimation of research applications. A series of Community Climate Model System (CCSM) numerical simulations were used to test several MPI implementations, determine optimal use of the system resources, and assess their scalability. The scaling capacity and model throughput performance metrics for $N$ cores showed a log-linear behavior approximated by a power fit in the form of $C(N)=bN^a$, where $a$ and $b$ are two empirical constants. Different metrics yielded identical power coefficients ($a$) but different dimensionality coefficients ($b$). This model was consistent except at large $N$. The power-fit approach appears to be very useful for scalability estimates, especially when no serial testing is possible. Scalability analysis of an additional scientific application has been conducted in a similar way to validate the robustness of the power-fit approach...
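
    Since $C(N)=bN^a$ is linear in log-log space, both coefficients fall out of an ordinary least-squares line fit. A small sketch with made-up throughput numbers (the study's actual CCSM timings are not reproduced here):

        import numpy as np

        # Illustrative throughput measurements (e.g., simulated years per
        # day) for runs on N cores.
        N = np.array([16, 32, 64, 128, 256, 512])
        C = np.array([0.8, 1.5, 2.7, 4.6, 7.2, 10.1])

        # log C = log b + a * log N, so a least-squares line in log-log
        # space yields the power-fit coefficients directly.
        a, logb = np.polyfit(np.log(N), np.log(C), 1)
        b = np.exp(logb)
        print(f"C(N) ~= {b:.3f} * N^{a:.3f}")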

  14. ENSO and interdecadal climate variability over the last century documented by geochemical records of two coral cores from the South West Pacific

    Directory of Open Access Journals (Sweden)

    T. Ourbak

    2006-01-01

    Full Text Available The south west Pacific is affected by climatic phenomena such as ENSO (El Niño Southern Oscillation) or the PDO (Pacific Decadal Oscillation). Near-monthly resolution calibrations of Sr/Ca, U/Ca and δ18Oc were made on corals taken from New Caledonia and Wallis Island. These geochemical variations could be linked to SST (sea surface temperature) and SSS (sea surface salinity) variations over the last two decades, themselves dependent on ENSO occurrences. On the other hand, near-half-yearly resolution over the last century smoothes seasonal and interannual climate signals, but emphasizes low frequency climate variability.

  15. Globally synchronous climate change 2800 years ago: Proxy data from peat in South America

    Science.gov (United States)

    Chambers, Frank M.; Mauquoy, Dmitri; Brain, Sally A.; Blaauw, Maarten; Daniell, John R. G.

    2007-01-01

    Initial findings from high-latitude ice-cores implied a relatively unvarying Holocene climate, in contrast to the major climate swings in the preceding late-Pleistocene. However, several climate archives from low latitudes imply a less than equable Holocene climate, as do recent studies on peat bogs in mainland north-west Europe, which indicate an abrupt climate cooling 2800 years ago, with parallels claimed in a range of climate archives elsewhere. A hypothesis that this claimed climate shift was global, and caused by reduced solar activity, has recently been disputed. Until now, no directly comparable data were available from the southern hemisphere to help resolve the dispute. Building on investigations of the vegetation history of an extensive mire in the Valle de Andorra, Tierra del Fuego, we took a further peat core from the bog to generate a high-resolution climate history through the use of determination of peat humification and quantitative leaf-count plant macrofossil analysis. Here, we present the new proxy-climate data from the bog in South America. The data are directly comparable with those in Europe, as they were produced using identical laboratory methods. They show that there was a major climate perturbation at the same time as in northwest European bogs. Its timing, nature and apparent global synchronicity lend support to the notion of solar forcing of past climate change, amplified by oceanic circulation. This finding of a similar response simultaneously in both hemispheres may help validate and improve global climate models. That reduced solar activity might cause a global climatic change suggests that attention be paid also to consideration of any global climate response to increases in solar activity. This has implications for interpreting the relative contribution of climate drivers of recent 'global warming'.

  16. Combined Scheduling and Mapping for Scalable Computing with Parallel Tasks

    Directory of Open Access Journals (Sweden)

    Jörg Dümmler

    2012-01-01

    Full Text Available Recent and future parallel clusters and supercomputers use symmetric multiprocessors (SMPs) and multi-core processors as basic nodes, providing a huge amount of parallel resources. These systems often have hierarchically structured interconnection networks combining computing resources at different levels, starting with the interconnect within multi-core processors up to the interconnection network combining nodes of the cluster or supercomputer. The challenge for the programmer is that these computing resources should be utilized efficiently by exploiting the available degree of parallelism of the application program and by structuring the application in a way which is sensitive to the heterogeneous interconnect. In this article, we pursue a parallel programming method using parallel tasks to structure parallel implementations. A parallel task can be executed by multiple processors or cores and, for each activation of a parallel task, the actual number of executing cores can be adapted to the specific execution situation. In particular, we propose a new combined scheduling and mapping technique for parallel tasks with dependencies that takes the hierarchical structure of modern multi-core clusters into account. An experimental evaluation shows that the presented programming approach can lead to a significantly higher performance compared to standard data parallel implementations.
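
    As a toy illustration of scheduling such moldable parallel tasks, the sketch below greedily hands cores to whichever independent task benefits most, using an Amdahl-style runtime model. The real technique additionally handles task dependencies and the hierarchical interconnect, so this is only a stand-in with invented task names and parameters.

        def runtime(work, serial_frac, cores):
            # Amdahl-style model of a moldable parallel task.
            return work * (serial_frac + (1.0 - serial_frac) / cores)

        def allocate(tasks, total_cores):
            # tasks: (name, work, serial_frac), assumed independent here.
            spec = {name: (w, s) for name, w, s in tasks}
            alloc = {name: 1 for name in spec}
            for _ in range(total_cores - len(tasks)):
                # Give the next core to the task whose finish time it
                # improves the most (all tasks start together).
                def gain(name):
                    w, s = spec[name]
                    c = alloc[name]
                    return runtime(w, s, c) - runtime(w, s, c + 1)
                best = max(alloc, key=gain)
                alloc[best] += 1
            makespan = max(runtime(*spec[n], alloc[n]) for n in alloc)
            return alloc, makespan

        tasks = [("fft", 100.0, 0.05), ("solver", 300.0, 0.10), ("io", 50.0, 0.40)]
        print(allocate(tasks, total_cores=16))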

  17. Friis Hills Drilling Project - Coring an Early to mid-Miocene terrestrial sequence in the Transantarctic Mountains to examine climate gradients and ice sheet variability along an inland-to-offshore transect

    Science.gov (United States)

    Lewis, A. R.; Levy, R. H.; Naish, T.; Gorman, A. R.; Golledge, N.; Dickinson, W. W.; Kraus, C.; Florindo, F.; Ashworth, A. C.; Pyne, A.; Kingan, T.

    2015-12-01

    The Early to mid-Miocene is a compelling interval to study Antarctic ice sheet (AIS) sensitivity. Circulation patterns in the southern hemisphere were broadly similar to present and reconstructed atmospheric CO2 concentrations were analogous to those projected for the next several decades. Geologic records from locations proximal to the AIS are required to examine ice sheet response to climate variability during this time. Coastal and offshore drill core records recovered by ANDRILL and IODP provide information regarding ice sheet variability along and beyond the coastal margin but they cannot constrain the extent of inland retreat. Additional environmental data from the continental interior is required to constrain the magnitude of ice sheet variability and inform numerical ice sheet models. The only well-dated terrestrial deposits that register early to mid-Miocene interior ice extent and climate are in the Friis Hills, 80 km inland. The deposits record multiple glacial-interglacial cycles and fossiliferous non-glacial beds show that interglacial climate was warm enough for a diverse biota. Drifts are preserved in a shallow valley with the oldest beds exposed along the edges where they terminate at sharp erosional margins. These margins reveal drifts in short stratigraphic sections but none is more than 13 m thick. A 34 m-thick composite stratigraphic sequence has been produced from exposed drift sequences but correlating beds in scattered exposures is problematic. Moreover, much of the sequence is buried and inaccessible in the basin center. New seismic data collected during 2014 reveal a sequence of sediments at least 50 m thick. This stratigraphic package likely preserves a detailed and more complete sedimentary sequence for the Friis Hills that can be used to refine and augment the outcrop-based composite stratigraphy. We aim to drill through this sequence using a helicopter-transportable diamond coring system. These new cores will allow us to obtain

  18. Late Maastrichtian-Early Paleocene sea level and climate changes in the Antioch Church Core (Alabama, Gulf of Mexico margin, USA): A multi-proxy approach

    OpenAIRE

    Schulte, Peter; Speijer, Robert

    2009-01-01

    The Antioch Church core from central Alabama, spanning the Cretaceous-Paleogene (K-P) boundary, was investigated by a multi-proxy approach to study paleoenvironmental and sea level changes within the well-constrained sequence stratigraphic setting of the Gulf of Mexico margin. The Antioch Church core comprises the Maastrichtian calcareous nannoplankton Zone CC25 and the Danian Zones NP1 to NP4, corresponding to the Maastrichtian planktonic foraminifera Zone CF3 and the Danian Zones P1a to P2....

  19. Adapting algorithms to massively parallel hardware

    CERN Document Server

    Sioulas, Panagiotis

    2016-01-01

    In recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming, in which applications are programmed to exploit the power provided by multi-cores. Usually there is a gain in terms of time-to-solution and memory footprint. Specifically, this trend has sparked an interest in massively parallel systems that can provide a large number of processors, and possibly computing nodes, as in GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track-seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.

  20. Uncertainty and extreme events in future climate and hydrologic projections for the Pacific Northwest: providing a basis for vulnerability and core/corridor assessments

    Science.gov (United States)

    Littell, Jeremy S.; Mauger, Guillaume S.; Salathe, Eric P.; Hamlet, Alan F.; Lee, Se-Yeun; Stumbaugh, Matt R.; Elsner, Marketa; Norheim, Robert; Lutz, Eric R.; Mantua, Nathan J.

    2014-01-01

    The purpose of this project was to (1) provide an internally-consistent set of downscaled projections across the Western U.S., (2) include information about projection uncertainty, and (3) assess projected changes of hydrologic extremes. These objectives were designed to address decision support needs for climate adaptation and resource management actions. Specifically, understanding of uncertainty in climate projections – in particular for extreme events – is currently a key scientific and management barrier to adaptation planning and vulnerability assessment. The new dataset fills in the Northwest domain to cover a key gap in the previous dataset, adds additional projections (both from other global climate models and a comparison with dynamical downscaling) and includes an assessment of changes to flow and soil moisture extremes. This new information can be used to assess variations in impacts across the landscape, uncertainty in projections, and how these differ as a function of region, variable, and time period. In this project, existing University of Washington Climate Impacts Group (UW CIG) products were extended to develop a comprehensive data archive that accounts (in a rigorous and physically based way) for climate model uncertainty in future climate and hydrologic scenarios. These products can be used to determine likely impacts on vegetation and aquatic habitat in the Pacific Northwest (PNW) region, including WA, OR, ID, northwest MT to the continental divide, northern CA, NV, UT, and the Columbia Basin portion of western WY. New data series and summaries produced for this project include: 1) extreme statistics for surface hydrology (e.g. frequency of soil moisture and summer water deficit) and streamflow (e.g. the 100-year flood, extreme 7-day low flows with a 10-year recurrence interval); 2) snowpack vulnerability as indicated by the ratio of April 1 snow water to cool-season precipitation; and, 3) uncertainty analyses for multiple climate

  1. Reassessment of the Upper Fremont Glacier ice-core chronologies by synchronizing of ice-core-water isotopes to a nearby tree-ring chronology

    Science.gov (United States)

    Chellman, Nathan J.; McConnell, Joseph R.; Arienzo, Monica; Pederson, Gregory T.; Aarons, Sarah; Csank, Adam

    2017-01-01

    The Upper Fremont Glacier (UFG), Wyoming, is one of the few continental glaciers in the contiguous United States known to preserve environmental and climate records spanning recent centuries. A pair of ice cores taken from UFG have been studied extensively to document changes in climate and industrial pollution (most notably, mid-19th century increases in mercury pollution). Fundamental to these studies is the chronology used to map ice-core depth to age. Here, we present a revised chronology for the UFG ice cores based on new measurements and using a novel dating approach of synchronizing continuous water isotope measurements to a nearby tree-ring chronology. While consistent with the few unambiguous age controls underpinning the previous UFG chronologies, the new interpretation suggests a very different time scale for the UFG cores with changes of up to 80 years. Mercury increases previously associated with the mid-19th century Gold Rush now coincide with early-20th century industrial emissions, aligning the UFG record with other North American mercury records from ice and lake sediment cores. Additionally, new UFG records of industrial pollutants parallel changes documented in ice cores from southern Greenland, further validating the new UFG chronologies while documenting the extent of late 19th and early 20th century pollution in remote North America.

  2. Proteomics Core

    Data.gov (United States)

    Federal Laboratory Consortium — Proteomics Core is the central resource for mass spectrometry based proteomics within the NHLBI. The Core staff help collaborators design proteomics experiments in a...

  3. Proteomics Core

    Data.gov (United States)

    Federal Laboratory Consortium — Proteomics Core is the central resource for mass spectrometry based proteomics within the NHLBI. The Core staff help collaborators design proteomics experiments in...

  4. Past Warmer Climate Periods at the Antarctic Margin Detected From Proxies and Measurements of Biogenic Opal in the AND-1B Core: The XRF Spectral Silver (Ag) Peak Used as a new Tool for Biogenic Opal Quantification.

    Science.gov (United States)

    Kuhn, G.; Helling, D.; von Eynatten, H.; Niessen, F.; Magens, D.

    2008-12-01

    Geochemical analyses revealed that these low Ag concentrations and their variability can be measured reliably and correlate well with biogenic opal leaching data (r=0.88, n=481). The biogenic opal concentrations in combination with other high-resolution data will be used as a cyclostratigraphic approach to understand paleoenvironmental and climate changes. Periods with much higher accumulation of biogenic opal than today were detected in the core, indicating a retreat and perhaps a total decay of the Ross Ice Shelf.

  5. Change in ice rheology during climate variations – implications for ice flow modelling and dating of the EPICA Dome C core

    Directory of Open Access Journals (Sweden)

    G. Durand

    2007-01-01

    Full Text Available The study of the distribution of crystallographic orientations (i.e., the fabric) along ice cores provides information on past and current ice flow in ice sheets. Besides the usually observed formation of a vertical single-maximum fabric, the EPICA Dome C ice core (EDC) shows an abrupt and unexpected strengthening of its fabric during termination II around 1750 m depth. Such strengthening has already been observed for sites located on an ice-sheet flank. This suggests that horizontal shear could occur along the EDC core. Moreover, the change in the fabric leads to a modification of the effective viscosity between neighbouring ice layers. Through the use of an anisotropic ice flow model, we quantify the change in effective viscosity and investigate its implication for ice flow and dating.

  6. Climatic significance of δ18O records from an 80.36 m ice core in the East Rongbuk Glacier, Mount Qomolangma (Everest)

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Dongqi; QIN; Dahe; HOU; Shugui; KANG; Shichang; REN

    2005-01-01

    The δ18O variations in an 80.36 m ice core retrieved in the accumulation zone of the East Rongbuk Glacier, Mount Qomolangma (Everest), are not consistent with changes of air temperature from either the southern or northern slopes of the Himalayas, nor with the temperature anomalies over the Northern Hemisphere. The negative relationship between the δ18O and the net accumulation records of the ice core suggests the "amount effect" of summer precipitation on the δ18O values in the region. Therefore, the δ18O records of the East Rongbuk ice core should be a proxy of Indian Summer Monsoon intensity, showing lower δ18O values during strong monsoon phases and higher values during weak phases.

  7. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc. (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  8. Extending the High-Resolution Global Climate Record in Santa Barbara Basin: Developing a More Continuous Composite Section from Overlapping Cores

    Science.gov (United States)

    Behl, R. J.; Kennett, J. P.; Hill, T. M.; Pak, D.; Schimmelmann, A.; Cannariato, K. G.; Nicholson, C.; Sorlien, C. C.; Hopkins, S. E.; Team, S.

    2005-12-01

    More than thirty ~2 to 5m-long piston cores were recovered from an eroded, breached anticline on the Mid-Channel Trend of the Santa Barbara Basin (SBB). Precision placement of cores enabled us to build several composite stratigraphic sections of overlapping cores. This was accomplished by continuous shipboard evaluation and feedback between pre-existing and concurrently acquired high-resolution seismic data and immediate sedimentologic core analysis to determine subsequent core locations. Overlap was confirmed by correlated stratigraphic patterns of alternating laminated vs. massive intervals, gray flood layers, spectrophotometric and MST density/porosity data. These cores were acquired to provide a semi-continuous, composite paleoceanographic record of the Quaternary SBB and the California Margin that extends beyond the fertile ODP Site 893 core, to possibly as old as 450 to 600 ka, an age previously unreachable by conventional methods. Most cores were mantled by glauconitic sand or a thin carbonate hardground encrusted with sessile organisms, including solitary corals. Underlying the condensed Holocene sand or hardground deposits are alternating layers of Pleistocene laminated and massive/bioturbated sediment with minor sand and sandy clay layers. The style, continuity, and variability of laminated fabric and the nature of bedding contacts are similar to that observed at ODP Site 893 where glacial episodes were associated with oxygenated, bioturbated sediment and interglacial and interstadial sediment were associated with dysoxic, laminated sediment. Laminated sediment comprises 38% of the hemipelagic deposits which is nearly identical with the ratio of laminated to massive sediment over the past 160 ky at Site 893. By extrapolation, despite accumulating in a mobile, deforming, active margin basin, the earlier Pleistocene deposits seem to record similar behavior to the last 160 ky recorded at ODP Site 893. In some intervals, gray layers are thicker and more

  9. Should the moral core of climate issues be emphasized or downplayed in public discourse? Three ways to successfully manage the double-edged sword of moral communication

    NARCIS (Netherlands)

    Täuber, Susanne; van Zomeren, Martijn; Kutlaca, Maja

    The main objective of this paper is to identify a serious problem for communicators regarding the framing of climate issues in public discourse, namely that moralizing such an issue can motivate individuals while at the same time defensively leading them to avoid solving the problem. We review recent

  10. Should the moral core of climate issues be emphasized or downplayed in public discourse? Three ways to successfully manage the double-edged sword of moral communication

    NARCIS (Netherlands)

    Täuber, Susanne; van Zomeren, Martijn; Kutlaca, Maja

    2015-01-01

    The main objective of this paper is to identify a serious problem for communicators regarding the framing of climate issues in public discourse, namely that moralizing such an issue can motivate individuals while at the same time defensively leading them to avoid solving the problem. We review recent

  11. Parallel Frequent Pattern Discovery: Challenges and Methodology

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Parallel frequent pattern discovery algorithms exploit parallel and distributed computing resources to relieve the sequential bottlenecks of current frequent pattern mining (FPM) algorithms. Thus, parallel FPM algorithms achieve better scalability and performance, so they are attracting much attention in the data mining research community. This paper presents a comprehensive survey of the state-of-the-art parallel and distributed frequent pattern mining algorithms with more emphasis on pattern discovery from complex data (e.g., sequences and graphs) on various platforms. A review of typical parallel FPM algorithms uncovers the major challenges, methodologies, and research problems in the field of parallel frequent pattern discovery, such as workload balancing, finding good data layouts, and data decomposition. This survey also indicates a dramatic shift of the research interest in the field from the simple parallel frequent itemset mining on traditional parallel and distributed platforms to parallel pattern mining of more complex data on emerging architectures, such as multi-core systems and the increasingly mature grid infrastructure.

  12. A new ice-core record from Lomonosovfonna, Svalbard : viewing the 1920-97 data in relation to present climate and environmental conditions

    NARCIS (Netherlands)

    Isaksson, E; Pohjola; Jauhiainen, T; Moore, J; Pinglot, JM; Vaikmae, R; van de Wal, RSW; Ivask, J; Karlof, L; Martma, T; Meijer, HAJ; Mulvaney, R; Thomassen, M; van den Broeke, M

    2001-01-01

    In 1997 a 121 m ice core was retrieved from Lomonosovfonna, the highest ice field in Spitsbergen, Svalbard (1250 m a.s.l.). Radar measurements indicate an ice depth of 126.5 m, and borehole temperature measurements show that the ice is below the melting point. High-resolution sampling of major ions,

  13. Integrated Task and Data Parallel Programming

    Science.gov (United States)

    Grimshaw, A. S.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities During the fall I collaborated
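
    The combination described here can be mimicked in miniature: a process pool supplies the task parallelism while whole-array operations inside each task supply the data parallelism. A hedged sketch (the subdomain stencil is an invented stand-in for the fluid-flow application, not the Legion-based system itself):

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def relax(grid):
            # Data-parallel part: a whole-array stencil update, the kind of
            # operation a data parallel class would encapsulate.
            g = grid.copy()
            g[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                                    grid[1:-1, :-2] + grid[1:-1, 2:])
            return g

        if __name__ == "__main__":
            # Task-parallel part: independent subdomains (or model
            # components) advance concurrently in separate processes.
            domains = [np.random.rand(256, 256) for _ in range(4)]
            with ProcessPoolExecutor(max_workers=4) as pool:
                domains = list(pool.map(relax, domains))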

  14. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chip. This means that parallel processing is required in application areas that traditionally have not used...... parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...... available parallelism. It is difficult to further extract parallelism since the application has small data sets and parallelization overhead is relatively high. There is also a fair amount of load imbalance which is made worse by a non-uniform memory latency. Even so, we show that with some tuning relative...

  15. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with explicit communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly eases parallel programming while retaining high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its performance shows that SVOPP is probably a breakthrough in parallel programming techniques.

  16. A Tutorial on Parallel and Concurrent Programming in Haskell

    Science.gov (United States)

    Peyton Jones, Simon; Singh, Satnam

    This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs which allows programmers to use rich data types in data parallel programs which are automatically transformed into flat data parallel versions for efficient execution on multi-core processors.

  17. Computing Parallelism in Discourse

    CERN Document Server

    Gardent, C; Gardent, Claire; Kohlhase, Michael

    1997-01-01

    Although much has been said about parallelism in discourse, a formal, computational theory of parallelism structure is still outstanding. In this paper, we present a theory which, given two parallel utterances, predicts which elements are parallel. The theory consists of a sorted, higher-order abductive calculus, and we show that it reconciles the insights of discourse theories of parallelism with those of Higher-Order Unification approaches to discourse semantics, thereby providing a natural framework in which to capture the effect of parallelism on discourse semantics.

  18. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  19. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment. Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages.

  20. Parallel execution of portfolio optimization

    CERN Document Server

    Nuriyev, R

    2008-01-01

    Analysis of asset liability management (ALM) strategies, especially for long time horizons, is a crucial issue for banks, funds and insurance companies. Modern economic models, investment strategies and optimization criteria make ALM studies a computationally very intensive task. This attracts attention to multiprocessor systems, and especially to the cheapest ones: multi-core PCs and PC clusters. In this article we analyze the problem of parallel organization of portfolio optimization, the results of using clusters for optimization, and the most efficient cluster architecture for these kinds of tasks.

  1. Parallel processing ITS

    Energy Technology Data Exchange (ETDEWEB)

    Fan, W.C.; Halbleib, J.A. Sr.

    1996-09-01

    This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.

  2. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  3. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  4. Ice-Core Study of the Link between Sea-Salt Aerosol, Sea-Ice Cover and Climate in the Antarctic Peninsula Area

    Energy Technology Data Exchange (ETDEWEB)

    Aristarain, A.J. [Laboratorio de Estratigrafia Glaciar y Geoquimica del Agua y de la Nieve LEGAN, Instituto Antartico Argentino, Mendoza (Argentina); Delmas, R.J. [Laboratoire de Glaciologie et Geophysique de l' Environnement LGGE, Centre National de la Recherche Scientifique, BP 96, 38402 St. Martin d' Heres Cedex (France); Stievenard, M. [Laboratoire des Sciences du Climat et de l' Environnement LSCE, Centre d' Etudes de Saclay, 91191 Gif-sur-Yvette, Cedex (France)

    2004-11-01

    Three ice cores and a set of snow pit samples collected on James Ross Island, Antarctic Peninsula, in 1979, 1981 and 1991 have been analyzed for water stable isotope content (δD or δ18O; the isotopic temperature) and major chemical species. A reliable and detailed chronological scale has been established first for the upper 24.5 m of water equivalent (1990-1943) where various data sets can be compared, then extended down to 59.5 m of water equivalent (1847) with the aid of seasonal variations and the sulphate peak reflecting the 1883 Krakatoa volcanic eruption. At James Ross Island, sea-salt aerosol is generally produced by ice-free marine surfaces during the summer months, although some winter sea-salt events have been observed. For the upper part of the core (1990-1943), correlations (positive or negative) were calculated between isotopic temperature, chloride content (a sea-salt indicator), sea-ice extent, regional atmospheric temperature changes and atmospheric circulation. The δD and chloride content correlation was then extended back to 1847, making it possible to estimate decadal sea-ice cover fluctuations over the study period. Our findings suggest that ice-core records from James Ross Island reflect the recent warming and sea-ice decrease trends observed in the Antarctic Peninsula area from the mid-1940s.

  5. A solution for automatic parallelization of sequential assembly code

    Directory of Open Access Journals (Sweden)

    Kovačević Đorđe

    2013-01-01

    Full Text Available Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for parallelization of sequential assembly code. The main goal is to develop a parallelizer which reads sequential assembler code and outputs parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembler input file into program objects suitable for further processing, after which static single assignment form is constructed. Based on the data-flow graph, the parallelization algorithm distributes instructions among the cores. Once the sequential code has been parallelized, registers are allocated with a linear-scan allocation algorithm, and the end result is assembler code distributed across the cores. In the paper we evaluate the speedup on a matrix multiplication example processed by the parallelizer. The result is an almost linear speedup of code execution, which increases with the number of cores: 1.99 on two cores and 13.88 on 16 cores.
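
    The dependency-driven core of such a parallelizer can be sketched compactly: build a DAG from each instruction's read/write registers, then deal mutually independent instructions out to cores step by step. This toy version ignores anti- and output dependences, memory operations, branches, and register allocation, all of which the actual tool must handle.

        def build_dag(instrs):
            # instrs: list of (destination register, source registers).
            # An instruction depends on the last writer of each source
            # (true dependences only in this toy).
            deps, last_writer = [], {}
            for i, (dest, srcs) in enumerate(instrs):
                deps.append({last_writer[r] for r in srcs if r in last_writer})
                last_writer[dest] = i
            return deps

        def schedule(instrs, cores):
            deps = build_dag(instrs)
            done, steps = set(), []
            while len(done) < len(instrs):
                ready = [i for i in range(len(instrs))
                         if i not in done and deps[i] <= done]
                step = ready[:cores]            # fill the cores for this step
                steps.append(step)
                done |= set(step)
            return steps                         # steps[t] lists instructions per core

        prog = [("r1", ["r0"]), ("r2", ["r0"]), ("r3", ["r1", "r2"]), ("r4", ["r0"])]
        print(schedule(prog, cores=2))           # [[0, 1], [2, 3]]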

  6. Parallelization of Subchannel Analysis Code MATRA

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seongjin; Hwang, Daehyun; Kwon, Hyouk [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    A stand-alone MATRA calculation takes acceptable computing time for thermal-margin calculations, but considerably more time is needed to solve whole-core pin-by-pin problems. In addition, improving the computation speed of the MATRA code is strongly required to satisfy the overall performance of multi-physics coupling calculations. Therefore, a parallel approach to improve and optimize the computational performance of the MATRA code is proposed and verified in this study. The parallel algorithm is embodied in the MATRA code using the MPI communication method, with minimal modification of the previous code structure. The improvement is confirmed by comparing results between the single- and multiple-processor algorithms, and the speedup and efficiency are evaluated as the number of processors increases. The performance of the MATRA code was greatly improved by implementing the parallel algorithm for the 1/8-core and whole-core problems.
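
    A minimal sketch of the decomposition pattern, assuming mpi4py. MATRA itself is a Fortran subchannel code and its actual data layout is not given in the abstract, so the subchannel split and the toy update below are illustrative only; run with, e.g., mpiexec -n 8 python matra_sketch.py.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Block-partition the subchannels across the MPI ranks.
        n_channels = 10000
        lo = rank * n_channels // size
        hi = (rank + 1) * n_channels // size

        local_T = np.full(hi - lo, 560.0)       # toy coolant temperatures (K)
        for _ in range(10):                      # toy iteration loop
            local_T += 0.1                       # stand-in for the local solve
            # Global reduction, the kind of exchange the parallel solver needs.
            t_max = comm.allreduce(local_T.max(), op=MPI.MAX)
        if rank == 0:
            print("hot-spot temperature:", t_max)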

  7. Developing Parallel Programs

    Directory of Open Access Journals (Sweden)

    Ranjan Sen

    2012-09-01

    Full Text Available Parallel programming is an extension of sequential programming; today, it is becoming the mainstream paradigm in day-to-day information processing. Its aim is to build the fastest programs on parallel computers. The methodologies for developing a parallel program can be put into integrated frameworks. Development focuses on algorithms, languages, and how the program is deployed on the parallel computer.

  8. PALM: a Parallel Dynamic Coupler

    Science.gov (United States)

    Thevenin, A.; Morel, T.

    2008-12-01

    In order to efficiently represent complex systems, numerical modeling has to rely on many physical models at a time: an ocean model coupled with an atmospheric model is at the basis of climate modeling. The continuity of the solution is granted only if these models can constantly exchange information. PALM is a coupler allowing the concurrent execution and intercommunication of programs that were not specifically designed for that purpose. With PALM, a dynamic coupling approach is introduced: a coupled component can be launched, and can release its computing resources upon termination, at any moment during the simulation. To exploit the machine's capabilities as fully as possible, the PALM coupler handles two levels of parallelism. The first level concerns the components themselves. While managing the resources, PALM allocates the number of processes necessary to each coupled component. These models can be parallel programs based on domain decomposition with MPI or applications multithreaded with OpenMP. The second level of parallelism is task parallelism: one can define a coupling algorithm allowing two or more programs to be executed in parallel. PALM applications are implemented via a graphical user interface called PrePALM. In this GUI, the programmer first defines the coupling algorithm and then describes the actual communications between the models. PALM offers very high flexibility for testing different coupling techniques and for reaching the best load balance on a high-performance computer. The transformation of computationally independent code is almost straightforward. The other qualities of PALM are its easy set-up, its flexibility, its performance, the simple updates and evolutions of the coupled application, and the many side services and functions that it offers.
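
    PALM itself is a full framework, but its first level of parallelism, running coupled components concurrently on disjoint sets of processes, can be mimicked with plain MPI. A hedged sketch, not PALM's API (the "ocean"/"atmosphere" roles and the exchanged field are stand-ins; assumes at least 2 ranks):

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* split the world into two component groups, e.g. ocean and atmosphere */
            int color = (rank < size / 2) ? 0 : 1;
            MPI_Comm comp;
            MPI_Comm_split(MPI_COMM_WORLD, color, rank, &comp);

            double sst = 0.0;
            if (color == 0) {                 /* "ocean": produce a field value */
                sst = 290.5;
                if (rank == 0)                /* root of the ocean group sends */
                    MPI_Send(&sst, 1, MPI_DOUBLE, size / 2, 0, MPI_COMM_WORLD);
            } else {                          /* "atmosphere": consume it */
                if (rank == size / 2)
                    MPI_Recv(&sst, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
            }

            MPI_Comm_free(&comp);
            MPI_Finalize();
            return 0;
        }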

  9. Steady-State Parallel Thermal Circuit Model for Three-Core XLPE Cable and Its Experimental Verification

    Institute of Scientific and Technical Information of China (English)

    李文祥; 刘刚; 王振华; 王鹏

    2015-01-01

    Three-core XLPE cables are commonly used in low-voltage distribution networks; however, studies on the ampacity of three-core cables are rare, as most work on cable ampacity has focused on single-core cables. Considering the structural difference between three-core and single-core cables, and based on the IEC 60287 calculation standard, a thermal circuit model with 6 cable layers and 4 thermal circuit nodes is derived from heat-transfer theory. Using the shape-factor method to calculate thermal resistances, the temperature of each layer is obtained by back-calculation from the cable surface temperature. To verify the calculation accuracy, current-rise tests were designed for two laying methods, buried and in air, in which the temperatures of the conductor, XLPE insulation, armour layer and outer sheath were measured and compared with the theoretical values. The comparison shows that the calculation error of the proposed model is within the acceptable range, so the method can be used for calculating cable conductor temperature in engineering practice.
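
    The back-calculation from measured surface temperature follows the usual thermal ladder-network logic: with the per-metre losses W and the layer thermal resistances R_i known, each interface temperature is the previous one plus W·R_i. A simplified single-branch C sketch (values and layer names are illustrative, not the paper's 6-layer/4-node model; it assumes all losses originate in the conductor, whereas a real IEC 60287 calculation also injects dielectric and sheath losses at intermediate nodes):

        #include <stdio.h>

        #define N_LAYERS 4

        int main(void)
        {
            /* thermal resistance of each layer, K.m/W (illustrative values) */
            const double R[N_LAYERS] = {0.35, 0.10, 0.05, 0.30};
            const char *name[N_LAYERS] =
                {"insulation", "filler", "armour", "outer sheath"};

            double losses = 30.0;     /* conductor losses per metre, W/m */
            double T_surface = 35.0;  /* measured cable surface temperature, C */

            /* walk inwards from the surface: each layer adds losses * R */
            double T = T_surface;
            printf("surface: %.2f C\n", T);
            for (int i = N_LAYERS - 1; i >= 0; i--) {
                T += losses * R[i];
                printf("inside %-12s %.2f C\n", name[i], T);
            }
            printf("conductor: %.2f C\n", T);
            return 0;
        }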

  10. Time evolution of greenhouse gas concentrations and climate from the Vostok core (Antarctica)

    Energy Technology Data Exchange (ETDEWEB)

    Pepin, L.; Barnola, J.M.; Petit, J.R.; Raynaud, D. [CNRS, Lab. de Glaciologie et de Geophysique de l' Environnement, Universite Joseph-Fourier, Grenoble I, 38 (France)

    2000-07-01

    Polar ice, and the air bubbles trapped in it, attest to the climate history and atmospheric composition of the Earth. The ice core drilled at the Russian station of Vostok on the Antarctic Plateau provides a more than 400,000-year-long record of these conditions. Through this time, the climate in Antarctica oscillated between 4 glacial periods, during which surface temperatures were about 10 deg C below modern ones, and 5 interglacial periods like the modern one. It is shown that greenhouse gas (CO{sub 2} and CH{sub 4}) concentrations are higher during warm periods than cold ones. Scrutinizing glacial to interglacial transitions shows that Southern Hemisphere processes lead northern ones. Furthermore, sea-level rise appears to lag the variations of the other variables. Finally, these records give evidence that present-day CO{sub 2} and CH{sub 4} levels have never been reached during the past 420,000 years. (authors)

  11. Parallel transformation of K-SVD solar image denoising algorithm

    Science.gov (United States)

    Liang, Youwen; Tian, Yu; Li, Mei

    2017-02-01

    Images obtained by observing the sun through a large telescope always suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise, but training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and the complexity of the training algorithms. In this paper, OpenMP parallel programming is used to transform the serial algorithm into a parallel version, following a data-parallelism model. The biggest change is that multiple dictionary atoms are updated simultaneously rather than one at a time. The denoising effect and acceleration performance were tested after completion of the parallel algorithm: the speedup of the program is 13.563 using 16 cores. The parallel version can fully utilize multi-core CPU hardware resources, greatly reduces running time, and is easily ported to multi-core platforms.
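
    The parallel step described, updating several dictionary atoms at once instead of one at a time, maps naturally onto an OpenMP loop. A schematic C fragment (the atom-update kernel is a placeholder; treating the updates as independent is the approximation the data-parallel version makes):

        #include <omp.h>
        #include <stdio.h>

        #define N_ATOMS 256

        /* placeholder for the K-SVD rank-1 update of a single atom */
        static void update_atom(double *atom, int k) { atom[0] = (double)k; }

        int main(void)
        {
            static double dict[N_ATOMS][64];

            /* serial K-SVD updates atoms one by one; the parallel version
               updates independent atoms simultaneously across cores */
            #pragma omp parallel for schedule(dynamic)
            for (int k = 0; k < N_ATOMS; k++)
                update_atom(dict[k], k);

            printf("updated %d atoms on up to %d threads\n",
                   N_ATOMS, omp_get_max_threads());
            return 0;
        }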

  12. Holocene biomass burning recorded in polar and low-latitude ice cores

    Science.gov (United States)

    Kehrwald, N. M.; Zennaro, P.; Zangrando, R.; Gabrielli, P.; Thompson, L. G.; Gambaro, A.; Barbante, C.

    2011-12-01

    Ice cores contain specific molecular markers, including levoglucosan (1,6-anhydro-β-D-glucopyranose), and other pyrochemical evidence that provides much-needed information on the role of fire in regions with no existing data on past fire activity. Levoglucosan is a cellulose combustion product produced at burning temperatures of 300°C or greater. We first trace fire emissions from a boreal forest source in the Canadian Shield through transport and deposition at Summit, Greenland. Atmospheric and surface samples suggest that levoglucosan in snow can record biomass burning events up to thousands of kilometers away. Levoglucosan does degrade by interacting with hydroxyl radicals in the atmosphere, but it is emitted in large quantities, allowing its use as a biomass burning tracer. These quantified atmospheric biomass burning emissions, and the associated parallel oxalate and levoglucosan peaks in snow pit samples, validate levoglucosan as a proxy for past biomass burning in snow records and, by extension, in ice cores. The temporal and spatial resolution of chemical markers in ice cores matches that of the core in which they are measured. The longest temporal record extends back approximately eight glacial cycles in the EPICA Dome C ice core, but many ice cores provide high-resolution Holocene records. The spatial resolution of chemical markers in ice cores depends on the core location: low-latitude ice cores primarily reflect regional climate parameters, while polar ice cores integrate hemispheric signals. Here, we compare levoglucosan flux measured during the late Holocene in the Kilimanjaro (3°04.6'S; 37°21.2'E, 5893 masl) and NEEM, Greenland (77°27' N; 51°3'W, 2454 masl) ice cores. We contrast the Holocene results with levoglucosan flux across the past 600,000 years in the EPICA Dome C (75°06'S, 123°21'E, 3233 masl) ice core.

  13. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphics processing units, have broadly empowered parallelism, and compilers are being updated to address the challenges of synchronization and threading. Appropriate program and algorithm classification gives software engineers a considerable advantage in identifying opportunities for effective parallelization. In the present work we investigate current species-based classifications of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms was chosen whose structures match different issues and perform a given task. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler, and added functionality to the existing tool to provide a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant information that is not captured by the original species of algorithms, enabling automatic characterization of program code.

  14. Interactive Parallel Data Analysis within Data-Centric Cluster Facilities using the IPython Notebook

    Science.gov (United States)

    Pascoe, S.; Lansdowne, J.; Iwi, A.; Stephens, A.; Kershaw, P.

    2012-12-01

    The data deluge is making traditional analysis workflows for many researchers obsolete. Support for parallelism within popular tools such as matlab, IDL and NCO is not well developed and rarely used. However, parallelism is necessary for processing modern data volumes on a timescale conducive to curiosity-driven analysis. Furthermore, for peta-scale datasets such as the CMIP5 archive, it is no longer practical to bring an entire dataset to a researcher's workstation for analysis, or even to their institutional cluster. Therefore, there is an increasing need to develop new analysis platforms which both enable processing at the point of data storage and provide parallelism. Such an environment should, where possible, maintain the convenience and familiarity of our current analysis environments to encourage curiosity-driven research. We describe how we are combining the interactive Python shell (IPython) with our JASMIN data-cluster infrastructure. IPython has been specifically designed to bridge the gap between HPC-style parallel workflows and the opportunistic curiosity-driven analysis usually carried out using domain-specific languages and scriptable tools. IPython offers a web-based interactive environment, the IPython notebook, and a cluster engine for parallelism, all underpinned by the well-respected Python/Scipy scientific programming stack. JASMIN is designed to support the data analysis requirements of the UK and European climate and earth system modeling community. JASMIN, with its sister facility CEMS focused on the earth observation community, has 4.5 PB of fast parallel disk storage alongside over 370 computing cores providing local computation. Through the IPython interface to JASMIN, users can make efficient use of JASMIN's multi-core virtual machines to perform interactive analysis on all cores simultaneously, or can configure IPython clusters across multiple VMs. Larger-scale clusters can be provisioned through JASMIN's batch scheduling system

  15. High-resolution paleoclimatology of the Santa Barbara Basin during the Medieval Climate Anomaly and early Little Ice Age based on diatom and silicoflagellate assemblages in Kasten core SPR0901-02KC

    Science.gov (United States)

    Barron, John A.; Bukry, David B.; Hendy, Ingrid L.

    2015-01-01

    Diatom and silicoflagellate assemblages documented in a high-resolution time series spanning 800 to 1600 AD in varved sediment recovered in Kasten core SPR0901-02KC (34°16.845' N, 120°02.332' W, water depth 588 m) from the Santa Barbara Basin (SBB) reveal that SBB surface water conditions during the Medieval Climate Anomaly (MCA) and the early part of the Little Ice Age (LIA) were not extreme by modern standards, mostly falling within one standard deviation of mean conditions during the pre-anthropogenic interval of 1748 to 1900. No clear differences between the character of MCA and early LIA conditions are apparent. During intervals of extreme drought identified by terrigenous proxy scanning XRF analyses, diatom and silicoflagellate proxies for coastal upwelling typically exceed one standard deviation above mean values for 1748-1900, supporting the hypothesis that droughts in southern California are associated with cooler (or La Niña-like) sea surface temperatures (SSTs). Increased percentages of diatoms transported downslope generally coincide with intervals of increased siliciclastic flux to the SBB identified by scanning XRF analyses. Diatom assemblages suggest only two intervals of the MCA (at ~897 to 922 and ~1151 to 1167) when proxy SSTs exceeded one standard deviation above mean values for 1748 to 1900. Conversely, silicoflagellates imply extreme warm-water events only at ~830 to 860 (early MCA) and ~1360 to 1370 (early LIA) that are not supported by the diatom data. Silicoflagellates appear to be more suitable than diatoms for characterizing average climate during the 5- to 11-year-long sample intervals studied in the SPR0901-02KC core, probably because diatom relative abundances may be dominated by seasonal blooms of a particular year.

  16. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: replicated-data decomposition, spatial decomposition, and force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed. Finally, issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
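
    Of the three decompositions mentioned, replicated data is the simplest to sketch: every rank holds all coordinates, computes a subset of the pairwise forces, and the partial force arrays are summed with an all-reduce. A minimal illustrative C/MPI sketch (the 1D coordinates and the pair kernel are stand-ins, not a real force field):

        #include <mpi.h>
        #include <stdio.h>

        #define N 512                      /* number of atoms (example) */

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            static double x[N], f_local[N], f[N];
            for (int i = 0; i < N; i++) { x[i] = 0.1 * i; f_local[i] = 0.0; }

            /* replicated data: each rank owns a strided subset of i-j pairs */
            for (int i = rank; i < N; i += size)
                for (int j = i + 1; j < N; j++) {
                    double fij = 1.0 / (x[j] - x[i]);  /* stand-in pair force */
                    f_local[i] += fij;
                    f_local[j] -= fij;
                }

            /* sum the partial force arrays across all ranks */
            MPI_Allreduce(f_local, f, N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

            if (rank == 0) printf("f[0] = %f\n", f[0]);
            MPI_Finalize();
            return 0;
        }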

  17. Recent changes in North West Greenland climate documented by NEEM shallow ice core data and simulations, and implications for past temperature reconstructions

    Science.gov (United States)

    Masson-Delmotte, V.; Steen-Larsen, H. C.

    2014-12-01

    Stacked records of accumulation, d18O and deuterium excess were produced from up to 4 shallow ice cores at NEEM (North-West Greenland), spanning 1724-2007 and updated to 2011 using pit water stable isotope data. The signal-to-noise ratio is high for d18O (1.3) and accumulation (1.2) but low for deuterium excess (0.4). No long-term trend is observed in the accumulation record. By contrast, NEEM d18O shows multi-decadal increasing trends in the late 19th century and since the 1980s. Decadal d18O and accumulation variability is in phase with Atlantic Multi-decadal Oscillation indices, and enhanced at the beginning of the 19th century. Large-scale spatial coherency is detected between NEEM and other Greenland ice-core and temperature records, strongest between North-West Greenland d18O and summer South-West coastal instrumental temperature records. The strength of correlations with the North Atlantic Oscillation is smaller than in central or south Greenland. The most positive d18O values at NEEM are recorded in 2010, followed by 1928, while maximum accumulation occurs in 1933. The coldest/driest decades are depicted at NEEM in 1815-1825 and 1836-1836. The spatial structure of these warm/wet years and cold/dry decades is investigated using all available Greenland ice cores. During the period 1958-2011, the NEEM accumulation and d18O records are highly correlated with simulated precipitation, temperature and d18O from simulations performed with the MAR, LMDZiso and ECHAM5iso atmospheric models, nudged to atmospheric reanalyses. Model-data agreement is better using ERA reanalyses than the NCEP/NCAR and 20CR ones. Model performance is poor for deuterium excess. Gridded temperature reconstructions, instrumental data and model outputs at NEEM are used to estimate the d18O-temperature relationship for the strong warming period 1979-2007. The estimated slope of this relationship is 1.1±0.2‰ per °C, about twice as large as that previously used to estimate last interglacial temperature

  18. High-resolution detrital flux and provenance records from the Lake Suigetsu (SG06/12 cores) and climate changes in Central Japan during the last deglaciation

    Science.gov (United States)

    Nagashima, K.; Nakagawa, T.; Suzuki, Y.; Tada, R.; Sugisaki, S.; Bronk Ramsey, C.; Bryant, C. L.; Staff, R.; Brauer, A.; Lamb, H.; Schlolaut, G.; Tarasov, P. E.; Gotanda, K.; Haraguchi, T.; Yonenobu, H.; Yokoyama, Y.

    2013-12-01

    Stalagmites in Chinese caves (e.g., Wang et al., 2001, 2005), the loess/paleosol sequence of the Chinese Loess Plateau, and lacustrine sediments in Asian countries are favorable archives for monitoring past changes in the East Asian monsoon and in the path and intensity of the Westerly Jet. However, not much is known about these changes during the last deglaciation, mostly due to the large uncertainty in the chronologies of the lacustrine and loess/paleosol sediments. Lake Suigetsu in Central Japan is known for varved sediments that cover at least the last 70 kyr. Recently, a precise age model was established for the SG06 core based on varve counting and more than 800 radiocarbon dates (e.g., Ramsey et al., 2012; Staff et al., 2013). Here we examine the precipitation and wind-system changes in Central Japan during the last deglaciation from the flux and provenance changes of the detrital materials in the sediments of the SG06 core. We reconstructed the flux of detrital materials for the last glacial part of the SG06 core (1402-1810 cm interval of the SG06 composite depth) with 1 cm resolution (corresponding to 7-13 yr) and estimated the provenance of the detrital materials using color, chemical compositions (please see the poster presented by Suzuki et al), grain sizes, and electron spin resonance intensity and crystallinity of the quartz (these methods are detailed in Nagashima et al., 2007, 2011). The reconstructed flux of detrital materials is characterized by millennial-scale increases exceeding 12 mg/cm2/yr at 16,600-14,800 and 13,700-12,800 SG062012 yr BP and by short-lived (centennial to decadal) episodes of higher flux repeated more than thirty times throughout the deglaciation. The grain sizes, color, and crystallinity of quartz suggest that the content of the detrital materials increased during 16,600-14,800 SG062012 yr BP, mostly due to suspended particles supplied from the Hasu river through Lake Mikata, which is located immediately upstream of Lake Suigetsu and traps most of the coarse

  19. Records of climatic changes and volcanic events in an ice core from Central Dronning Maud Land (East Antarctica) during the past century

    Indian Academy of Sciences (India)

    V N Nijampurkar; D K Rao; H B Clausen; M K Kaul; A Chaturvedi

    2002-03-01

    The depth profiles of electrical conductance, δ18O, 210Pb and the cosmogenic radioisotopes 10Be and 36Cl have been measured in a 30 m ice core from east Antarctica near the Indian station Dakshin Gangotri. Using 210Pb and δ18O, the mean annual accumulation rates have been calculated to be 20 and 21 cm of ice equivalent per year during the past ∼150 years. Using these accumulation rates, the volcanic event that occurred in 1815 AD has been identified from the electrical conductance measurements. Based on δ18O measurements, the mean annual surface air temperature (MASAT) data for the last 150 years indicate that the beginning of the 19th century was cooler by about 2°C than both the recent past and the middle of the 18th century. The fallout of the cosmogenic radioisotope 10Be compares reasonably well with that obtained at other stations (73°S to 90°S) in Antarctica and at higher latitudes beyond 77°N. The fallout of 36Cl calculated in the present work agrees well with the mean global production rate estimated earlier by Lal and Peters (1967). The bomb pulse of 36Cl observed in Greenland is not observed in the present studies, a result which is puzzling and needs to be studied in neighbouring ice cores from the same region.

  20. Comparison between Greenland Ice-Margin and Ice-Core Oxygen-18 Records

    DEFF Research Database (Denmark)

    Reeh, Niels; Oerter, H.; Thomsen, H. Højmark

    2002-01-01

    Old ice for palaeoenvironmental studies, retrieved by deep core drilling in the central regions of the large ice sheets, can also be retrieved from the ice-sheet margins. The delta(18)O content of the surface ice was studied at 15 different Greenland ice-margin locations. At some locations, two or more records were obtained along closely spaced parallel sampling profiles, showing good reproducibility of the records. We present ice-margin delta(18)O records reaching back to the Pleistocene. Many of the characteristic delta(18)O variations known from Greenland deep ice cores can be recognized ... at locations near the central ice divide. This is in accordance with deep ice-core results. We conclude that delta(18)O records measured on ice from the Greenland ice-sheet margin provide useful information about past climate and dynamics of the ice sheet, and thus are important (and cheap) supplements to deep ice cores.

  1. Dust and associated trace element fluxes in a firn core from the coastal East Antarctica and its linkages with the Southern Hemisphere climate variability over the last ~ 50 yr

    Directory of Open Access Journals (Sweden)

    C. M. Laluraj

    2013-04-01

    Full Text Available High-resolution records of dust and trace element fluxes were studied in a firn core from coastal Dronning Maud Land (cDML) in East Antarctica to identify the influence of climate variability on the accumulation of these components over the past ~ 50 yr. A doubling of dust deposition was observed since 1985, coinciding with a shift in the Southern Annular Mode (SAM) index to positive values and an associated increase in wind speed. Back-trajectories showed that the increase in dust deposition is associated with air parcels originating from north-west of the site, possibly indicating an origin in the Patagonian region. Our results suggest that while multiple processes could have influenced the increased dust formation, the shift in SAM had a dominant influence on its transport. It is observed that since 1985 the strength of the easterlies increased significantly over the cDML region, which, through mass compensation, could cause air and dust material brought by the westerlies to subside over the region. The correlation between the dust flux and δ18O records further suggests that enhanced dust flux in the firn core occurred during periods of colder atmospheric temperature, which reduced the moisture content and increased dust fall. Interestingly, the timing and amplitude of the insoluble dust peaks match remarkably well with the fluxes of Ba, Cr, Cu, and Zn, confirming that dust was the main carrier/source of atmospheric trace elements to East Antarctica during the recent past.

  2. A Computing Platform for Parallel Sparse Matrix Computations

    Science.gov (United States)

    2016-01-05

    This grant enabled the purchase of an Intel multiprocessor consisting of eight multicore nodes interconnected via infiniband. Each node contains 24 cores. This parallel computing platform has been used by my research group in the early stages of developing large... Two classes of parallel solvers have been developed. The first is a family of parallel sparse...

  3. Parallel optimization of a multi-view deblurring algorithm on a dual-core digital signal processor

    Institute of Scientific and Technical Information of China (English)

    付航; 章秀华; 贺武

    2015-01-01

    To achieve fast execution of a multi-view deblurring algorithm on small devices, a parallel optimization method was proposed. A TMS320C6657 dual-core digital signal processor (DSP) was used as the main computing chip, with CCSv5.2 as the software development environment. To address the long single-core running time, the running time of each sub-function in the algorithm was first profiled in detail using the time stamp counter. The matrix multiplication in the most time-consuming sub-function was then optimized: a dividing point was computed to split the work into two parts of equal computational load, which were assigned to the two DSP cores so that this part of the algorithm runs in parallel on both cores. The results show that the solution for the dividing point is correct and effective; the optimized multi-view deblurring algorithm greatly reduces the running time on the DSP and improves operational efficiency.
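
    The row split described, dividing the matrix product at a computed dividing point so that each core gets an equal share of the work, looks roughly like the following portable C sketch (OpenMP threads stand in for the two DSP cores; for a square matrix the even split point is simply n/2; requires OpenMP to actually use two threads):

        #include <omp.h>
        #include <stdio.h>

        #define N 128

        static double A[N][N], B[N][N], C[N][N];

        int main(void)
        {
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++) { A[i][j] = 1.0; B[i][j] = 2.0; }

            int split = N / 2;                    /* dividing point between cores */

            #pragma omp parallel num_threads(2)
            {
                int core = omp_get_thread_num();
                int lo = core == 0 ? 0 : split;   /* core 0: rows [0,split) */
                int hi = core == 0 ? split : N;   /* core 1: rows [split,N) */
                for (int i = lo; i < hi; i++)
                    for (int j = 0; j < N; j++) {
                        double s = 0.0;
                        for (int k = 0; k < N; k++) s += A[i][k] * B[k][j];
                        C[i][j] = s;
                    }
            }
            printf("C[0][0] = %.1f\n", C[0][0]);  /* expect 256.0 */
            return 0;
        }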

  4. Parallelization in Modern C++

    CERN Document Server

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  5. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  6. Parallel digital forensics infrastructure.

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, Lorie M. (New Mexico Tech, Socorro, NM); Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  7. Introduction to Parallel Computing

    Science.gov (United States)

    1992-05-01

    [Fragment of a hardware/tools table: languages C, Ada, C++, Data-parallel FORTRAN, FORTRAN-90 (late 1992); topology: 2D mesh of node boards, each board has 1 application processor; development tools.] As parallel machines become the wave of the present, tools are increasingly needed to assist programmers in creating parallel tasks and coordinating their activities. Linda was designed to be such a tool, with three important goals in mind: to be portable, efficient, and easy to use

  8. Parallel Wolff Cluster Algorithms

    Science.gov (United States)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
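
    For reference, the serial Wolff single-cluster update whose irregular cluster growth makes parallelization difficult is quite compact: seed a random spin, then add aligned neighbours with probability 1 − exp(−2βJ) and flip the whole cluster. A hedged C sketch for a 1D Ising ring with J = 1 (a 2D lattice differs only in the neighbour list):

        #include <math.h>
        #include <stdlib.h>
        #include <stdio.h>

        #define L 64                       /* 1D Ising ring, J = 1 */

        int main(void)
        {
            int s[L], stack[L], top = 0, in[L] = {0};
            double beta = 0.8, p_add = 1.0 - exp(-2.0 * beta);

            for (int i = 0; i < L; i++) s[i] = 1;
            srand(1234);

            /* grow one Wolff cluster from a random seed and flip it */
            int seed = rand() % L, s0 = s[seed];
            stack[top++] = seed; in[seed] = 1;
            while (top > 0) {
                int i = stack[--top];
                s[i] = -s0;                        /* flip on removal */
                int nb[2] = { (i + 1) % L, (i + L - 1) % L };
                for (int k = 0; k < 2; k++) {
                    int j = nb[k];
                    if (!in[j] && s[j] == s0 &&
                        rand() / (double)RAND_MAX < p_add) {
                        stack[top++] = j; in[j] = 1;   /* each site added once */
                    }
                }
            }
            int m = 0; for (int i = 0; i < L; i++) m += s[i];
            printf("magnetization after one update: %d\n", m);
            return 0;
        }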

  9. Practical Parallel Rendering

    CERN Document Server

    Chalmers, Alan

    2002-01-01

    Meeting the growing demands for speed and quality in rendering computer graphics images requires new techniques. Practical parallel rendering provides one of the most practical solutions. This book addresses the basic issues of rendering within a parallel or distributed computing environment, and considers the strengths and weaknesses of multiprocessor machines and networked render farms for graphics rendering. Case studies of working applications demonstrate, in detail, practical ways of dealing with complex issues involved in parallel processing.

  10. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
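
    Two of the patterns named above, reduction and prefix scan, are compact enough to show directly. A minimal C/OpenMP sketch (the block-wise two-pass scan is one standard formulation, not the only one):

        #include <omp.h>
        #include <stdio.h>

        #define N 16

        int main(void)
        {
            int a[N], sum = 0;
            for (int i = 0; i < N; i++) a[i] = 1;

            /* reduction pattern: combine all elements with one operator */
            #pragma omp parallel for reduction(+:sum)
            for (int i = 0; i < N; i++) sum += a[i];
            printf("reduction: %d\n", sum);              /* 16 */

            /* prefix-scan pattern, two passes: per-block scan, then offsets */
            int nb = 4, bs = N / nb, block_sum[4], scan[N];
            #pragma omp parallel for
            for (int b = 0; b < nb; b++) {
                int acc = 0;
                for (int i = b * bs; i < (b + 1) * bs; i++)
                    scan[i] = (acc += a[i]);             /* local inclusive scan */
                block_sum[b] = acc;
            }
            int off = 0;                                 /* serial offset pass */
            for (int b = 0; b < nb; b++) {
                int t = block_sum[b]; block_sum[b] = off; off += t;
            }
            #pragma omp parallel for
            for (int b = 0; b < nb; b++)
                for (int i = b * bs; i < (b + 1) * bs; i++)
                    scan[i] += block_sum[b];
            printf("scan[N-1]: %d\n", scan[N - 1]);      /* 16 */
            return 0;
        }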

  11. Approach of generating parallel programs from parallelized algorithm design strategies

    Institute of Scientific and Technical Information of China (English)

    WAN Jian-yi; LI Xiao-ying

    2008-01-01

    Today, parallel programming is dominated by message-passing libraries such as the message passing interface (MPI). This article intends to simplify parallel programming by generating parallel programs from parallelized algorithm design strategies. It uses skeletons to abstract parallelized algorithm design strategies as well as parallel architectures. Starting from a problem specification, an abstract parallel programming language (Apla+) program is generated from parallelized algorithm design strategies and problem-specific function definitions. By combining these with parallel architectures, the implicit parallelism inside the parallelized algorithm design strategies is exploited. With implementation and transformation, a C++ with parallel virtual machine (CPPVM) parallel program is finally generated. The parallelized branch and bound (B&B) and parallelized divide and conquer (D&C) algorithm design strategies are studied in this article as examples, and the approach is illustrated with a case study.
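
    A skeleton in this sense is a higher-order template that fixes the control structure and leaves the problem-specific parts as parameters. A tiny C rendering of a parallelized divide-and-conquer skeleton (function pointers stand in for the problem-specific definitions the article would express in Apla+; OpenMP tasks are one possible realization, not the CPPVM backend it generates):

        #include <omp.h>
        #include <stdio.h>

        /* D&C skeleton: control structure fixed; the base-case test,
           base solver, and combine operator are supplied as parameters */
        typedef struct { const int *a; int lo, hi; } prob;

        static long dc(prob p,
                       int  (*trivial)(prob),
                       long (*solve)(prob),
                       long (*combine)(long, long))
        {
            if (trivial(p)) return solve(p);
            int mid = (p.lo + p.hi) / 2;
            prob left = {p.a, p.lo, mid}, right = {p.a, mid, p.hi};
            long r1, r2;
            #pragma omp task shared(r1)
            r1 = dc(left, trivial, solve, combine);
            r2 = dc(right, trivial, solve, combine);
            #pragma omp taskwait
            return combine(r1, r2);
        }

        /* instantiation: parallel array sum */
        static int  is_small(prob p) { return p.hi - p.lo <= 4; }
        static long sum_base(prob p)
        { long s = 0; for (int i = p.lo; i < p.hi; i++) s += p.a[i]; return s; }
        static long add(long x, long y) { return x + y; }

        int main(void)
        {
            int a[64]; for (int i = 0; i < 64; i++) a[i] = i;
            long s;
            #pragma omp parallel
            #pragma omp single
            s = dc((prob){a, 0, 64}, is_small, sum_base, add);
            printf("sum = %ld\n", s);       /* expect 2016 */
            return 0;
        }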

  12. FEREBUS: Highly parallelized engine for kriging training.

    Science.gov (United States)

    Di Pasquale, Nicodemo; Bane, Michael; Davie, Stuart J; Popelier, Paul L A

    2016-11-05

    FFLUX is a novel force field based on quantum topological atoms, combining multipolar electrostatics with IQA intraatomic and interatomic energy terms. The program FEREBUS calculates the hyperparameters of models produced by the machine learning method kriging. Calculation of the kriging hyperparameters (θ and p) requires the optimization of the concentrated log-likelihood L̂(θ,p). FEREBUS uses Particle Swarm Optimization (PSO) and Differential Evolution (DE) algorithms to find the maximum of L̂(θ,p). PSO and DE are two heuristic algorithms that each use a set of particles or vectors to explore the space in which L̂(θ,p) is defined, searching for the maximum. The log-likelihood is a computationally expensive function, which needs to be calculated several times during each optimization iteration. The cost scales quickly with the problem dimension, and speed becomes critical in model generation. We present the strategy used to parallelize FEREBUS and the optimization of L̂(θ,p) through PSO and DE. The code is parallelized in two ways: MPI parallelization distributes the particles or vectors among the different processes, whereas the OpenMP implementation takes care of the calculation of L̂(θ,p), which involves the calculation and inversion of a particular matrix whose size increases quickly with the dimension of the problem. The run time shows a speed-up of 61 times going from a single core to 90 cores, with a saving, in one case, of ∼98% of the single-core time. In fact, the parallelization scheme presented reduces the computational time from 2871 s for a single-core calculation to 41 s on 90 cores. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
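
    The structure described, a swarm of particles whose expensive objective evaluations are the parallel step, can be illustrated at toy scale. A hedged single-node C sketch of PSO with the evaluations parallelized via OpenMP (the one-dimensional objective is an invented stand-in for the concentrated log-likelihood; FEREBUS's actual split uses MPI across particles and OpenMP inside each evaluation):

        #include <math.h>
        #include <omp.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define NP  32       /* particles */
        #define NIT 100      /* iterations */

        /* stand-in for the expensive objective; maximum at x = 3 */
        static double objective(double x) { return -(x - 3.0) * (x - 3.0); }

        static double frand(void) { return rand() / (double)RAND_MAX; }

        int main(void)
        {
            double x[NP], v[NP], pb[NP], pbf[NP], gb = 0.0, gbf = -1e30;
            srand(42);
            for (int i = 0; i < NP; i++) {
                x[i] = 10.0 * frand() - 5.0; v[i] = 0.0;
                pb[i] = x[i]; pbf[i] = -1e30;
            }

            for (int it = 0; it < NIT; it++) {
                /* evaluate all particles in parallel (the costly step) */
                #pragma omp parallel for
                for (int i = 0; i < NP; i++) {
                    double f = objective(x[i]);
                    if (f > pbf[i]) { pbf[i] = f; pb[i] = x[i]; }
                }
                for (int i = 0; i < NP; i++)      /* serial global-best update */
                    if (pbf[i] > gbf) { gbf = pbf[i]; gb = pb[i]; }
                for (int i = 0; i < NP; i++) {    /* standard velocity update */
                    v[i] = 0.7 * v[i] + 1.5 * frand() * (pb[i] - x[i])
                                      + 1.5 * frand() * (gb - x[i]);
                    x[i] += v[i];
                }
            }
            printf("maximum near x = %.3f, f = %.6f\n", gb, gbf);
            return 0;
        }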

  13. Antarctic climate variability during the past few centuries based on ice core records from coastal Dronning Maud Land and its implications on the Recent warming

    Digital Repository Service at National Institute of Oceanography (India)

    Thamban, M.; Naik, S.S.; Laluraj, C.M.; Chaturvedi, A.; Ravindra, R.

    ... Southern Ocean is an outcome of the interplay of the ice sheet, ocean, sea ice, and atmosphere and their response to past and present climate forcing. With ~98% of its area covered with snow and ice, the Antarctic continent reflects most... pressure gradient and its zonal location. Due to the circumpolar nature of this variation, it is called the Southern Annular Mode (SAM), which is the principal mode of variability in the atmospheric circulation of the southern extratropics and high latitudes (see Trenberth et al., 2007). The...

  14. CPCP: Colorado Plateau Coring Project – 100 Million Years of Early Mesozoic Climatic, Tectonic, and Biotic Evolution of an Epicontinental Basin Complex

    Directory of Open Access Journals (Sweden)

    John W. Geissman

    2008-07-01

    Full Text Available Early Mesozoic epicontinental basins of western North America contain a spectacular record of the climatic and tectonic development of northwestern Pangea, as well as what is arguably the world's richest and most-studied Triassic-Jurassic continental biota. The Colorado Plateau and its environs (Fig. 1) expose the textbook example of these layered sedimentary records (Fig. 2). Intensely studied since the mid-nineteenth century, the basins, their strata, and their fossils have stimulated hypotheses on the development of the Early Mesozoic world as reflected in the international literature. Despite this long history of research, the lack of numerical time calibration, the presence of major uncertainties in global correlations, and an absence of entire suites of environmental proxies still loom large and prevent integration of this immense environmental repository into a useful global picture. Practically insurmountable obstacles to outcrop sampling require a scientific drilling experiment to recover key sedimentary sections that will transform our understanding of the Early Mesozoic world.

  15. Patterns For Parallel Programming

    CERN Document Server

    Mattson, Timothy G; Massingill, Berna L

    2005-01-01

    From grids and clusters to next-generation game consoles, parallel computing is going mainstream. Innovations such as Hyper-Threading Technology, HyperTransport Technology, and multicore microprocessors from IBM, Intel, and Sun are accelerating the movement's growth. Only one thing is missing: programmers with the skills to meet the soaring demand for parallel software.

  16. Parallel scheduling algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  17. Defending climate science

    Science.gov (United States)

    Showstack, Randy

    2012-01-01

    The National Center for Science Education (NCSE), which has long been in the lead in defending the teaching of evolution in public schools, has expanded its core mission to include defending climate science, the organization announced in January. “We consider climate change a critical issue in our own mission to protect the integrity of science education,” said NCSE executive director Eugenie Scott. “Climate affects everyone, and the decisions we make today will affect generations to come. We need to teach kids now about the realities of global warming and climate change so that they're prepared to make informed, intelligent decisions in the future.”

  18. A note on parallel efficiency of fire simulation on cluster

    Science.gov (United States)

    Valasek, L.; Glasa, J.

    2016-08-01

    Current HPC clusters are capable of reducing the execution time of parallelized tasks significantly. The paper discusses the use of two selected strategies for allocating cluster computational resources and their impact on the parallel efficiency of fire simulation. Simulation of a simple corridor fire scenario by the Fire Dynamics Simulator, parallelized with the MPI programming model, is tested on the HPC cluster at the Institute of Informatics of the Slovak Academy of Sciences in Bratislava (Slovakia). The tests confirm that parallelization has great potential to reduce execution times, achieving promising values of parallel efficiency. However, the results also show that using increasing numbers of computational meshes, and hence increasing numbers of computational cores, does not necessarily decrease the execution time or the parallel efficiency of the simulation. The results obtained indicate that the simulation achieves different execution times and parallel efficiencies depending on the strategy used for allocating cluster computational resources.
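
    For reference, the two quantities reported here are conventionally defined as follows, with T_1 the single-core execution time and T_p the execution time on p cores:

        S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p} = \frac{T_1}{p\,T_p}

    For example, a simulation taking 400 s on one core and 100 s on 8 cores has a speedup S = 4 but a parallel efficiency E = 0.5, which is how adding meshes (and cores) can lower efficiency even while wall time falls.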

  19. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: ``Can parallel computers be used to do large-scale scientific computations?'' As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  20. Transformer core

    NARCIS (Netherlands)

    Mehendale, A.; Hagedoorn, Wouter; Lötters, Joost Conrad

    2010-01-01

    A transformer core includes a stack of a plurality of planar core plates of a magnetically permeable material, which plates each consist of a first and a second sub-part that together enclose at least one opening. The sub-parts can be fitted together via contact faces that are located on either side

  1. Transformer core

    NARCIS (Netherlands)

    Mehendale, A.; Hagedoorn, Wouter; Lötters, Joost Conrad

    2008-01-01

    A transformer core includes a stack of a plurality of planar core plates of a magnetically permeable material, which plates each consist of a first and a second sub-part that together enclose at least one opening. The sub-parts can be fitted together via contact faces that are located on either side

  2. Effect of core body temperature, time of day, and climate conditions on behavioral patterns of lactating dairy cows experiencing mild to moderate heat stress.

    Science.gov (United States)

    Allen, J D; Hall, L W; Collier, R J; Smith, J F

    2015-01-01

    Cattle show several responses to heat load, including spending more time standing. Little is known about what benefit this may provide for the animals. Data from 3 separate cooling management trials were analyzed to investigate the relationship between behavioral patterns in lactating dairy cows experiencing mild to moderate heat stress and their body temperature. Cows (n=157) were each fitted with a leg data logger that measured position and an intravaginal data logger that measured core body temperature (CBT). Ambient conditions were also collected. All data were standardized to 5-min intervals, and information was divided into several categories: when standing and lying bouts were initiated and the continuance of each bout (7,963 lying and 6,276 standing bouts). In one location, cows were continuously subjected to heat-stress levels according to temperature-humidity index (THI) range (THI≥72). The THI range for the other 2 locations was below and above a heat-stress threshold of 72 THI. Overall and regardless of period of day, cows stood up at greater CBT compared with continuing to stand or switching to a lying position. In contrast, cows lay down at lower CBT compared with continuing to lie or switching to a standing position, and lying bouts lasted longer when cows had lower CBT. Standing bouts also lasted longer when cattle had greater CBT, and they were less likely to lie down (less than 50% of lying bouts initiated) when their body temperature was over 38.8°C. Also, cow standing behavior was affected once THI reached 68. Increasing CBT decreased lying duration and increased standing duration. A CBT of 38.93°C marked a 50% likelihood that a cow would be standing. This is the first physiological evidence that standing may help cool cows, and it provides insight into a commonly observed behavioral response to heat.

  3. Options for Parallelizing a Planning and Scheduling Algorithm

    Science.gov (United States)

    Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.

    2011-01-01

    Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions, limited processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges helps us with an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends work presented at another workshop with some preliminary results.

  4. Parallelized Seeded Region Growing Using CUDA

    Science.gov (United States)

    Park, Seongjin; Lee, Hyunna; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung

    2014-01-01

    This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, advocating that it can substantially assist segmentation during massive CT screening tests. PMID:25309619

  5. Parallelized seeded region growing using CUDA.

    Science.gov (United States)

    Park, Seongjin; Lee, Jeongjin; Lee, Hyunna; Shin, Juneseuk; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung

    2014-01-01

    This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, advocating that it can substantially assist segmentation during massive CT screening tests.
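
    The weakness both records point to is visible in the serial algorithm itself: region growing is a breadth-first traversal whose work grows with the region. A minimal serial C sketch on a toy 1D signal (2D/3D versions differ only in the neighbour set; GPU variants typically replace the queue with iterative data-parallel label updates, though the exact scheme in this paper is not reproduced here):

        #include <stdio.h>
        #include <stdlib.h>

        #define N 16

        int main(void)
        {
            /* toy 1D "image": grow a region of similar intensities from a seed */
            int img[N] = {9,9,8,9,2,9,9,8,8,9,1,9,9,9,8,9};
            int lab[N] = {0}, queue[N], head = 0, tail = 0;
            int seed = 6, tol = 1;

            lab[seed] = 1;
            queue[tail++] = seed;
            while (head < tail) {                 /* BFS: work ~ region size */
                int i = queue[head++];
                int nb[2] = { i - 1, i + 1 };
                for (int k = 0; k < 2; k++) {
                    int j = nb[k];
                    if (j >= 0 && j < N && !lab[j] &&
                        abs(img[j] - img[seed]) <= tol) {
                        lab[j] = 1;               /* label before enqueue */
                        queue[tail++] = j;
                    }
                }
            }
            for (int i = 0; i < N; i++) printf("%d", lab[i]);
            printf("\n");                         /* growth halts at the 2 and 1 */
            return 0;
        }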

  6. Parallel Computing Methods For Particle Accelerator Design

    CERN Document Server

    Popescu, Diana Andreea; Hersch, Roger

    We present methods for parallelizing the transport map construction for multi-core processors and for Graphics Processing Units (GPUs). We provide an efficient implementation of the transport map construction. We describe a method for multi-core processors using the OpenMP framework which brings performance improvement over the serial version of the map construction. We developed a novel and efficient algorithm for multivariate polynomial multiplication for GPUs and we implemented it using the CUDA framework. We show the benefits of using the multivariate polynomial multiplication algorithm for GPUs in the map composition operation for high orders. Finally, we present an algorithm for map composition for GPUs.

  7. Parallel execution of chemical software on EGEE Grid

    CERN Document Server

    Sterzel, Mariusz

    2008-01-01

    Constant interest among the chemical community in studying larger and larger molecules forces the parallelization of existing computational methods in chemistry and the development of new ones. These are the main reasons for frequent port updates and requests from the community for Grid ports of new packages to satisfy their computational demands. Unfortunately, some parallelization schemes used by chemical packages cannot be directly used in the Grid environment. Here we present a solution for the Gaussian package. The current state of development of Grid middleware allows easy parallel execution in the case of software using any MPI flavour. Unfortunately, many chemical packages do not use MPI for parallelization, therefore special treatment is needed. Gaussian can be executed in parallel on SMP architectures or via Linda. These require the reservation of a certain number of processors/cores on a given WN and an equal number of processors/cores on each WN, respectively. The current implementation of EGEE middleware does not offer such f...

  8. Programming Massively Parallel Architectures using MARTE: a Case Study

    CERN Document Server

    Rodrigues, Wendell; Dekeyser, Jean-Luc

    2011-01-01

    Nowadays, several industrial applications are being ported to parallel architectures. These applications take advantage of the potential parallelism provided by multiple-core processors. Many-core processors, especially GPUs (Graphics Processing Units), have led the race of floating-point performance since 2003. While the performance improvement of general-purpose microprocessors has slowed significantly, GPUs have continued to improve relentlessly. As of 2009, the ratio between many-core GPUs and multicore CPUs for peak floating-point calculation throughput is about 10 to 1. However, as parallel programming requires a non-trivial distribution of tasks and data, developers find it hard to implement their applications effectively. Aiming to improve the use of many-core processors, this work presents a case study using UML and the MARTE profile to specify and generate OpenCL code for intensive signal processing applications. Benchmark results show us the viability of the use of MDE approaches to generate G...

  9. A hybrid algorithm for parallel molecular dynamics simulations

    Science.gov (United States)

    Mangiardi, Chris M.; Meyer, R.

    2017-10-01

    This article describes algorithms for the hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-range forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with Sandy Bridge and Haswell processors as well as systems with Xeon Phi many-core processors.
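
    The combination described, domain decomposition across processes with threads inside each domain, has a standard skeleton: request a threaded MPI environment, then open a thread team inside every rank. A minimal hedged C sketch of the pattern (not the article's code; the per-step force/exchange logic is elided):

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            /* hybrid setup: MPI between domains, OpenMP threads within one */
            int provided, rank;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            #pragma omp parallel
            {
                /* each thread works on part of this rank's local atoms */
                printf("rank %d, thread %d of %d\n",
                       rank, omp_get_thread_num(), omp_get_num_threads());
            }

            /* ... per-step pattern: threads compute local forces, then the
               master thread exchanges boundary atoms with neighbour ranks ... */
            MPI_Finalize();
            return 0;
        }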

  10. Parallelization of Kinetic Theory Simulations

    CERN Document Server

    Howell, Jim; Colbry, Dirk; Pickett, Rodney; Staber, Alec; Sagert, Irina; Strother, Terrance

    2013-01-01

    Numerical studies of shock waves in large scale systems via kinetic simulations with millions of particles are too computationally demanding to be processed in serial. In this work we focus on optimizing the parallel performance of a kinetic Monte Carlo code for astrophysical simulations such as core-collapse supernovae. Our goal is to attain a flexible program that scales well with the architecture of modern supercomputers. This approach requires a hybrid model of programming that combines a message passing interface (MPI) with a multithreading model (OpenMP) in C++. We report on our approach to implement the hybrid design into the kinetic code and show first results which demonstrate a significant gain in performance when many processors are applied.

  11. On the application and interpretation of Keeling plots in paleo climate research – deciphering δ13C of atmospheric CO2 measured in ice cores

    Directory of Open Access Journals (Sweden)

    P. Köhler

    2006-01-01

    Full Text Available The Keeling plot analysis is an interpretation method widely used in terrestrial carbon cycle research to quantify exchange processes of carbon between terrestrial reservoirs and the atmosphere. Here, we analyse measured data sets and artificial time series of the partial pressure of atmospheric carbon dioxide (pCO2) and of δ13C of CO2 over industrial and glacial/interglacial time scales and investigate to what extent the Keeling plot methodology can be applied to longer time scales. The artificial time series are simulation results of the global carbon cycle box model BICYCLE. The signals recorded in ice cores caused by abrupt terrestrial carbon uptake or release lose information due to air mixing in the firn before bubble enclosure and limited sampling frequency. Carbon uptake by the ocean can no longer be neglected for less abrupt changes such as those occurring during glacial cycles. We introduce an equation for the calculation of long-term changes in the isotopic signature of atmospheric CO2 caused by an injection of terrestrial carbon to the atmosphere, in which the ocean is introduced as a third reservoir. This is a paleo extension of the two-reservoir mass balance equations of the Keeling plot approach. It gives an explanation for the bias between the isotopic signature of the terrestrial release and the signature deduced with the Keeling plot approach for long-term processes, in which the oceanic reservoir cannot be neglected. These deduced isotopic signatures are similar (−8.6‰) for steady state analyses of long-term changes in the terrestrial and marine biosphere which both perturb the atmospheric carbon reservoir. They are more positive than the δ13C signals of the sources, e.g. the terrestrial carbon pools themselves (−25‰). A distinction of specific processes acting on the global carbon cycle from the Keeling plot approach is not straightforward. In general, processes related to biogenic fixation or release of carbon have lower y
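
    For orientation, the classical two-reservoir Keeling relation that the paper extends: mixing a constant background air mass (concentration C_bg, isotopic signature δ_bg) with CO2 from a single source of signature δ_s gives, at observed concentration C_obs,

        \delta_{\mathrm{obs}} = \delta_{\mathrm{s}} + \frac{C_{\mathrm{bg}}\,(\delta_{\mathrm{bg}} - \delta_{\mathrm{s}})}{C_{\mathrm{obs}}}

    so plotting δ_obs against 1/C_obs gives a straight line whose intercept is the source signature δ_s. The paper's extension adds the ocean as a third reservoir, which is what shifts the deduced intercept away from the true source value.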

  12. On the application and interpretation of Keeling plots in paleo climate research – deciphering δ13C of atmospheric CO2 measured in ice cores

    Directory of Open Access Journals (Sweden)

    H. Fischer

    2006-06-01

    Full Text Available The Keeling plot analysis is an interpretation method widely used in terrestrial carbon cycle research to quantify exchange processes of carbon between terrestrial reservoirs and the atmosphere. Here, we analyse measured data sets and artificial time series of the partial pressure of atmospheric carbon dioxide (pCO2) and of δ13C of CO2 over industrial and glacial/interglacial time scales and investigate to what extent the Keeling plot methodology can be applied to longer time scales. The artificial time series are simulation results of the global carbon cycle box model BICYCLE. Our analysis shows that features seen in pCO2 and δ13C during the industrial period can be interpreted with respect to the Keeling plot. However, only a maximum of approximately half of the signal can be explained by this method. The signals recorded in ice cores caused by abrupt terrestrial carbon uptake or release lose information due to air mixing in the firn before bubble enclosure and limited sampling frequency. For less abrupt changes, such as those occurring during glacial cycles, carbon uptake by the ocean can no longer be neglected. We introduce an equation for the calculation of the effective isotopic signature of long-term changes in the carbon cycle, in which the ocean is introduced as a third reservoir. This is a paleo extension of the two-reservoir mass balance equations of the Keeling plot approach. Steady state analyses of changes in the terrestrial and marine biosphere lead to similar effective isotopic signatures (−8.6 per mil) of the carbon fluxes perturbing the atmosphere. These signatures are more positive than the δ13C signals of the sources, e.g. the terrestrial carbon pools themselves (~−25 per mil). In all other cases the effective isotopic signatures are larger (−8.2 per mil to −0.7 per mil), and very often indistinguishable in the light of the uncertainties. Therefore, a back calculation from well distinct fluctuations in pCO2 and δ13C to identify

  13. Parallel adaptive wavelet collocation method for PDEs

    Energy Technology Data Exchange (ETDEWEB)

    Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com [FortiVenti Inc., Suite 404, 999 Canada Place, Vancouver, BC, V6C 3E2 (Canada); Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu [Department of Mechanical Engineering, University of Colorado Boulder, UCB 427, Boulder, CO 80309 (United States); Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu [Department of Mechanical Engineering, University of Colorado Boulder, UCB 427, Boulder, CO 80309 (United States); Vasilyev, Oleg V., E-mail: Oleg.Vasilyev@Colorado.edu [Department of Mechanical Engineering, University of Colorado Boulder, UCB 427, Boulder, CO 80309 (United States)

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  14. Integrated research of parallel computing: Status and future

    Institute of Scientific and Technical Information of China (English)

    CHEN GuoLiang; SUN GuangZhong; XU Yun; LONG Bai

    2009-01-01

    In the past twenty years, the research group in University of Science and Technology of China has developed an integrated research method for parallel computing, which is a combination of "Architecture-Algorithm-Programming-Application". This method is also called the ecological environment of parallel computing research. In this paper, we survey the current status of integrated research method for parallel computing and by combining the impact of multi-core systems, cloud computing and personal high performance computer, we present our outlook on the future development of parallel computing.

  15. Abrupt climate change: Debate or action

    Institute of Scientific and Technical Information of China (English)

    CHENG Hai

    2004-01-01

    Global abrupt climate changes have been documented by various climate records, including ice cores, ocean sediment cores, lake sediment cores, cave deposits, loess deposits and pollen records. The climate system prefers to be in one of two stable states, i.e. interstadial or stadial conditions, but not in between. The transition between two states has an abrupt character. Abrupt climate changes are, in general, synchronous in the northern hemisphere and tropical regions. The timescale for abrupt climate changes can be as short as a decade. As the impacts may be potentially serious, we need to take actions such as reducing CO2 emissions to the atmosphere.

  16. Kalman Filter Tracking on Parallel Architectures

    CERN Document Server

    Cerati, Giuseppe; Lantz, Steven; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2015-01-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques, including Cellular Automata or a return to the Hough Transform. The most common track finding techniques in use today are, however, those based on the Kalman Filter. Significant experience has...

  17. Kalman Filter Tracking on Parallel Architectures

    CERN Document Server

    Cerati, Giuseppe; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevz; Wittich, Peter; Wuerthwein, Frank; Yagil, Avi

    2016-01-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. To stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on the Kalman Filter. Significant experience has been accumulated with these techniques on real tracking detector sy...

  18. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    … and the limited memory in these architectures severely constrains the data sets that can be processed. Moreover, the language-integrated cost semantics for nested data parallelism pioneered by NESL depends on a parallelism-flattening execution strategy that only exacerbates the problem. This is because … machine without any changes in the specification. We expose streams as sequences in the frontend languages to provide the programmer with high-level information and control over streamable and non-streamable computations. In particular, we can extend NESL's intuitive and high-level work–depth model … Rank algorithm and an MD5 dictionary attack algorithm. For Streaming NESL we show that for several examples of simple, but not trivially parallelizable, text-processing tasks, we obtain single-core performance on par with off-the-shelf GNU Coreutils code, and near-linear speedups for multiple cores…

  19. Algorithms and parallel computing

    CERN Document Server

    Gebali, Fayez

    2011-01-01

    There is a software gap between the hardware potential and the performance that can be attained using today's software parallel program development tools. The tools need manual intervention by the programmer to parallelize the code. Programming a parallel computer requires closely studying the target algorithm or application, more so than in the traditional sequential programming we have all learned. The programmer must be aware of the communication and data dependencies of the algorithm or application. This book provides the techniques to explore the possible ways to

  20. Parallel Programming Paradigms

    Science.gov (United States)

    1987-07-01

    Parallel Programming Paradigms. Philip Arne Nelson, Department of Computer Science. This work was supported in part by NSF Grant No. 8416878 and by Office of Naval Research Contracts No. N00014-86-K-0264 and No. N00014-85-K-0328.

  1. Multi-core processors - An overview

    CERN Document Server

    Venu, Balaji

    2011-01-01

    Microprocessors have revolutionized the world we live in, and continuous efforts are being made to manufacture not only faster chips but also smarter ones. A number of techniques such as data level parallelism, instruction level parallelism and hyper-threading (Intel's HT) already exist and have dramatically improved the performance of microprocessor cores. This paper briefs on the evolution of multi-core processors, followed by an introduction to the technology and its advantages in today's world. The paper concludes by detailing the challenges currently faced by multi-core processors and how the industry is trying to address these issues.

  2. Parallel programming with PCN

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  3. Core BPEL

    DEFF Research Database (Denmark)

    Hallwyl, Tim; Højsgaard, Espen

    … extensions. Combined with the fact that the language definition does not provide a formal semantics, it is an arduous task to work formally with the language (e.g. to give an implementation). In this paper we identify a core subset of the language, called Core BPEL, which has fewer and simpler constructs, does not allow omissions, and does not contain ignorable elements. We do so by identifying syntactic sugar, including default values, and ignorable elements in WS-BPEL. The analysis results in a translation from the full language to the core subset. Thus, we reduce the effort needed for working formally with WS-BPEL, as one, without loss of generality, need only consider the much simpler Core BPEL. This report may also be viewed as an addendum to the WS-BPEL standard specification, which clarifies the WS-BPEL syntax and presents the essential elements of the language in a more concise way…

  5. Core benefits

    National Research Council Canada - National Science Library

    Keith, Brian W

    2010-01-01

    This SPEC Kit explores the core employment benefits of retirement and of life, health, and other insurance: benefits that are typically decided by the parent institution and are often subject to significant governmental regulation...

  6. Parallel Software Model Checking

    Science.gov (United States)

    2015-01-08

    Parallel Software Model Checking. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213, January 2015. Team members: Sagar Chaki, Arie Gurfinkel.

  7. CRBLASTER: A Parallel-Processing Computational Framework for Embarrassingly Parallel Image-Analysis Algorithms

    Science.gov (United States)

    Mighell, Kenneth John

    2010-10-01

    The development of parallel-processing image-analysis codes is generally a challenging task that requires complicated choreography of interprocessor communications. If, however, the image-analysis algorithm is embarrassingly parallel, then the development of a parallel-processing implementation of that algorithm can be a much easier task to accomplish because, by definition, there is little need for communication between the compute processes. I describe the design, implementation, and performance of a parallel-processing image-analysis application, called crblaster, which does cosmic-ray rejection of CCD images using the embarrassingly parallel l.a.cosmic algorithm. crblaster is written in C using the high-performance computing industry standard Message Passing Interface (MPI) library. crblaster uses a two-dimensional image partitioning algorithm that partitions an input image into N rectangular subimages of nearly equal area; the subimages include sufficient additional pixels along common image partition edges such that the need for communication between computer processes is eliminated. The code has been designed to be used by research scientists who are familiar with C as a parallel-processing computational framework that enables the easy development of parallel-processing image-analysis programs based on embarrassingly parallel algorithms. The crblaster source code is freely available at the official application Web site at the National Optical Astronomy Observatory. Removing cosmic rays from a single 800 × 800 pixel Hubble Space Telescope WFPC2 image takes 44 s with the IRAF script lacos_im.cl running on a single core of an Apple Mac Pro computer with two 2.8 GHz quad-core Intel Xeon processors. crblaster is 7.4 times faster when processing the same image on a single core on the same machine. Processing the same image with crblaster simultaneously on all eight cores of the same machine takes 0.875 s—which is a speedup factor of 50.3 times faster than the
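
    The communication-free decomposition described above is straightforward to reproduce: each worker receives a nearly equal block of the image plus a halo of extra boundary pixels wide enough for the rejection kernel, so no messages need to be exchanged while processing. The sketch below is a minimal illustration in C++ with hypothetical names (crblaster itself is written in C with MPI and uses a two-dimensional partition); it computes the bounds of a one-dimensional row-block split:

        #include <algorithm>
        #include <cstdio>

        // Half-open row range [row0, row1) of one subimage in a row-wise
        // block decomposition. The halo is chosen at least as wide as the
        // filter kernel so each worker can run without neighbour messages.
        struct Block { int row0, row1; };

        Block blockWithHalo(int nRows, int nWorkers, int rank, int halo) {
            int base = nRows / nWorkers, rem = nRows % nWorkers;
            int begin = rank * base + std::min(rank, rem);
            int end = begin + base + (rank < rem ? 1 : 0);
            // grow by the halo, clamped to the image borders
            return { std::max(0, begin - halo), std::min(nRows, end + halo) };
        }

        int main() {
            for (int rank = 0; rank < 8; ++rank) {  // e.g. 8 cores, 800 rows
                Block b = blockWithHalo(800, 8, rank, 2);
                std::printf("rank %d: rows [%d, %d)\n", rank, b.row0, b.row1);
            }
        }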

  8. Kalman Filter Tracking on Parallel Architectures

    Directory of Open Access Journals (Sweden)

    Cerati Giuseppe

    2016-01-01

    Full Text Available Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.
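
    For reference, the Kalman filter iteration that these tracking codes vectorize is the standard predict/update pair (textbook form, not taken from the record itself):

        \[ \hat{x}_{k|k-1} = F\,\hat{x}_{k-1|k-1}, \qquad P_{k|k-1} = F\,P_{k-1|k-1}\,F^{T} + Q \]
        \[ K_k = P_{k|k-1} H^{T} \left( H\,P_{k|k-1} H^{T} + R \right)^{-1} \]
        \[ \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( z_k - H\,\hat{x}_{k|k-1} \right), \qquad P_{k|k} = \left( I - K_k H \right) P_{k|k-1} \]

    Each track candidate carries its own small state vector x̂ and covariance P, which is what makes the method amenable to vectorization: many candidates step through identical small-matrix algebra in lockstep.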

  9. Kalman Filter Tracking on Parallel Architectures

    Science.gov (United States)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2016-11-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.

  10. Continuous parallel coordinates.

    Science.gov (United States)

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data.
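
    The point-line duality underlying the density model can be stated concisely (standard parallel-coordinates geometry rather than a formula from this paper): with the axes X1 and X2 placed at x = 0 and x = 1, a data point (a, b) becomes the segment from (0, a) to (1, b), and all points on the data-space line b = m·a + t map to segments through the common dual point

        \[ \left( \frac{1}{1-m},\; \frac{t}{1-m} \right), \qquad m \neq 1. \]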

  11. Designing Parallel Bus Using Universal Asynchronous Receiver Transmitter

    Directory of Open Access Journals (Sweden)

    Satyandra Sharad

    2013-04-01

    Full Text Available This paper, entitled “DESIGNING PARALLEL BUS USING UNIVERSAL ASYNCHRONOUS RECEIVER TRANSMITTER”, describes the design of the core of a UART interface module, which includes both receive and transmit modules and a command parser. The paper presents a viable way to design parallel buses with the help of a UART. In the test bench there is an RFM (register file model) to which data are written and read back to check the design. A text file issues serial inputs to the core, and the core outputs parallel data and an address in the form of a bus. This bus is connected to the RFM instantiated in the test bench along with the design, which makes it easy to recover parallel data from a serial input. The premise of the paper is to use a microcontroller along with other components to interface with the physical world. In contrast, most serial communication must first be converted back into parallel form by a universal asynchronous receiver/transmitter (UART) before it may be directly connected to a data bus. Both parallel and serial transmission are used to connect peripheral devices and enable us to communicate with them. The UART core described here is designed using VHDL and implemented on a Xilinx Virtex FPGA.

  12. Continuous Chemistry in Ice Cores

    DEFF Research Database (Denmark)

    Kjær, Helle Astrid

    … originating from volcanic eruptions, crucial for cross-dating ice cores and relevant for climate interpretations. The method includes a heat bath to minimize the acidifying effect of CO2 both from the laboratory and from the ice itself. While for acidic ice the method finds similar concentrations of H… Ice cores provide high-resolution records of past climate and environment. In recent years the use of continuous flow analysis (CFA) systems has increased the measurement throughput, while simultaneously decreasing the risk of contaminating the ice samples. CFA measurements of high temporal resolution increase our knowledge of fast climate variations and cover a wide range of proxies informing on a variety of components such as atmospheric transport, volcanic eruptions, forest fires and many more. New CFA methods for the determination of dissolved reactive phosphorus (DRP) and pH are presented…

  13. Atmospheric Methane in Ice Cores

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The reconstruction of air trapped in ice cores provides us with the most direct information about atmospheric CH4 variations in past history. Ice core records from the "Three Poles" (Antarctica, Greenland and the Tibetan Plateau) reveal the detailed fluctuations of atmospheric CH4 concentration with time and allow us to quantify the CH4 differences among latitudes. These data are indispensable in the further study of the relationship between greenhouse gases and climatic change, and of past changes in terrestrial CH4 emissions. Ice core reconstructions indicate that the atmospheric CH4 concentration has increased quickly since industrialization, and the present-day level of atmospheric CH4 (1800 ppbv) is unprecedented in the past glacial-interglacial climate cycles.

  14. Accelerating Climate and Weather Simulations through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  15. Equalizer: a scalable parallel rendering framework.

    Science.gov (United States)

    Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato

    2009-01-01

    Continuing improvements in CPU and GPU performance as well as increasing multi-core processor and cluster-based parallelism demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic to support various types of data and visualization applications, and at the same time work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations and usage scenarios as well as scalability results.

  16. Parallel Magnetic Resonance Imaging

    CERN Document Server

    Uecker, Martin

    2015-01-01

    The main disadvantages of Magnetic Resonance Imaging (MRI) are its long scan times and, in consequence, its sensitivity to motion. Exploiting the complementary information from multiple receive coils, parallel imaging is able to recover images from under-sampled k-space data and to accelerate the measurement. Because parallel magnetic resonance imaging can be used to accelerate basically any imaging sequence it has many important applications. Parallel imaging brought a fundamental shift in image reconstruction: image reconstruction changed from a simple direct Fourier transform to the solution of an ill-conditioned inverse problem. This work gives an overview of image reconstruction from the perspective of inverse problems. After introducing basic concepts such as regularization, discretization, and iterative reconstruction, advanced topics are discussed including algorithms for auto-calibration, the connection to approximation theory, and the combination with compressed sensing.
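
    In the inverse-problem view sketched here, image recovery from under-sampled multi-coil data is commonly posed as regularized least squares (generic form, not a formula from this particular work):

        \[ \hat{x} = \arg\min_{x}\; \lVert A x - y \rVert_2^2 + \lambda\, R(x) \]

    where A combines the coil sensitivity profiles with the under-sampled Fourier transform, y is the measured k-space data, and R is a regularizer such as the Tikhonov penalty \lVert x \rVert_2^2 or, in the combination with compressed sensing mentioned above, a sparsity-promoting term \lVert \Psi x \rVert_1.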

  17. Parallel optical sampler

    Energy Technology Data Exchange (ETDEWEB)

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  18. Professional Parallel Programming with C# Master Parallel Extensions with NET 4

    CERN Document Server

    Hillar, Gastón

    2010-01-01

    Expert guidance for those programming today's dual-core processor PCs. As PC processors explode from one or two to now eight processors, there is an urgent need for programmers to master concurrent programming. This book dives deep into the latest technologies available to programmers for creating professional parallel applications using C#, .NET 4, and Visual Studio 2010. The book covers task-based programming, coordination data structures, PLINQ, thread pools, asynchronous programming model, and more. It also teaches other parallel programming techniques, such as SIMD and vectorization. Teach...

  19. Scalable Parallelization of Skyline Computation for Multi-core Processors

    DEFF Research Database (Denmark)

    Chester, Sean; Sidlauskas, Darius; Assent, Ira;

    2015-01-01

    …, which is used to minimize dominance tests while maintaining high throughput. The algorithm uses an efficiently updatable data structure over the shared, global skyline, based on point-based partitioning. Also, we release a large benchmark of optimized skyline algorithms, with which we demonstrate…

  20. Plasmonics and the parallel programming problem

    Science.gov (United States)

    Vishkin, Uzi; Smolyaninov, Igor; Davis, Chris

    2007-02-01

    While many parallel computers have been built, it has generally been too difficult to program them. Now, all computers are effectively becoming parallel machines. Biannual doubling in the number of cores on a single chip, or faster, over the coming decade is planned by most computer vendors. Thus, the parallel programming problem is becoming more critical. The only known solution to the parallel programming problem in the theory of computer science is through a parallel algorithmic theory called PRAM. Unfortunately, some of the PRAM theory assumptions regarding the bandwidth between processors and memories did not properly reflect a parallel computer that could be built in previous decades. Reaching memories, or other processors in a multi-processor organization, required off-chip connections through pins on the boundary of each electric chip. Using the number of transistors that is becoming available on chip, on-chip architectures that adequately support the PRAM are becoming possible. However, the bandwidth of off-chip connections remains insufficient and the latency remains too high. This creates a bottleneck at the boundary of the chip for a PRAM-On-Chip architecture. This also prevents scalability to larger "supercomputing" organizations spanning across many processing chips that can handle massive amounts of data. Instead of connections through pins and wires, power-efficient CMOS-compatible on-chip conversion to plasmonic nanowaveguides is introduced for improved latency and bandwidth. Proper incorporation of our ideas offer exciting avenues to resolving the parallel programming problem, and an alternative way for building faster, more useable and much more compact supercomputers.

  1. SPINning parallel systems software.

    Energy Technology Data Exchange (ETDEWEB)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-03-15

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  2. Coarrays for Parallel Processing

    Science.gov (United States)

    Snyder, W. Van

    2011-01-01

    The design of the Coarray feature of Fortran 2008 was guided by answering the question "What is the smallest change required to convert Fortran to a robust and efficient parallel language." Two fundamental issues that any parallel programming model must address are work distribution and data distribution. In order to coordinate work distribution and data distribution, methods for communication and synchronization must be provided. Although originally designed for Fortran, the Coarray paradigm has stimulated development in other languages. X10, Chapel, UPC, Titanium, and class libraries being developed for C++ have the same conceptual framework.

  3. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.

  4. ADAPTATION OF PARALLEL VIRTUAL MACHINES MECHANISMS TO PARALLEL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Zafer DEMİR

    2001-02-01

    Full Text Available In this study, the Parallel Virtual Machine (PVM) is first reviewed. Since it is based upon parallel processing, it is similar in principle to parallel systems in terms of architecture. The Parallel Virtual Machine is neither an operating system nor a programming language; it is a specific software tool that supports heterogeneous parallel systems, while taking advantage of the features of both to bring users closer to parallel systems. Since tasks can be executed in parallel on parallel systems by the Parallel Virtual Machine, there is an important similarity between PVM, distributed systems and multiprocessors. In this study, the relations in question are examined by making use of the master-slave programming technique. In conclusion, PVM is tested with a simple factorial computation on a distributed system to observe its adaptation to parallel architectures.

  5. Core Values

    Science.gov (United States)

    Martin, Tim

    2016-01-01

    In this article, two lessons are introduced in which students examine Arctic lake sediments from Lake El'gygytgyn in Russia and discover a climate signal in a lake or pond near their own school. The lessons allow students to experience fieldwork, understand lab procedure, practice basic measurement and observation skills, and learn how to…

  7. A hybrid algorithm for parallel molecular dynamics simulations

    CERN Document Server

    Mangiardi, Chris M

    2016-01-01

    This article describes an algorithm for hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-ranged forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with AVX and AVX-2 processors as well as Xeon-Phi co-processors.

  8. A task parallel implementation of fast multipole methods

    KAUST Repository

    Taura, Kenjiro

    2012-11-01

    This paper describes a task parallel implementation of ExaFMM, an open source implementation of fast multipole methods (FMM), using the lightweight task parallel library MassiveThreads. Although there have been many attempts at parallelizing FMM, experience has almost exclusively been limited to formulations based on flat homogeneous parallel loops. FMM in fact contains operations that cannot be readily expressed in such conventional but restrictive models. We show that task parallelism, or parallel recursion in particular, allows us to parallelize all operations of FMM naturally and scalably. Moreover it allows us to parallelize a "mutual interaction" scheme for force/potential evaluation, which is roughly twice as efficient as a more conventional, unidirectional force/potential evaluation. The net result is an open source FMM that is clearly among the fastest single node implementations, including those on GPUs; with a million particles on a 32-core 2.20 GHz Sandy Bridge node, it completes a single time step including tree construction and force/potential evaluation in 65 milliseconds. The study clearly showcases both programmability and performance benefits of flexible parallel constructs over more monolithic parallel loops. © 2012 IEEE.
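
    The parallel-recursion pattern advocated here can be illustrated with OpenMP tasks standing in for the MassiveThreads API used by the authors (the Cell type and the numbers below are invented for the sketch):

        #include <cstdio>
        #include <vector>

        // Toy stand-in for an FMM cell; a real code stores multipole data here.
        struct Cell {
            double value = 1.0;
            std::vector<Cell> children;
        };

        // Parallel recursion: every subtree becomes a task, so irregular
        // trees are load-balanced by the runtime instead of being flattened
        // into homogeneous parallel loops.
        double evaluate(const Cell& c) {
            std::vector<double> partial(c.children.size(), 0.0);
            for (std::size_t i = 0; i < c.children.size(); ++i) {
                const Cell* child = &c.children[i];
                #pragma omp task shared(partial) firstprivate(i, child)
                partial[i] = evaluate(*child);
            }
            #pragma omp taskwait               // join this node's subtasks
            double sum = c.value;
            for (double p : partial) sum += p;
            return sum;
        }

        int main() {
            Cell root;
            root.children.resize(4);
            for (Cell& ch : root.children) ch.children.resize(3);
            double total = 0.0;
            #pragma omp parallel
            #pragma omp single                 // one thread seeds the recursion
            total = evaluate(root);
            std::printf("tree sum = %.1f\n", total);  // 1 + 4 + 12 nodes = 17
        }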

  9. A Parallel Saturation Algorithm on Shared Memory Architectures

    Science.gov (United States)

    Ezekiel, Jonathan; Siminiceanu

    2007-01-01

    Symbolic state-space generators are notoriously hard to parallelize. However, the Saturation algorithm implemented in the SMART verification tool differs from other sequential symbolic state-space generators in that it exploits the locality of firing events in asynchronous system models. This paper explores whether event locality can be utilized to efficiently parallelize Saturation on shared-memory architectures. Conceptually, we propose to parallelize the firing of events within a decision diagram node, which is technically realized via a thread pool. We discuss the challenges involved in our parallel design and conduct experimental studies on its prototypical implementation. On a dual-processor, dual-core PC, our studies show speed-ups for several example models, e.g., of up to 50% for a Kanban model, when compared to running our algorithm only on a single core.

  10. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
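
    The seeding step itself is short enough to sketch. The version below is illustrative C++ with the distance sweep parallelized via OpenMP, analogous to the record's multicore CPU variant; it is not the released code:

        #include <algorithm>
        #include <limits>
        #include <random>
        #include <vector>

        // k-means++ seeding (Arthur & Vassilvitskii, 2007): the first center
        // is drawn uniformly; each further center is drawn with probability
        // proportional to D(x)^2, the squared distance to the nearest center
        // chosen so far. The D(x)^2 sweep dominates the cost and is
        // embarrassingly parallel.
        std::vector<std::vector<double>> seedKMeansPP(
                const std::vector<std::vector<double>>& pts, int k, std::mt19937& rng) {
            const std::size_t n = pts.size(), d = pts[0].size();
            std::vector<double> d2(n, std::numeric_limits<double>::max());
            std::vector<std::vector<double>> centers;
            centers.push_back(pts[std::uniform_int_distribution<std::size_t>(0, n - 1)(rng)]);
            while (centers.size() < static_cast<std::size_t>(k)) {
                const std::vector<double>& c = centers.back();
                double total = 0.0;
                #pragma omp parallel for reduction(+ : total)  // parallel D(x)^2 sweep
                for (std::size_t i = 0; i < n; ++i) {
                    double s = 0.0;
                    for (std::size_t j = 0; j < d; ++j) {
                        const double diff = pts[i][j] - c[j];
                        s += diff * diff;
                    }
                    d2[i] = std::min(d2[i], s);
                    total += d2[i];
                }
                // Weighted draw: walk the cumulative D(x)^2 mass until it exceeds r.
                double r = std::uniform_real_distribution<double>(0.0, total)(rng);
                std::size_t pick = n - 1;
                double acc = 0.0;
                for (std::size_t i = 0; i < n; ++i) {
                    acc += d2[i];
                    if (acc >= r) { pick = i; break; }
                }
                centers.push_back(pts[pick]);
            }
            return centers;
        }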

  11. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
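
    The classical radiosity system that the dissertation reworks is (standard form; the symmetrization itself is the dissertation's contribution):

        \[ B_i = E_i + \rho_i \sum_{j} F_{ij} B_j \]

    where B_i is the radiosity, E_i the emission and ρ_i the reflectivity of patch i, and F_ij the form factor from patch i to patch j. Since the reciprocity relation A_i F_ij = A_j F_ji holds for patch areas A_i, scaling the equations by area is one plausible route to the symmetric coefficient matrix mentioned above, which opens the system to solvers that require symmetry.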

  12. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  13. Parallel and Distributed Databases

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Kemper, Alfons; Prieto, Manuel; Szalay, Alex

    2009-01-01

    Euro-Par Topic 5 addresses data management issues in parallel and distributed computing. Advances in data management (storage, access, querying, retrieval, mining) are inherent to current and future information systems. Today, accessing large volumes of information is a reality: Data-intensive appli

  14. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  15. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    2001-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were implemente...

  16. Implementation of Parallel Algorithms

    Science.gov (United States)

    1991-09-30

    Lecture Notes in Computer Science, Warwick, England, July 16-20... Lecture Notes in Computer Science, Springer-Verlag, Bangalore, India, December 1990. J. Reif, J. Canny, and A. Page, "An Exact Algorithm for Kinodynamic..."; "Parallel Algorithms and its Impact on Computational Geometry," in Optimal Algorithms, H. Djidjev, editor, Springer-Verlag Lecture Notes in Computer Science

  17. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  18. Parallel programming with PCN

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  19. The Ophidia framework: toward cloud-based data analytics for climate change

    Science.gov (United States)

    Fiore, Sandro; D'Anca, Alessandro; Elia, Donatello; Mancini, Marco; Mariello, Andrea; Mirto, Maria; Palazzo, Cosimo; Aloisio, Giovanni

    2015-04-01

    The Ophidia project is a research effort on big data analytics facing scientific data analysis challenges in the climate change domain. It provides parallel (server-side) data analysis, an internal storage model and a hierarchical data organization to manage large amounts of multidimensional scientific data. The Ophidia analytics platform provides several MPI-based parallel operators to manipulate large datasets (data cubes) and array-based primitives to perform data analysis on large arrays of scientific data. The most relevant data analytics use cases implemented in national and international projects target fire danger prevention (OFIDIA), interactions between climate change and biodiversity (EUBrazilCC), climate indicators and remote data analysis (CLIP-C), sea situational awareness (TESSA), and large-scale data analytics on CMIP5 data in NetCDF format, compliant with the Climate and Forecast (CF) convention (ExArch). Two use cases regarding the EU FP7 EUBrazil Cloud Connect and the INTERREG OFIDIA projects will be presented during the talk. In the former case (EUBrazilCC) the Ophidia framework is being extended to integrate scalable VM-based solutions for the management of large volumes of scientific data (both climate and satellite data) in a cloud-based environment to study how climate change affects biodiversity. In the latter one (OFIDIA) the data analytics framework is being exploited to provide operational support for processing chains devoted to fire danger prevention. To tackle the project challenges, data analytics workflows consisting of about 130 operators perform, among others, parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. Finally, the entire Ophidia software stack has been deployed at CMCC on 24 nodes (16 cores/node) of the Athena HPC cluster. Moreover, a cloud-based release tested with OpenNebula is also available and running in the private

  20. Parallel Semi-Implicit Spectral Element Atmospheric Model

    Science.gov (United States)

    Fournier, A.; Thomas, S.; Loft, R.

    2001-05-01

    The shallow-water equations (SWE) have long been used to test atmospheric-modeling numerical methods. The SWE contain essential wave-propagation and nonlinear effects of more complete models. We present a semi-implicit (SI) improvement of the Spectral Element Atmospheric Model to solve the SWE (SEAM, Taylor et al. 1997, Fournier et al. 2000, Thomas & Loft 2000). SE methods are h-p finite element methods combining the geometric flexibility of size-h finite elements with the accuracy of degree-p spectral methods. Our work suggests that exceptional parallel-computation performance is achievable by a General-Circulation-Model (GCM) dynamical core, even at modest climate-simulation resolutions (>1°). The code derivation involves weak variational formulation of the SWE, Gauss(-Lobatto) quadrature over the collocation points, and Legendre cardinal interpolators. Appropriate weak variation yields a symmetric positive-definite Helmholtz operator. To meet the Ladyzhenskaya-Babuska-Brezzi inf-sup condition and avoid spurious modes, we use a staggered grid. The SI scheme combines leapfrog and Crank-Nicolson schemes for the nonlinear and linear terms respectively. The localization of operations to elements ideally fits the method to cache-based microprocessor computer architectures --derivatives are computed as collections of small (8x8), naturally cache-blocked matrix-vector products. SEAM also has desirable boundary-exchange communication, like finite-difference models. Timings on the IBM SP and Compaq ES40 supercomputers indicate that the SI code (20-min timestep) requires 1/3 the CPU time of the explicit code (2-min timestep) for T42 resolutions. Both codes scale nearly linearly out to 400 processors. We achieved single-processor performance up to 30% of peak for both codes on the 375-MHz IBM Power-3 processors. Fast computation and linear scaling lead to a useful climate-simulation dycore only if enough model time is computed per unit wall-clock time. An efficient SI
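
    The leapfrog/Crank-Nicolson combination referred to above has a generic semi-implicit form (standard construction, given here for orientation rather than quoted from the paper): splitting the tendency into nonlinear slow terms N and linear fast terms L, the system u_t = N(u) + Lu is discretized as

        \[ \frac{u^{n+1} - u^{n-1}}{2\,\Delta t} = N\!\left(u^{n}\right) + \tfrac{1}{2}\, L \left( u^{n+1} + u^{n-1} \right) \]

    Solving for u^{n+1} yields a Helmholtz-type problem at each step; this is why the symmetric positive-definite Helmholtz operator obtained from the weak formulation matters, since it admits fast iterative solution and removes the gravity-wave restriction on the time step, permitting the 20-min step quoted above.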

  1. A Parallel Algebraic Multigrid Solver on Graphics Processing Units

    KAUST Repository

    Haase, Gundolf

    2010-01-01

    The paper presents a multi-GPU implementation of the preconditioned conjugate gradient algorithm with an algebraic multigrid preconditioner (PCG-AMG) for an elliptic model problem on a 3D unstructured grid. An efficient parallel sparse matrix-vector multiplication scheme underlying the PCG-AMG algorithm is presented for the many-core GPU architecture. A performance comparison of the parallel solver shows that a single Nvidia Tesla C1060 GPU board delivers the performance of a sixteen-node InfiniBand cluster and a multi-GPU configuration with eight GPUs is about 100 times faster than a typical server CPU core. © 2010 Springer-Verlag.
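
    The sparse matrix-vector product is the kernel repeated inside every PCG iteration, and its row-parallel form maps naturally onto both CPU cores and GPU threads. A minimal CSR sketch in C++ (hypothetical layout and names, not the paper's implementation):

        #include <vector>

        // Compressed sparse row (CSR) storage: rowPtr has nRows+1 entries;
        // col/val hold the column index and value of each nonzero.
        struct Csr {
            std::vector<int> rowPtr;
            std::vector<int> col;
            std::vector<double> val;
        };

        // y = A*x with one independent reduction per row; a GPU version
        // assigns rows (or row blocks) to threads the same way this loop
        // assigns them to cores.
        void spmv(const Csr& A, const std::vector<double>& x,
                  std::vector<double>& y) {
            const int n = static_cast<int>(A.rowPtr.size()) - 1;
            #pragma omp parallel for
            for (int i = 0; i < n; ++i) {
                double sum = 0.0;
                for (int k = A.rowPtr[i]; k < A.rowPtr[i + 1]; ++k)
                    sum += A.val[k] * x[A.col[k]];
                y[i] = sum;
            }
        }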

  2. Kalman Filter Tracking on Parallel Architectures

    Science.gov (United States)

    Cerati, Giuseppe; Elmer, Peter; Lantz, Steven; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2015-12-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques, including Cellular Automata or a return to the Hough Transform. The most common track finding techniques in use today are, however, those based on the Kalman Filter [2]. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are exactly those being used today for the design of the tracking system for the HL-LHC. Our previous investigations showed that, using optimized data structures, track fitting with the Kalman Filter can achieve large speedups on both Intel Xeon and Xeon Phi. We report here our further progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a realistic simulation setup.

  3. Core Java

    CERN Document Server

    Horstmann, Cay S

    2013-01-01

    Fully updated to reflect Java SE 7 language changes, Core Java™, Volume I—Fundamentals, Ninth Edition, is the definitive guide to the Java platform. Designed for serious programmers, this reliable, unbiased, no-nonsense tutorial illuminates key Java language and library features with thoroughly tested code examples. As in previous editions, all code is easy to understand, reflects modern best practices, and is specifically designed to help jumpstart your projects. Volume I quickly brings you up-to-speed on Java SE 7 core language enhancements, including the diamond operator, improved resource handling, and catching of multiple exceptions. All of the code examples have been updated to reflect these enhancements, and complete descriptions of new SE 7 features are integrated with insightful explanations of fundamental Java concepts.

  4. CITYZEN climate impact studies

    Energy Technology Data Exchange (ETDEWEB)

    Schutz, Martin (ed.)

    2011-07-01

    We have estimated the impact of climate change on the chemical composition of the troposphere, comparing the current climate (2000-2010) with conditions 40 years ahead (2040-2050). The climate projection has been made by the ECHAM5 model and was followed by chemistry-transport modelling using a global model, Oslo CTM2 (Isaksen et al., 2005; Søvde et al., 2008), and a regional model, EMEP. In this report we focus on carbon monoxide (CO) and surface ozone (O3), which are measures of primary and secondary air pollution. In parallel we have estimated the change in the same air pollutants resulting from changes in emissions over the same time period. (orig.)

  5. Bitplane Image Coding With Parallel Coefficient Processing.

    Science.gov (United States)

    Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor

    2016-01-01

    Image coding systems have been traditionally tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image in codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in the codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data. Most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to the inherently sequential nature of the coding task. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been re-formulated. The experimental results suggest that the penalization in coding performance of BPC-PaCo with respect to the traditional strategies is almost negligible.

  6. Parallelizing TTree::Draw functionality with PROOF

    CERN Document Server

    Marinaci, Stefano

    2014-01-01

    In ROOT, the software context of this project, multi-threading is not currently an easy option, because ROOT is not by construction thread-aware and thread-safety can only be achieved with heavy locking. Therefore, for a ROOT task, multi-processing is currently the most effective way to achieve concurrency. Multi-processing in ROOT is done via PROOF. PROOF is used to enable interactive analysis of large sets of ROOT files in parallel on clusters of computers or many-core machines. More generally, PROOF can parallelize tasks that can be formulated as a set of independent sub-tasks. The PROOF technology is rather efficient at exploiting all the CPUs provided by many-core processors. A dedicated version of PROOF, PROOF-Lite, provides an out-of-the-box solution to take full advantage of the additional cores available in today's desktops or laptops. One of the items on the PROOF plan of work is to improve the integration of PROOF-Lite for local processing of ROOT trees. In this project we investigate the case of the Draw ...

  7. To Parallelize or Not to Parallelize, Speed Up Issue

    CERN Document Server

    Elnashar, Alaa Ismail

    2011-01-01

    Running parallel applications requires special and expensive processing resources to obtain the required results within a reasonable time. Before parallelizing serial applications, some analysis is recommended to be carried out to decide whether it will benefit from parallelization or not. In this paper we discuss the issue of speed up gained from parallelization using Message Passing Interface (MPI) to compromise between the overhead of parallelization cost and the gained parallel speed up. We also propose an experimental method to predict the speed up of MPI applications.
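
    The paper's own prediction method is experimental and is not reproduced here; the quantity it must trade off against, however, is the classic Amdahl bound, which a few lines of illustrative Python make concrete:

        def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
            """Ideal speedup of a program whose serial_fraction cannot be parallelized."""
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

        # A code that is 5% serial can never exceed 20x, however many MPI ranks run it.
        for p in (2, 8, 64, 1024):
            print(f"{p:5d} ranks -> {amdahl_speedup(0.05, p):6.2f}x")

    In practice the measured speedup is lower still, since parallelization adds communication overhead — which is exactly the compromise the paper examines.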

  8. Parallel Execution of Multi Set Constraint Rewrite Rules

    DEFF Research Database (Denmark)

    Sulzmann, Martin; Lam, Edmund Soon Lee

    2008-01-01

    Multi-set constraint rewriting allows for a highly parallel computational model and has been used in a multitude of application domains such as constraint solving, agent specification etc. Rewriting steps can be applied simultaneously as long as they do not interfere with each other. We wish that the underlying constraint rewrite implementation executes rewrite steps in parallel on increasingly popular multi-core architectures. We design and implement efficient algorithms which allow for the parallel execution of multi-set constraint rewrite rules. Our experiments show that we obtain some...

  9. SPEED-UP IMPROVEMENT USING PARALLEL APPROACH IN IMAGE STEGANOGRAPHY

    Directory of Open Access Journals (Sweden)

    JyothiUpadhya K

    2013-12-01

    This paper presents a parallel approach to improve the time complexity problem associated with sequential algorithms. An image steganography algorithm in the transform domain is considered for implementation. Image steganography is a technique to hide a secret message in an image. With the parallel implementation, a large message can be hidden in a large image, since it does not take much processing time. It is implemented on GPU systems. Parallel programming is done using OpenCL in CUDA cores from NVIDIA. The speed-up improvement obtained is very good, with reasonably good output signal quality, when a large amount of data is processed.
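
    The paper works in the transform domain with OpenCL; as a simplified spatial-domain stand-in, the sketch below shows why embedding parallelizes so well — every cover element is updated independently, so the whole embed is one data-parallel operation (illustrative numpy code, not the authors' algorithm):

        import numpy as np

        def lsb_embed(pixels: np.ndarray, message: bytes) -> np.ndarray:
            """Hide message bits in the least-significant bits of a flat uint8 pixel array.

            Every pixel is independent, so the embed is one data-parallel
            operation -- the property a GPU implementation exploits per work-item."""
            bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
            if bits.size > pixels.size:
                raise ValueError("message too large for cover image")
            out = pixels.copy()
            out[:bits.size] = (out[:bits.size] & 0xFE) | bits
            return out

        def lsb_extract(pixels: np.ndarray, n_bytes: int) -> bytes:
            """Recover n_bytes previously embedded with lsb_embed."""
            return np.packbits(pixels[:n_bytes * 8] & 1).tobytes()

        cover = np.random.default_rng(1).integers(0, 256, 1 << 20, dtype=np.uint8)
        stego = lsb_embed(cover, b"secret")
        assert lsb_extract(stego, 6) == b"secret"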

  10. PARALLEL ASSAY OF OXYGEN EQUILIBRIA OF HEMOGLOBIN

    Science.gov (United States)

    Lilly, Laura E.; Blinebry, Sara K.; Viscardi, Chelsea M.; Perez, Luis; Bonaventura, Joe; McMahon, Tim J.

    2013-01-01

    Methods to systematically analyze in parallel the function of multiple protein or cell samples in vivo or ex vivo (i.e. functional proteomics) in a controlled gaseous environment have thus far been limited. Here we describe an apparatus and procedure that enables, for the first time, parallel assay of oxygen equilibria in multiple samples. Using this apparatus, numerous simultaneous oxygen equilibrium curves (OECs) can be obtained under truly identical conditions from blood cell samples or purified hemoglobins (Hbs). We suggest that the ability to obtain these parallel datasets under identical conditions can be of immense value, both to biomedical researchers and clinicians who wish to monitor blood health, and to physiologists studying non-human organisms and the effects of climate change on these organisms. Parallel monitoring techniques are essential in order to better understand the functions of critical cellular proteins. The procedure can be applied to human studies, wherein an OEC can be analyzed in light of an individual’s entire genome. Here, we analyzed intraerythrocytic Hb, a protein that operates at the organism’s environmental interface and then comes into close contact with virtually all of the organism’s cells. The apparatus is theoretically scalable, and establishes a functional proteomic screen that can be correlated with genomic information on the same individuals. This new method is expected to accelerate our general understanding of protein function, an increasingly challenging objective as advances in proteomic and genomic throughput outpace the ability to study proteins’ functional properties. PMID:23827235

  11. Design Patterns: establishing a discipline of parallel software engineering

    CERN Document Server

    CERN. Geneva

    2010-01-01

    Many-core processors present us with a software challenge. We must turn our serial code into parallel code. To accomplish this wholesale transformation of our software ecosystem, we must define what established practice is in parallel programming and then develop tools to support that practice. This leads to design patterns supported by frameworks optimized at runtime with advanced autotuning compilers. In this talk I provide an update of my ongoing research with the ParLab at UC Berkeley to realize this vision. In particular, I will describe our draft parallel pattern language, our early experiments with software frameworks, and the associated runtime optimization tools. About the speaker: Tim Mattson is a parallel programmer (Ph.D. Chemistry, UCSC, 1985). He does linear algebra, finds oil, shakes molecules, solves differential equations, and models electrons in simple atomic systems. He has spent his career working with computer scientists to make sure the needs of parallel applications programmers are met. Tim has ...

  12. Collisionless parallel shocks

    Science.gov (United States)

    Khabibrakhmanov, I. KH.; Galeev, A. A.; Galinskii, V. L.

    1993-01-01

    Consideration is given to a collisionless parallel shock based on solitary-type solutions of the modified derivative nonlinear Schroedinger equation (MDNLS) for parallel Alfven waves. The standard derivative nonlinear Schroedinger equation is generalized in order to include the possible anisotropy of the plasma distribution and higher-order Korteweg-de Vries-type dispersion. Stationary solutions of MDNLS are discussed. The anisotropic nature of 'adiabatic' reflections leads to an asymmetric particle distribution in the upstream as well as the downstream region of the shock. As a result, a nonzero heat flux appears near the front of the shock. It is shown that this causes stochastic behavior of the nonlinear waves, which can significantly contribute to the shock thermalization.

  13. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
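
    A minimal Python sketch of the two-phase scheme described above, shrunk to a 1-D grid of equal strips with intervals standing in for objects (the 1-D simplification and all names are ours, not the patent's):

        from concurrent.futures import ProcessPoolExecutor

        GRID_MIN, GRID_MAX, N_PORTIONS = 0.0, 1.0, 4   # illustrative 1-D grid of strips

        def portions_bounding(obj):
            """Phase 1: decide which grid portion(s) bound one object (an interval)."""
            lo, hi = obj
            width = (GRID_MAX - GRID_MIN) / N_PORTIONS
            first = int((lo - GRID_MIN) / width)
            last = min(int((hi - GRID_MIN) / width), N_PORTIONS - 1)
            return [(p, obj) for p in range(first, last + 1)]

        def populate(portion_and_objs):
            """Phase 2: one worker owns one portion and inserts every object bound by it."""
            portion, objs = portion_and_objs
            return portion, sorted(objs)

        if __name__ == "__main__":
            objects = [(0.05, 0.10), (0.20, 0.55), (0.70, 0.71), (0.40, 0.90)]
            chunks = [objects[i::N_PORTIONS] for i in range(N_PORTIONS)]  # n distinct object sets
            with ProcessPoolExecutor(max_workers=N_PORTIONS) as pool:
                # Phase 1: each processor classifies its own distinct set of objects.
                by_portion = {p: [] for p in range(N_PORTIONS)}
                for chunk in chunks:
                    for plist in pool.map(portions_bounding, chunk):
                        for portion, obj in plist:
                            by_portion[portion].append(obj)
                # Phase 2: each processor populates its own distinct grid portion.
                grid = dict(pool.map(populate, by_portion.items()))
            print(grid)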

  14. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Parallel moving mechanical systems are solid, fast, and accurate. Among parallel systems, Stewart platforms are notable as the oldest, being fast, solid, and precise. The work outlines a few main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform are then recorded using a rotation-matrix method. If a structural motor element consists of two moving elements that translate relative to one another, it is more convenient, for the drive train and especially for the dynamics, to represent the motor element as a single moving component. We thus have seven moving parts (the six motor elements, or legs, to which the mobile platform is added as the seventh) and one fixed part.

  15. Ultrascalable petaflop parallel supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton On Hudson, NY); Chiu, George (Cross River, NY); Cipolla, Thomas M. (Katonah, NY); Coteus, Paul W. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hall, Shawn (Pleasantville, NY); Haring, Rudolf A. (Cortlandt Manor, NY); Heidelberger, Philip (Cortlandt Manor, NY); Kopcsay, Gerard V. (Yorktown Heights, NY); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY); Takken, Todd (Brewster, NY)

    2010-07-20

    A massively parallel supercomputer of petaOPS scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing, including a Torus, a collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. A DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  16. Homology, convergence and parallelism.

    Science.gov (United States)

    Ghiselin, Michael T

    2016-01-05

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. © 2015 The Author(s).

  17. An integrated approach to improving the parallel applications development process

    Energy Technology Data Exchange (ETDEWEB)

    Rasmussen, Craig E [Los Alamos National Laboratory; Watson, Gregory R [IBM; Tibbitts, Beth R [IBM

    2009-01-01

    The development of parallel applications is becoming increasingly important to a broad range of industries. Traditionally, parallel programming was a niche area that was primarily exploited by scientists trying to model extremely complicated physical phenomena. It is becoming increasingly clear, however, that continued hardware performance improvements through clock scaling and feature-size reduction are simply not going to be achievable for much longer. The hardware vendors' approach to addressing this issue is to employ parallelism through multi-processor and multi-core technologies. While there is little doubt that this approach produces scaling improvements, there are still many significant hurdles to be overcome before parallelism can be employed as a general replacement for more traditional programming techniques. The Parallel Tools Platform (PTP) project was created in 2005 in an attempt to provide developers with new tools aimed at addressing some of the parallel development issues. Since then, the introduction of a new generation of peta-scale and multi-core systems has highlighted the need for such a platform. In this paper, we describe some of the challenges facing parallel application developers, present the current state of PTP, and provide a simple case study that demonstrates how PTP can be used to locate a potential deadlock situation in an MPI code.

  19. Parallel programming with MPI

    Energy Technology Data Exchange (ETDEWEB)

    Tatebe, Osamu [Electrotechnical Lab., Tsukuba, Ibaraki (Japan)

    1998-03-01

    MPI is a practical, portable, efficient and flexible standard for message passing, which has been implemented on most MPPs and networks of workstations by machine vendors, universities and national laboratories. MPI avoids specifying how operations will take place and superfluous work, to achieve efficiency as well as portability, and is also designed to encourage the overlapping of communication and computation to hide communication latencies. This presentation briefly explains the MPI standard, and comments on efficient parallel programming to improve performance. (author)
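
    For a flavor of the standard the presentation summarizes, here is a minimal point-to-point example using the mpi4py binding (an assumption: mpi4py is installed and the script is launched under mpiexec); the nonblocking isend is the kind of hook MPI offers for overlapping communication with computation:

        # Run with: mpiexec -n 2 python mpi_demo.py   (requires the mpi4py package)
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        if rank == 0:
            req = comm.isend({"step": 1, "payload": [1.0, 2.0]}, dest=1, tag=42)
            local = sum(x * x for x in range(100_000))  # overlap: compute while the send is in flight
            req.wait()
            print("rank 0 computed", local)
        else:
            data = comm.recv(source=0, tag=42)
            print("rank 1 received", data)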

  20. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  1. Implementation of Parallel Algorithms

    Science.gov (United States)

    1993-06-30

    their social relations or to achieve some goals. For example, we define a pair-wise force law of repulsion and attraction for a group of identical... quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media. The... of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Publishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in...

  2. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor communication, a pipelined architecture for search tree maintenance, and a specialized computer organization for raster...

  3. Parallel Algorithms Derivation

    Science.gov (United States)

    1989-03-31

    Lecture Notes in Computer Science, Warwick, England, July 16-20, 1990. J. Reif and J. Storer, "A Parallel Architecture for...", The 10th Conference on Foundations of Software Technology and Theoretical Computer Science, Lecture Notes in Computer Science, Springer-Verlag... Geometry, in Optimal Algorithms, H. Djidjev, editor, Springer-Verlag Lecture Notes in Computer Science 401, 1989... J. Reif, R. Paturi, and S...

  4. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  5. Neural simulations on multi-core architectures

    Directory of Open Access Journals (Sweden)

    Hubert Eichner

    2009-07-01

    Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e., user-transparent load balancing.

  6. Progress in Fast, Accurate Multi-scale Climate Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Collins, William D [Lawrence Berkeley National Laboratory (LBNL); Johansen, Hans [Lawrence Berkeley National Laboratory (LBNL); Evans, Katherine J [ORNL; Woodward, Carol S. [Lawrence Livermore National Laboratory (LLNL); Caldwell, Peter [Lawrence Livermore National Laboratory (LLNL)

    2015-01-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  7. Parallel Feature Extraction System

    Institute of Scientific and Technical Information of China (English)

    MA Huimin; WANG Yan

    2003-01-01

    Very high-speed image processing is needed in some applications, especially for weapons. In this paper, a high-speed image feature extraction system with a parallel structure was implemented in a complex programmable logic device (CPLD), and it can perform image feature extraction in several microseconds with almost no delay. The system design is presented through an application instance of a flying plane, whose infrared image includes two kinds of features: geometric shape features in the binary image and temperature features in the gray image. Feature extraction is accordingly performed on both kinds of features. Edges and area are the two most important features of an image. Angles often exist where different parts of the target's image connect, which indicates that one area ends and another begins. These three key features can form the whole representation of an image. So this parallel feature extraction system includes three processing modules: edge extraction, angle extraction and area extraction. The parallel structure is realized by a group of processors: every detector is followed by one processing route, every route has the same circuit form, and all work together at the same time, controlled by a common clock, to realize feature extraction. The extraction system has a simple structure, small volume, high speed, and good stability against noise. It can be used in battlefield recognition systems.

  8. The Parallel C Preprocessor

    Directory of Open Access Journals (Sweden)

    Eugene D. Brooks III

    1992-01-01

    We describe a parallel extension of the C programming language designed for multiprocessors that provide a facility for sharing memory between processors. The programming model was initially developed on conventional shared memory machines with small processor counts such as the Sequent Balance and Alliant FX/8, but has more recently been used on a scalable massively parallel machine, the BBN TC2000. The programming model is split-join rather than fork-join. Concurrency is exploited to use a fixed number of processors more efficiently rather than to exploit more processors as in the fork-join model. Team splitting, a mechanism to split the team of processors executing a code into subteams to handle parallel subtasks, is used to provide an efficient mechanism to exploit nested concurrency. We have found the split-join programming model to have an inherent implementation advantage, compared to the fork-join model, when the number of processors in a machine becomes large.

  9. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti-parallel TFO strand was modified with Y with one or two insertions at the end of the TFO strand, the thermal stability was increased 1.2 °C and 3 °C at pH 7.2, respectively, whereas one insertion in the middle of the TFO strand decreased the thermal stability 1.4 °C compared to the wild-type oligonucleotide... chain, especially at the end of the TFO strand. On the other hand, the thermal stability of the anti-parallel triplex was dramatically decreased when the TFO strand was modified with the LNA monomer analog Z in the middle of the TFO strand (ΔTm = -9.1 °C). Also the thermal stability decreased...

  10. Integrated Task And Data Parallel Programming: Language Design

    Science.gov (United States)

    Grimshaw, Andrew S.; West, Emily A.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments: In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities: During the fall I collaborated...

  11. Parallel Programming with MatlabMPI

    CERN Document Server

    Kepner, J V

    2001-01-01

    MatlabMPI is a Matlab implementation of the Message Passing Interface (MPI) standard and allows any Matlab program to exploit multiple processors. MatlabMPI currently implements the basic six functions that are the core of the MPI point-to-point communications standard. The key technical innovation of MatlabMPI is that it implements the widely used MPI ``look and feel'' on top of standard Matlab file I/O, resulting in an extremely compact (~100 lines) and ``pure'' implementation which runs anywhere Matlab runs. The performance has been tested on both shared and distributed memory parallel computers. MatlabMPI can match the bandwidth of C based MPI at large message sizes. A test image filtering application using MatlabMPI achieved a speedup of ~70 on a parallel computer.
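
    MatlabMPI's key trick — the MPI "look and feel" implemented purely with file I/O on a shared filesystem — is language-independent. A toy Python re-creation of the idea (the directory name and message layout here are ours, not MatlabMPI's):

        import os, pickle, time

        MSG_DIR = "/tmp/filempi"   # shared filesystem visible to all "ranks" (illustrative)

        def _names(src, dest, tag):
            base = os.path.join(MSG_DIR, f"msg_{src}_to_{dest}_tag{tag}")
            return base + ".pkl", base + ".ready"

        def send(obj, src, dest, tag):
            """MPI_Send look-alike: write the payload, then touch a flag file.

            The flag is created only after the payload is fully written, so a
            polling receiver never reads a half-written message."""
            os.makedirs(MSG_DIR, exist_ok=True)
            payload, flag = _names(src, dest, tag)
            with open(payload, "wb") as f:
                pickle.dump(obj, f)
            open(flag, "w").close()

        def recv(src, dest, tag, poll=0.01):
            """MPI_Recv look-alike: spin on the flag file, then read and clean up."""
            payload, flag = _names(src, dest, tag)
            while not os.path.exists(flag):
                time.sleep(poll)
            with open(payload, "rb") as f:
                obj = pickle.load(f)
            os.remove(payload)
            os.remove(flag)
            return obj

        send([1, 2, 3], src=0, dest=1, tag=7)
        print(recv(src=0, dest=1, tag=7))

    No daemons, sockets, or compiled extensions are needed — which is precisely why such an implementation "runs anywhere" the host language runs.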

  12. pMatlab Parallel Matlab Library

    CERN Document Server

    Bliss, N; Bliss, Nadya; Kepner, Jeremy

    2006-01-01

    MATLAB has emerged as one of the languages most commonly used by scientists and engineers for technical computing, with ~1,000,000 users worldwide. The compute intensive nature of technical computing means that many MATLAB users have codes that can significantly benefit from the increased performance offered by parallel computing. pMatlab (www.ll.mit.edu/pMatlab) provides this capability by implementing Parallel Global Array Semantics (PGAS) using standard operator overloading techniques. The core data structure in pMatlab is a distributed numerical array whose distribution onto multiple processors is specified with a map construct. Communication operations between distributed arrays are abstracted away from the user and pMatlab transparently supports redistribution between any block-cyclic-overlapped distributions up to four dimensions. pMatlab is built on top of the MatlabMPI communication library (www.ll.mit.edu/MatlabMPI) and runs on any combination of heterogeneous systems that support MATLAB, which incl...
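
    The heart of pMatlab is the map that assigns pieces of a global array to processors. A minimal illustration (ours, not pMatlab's API) of a 1-D block map, the simplest member of the block-cyclic-overlapped family mentioned above:

        def block_map(n_rows: int, n_procs: int):
            """1-D block distribution: the contiguous slice of global rows each rank owns."""
            base, extra = divmod(n_rows, n_procs)
            owned, start = {}, 0
            for rank in range(n_procs):
                stop = start + base + (1 if rank < extra else 0)
                owned[rank] = range(start, stop)
                start = stop
            return owned

        # A 10-row array mapped onto 4 ranks: ranks 0 and 1 get 3 rows, ranks 2 and 3 get 2.
        print({r: list(s) for r, s in block_map(10, 4).items()})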

  13. Oxytocin: parallel processing in the social brain?

    Science.gov (United States)

    Dölen, Gül

    2015-06-01

    Early studies attempting to disentangle the network complexity of the brain exploited the accessibility of sensory receptive fields to reveal circuits made up of synapses connected both in series and in parallel. More recently, extension of this organisational principle beyond the sensory systems has been made possible by the advent of modern molecular, viral and optogenetic approaches. Here, evidence supporting parallel processing of social behaviours mediated by oxytocin is reviewed. Understanding oxytocinergic signalling from this perspective has significant implications for the design of oxytocin-based therapeutic interventions aimed at disorders such as autism, where disrupted social function is a core clinical feature. Moreover, identification of opportunities for novel technology development will require a better appreciation of the complexity of the circuit-level organisation of the social brain. © 2015 The Authors. Journal of Neuroendocrinology published by John Wiley & Sons Ltd on behalf of British Society for Neuroendocrinology.

  14. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  15. Influence of Sea Ice on Arctic Marine Sulfur Biogeochemistry in the Community Climate System Model

    Energy Technology Data Exchange (ETDEWEB)

    Deal, Clara [Univ. of Alaska, Fairbanks, AL (United States); Jin, Meibing [Univ. of Alaska, Fairbanks, AL (United States)

    2013-06-30

    Global climate models (GCMs) have not effectively considered how responses of arctic marine ecosystems to a warming climate will influence the global climate system. A key response of arctic marine ecosystems that may substantially influence energy exchange in the Arctic is a change in dimethylsulfide (DMS) emissions, because DMS emissions influence cloud albedo. This response is closely tied to sea ice through its impacts on marine ecosystem carbon and sulfur cycling, and the ice-albedo feedback implicated in accelerated arctic warming. To reduce the uncertainty in predictions from coupled climate simulations, important model components of the climate system, such as feedbacks between arctic marine biogeochemistry and climate, need to be reasonably and realistically modeled. This research first involved model development to improve the representation of marine sulfur biogeochemistry simulations to understand/diagnose the control of sea-ice-related processes on the variability of DMS dynamics. This study will help build GCM predictions that quantify the relative current and possible future influences of arctic marine ecosystems on the global climate system. Our overall research objective was to improve arctic marine biogeochemistry in the Community Climate System Model (CCSM, now CESM). Working closely with the Climate Ocean Sea Ice Model (COSIM) team at Los Alamos National Laboratory (LANL), we added sea-ice algae and arctic DMS production and related biogeochemistry to the global Parallel Ocean Program model (POP) coupled to the LANL sea ice model (CICE). Both CICE and POP are core components of CESM. Our specific research objectives were: 1) Develop a state-of-the-art ice-ocean DMS model for application in climate models, using observations to constrain the most crucial parameters; 2) Improve the global marine sulfur model used in CESM by including DMS biogeochemistry in the Arctic; and 3) Assess how sea ice influences DMS dynamics in the arctic marine...

  16. Highly Efficient Memory Race Recording Scheme for Parallel Program Deterministic Replay Under Multi-Core Architectures

    Institute of Scientific and Technical Information of China (English)

    刘磊; 黄河; 唐志敏

    2012-01-01

    Current shared-memory multi-core and multiprocessor systems are nondeterministic: when they execute a multithreaded application, even if supplied with the same input, they can produce a different output each time. This frustrates debugging and limits the ability to properly test multithreaded code, and is becoming a major stumbling block to the much-needed widespread adoption of parallel programming. Support for deterministic replay of multithreaded execution is greatly helpful in finding concurrency bugs, and accurately recording the order of conflicting memory accesses during the initial execution is the basis of such replay. A memory race recording scheme, named Rainbow, is proposed. Its core idea is to make inter-thread communications fully deterministic. The unique feature of Rainbow is that it precisely establishes happens-before relationships between conflicting memory operations among different threads. By using an effective, Bloom-filter-based coherence history queue, Rainbow removes redundant happens-before relations implied in the already generated log and enables a compact log. Rainbow adds modest hardware to the base multi-core processors, and the coherence protocol is unmodified. The analysis results show that Rainbow reduces the log size by 17% compared with a state-of-the-art scheme, and that the recorded execution speed is similar to that of release consistency (RC) execution. Logical vector clocks, rather than scalar clocks, are used to describe the happens-before relations between conflicting memory operations, which avoids misidentifying happens-before relations and reduces the loss of parallelism during replay.
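
    For readers unfamiliar with the mechanism, here is a minimal software sketch of the vector-clock test that underlies such recorders (illustrative Python; Rainbow itself implements this in hardware alongside the coherence protocol):

        # Each thread keeps one counter per thread; clock a happened-before clock b
        # iff every component of a is <= the corresponding component of b and a != b.

        def happened_before(a: list[int], b: list[int]) -> bool:
            return all(x <= y for x, y in zip(a, b)) and a != b

        def concurrent(a: list[int], b: list[int]) -> bool:
            """Neither access is ordered before the other: a true race candidate."""
            return not happened_before(a, b) and not happened_before(b, a)

        w = [2, 0]   # write by thread 0 at its 2nd event
        r = [1, 3]   # read by thread 1 that has only observed thread 0's 1st event
        print(concurrent(w, r))   # True -> the recorder must log this ordering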

  17. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
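
    The tables themselves are easy to regenerate; a short illustrative script enumerating resistor pairs whose parallel combination is a whole number:

        def parallel(r1: float, r2: float) -> float:
            """Total resistance of two resistors in parallel: R = R1*R2 / (R1 + R2)."""
            return r1 * r2 / (r1 + r2)

        # Enumerate pairs (in ohms) whose parallel combination is a whole number --
        # the kind of entry such a classroom table would contain (e.g. 3 || 6 = 2).
        for r1 in range(1, 13):
            for r2 in range(r1, 13):
                total = parallel(r1, r2)
                if total == int(total):
                    print(f"{r1:2d} || {r2:2d} = {int(total)}")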

  18. Parallel processing of atmospheric chemistry calculations: Preliminary considerations

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, S.; Jones, P.

    1995-01-01

    Global climate calculations are already saturating the class of modern vector supercomputers with only a few central processing units. Increased resolution and the inclusion of routines to deal with biogeochemical portions of the terrestrial climate system will soon demand massively parallel approaches. The atmospheric photochemistry ensemble is intimately linked to climate through the trace greenhouse gases ozone and methane, and modules for representing it are being attached to global three-dimensional transport and GCM frameworks. Atmospheric kinetics involve dozens of highly interactive tracers and so will accentuate the need for parallel processing of earth system simulations. In the present text we lay some of the groundwork for the addition of atmospheric kinetics packages to GCMs and global scale atmospheric models on massively parallel computers. The discussion is tailored for consumption by the photochemical modelling community. After a review of numerical atmospheric chemistry methods, we examine how kinetics can be implemented on a parallel computer. We concentrate especially on data layout and flexibility and how these can be implemented in various programming models. We conclude that chemistry can be implemented rather easily within the existing frameworks of several parallel atmospheric models. However, memory limitations may preclude high resolution studies of global chemistry.

  19. A 21 000-year record of fluorescent organic matter markers in the WAIS Divide ice core

    Science.gov (United States)

    D'Andrilli, Juliana; Foreman, Christine M.; Sigl, Michael; Priscu, John C.; McConnell, Joseph R.

    2017-05-01

    Englacial ice contains a significant reservoir of organic material (OM), preserving a chronological record of materials from Earth's past. Here, we investigate whether OM composition surveys in ice core research can provide paleoecological information on the dynamic nature of our Earth through time. Temporal trends in OM composition from the early Holocene extending back to the Last Glacial Maximum (LGM) of the West Antarctic Ice Sheet Divide (WD) ice core were measured by fluorescence spectroscopy. Multivariate parallel factor (PARAFAC) analysis is widely used to isolate the chemical components that best describe the observed variation across three-dimensional fluorescence spectroscopy (excitation-emission matrices; EEMs) assays. Fluorescent OM markers identified by PARAFAC modeling of the EEMs from the LGM (27.0-18.0 kyr BP; before present, 1950) through the last deglaciation (LD; 18.0-11.5 kyr BP) to the mid-Holocene (11.5-6.0 kyr BP) provided evidence of different types of fluorescent OM composition and origin in the WD ice core over 21.0 kyr. The low excitation-emission wavelength fluorescent PARAFAC component one (C1), associated with chemical species similar to simple lignin phenols, was the greatest contributor throughout the ice core, suggesting a strong signature of terrestrial OM in all climate periods. The component two (C2) OM marker encompassed distinct variability in the ice core, describing chemical species similar to tannin- and phenylalanine-like material. Component three (C3), associated with humic-like terrestrial material further resistant to biodegradation, was only characteristic of the Holocene, suggesting that more complex organic polymers such as lignins or tannins may be an ecological marker of warmer climates. We suggest that fluorescent OM markers observed during the LGM were the result of greater continental dust loading of lignin precursor (monolignol) material in a drier climate, with lower marine influences when sea ice extent was higher and...

  20. Lightweight Specifications for Parallel Correctness

    Science.gov (United States)

    2012-12-05

    series (series), encryption and decryption (crypt), and LU factorization (lufact) — as well as a parallel molecular dynamics simulator (moldyn), ray... The PJ benchmarks include an app computing a Monte Carlo approximation of π (pi), a parallel cryptographic key cracking app (keysearch3)... an app for parallel rendering of Mandelbrot Set images (mandelbrot), and a parallel branch-and-bound search for optimal phylogenetic trees (phylogeny...

  1. Architectural Adaptability in Parallel Programming

    Science.gov (United States)

    1991-05-01

    Architectural Adaptability in Parallel Programming, by Lawrence Alan Crowl. Technical Report 381, May 1991, University of Rochester, Computer Science. Submitted in Partial Fulfillment of the... in the development of their programs. In applying abstraction to parallel programming, we can use abstractions to represent potential parallelism.

  2. Parallel Architectures and Bioinspired Algorithms

    CERN Document Server

    Pérez, José; Lanchares, Juan

    2012-01-01

    This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map with the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value to specialists in bioinspired algorithms and in parallel and distributed computing, as well as to computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.

  3. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  4. Modelling Interglacial Climate

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Anker

    Past warm climate states could potentially provide information on future global warming. The past warming was driven by changed insolation rather than an increased greenhouse effect, and thus the warm climate states are expected to be different. Nonetheless, the response of the climate system... the impact of a changing sea ice cover. The first part focusses on the last interglacial climate (125,000 years before present), which was characterized by substantial warming at high northern latitudes due to an increased insolation during summer. The simulations reveal that the oceanic changes dominate the response at high northern latitudes, while the direct insolation impact is more dominant in the tropics. On Greenland, the simulated warming is low compared to the ice core reconstructions. Surface mass balance calculations indicate that the oceanic conditions favor increased accumulation in the southeast...

  5. Greenland climate change

    DEFF Research Database (Denmark)

    Masson-Delmotte, Valerie; Swingedouw, D.; Landais, A.

    2012-01-01

    Climate archives available from deep-sea and marine shelf sediments, glaciers, lakes and ice cores in and around Greenland allow us to place the current trends in regional climate, ice sheet dynamics, and land surface changes in a broader perspective. We show that during the last decade (2000s), atmospheric and sea-surface temperatures have been reaching levels last encountered millennia ago, when northern high latitude summer insolation was higher due to a different orbital configuration. Concurrently, records from lake sediments in southern Greenland document major environmental and climatic conditions... regional climate and ice sheet dynamics. The magnitude and rate of future changes in Greenland temperature, in response to increasing greenhouse gas emissions, may be faster than any past abrupt events occurring under interglacial conditions. Projections indicate that within one century Greenland may...

  6. A full parallel radix sorting algorithm for multicore processors

    OpenAIRE

    Maus, Arne

    2011-01-01

    The problem addressed in this paper is that we want to sort an integer array a[] of length n on a multi-core machine with k cores. Amdahl's law tells us that the inherent sequential part of any algorithm will in the end dominate and limit the speedup we get from parallelisation of that algorithm. This paper introduces PARL, a parallel left radix sorting algorithm for use on ordinary shared memory multi-core machines, that has just one simple statement in its sequential part. It can be seen a...
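
    PARL itself is a most-significant-digit ("left") radix sort and parallelizes more than this, but the flavor of the approach — and the single sequential prefix-sum step that Amdahl's law punishes — can be sketched with a least-significant-digit pass whose counting phase runs across worker processes (illustrative Python, not the paper's algorithm):

        from concurrent.futures import ProcessPoolExecutor
        from itertools import accumulate

        RADIX_BITS, MASK = 8, 0xFF

        def digit_histogram(args):
            """Count occurrences of one 8-bit digit over a slice of the array."""
            chunk, shift = args
            counts = [0] * (MASK + 1)
            for v in chunk:
                counts[(v >> shift) & MASK] += 1
            return counts

        def radix_sort(a, n_workers=4, key_bits=32):
            with ProcessPoolExecutor(max_workers=n_workers) as pool:
                for shift in range(0, key_bits, RADIX_BITS):
                    chunks = [a[i::n_workers] for i in range(n_workers)]
                    totals = [0] * (MASK + 1)
                    # Parallel phase: per-worker digit histograms.
                    for counts in pool.map(digit_histogram, [(c, shift) for c in chunks]):
                        for d, c in enumerate(counts):
                            totals[d] += c
                    starts = [0] + list(accumulate(totals))[:-1]   # sequential prefix sums
                    out = [0] * len(a)
                    for v in a:                                    # stable placement pass
                        d = (v >> shift) & MASK
                        out[starts[d]] = v
                        starts[d] += 1
                    a = out
            return a

        if __name__ == "__main__":
            import random
            data = [random.randrange(1 << 32) for _ in range(10_000)]
            assert radix_sort(data) == sorted(data)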

  7. Massively Parallel Genetics.

    Science.gov (United States)

    Shendure, Jay; Fields, Stanley

    2016-06-01

    Human genetics has historically depended on the identification of individuals whose natural genetic variation underlies an observable trait or disease risk. Here we argue that new technologies now augment this historical approach by allowing the use of massively parallel assays in model systems to measure the functional effects of genetic variation in many human genes. These studies will help establish the disease risk of both observed and potential genetic variants and to overcome the problem of "variants of uncertain significance." Copyright © 2016 by the Genetics Society of America.

  8. Parallel Eclipse Project Checkout

    Science.gov (United States)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (an XML file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to check out the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a request to check out each plug-in in the feature is inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. The Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any...
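
    PEPC is Java/Eclipse code, but the pattern it describes — parse the feature, then push one checkout task per plug-in through a bounded thread pool — looks like this in an illustrative Python sketch (the repository URL and feature-file layout are invented for the example):

        from concurrent.futures import ThreadPoolExecutor, as_completed
        import subprocess
        import xml.etree.ElementTree as ET

        def plugin_ids(feature_xml: str):
            """Parse an Eclipse feature.xml and list the plug-in ids it aggregates."""
            return [p.get("id") for p in ET.parse(feature_xml).getroot().iter("plugin")]

        def checkout(plugin: str) -> str:
            # Hypothetical repository layout; checkouts are network-bound, so
            # threads overlap the waiting instead of burning CPU.
            subprocess.run(["git", "clone", f"https://example.org/repos/{plugin}.git"],
                           check=True)
            return plugin

        def checkout_feature(feature_xml: str, n_threads: int = 8):
            with ThreadPoolExecutor(max_workers=n_threads) as pool:
                futures = [pool.submit(checkout, p) for p in plugin_ids(feature_xml)]
                for fut in as_completed(futures):
                    print("done:", fut.result())

        # checkout_feature("feature.xml") would clone every listed plug-in concurrently.

    Threads rather than processes are the right tool here because the workers spend their time waiting on the network, which is exactly why the parallel version can saturate the available bandwidth.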

  9. CSM parallel structural methods research

    Science.gov (United States)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  10. Integrated Current Balancing Transformer for Primary Parallel Isolated Boost Converter

    DEFF Research Database (Denmark)

    Sen, Gökhan; Ouyang, Ziwei; Thomsen, Ole Cornelius

    2011-01-01

    A simple, PCB compatible integrated solution is proposed for the current balancing requirement of the primary parallel isolated boost converter (PPIBC). Input inductor and the current balancing transformer are merged into the same core, which reduces the number of components allowing a cheaper...

  11. Exact parallel maximum clique algorithm for general and protein graphs.

    Science.gov (United States)

    Depolli, Matjaž; Konc, Janez; Rozman, Kati; Trobec, Roman; Janežič, Dušanka

    2013-09-23

    A new exact parallel maximum clique algorithm MaxCliquePara, which finds the maximum clique (the fully connected subgraph) in undirected general and protein graphs, is presented. First, a new branch-and-bound algorithm for finding a maximum clique on a single computer core, which builds on ideas presented in two published state-of-the-art sequential algorithms, is implemented. The new sequential MaxCliqueSeq algorithm is faster than the reference algorithms on both DIMACS benchmark graphs as well as on protein-derived product graphs used for protein structural comparisons. Next, the MaxCliqueSeq algorithm is parallelized by splitting the branch-and-bound search tree across multiple cores, resulting in the MaxCliquePara algorithm. The ability to exploit all cores efficiently makes the new parallel MaxCliquePara algorithm markedly superior to other tested algorithms. On a 12-core computer, the parallelization provides up to 2 orders of magnitude faster execution on the large DIMACS benchmark graphs and up to an order of magnitude faster execution on protein product graphs. The algorithms are freely accessible on http://commsys.ijs.si/~matjaz/maxclique.
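
    The abstract states the parallelization strategy precisely: split the branch-and-bound tree and hand subtrees to different cores. A compact illustrative Python version of that strategy follows (the real MaxCliquePara also shares the incumbent best clique between workers to improve pruning, which this sketch omits):

        from concurrent.futures import ProcessPoolExecutor

        def expand(adj, clique, candidates, best):
            """Sequential branch and bound: grow `clique` from `candidates`, pruning
            whenever even taking every remaining candidate could not beat `best`."""
            if len(clique) + len(candidates) <= len(best):
                return best
            if not candidates:
                return clique          # strictly better than best, by the bound above
            for v in list(candidates):
                candidates = candidates - {v}      # each vertex is branched on once
                best = expand(adj, clique | {v}, candidates & adj[v], best)
            return best

        def branch(args):
            adj, v, cand = args
            return expand(adj, {v}, cand, set())

        def max_clique_parallel(adj, n_workers=4):
            """Split the top of the search tree: branch i only looks for cliques whose
            lowest-numbered vertex is v_i, so branches are disjoint and independent."""
            tasks = [(adj, v, {u for u in adj[v] if u > v}) for v in sorted(adj)]
            with ProcessPoolExecutor(max_workers=n_workers) as pool:
                return max(pool.map(branch, tasks), key=len)

        if __name__ == "__main__":
            edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
            adj = {v: set() for v in range(5)}
            for a, b in edges:
                adj[a].add(b)
                adj[b].add(a)
            print(max_clique_parallel(adj))        # a maximum clique of size 3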

  13. Applied Parallel Metadata Indexing

    Energy Technology Data Exchange (ETDEWEB)

    Jacobi, Michael R [Los Alamos National Laboratory

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
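
    A sketch of the backend's core motions using pymongo (the connection string, database, and field names are invented for the example, and a running MongoDB instance is assumed):

        from pymongo import MongoClient, ASCENDING

        client = MongoClient("mongodb://localhost:27017")
        table = client.gpfs_metadata.files_user_jdoe   # one per-user collection, per the design

        def import_record(path, size, mtime, tags):
            """Mirror one GPFS stat record into MongoDB so it becomes searchable."""
            table.insert_one({"path": path, "size": size, "mtime": mtime, "tags": tags})

        # Index every attribute we expect to search on.
        for field in ("path", "size", "mtime", "tags"):
            table.create_index([(field, ASCENDING)])

        # The FUSE layer would translate a virtual path such as /search/tags=climate
        # into a query like this one:
        for doc in table.find({"tags": "climate", "size": {"$gt": 1 << 30}}):
            print(doc["path"])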

  14. Theory of Parallel Mechanisms

    CERN Document Server

    Huang, Zhen; Ding, Huafeng

    2013-01-01

    This book contains mechanism analysis and synthesis. In mechanism analysis, a mobility methodology is first systematically presented. This methodology, based on the author's screw theory proposed in 1997, and whose generality and validity were only proved recently, addresses a very complex issue that has been researched by various scientists over the last 150 years. The principle of kinematic influence coefficients and its latest developments are described. This principle is suitable for kinematic analysis of various 6-DOF and lower-mobility parallel manipulators. The singularities are classified from a new point of view, and progress in position-singularity and orientation-singularity is stated. In addition, the concept of over-determinate input is proposed and a new method of force analysis based on screw theory is presented. In mechanism synthesis, the synthesis of spatial parallel mechanisms is discussed, along with the synthesis method for difficult 4-DOF and 5-DOF symmetric mechanisms, which was first put forward by the a...

  15. Fundamental Parallel Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Nelson, Michael

    2008-01-01

    In this paper, we study parallel algorithms for private-cache chip multiprocessors (CMPs), focusing on methods for foundational problems that are scalable with the number of cores. By focusing on private-cache CMPs, we show that we can design efficient algorithms that need no additional assumptions about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks...
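
    Prefix sums are singled out above because they parallelize with only a tiny sequential seam. A minimal process-parallel version (illustrative Python; the PEM algorithms are analyzed in terms of cache accesses, which this sketch does not model):

        from concurrent.futures import ProcessPoolExecutor
        from itertools import accumulate

        def local_scan(chunk):
            """Each core scans its private chunk; returns running sums and their total."""
            sums = list(accumulate(chunk))
            return sums, sums[-1] if sums else 0

        def parallel_prefix_sums(a, n_workers=4):
            size = -(-len(a) // n_workers)                # ceil-divide into contiguous chunks
            chunks = [a[i:i + size] for i in range(0, len(a), size)]
            with ProcessPoolExecutor(max_workers=n_workers) as pool:
                scans = list(pool.map(local_scan, chunks))
            # The only inherently sequential step: a short scan over per-chunk totals.
            offsets = [0] + list(accumulate(t for _, t in scans))[:-1]
            return [s + off for (sums, _), off in zip(scans, offsets) for s in sums]

        if __name__ == "__main__":
            data = list(range(1, 11))
            assert parallel_prefix_sums(data) == list(accumulate(data))

    The final re-offset pass is written serially here for brevity, but it is itself embarrassingly parallel; only the scan over the per-chunk totals is unavoidable sequential work.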

  16. ONE SEGMENT OF THE BULGARIAN-ENGLISH PAREMIOLOGICAL CORE

    Directory of Open Access Journals (Sweden)

    KOTOVA M.Y.

    2015-12-01

    The English proverbial parallels of the Russian-Bulgarian paremiological core are analysed in the article. The comparison of current Bulgarian proverbs and their English proverbial parallels is based upon the material of the author's multi-lingual dictionary and her collection of Bulgarian-Russian proverbial parallels, published as a result of her sociolinguistic paremiological experiment of 2003 (on the basis of 100 questionnaires filled in by 100 Bulgarian respondents) and supported in 2013 with current Bulgarian contexts from the Bulgarian Internet. The number of 'alive' Bulgarian-English proverbial parallels constructed from the paremiological questionnaires (pointed out by 70%-100% of respondents) is 62, the biggest part of which belongs to proverbial parallels with a similar inner form (35); i.e., the biggest part of this segment of the current Bulgarian-English paremiological core (reflecting the Russian paremiological minimum) consists of proverbial parallels with a similar inner form.

  17. Parallelization of a Monte Carlo particle transport simulation code

    Science.gov (United States)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200-node dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow higher particle energies to be studied with more accurate physical models, and improve statistics, since more particle tracks can be simulated in less time.
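
    A minimal mpi4py sketch of the pattern described above: each rank simulates its own batch of histories with an independent random stream, and partial tallies are reduced to rank 0. The toy kernel is a placeholder, not MC4's transport physics.

```python
# Run with e.g.: mpiexec -n 4 python mc_sketch.py
# Each rank tracks its own particle batch with an independent RNG stream;
# partial tallies are reduced to rank 0. The "transport" kernel below is
# a placeholder, not MC4's electron-transport physics.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

N_PER_RANK = 100_000
rng = np.random.default_rng(seed=12345 + rank)  # distinct stream per rank

# Placeholder kernel: sample exponential path lengths and tally the sum.
paths = rng.exponential(scale=1.0, size=N_PER_RANK)
local_sum = paths.sum()

total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"mean free path ~ {total / (N_PER_RANK * nprocs):.4f}")
```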

  18. Domain Specific Language for Geant4 Parallelization for Space-based Applications Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A major limiting factor in HPC growth is the requirement to parallelize codes to leverage emerging architectures, especially as single core performance has plateaued...

  19. Out-of-order parallel discrete event simulation for electronic system-level design

    CERN Document Server

    Chen, Weiwei

    2014-01-01

    This book offers readers a set of new approaches, tools, and techniques for facing the challenges of parallelization in the design of embedded systems. It provides an advanced parallel simulation infrastructure for efficient and effective system-level model validation and development, so as to build better products in less time. Since parallel discrete event simulation (PDES) has the potential to exploit the underlying parallel computational capability of today's multi-core simulation hosts, the author begins by reviewing the parallelization of discrete event simulation, identifyin...
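
    For context, a minimal sequential discrete event simulation (DES) kernel is sketched below; the event kinds and payloads are hypothetical. PDES distributes this event-processing loop across cores, and the out-of-order approach described in the book relaxes the strict global timestamp order that a single-queue loop enforces.

```python
# Minimal sequential DES kernel. PDES distributes this event processing
# across cores; the out-of-order approach relaxes the strict timestamp
# order that this single-queue loop enforces. Event payloads here are
# hypothetical.
import heapq

def simulate(initial_events, handlers, t_end):
    """Pop events in timestamp order; handlers may schedule new events."""
    queue = list(initial_events)          # (time, kind, payload) tuples
    heapq.heapify(queue)
    while queue:
        time, kind, payload = heapq.heappop(queue)
        if time > t_end:
            break
        for new_event in handlers[kind](time, payload):
            heapq.heappush(queue, new_event)

# Example: a self-rescheduling clock tick.
ticks = []

def on_tick(t, payload):
    ticks.append(t)
    return [(t + 1.0, "tick", payload)]   # schedule the next tick

simulate([(0.0, "tick", None)], {"tick": on_tick}, t_end=5.0)
assert ticks == [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
```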

  20. C++ and Massively Parallel Computers

    Directory of Open Access Journals (Sweden)

    Daniel J. Lickly

    1993-01-01

    Full Text Available Our goal is to apply the software engineering advantages of object-oriented programming to the raw power of massively parallel architectures. To do this we have constructed a hierarchy of C++ classes to support the data-parallel paradigm. Feasibility studies and initial coding can be supported by any serial machine that has a C++ compiler. Parallel execution requires an extended Cfront, which understands the data-parallel classes and generates C* code. (C* is a data-parallel superset of ANSI C developed by Thinking Machines Corporation.) This approach provides potential portability across parallel architectures and leverages the existing compiler technology for translating data-parallel programs onto both SIMD and MIMD hardware.

  1. Computer Assisted Parallel Program Generation

    CERN Document Server

    Kawata, Shigeo

    2015-01-01

    Parallel computation is widely employed in scientific research, engineering activities, and product development. Writing parallel programs is not always a simple task, depending on the problem being solved. Large-scale scientific computing, huge data analyses, and precise visualizations, for example, require parallel computation, and parallel computing in turn needs parallelization techniques. In this chapter, support for parallel program generation is discussed and a computer-assisted parallel program generation system, P-NCAS, is introduced. Computer-assisted problem solving is one of the key methods to promote innovation in science and engineering, and it contributes to enriching our society and our lives, moving toward a programming-free environment in computing science. Research activities on problem solving environments (PSE) started in the 1970s to enhance programming power. The P-NCAS is one of the PSEs; the PSE concept provides an integrated, human-friendly computational software and hardware system to solve a target ...

  2. Parallel Ecological Speciation in Plants?

    Directory of Open Access Journals (Sweden)

    Katherine L. Ostevik

    2012-01-01

    Full Text Available Populations that have independently evolved reproductive isolation from their ancestors while remaining reproductively cohesive have undergone parallel speciation. A specific type of parallel speciation, known as parallel ecological speciation, is one of several forms of evidence for ecology's role in speciation. In this paper we search the literature for candidate examples of parallel ecological speciation in plants. We use four explicit criteria (independence, isolation, compatibility, and selection to judge the strength of evidence for each potential case. We find that evidence for parallel ecological speciation in plants is unexpectedly scarce, especially relative to the many well-characterized systems in animals. This does not imply that ecological speciation is uncommon in plants. It only implies that evidence from parallel ecological speciation is rare. Potential explanations for the lack of convincing examples include a lack of rigorous testing and the possibility that plants are less prone to parallel ecological speciation than animals.

  3. Multi-Core BDD Operations for Symbolic Reachability

    NARCIS (Netherlands)

    van Dijk, Tom; Laarman, Alfons; van de Pol, Jan Cornelis; Heljanko, K.; Knottenbelt, W.J.

    2012-01-01

    This paper presents scalable parallel BDD operations for modern multi-core hardware. We aim at increasing the performance of reachability analysis in the context of model checking. Existing approaches focus on performing multiple independent BDD operations rather than parallelizing the BDD

  4. Parallel Computing in SCALE

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark D [ORNL; Williams, Mark L [ORNL; Bowman, Stephen M [ORNL

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  5. WASCAL - West African Science Service Center on Climate Change and Adapted Land Use Regional Climate Simulations and Land-Atmosphere Simulations for West Africa at DKRZ and elsewhere

    Science.gov (United States)

    Hamann, Ilse; Arnault, Joel; Bliefernicht, Jan; Klein, Cornelia; Heinzeller, Dominikus; Kunstmann, Harald

    2014-05-01

    accompanied by the WASCAL Graduate Research Program on the West African Climate System. The GRP-WACS provides ten three-year scholarships per year for West African PhD students. Present and future WASCAL PhD students will constitute one important user group of the Linux cluster that will be installed at the Competence Center in Ouagadougou, Burkina Faso. Regional Land-Atmosphere Simulations A key research activity of the WASCAL Core Research Program is the analysis of interactions between the land surface and the atmosphere, to investigate how land surface changes affect hydro-meteorological surface fluxes such as evapotranspiration. Since the current land surface models of global and regional climate models neglect dominant lateral hydrological processes such as surface runoff, a novel land surface model is used, the NCAR Distributed Hydrological Modeling System (NDHMS). This model can be coupled to WRF (WRF-Hydro) to perform two-way coupled atmospheric-hydrological simulations for the watershed of interest. Hardware and network prerequisites include an HPC cluster, network switches, internal storage media, and Internet connectivity of sufficient bandwidth. Competences needed are HPC, storage, and visualization systems optimized for climate research; parallelization and optimization of climate models and workflows; and efficient management of very high data volumes.

  6. MPI/OpenMP Hybrid Parallel Algorithm of Resolution of Identity Second-Order Møller-Plesset Perturbation Calculation for Massively Parallel Multicore Supercomputers.

    Science.gov (United States)

    Katouda, Michio; Nakajima, Takahito

    2013-12-10

    A new algorithm for massively parallel calculations of the electron correlation energy of large molecules, based on the resolution-of-identity second-order Møller-Plesset perturbation (RI-MP2) technique, is developed and implemented into the quantum chemistry software NTChem. In this algorithm, a Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) hybrid parallel programming model is applied to attain efficient parallel performance on massively parallel supercomputers. An in-core storage scheme for the intermediate data of the three-center electron repulsion integrals, utilizing distributed memory, is developed to eliminate input/output (I/O) overhead. The parallel performance of the algorithm is tested on massively parallel supercomputers such as the K computer (using up to 45 992 central processing unit (CPU) cores) and a commodity Intel Xeon cluster (using up to 8192 CPU cores). The parallel RI-MP2/cc-pVTZ calculation of two-layer nanographene sheets (C150H30)2 (9640 atomic orbitals) is performed using 8991 nodes and 71 288 CPU cores of the K computer.
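
    A schematic of the MPI/OpenMP hybrid decomposition in Python terms (mpi4py across ranks, a thread pool within each rank standing in for OpenMP); the per-block kernel is a placeholder, not the RI-MP2 integrals.

```python
# Run with e.g.: mpiexec -n 4 python hybrid_sketch.py
# Outer level: MPI across ranks. Inner level: a thread pool within each
# rank, the OpenMP analogue. The kernel is a stand-in for a per-block
# contraction; NumPy releases the GIL inside the matrix product.
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

def chunk_work(i):
    a = np.random.default_rng(i).random((200, 200))
    return np.trace(a @ a.T)

# Outer MPI level: round-robin distribution of 64 blocks across ranks.
my_blocks = range(rank, 64, nprocs)

# Inner threaded level within one rank.
with ThreadPoolExecutor(max_workers=4) as pool:
    local = sum(pool.map(chunk_work, my_blocks))

total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"reduced result: {total:.6f}")
```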

  7. Five-year external reviews of the eight Department of Interior Climate Science Centers: Southeast Climate Science Center

    Science.gov (United States)

    Rice, Kenneth G.; Beier, Paul; Breault, Tim; Middleton, Beth A.; Peck, Myron A.; Tirpak, John M.; Ratnaswamy, Mary; Austen, Douglas; Harrison, Sarah

    2017-01-01

    In 2008, the U.S. Congress authorized the establishment of the National Climate Change and Wildlife Science Center (NCCWSC) within the U.S. Department of Interior (DOI). Housed administratively within the U.S. Geological Survey (USGS), NCCWSC is part of the DOI’s ongoing mission to meet the challenges of climate change and its effects on wildlife and aquatic resources. From 2010 through 2012, NCCWSC established eight regional DOI Climate Science Centers (CSCs). Each of these regional CSCs operated with the mission to “synthesize and integrate climate change impact data and develop tools that the Department’s managers and partners can use when managing the Department’s land, water, fish and wildlife, and cultural heritage resources” (Salazar 2009). The model developed by NCCWSC for the regional CSCs employed a dual approach of a federal USGS-staffed component and a parallel host-university component established competitively through a 5-year cooperative agreement with NCCWSC. At the conclusion of this 5-year agreement, a review of each CSC was undertaken, with the Southeast Climate Science Center (SE CSC) review in February 2016. The SE CSC is hosted by North Carolina State University (NCSU) in Raleigh, North Carolina, and is physically housed within the NCSU Department of Applied Ecology along with the Center for Applied Aquatic Ecology, the North Carolina Cooperative Fish and Wildlife Research Unit (CFWRU), and the North Carolina Agromedicine Institute. The U.S. Department of Agriculture Southeast Regional Climate Hub is based at NCSU as is the National Oceanic and Atmospheric Administration (NOAA) Southeast Regional Climate Center, the North Carolina Institute for Climate Studies, the North Carolina Wildlife Resources Commission, the NOAA National Weather Service, the State Climate Office of North Carolina, and the U.S. Forest Service Eastern Forest Environmental Threat Assessment Center. This creates a strong core of organizations operating in

  8. Parallel Polarization State Generation

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated as a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially separated polarization components of a laser with a digital micromirror device and subsequently combining the beams. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics, with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
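
    A small worked example of the abstract's central mathematical idea, in NumPy: a serial optical train composes as a matrix product, while combining spatially separated, individually modulated beams composes as a weighted matrix sum. The specific Jones matrices and weights below are arbitrary illustrations.

```python
# Jones-calculus illustration: serial architecture = matrix *product*,
# parallel (beam-combined) architecture = weighted matrix *sum*.
# The matrices and weights are arbitrary examples, not the paper's setup.
import numpy as np

H = np.array([[1, 0], [0, 0]], dtype=complex)        # horizontal polarizer
QWP = np.array([[1, 0], [0, 1j]], dtype=complex)     # quarter-wave plate

x_in = np.array([1, 1], dtype=complex) / np.sqrt(2)  # 45-degree input SOP

# Serial architecture: elements applied one after another (product).
serial_out = QWP @ H @ x_in

# Parallel architecture: each path applies one element plus an intensity
# modulation w_k, then the beams are recombined (sum of matrices).
w = [0.7, 0.3]
parallel_out = (w[0] * H + w[1] * QWP) @ x_in

print("serial SOP:  ", serial_out)
print("parallel SOP:", parallel_out)
```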

  9. Accelerated Parallel Texture Optimization

    Institute of Scientific and Technical Information of China (English)

    Hao-Da Huang; Xin Tong; Wen-Cheng Wang

    2007-01-01

    Texture optimization is a texture synthesis method that can efficiently reproduce various features of exemplar textures. However, its slow synthesis speed limits its usage in many interactive or real time applications. In this paper, we propose a parallel texture optimization algorithm to run on GPUs. In our algorithm, k-coherence search and principle component analysis (PCA) are used for hardware acceleration, and two acceleration techniques are further developed to speed up our GPU-based texture optimization. With a reasonable precomputation cost, the online synthesis speed of our algorithm is 4000+ times faster than that of the original texture optimization algorithm and thus our algorithm is capable of interactive applications. The advantages of the new scheme are demonstrated by applying it to interactive editing of flow-guided synthesis.

  10. Parallel Polarization State Generation

    CERN Document Server

    She, Alan

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated as a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially separated polarization components of a laser with a digital micromirror device and subsequently combining the beams. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristi...

  11. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take.

  12. Parallel algorithm strategies for circuit simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Schiek, Richard Louis; Keiter, Eric Richard

    2010-01-01

    Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. However, they have been pushed to their performance limits in addressing circuit design challenges that come from the technology drivers of smaller feature scales and higher integration. Improving the performance of circuit simulation tools by exploiting new opportunities in widely available multi-processor architectures is a logical next step. Unfortunately, not all traditional simulation applications are inherently parallel, and quickly adapting mature application codes (even codes designed as parallel applications) to new parallel paradigms can be prohibitively difficult. In general, performance is influenced by many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, the use of mini-applications, small self-contained proxies for real applications, is an excellent approach for rapidly exploring the parameter space of all these choices. In this report we present a multi-core performance study of Xyce, a transistor-level circuit simulation tool, and describe the future development of a mini-application for circuit simulation.

  13. Parallel Binomial American Option Pricing with (and without) Transaction Costs

    CERN Document Server

    Zhang, Nan; Zastawniak, Tomasz

    2011-01-01

    We present a parallel algorithm that computes the ask and bid prices of an American option when proportional transaction costs apply to the trading of the underlying asset. The algorithm computes the prices on recombining binomial trees, and is designed for modern multi-core processors. Although parallel option pricing has been well studied, none of the existing approaches takes transaction costs into consideration. The algorithm that we propose partitions a binomial tree into blocks. In any round of computation a block is further partitioned into regions which are assigned to distinct processors. To minimise load imbalance the assignment of nodes to processors is dynamically adjusted before each new round starts. Synchronisation is required both within a round and between two successive rounds. The parallel speedup of the algorithm is proportional to the number of processors used. The parallel algorithm was implemented in C/C++ via POSIX Threads, and was tested on a machine with 8 processors. In the pricing ...
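
    For reference, the recombining-tree backward induction that such algorithms parallelize, written serially and without transaction costs; the parameter values are arbitrary, and the paper's block partitioning and dynamic load balancing are not reproduced here.

```python
# Serial CRR backward induction for an American put on a recombining
# binomial tree, without transaction costs. The paper's contribution is
# partitioning these per-level updates into blocks assigned to cores.
import math

def american_put_crr(S0, K, r, sigma, T, n):
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))      # up factor
    d = 1.0 / u                              # down factor
    p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Terminal payoffs at the n+1 leaves of the recombining tree.
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]

    # Backward induction; the nodes within each level are independent,
    # which is what makes a per-level block decomposition possible.
    for level in range(n - 1, -1, -1):
        for j in range(level + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(K - S0 * u**j * d**(level - j), 0.0)
            values[j] = max(cont, exercise)
    return values[0]

print(american_put_crr(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500))
```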

  14. schwimmbad: A uniform interface to parallel processing pools in Python

    Science.gov (United States)

    Price-Whelan, Adrian M.; Foreman-Mackey, Daniel

    2017-09-01

    Many scientific and computing problems require doing some calculation on all elements of some data set. If the calculations can be executed in parallel (i.e. without any communication between calculations), these problems are said to be perfectly parallel. On computers with multiple processing cores, these tasks can be distributed and executed in parallel to greatly improve performance. A common paradigm for handling these distributed computing problems is to use a processing "pool": the "tasks" (the data) are passed in bulk to the pool, and the pool handles distributing the tasks to a number of worker processes when available. schwimmbad provides a uniform interface to parallel processing pools and enables switching easily between local development (e.g., serial processing or with multiprocessing) and deployment on a cluster or supercomputer (via, e.g., MPI or JobLib).
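
    The pool-switching pattern the abstract describes looks roughly like the following; the keyword names follow the package's documented choose_pool interface, but treat the exact signatures as assumptions.

```python
# Typical schwimmbad usage: the same worker and map call run serially,
# with multiprocessing, or under MPI, selected by choose_pool.
from schwimmbad import choose_pool

def worker(task):
    return task ** 2       # any perfectly parallel per-element computation

def main(pool):
    results = list(pool.map(worker, range(100)))
    print(sum(results))

if __name__ == "__main__":
    # processes=N selects a multiprocessing pool; mpi=True would select
    # an MPI pool (run under mpiexec); neither gives a serial pool.
    pool = choose_pool(mpi=False, processes=4)
    main(pool)
    pool.close()
```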

  15. A Prototype Embedded Microprocessor Interconnect for Distributed and Parallel Computing

    Directory of Open Access Journals (Sweden)

    Bryan Hughes

    2008-08-01

    Full Text Available Parallel computing is currently undergoing a transition from a niche use to widespread acceptance due to new, computationally intensive applications and multi-core processors. While parallel processing is an invaluable tool for increasing performance, more time and expertise are required to develop a parallel system than are required for sequential systems. This paper discusses a toolkit currently in development that will simplify both the hardware and software development of embedded distributed and parallel systems. The hardware interconnection mechanism uses the Serial Peripheral Interface as a physical medium and provides routing and management services for the system. The topics in this paper are primarily limited to the interconnection aspect of the toolkit.

  16. Heterogeneous Highly Parallel Implementation of Matrix Exponentiation Using GPU

    CERN Document Server

    Raja, Chittampally Vasanth; Raghavendra, Prakash S; 10.5121/ijdps.2012.3209

    2012-01-01

    The vision of a supercomputer on every desk can be realized by powerful and highly parallel CPUs, GPUs, or APUs. Graphics processors, once specialized for graphics applications only, are now used for highly computation-intensive general-purpose applications. GFLOP- and TFLOP-scale performance, once very expensive, has become cheap with GPGPUs. The current work focuses mainly on a highly parallel implementation of matrix exponentiation. Matrix exponentiation is widely used in many areas of the scientific community, ranging from highly critical flight and CAD simulations to financial and statistical applications. The proposed solution for matrix exponentiation uses OpenCL to exploit the massive parallelism offered by many-core GPGPUs. It employs many general GPU optimizations as well as architecture-specific optimizations. The experiments cover optimizations targeted specifically at scientific graphics cards (Tesla C2050). The heterogeneous highly parallel matrix exponentiation method has been tested for matrices of ...
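
    The underlying algorithm, matrix exponentiation by repeated squaring, is sketched below in NumPy; each matrix multiplication is the step that the paper offloads to the many-core GPU via OpenCL.

```python
# Integer matrix powers A**n via repeated squaring, the algorithm behind
# the GPU implementation described above; NumPy stands in here for the
# OpenCL kernels, since each multiply is the step offloaded to the device.
import numpy as np

def matrix_power(A, n):
    """Compute A**n with O(log n) matrix multiplications."""
    result = np.eye(A.shape[0], dtype=A.dtype)
    base = A.copy()
    while n > 0:
        if n & 1:                # odd exponent: fold in the current base
            result = result @ base
        base = base @ base       # square
        n >>= 1
    return result

A = np.array([[1.0, 1.0], [1.0, 0.0]])   # Fibonacci matrix
assert np.allclose(matrix_power(A, 10), np.linalg.matrix_power(A, 10))
```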

  17. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far do not match the effort invested. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution, and collaborative systems.

  18. An evaluation of parallel optimization for OpenSolaris Network Stack

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Hongbo; Wu, Wenji; /Fermilab; Sun, Xian-He; /IIT, Chicago; DeMar, Phil; Crawford, Matt; /Fermilab

    2010-10-01

    Computing is now shifting towards multiprocessing. The fundamental goal of multiprocessing is improved performance through the introduction of additional hardware threads or cores (referred to as 'cores' for simplicity). Modern network stacks can exploit parallel cores to allow either message-based parallelism or connection-based parallelism as a means to enhance performance. OpenSolaris has been redesigned and parallelized to better utilize additional cores. Three special technologies, named Soft Ring Set, Soft Ring, and Squeue, are introduced in OpenSolaris for stack parallelization. In this paper, we study the OpenSolaris packet receiving process and its core parallelism optimization techniques. Experimental results show that these techniques allow OpenSolaris to achieve better network I/O performance in multiprocessing environments; however, network stack parallelization has also brought extra overheads for the system. An effective and efficient network I/O optimization in multiprocessing environments is required to cross all layers of the network stack, from network interface to application.

  19. Climate change and population history in the Pacific Lowlands of Southern Mesoamerica

    Science.gov (United States)

    Neff, Hector; Pearsall, Deborah M.; Jones, John G.; Arroyo de Pieters, Bárbara; Freidel, Dorothy E.

    2006-05-01

    Core MAN015 from Pacific coastal Guatemala contains sediments accumulated in a mangrove setting over the past 6500 yr. Chemical, pollen, and phytolith data, which indicate conditions of estuarine deposition and terrigenous inputs from adjacent dry land, document Holocene climate variability that parallels that of the Maya lowlands and other New World tropical locations. Human population history in this region may have been driven partly by climate variation: sedentary human populations spread rapidly through the estuarine zone of the lower coast during a dry and variable 4th millennium B.P. Population growth and cultural florescence during a long, relatively moist period (2800-1200 B.P.) ended around 1200 B.P. with a drying event that coincided with the Classic Maya collapse.

  20. The Parallel Curriculum: A Design To Develop High Potential and Challenge High-Ability Learners.

    Science.gov (United States)

    Tomlinson, Carol Ann; Kaplan, Sandra N.; Renzulli, Joseph S.; Purcell, Jeanne; Leppien, Jann; Burns, Deborah

    This book presents a model of curriculum development for gifted students and offers four parallel approaches that focus on ascending intellectual demand as students develop expertise in learning. The parallel curriculum's four approaches include: (1) the core or basic curriculum; (2) the curriculum of connections, which expands on the core…