WorldWideScience

Sample records for parallel columns mapping

  1. Column-Oriented Storage Techniques for MapReduce

    OpenAIRE

    Floratou, Avrilia; Patel, Jignesh; Shekita, Eugene; Tata, Sandeep

    2011-01-01

    Users of MapReduce often run into performance problems when they scale up their workloads. Many of the problems they encounter can be overcome by applying techniques learned from over three decades of research on parallel DBMSs. However, translating these techniques to a MapReduce implementation such as Hadoop presents unique challenges that can lead to new design choices. This paper describes how column-oriented storage techniques can be incorporated in Hadoop in a way that preserves its pop...

  2. Adaptive query parallelization in multi-core column stores

    NARCIS (Netherlands)

    M.M. Gawade (Mrunal); M.L. Kersten (Martin); M.M. Gawade (Mrunal); M.L. Kersten (Martin)

    2016-01-01

    With the rise of multi-core CPU platforms, their optimal utilization for in-memory OLAP workloads using column store databases has become one of the biggest challenges. Some of the inherent limitations in the achievable query parallelism are due to the degree of parallelism

  3. Multi-core parallelism in a column-store

    NARCIS (Netherlands)

    Gawade, M.M.

    2017-01-01

    The research reported in this thesis addresses several challenges of improving the efficiency and effectiveness of parallel processing of analytical database queries on modern multi- and many-core systems, using an open-source column-oriented analytical database management system, MonetDB, for

  4. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
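
    The bottleneck problem underlying this kind of mapping can be made concrete with a small sketch. Below is a minimal Python illustration (not Nicol's algorithm) of partitioning a chain of m module weights into n contiguous intervals, one per processor of a linear array, so the most heavily loaded processor is as light as possible; for integer weights, a binary search over the bottleneck with a greedy feasibility probe suffices.

        def feasible(weights, n, cap):
            """Can the chain be cut into <= n intervals, each of total load <= cap?"""
            intervals, load = 1, 0
            for w in weights:
                if w > cap:
                    return False
                if load + w > cap:
                    intervals, load = intervals + 1, w   # start a new interval
                else:
                    load += w
            return intervals <= n

        def min_bottleneck(weights, n):
            lo, hi = max(weights), sum(weights)
            while lo < hi:
                mid = (lo + hi) // 2
                if feasible(weights, n, mid):
                    hi = mid
                else:
                    lo = mid + 1
            return lo

        print(min_bottleneck([4, 2, 7, 1, 5, 3], n=3))   # -> 8, e.g. [4,2] [7,1] [5,3]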

  5. Topology in Synthetic Column Density Maps for Interstellar Turbulence

    Science.gov (United States)

    Putko, Joseph; Burkhart, B. K.; Lazarian, A.

    2013-01-01

    We show how the topology tool known as the genus statistic can be utilized to characterize magnetohydrodynamic (MHD) turbulence in the ISM. The genus is measured with respect to a given density threshold, and varying the threshold produces a genus curve, which can suggest an overall "meatball," neutral, or "Swiss cheese" topology through its integral. We use synthetic column density maps made from three-dimensional 512³ compressible MHD isothermal simulations performed for different sonic and Alfvénic Mach numbers (Ms and MA respectively). We study eight different Ms values each with one sub- and one super-Alfvénic counterpart. We consider sight-lines both parallel (x) and perpendicular (y and z) to the mean magnetic field. We find that the genus integral shows a dependence on both Mach numbers, and this is still the case even after adding beam smoothing and Gaussian noise to the maps to mimic observational data. The genus integral increases with higher Ms values (but saturates after about Ms = 4) for all lines of sight. This is consistent with greater values of Ms resulting in stronger shocks, which results in a clumpier topology. We observe a larger genus integral for the sub-Alfvénic cases along the perpendicular lines of sight due to increased compression from the field lines and enhanced anisotropy. Application of the genus integral to column density maps should allow astronomers to infer the Mach numbers and thus learn about the environments of interstellar turbulence. This work was supported by the National Science Foundation's REU program through NSF Award AST-1004881.
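
    The genus statistic itself is straightforward to compute on a 2-D map. The sketch below is a simplified illustration, not the authors' pipeline: the genus at a threshold is approximated as the number of connected above-threshold regions minus the number of holes, counting as holes the below-threshold regions that do not touch the map border; sweeping the threshold yields the genus curve.

        import numpy as np
        from scipy import ndimage

        def genus(field, threshold):
            above = field > threshold
            n_regions = ndimage.label(above)[1]
            below, n_below = ndimage.label(~above)
            # below-threshold regions touching the border are not holes
            edge = set(below[0, :]) | set(below[-1, :]) | set(below[:, 0]) | set(below[:, -1])
            n_holes = n_below - len(edge - {0})
            return n_regions - n_holes

        def genus_curve(field, n_thresholds=50):
            ts = np.linspace(field.min(), field.max(), n_thresholds)
            return ts, np.array([genus(field, t) for t in ts])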

  6. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  7. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  8. Parallel image encryption algorithm based on discretized chaotic map

    International Nuclear Information System (INIS)

    Zhou Qing; Wong Kwokwo; Liao Xiaofeng; Xiang Tao; Hu Yue

    2008-01-01

    Recently, a variety of chaos-based algorithms were proposed for image encryption. Nevertheless, none of them works efficiently in parallel computing environment. In this paper, we propose a framework for parallel image encryption. Based on this framework, a new algorithm is designed using the discretized Kolmogorov flow map. It fulfills all the requirements for a parallel image encryption algorithm. Moreover, it is secure and fast. These properties make it a good choice for image encryption on parallel computing platforms

  9. A Design of a New Column-Parallel Analog-to-Digital Converter Flash for Monolithic Active Pixel Sensor.

    Science.gov (United States)

    Chakir, Mostafa; Akhamal, Hicham; Qjidaa, Hassan

    2017-01-01

    The CMOS Monolithic Active Pixel Sensor (MAPS) for the International Linear Collider (ILC) vertex detector (VXD) imposes stringent requirements on its analog readout electronics, specifically on the analog-to-digital converter (ADC). This paper concerns designing and optimizing a new architecture of a low power, high speed, and small-area 4-bit column-parallel ADC Flash. Later in this study, we propose to interpose an S/H block in the converter. This integration of the S/H block increases the sensitiveness of the converter to the very small amplitude of the input signal from the sensor and provides sufficient time for the converter to code the input signal. This ADC is developed in a 0.18 μm CMOS process with a pixel pitch of 35 μm. The proposed ADC responds to the constraints of power dissipation, size, and speed for the MAPS composed of a matrix of 64 rows and 48 columns, where each column ADC covers a small area of 35 × 336.76 μm². The proposed ADC consumes low power at a 1.8 V supply and 100 MS/s sampling rate with a dynamic range of 125 mV. Its DNL and INL are 0.0812/-0.0787 LSB and 0.0811/-0.0787 LSB, respectively. Furthermore, this ADC achieves a high speed of more than 5 GHz.

  10. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  11. Unpacking the cognitive map: the parallel map theory of hippocampal function.

    Science.gov (United States)

    Jacobs, Lucia F; Schenk, Françoise

    2003-04-01

    In the parallel map theory, the hippocampus encodes space with 2 mapping systems. The bearing map is constructed primarily in the dentate gyrus from directional cues such as stimulus gradients. The sketch map is constructed within the hippocampus proper from positional cues. The integrated map emerges when data from the bearing and sketch maps are combined. Because the component maps work in parallel, the impairment of one can reveal residual learning by the other. Such parallel function may explain paradoxes of spatial learning, such as learning after partial hippocampal lesions, taxonomic and sex differences in spatial learning, and the function of hippocampal neurogenesis. By integrating evidence from physiology to phylogeny, the parallel map theory offers a unified explanation for hippocampal function.

  12. Implementing Parallel Google Map-Reduce in Eden

    DEFF Research Database (Denmark)

    Berthold, Jost; Dieterle, Mischa; Loogen, Rita

    2009-01-01

    Recent publications have emphasised map-reduce as a general programming model (labelled Google map-reduce), and described existing high-performance implementations for large data sets. We present two parallel implementations for this Google map-reduce skeleton, one following earlier work, and one...... of the Google map-reduce skeleton in usage and performance, and deliver runtime analyses for example applications. Although very flexible, the Google map-reduce skeleton is often too general, and typical examples reveal a better runtime behaviour using alternative skeletons....
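
    The paper's skeletons are written in Eden, a parallel dialect of Haskell; purely as an illustration of the skeleton's shape, a Python analogue is sketched below: a parallel map emitting key-value pairs, a grouping step, and a per-key reduce.

        from collections import defaultdict
        from multiprocessing import Pool

        def map_reduce(map_fn, reduce_fn, inputs, workers=4):
            with Pool(workers) as pool:
                mapped = pool.map(map_fn, inputs)      # parallel map phase
            groups = defaultdict(list)
            for pairs in mapped:                       # group by key
                for key, value in pairs:
                    groups[key].append(value)
            return {k: reduce_fn(k, vs) for k, vs in groups.items()}

        def wc_map(line):                              # classic word count
            return [(w, 1) for w in line.split()]

        def wc_reduce(word, counts):
            return sum(counts)

        if __name__ == "__main__":
            print(map_reduce(wc_map, wc_reduce, ["a b a", "b c"]))  # {'a': 2, 'b': 2, 'c': 1}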

  13. A 10-bit column-parallel cyclic ADC for high-speed CMOS image sensors

    International Nuclear Information System (INIS)

    Han Ye; Li Quanliang; Shi Cong; Wu Nanjian

    2013-01-01

    This paper presents a high-speed column-parallel cyclic analog-to-digital converter (ADC) for a CMOS image sensor. A correlated double sampling (CDS) circuit is integrated in the ADC, which avoids a stand-alone CDS circuit block. An offset cancellation technique is also introduced, which reduces the column fixed-pattern noise (FPN) effectively. One single channel ADC with an area less than 0.02 mm² was implemented in a 0.13 μm CMOS image sensor process. The resolution of the proposed ADC is 10-bit, and the conversion rate is 1.6 MS/s. The measured differential nonlinearity and integral nonlinearity are 0.89 LSB and 6.2 LSB together with CDS, respectively. The power consumption from 3.3 V supply is only 0.66 mW. An array of 48 10-bit column-parallel cyclic ADCs was integrated into an array of CMOS image sensor pixels. The measured results indicated that the ADC circuit is suitable for high-speed CMOS image sensors. (semiconductor integrated circuits)

  14. A Design of a New Column-Parallel Analog-to-Digital Converter Flash for Monolithic Active Pixel Sensor

    Directory of Open Access Journals (Sweden)

    Mostafa Chakir

    2017-01-01

    The CMOS Monolithic Active Pixel Sensor (MAPS) for the International Linear Collider (ILC) vertex detector (VXD) imposes stringent requirements on its analog readout electronics, specifically on the analog-to-digital converter (ADC). This paper concerns designing and optimizing a new architecture of a low power, high speed, and small-area 4-bit column-parallel ADC Flash. Later in this study, we propose to interpose an S/H block in the converter. This integration of the S/H block increases the sensitiveness of the converter to the very small amplitude of the input signal from the sensor and provides sufficient time for the converter to code the input signal. This ADC is developed in a 0.18 μm CMOS process with a pixel pitch of 35 μm. The proposed ADC responds to the constraints of power dissipation, size, and speed for the MAPS composed of a matrix of 64 rows and 48 columns, where each column ADC covers a small area of 35 × 336.76 μm². The proposed ADC consumes low power at a 1.8 V supply and 100 MS/s sampling rate with a dynamic range of 125 mV. Its DNL and INL are 0.0812/−0.0787 LSB and 0.0811/−0.0787 LSB, respectively. Furthermore, this ADC achieves a high speed of more than 5 GHz.

  15. A Parallel Encryption Algorithm Based on Piecewise Linear Chaotic Map

    Directory of Open Access Journals (Sweden)

    Xizhong Wang

    2013-01-01

    We introduce a parallel chaos-based encryption algorithm for taking advantage of multicore processors. The chaotic cryptosystem is generated by the piecewise linear chaotic map (PWLCM). The parallel algorithm is designed with a master/slave communication model with the Message Passing Interface (MPI). The algorithm is suitable not only for multicore processors but also for the single-processor architecture. The experimental results show that the chaos-based cryptosystem possesses good statistical properties. The parallel algorithm provides much better performance than the serial one and would be useful for encrypting/decrypting large files or multimedia.
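
    The PWLCM at the core of the cryptosystem is simple to state. The sketch below shows the map and a coarse-grained keystream drawn from its orbit; it is an illustration of the map only, with assumed parameters, and does not reproduce the paper's master/slave MPI design.

        def pwlcm(x, p):
            """Piecewise linear chaotic map; p in (0, 0.5), x in [0, 1]."""
            if x < p:
                return x / p
            if x <= 0.5:
                return (x - p) / (0.5 - p)
            return pwlcm(1.0 - x, p)                   # the map is symmetric about x = 0.5

        def keystream(x0, p, n):
            x, out = x0, bytearray()
            for _ in range(n):
                x = pwlcm(x, p)
                if x <= 0.0 or x >= 1.0:               # guard the toy orbit's degenerate endpoints
                    x = 0.5
                out.append(int(x * 256) % 256)         # coarse-grain the orbit to a byte
            return bytes(out)

        msg = b"plaintext"
        cipher = bytes(m ^ k for m, k in zip(msg, keystream(0.123456, 0.3, len(msg))))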

  16. Long Read Alignment with Parallel MapReduce Cloud Platform

    Directory of Open Access Journals (Sweden)

    Ahmed Abdulhakim Al-Absi

    2015-01-01

    Genomic sequence alignment is an important technique to decode genome sequences in bioinformatics. Next-Generation Sequencing technologies produce genomic data of longer reads. Cloud platforms are adopted to address the problems arising from storage and analysis of large genomic data. Existing gene sequencing tools for cloud platforms predominantly consider short read gene sequences and adopt the Hadoop MapReduce framework for computation. However, serial execution of map and reduce phases is a problem in such systems. Therefore, in this paper, we introduce the Burrows-Wheeler Aligner's Smith-Waterman Alignment on Parallel MapReduce (BWASW-PMR) cloud platform for long sequence alignment. The proposed cloud platform adopts the widely accepted and accurate BWA-SW algorithm for long sequence alignment. A custom MapReduce platform is developed to overcome the drawbacks of the Hadoop framework. A parallel execution strategy of the MapReduce phases and optimization of the Smith-Waterman algorithm are considered. Performance evaluation results exhibit an average speed-up of 6.7 for BWASW-PMR compared with the state-of-the-art Bwasw-Cloud. An average reduction of 30% in the map phase makespan is reported across all experiments comparing BWASW-PMR with Bwasw-Cloud. Optimization of Smith-Waterman reduces the execution time by 91.8%. The experimental study proves the efficiency of BWASW-PMR for aligning long genomic sequences on cloud platforms.
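
    The alignment kernel at the heart of BWA-SW is the Smith-Waterman recurrence; a minimal single-threaded sketch is given below (the scoring values are illustrative assumptions, and the paper's parallelised, optimised variant is not reproduced).

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
            """Score of the best local alignment between strings a and b."""
            H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
            best = 0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                    H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                    best = max(best, H[i][j])
            return best

        print(smith_waterman("ACACACTA", "AGCACACA"))  # -> 12 with these scores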

  17. Long Read Alignment with Parallel MapReduce Cloud Platform

    Science.gov (United States)

    Al-Absi, Ahmed Abdulhakim; Kang, Dae-Ki

    2015-01-01

    Genomic sequence alignment is an important technique to decode genome sequences in bioinformatics. Next-Generation Sequencing technologies produce genomic data of longer reads. Cloud platforms are adopted to address the problems arising from storage and analysis of large genomic data. Existing gene sequencing tools for cloud platforms predominantly consider short read gene sequences and adopt the Hadoop MapReduce framework for computation. However, serial execution of map and reduce phases is a problem in such systems. Therefore, in this paper, we introduce the Burrows-Wheeler Aligner's Smith-Waterman Alignment on Parallel MapReduce (BWASW-PMR) cloud platform for long sequence alignment. The proposed cloud platform adopts the widely accepted and accurate BWA-SW algorithm for long sequence alignment. A custom MapReduce platform is developed to overcome the drawbacks of the Hadoop framework. A parallel execution strategy of the MapReduce phases and optimization of the Smith-Waterman algorithm are considered. Performance evaluation results exhibit an average speed-up of 6.7 for BWASW-PMR compared with the state-of-the-art Bwasw-Cloud. An average reduction of 30% in the map phase makespan is reported across all experiments comparing BWASW-PMR with Bwasw-Cloud. Optimization of Smith-Waterman reduces the execution time by 91.8%. The experimental study proves the efficiency of BWASW-PMR for aligning long genomic sequences on cloud platforms. PMID:26839887

  18. Automated integration of genomic physical mapping data via parallel simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Slezak, T.

    1994-06-01

    The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques including: Fluorescence In Situ Hybridization (FISH) at 3 levels of resolution, ECO restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, AC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps on the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
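
    The core optimisation loop is easy to sketch. Below is a toy serial version (the LLNL system's FISH ordering constraints and 40-machine server/client layer are omitted): permute the N clone columns of a 0/1 membership matrix to minimise the number of gaps summed over the M object rows, accepting uphill moves with the usual annealing probability.

        import math, random

        def gaps(matrix, order):
            """Total gaps: for each row, (number of runs of 1s) - 1, when positive."""
            total = 0
            for row in matrix:
                runs, inside = 0, False
                for j in order:
                    if row[j] and not inside:
                        runs += 1
                    inside = bool(row[j])
                total += max(0, runs - 1)
            return total

        def anneal(matrix, n_cols, temp=10.0, cooling=0.999, steps=20000):
            order = list(range(n_cols))
            cost = gaps(matrix, order)
            for _ in range(steps):
                i, j = random.sample(range(n_cols), 2)
                order[i], order[j] = order[j], order[i]        # propose a column swap
                new = gaps(matrix, order)
                if new <= cost or random.random() < math.exp((cost - new) / temp):
                    cost = new                                 # accept
                else:
                    order[i], order[j] = order[j], order[i]    # undo
                temp *= cooling
            return order, cost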

  19. Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.

    Science.gov (United States)

    Bhandarkar, S M; Chirravuri, S; Arnold, J

    1996-01-01

    Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048 processor MasPar MP-2 system which is an SIMD 2-D toroidal mesh architecture whereas the MIMD algorithms are implemented on an 8 processor Intel iPSC/860 which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs the best. For a coarse-grained MIMD architecture with high synchronization overhead such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.

  20. Column-by-column compositional mapping by Z-contrast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Molina, S.I. [Departamento de Ciencia de los Materiales e I.M. y Q.I., Facultad de Ciencias, Universidad de Cadiz, Campus Rio San Pedro, s/n, 11510 Puerto Real, Cadiz (Spain)], E-mail: sergio.molina@uca.es; Sales, D.L. [Departamento de Ciencia de los Materiales e I.M. y Q.I., Facultad de Ciencias, Universidad de Cadiz, Campus Rio San Pedro, s/n, 11510 Puerto Real, Cadiz (Spain); Galindo, P.L. [Departamento de Lenguajes y Sistemas Informaticos, CASEM, Universidad de Cadiz, Campus Rio San Pedro, s/n, 11510 Puerto Real, Cadiz (Spain); Fuster, D.; Gonzalez, Y.; Alen, B.; Gonzalez, L. [Instituto de Microelectronica de Madrid (CNM, CSIC), Isaac Newton 8, 28760 Tres Cantos, Madrid (Spain); Varela, M.; Pennycook, S.J. [Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)

    2009-01-15

    A phenomenological method is developed to determine the composition of materials, with atomic column resolution, by analysis of integrated intensities of aberration-corrected Z-contrast scanning transmission electron microscopy images. The method is exemplified for InAsₓP₁₋ₓ alloys using epitaxial thin films with calibrated compositions as standards. Using this approach we have determined the composition of the two-dimensional wetting layer formed between self-assembled InAs quantum wires on InP(001) substrates.

  1. Parallel pipeline algorithm of real time star map preprocessing

    Science.gov (United States)

    Wang, Hai-yong; Qin, Tian-mu; Liu, Jia-qi; Li, Zhi-feng; Li, Jian-hua

    2016-03-01

    To improve the preprocessing speed of star maps and reduce the resource consumption of the embedded system of a star tracker, a parallel pipeline real-time preprocessing algorithm is presented. Two characteristics, the mean and the noise standard deviation of the background gray level of a star map, are first obtained dynamically, with the influence of the star images themselves on the background removed in advance. A criterion for whether the subsequent noise filtering is needed is established, and the extraction threshold value is then assigned according to the level of background noise, so that the centroiding accuracy is guaranteed. In the processing algorithm, as few as two lines of pixel data are buffered, and only 100 shift registers are used to record the connected domain labels, which solves the problems of resource waste and connected domain overflow. The simulation results show that the necessary data of the selected bright stars can be accessed within a delay as short as 10 μs after the pipeline processing of a 496×496 star map at 50 Mb/s is finished, and the needed memory and register resources total less than 80 kb. To verify the accuracy of the proposed algorithm, different levels of background noise were added to the processed ideal star map; the statistical centroiding error is smaller than 1/23 pixel under the condition that the signal-to-noise ratio is greater than 1. The parallel pipeline algorithm for real-time star map preprocessing helps to increase the data output speed and the anti-dynamic performance of a star tracker.
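
    The extraction-and-centroiding portion of this chain can be sketched in a few lines (a simplified software illustration with an assumed threshold factor; the two-line buffering and register-level connected-domain labelling are hardware details not modelled here).

        import numpy as np
        from scipy import ndimage

        def centroid_stars(img, k=5.0):
            mean, sigma = img.mean(), img.std()        # crude background statistics
            mask = img > mean + k * sigma              # extraction threshold
            labels, n = ndimage.label(mask)            # connected bright regions
            # intensity-weighted centroid of each candidate star
            return ndimage.center_of_mass(img, labels, range(1, n + 1))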

  2. Column-Parallel Single Slope ADC with Digital Correlated Multiple Sampling for Low Noise CMOS Image Sensors

    NARCIS (Netherlands)

    Chen, Y.; Theuwissen, A.J.P.; Chae, Y.

    2011-01-01

    This paper presents a low noise CMOS image sensor (CIS) using 10/12 bit configurable column-parallel single slope ADCs (SS-ADCs) and digital correlated multiple sampling (CMS). The sensor used is a conventional 4T active pixel with a pinned-photodiode as photon detector. The test sensor was

  3. Supercritical Fluid Chromatography of Drugs: Parallel Factor Analysis for Column Testing in a Wide Range of Operational Conditions

    Science.gov (United States)

    Al-Degs, Yahya; Andri, Bertyl; Thiébaut, Didier; Vial, Jérôme

    2017-01-01

    Retention mechanisms involved in supercritical fluid chromatography (SFC) are influenced by interdependent parameters (temperature, pressure, chemistry of the mobile phase, and nature of the stationary phase), a complexity which makes the selection of a proper stationary phase for a given separation a challenging step. For the first time in SFC studies, Parallel Factor Analysis (PARAFAC) was employed to evaluate the chromatographic behavior of eight different stationary phases in a wide range of chromatographic conditions (temperature, pressure, and gradient elution composition). Design of Experiment was used to optimize experiments involving 14 pharmaceutical compounds present in biological and/or environmental samples and with dissimilar physicochemical properties. The results showed the superiority of PARAFAC for the analysis of the three-way (column × drug × condition) data array over unfolding the multiway array to matrices and performing several classical principal component analyses. Thanks to the PARAFAC components, similarity in columns' function, chromatographic trend of drugs, and correlation between separation conditions could be simply depicted: columns were grouped according to their H-bonding forces, while gradient composition was dominating for condition classification. Also, the number of drugs could be efficiently reduced for columns classification as some of them exhibited a similar behavior, as shown by hierarchical clustering based on PARAFAC components. PMID:28695040

  4. Supercritical Fluid Chromatography of Drugs: Parallel Factor Analysis for Column Testing in a Wide Range of Operational Conditions

    Directory of Open Access Journals (Sweden)

    Ramia Z. Al Bakain

    2017-01-01

    Retention mechanisms involved in supercritical fluid chromatography (SFC) are influenced by interdependent parameters (temperature, pressure, chemistry of the mobile phase, and nature of the stationary phase), a complexity which makes the selection of a proper stationary phase for a given separation a challenging step. For the first time in SFC studies, Parallel Factor Analysis (PARAFAC) was employed to evaluate the chromatographic behavior of eight different stationary phases in a wide range of chromatographic conditions (temperature, pressure, and gradient elution composition). Design of Experiment was used to optimize experiments involving 14 pharmaceutical compounds present in biological and/or environmental samples and with dissimilar physicochemical properties. The results showed the superiority of PARAFAC for the analysis of the three-way (column × drug × condition) data array over unfolding the multiway array to matrices and performing several classical principal component analyses. Thanks to the PARAFAC components, similarity in columns’ function, chromatographic trend of drugs, and correlation between separation conditions could be simply depicted: columns were grouped according to their H-bonding forces, while gradient composition was dominating for condition classification. Also, the number of drugs could be efficiently reduced for columns classification as some of them exhibited a similar behavior, as shown by hierarchical clustering based on PARAFAC components.
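
    As a sketch of the decomposition itself, the open-source tensorly package (an assumption; the paper does not name its software) can factor a (column × drug × condition) retention array, yielding one loading matrix per mode from which the groupings above are read.

        import numpy as np
        import tensorly as tl
        from tensorly.decomposition import parafac

        X = tl.tensor(np.random.rand(8, 14, 12))   # 8 columns, 14 drugs, 12 conditions
        weights, factors = parafac(X, rank=3)      # one loading matrix per mode
        col_load, drug_load, cond_load = factors
        print(col_load.shape, drug_load.shape, cond_load.shape)  # (8, 3) (14, 3) (12, 3)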

  5. Hardware-Oblivious Parallelism for In-Memory Column-Stores

    NARCIS (Netherlands)

    M. Heimel; M. Saecker; H. Pirk (Holger); S. Manegold (Stefan); V. Markl

    2013-01-01

    The multi-core architectures of today’s computer systems make parallelism a necessity for performance critical applications. Writing such applications in a generic, hardware-oblivious manner is a challenging problem: Current database systems thus rely on labor-intensive and error-prone

  6. Selective loss of orientation column maps in visual cortex during brief elevation of intraocular pressure.

    Science.gov (United States)

    Chen, Xin; Sun, Chao; Huang, Luoxiu; Shou, Tiande

    2003-01-01

    To compare the orientation column maps elicited by different spatial frequency gratings in cortical area 17 of cats before and during brief elevation of intraocular pressure (IOP). IOP was elevated by injecting saline into the anterior chamber of a cat's eye through a syringe needle. The IOP was elevated enough to cause a retinal perfusion pressure (arterial pressure minus IOP) of approximately 30 mm Hg during a brief elevation of IOP. The visual stimulus gratings were varied in spatial frequency, whereas other parameters were kept constant. The orientation column maps of the cortical area 17 were monocularly elicited by drifting gratings of different spatial frequencies and revealed by a brain intrinsic signal optical imaging system. These maps were compared before and during short-term elevation of IOP. The response amplitude of the orientation maps in area 17 decreased during a brief elevation of IOP. This decrease was dependent on the retinal perfusion pressure but not on the absolute IOP. The location of the most visible maps was spatial-frequency dependent. The blurring or loss of the pattern of the orientation maps was most severe when high-spatial-frequency gratings were used and appeared most significantly on the posterior part of the exposed cortex while IOP was elevated. However, the basic patterns of the maps remained unchanged. Changes in cortical signal were not due to changes in the optics of the eye with elevation of IOP. A stable normal IOP is essential for maintaining normal visual cortical functions. During a brief and high elevation of IOP, the cortical processing of high-spatial-frequency visual information was diminished because of a selectively functional decline of the retinogeniculocortical X pathway by a mechanism of retinal circulation origin.

  7. Nearly auto-parallel maps and conservation laws on curved spaces

    International Nuclear Information System (INIS)

    Vacaru, S.

    1994-01-01

    The theory of nearly auto-parallel maps (na-maps, a generalization of conformal transforms) of Einstein-Cartan spaces is formulated. The transformation laws of geometrical objects and of gravitational and matter field equations under superpositions of na-maps are considered. Special attention is paid to the very important problem of the definition of conservation laws for gravitational fields. (Author)

  8. NOAA JPSS Ozone Mapping and Profiler Suite (OMPS) Nadir Total Column Sensor Data Record (SDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Ozone Mapping and Profiler Suite (OMPS) onboard the Suomi NPP satellite monitors ozone from space. OMPS will collect total column and vertical profile ozone data...

  9. DIMACS Workshop on Interconnection Networks and Mapping, and Scheduling Parallel Computations

    CERN Document Server

    Rosenberg, Arnold L; Sotteau, Dominique; NSF Science and Technology Center in Discrete Mathematics and Theoretical Computer Science; Interconnection networks and mapping and scheduling parallel computations

    1995-01-01

    The interconnection network is one of the most basic components of a massively parallel computer system. Such systems consist of hundreds or thousands of processors interconnected to work cooperatively on computations. One of the central problems in parallel computing is the task of mapping a collection of processes onto the processors and routing network of a parallel machine. Once this mapping is done, it is critical to schedule computations within and communication among processors so that inputs for a process are available where and when the process is scheduled to be computed. The workshop brought together researchers from universities and laboratories, as well as practitioners involved in the design, implementation, and application of massively parallel systems. Focusing on interconnection networks of parallel architectures of today and of the near future, the book includes topics such as network topologies, network properties, message routing, network embeddings, network emulation, mappings, and efficient scheduling. This book contains the refereed pro...

  10. Parallel keyed hash function construction based on chaotic maps

    International Nuclear Information System (INIS)

    Xiao Di; Liao Xiaofeng; Deng Shaojiang

    2008-01-01

    Recently, a variety of chaos-based hash functions have been proposed. Nevertheless, none of them works efficiently in parallel computing environment. In this Letter, an algorithm for parallel keyed hash function construction is proposed, whose structure can ensure the uniform sensitivity of hash value to the message. By means of the mechanism of both changeable-parameter and self-synchronization, the keystream establishes a close relation with the algorithm key, the content and the order of each message block. The entire message is modulated into the chaotic iteration orbit, and the coarse-graining trajectory is extracted as the hash value. Theoretical analysis and computer simulation indicate that the proposed algorithm can satisfy the performance requirements of hash function. It is simple, efficient, practicable, and reliable. These properties make it a good choice for hash on parallel computing platform
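
    A toy of the structure described (emphatically not Xiao et al.'s scheme, and not secure) absorbs each block through a chaotic orbit whose evolution depends on the key, the block content and the block index, so blocks can be digested in parallel before combination.

        def logistic(x, r=3.99):
            return r * x * (1.0 - x)

        def block_digest(block, key, index, rounds=64):
            x = (key + (index + 1) / 1024.0) % 1.0 or 0.5      # key- and position-dependent seed
            for byte in block:
                x = (x + (byte + 1) / 257.0) % 1.0 or 0.5      # modulate message into the orbit
                for _ in range(rounds):
                    x = logistic(x)
            return int(x * 2**32)                              # coarse-grain the trajectory

        def parallel_keyed_hash(message, key=0.654321, block_size=16):
            blocks = [message[i:i + block_size] for i in range(0, len(message), block_size)]
            digest = 0
            for i, b in enumerate(blocks):                     # each call is independent,
                digest ^= block_digest(b, key, i)              # hence parallelisable
            return digest

        print(hex(parallel_keyed_hash(b"The entire message, block by block.")))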

  11. Massively parallel read mapping on GPUs with the q-group index and PEANUT

    NARCIS (Netherlands)

    J. Köster (Johannes); S. Rahmann (Sven)

    2014-01-01

    We present the q-group index, a novel data structure for read mapping tailored towards graphics processing units (GPUs) with a small memory footprint and efficient parallel algorithms for querying and building. On top of the q-group index we introduce PEANUT, a highly parallel GPU-based

  12. Parallel Tracking and Mapping for Controlling VTOL Airframe

    Directory of Open Access Journals (Sweden)

    Michal Jama

    2011-01-01

    This work presents a vision based system for navigation on a vertical takeoff and landing unmanned aerial vehicle (UAV). This is a monocular vision based, simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using a video stream from a single camera. This is different from past SLAM solutions on UAVs, which use sensors that measure depth, like LIDAR, stereoscopic cameras or depth cameras. The solution presented in this paper extends and significantly modifies a recent open-source algorithm that solves the SLAM problem using an approach fundamentally different from the traditional approach. The proposed modifications provide the position measurements necessary for the navigation solution on a UAV. The main contributions of this work include: (1) extension of the map building algorithm to enable it to be used realistically while controlling a UAV and simultaneously building the map; (2) improved performance of the SLAM algorithm for lower camera frame rates; and (3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible.

  13. Parallel Mappings as a Key for Understanding the Bioinorganic Materials

    International Nuclear Information System (INIS)

    Kuczumow, A.; Nowak, J.; Chalas, R.

    2009-01-01

    Important bioinorganic objects, both living and fossilized, are as a rule characterized by a complex microscopic structure. For biological samples, the cell-like and laminar as well as growth ring structures are among the most significant. Moreover, these objects belong to a now widely studied category of biominerals with a composite, inorganic-organic structure. Such materials are composed of a limited number of inorganic compounds and several natural organic polymers. This apparently simple composition leads to an abnormal variety of constructions significant from the medical (repairs and implants), natural (ecological effectiveness) and material science (biomimetic synthesis) points of view. The analysis of an image obtained in an optical microscope, optionally in a scanning electron microscope, is a topographical reference for further investigations. For the characterization of the distribution of chemical elements and compounds in a material, techniques such as X-ray, electron- or proton microprobes are applied. Essentially, elemental mappings are collected in this stage. The need for the application of an X-ray diffraction microprobe is obvious, and our experience indicates the necessity of using synchrotron-based devices due to their better spatial resolution and good X-ray intensity. To examine the presence of the organic compounds, Raman microprobe measurements are a good option. They deliver information about the spatial distribution of functional groups and oscillating fragments of molecules. For the comprehensive investigation of bioinorganic material structural and chemical features, we propose the following sequence of methods: optical imaging, elemental mapping, crystallographic mapping, organic mapping and micromechanical mapping. Examples of such an approach are given for petrified wood, human teeth, and an ammonite shell. (authors)

  14. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    Directory of Open Access Journals (Sweden)

    Min-Kyu Kim

    2015-12-01

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
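
    The quoted noise figures follow the expected one-over-square-root-of-N law: 848.3 μV divided by the measured 270.4 μV is about 3.1, i.e. roughly √10, suggesting on the order of ten samplings. A quick numerical check of that behaviour (the sampling count here is an assumption inferred from the ratio):

        import numpy as np

        rng = np.random.default_rng(0)
        sigma, N = 848.3e-6, 10                        # single-read noise (V), samplings
        reads = sigma * rng.standard_normal((100_000, N))
        print(reads.mean(axis=1).std())                # ~ 848.3e-6 / sqrt(10) ~ 268e-6 V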

  15. Cryptanalysis on a parallel keyed hash function based on chaotic maps

    International Nuclear Information System (INIS)

    Guo Wei; Wang Xiaoming; He Dake; Cao Yang

    2009-01-01

    This Letter analyzes the security of a novel parallel keyed hash function based on chaotic maps, proposed by Xiao et al. to improve efficiency in parallel computing environments. We show how to devise forgery attacks on Xiao's scheme with differential cryptanalysis and give the first experimental results for two kinds of forgery attacks. Furthermore, we discuss the problem of weak keys in the scheme and demonstrate how to utilize weak keys to construct collisions.

  16. Improving the security of a parallel keyed hash function based on chaotic maps

    Energy Technology Data Exchange (ETDEWEB)

    Xiao Di, E-mail: xiaodi_cqu@hotmail.co [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China); Liao Xiaofeng [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China); Wang Yong [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China)] [College of Economy and Management, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China)

    2009-11-23

    In this Letter, we analyze the cause of vulnerability of the original parallel keyed hash function based on chaotic maps in detail, and then propose the corresponding enhancement measures. Theoretical analysis and computer simulation indicate that the modified hash function is more secure than the original one. At the same time, it can keep the parallel merit and satisfy the other performance requirements of hash function.

  17. Improving the security of a parallel keyed hash function based on chaotic maps

    International Nuclear Information System (INIS)

    Xiao Di; Liao Xiaofeng; Wang Yong

    2009-01-01

    In this Letter, we analyze the cause of vulnerability of the original parallel keyed hash function based on chaotic maps in detail, and then propose the corresponding enhancement measures. Theoretical analysis and computer simulation indicate that the modified hash function is more secure than the original one. At the same time, it can keep the parallel merit and satisfy the other performance requirements of hash function.

  18. Signal-to-noise ratio measurement in parallel MRI with subtraction mapping and consecutive methods

    International Nuclear Information System (INIS)

    Imai, Hiroshi; Miyati, Tosiaki; Ogura, Akio; Doi, Tsukasa; Tsuchihashi, Toshio; Machida, Yoshio; Kobayashi, Masato; Shimizu, Kouzou; Kitou, Yoshihiro

    2008-01-01

    When measuring the signal-to-noise ratio (SNR) of an image acquired with parallel magnetic resonance imaging, it was confirmed that there is a problem in applying conventional SNR measurements. With the method of measuring the noise from the background signal, the SNR with parallel imaging was higher than that without parallel imaging. In the subtraction method (NEMA standard), which sets a wide region of interest, the white noise was not evaluated correctly although the SNR was close to the theoretical value. We propose two techniques, because the SNR in parallel imaging is not uniform owing to inhomogeneity of the coil sensitivity distribution and the geometry factor. Using the first method (subtraction mapping), two images are scanned with identical parameters. The SNR in each pixel is obtained by dividing the running mean (over a 7 by 7 pixel neighborhood) by the standard deviation/√2 in the same region of interest. Using the second (consecutive) method, more than fifty consecutive scans of the uniform phantom are obtained with identical scan parameters. The SNR is then calculated from the ratio of the mean signal intensity to the standard deviation in each pixel over the series of images. Moreover, geometry factors are calculated from the SNRs with and without parallel imaging. The SNR and geometry factor obtained using parallel imaging with the subtraction mapping method agreed with those of the consecutive method. Both methods make it possible to obtain a more detailed determination of the SNR in parallel imaging and to calculate the geometry factor. (author)
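
    The subtraction-mapping computation itself is a short exercise. A sketch of the described per-pixel ratio follows; the 7 × 7 window follows the text, while the variance floor is an implementation assumption.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def snr_map(img1, img2):
            """Two identically acquired images -> per-pixel SNR map."""
            mean = uniform_filter((img1 + img2) / 2.0, size=7)   # 7x7 running mean
            diff = img1 - img2
            # local std of the difference via E[d^2] - E[d]^2 over the window
            var = uniform_filter(diff ** 2, size=7) - uniform_filter(diff, size=7) ** 2
            noise = np.sqrt(np.maximum(var, 1e-12)) / np.sqrt(2.0)
            return mean / noise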

  19. TME (Task Mapping Editor): tool for executing distributed parallel computing. TME user's manual

    International Nuclear Information System (INIS)

    Takemiya, Hiroshi; Yamagishi, Nobuhiro; Imamura, Toshiyuki

    2000-03-01

    At the Center for Promotion of Computational Science and Engineering, a software environment, PPExe, has been developed to support scientific computing on a parallel computer cluster (distributed parallel scientific computing). TME (Task Mapping Editor) is one of the components of PPExe and provides a visual programming environment for distributed parallel scientific computing. Users can specify data dependence among tasks (programs) visually as a data flow diagram and map these tasks onto computers interactively through the GUI of TME. The specified tasks are processed by other components of PPExe, such as the Meta-scheduler, RIM (Resource Information Monitor), and EMS (Execution Management System), according to the execution order of these tasks as determined by TME. In this report, we describe the usage of TME. (author)

  20. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.

  1. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    Science.gov (United States)

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.

  2. An image-space parallel convolution filtering algorithm based on shadow map

    Science.gov (United States)

    Li, Hua; Yang, Huamin; Zhao, Jianping

    2017-07-01

    Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method of soft shadow generation from planar area lights. First, this method generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as shadow boundaries. These areas are then described as binary values in a texture map called the binary light-visibility map, and a parallel convolution filtering algorithm based on the GPU is applied to smooth out the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time, with more details of shadow boundaries compared with previous works.
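
    A CPU stand-in for that filtering pass (the real implementation runs per-fragment on the GPU, and the filter width is an assumption) is a one-liner on the binary light-visibility map:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def soften_shadows(visibility, width=7):
            """Box-filter a 0/1 light-visibility map into graded penumbra values."""
            return uniform_filter(visibility.astype(np.float32), size=width)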

  3. Column ratio mapping: a processing technique for atomic resolution high-angle annular dark-field (HAADF) images.

    Science.gov (United States)

    Robb, Paul D; Craven, Alan J

    2008-12-01

    An image processing technique is presented for atomic resolution high-angle annular dark-field (HAADF) images that have been acquired using scanning transmission electron microscopy (STEM). This technique is termed column ratio mapping and involves the automated process of measuring atomic column intensity ratios in high-resolution HAADF images. This technique was developed to provide a fuller analysis of HAADF images than the usual method of drawing single intensity line profiles across a few areas of interest. For instance, column ratio mapping reveals the compositional distribution across the whole HAADF image and allows a statistical analysis and an estimation of errors. This has proven to be a very valuable technique as it can provide a more detailed assessment of the sharpness of interfacial structures from HAADF images. The technique of column ratio mapping is described in terms of a [110]-oriented zinc-blende structured AlAs/GaAs superlattice using the 1 Å-scale resolution capability of the aberration-corrected SuperSTEM 1 instrument.

  4. Column ratio mapping: A processing technique for atomic resolution high-angle annular dark-field (HAADF) images

    International Nuclear Information System (INIS)

    Robb, Paul D.; Craven, Alan J.

    2008-01-01

    An image processing technique is presented for atomic resolution high-angle annular dark-field (HAADF) images that have been acquired using scanning transmission electron microscopy (STEM). This technique is termed column ratio mapping and involves the automated process of measuring atomic column intensity ratios in high-resolution HAADF images. This technique was developed to provide a fuller analysis of HAADF images than the usual method of drawing single intensity line profiles across a few areas of interest. For instance, column ratio mapping reveals the compositional distribution across the whole HAADF image and allows a statistical analysis and an estimation of errors. This has proven to be a very valuable technique as it can provide a more detailed assessment of the sharpness of interfacial structures from HAADF images. The technique of column ratio mapping is described in terms of a [110]-oriented zinc-blende structured AlAs/GaAs superlattice using the 1 Å-scale resolution capability of the aberration-corrected SuperSTEM 1 instrument.

  5. Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine

    Science.gov (United States)

    Lee, C. S. G.; Lin, C. T.

    1989-01-01

    The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme for mapping these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics, including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirements, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well-suited to implementation on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual network SIMD machine with internal direct feedback is introduced. A systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.

  6. Optimal task mapping in safety-critical real-time parallel systems

    International Nuclear Information System (INIS)

    Aussagues, Ch.

    1998-01-01

    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance systems for command and control that can be found in the nuclear domain or, more generally, in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution is mainly in the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator, the synchronized product of state machine task-graphs; and the validation of the approach by its implementation and evaluation. The work particularly addresses the main problem of optimal task mapping on a parallel architecture, such that the temporal constraints are globally guaranteed, i.e. the timeliness property is valid. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements. These criteria are connected with operational constraints of the application domain. Our approach is based on the off-line analysis of the feasibility of the deadline-driven dynamic scheduling that is used to schedule tasks inside one processor. This leads us to define the synchronized product, from which a system of linear constraints is automatically generated; this allows the maximum load of a group of tasks to be calculated and their timeliness constraints to be verified. The communications, their timeliness verification and their incorporation into the mapping problem are the second main contribution of this thesis. Finally, the global solving technique dealing with both task and communication aspects has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author)

  7. Optimising parallel R correlation matrix calculations on gene expression data using MapReduce.

    Science.gov (United States)

    Wang, Shicai; Pandis, Ioannis; Johnson, David; Emam, Ibrahim; Guitton, Florian; Oehmichen, Axel; Guo, Yike

    2014-11-05

    High-throughput molecular profiling data has been used to improve clinical decision making by stratifying subjects based on their molecular profiles. Unsupervised clustering algorithms can be used for stratification purposes. However, the current speed of the clustering algorithms cannot meet the requirement of large-scale molecular data due to poor performance of the correlation matrix calculation. With high-throughput sequencing technologies promising to produce even larger datasets per subject, we expect the performance of the state-of-the-art statistical algorithms to be further impacted unless efforts towards optimisation are carried out. MapReduce is a widely used high performance parallel framework that can solve the problem. In this paper, we evaluate the current parallel modes for correlation calculation methods and introduce an efficient data distribution and parallel calculation algorithm based on MapReduce to optimise the correlation calculation. We studied the performance of our algorithm using two gene expression benchmarks. In the micro-benchmark, our implementation using MapReduce, based on the R package RHIPE, demonstrates a 3.26-5.83 fold increase compared to the default Snowfall and a 1.56-1.64 fold increase compared to the basic RHIPE in the Euclidean, Pearson and Spearman correlations. Though vanilla R and the optimised Snowfall outperform our optimised RHIPE in the micro-benchmark, they do not scale well with the macro-benchmark. In the macro-benchmark the optimised RHIPE performs 2.03-16.56 times faster than vanilla R. Benefiting from the 3.30-5.13 times faster data preparation, the optimised RHIPE performs 1.22-1.71 times faster than the optimised Snowfall. Both the optimised RHIPE and the optimised Snowfall successfully perform the Kendall correlation with the TCGA dataset within 7 hours, more than 30 times faster than the estimated vanilla R time. The performance evaluation found that the new MapReduce algorithm and its
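
    The distribution strategy can be imitated on one machine with multiprocessing (a sketch of the row-block layout only; POSIX fork semantics are assumed for sharing the matrix, and the paper's RHIPE/Hadoop layer is not reproduced): each worker correlates a block of gene rows against the full expression matrix, and the blocks are stacked.

        import numpy as np
        from multiprocessing import Pool

        X = None                                   # genes x samples, inherited by forked workers

        def corr_block(bounds):
            lo, hi = bounds
            c = np.corrcoef(X[lo:hi], X)           # (block + all) x (block + all) matrix
            return c[:hi - lo, hi - lo:]           # keep block-vs-all Pearson correlations

        def parallel_corr(data, workers=4):
            global X
            X = data
            cuts = np.linspace(0, data.shape[0], workers + 1, dtype=int)
            with Pool(workers) as pool:
                return np.vstack(pool.map(corr_block, list(zip(cuts[:-1], cuts[1:]))))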

  8. Encoding methods for B1+ mapping in parallel transmit systems at ultra high field

    Science.gov (United States)

    Tse, Desmond H. Y.; Poole, Michael S.; Magill, Arthur W.; Felder, Jörg; Brenner, Daniel; Jon Shah, N.

    2014-08-01

    Parallel radiofrequency (RF) transmission, either in the form of RF shimming or pulse design, has been proposed as a solution to the B1+ inhomogeneity problem in ultra high field magnetic resonance imaging. As a prerequisite, accurate B1+ maps from each of the available transmit channels are required. In this work, four different encoding methods for B1+ mapping, namely 1-channel-on, all-channels-on-except-1, all-channels-on-1-inverted and Fourier phase encoding, were evaluated using dual refocusing acquisition mode (DREAM) at 9.4 T. Fourier phase encoding was demonstrated in both phantom and in vivo to be the least susceptible to artefacts caused by destructive RF interference at 9.4 T. Unlike the other two interferometric encoding schemes, Fourier phase encoding showed negligible dependency on the initial RF phase setting and therefore no prior B1+ knowledge is required. Fourier phase encoding also provides a flexible way to increase the number of measurements to increase SNR, and to allow further reduction of artefacts by weighted decoding. These advantages of Fourier phase encoding suggest that it is a good choice for B1+ mapping in parallel transmit systems at ultra high field.

  9. MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control

    Science.gov (United States)

    Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming

    2017-09-01

    The increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches under dynamic production conditions. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, the large volume and variety of quality-related data generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of Bayesian networks to deal with dynamic and uncertain problems and the parallel computing power of MapReduce, a Bayesian network of factors impacting quality is built from the prior probability distribution and modified with the posterior probability distribution. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the number of computing nodes. It is also shown that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision making for precision problem solving. The integration of big data analytics and the BN method offers a whole new perspective on manufacturing quality control.

  10. An Efficient MapReduce-Based Parallel Clustering Algorithm for Distributed Traffic Subarea Division

    Directory of Open Access Journals (Sweden)

    Dawen Xia

    2015-01-01

    Traffic subarea division is vital for traffic system management and traffic network analysis in intelligent transportation systems (ITSs). Since existing methods may not be suitable for big traffic data processing, this paper presents a MapReduce-based Parallel Three-Phase K-Means (Par3PKM) algorithm for solving the traffic subarea division problem on a widely adopted Hadoop distributed computing platform. Specifically, we first modify the distance metric and initialization strategy of K-Means and then employ a MapReduce paradigm to redesign the optimized K-Means algorithm for parallel clustering of large-scale taxi trajectories. Moreover, we propose a boundary identifying method to connect the borders of the clustering results for each cluster. Finally, we divide the traffic subareas of Beijing based on real-world trajectory data sets generated by 12,000 taxis in a period of one month using the proposed approach. Experimental evaluation results indicate that, compared with K-Means, Par2PK-Means, and ParCLARA, Par3PKM achieves higher efficiency, more accuracy, and better scalability and can effectively divide traffic subareas with big taxi trajectory data.
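
    As a rough illustration of how one K-Means iteration decomposes into map and reduce phases, the Python sketch below assigns points to centroids in independent map tasks (one per data partition) and recomputes centroids in a single reduce. It is a generic sketch, not the Par3PKM implementation; the paper's modified distance metric and initialization strategy are omitted.

        import numpy as np

        def kmeans_map(points, centroids):
            # Map: emit the nearest-centroid label for every point.
            d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
            return d.argmin(axis=1)

        def kmeans_reduce(points, labels, centroids):
            # Reduce: average the points grouped under each cluster key;
            # keep the old centroid if a cluster received no points.
            k = centroids.shape[0]
            return np.array([points[labels == c].mean(axis=0)
                             if np.any(labels == c) else centroids[c]
                             for c in range(k)])

        def kmeans_iteration(partitions, centroids):
            # One MapReduce round over data split into partitions (e.g. HDFS blocks).
            labels = [kmeans_map(p, centroids) for p in partitions]
            return kmeans_reduce(np.vstack(partitions),
                                 np.concatenate(labels), centroids)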

  11. High-resolution 2-deoxyglucose mapping of functional cortical columns in mouse barrel cortex.

    Science.gov (United States)

    McCasland, J S; Woolsey, T A

    1988-12-22

    Cortical columns associated with barrels in layer IV of the somatosensory cortex were characterized by high-resolution 2-deoxy-D-glucose (2DG) autoradiography in freely behaving mice. The method demonstrates a more exact match between columnar labeling and cytoarchitectonic barrel boundaries than previously reported. The pattern of cortical activation seen with stimulation of a single whisker (third whisker in the middle row of large hairs--C3) was compared with the patterns from two control conditions--normal animals with all whiskers present ("positive control")--and with all large whiskers clipped ("negative control"). Two types of measurements were made from 2DG autoradiograms of tangential cortical sections: 1) labeled cells were identified by eye and tabulated with a computer, and 2) grain densities were obtained automatically with a computer-controlled microscope and image processor. We studied the fine-grained patterns of 2DG labeling in a nine-barrel grid with the C3 barrel in the center. From the analysis we draw five major conclusions. 1. Approximately 30-40% of the total number of neurons in the C3 barrel column are activated when only the C3 whisker is stimulated. This is about twice the number of neurons labeled in the C3 column when all whiskers are stimulated and about ten times the number of neurons labeled when all large whiskers are clipped. 2. There is evidence for a vertical functional organization within a barrel-related whisker column which has smaller dimensions in the tangential direction than a barrel. There are densely labeled patches within a barrel which are unique to an individual cortex. The same patchy pattern is found in the appropriate regions of sections above and below the barrels through the full thickness of the cortex. This functional arrangement could be considered to be a "minicolumn" or more likely a group of "minicolumns" (Mountcastle: In G.M. Edelman and U.B. Mountcastle (eds): The Material Brain: Cortical Organization

  12. Decreasing Data Analytics Time: Hybrid Architecture MapReduce-Massive Parallel Processing for a Smart Grid

    Directory of Open Access Journals (Sweden)

    Abdeslam Mehenni

    2017-03-01

    As our populations grow in a world of limited resources, enterprises seek ways to lighten our load on the planet. The idea of modifying consumer behavior appears as a foundation for smart grids. Enterprises demonstrate the value available from deep analysis of electricity consumption histories, consumers' messages, outage alerts, etc., mining massive structured and unstructured data. In a nutshell, smart grids result in a flood of data that needs to be analyzed, to better adjust to demand and give customers more ability to delve into their power consumption. Simply put, smart grids will increasingly have a flexible data warehouse attached to them. The key driver for the adoption of data management strategies is clearly the need to handle and analyze the large amounts of information utilities are now faced with. New approaches to data integration are emerging; Hadoop is in fact now being used by utilities to help manage the huge growth in data whilst maintaining coherence of the data warehouse. In this paper we define a new Meter Data Management System architecture repository that differs from three leading MDMSs, in which we use the MapReduce programming model for ETL and a parallel DBMS for query statements (Massive Parallel Processing, MPP).

  13. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    Science.gov (United States)

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we propose a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
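
    For intuition, the PSO step that seeds the BP weights follows the standard velocity/position update equations. The Python sketch below is a generic version with the usual inertia and acceleration constants (w, c1, c2) as placeholder values; it is not the paper's parallel Hadoop implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
            # One swarm update over candidate weight vectors (one row per particle).
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            return pos + vel, vel

    After convergence, the best particle's position would initialize the BP network's weights and thresholds.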

  14. SPSS and SAS programs for determining the number of components using parallel analysis and velicer's MAP test.

    Science.gov (United States)

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
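
    For readers outside SPSS/SAS environments, Horn's parallel analysis reduces to comparing the eigenvalues of the observed correlation matrix against those of random data of the same shape. The following NumPy sketch is a generic illustration, not O'Connor's published programs.

        import numpy as np

        def parallel_analysis(X, n_sims=100, percentile=95, seed=0):
            # Retain leading components whose eigenvalues exceed the
            # chosen percentile of eigenvalues from random data.
            rng = np.random.default_rng(seed)
            n, p = X.shape
            real = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
            rand = np.empty((n_sims, p))
            for s in range(n_sims):
                R = rng.standard_normal((n, p))
                rand[s] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
            threshold = np.percentile(rand, percentile, axis=0)
            keep = 0
            for r, t in zip(real, threshold):
                if r > t:
                    keep += 1
                else:
                    break
            return keep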

  15. Optimal task mapping in safety-critical real-time parallel systems; Placement optimal de taches pour les systemes paralleles temps-reel critiques

    Energy Technology Data Exchange (ETDEWEB)

    Aussagues, Ch

    1998-12-11

    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance command and control systems that can be found in the nuclear domain or, more generally, in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution lies mainly in the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator for the synchronized product of state-machine task graphs; and the validation of the approach by its implementation and evaluation. The work particularly addresses the main problem of optimal task mapping on a parallel architecture, such that the temporal constraints are globally guaranteed, i.e. the timeliness property holds. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements. These criteria are connected with operational constraints of the application domain. Our approach is based on the off-line feasibility analysis of the deadline-driven dynamic scheduling used to schedule tasks within one processor. From the synchronized product, a system of linear constraints is automatically generated, which allows the maximum load of a group of tasks to be calculated and their timeliness constraints to be verified. The communications, their timeliness verification, and their incorporation into the mapping problem are the second main contribution of this thesis. Finally, the global solving technique dealing with both task and communication aspects has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author) 96 refs.

  16. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    Energy Technology Data Exchange (ETDEWEB)

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

    MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
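
    The core of such a map-maker is a preconditioned conjugate gradient (PCG) solve of the map-making normal equations. The sketch below shows a generic dense PCG in Python for intuition; A and M_inv stand in for the (in practice enormous, implicitly applied) system matrix and preconditioner.

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
            # Solve A x = b with a preconditioner given via its inverse action.
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv @ r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = M_inv @ r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x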

  17. Probabilistic global maps of the CO2 column at daily and monthly scales from sparse satellite measurements

    Science.gov (United States)

    Chevallier, Frédéric; Broquet, Grégoire; Pierangelo, Clémence; Crisp, David

    2017-07-01

    The column-average dry air-mole fraction of carbon dioxide in the atmosphere (XCO2) is measured by scattered satellite measurements such as those from the Orbiting Carbon Observatory (OCO-2). We show that global continuous maps of XCO2 (corresponding to level 3 of the satellite data) at daily or coarser temporal resolution can be inferred from these data with a Kalman filter built on a model of persistence. Our application of this approach to 2 years of OCO-2 retrievals indicates that the filter provides better information than a climatology of XCO2 at both daily and monthly scales. Provided that the assigned observation uncertainty statistics are tuned in each grid cell of the XCO2 maps by an objective method (based on consistency diagnostics), the errors predicted by the filter at daily and monthly scales represent the true error statistics reasonably well, except for a bias in the high latitudes of the winter hemisphere and a lack of resolution (i.e., too small a discrimination skill) of the predicted error standard deviations. Due to the sparse satellite sampling, the broad-scale patterns of XCO2 described by the filter seem to lag behind the real signals by a few weeks. Finally, the filter offers interesting insights into the quality of the retrievals, in terms of both random and systematic errors.
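
    A persistence-model Kalman filter of the kind described reduces, per grid cell, to a scalar predict/update recursion. The toy Python sketch below makes this concrete; the process and observation variances are invented placeholders, not the tuned statistics of the paper.

        import numpy as np

        def persistence_kalman(obs, obs_var, q=0.1, x0=400.0, p0=100.0):
            # obs: daily XCO2 retrievals for one cell (NaN where no sounding).
            x, p = x0, p0
            out = np.empty(len(obs))
            for t, (y, r) in enumerate(zip(obs, obs_var)):
                p += q                      # predict: persistence, growing variance
                if not np.isnan(y):         # update only where a retrieval exists
                    k = p / (p + r)         # Kalman gain
                    x += k * (y - x)
                    p *= 1.0 - k
                out[t] = x
            return out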

  18. Using Hadoop MapReduce for Parallel Genetic Algorithms: A Comparison of the Global, Grid and Island Models.

    Science.gov (United States)

    Ferrucci, Filomena; Salza, Pasquale; Sarro, Federica

    2017-06-29

    The need to improve the scalability of Genetic Algorithms (GAs) has motivated the research on Parallel Genetic Algorithms (PGAs), and different technologies and approaches have been used. Hadoop MapReduce represents one of the most mature technologies to develop parallel algorithms. Based on the fact that parallel algorithms introduce communication overhead, the aim of the present work is to understand if, and possibly when, the parallel GAs solutions using Hadoop MapReduce show better performance than sequential versions in terms of execution time. Moreover, we are interested in understanding which PGA model can be most effective among the global, grid, and island models. We empirically assessed the performance of these three parallel models with respect to a sequential GA on a software engineering problem, evaluating the execution time and the achieved speedup. We also analysed the behaviour of the parallel models in relation to the overhead produced by the use of Hadoop MapReduce and the GAs' computational effort, which gives a more machine-independent measure of these algorithms. We exploited three problem instances to differentiate the computation load and three cluster configurations based on 2, 4, and 8 parallel nodes. Moreover, we estimated the costs of the execution of the experimentation on a potential cloud infrastructure, based on the pricing of the major commercial cloud providers. The empirical study revealed that the use of PGA based on the island model outperforms the other parallel models and the sequential GA for all the considered instances and clusters. Using 2, 4, and 8 nodes, the island model achieves an average speedup over the three datasets of 1.8, 3.4, and 7.0 times, respectively. Hadoop MapReduce has a set of different constraints that need to be considered during the design and the implementation of parallel algorithms. The overhead of data store (i.e., HDFS) accesses, communication, and latency requires solutions that reduce data store
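
    The island model that performs best above amounts to independent subpopulations that exchange a few migrants periodically. The skeleton below illustrates the pattern in plain Python, standing in for the Hadoop jobs of the paper; the real-coded individuals and the mutation operator are placeholders.

        import random

        def mutate(ind, sigma=0.1):
            # Placeholder mutation: Gaussian jitter on a real-coded individual.
            return [g + random.gauss(0, sigma) for g in ind]

        def evolve(island, fitness, n_keep):
            # One local generation: rank, keep elites, refill with mutated copies.
            island.sort(key=fitness, reverse=True)
            elites = island[:n_keep]
            return elites + [mutate(random.choice(elites))
                             for _ in island[n_keep:]]

        def migrate(islands, n_migrants=2):
            # Ring migration: each island receives the previous island's best.
            best = [isl[:n_migrants] for isl in islands]
            for i, isl in enumerate(islands):
                isl[-n_migrants:] = best[(i - 1) % len(islands)]
            return islands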

  19. A MapReduce-Based Parallel Frequent Pattern Growth Algorithm for Spatiotemporal Association Analysis of Mobile Trajectory Big Data

    Directory of Open Access Journals (Sweden)

    Dawen Xia

    2018-01-01

    Frequent pattern mining is an effective approach for spatiotemporal association analysis of mobile trajectory big data in data-driven intelligent transportation systems. While existing parallel algorithms have been successfully applied to frequent pattern mining of large-scale trajectory data, two major challenges are how to overcome the inherent defects of Hadoop to cope with taxi trajectory big data, including massive small files, and how to discover the implicit spatiotemporal frequent patterns with MapReduce. To conquer these challenges, this paper presents a MapReduce-based Parallel Frequent Pattern growth (MR-PFP) algorithm to analyze the spatiotemporal characteristics of taxi operations using large-scale taxi trajectories, with massive small file processing strategies on a Hadoop platform. More specifically, we first implement three methods, that is, Hadoop Archives (HAR), CombineFileInputFormat (CFIF), and Sequence Files (SF), to overcome the existing defects of Hadoop and then propose two strategies based on their performance evaluations. Next, we incorporate SF into the Frequent Pattern growth (FP-growth) algorithm and then implement the optimized FP-growth algorithm on a MapReduce framework. Finally, we analyze the characteristics of taxi operations in both spatial and temporal dimensions by MR-PFP in parallel. The results demonstrate that MR-PFP is superior to the existing Parallel FP-growth (PFP) algorithm in efficiency and scalability.

  20. Parallel definition of tear film maps on distributed-memory clusters for the support of dry eye diagnosis.

    Science.gov (United States)

    González-Domínguez, Jorge; Remeseiro, Beatriz; Martín, María J

    2017-02-01

    The analysis of the interference patterns on the tear film lipid layer is a useful clinical test to diagnose dry eye syndrome. This task can be automated with a high degree of accuracy by means of the use of tear film maps. However, the time required by the existing applications to generate them prevents a wider acceptance of this method by medical experts. Multithreading has been previously successfully employed by the authors to accelerate the tear film map definition on multicore single-node machines. In this work, we propose a hybrid message-passing and multithreading parallel approach that further accelerates the generation of tear film maps by exploiting the computational capabilities of distributed-memory systems such as multicore clusters and supercomputers. The algorithm for drawing tear film maps is parallelized using Message Passing Interface (MPI) for inter-node communications and the multithreading support available in the C++11 standard for intra-node parallelization. The original algorithm is modified to reduce the communications and increase the scalability. The hybrid method has been tested on 32 nodes of an Intel cluster (with two 12-core Haswell 2680v3 processors per node) using 50 representative images. Results show that maximum runtime is reduced from almost two minutes using the previous only-multithreaded approach to less than ten seconds using the hybrid method. The hybrid MPI/multithreaded implementation can be used by medical experts to obtain tear film maps in only a few seconds, which will significantly accelerate and facilitate the diagnosis of the dry eye syndrome. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
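
    The hybrid pattern described — MPI across nodes, threads within each node — can be outlined briefly. The Python sketch below uses mpi4py and a thread pool; process_image is a hypothetical stand-in for the per-image tear film map computation, and the file names and worker count are placeholders.

        from concurrent.futures import ThreadPoolExecutor
        from mpi4py import MPI

        def process_image(path):
            # Placeholder for the per-image tear film map definition.
            return path, len(path)

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Root builds the task list and deals it round-robin to all ranks.
        images = [f"img_{i:03d}.png" for i in range(50)] if rank == 0 else None
        chunk = comm.scatter([images[r::size] for r in range(size)]
                             if rank == 0 else None, root=0)

        with ThreadPoolExecutor(max_workers=24) as pool:   # intra-node threads
            local = list(pool.map(process_image, chunk))

        results = comm.gather(local, root=0)               # inter-node MPI
        if rank == 0:
            print(sum(len(r) for r in results), "tear film maps computed")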

  1. Biomechanical properties of orthogonal plate configuration versus parallel plate configuration using the same locking plate system for intra-articular distal humeral fractures under radial or ulnar column axial load.

    Science.gov (United States)

    Kudo, Toshiya; Hara, Akira; Iwase, Hideaki; Ichihara, Satoshi; Nagao, Masashi; Maruyama, Yuichiro; Kaneko, Kazuo

    2016-10-01

    Previous reports have questioned whether an orthogonal or parallel configuration is superior for distal humeral articular fractures. In previous clinical and biomechanical studies, implant failure of the posterolateral plate has been reported with orthogonal configurations; however, the reason for screw loosening in the posterolateral plate is unclear. The purpose of this study was to evaluate biomechanical properties and to clarify the causes of posterolateral plate loosening using a humeral fracture model under axial compression applied separately to the radial or ulnar column, changing only the plate setup: parallel or orthogonal. We used artificial bone to create an Association for the Study of Internal Fixation type 13-C2.3 intra-articular fracture model with a 1-cm supracondylar gap. We used an anatomically preshaped distal humerus locking compression plate system (Synthes GmbH, Solothurn, Switzerland). Although this is originally an orthogonal plate system, we designed a mediolateral parallel configuration using the contralateral medial plate instead of the posterolateral plate in the system. We calculated the stiffness of the radial and ulnar columns and the anterior movement of the condylar fragment in the lateral view. The parallel configuration was superior to the orthogonal configuration regarding the stiffness of the radial column under axial compression. There were significant differences between the two configurations regarding anterior movement of the capitellum during axial loading of the radial column. The posterolateral plate tended to bend anteriorly under axial compression compared with the medial or lateral plate. We believe that in the orthogonal configuration axial compression induced more anterior displacement of the capitellum than of the trochlea, which eventually induced secondary fragment or screw dislocation on the posterolateral plate, or nonunion at the supracondylar level. In the parallel configuration, anterior movement of the capitellum or

  2. Spatial frequency-dependent feedback of visual cortical area 21a modulating functional orientation column maps in areas 17 and 18 of the cat.

    Science.gov (United States)

    Huang, Luoxiu; Chen, Xin; Shou, Tiande

    2004-02-20

    The feedback effect of area 21a activity on the orientation maps of areas 17 and 18 was investigated in cats using intrinsic signal optical imaging. A spatial frequency-dependent decrease in the response amplitude of orientation maps to grating stimuli was observed in areas 17 and 18 when area 21a was inactivated by local injection of GABA, or by a lesion induced by liquid nitrogen freezing. The decrease in response amplitude of the orientation maps of areas 17 and 18 after area 21a inactivation paralleled the normal response without inactivation. Application in area 21a of bicuculline, a GABA-A receptor antagonist, caused an increase in the response amplitude of orientation maps in area 17. The results indicate a positive feedback from the higher-order visual cortical area 21a to lower-order areas underlying a spatial frequency-dependent mechanism.

  3. Mapping of synchronous dataflow graphs on MPSoCs based on parallelism enhancement

    NARCIS (Netherlands)

    Tang, Q.; Basten, T.; Geilen, M.; Stuijk, S.; Wei, J.B.

    2017-01-01

    Multi-processor systems-on-chips are widely adopted in implementing modern streaming applications to satisfy the ever increasing computation requirements. To take advantage of this kind of platform, it is necessary to map tasks of the application properly to different processors, so as to fully

  4. Differential receptive field organizations give rise to nearly identical neural correlations across three parallel sensory maps in weakly electric fish.

    Science.gov (United States)

    Hofmann, Volker; Chacron, Maurice J

    2017-09-01

    Understanding how neural populations encode sensory information thereby leading to perception and behavior (i.e., the neural code) remains an important problem in neuroscience. When investigating the neural code, one must take into account the fact that neural activities are not independent but are actually correlated with one another. Such correlations are seen ubiquitously and have a strong impact on neural coding. Here we investigated how differences in the antagonistic center-surround receptive field (RF) organization across three parallel sensory maps influence correlations between the activities of electrosensory pyramidal neurons. Using a model based on known anatomical differences in receptive field center size and overlap, we initially predicted large differences in correlated activity across the maps. However, in vivo electrophysiological recordings showed that, contrary to modeling predictions, electrosensory pyramidal neurons across all three segments displayed nearly identical correlations. To explain this surprising result, we incorporated the effects of RF surround in our model. By systematically varying both the RF surround gain and size relative to that of the RF center, we found that multiple RF structures gave rise to similar levels of correlation. In particular, incorporating known physiological differences in RF structure between the three maps in our model gave rise to similar levels of correlation. Our results show that RF center overlap alone does not determine correlations which has important implications for understanding how RF structure influences correlated neural activity.

  5. Map-Based Power-Split Strategy Design with Predictive Performance Optimization for Parallel Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jixiang Fan

    2015-09-01

    In this paper, a map-based optimal energy management strategy is proposed to improve the energy consumption economy of a plug-in parallel hybrid electric vehicle. In the design of the maps, which provide both the torque split between engine and motor and the gear shift, not only the current vehicle speed and power demand but also the optimality based on the predicted trajectory of the vehicle dynamics are considered. To seek optimality, the equivalent consumption, which trades off fuel and electricity usage, is chosen as the cost function. Moreover, in order to decrease the model errors in the optimization conducted in the discrete time domain, a variational integrator is employed to calculate the evolution of the vehicle dynamics. To evaluate the proposed energy management strategy, simulation results obtained with a professional GT-SUITE simulator are presented, and a comparison to a real-time optimization method is also given to show the advantage of the proposed off-line optimization approach.
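
    Conceptually, the online controller reduces to interpolating precomputed maps: given the current speed and power demand, look up the engine/motor torque split (and, analogously, the gear). The Python sketch below illustrates this with invented placeholder grids, not the maps optimized in the paper.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        speed_grid = np.linspace(0, 40, 5)       # vehicle speed, m/s
        power_grid = np.linspace(0, 60, 7)       # power demand, kW
        split_map = np.random.default_rng(1).uniform(0, 1, (5, 7))

        lookup = RegularGridInterpolator((speed_grid, power_grid), split_map)

        def torque_split(speed, power_demand):
            # Engine fraction from the offline-optimized map; motor takes the rest.
            frac = float(lookup([[speed, power_demand]]))
            return frac, 1.0 - frac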

  6. Position Based Visual Servoing control of a Wheelchair Mounter Robotic Arm using Parallel Tracking and Mapping of task objects

    Directory of Open Access Journals (Sweden)

    Alessandro Palla

    2017-05-01

    In the last few years, power wheelchairs have become one of the few devices able to provide autonomy and independence to people with motor impairments. In particular, many power wheelchairs feature robotic arms for gesture emulation, such as interacting with objects. However, complex robotic arms often require a joystick to be controlled; this makes the arm hard for impaired users to control. Paradoxically, if the user were able to proficiently control such devices, he would not need them. For that reason, this paper presents a highly autonomous robotic arm, designed to minimize the effort necessary to control it. To that end, the arm features an easy-to-use human-machine interface and is controlled by a computer vision algorithm implementing Position Based Visual Servoing (PBVS) control. This was realized by extracting features from the camera and fusing them with the distance from the target, obtained by a proximity sensor. The Parallel Tracking and Mapping (PTAM) algorithm was used to find the 3D position of the task object in the camera reference system. The visual servoing algorithm was implemented in real time on an embedded platform. Each part of the control loop was developed in the Robot Operating System (ROS) environment, which allows the algorithms to be implemented as separate nodes. Theoretical analysis, simulations and in-system measurements proved the effectiveness of the proposed solution.

  7. Tracking senescence-induced patterns in leaf litter leachate using parallel factor analysis (PARAFAC) modeling and self-organizing maps

    Science.gov (United States)

    Wheeler, K. I.; Levia, D. F.; Hudson, J. E.

    2017-09-01

    In autumn, the dissolved organic matter (DOM) contribution of leaf litter leachate to streams in forested watersheds changes as trees undergo resorption, senescence, and leaf abscission. Despite its biogeochemical importance, little work has investigated how leaf litter leachate DOM changes throughout autumn and how any changes might differ interspecifically and intraspecifically. Since climate change is expected to cause vegetation migration, it is necessary to learn how changes in forest composition could affect DOM inputs via leaf litter leachate. We examined changes in leaf litter leachate fluorescent DOM (FDOM) from American beech (Fagus grandifolia Ehrh.) leaves in Maryland, Rhode Island, Vermont, and North Carolina and from yellow poplar (Liriodendron tulipifera L.) leaves from Maryland. FDOM in leachate samples was characterized by excitation-emission matrices (EEMs). A six-component parallel factor analysis (PARAFAC) model was created to identify components that accounted for the majority of the variation in the data set. Self-organizing maps (SOM) compared the PARAFAC component proportions of leachate samples. Phenophase and species exerted much stronger influence on the determination of a sample's SOM placement than geographic origin. As expected, FDOM from all trees transitioned from more protein-like components to more humic-like components with senescence. Percent greenness of sampled leaves and the proportion of tyrosine-like component 1 were found to be significantly different between the two genetic beech clusters, suggesting differences in photosynthesis and resorption. Our results highlight the need to account for interspecific and intraspecific variations in leaf litter leachate FDOM throughout autumn when examining the influence of allochthonous inputs to streams.

  8. QuBiLS-MIDAS: a parallel free-software for molecular descriptors computation based on multilinear algebraic maps.

    Science.gov (United States)

    García-Jacas, César R; Marrero-Ponce, Yovani; Acevedo-Martínez, Liesner; Barigye, Stephen J; Valdés-Martiní, José R; Contreras-Torres, Ernesto

    2014-07-05

    The present report introduces the QuBiLS-MIDAS software, belonging to the ToMoCoMD-CARDD suite, for the calculation of three-dimensional molecular descriptors (MDs) based on the two-linear (bilinear), three-linear, and four-linear (multilinear or N-linear) algebraic forms. Thus, it is unique software that computes these tensor-based indices. These descriptors establish relations among two, three, and four atoms by using several (dis-)similarity metrics or multimetrics, matrix transformations, cutoffs, local calculations and aggregation operators. The theoretical background of these N-linear indices is also presented. The QuBiLS-MIDAS software was developed in the Java programming language and employs the Chemistry Development Kit library for the manipulation of chemical structures and the calculation of atomic properties. The software comprises a user-friendly desktop interface and an Abstract Programming Interface library. The former was created to simplify the configuration of the different options of the MDs, whereas the library was designed to allow its easy integration into other chemoinformatics software. The program provides functionalities for data cleaning tasks and for batch processing of the molecular indices. In addition, it offers parallel calculation of the MDs through the use of all available processors in current computers. Complexity analyses of the main algorithms demonstrate that they were implemented efficiently relative to trivial implementations. Lastly, performance tests reveal that the software behaves suitably as the number of processors is increased. Therefore, the QuBiLS-MIDAS software constitutes a useful application for the computation of molecular indices based on N-linear algebraic maps, and it can be used freely to perform chemoinformatics studies. Copyright © 2014 Wiley Periodicals, Inc.

  9. PULSE COLUMN

    Science.gov (United States)

    Grimmett, E.S.

    1964-01-01

    This patent covers a continuous countercurrent liquid-solids contactor column having a number of contactor stages, each comprising a perforated plate, a layer of balls, and a downcomer tube; a liquid-pulsing piston; and a solids discharger formed of a conical section at the bottom of the column and a tubular extension on the lowest downcomer terminating in the conical section. Between the conical section and the downcomer extension a small annular opening is formed, through which fall the solids coming through the perforated plate of the lowest contactor stage. This annular opening is small enough that the pressure drop across it is greater than the pressure drop upward through the lowest contactor stage. (AEC)

  10. Scalable, incremental learning with MapReduce parallelization for cell detection in high-resolution 3D microscopy data

    KAUST Repository

    Sung, Chul; Woo, Jongwook; Goodman, Matthew; Huffman, Todd; Choe, Yoonsuck

    2013-01-01

    Accurate estimation of neuronal count and distribution is central to the understanding of the organization and layout of cortical maps in the brain, and changes in the cell population induced by brain disorders. High-throughput 3D microscopy

  11. Mapping caribou habitat north of the 51st parallel in Québec using Landsat imagery

    Directory of Open Access Journals (Sweden)

    Stéphanie Chalifoux

    2003-04-01

    A methodology using Landsat Thematic Mapper (TM) images and a vegetation typology based on lichens, the principal component of the caribou winter diet, was developed to map caribou habitat over a large and diversified area of Northern Québec. This approach includes field validation by aerial surveys (helicopter), classification of vegetation types, image enhancement, visual interpretation and computer-assisted mapping. Measurements from more than 1500 field sites collected over six field campaigns from 1989 to 1996 constitute the data analysed in this study. As the study progressed, 14 vegetation classes were defined and retained for analysis. Vegetation classes denoting important caribou habitat included six classes of upland lichen communities (Lichen, Lichen-Shrub, Shrub-Lichen, Lichen-Graminoid-Shrub, Lichen-Woodland, Lichen-Shrub-Woodland). Two classes (Burnt-over area, Regenerating burnt-over area) are related to forest fire and, as they develop towards lichen communities, will become important for caribou. The last six classes are retained to depict the remaining vegetation cover types. A total of 37 Landsat TM scenes were geocoded and enhanced using two methods: the Taylor method and the false colour composite method (band combination and stretching). Visual interpretation was chosen as the most efficient and reliable method to map vegetation types related to caribou habitat. The 43 maps produced at the scale of 1:250 000 and the synthesis map (1:2 000 000) provide a regional perspective of caribou habitat over 1 200 000 km2 covering the entire range of the George River herd. The numerical nature of the data allows rapid spatial analysis and map updating.

  12. Scalable, incremental learning with MapReduce parallelization for cell detection in high-resolution 3D microscopy data

    KAUST Repository

    Sung, Chul

    2013-08-01

    Accurate estimation of neuronal count and distribution is central to the understanding of the organization and layout of cortical maps in the brain, and changes in the cell population induced by brain disorders. High-throughput 3D microscopy techniques such as Knife-Edge Scanning Microscopy (KESM) are enabling whole-brain survey of neuronal distributions. Data from such techniques pose serious challenges to quantitative analysis due to the massive, growing, and sparsely labeled nature of the data. In this paper, we present a scalable, incremental learning algorithm for cell body detection that can address these issues. Our algorithm is computationally efficient (linear mapping, non-iterative) and does not require retraining (unlike gradient-based approaches) or retention of old raw data (unlike instance-based learning). We tested our algorithm on our rat brain Nissl data set, showing superior performance compared to an artificial neural network-based benchmark, and also demonstrated robust performance in a scenario where the data set is rapidly growing in size. Our algorithm is also highly parallelizable due to its incremental nature, and we demonstrated this empirically using a MapReduce-based implementation of the algorithm. We expect our scalable, incremental learning approach to be widely applicable to medical imaging domains where there is a constant flux of new data. © 2013 IEEE.

  13. Convolutional Codes with Maximum Column Sum Rank for Network Streaming

    OpenAIRE

    Mahmood, Rafid; Badr, Ahmed; Khisti, Ashish

    2015-01-01

    The column Hamming distance of a convolutional code determines the error correction capability when streaming over a class of packet erasure channels. We introduce a metric known as the column sum rank, that parallels column Hamming distance when streaming over a network with link failures. We prove rank analogues of several known column Hamming distance properties and introduce a new family of convolutional codes that maximize the column sum rank up to the code memory. Our construction invol...

  14. Parallel implementation and evaluation of motion estimation system algorithms on a distributed memory multiprocessor using knowledge based mappings

    Science.gov (United States)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Several techniques to perform static and dynamic load balancing techniques for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same, or have similar computational characteristics. These techniques are evaluated by applying them on a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.

  15. Calculating electronic tunnel currents in networks of disordered irregularly shaped nanoparticles by mapping networks to arrays of parallel nonlinear resistors

    Energy Technology Data Exchange (ETDEWEB)

    Aghili Yajadda, Mir Massoud [CSIRO Manufacturing Flagship, P.O. Box 218, Lindfield NSW 2070 (Australia)

    2014-10-21

    We have shown both theoretically and experimentally that tunnel currents in networks of disordered irregularly shaped nanoparticles (NPs) can be calculated by considering the networks as arrays of parallel nonlinear resistors. Each resistor is described by a one-dimensional or two-dimensional array of equal-size nanoparticles, in which the tunnel junction gaps between nanoparticles are assumed to be equal. The number of tunnel junctions between the two contact electrodes and the tunnel junction gaps between nanoparticles are found to be functions of the Coulomb blockade energies. In addition, the tunnel barriers between nanoparticles were considered to be tilted at high voltages. Furthermore, the role of the thermal expansion coefficient of the tunnel junction gaps on the tunnel current is taken into account. The model calculations fit very well to the experimental data of a network of disordered gold nanoparticles, a forest of multi-wall carbon nanotubes, and a network of few-layer graphene nanoplates over a wide temperature range (5-300 K) at low and high DC bias voltages (0.001 mV-50 V). Our investigations indicate that, although electron cotunneling in networks of disordered irregularly shaped NPs may occur, non-Arrhenius behavior at low temperatures cannot be described by the cotunneling model, due to the size distribution in the networks and the irregular shape of the nanoparticles. Non-Arrhenius behavior of the samples in the zero bias voltage limit was attributed to the disorder in the samples. Unlike the electron cotunneling model, we found that the crossover from Arrhenius to non-Arrhenius behavior occurs at two temperatures, one at a high temperature and the other at a low temperature.

  16. Using parallel factor analysis modeling (PARAFAC) and self-organizing maps to track senescence-induced patterns in leaf litter leachate

    Science.gov (United States)

    Wheeler, K. I.; Levia, D. F., Jr.; Hudson, J. E.

    2017-12-01

    As trees undergo autumnal processes such as resorption, senescence, and leaf abscission, the dissolved organic matter (DOM) contribution of leaf litter leachate to streams changes. However, little research has investigated how the fluorescent DOM (FDOM) changes throughout the autumn and how this differs inter- and intraspecifically. Two of the major impacts of global climate change on forested ecosystems include altering phenology and causing forest community species and subspecies composition restructuring. We examined changes in FDOM in leachate from American beech (Fagus grandifolia Ehrh.) leaves in Maryland, Rhode Island, Vermont, and North Carolina and yellow poplar (Liriodendron tulipifera L.) leaves from Maryland throughout three different phenophases: green, senescing, and freshly abscissed. Beech leaves from Maryland and Rhode Island have previously been identified as belonging to the same distinct genetic cluster and beech trees from Vermont and the study site in North Carolina from the other. FDOM in samples was characterized using excitation-emission matrices (EEMs) and a six-component parallel factor analysis (PARAFAC) model was created to identify components. Self-organizing maps (SOMs) were used to visualize variation and patterns in the PARAFAC component proportions of the leachate samples. Phenophase and species had the greatest influence on determining where a sample mapped on the SOM when compared to genetic clusters and geographic origin. Throughout senescence, FDOM from all the trees transitioned from more protein-like components to more humic-like ones. Percent greenness of the sampled leaves and the proportion of the tyrosine-like component 1 were found to significantly differ between the two genetic beech clusters. This suggests possible differences in photosynthesis and resorption between the two genetic clusters of beech. The use of SOMs to visualize differences in patterns of senescence between the different species and genetic

  17. Modeling Stone Columns.

    Science.gov (United States)

    Castro, Jorge

    2017-07-11

    This paper reviews the main modeling techniques for stone columns, both ordinary stone columns and geosynthetic-encased stone columns. The paper tries to encompass the more recent advances and recommendations in the topic. Regarding the geometrical model, the main options are the "unit cell", longitudinal gravel trenches in plane strain conditions, cylindrical rings of gravel in axial symmetry conditions, equivalent homogeneous soil with improved properties and three-dimensional models, either a full three-dimensional model or just a three-dimensional row or slice of columns. Some guidelines for obtaining these simplified geometrical models are provided and the particular case of groups of columns under footings is also analyzed. For the latter case, there is a column critical length that is around twice the footing width for non-encased columns in a homogeneous soft soil. In the literature, the column critical length is sometimes given as a function of the column length, which leads to some disparities in its value. Here it is shown that the column critical length mainly depends on the footing dimensions. Some other features related with column modeling are also briefly presented, such as the influence of column installation. Finally, some guidance and recommendations are provided on parameter selection for the study of stone columns.

  18. Ayin (Anogeissus leiocarpus) timber columns

    African Journals Online (AJOL)

    A procedure for designing axially loaded Ayin (Anogeissus leiocarpus) wood column or strut has been investigated. Instead of the usual categorization of columns into short, intermediate and slender according to the value of slenderness ratio, a continuous column formula representing the three categories was derived.

  19. Column Liquid Chromatography.

    Science.gov (United States)

    Majors, Ronald E.; And Others

    1984-01-01

    Reviews literature covering developments of column liquid chromatography during 1982-83. Areas considered include: books and reviews; general theory; columns; instrumentation; detectors; automation and data handling; multidimensional chromatographic and column switching techniques; liquid-solid chromatography; normal bonded-phase, reversed-phase,…

  20. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  1. Parallel Access of Out-Of-Core Dense Extendible Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow J; Rotem, Doron

    2007-07-26

    Datasets used in scientific and engineering applications are often modeled as dense multi-dimensional arrays. For very large datasets, the corresponding array models are typically stored out-of-core as array files. The array elements are mapped onto linear consecutive locations that correspond to the linear ordering of the multi-dimensional indices. Two conventional mappings used are the row-major order and the column-major order of multi-dimensional arrays. Such conventional mappings of dense array files severely limit the performance of applications and the extendibility of the dataset. Firstly, an array file that is organized in, say, row-major order causes applications that subsequently access the data in column-major order to have abysmal performance. Secondly, any subsequent expansion of the array file is limited to only one dimension. Expansions of such out-of-core conventional arrays along arbitrary dimensions require storage reorganization that can be very expensive. We present a solution for storing out-of-core dense extendible arrays that resolves the two limitations. The method uses a mapping function F*(), together with information maintained in axial vectors, to compute the linear address of an extendible array element when passed its k-dimensional index. We also give the inverse function, F*^-1(), for deriving the k-dimensional index when given the linear address. We show how the mapping function, in combination with MPI-IO and a parallel file system, allows for the growth of the extendible array without reorganization and no significant performance degradation of applications accessing elements in any desired order. We give methods for reading and writing sub-arrays into and out of parallel applications that run on a cluster of workstations. The axial vectors are replicated and maintained in each node that accesses sub-array elements.
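
    For a fixed-shape array, the forward and inverse address maps are the familiar row-major formulas; the Python sketch below shows that baseline for intuition. The paper's F*() generalizes this by consulting the axial vectors, so the array can grow along any dimension without relocating stored elements.

        def linear_address(index, shape):
            # Forward map: k-dimensional index -> linear offset (row-major).
            addr = 0
            for i, n in zip(index, shape):
                addr = addr * n + i
            return addr

        def k_d_index(addr, shape):
            # Inverse map: linear offset -> k-dimensional index.
            index = []
            for n in reversed(shape):
                addr, i = divmod(addr, n)
                index.append(i)
            return tuple(reversed(index))

        assert k_d_index(linear_address((2, 3, 1), (4, 5, 6)), (4, 5, 6)) == (2, 3, 1)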

  2. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  3. Evaluation of the chiral recognition properties as well as the column performance of four chiral stationary phases based on cellulose (3,5-dimethylphenylcarbamate) by parallel HPLC and SFC.

    Science.gov (United States)

    Nelander, Hanna; Andersson, Shalini; Ohlén, Kristina

    2011-12-30

    The performance of four commercially available cellulose tris(3,5-dimethylphenylcarbamate)-based chiral stationary phases (CSPs) was evaluated with parallel high performance liquid chromatography (HPLC) and supercritical fluid chromatography (SFC). Retention, enantioselectivity, resolution and efficiency were compared for a set of neutral, basic and acidic compounds having different physico-chemical properties, using different mobile phase conditions. Although the chiral selector is the same in all four CSPs, a large difference in the ability to retain and resolve enantiomers was observed under the same chromatographic conditions. We believe that this is mainly due to differences in the silica matrix and immobilization techniques used by the different vendors. An extended study of metoprolol and structural analogues gave a deeper understanding of the accessibility of the chiral discriminating interactions and their impact on the resolution of the racemic compounds on the four CSPs studied. A clear difference in enantioselectivity was also observed between the SFC and LC modes; hydrogen bonding was found to play an important role in the differential binding of the enantiomers to the CSPs. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  5. Small Column Ion Exchange

    International Nuclear Information System (INIS)

    Huff, Thomas

    2010-01-01

    Small Column Ion Exchange (SCIX) leverages a suite of technologies developed by DOE across the complex to achieve lifecycle savings. The technologies are applicable to multiple sites, and early testing supported multiple sites; the balance of SRS SCIX testing supports SRS deployment. A formal Systems Engineering Evaluation (SEE) was performed and selected Small Column Ion Exchange columns containing Crystalline Silicotitanate (CST) in a 2-column lead/lag configuration. The SEE also considered use of Spherical Resorcinol-Formaldehyde (sRF). Advantages of the approach at SRS include: (1) no new buildings; (2) a low volume of Cs waste in solid form compared to aqueous strip effluent; and (3) the availability of downstream processing facilities for immediate processing of spent resin.

  6. JCE Feature Columns

    Science.gov (United States)

    Holmes, Jon L.

    1999-05-01

    The Features area of JCE Online is now readily accessible through a single click from our home page. In the Features area each column is linked to its own home page. These column home pages also have links to them from the online Journal Table of Contents pages or from any article published as part of that feature column. Using these links you can easily find abstracts of additional articles that are related by topic. Of course, JCE Online+ subscribers are then just one click away from the entire article. Finding related articles is easy because each feature column "site" contains links to the online abstracts of all the articles that have appeared in the column. In addition, you can find the mission statement for the column and the email link to the column editor that I mentioned above. At the discretion of its editor, a feature column site may contain additional resources. As an example, the Chemical Information Instructor column edited by Arleen Somerville will have a periodically updated bibliography of resources for teaching and using chemical information. Due to the increase in the number of these resources available on the WWW, it only makes sense to publish this information online so that you can get to these resources with a simple click of the mouse. We expect that there will soon be additional information and resources at several other feature column sites. Following in the footsteps of the Chemical Information Instructor, up-to-date bibliographies and links to related online resources can be made available. We hope to extend the online component of our feature columns with moderated online discussion forums. If you have a suggestion for an online resource you would like to see included, let the feature editor or JCE Online (jceonline@chem.wisc.edu) know about it. JCE Internet Features JCE Internet also has several feature columns: Chemical Education Resource Shelf, Conceptual Questions and Challenge Problems, Equipment Buyers Guide, Hal's Picks, Mathcad

  7. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)]

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability for parallel implementation and for their scalability. Significant enhancements are also discovered, which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
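
    The symmetrization step mentioned above is not spelled out in the abstract, but a standard route (a sketch of one plausible transform, not necessarily the dissertation's) uses the form-factor reciprocity relation A_i F_ij = A_j F_ji: left-multiplying the radiosity system (I - diag(rho) F) B = E by diag(A_i / rho_i) yields a symmetric coefficient matrix, opening the door to solvers such as conjugate gradients. A quick numerical check in Python, with all values hypothetical:

      import numpy as np

      # Check that diag(A/rho) (I - diag(rho) F) is symmetric when A_i F_ij = A_j F_ji.
      n = 5
      rng = np.random.default_rng(0)
      A = rng.uniform(1.0, 2.0, n)          # patch areas (hypothetical)
      rho = rng.uniform(0.1, 0.9, n)        # diffuse reflectivities (hypothetical)

      G = rng.uniform(0.0, 1.0, (n, n))
      G = (G + G.T) / 2                     # symmetric "exchange" quantities
      np.fill_diagonal(G, 0.0)             # planar patches: F_ii = 0
      F = G / A[:, None]                    # so A_i * F_ij = G_ij is symmetric

      M = np.eye(n) - np.diag(rho) @ F      # radiosity system (I - diag(rho) F) B = E
      S = np.diag(A / rho) @ M              # S_ij = (A_i/rho_i) d_ij - A_i F_ij
      print(np.allclose(S, S.T))            # -> True: the transformed matrix is symmetric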

  8. Distillation Column Flooding Predictor

    Energy Technology Data Exchange (ETDEWEB)

    George E. Dzyacky

    2010-11-23

    The Flooding Predictor™ is a patented advanced control technology proven in research at the Separations Research Program, University of Texas at Austin, to increase distillation column throughput by over 6%, while also increasing energy efficiency by 10%. The research was conducted under a U. S. Department of Energy Cooperative Agreement awarded to George Dzyacky of 2ndpoint, LLC. The Flooding Predictor™ works by detecting the incipient flood point and controlling the column closer to its actual hydraulic limit than historical practices have allowed. Further, the technology uses existing column instrumentation, meaning no additional refining infrastructure is required. Refiners often push distillation columns to maximize throughput, improve separation, or simply to achieve day-to-day optimization. Attempting to achieve such operating objectives is a tricky undertaking that can result in flooding. Operators and advanced control strategies alike rely on the conventional use of delta-pressure instrumentation to approximate the column’s approach to flood. But column delta-pressure is more an inference of the column’s approach to flood than it is an actual measurement of it. As a consequence, delta pressure limits are established conservatively in order to operate in a regime where the column is never expected to flood. As a result, there is much “left on the table” when operating in such a regime, i.e. the capacity difference between controlling the column to an upper delta-pressure limit and controlling it to the actual hydraulic limit. The Flooding Predictor™, an innovative pattern recognition technology, controls columns at their actual hydraulic limit, which research shows leads to a throughput increase of over 6%. Controlling closer to the hydraulic limit also permits operation in a sweet spot of increased energy-efficiency. In this region of increased column loading, the Flooding Predictor is able to exploit the benefits of higher liquid
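
    The patented pattern-recognition algorithm itself is not disclosed here. Purely to illustrate the general idea of detecting an incipient flood from existing delta-pressure instrumentation rather than waiting on a conservative fixed limit, a toy detector might watch for a sustained steepening of the delta-pressure trend; all names and thresholds below are hypothetical:

      import numpy as np

      def incipient_flood_flag(dp, window=10, slope_limit=0.01):
          """Toy detector: flag a sustained rise in the delta-pressure trend.
          An illustrative stand-in only; this is not the patented Flooding
          Predictor algorithm, which the abstract does not disclose."""
          if len(dp) < window:
              return False
          slope = np.polyfit(np.arange(window), np.asarray(dp[-window:]), 1)[0]
          return slope > slope_limit

      # Hypothetical delta-pressure trace: slow drift, then a steep pre-flood rise.
      trace = list(0.30 + 0.001 * np.arange(100)) + list(0.40 + 0.02 * np.arange(20))
      print(incipient_flood_flag(trace))   # -> True once the steep tail appears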

  9. Nuclear reactor control column

    International Nuclear Information System (INIS)

    Bachovchin, D.M.

    1982-01-01

    The nuclear reactor control column comprises a column disposed within the nuclear reactor core having a variable cross-section hollow channel and containing balls whose vertical location is determined by the flow of the reactor coolant through the column. The control column is divided into three basic sections wherein each of the sections has a different cross-sectional area. The uppermost section of the control column has the greatest cross-sectional area, the intermediate section of the control column has the smallest cross-sectional area, and the lowermost section of the control column has the intermediate cross-sectional area. In this manner, the area of the uppermost section can be established such that when the reactor coolant is flowing under normal conditions therethrough, the absorber balls will be lifted and suspended in a fluidized bed manner in the upper section. However, when the reactor coolant flow falls below a predetermined value, the absorber balls will fall through the intermediate section and into the lowermost section, thereby reducing the reactivity of the reactor core and shutting down the reactor.
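
    The hydraulic logic described above admits a back-of-envelope check: for a fixed volumetric coolant flow Q, the superficial velocity in a section of area A is Q/A, so the three areas can be sized such that the balls stay fluidized in the wide upper section at normal flow but settle once the flow drops. A minimal sketch with entirely hypothetical numbers:

      # Sketch of the area-based logic described above (all numbers hypothetical).
      # Superficial velocity v = Q / A; balls stay fluidized where v >= v_mf.
      v_mf = 0.5                  # assumed minimum fluidization velocity, m/s

      sections = {                # cross-sectional areas, m^2 (upper section largest)
          "upper": 0.060,
          "intermediate": 0.020,
          "lower": 0.035,
      }

      def ball_location(Q):
          """Return where the absorber balls sit for volumetric flow Q (m^3/s)."""
          v_upper = Q / sections["upper"]
          return "suspended in upper section" if v_upper >= v_mf else "settled in lower section"

      print(ball_location(0.040))   # normal flow: 0.67 m/s in upper section -> suspended
      print(ball_location(0.020))   # degraded flow: 0.33 m/s -> balls drop, reactor shuts down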

  10. Improvements in solvent extraction columns

    International Nuclear Information System (INIS)

    Aughwane, K.R.

    1987-01-01

    Solvent extraction columns are used in the reprocessing of irradiated nuclear fuel. For an effective reprocessing operation, a solvent extraction column is required which is capable of distributing the feed over most of the column. The patent describes improvements in solvent extraction columns which allow the feed to be distributed over a greater length of the column than was previously possible. (U.K.)

  11. EX1701 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1701: Kingman/Palmyra, Jarvis (Mapping)...

  12. EX1403 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1403: East Coast Mapping and Exploration...

  13. EX0905 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX0905: Mapping Field Trials II Mendocino...

  14. EX0801 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX0801: Mapping Operations Shakedown...

  15. ON THE ORIGIN OF THE HIGH COLUMN DENSITY TURNOVER IN THE H I COLUMN DENSITY DISTRIBUTION

    International Nuclear Information System (INIS)

    Erkal, Denis; Gnedin, Nickolay Y.; Kravtsov, Andrey V.

    2012-01-01

    We study the high column density regime of the H I column density distribution function and argue that there are two distinct features: a turnover at N_HI ≈ 10^21 cm^-2, which is present at both z = 0 and z ≈ 3, and a lack of systems above N_HI ≈ 10^22 cm^-2 at z = 0. Using observations of the column density distribution, we argue that the H I-H2 transition does not cause the turnover at N_HI ≈ 10^21 cm^-2 but can plausibly explain the turnover at N_HI ≳ 10^22 cm^-2. We compute the H I column density distribution of individual galaxies in the THINGS sample and show that the turnover column density depends only weakly on metallicity. Furthermore, we show that the column density distribution of galaxies, corrected for inclination, is insensitive to the resolution of the H I map or to averaging in radial shells. Our results indicate that the similarity of H I column density distributions at z = 3 and 0 is due to the similarity of the maximum H I surface densities of high-z and low-z disks, set presumably by universal processes that shape properties of the gaseous disks of galaxies. Using fully cosmological simulations, we explore other candidate physical mechanisms that could produce a turnover in the column density distribution. We show that while turbulence within giant molecular clouds cannot affect the damped Lyα column density distribution, stellar feedback can affect it significantly if the feedback is sufficiently effective in removing gas from the central 2-3 kpc of high-redshift galaxies. Finally, we argue that it is meaningful to compare column densities averaged over ~kpc scales with those estimated from quasar spectra that probe sub-pc scales, due to the steep power spectrum of H I column density fluctuations observed in nearby galaxies.
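
    As an illustration of how such a distribution function is tabulated from a map, one can histogram pixel column densities in logarithmic bins. The sketch below uses synthetic lognormal data and a simplified normalization (conventions for f(N_HI) vary):

      import numpy as np

      # Sketch: tabulating a column density distribution from a map (toy data only).
      rng = np.random.default_rng(1)
      N_HI = 10 ** rng.normal(20.5, 0.6, size=(512, 512))   # hypothetical map, cm^-2

      edges = np.logspace(19.0, 22.5, 36)                   # log-spaced N_HI bins
      counts, _ = np.histogram(N_HI, bins=edges)
      f = counts / (np.diff(edges) * N_HI.size)             # distribution per unit N_HI

      # A turnover like the one reported near 1e21 cm^-2 would appear as a break
      # in the slope of log f versus log N_HI.
      peak = np.argmax(counts)
      print(f"counts peak in bin starting at N_HI = {edges[peak]:.2e} cm^-2, f = {f[peak]:.2e}")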

  16. A hybrid genetic linkage map of two ecologically and morphologically divergent Midas cichlid fishes (Amphilophus spp.) obtained by massively parallel DNA sequencing (ddRADSeq).

    Science.gov (United States)

    Recknagel, Hans; Elmer, Kathryn R; Meyer, Axel

    2013-01-01

    Cichlid fishes are an excellent model system for studying speciation and the formation of adaptive radiations because of their tremendous species richness and astonishing phenotypic diversity. Most research has focused on African rift lake fishes, although Neotropical cichlid species display much variability as well. Almost one dozen species of the Midas cichlid species complex (Amphilophus spp.) have been described so far and have formed repeated adaptive radiations in several Nicaraguan crater lakes. Here we apply double-digest restriction-site associated DNA sequencing to obtain a high-density linkage map of an interspecific cross between the benthic Amphilophus astorquii and the limnetic Amphilophus zaliosus, which are sympatric species endemic to Crater Lake Apoyo, Nicaragua. A total of 755 RAD markers were genotyped in 343 F2 hybrids. The map resolved 25 linkage groups, almost matching the total number of chromosomes (n = 24) in these species, and spans a total distance of 1427 cM with an average marker spacing of 1.95 cM. Regions of segregation distortion were identified in five linkage groups. Based on the pedigree of parents to F2 offspring, we calculated a genome-wide mutation rate of 6.6 × 10^-8 mutations per nucleotide per generation. This genetic map will facilitate the mapping of ecomorphologically relevant adaptive traits in the repeated phenotypes that evolved within the Midas cichlid lineage and, as the first linkage map of a Neotropical cichlid, facilitate comparative genomic analyses between African cichlids, Neotropical cichlids and other teleost fishes.
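
    The reported spacing follows directly from the map statistics: 755 markers spread over 25 linkage groups give 755 - 25 = 730 marker intervals, and 1427 cM over 730 intervals is approximately 1.95 cM. A one-line check:

      # Consistency check of the reported map statistics.
      markers, groups, total_cM = 755, 25, 1427
      intervals = markers - groups     # each group has one fewer interval than markers
      print(total_cM / intervals)      # -> 1.954..., the reported ~1.95 cM spacing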

  17. Buckling of liquid columns

    NARCIS (Netherlands)

    Habibi, M.; Rahmani, Y.; Bonn, D.; Ribe, N.M.

    2010-01-01

    Under appropriate conditions, a column of viscous liquid falling onto a rigid surface undergoes a buckling instability. Here we show experimentally and theoretically that liquid buckling exhibits a hitherto unsuspected complexity involving three different modes—viscous, gravitational, and

  18. Solvent extraction columns

    International Nuclear Information System (INIS)

    Middleton, P.; Smith, J.R.

    1979-01-01

    In pulsed columns for use in solvent extraction processes, e.g. the reprocessing of nuclear fuel, the horizontal perforated plates inside the column are separated by interplate spacers manufactured from metallic neutron absorbing material. The spacer may be in the form of a spiral or concentric circles separated by radial limbs, or may be of egg-box construction. Suitable neutron absorbing materials include stainless steel containing boron or gadolinium, hafnium metal or alloys of hafnium. (UK)

  19. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    Work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to current basic and applied problems of nuclear and particle physics. For applications using the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking at low energies down to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field refers to simulations for nuclear medicine applications, for instance the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. These calculations are particularly suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other work mentioned in the same field refers to simulation of electron channelling in crystals and simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment.

  20. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  1. Assembly for connecting the column ends of two capillary columns

    International Nuclear Information System (INIS)

    Kolb, B.; Auer, M.; Pospisil, P.

    1984-01-01

    In gas chromatography, the column ends of two capillary columns are inserted into a straight capillary from both sides forming annular gaps. The capillary is located in a tee out of which the capillary columns are sealingly guided, and to which carrier gas is supplied by means of a flushing flow conduit. A ''straight-forward operation'' having capillary columns connected in series and a ''flush-back operation'' are possible. The dead volume between the capillary columns can be kept small

  2. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...
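
    The space argument can be illustrated outside NESL or Accelerate: a fully materializing evaluator holds all n^3 scalar products at once, while a streaming evaluator consumes the same products one at a time in constant space. A minimal Python analogue (standing in for the data-parallel languages discussed):

      from itertools import product

      n = 50

      def materializing(n):
          # NDP-style full materialization: all n^3 scalar products live at once.
          cells = [i * j * k for i, j, k in product(range(n), repeat=3)]  # O(n^3) space
          return sum(cells)

      def streaming(n):
          # Streaming evaluation: same products, one at a time, O(1) space.
          return sum(i * j * k for i, j, k in product(range(n), repeat=3))

      print(materializing(n) == streaming(n))   # same result, very different footprint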

  3. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  4. Columns in Clay

    Science.gov (United States)

    Leenhouts, Robin

    2010-01-01

    This article describes a clay project for students studying Greece and Rome. It provides a wonderful way to learn slab construction techniques by making small clay column capitols. With this lesson, students learn architectural vocabulary and history, understand the importance of classical architectural forms and their influence on today's…

  5. Slender CRC Columns

    DEFF Research Database (Denmark)

    Aarup, Bendt; Jensen, Lars Rom; Ellegaard, Peter

    2005-01-01

    CRC is a high-performance steel fibre reinforced concrete with a typical compressive strength of 150 MPa. Design methods for a number of structural elements have been developed since CRC was invented in 1986, but the current project set out to further investigate the range of columns for which...

  6. Practical column design guide

    CERN Document Server

    Nitsche, M

    2017-01-01

    This book highlights the aspects that need to be considered when designing distillation columns in practice. It discusses the influencing parameters as well as the equations governing them, and presents several numerical examples. The book is intended both for experienced designers and for those who are new to the subject.

  7. Real-Time Spaceborne Synthetic Aperture Radar Float-Point Imaging System Using Optimized Mapping Methodology and a Multi-Node Parallel Accelerating Technique

    Science.gov (United States)

    Li, Bingyi; Chen, Liang; Yu, Wenyue; Xie, Yizhuang; Bian, Mingming; Zhang, Qingjun; Pang, Long

    2018-01-01

    With the development of satellite load technology and very large-scale integrated (VLSI) circuit technology, on-board real-time synthetic aperture radar (SAR) imaging systems have facilitated rapid response to disasters. A key goal of the on-board SAR imaging system design is to achieve high real-time processing performance under severe size, weight, and power consumption constraints. This paper presents a multi-node prototype system for real-time SAR imaging processing. We decompose the commonly used chirp scaling (CS) SAR imaging algorithm into two parts according to the computing features. The linearization and logic-memory optimum allocation methods are adopted to realize the nonlinear part in a reconfigurable structure, and the two-part bandwidth balance method is used to realize the linear part. Thus, float-point SAR imaging processing can be integrated into a single Field Programmable Gate Array (FPGA) chip instead of relying on distributed technologies. A single processing node requires 10.6 s and consumes 17 W to focus 25-km-swath-width, 5-m-resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. The design methodology of the multi-FPGA parallel accelerating system under the real-time principle is introduced. As a proof of concept, a prototype with four processing nodes and one master node is implemented using a Xilinx xc6vlx315t FPGA. The weight and volume of a single machine are 10 kg and 32 cm × 24 cm × 20 cm, respectively, and the power consumption is under 100 W. The real-time performance of the proposed design is demonstrated on Chinese Gaofen-3 stripmap continuous imaging. PMID:29495637
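
    Taken at face value, the quoted single-node figures imply a sustained processing rate that is easy to check: 16,384 × 16,384 samples focused in 10.6 s is roughly 25 Mpixel/s at 17 W, or about 1.5 Mpixel/s per watt:

      # Back-of-envelope check of the quoted single-node figures.
      samples = 16384 * 16384                 # one 16,384 x 16,384 focusing granule
      seconds, watts = 10.6, 17.0
      rate = samples / seconds
      print(f"{rate / 1e6:.1f} Mpixel/s, {rate / watts / 1e6:.2f} Mpixel/s per watt")
      # -> 25.3 Mpixel/s and about 1.5 Mpixel/s per watt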

  8. CUB DI (Deionization) column control system

    International Nuclear Information System (INIS)

    Seino, K.C.

    1999-01-01

    For the old MR (Main Ring), deionization was done with two columns in CUB, using an ion exchange process. Typically 65 GPM of LCW flowed through a column, and the resistivity was raised from 3 Mohm-cm to over 12 Mohm-cm. After a few weeks, the columns lost their effectiveness and had to be regenerated in a process involving backwashing and adding hydrochloric acid and sodium hydroxide. For normal MR operations, LCW returned from the ring and passed through the two columns in parallel for deionization, although the system could have been operated satisfactorily with only one in use. A 3000 gallon reservoir (the Spheres) provided a reserve of LCW for allowing water leaks and expansions in the MR. During the MI (Main Injector) construction period, a third DI column was added to satisfy requirements for the MI. When the third column was added, the old regeneration controller was replaced with a new controller based on an Allen-Bradley PLC (i.e., SLC-5/04). The PLC is widely used and well documented, and therefore it may allow us to modify the regeneration programs in the future. In addition to the above regeneration controller, the old control panels (which were used to manipulate pumps and valves to supply LCW in Normal mode and to do Int. Recir. (Internal Recirculation) and Makeup) were replaced with a new control system based on Sixtrak Gateway and I/O modules. For simplicity, the new regeneration controller is called the US Filter system, and the new control system is called the Fermilab system in this writing.

  9. Nine Words - Nine Columns

    DEFF Research Database (Denmark)

    Trempe Jr., Robert B.; Buthke, Jan

    2016-01-01

    This book records the efforts of a one-week joint workshop between Master students from Studio 2B of Arkitektskolen Aarhus and Master students from the Harbin Institute of Technology in Harbin, China. The workshop employed nine action words to instigate team-based investigation into the effects of...... as formwork for the shaping of wood veneer. The resulting columns ‘wear’ every aspect of this design pipeline process and display the power of process towards an architectural resolution.

  10. NMFS Water Column Sonar Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Water column sonar data are an important component of fishery independent surveys, habitat studies and other research. NMFS water column sonar data are archived here.

  11. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.
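
    For readers unfamiliar with the algorithm being parallelized, the sketch below shows a generic textbook V-cycle for a 1D Poisson problem (an illustration of the method surveyed, not code from the paper); it is the smoothing, restriction and prolongation steps shown here that must be mapped onto processors:

      import numpy as np

      # Generic V-cycle for -u'' = f with zero Dirichlet boundaries (textbook sketch).
      def smooth(u, f, h, sweeps=3):
          for _ in range(sweeps):                      # damped Jacobi smoother
              up = np.pad(u, 1)                        # zero boundary values
              u = u + (2 / 3) * (0.5 * (up[:-2] + up[2:]) + 0.5 * h * h * f - u)
          return u

      def residual(u, f, h):
          up = np.pad(u, 1)
          return f - (2 * u - up[:-2] - up[2:]) / (h * h)

      def v_cycle(u, f, h):
          u = smooth(u, f, h)
          if len(u) > 1:
              r = residual(u, f, h)
              rc = (r[0:-2:2] + 2 * r[1::2] + r[2::2]) / 4   # full-weighting restriction
              ec = v_cycle(np.zeros_like(rc), rc, 2 * h)     # coarse-grid correction
              e = np.zeros_like(u)
              e[1::2] = ec                                   # prolongation: copy...
              pe = np.pad(ec, 1)
              e[0::2] = (pe[:-1] + pe[1:]) / 2               # ...and interpolate
              u = smooth(u + e, f, h)
          return u

      n = 255                                          # interior points, 2**k - 1
      h = 1.0 / (n + 1)
      x = np.linspace(h, 1 - h, n)
      f = np.pi ** 2 * np.sin(np.pi * x)               # exact solution: sin(pi x)
      u = np.zeros(n)
      for _ in range(8):
          u = v_cycle(u, f, h)
      print(np.max(np.abs(u - np.sin(np.pi * x))))     # small, near discretization error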

  12. Streaming nested data parallelism on multicores

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2016-01-01

    The paradigm of nested data parallelism (NDP) allows a variety of semi-regular computation tasks to be mapped onto SIMD-style hardware, including GPUs and vector units. However, some care is needed to keep down space consumption in situations where the available parallelism may vastly exceed...

  13. Elevator frames two columns

    OpenAIRE

    Marín Jiménez, Juan Francisco

    2015-01-01

    This project aims to solve the problem of vertical transport of loads raised by a company, in accordance with standard UNE 58-132-91/6. The purpose of the project is the industrial design of a load-handling system based on a two-column lifting device, driven by flat belts and steel cables, that moves loads from one transport level to another in order to connect two assembly lines situated at different heights. The goal is to lift a 780 kg load to a height of 2,400 mm.

  14. Column: Every Last Byte

    Directory of Open Access Journals (Sweden)

    Simson Garfinkel

    2011-06-01

    Inheritance powder is the name that was given to poisons, especially arsenic, that were commonly used in the 17th and early 18th centuries to hasten the death of the elderly. For most of the 17th century, arsenic was deadly but undetectable, making it nearly impossible to prove that someone had been poisoned. The first arsenic test produced a gas—hardly something that a scientist could show to a judge. Faced with a growing epidemic of poisonings, doctors and chemists spent decades searching for something better. (See PDF for full column.)

  15. Retrograde lag screw placement in anterior acetabular column with regard to the anterior pelvic plane and midsagittal plane -- virtual mapping of 260 three-dimensional hemipelvises for quantitative anatomic analysis.

    Science.gov (United States)

    Ochs, Bjoern Gunnar; Stuby, Fabian Maria; Ateschrang, Atesch; Stoeckle, Ulrich; Gonser, Christoph Emanuel

    2014-10-01

    Percutaneous screw placement can be used for minimally invasive treatment of non-displaced or minimally displaced fractures of the anterior column. The complex pelvic geometry can pose a major challenge even for experienced surgeons. The present study examined the preformed bone stock of the anterior column in 260 hemipelvises (130 male and 130 female). Screws were virtually implanted using iPlan(®) CMF (BrainLAB AG, Feldkirchen, Germany); the maximal implant length and the maximal implant diameter were assessed. The study showed that 6.5 mm screws can generally be used in men; in women, however, individual planning of the maximal implant diameter is essential, since we found that in 15.4% of women, screws with a diameter of less than 6.5 mm were necessary. The virtual analysis of the preformed bone stock corridor of the anterior column showed two constrictions of crucial clinical importance. These are located at 18% and 55% (men) and at 16% and 55% (women) of the distance measured from the entry point along the axis of the implant. The entry point of the retrograde anterior column screw in our collective was located lateral to the tuberculum pubicum at the level of the superior-medial margin of the foramen obturatum. In female patients, the entry point was located significantly more lateral to the symphysis and closer to the cranial margin of the ramus superior ossis pubis. The mean angle between the screw trajectory and the anterior pelvic plane in sagittal section was 31.6 ± 5.5°, the mean angle between the screw trajectory and the midsagittal plane in axial section was 55.9 ± 4.6°, and the mean angle between the screw trajectory and the midsagittal plane in coronal section was 42.1 ± 3.9°, with no significant deviation between the sexes. The individual angles formed by the screw trajectory and the anterior pelvic and midsagittal planes are independent of the anthropometric parameters sex, age, body length and weight. Therefore, they can be used for orientation in lag screw placement keeping…
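
    The reported angles are line-to-plane angles, which follow from elementary vector geometry: the angle between a trajectory and a plane is the arcsine of the normalized projection of the trajectory onto the plane's normal. A small sketch with a made-up trajectory vector (the study's anatomy-derived data are not reproduced here):

      import numpy as np

      def angle_to_plane(direction, normal):
          """Angle in degrees between a line and a plane (90 deg minus the
          angle between the line and the plane's normal)."""
          d = direction / np.linalg.norm(direction)
          n = normal / np.linalg.norm(normal)
          return np.degrees(np.arcsin(abs(d @ n)))

      # Hypothetical trajectory in a pelvis-aligned frame (x lateral, y anterior, z cranial).
      trajectory = np.array([0.83, 0.42, 0.37])
      midsagittal_normal = np.array([1.0, 0.0, 0.0])    # midsagittal plane: x = 0
      print(f"{angle_to_plane(trajectory, midsagittal_normal):.1f} deg")  # about 56 deg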

  16. Annular pulse column development studies

    International Nuclear Information System (INIS)

    Benedict, G.E.

    1980-01-01

    The capacity of critically safe cylindrical pulse columns limits the size of nuclear fuel solvent extraction plants because of the limited cross-sectional area of plutonium, U-235, or U-233 processing columns. Thus, there is a need to increase the cross-sectional area of these columns. This can be accomplished through the use of a column having an annular cross section. The preliminary testing of a pilot-plant-scale annular column has been completed and is reported herein. The column is made from 152.4-mm (6-in.) glass pipe sections with an 89-mm (3.5-in.) o.d. internal tube, giving an annular width of 32-mm (1.25-in.). Louver plates are used to swirl the column contents to prevent channeling of the phases. The data from this testing indicate that this approach can successfully provide larger-cross-section critically safe pulse columns. While the capacity is only 70% of that of a cylindrical column of similar cross section, the efficiency is almost identical to that of a cylindrical column. No evidence was seen of any non-uniform pulsing action from one side of the column to the other

  17. Language constructs for modular parallel programs

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.

    1996-03-01

    We describe programming language constructs that facilitate the application of modular design techniques in parallel programming. These constructs allow us to isolate resource management and processor scheduling decisions from the specification of individual modules, which can themselves encapsulate design decisions concerned with concurrency, communication, process mapping, and data distribution. This approach permits development of libraries of reusable parallel program components and the reuse of these components in different contexts. In particular, alternative mapping strategies can be explored without modifying other aspects of program logic. We describe how these constructs are incorporated in two practical parallel programming languages, PCN and Fortran M. Compilers have been developed for both languages, allowing experimentation in substantial applications.
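
    The separation argued for here can be mimicked in most languages, even without PCN or Fortran M: the module encapsulates the computation, and the mapping strategy is supplied separately so it can be swapped without touching module logic. A generic Python sketch (not the constructs of either language):

      from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

      def simulate_cell(cell_id):
          return cell_id * cell_id            # stand-in for a module's real work

      def run(module, tasks, executor_factory):
          # The mapping/scheduling decision lives here, outside the module's logic.
          with executor_factory() as ex:
              return list(ex.map(module, tasks))

      if __name__ == "__main__":
          print(run(simulate_cell, range(8), ThreadPoolExecutor))   # one mapping strategy
          print(run(simulate_cell, range(8), ProcessPoolExecutor))  # another, same module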

  18. DCODE: A Distributed Column-Oriented Database Engine for Big Data Analytics

    OpenAIRE

    Liu, Yanchen; Cao, Fang; Mortazavi, Masood; Chen, Mengmeng; Yan, Ning; Ku, Chi; Adnaik, Aniket; Morgan, Stephen; Shi, Guangyu; Wang, Yuhu; Fang, Fan

    2015-01-01

    We propose a novel Distributed Column-Oriented Database Engine (DCODE) for efficient analytic query processing that combines advantages of both column storage and parallel processing. In DCODE, we enhance an existing open-source columnar database engine by adding the capability for handling queries over a cluster. Specifically, we studied parallel query execution and optimization techniques such as horizontal partitioning, exchange op...
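
    DCODE's implementation is not detailed in this abstract; as a generic illustration of horizontal partitioning with an exchange-style merge, a column can be split into ranges, aggregated per partition in parallel, and the partial results gathered and combined (a sketch of the general technique, not DCODE itself):

      from concurrent.futures import ProcessPoolExecutor

      def partial_sum(chunk):
          return sum(chunk)                        # per-partition operator

      def parallel_column_sum(column, partitions=4):
          size = (len(column) + partitions - 1) // partitions
          chunks = [column[i:i + size] for i in range(0, len(column), size)]
          with ProcessPoolExecutor(max_workers=partitions) as pool:
              partials = list(pool.map(partial_sum, chunks))   # exchange: gather partials
          return sum(partials)                                 # merge step

      if __name__ == "__main__":
          col = list(range(1_000_000))
          print(parallel_column_sum(col) == sum(col))          # -> True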

  19. Column-to-column packing variation of disposable pre-packed columns for protein chromatography.

    Science.gov (United States)

    Schweiger, Susanne; Hinterberger, Stephan; Jungbauer, Alois

    2017-12-08

    In the biopharmaceutical industry, pre-packed columns are the standard for process development, but they must be qualified before use in experimental studies to confirm the required performance of the packed bed. Column qualification is commonly done by pulse response experiments and depends highly on the experimental testing conditions. Additionally, the peak analysis method, the variation in the 3D packing structure of the bed, and the measurement precision of the workstation influence the outcome of qualification runs. While a full body of literature on these factors is available for HPLC columns, no comparable studies exist for preparative columns for protein chromatography. We quantified the influence of these parameters for commercially available pre-packed and self-packed columns of disposable and non-disposable design. Pulse response experiments were performed on 105 preparative chromatography columns with volumes of 0.2-20 ml. The analyte acetone was studied at six different superficial velocities (30, 60, 100, 150, 250 and 500 cm/h). The column-to-column packing variation between disposable pre-packed columns of different diameter-length combinations varied by 10-15%, which was acceptable for the intended use. The column-to-column variation cannot be explained by the packing density, but is interpreted as a difference in particle arrangement in the column. Since it was possible to determine differences in the column-to-column performance, we concluded that the columns were well-packed. The measurement precision of the chromatography workstation was independent of the column volume and was in a range of ±0.01 ml for the first peak moment and ±0.007 ml² for the second moment. The measurement precision must be considered for small columns in the range of 2 ml or less. The efficiency of disposable pre-packed columns was equal or better than that of self-packed columns. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
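
    The qualification metrics mentioned here are the standard moment integrals of the pulse response. A sketch of how the first moment and second central moment might be computed from a recorded acetone pulse (synthetic data, trapezoidal integration):

      import numpy as np

      def peak_moments(t, c):
          """First moment and second central moment of a pulse response c(t);
          trapezoidal integration, a standard approach."""
          area = np.trapz(c, t)
          m1 = np.trapz(t * c, t) / area
          m2 = np.trapz((t - m1) ** 2 * c, t) / area
          return m1, m2

      t = np.linspace(0, 10, 1001)                  # hypothetical elution time axis
      c = np.exp(-0.5 * ((t - 4.0) / 0.5) ** 2)     # synthetic Gaussian pulse
      m1, m2 = peak_moments(t, c)
      print(m1, m2)                                 # -> about 4.0 and 0.25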

  20. Column: File Cabinet Forensics

    Directory of Open Access Journals (Sweden)

    Simson Garfinkel

    2011-12-01

    Researchers can spend their time reverse engineering, performing reverse analysis, or making substantive contributions to digital forensics science. Although work in all of these areas is important, it is the scientific breakthroughs that are the most critical for addressing the challenges that we face. Reverse engineering is the traditional bread-and-butter of digital forensics research. Companies like Microsoft and Apple deliver computational artifacts (operating systems, applications and phones) to the commercial market. These artifacts are bought and used by billions. Some have evil intent, and (if society is lucky) the computers end up in the hands of law enforcement. Unfortunately the original vendors rarely provide digital forensics tools that make their systems amenable to analysis by law enforcement. Hence the need for reverse engineering. (See PDF for full column.)

  1. Commissioning and operation of distillation column at Madras Atomic Power Station (Paper No. 1.10)

    International Nuclear Information System (INIS)

    Neelakrishnan, G.; Subramanian, N.

    1992-01-01

    In the Madras Atomic Power Station (MAPS), an upgrading plant based on vacuum distillation was constructed to upgrade the downgraded heavy water collected in vapor recovery dryers. There are two distillation columns, each with a capacity of 77.5 tonnes per annum of reactor-grade heavy water at an average feed concentration of 30% IP. The performance of the distillation columns has been very good: column I and column II have achieved operating factors of 92% and 90%, respectively. The commissioning activities and subsequent improvements carried out in the distillation columns are described. (author)

  2. Compact electron beam focusing column

    Science.gov (United States)

    Persaud, Arun; Leung, Ka-Ngo; Reijonen, Jani

    2001-12-01

    A novel design for an electron beam focusing column has been developed at LBNL. The design is based on a low-energy-spread multicusp plasma source which is used as a cathode for electron beam production. The focusing column is 10 mm in length. The electron beam is focused by means of electrostatic fields. The column is designed for a maximum voltage of 50 kV. Simulations of the electron trajectories have been performed using the 2D simulation codes IGUN and EGUN. The electron temperature has also been incorporated into the simulations. The electron beam simulations, column design and fabrication will be discussed in this presentation.

  3. Parallel processing for fluid dynamics applications

    International Nuclear Information System (INIS)

    Johnson, G.M.

    1989-01-01

    The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several examples are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices.

  4. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  5. Safety barriers and lighting columns.

    NARCIS (Netherlands)

    Schreuder, D.A.

    1972-01-01

    Problems arising from the siting of lighting columns on the central reserve are reviewed, and remedial measures such as break-away lighting supports and installation of safety fences on the central reserve on both sides of the lighting columns are examined.

  6. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed memory highly parallel computer. The parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique was applied that maps histories to processors dynamically and assigns the control process to a fixed processor. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)
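
    Dynamic mapping of histories to processors, as described above, is essentially a work pool. A generic sketch of the idea in ordinary Python multiprocessing (not MCNP4 or AP1000 code):

      from multiprocessing import Pool
      import random

      def track_history(seed):
          """Stand-in for tracking one Monte Carlo particle history."""
          rng = random.Random(seed)
          return sum(rng.random() for _ in range(1000))   # dummy tally

      if __name__ == "__main__":
          with Pool(processes=8) as pool:
              # imap_unordered hands the next history to whichever worker is free,
              # i.e. histories are mapped to processors dynamically, not statically.
              tallies = list(pool.imap_unordered(track_history, range(10_000)))
          print(len(tallies), sum(tallies) / len(tallies))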

  7. Mush Column Magma Chambers

    Science.gov (United States)

    Marsh, B. D.

    2002-12-01

    Magma chambers are a necessary concept in understanding the chemical and physical evolution of magma. The concept may well be similar to a transfer function in circuit or time series analysis. It does what needs to be done to transform source magma into eruptible magma. In gravity and geodetic interpretations the causative body is (usually of necessity) geometrically simple and of limited vertical extent; it is clearly difficult to `see' through the uppermost manifestation of the concentrated magma. The presence of plutons in the upper crust has reinforced the view that magma chambers are large pots of magma, but as in the physical representation of a transfer function, actual magma chambers are clearly distinct from virtual magma chambers. Two key features to understanding magmatic systems are that they are vertically integrated over large distances (e.g., 30-100 km), and that all local magmatic processes are controlled by solidification fronts. Heat transfer considerations show that any viable volcanic system must be supported by a vertically extensive plumbing system. Field and geophysical studies point to a common theme of an interconnected stack of sill-like structures extending to great depth. This is a magmatic Mush Column. The large-scale (10s of km) structure resembles the vertical structure inferred at large volcanic centers like Hawaii (e.g., Ryan et al.), and the fine scale (10s to 100s of m) structure is exemplified by ophiolites and deeply eroded sill complexes like the Ferrar dolerites of the McMurdo Dry Valleys, Antarctica. The local length scales of the sill reservoirs and interconnecting conduits produce a rich spectrum of crystallization environments with distinct solidification time scales. Extensive horizontal and vertical mushy walls provide conditions conducive to specific processes of differentiation from solidification front instability to sidewall porous flow and wall rock slumping. The size, strength, and time series of eruptive behavior

  8. Column-Oriented Database Systems (Tutorial)

    NARCIS (Netherlands)

    D. Abadi; P.A. Boncz (Peter); S. Harizopoulos

    2009-01-01

    textabstractColumn-oriented database systems (column-stores) have attracted a lot of attention in the past few years. Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as

  9. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  10. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where an algorithm can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computers in which the whole sequence to be sorted can fit in the

  11. EX0909L3 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX0909L3: Mapping Field Trials - Hawaiian...

  12. EX1103L1 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1103: Exploration and Mapping, Galapagos...

  13. EX1502L1 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1502L1: Caribbean Exploration (Mapping)...

  14. EX1502L2 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1502L2: Caribbean Exploration (Mapping)...

  15. EX1503L1 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1503L1: Tropical Exploration (Mapping I)...

  16. EX0909L4 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX0909L4: Mapping Field Trials -...

  17. EX1402L3 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1402L3: Gulf of Mexico Mapping and ROV...

  18. EX1402L1 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1402L1: Gulf of Mexico Mapping and...

  19. EX1402L2 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1402L2: Gulf of Mexico Mapping and...

  20. EX0909L2 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX0909L2: Mapping Field Trials - Necker...

  1. On-column reduction of catecholamine quinones in stainless steel columns during liquid chromatography.

    Science.gov (United States)

    Xu, R; Huang, X; Kramer, K J; Hawley, M D

    1995-10-10

    The chromatographic behavior of quinones derived from the oxidation of dopamine and N-acetyldopamine has been studied using liquid chromatography (LC) with both a diode array detector and an electrochemical detector that has parallel dual working electrodes. When stainless steel columns are used, an anodic peak for the oxidation of the catecholamine is observed at the same retention time as a cathodic peak for the reduction of the catecholamine quinone. In addition, the anodic peak exhibits a tail that extends to a second anodic peak for the catecholamine. The latter peak occurs at the normal retention time of the catecholamine. The origin of this phenomenon has been studied and metallic iron in the stainless steel components of the LC system has been found to reduce the quinones to their corresponding catecholamines. The simultaneous appearance of a cathodic peak for the reduction of catecholamine quinone and an anodic peak for the oxidation of the corresponding catecholamine occurs when metallic iron in the exit frit reduces some of the quinones as the latter exits the column. This phenomenon is designated as the "concurrent anodic-cathodic response." It is also observed for quinones of 3,4-dihydroxybenzoic acid and probably occurs with o- or p-quinones of other dihydroxyphenyl compounds. The use of nonferrous components in LC systems is recommended to eliminate possible on-column reduction of quinones.

  2. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The author covers advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping of parallel algorithms, operating system functions, application libraries, and multidiscipline interactions are investigated to ensure high performance. At the end, the author assesses the potential of optical and neural technologies for developing future supercomputers.

  3. Water Column Sonar Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The collection and analysis of water column sonar data is a relatively new avenue of research into the marine environment. Primary uses include assessing biological...

  4. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  5. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  6. LIQUID-LIQUID EXTRACTION COLUMNS

    Science.gov (United States)

    Thornton, J.D.

    1957-12-31

    This patent relates to liquid-liquid extraction columns having a means for pulsing the liquid in the column to give it an oscillatory up-and-down movement. It consists of a packed column, an inlet pipe for the dispersed liquid phase and an outlet pipe for the continuous liquid phase located in direct communication with the liquid in the lower part of said column, an inlet pipe for the continuous liquid phase and an outlet pipe for the dispersed liquid phase located in direct communication with the liquid in the upper part of said column, a tube having one end communicating with liquid in the lower part of said column and having its upper end located above the level of said outlet pipe for the dispersed phase, and a piston and cylinder connected to the upper end of said tube for applying a pulsating pneumatic pressure to the surface of the liquid in said tube so that said surface rises and falls in said tube.

  7. Exhaust properties of centre-column-limited plasmas on MAST

    International Nuclear Information System (INIS)

    Maddison, G.P.; Akers, R.J.; Brickley, C.; Gryaznevich, M.P.; Lott, F.C.; Patel, A.; Sykes, A.; Turner, A.; Valovic, M.

    2007-01-01

    The lowest aspect ratio possible in a spherical tokamak is defined by limiting the plasma on its centre column, which might therefore maximize many physics benefits of this fusion approach. A key issue for such discharges is whether loads exhausted onto the small surface area of the column remain acceptable. A first series of centre-column-limited pulses has been examined on MAST using fast infra-red thermography to infer incident power densities as neutral-beam heating was scanned from 0 to 2.5 MW. Simple mapping shows that efflux distributions on the column armour are governed mostly by magnetic geometry, which moreover spreads them advantageously over almost the whole vertical length. Hence steady peak power densities between sawteeth remained low, comparable with the target strike-point value in a reference diverted plasma at lower power. Plasma purity and normalized thermal energy confinement through the centre-column-limited (CCL) series were also similar to properties of MAST diverted cases. A major bonus of CCL geometry is a propensity for exhaust to penetrate through its inner scrape-off layer connecting to the column into an expanding outer plume, which forms a 'natural divertor'. Effectiveness of this process may even increase with plasma heating, owing to rising Shafranov shift and/or toroidal rotation. A larger CCL device could potentially offer a simpler, more economic next-step design.

  8. Column-Oriented Database Systems (Tutorial)

    OpenAIRE

    Abadi, D.; Boncz, Peter; Harizopoulos, S.

    2009-01-01

    textabstractColumn-oriented database systems (column-stores) have attracted a lot of attention in the past few years. Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as opposed to traditional database systems that store entire records (rows) one after the other. Reading a subset of a table’s columns becomes faster, at the potential expense of excessive disk-head s...

  9. Parallel processing of structural integrity analysis codes

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.

    1996-01-01

    Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the Finite Element Method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising the high speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes plays an important role in exploiting the parallel processing system capabilities. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. Codes in the first category, such as those used for harmonic analysis and mechanistic fuel performance, do not require parallelisation of individual modules. Codes in the second category, such as conventional FEM codes, require parallelisation of individual modules. In this category, parallelisation of the equation solution module poses major difficulties. Different solution schemes such as the domain decomposition method (DDM), the parallel active column solver and the substructuring method are currently used on parallel processing systems. Two codes, FAIR and TABS, belonging to each of these categories, have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab

  10. Simulation Exploration through Immersive Parallel Planes: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny; Smith, Steve

    2016-03-01

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  11. Simulation Exploration through Immersive Parallel Planes

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Smith, Steve [Los Alamos Visualization Associates

    2017-05-25

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  12. Radiotracer Imaging of Sediment Columns

    Science.gov (United States)

    Moses, W. W.; O'Neil, J. P.; Boutchko, R.; Nico, P. S.; Druhan, J. L.; Vandehey, N. T.

    2010-12-01

    Nuclear medical PET and SPECT cameras routinely image the radioactivity concentration of gamma-ray-emitting isotopes (PET - 511 keV; SPECT - 75-300 keV). We have used nuclear medical imaging technology to study contaminant transport in sediment columns. Specifically, we use Tc-99m (T1/2 = 6 h, Eγ = 140 keV) and a SPECT camera to image the bacteria-mediated reduction of pertechnetate, [Tc(VII)O4]- + Fe(II) → Tc(IV)O2 + Fe(III). A 45 mL bolus of Tc-99m (32 mCi) labeled sodium pertechnetate was infused into a column (35 cm × 10 cm Ø) containing uranium-contaminated subsurface sediment from the Rifle, CO site. A flow rate of 1.25 mL/min of artificial groundwater was maintained in the column. Using a GE Millennium VG camera, we imaged the column for 12 hours, acquiring 44 frames. As the microbes in the sediment were inactive, we expected most of the iron to be Fe(III). The images were consistent with this hypothesis, and the Tc-99m pertechnetate acted like a conservative tracer. Virtually no binding of the Tc-99m was observed, and while the bolus of activity propagated fairly uniformly through the column, some inhomogeneity attributed to sediment packing was observed. We expect that after augmentation by acetate, the bacteria will metabolically reduce Fe(III) to Fe(II), leading to significant Tc-99m binding. Imaging sediment columns using nuclear medicine techniques has many attractive features: trace quantities of the radiolabeled compounds are used (micro- to nanomolar) and the half-lives of many of these tracers are short. [Figure: Tc-99m distribution in a column containing Rifle sediment at four times.]

  13. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
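
    As an illustration of the spatial-decomposition idea (a hedged sketch of ours, not the reviewed codes): atoms are binned into slabs along one axis and each thread computes forces only for atoms in its own slab.

      #include <cmath>
      #include <cstdio>
      #include <functional>
      #include <thread>
      #include <vector>

      struct Atom { double x, f; };   // 1-D toy: position, accumulated force

      // Spatial decomposition: each worker owns one slab [lo, hi) and writes
      // forces only for atoms in that slab. Positions are read-only, so the
      // threads do not race; a real MD code would exchange ghost atoms with
      // neighbouring slabs instead of scanning the whole array.
      void slabForces(std::vector<Atom>& atoms, double lo, double hi, double rc) {
          for (auto& a : atoms) {
              if (a.x < lo || a.x >= hi) continue;           // not our atom
              for (const auto& b : atoms) {
                  double r = b.x - a.x;
                  if (&a != &b && std::fabs(r) < rc)
                      a.f += (r > 0 ? -1.0 : 1.0) / (r * r); // toy repulsion
              }
          }
      }

      int main() {
          const double L = 10.0, rc = 1.5;
          const int P = 4;                                   // slabs == threads
          std::vector<Atom> atoms{{0.5, 0}, {1.2, 0}, {4.8, 0}, {5.1, 0}, {9.3, 0}};
          std::vector<std::thread> pool;
          for (int p = 0; p < P; ++p)
              pool.emplace_back(slabForces, std::ref(atoms),
                                p * L / P, (p + 1) * L / P, rc);
          for (auto& t : pool) t.join();
          for (const auto& a : atoms) std::printf("x=%.1f f=%.3f\n", a.x, a.f);
      }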

  14. Topographic shear and the relation of ocular dominance columns to orientation columns in primate and cat visual cortex.

    Science.gov (United States)

    Wood, Richard J.; Schwartz, Eric L.

    1999-03-01

    Shear has been known to exist for many years in the topographic structure of the primary visual cortex, but has received little attention in the modeling literature. Although the topographic map of V1 is largely conformal (i.e. zero shear), several groups have observed topographic shear in the region of the V1/V2 border. Furthermore, shear has also been revealed by anisotropy of cortical magnification factor within a single ocular dominance column. In the present paper, we make a functional hypothesis: the major axis of the topographic shear tensor provides cortical neurons with a preferred direction of orientation tuning. We demonstrate that isotropic neuronal summation of a sheared topographic map, in the presence of additional random shear, can provide the major features of cortical functional architecture with the ocular dominance column system acting as the principal source of the shear tensor. The major principal axis of the shear tensor determines the direction and its eigenvalues the relative strength of cortical orientation preference. This hypothesis is then shown to be qualitatively consistent with a variety of experimental results on cat and monkey orientation column properties obtained from optical recording and from other anatomical and physiological techniques. In addition, we show that a recent result of Das and Gilbert (Das, A., & Gilbert, C. D., 1997. Distortions of visuotopic map match orientation singularities in primary visual cortex. Nature, 387, 594-598) is consistent with an infinite set of parameterized solutions for the cortical map. We exploit this freedom to choose a particular instance of the Das-Gilbert solution set which is consistent with the full range of local spatial structure in V1. These results suggest that further relationships between ocular dominance columns, orientation columns, and local topography may be revealed by experimental testing.

  15. Performance evaluation of a rectifier column using gamma column scanning

    Directory of Open Access Journals (Sweden)

    Aquino Denis D.

    2017-12-01

    Full Text Available Rectifier columns are considered to be a critical component in petroleum refineries and petrochemical processing installations, as they affect the overall performance of these facilities. It is deemed necessary to monitor the operational conditions of such vessels to optimize processes and prevent anomalies that could compromise product quality and lead to large financial losses. A rectifier column was subjected to gamma scanning using a 10-mCi Co-60 source and a 2-inch-long detector in tandem. Several scans were performed to gather information on the operating conditions of the column under different sets of operating parameters. The scan profiles revealed unexpected decreases in the radiation intensity at vapour levels between trays 2 and 3, and between trays 4 and 5. Flooding also occurred during several scans, which could be attributed to parametric settings.

  16. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  17. Post column derivatisation analyses review. Is post-column derivatisation incompatible with modern HPLC columns?

    Science.gov (United States)

    Jones, Andrew; Pravadali-Cekic, Sercan; Dennis, Gary R; Shalliker, R Andrew

    2015-08-19

    Post-column derivatisation (PCD) coupled with high performance liquid chromatography or ultra-high performance liquid chromatography is a powerful tool in the modern analytical laboratory, or at least it should be. One drawback with PCD techniques is the extra post-column dead volume due to the reaction coils used to enable adequate reaction time and the mixing of reagents, which causes peak broadening and hence a loss of separation power. This loss of efficiency is counter-productive to modern HPLC technologies, such as UHPLC. We reviewed 87 PCD methods published from 2009 to 2014. We restricted our review to methods published between 2009 and 2014 because we were interested in the uptake of PCD methods in UHPLC environments. Our review focused on a range of system parameters including: column dimensions, stationary phase and particle size, as well as the geometry of the reaction loop. The most commonly used column in the methods investigated was not in fact a modern UHPLC version with sub-2-micron (or even sub-3-micron) particles, but rather workhorse columns, such as 250 × 4.6 mm i.d. columns packed with 5 μm C18 particles. Reaction loops were varied, even within the same type of analysis, but the majority of methods employed loop systems with volumes greater than 500 μL. A second part of this review briefly illustrated the effect of dead volume on column performance. The experiment evaluated the change in resolution and separation efficiency of some weakly to moderately retained solutes on a 250 × 4.6 mm i.d. column packed with 5 μm particles. The data showed that reaction loops beyond 100 μL resulted in a very serious loss of performance. Our study concluded that practitioners of PCD methods largely avoid the use of UHPLC-type column formats, so yes, very much, PCD is incompatible with the modern HPLC column. Copyright © 2015. Published by Elsevier B.V.
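
    The efficiency loss described here follows from the additivity of band-broadening variances; a brief worked relation (standard chromatography algebra, not the authors' numbers):

      \sigma^2_{v,\mathrm{total}} \;=\; \sigma^2_{v,\mathrm{column}} + \sigma^2_{v,\mathrm{loop}},
      \qquad
      \frac{N_{\mathrm{apparent}}}{N_{\mathrm{column}}} \;=\;
      \frac{\sigma^2_{v,\mathrm{column}}}{\sigma^2_{v,\mathrm{column}} + \sigma^2_{v,\mathrm{loop}}}.

    Since a sub-2-micron UHPLC column produces peaks whose volumetric standard deviation is on the order of microlitres, a reaction loop of several hundred microlitres makes the loop term dominant and the apparent plate count collapses, consistent with the serious losses the authors measured beyond a 100 μL loop even on a 250 × 4.6 mm column.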

  18. NOx retention in scrubbing column

    International Nuclear Information System (INIS)

    Nakazone, A.K.; Costa, R.E.; Lobao, A.S.T.; Matsuda, H.T.; Araujo, B.F.

    1988-07-01

    During the dissolution of UO2 in nitric acid, several different species of NOx are released. The off-gas can either be refluxed to the dissolver or be released and retained on special columns. The final composition of the solution is the main parameter to take into account. A process for nitrous gas retention using scrubber columns containing H2O or diluted HNO3 is presented. Chemiluminescence measurement was employed for NOx evaluation before and after scrubbing. Gas flow, temperature and residence time are the main parameters considered in this paper. For the dissolution of 100 g UO2 in 8 M nitric acid, a 6 NL/h O2 flow was the best condition for the NO/NO2 oxidation with maximum adsorption in the scrubber columns. (author)

  19. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI both target lower-level parallelism and are meant to be as language-agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher-level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable number of standardization proposals and technical specifications being developed. Those efforts, however, have so far failed to build a vision of how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuations...

  20. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  1. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of... in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.

  2. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.

  3. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
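
    For concreteness, two of the patterns named above (reduction and prefix scan) have direct expressions in standard C++17; a minimal sketch of ours, not from the presentation, assuming a toolchain with parallel algorithms enabled:

      #include <cstdio>
      #include <execution>
      #include <numeric>
      #include <vector>

      int main() {
          std::vector<double> x(1'000'000, 0.5);

          // Reduction pattern: combine all elements with an associative op.
          double sum = std::reduce(std::execution::par, x.begin(), x.end(), 0.0);

          // Prefix-scan pattern: running totals; parallelizable because
          // addition is associative.
          std::vector<double> scan(x.size());
          std::inclusive_scan(std::execution::par, x.begin(), x.end(), scan.begin());

          std::printf("sum=%.1f last=%.1f\n", sum, scan.back());
      }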

  4. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    The Application Portable Parallel Library (APPL) computer program is a subroutine-based message-passing software library intended to provide a consistent interface to the variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another: the user develops an application program once and then easily moves it from the parallel computer on which it was created to another parallel computer ("parallel computer" here also includes a heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  5. Chromatographic properties of PLOT multicapillary columns.

    Science.gov (United States)

    Nikolaeva, O A; Patrushev, Y V; Sidelnikov, V N

    2017-03-10

    Multicapillary columns (MCCs) for gas chromatography make it possible to perform high-speed analysis of mixtures of gaseous and volatile substances at a relatively large amount of loaded sample. The study was performed using PLOT MCCs for gas-solid chromatography (GSC) with different stationary phases (SP) based on alumina, silica and poly-(1-trimethylsilyl-1-propyne) (PTMSP) polymer, as well as the porous polymers divinylbenzene-styrene (DVB-St), divinylbenzene-vinylimidazole (DVB-VIm) and divinylbenzene-ethylene glycol dimethacrylate (DVB-EGD). These MCCs have an efficiency of 4000-10000 theoretical plates per meter (TP/m) and, at a column length of 25-30 cm, can separate within 10-20 s multicomponent mixtures of substances belonging to different classes of chemical compounds. The sample amount not overloading the column is 0.03-1 μg and depends on the features of the porous layer. Examples of separations on some of the studied columns are considered. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Automation of column-based radiochemical separations. A comparison of fluidic, robotic, and hybrid architectures

    Energy Technology Data Exchange (ETDEWEB)

    Grate, J.W.; O'Hara, M.J.; Farawila, A.F.; Ozanich, R.M.; Owsley, S.L. [Pacific Northwest National Laboratory, Richland, WA (United States)

    2011-07-01

    Two automated systems have been developed to perform column-based radiochemical separation procedures. These new systems are compared with past fluidic column separation architectures, with emphasis on using disposable components so that no sample contacts any surface that any other sample has contacted, and setting up samples and columns in parallel for subsequent automated processing. In the first new approach, a general purpose liquid handling robot has been modified and programmed to perform anion exchange separations using 2 mL bed columns in 6 mL plastic disposable column bodies. In the second new approach, a fluidic system has been developed to deliver clean reagents through disposable manual valves to six disposable columns, with a mechanized fraction collector that positions one of four rows of six vials below the columns. The samples are delivered to each column via a manual 3-port disposable valve from disposable syringes. This second approach, a hybrid of fluidic and mechanized components, is a simpler more efficient approach for performing anion exchange procedures for the recovery and purification of plutonium from samples. The automation architectures described can also be adapted to column-based extraction chromatography separations. (orig.)

  7. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  8. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90.

  9. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
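
    Of the four schemes, Parallel Cyclic Reduction is the most compact to illustrate; a hedged sketch for a tridiagonal system (the inner loop over i is what runs fully in parallel on a machine like the CM2; it is shown serially here, and assumes no pivot b[i] vanishes, as for diagonally dominant systems):

      #include <cstdio>
      #include <vector>

      // Parallel cyclic reduction for a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].
      // At stride s, equation i absorbs equations i-s and i+s; after ~log2(n)
      // sweeps every equation is decoupled and x[i] = d[i]/b[i].
      std::vector<double> pcr(std::vector<double> a, std::vector<double> b,
                              std::vector<double> c, std::vector<double> d) {
          const int n = static_cast<int>(b.size());
          for (int s = 1; s < n; s *= 2) {
              auto a2 = a, b2 = b, c2 = c, d2 = d;
              for (int i = 0; i < n; ++i) {        // each i is independent
                  double al = (i - s >= 0) ? -a[i] / b[i - s] : 0.0;
                  double ga = (i + s <  n) ? -c[i] / b[i + s] : 0.0;
                  b2[i] = b[i] + ((i - s >= 0) ? al * c[i - s] : 0.0)
                               + ((i + s <  n) ? ga * a[i + s] : 0.0);
                  d2[i] = d[i] + ((i - s >= 0) ? al * d[i - s] : 0.0)
                               + ((i + s <  n) ? ga * d[i + s] : 0.0);
                  a2[i] = (i - s >= 0) ? al * a[i - s] : 0.0;
                  c2[i] = (i + s <  n) ? ga * c[i + s] : 0.0;
              }
              a = a2; b = b2; c = c2; d = d2;
          }
          std::vector<double> x(n);
          for (int i = 0; i < n; ++i) x[i] = d[i] / b[i];
          return x;
      }

      int main() {  // -x[i-1] + 2x[i] - x[i+1] = 1: the 1-D Poisson stencil
          int n = 8;
          std::vector<double> a(n, -1), b(n, 2), c(n, -1), d(n, 1);
          a[0] = 0; c[n - 1] = 0;
          for (double xi : pcr(a, b, c, d)) std::printf("%.3f ", xi);
          std::printf("\n");
      }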

  10. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  11. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
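
    The flavor of a blocked, scattered-decomposition sieve is easy to convey in shared memory (a sketch of ours, not the paper's hypercube code): sieve up to sqrt(N) serially, then let each worker strike out multiples within its own disjoint block.

      #include <algorithm>
      #include <cmath>
      #include <cstdio>
      #include <thread>
      #include <vector>

      int main() {
          const long N = 1'000'000;
          const int P = 4;
          std::vector<char> comp(N + 1, 0);          // comp[i] != 0 => composite
          long r = static_cast<long>(std::sqrt(static_cast<double>(N)));
          for (long p = 2; p <= r; ++p)              // serial prefix sieve to sqrt(N)
              if (!comp[p])
                  for (long m = p * p; m <= r; m += p) comp[m] = 1;

          // No data races: the prefix [2, r] is finished before threads start,
          // and each thread writes only inside its own disjoint block.
          auto worker = [&](long lo, long hi) {
              for (long p = 2; p <= r; ++p)
                  if (!comp[p]) {
                      long start = std::max(p * p, ((lo + p - 1) / p) * p);
                      for (long m = start; m <= hi; m += p) comp[m] = 1;
                  }
          };
          std::vector<std::thread> pool;
          long chunk = (N - r) / P + 1;
          for (int t = 0; t < P; ++t) {
              long lo = r + 1 + t * chunk;
              long hi = std::min(N, lo + chunk - 1);
              if (lo <= N) pool.emplace_back(worker, lo, hi);
          }
          for (auto& t : pool) t.join();

          long count = 0;
          for (long i = 2; i <= N; ++i) count += !comp[i];
          std::printf("primes <= %ld: %ld\n", N, count);   // expect 78498
      }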

  12. Mapping out Map Libraries

    Directory of Open Access Journals (Sweden)

    Ferjan Ormeling

    2008-09-01

    Full Text Available Discussing the requirements for map data quality, map users and their library/archives environment, the paper focuses on the metadata the user would need for a correct and efficient interpretation of the map data. For such a correct interpretation, knowledge of the rules and guidelines according to which the topographers/cartographers work (such as the kinds of data categories to be collected), and the degree to which these rules and guidelines were indeed followed, are essential. This is not only valid for the old maps stored in our libraries and archives, but perhaps even more so for the new digital files, as the format in which we now have to access our geospatial data. As this would be too much to ask from map librarians/curators, some sort of web 2.0 environment is sought where comments about data quality, completeness and up-to-dateness from knowledgeable map users regarding the specific maps or map series studied can be collected and tagged to scanned versions of these maps on the web. In order not to be subject to the same disadvantages as Wikipedia, where the 'communis opinio', rather than scholarship, seems to be decisive, some checking by map curators of this tagged map-use information would still be needed. Cooperation between map curators and the International Cartographic Association (ICA) Map and Spatial Data Use Commission to this end is suggested.

  13. Modeling of column apparatus processes

    CERN Document Server

    Boyadjiev, Christo; Boyadjiev, Boyan; Popova-Krumova, Petya

    2016-01-01

    This book presents a new approach for the modeling of chemical and interphase mass transfer processes in industrial column apparatuses, using convection-diffusion and average-concentration models. The convection-diffusion type models are used for a qualitative analysis of the processes and to assess the main, small and slight physical effects, and then reject the slight effects. As a result, the process mechanism can be identified. It also introduces average concentration models for quantitative analysis, which use the average values of the velocity and concentration over the cross-sectional area of the column. The new models are used to analyze different processes (simple and complex chemical reactions, absorption, adsorption and catalytic reactions), and make it possible to model the processes of gas purification with sulfur dioxide, which form the basis of several patents.

  14. The dorsal tectal longitudinal column (TLCd): a second longitudinal column in the paramedian region of the midbrain tectum.

    Science.gov (United States)

    Aparicio, M-Auxiliadora; Saldaña, Enrique

    2014-03-01

    The tectal longitudinal column (TLC) is a longitudinally oriented, long and narrow nucleus that spans the paramedian region of the midbrain tectum of a large variety of mammals (Saldaña et al. in J Neurosci 27:13108-13116, 2007). Recent analysis of the organization of this region revealed another novel nucleus located immediately dorsal, and parallel, to the TLC. Because the name "tectal longitudinal column" also seems appropriate for this novel nucleus, we suggest the TLC described in 2007 be renamed the "ventral tectal longitudinal column (TLCv)", and the newly discovered nucleus termed the "dorsal tectal longitudinal column (TLCd)". This work represents the first characterization of the rat TLCd. A constellation of anatomical techniques was used to demonstrate that the TLCd differs from its surrounding structures (TLCv and superior colliculus) cytoarchitecturally, myeloarchitecturally, neurochemically and hodologically. The distinct expression of vesicular amino acid transporters suggests that TLCd neurons are GABAergic. The TLCd receives major projections from various areas of the cerebral cortex (secondary visual mediomedial area, and granular and dysgranular retrosplenial cortices) and from the medial pretectal nucleus. It densely innervates the ipsilateral lateral posterior and laterodorsal nuclei of the thalamus. Thus, the TLCd is connected with vision-related neural centers. The TLCd may be unique as it constitutes the only known nucleus made of GABAergic neurons dedicated to providing massive inhibition to higher order thalamic nuclei of a specific sensory modality.

  15. Studies of column supported towers

    International Nuclear Information System (INIS)

    Chauvel, D.; Costaz, J.-L.

    1991-01-01

    As a result of a research and development programme into the civil engineering of cooling towers launched in 1978 by Electricite de France, very high cooling towers were built at Golfech and Chooz, in France, using column supports. This paper discusses the evolution of this new type of support from classical diagonal supports, presents some of the results of design calculations and survey measurements taken during construction of the shell and analyses the behaviour of the structure. (author)

  16. SPEEDUP™ ion exchange column model

    International Nuclear Information System (INIS)

    Hang, T.

    2000-01-01

    A transient model to describe the process of loading a solute onto the granular fixed bed in an ion exchange (IX) column has been developed using the SpeedUp™ software package. SpeedUp offers the advantage of smooth integration into other existing SpeedUp flowsheet models. The mathematical algorithm of a porous particle diffusion model was adopted to account for convection, axial dispersion, film mass transfer, and pore diffusion. The method of orthogonal collocation on finite elements was employed to solve the governing transport equations. The model allows the use of a non-linear Langmuir isotherm based on an effective binary ionic exchange process. The SpeedUp column model was tested by comparing it to the analytical solutions of three transport problems from the ion exchange literature. In addition, a sample calculation of a train of three crystalline silicotitanate (CST) IX columns in series was made using both the SpeedUp model and Purdue University's VERSE-LC code. All test cases showed excellent agreement between the SpeedUp model results and the test data. The model can be readily used for SuperLig™ ion exchange resins, once the experimental data are complete.
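
    The non-linear isotherm at the model's core is compact enough to state directly; a minimal sketch with illustrative constants (not SRS or VERSE-LC data):

      #include <cstdio>

      // Langmuir isotherm: equilibrium resin loading q as a function of the
      // liquid-phase concentration C, with capacity qmax and affinity K.
      // In a binary ion-exchange reading, K lumps the selectivity of the
      // loading ion over the ion it displaces.
      double langmuir(double C, double qmax, double K) {
          return qmax * K * C / (1.0 + K * C);
      }

      int main() {
          const double qmax = 0.6, K = 2.5e3;        // illustrative values only
          for (double C : {1e-5, 1e-4, 1e-3})
              std::printf("C=%.0e  q=%.4f\n", C, langmuir(C, qmax, K));
      }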

  17. Parallel simulated annealing algorithms for cell placement on hypercube multiprocessors

    Science.gov (United States)

    Banerjee, Prithviraj; Jones, Mark Howard; Sargent, Jeff S.

    1990-01-01

    Two parallel algorithms for standard cell placement using simulated annealing are developed to run on distributed-memory message-passing hypercube multiprocessors. The cells can be mapped in a two-dimensional area of a chip onto processors in an n-dimensional hypercube in two ways, such that both small and large cell exchange and displacement moves can be applied. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support the parallel cost evaluation. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. A dynamic parallel annealing schedule estimates the errors due to interacting parallel moves and adapts the rate of synchronization automatically. Two novel approaches in controlling error in parallel algorithms are described: heuristic cell coloring and adaptive sequence control.
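
    The kernel being parallelized here is the classic Metropolis accept/reject step under a cooling schedule; a minimal serial sketch (the paper's actual contributions -- parallel moves, tree broadcasting, adaptive schedules -- are not reproduced):

      #include <cmath>
      #include <cstdio>
      #include <random>

      // Minimal simulated-annealing core: propose a move, accept it if it
      // improves the cost or, with Boltzmann probability, even if it does not.
      int main() {
          std::mt19937 rng(42);
          std::uniform_real_distribution<double> u(0.0, 1.0);
          std::normal_distribution<double> step(0.0, 1.0);

          double x = 5.0;                              // stand-in "placement"
          auto cost = [](double v) { return v * v; };  // stand-in wirelength
          for (double T = 10.0; T > 1e-4; T *= 0.95) { // geometric cooling
              for (int m = 0; m < 100; ++m) {
                  double xn = x + step(rng);
                  double dE = cost(xn) - cost(x);
                  if (dE <= 0.0 || u(rng) < std::exp(-dE / T)) x = xn;  // Metropolis
              }
          }
          std::printf("final x=%.4f cost=%.6f\n", x, cost(x));
      }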

  18. Analyzing thematic maps and mapping for accuracy

    Science.gov (United States)

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
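
    A small sketch of the bookkeeping just described, with hypothetical counts (rows = interpretation, columns = verification; the diagonal holds the correct classifications):

      #include <cstdio>

      int main() {
          // 3-category classification error matrix (hypothetical counts).
          const int k = 3;
          const double m[3][3] = {{50,  3,  2},
                                  { 4, 45,  6},
                                  { 1,  2, 37}};
          double total = 0, diag = 0, rowSum[3] = {}, colSum[3] = {};
          for (int i = 0; i < k; ++i)
              for (int j = 0; j < k; ++j) {
                  total += m[i][j];
                  rowSum[i] += m[i][j];   // interpretation totals
                  colSum[j] += m[i][j];   // verification totals
                  if (i == j) diag += m[i][j];
              }
          std::printf("overall accuracy = %.3f\n", diag / total);
          for (int i = 0; i < k; ++i)
              std::printf("class %d: user's accuracy (commission) = %.3f, "
                          "producer's accuracy (omission) = %.3f\n",
                          i, m[i][i] / rowSum[i], m[i][i] / colSum[i]);
      }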

  19. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)

  20. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  1. Two generalizations of column-convex polygons

    International Nuclear Information System (INIS)

    Feretic, Svjetlan; Guttmann, Anthony J

    2009-01-01

    Column-convex polygons were first counted by area several decades ago, and the result was found to be a simple, rational generating function. In this work we generalize that result. Let a p-column polyomino be a polyomino whose columns can have 1, 2, ..., p connected components. Then column-convex polygons are equivalent to 1-convex polyominoes. The area generating function of even the simplest generalization, namely 2-column polyominoes, is unlikely to be solvable. We therefore define two classes of polyominoes which interpolate between column-convex polygons and 2-column polyominoes. We derive the area generating functions of those two classes, using extensions of existing algorithms. The growth constants of both classes are greater than the growth constant of column-convex polyominoes. Rather tight lower bounds on the growth constants complement a comprehensive asymptotic analysis.
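
    For context, the 'simple, rational generating function' referred to is, as we recall Temperley's classical result (stated here as background; the abstract itself does not quote it),

      A(q) \;=\; \sum_{n \ge 1} a_n q^n \;=\; \frac{q\,(1-q)^3}{1 - 5q + 7q^2 - 4q^3}
             \;=\; q + 2q^2 + 6q^3 + 19q^4 + \cdots,

    where a_n is the number of column-convex polygons of area n; the growth constant, approximately 3.8631, is the reciprocal of the smallest positive root of the denominator.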

  2. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
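
    The SENSE unaliasing step reduces to one small linear solve per aliased pixel; a hedged sketch for acceleration factor R = 2 with two coils (real data are complex-valued and typically regularized; all values here are illustrative):

      #include <cstdio>

      // SENSE unfolding, R = 2, two coils: each aliased pixel value y_c is
      // y_c = S[c][0]*x0 + S[c][1]*x1, where x0, x1 are the true pixel values
      // at the two locations folded onto each other. With two coils this is
      // a 2x2 solve (Cramer's rule); more coils give a least-squares problem.
      int main() {
          const double S[2][2] = {{0.9, 0.2},   // coil 0 sensitivity at r0, r1
                                  {0.3, 0.8}};  // coil 1 sensitivity at r0, r1
          const double y[2] = {1.06, 0.94};     // aliased values seen by coils
          double det = S[0][0] * S[1][1] - S[0][1] * S[1][0];
          double x0 = (y[0] * S[1][1] - S[0][1] * y[1]) / det;
          double x1 = (S[0][0] * y[1] - y[0] * S[1][0]) / det;
          std::printf("unfolded pixels: x0=%.3f x1=%.3f\n", x0, x1);
      }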

  3. Relationship between surface, free tropospheric and total column ozone in 2 contrasting areas in South-Africa

    CSIR Research Space (South Africa)

    Combrink, J

    1995-04-01

    Full Text Available Measurements of surface ozone in two contrasting areas of South Africa are compared with free tropospheric and Total Ozone Mapping Spectrometer (TOMS) total column ozone data. Cape Point is representative of a background monitoring station which...

  4. Hemifield columns co-opt ocular dominance column structure in human achiasma.

    Science.gov (United States)

    Olman, Cheryl A; Bao, Pinglei; Engel, Stephen A; Grant, Andrea N; Purington, Chris; Qiu, Cheng; Schallmo, Michael-Paul; Tjan, Bosco S

    2018-01-01

    In the absence of an optic chiasm, visual input to the right eye is represented in primary visual cortex (V1) in the right hemisphere, while visual input to the left eye activates V1 in the left hemisphere. Retinotopic mapping in V1 reveals that in each hemisphere the left and right visual hemifield representations are overlaid (Hoffmann et al., 2012). To explain how overlapping hemifield representations in V1 do not impair vision, we tested the hypothesis that visual projections from nasal and temporal retina create interdigitated left and right visual hemifield representations in V1, similar to the ocular dominance columns observed in neurotypical subjects (Victor et al., 2000). We used high-resolution fMRI at 7T to measure the spatial distribution of responses to left- and right-hemifield stimulation in one achiasmic subject. T2-weighted 2D spin-echo images were acquired at 0.8 mm isotropic resolution. The left eye was occluded. To the right eye, a presentation of flickering checkerboards alternated between the left and right visual fields in a blocked stimulus design. The participant performed a demanding orientation-discrimination task at fixation. A general linear model was used to estimate the preference of voxels in V1 for left- and right-hemifield stimulation. The spatial distribution of voxels with a significant preference for each hemifield showed interdigitated clusters which densely packed V1 in the right hemisphere. The spatial distribution of hemifield-preference voxels in the achiasmic subject was stable between two days of testing and comparable in scale to that of human ocular dominance columns. These results are the first in vivo evidence showing that visual hemifield representations interdigitate in achiasmic V1 following a similar developmental course to that of ocular dominance columns in V1 with an intact optic chiasm. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Okeanos Explorer (EX1402L3): Gulf of Mexico Mapping and ROV Exploration

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam mapping, single beam, water column sonar, sub-bottom profile, water column profile, ship sensor, ROV sensor, video and image data will all be collected...

  6. 29 CFR 1926.755 - Column anchorage.

    Science.gov (United States)

    2010-07-01

    29 CFR Part 1926 (Labor), Safety and Health Regulations for Construction, Steel Erection, § 1926.755 Column anchorage. (a) General requirements for erection stability. (1) All columns shall be anchored by a minimum of 4 anchor...

  7. Adsorption columns for use in radioimmunoassays

    International Nuclear Information System (INIS)

    1976-01-01

    Adsorption columns are provided which can be utilized in radioimmunoassay systems such as those involving the separation of antibody-antigen complexes from free antigens. The preparation of the columns includes the treatment of retaining substrate material to render it hydrophilic, preparation and degassing of the separation material, and loading of the column.

  8. Thermal process of an air column

    International Nuclear Information System (INIS)

    Lee, F.T.

    1994-01-01

    The thermal process of a hot air column is discussed based on the laws of thermodynamics. The kinetic motion of the air mass in the column can be used to drive a power generator. Alternatively, the column can also function as an exhaust/cooler.

  9. Water Column Correction for Coral Reef Studies by Remote Sensing

    Science.gov (United States)

    Zoffoli, Maria Laura; Frouin, Robert; Kampel, Milton

    2014-01-01

    Human activity and natural climate trends constitute a major threat to coral reefs worldwide. Models predict a significant reduction in reef spatial extension together with a decline in biodiversity in the relatively near future. In this context, monitoring programs to detect changes in reef ecosystems are essential. In recent years, coral reef mapping using remote sensing data has benefited from instruments with better resolution and computational advances in storage and processing capabilities. However, the water column represents an additional complexity when extracting information from submerged substrates by remote sensing that demands a correction of its effect. In this article, the basic concepts of bottom substrate remote sensing and water column interference are presented. A compendium of methodologies developed to reduce water column effects in coral ecosystems studied by remote sensing that include their salient features, advantages and drawbacks is provided. Finally, algorithms to retrieve the bottom reflectance are applied to simulated data and actual remote sensing imagery and their performance is compared. The available methods are not able to completely eliminate the water column effect, but they can minimize its influence. Choosing the best method depends on the marine environment, available input data and desired outcome or scientific application. PMID:25215941

  10. Water Column Correction for Coral Reef Studies by Remote Sensing

    Directory of Open Access Journals (Sweden)

    Maria Laura Zoffoli

    2014-09-01

    Full Text Available Human activity and natural climate trends constitute a major threat to coral reefs worldwide. Models predict a significant reduction in reef spatial extension together with a decline in biodiversity in the relatively near future. In this context, monitoring programs to detect changes in reef ecosystems are essential. In recent years, coral reef mapping using remote sensing data has benefited from instruments with better resolution and computational advances in storage and processing capabilities. However, the water column represents an additional complexity when extracting information from submerged substrates by remote sensing that demands a correction of its effect. In this article, the basic concepts of bottom substrate remote sensing and water column interference are presented. A compendium of methodologies developed to reduce water column effects in coral ecosystems studied by remote sensing that include their salient features, advantages and drawbacks is provided. Finally, algorithms to retrieve the bottom reflectance are applied to simulated data and actual remote sensing imagery and their performance is compared. The available methods are not able to completely eliminate the water column effect, but they can minimize its influence. Choosing the best method depends on the marine environment, available input data and desired outcome or scientific application.

  11. Water column correction for coral reef studies by remote sensing.

    Science.gov (United States)

    Zoffoli, Maria Laura; Frouin, Robert; Kampel, Milton

    2014-09-11

    Human activity and natural climate trends constitute a major threat to coral reefs worldwide. Models predict a significant reduction in reef spatial extension together with a decline in biodiversity in the relatively near future. In this context, monitoring programs to detect changes in reef ecosystems are essential. In recent years, coral reef mapping using remote sensing data has benefited from instruments with better resolution and computational advances in storage and processing capabilities. However, the water column represents an additional complexity when extracting information from submerged substrates by remote sensing that demands a correction of its effect. In this article, the basic concepts of bottom substrate remote sensing and water column interference are presented. A compendium of methodologies developed to reduce water column effects in coral ecosystems studied by remote sensing that include their salient features, advantages and drawbacks is provided. Finally, algorithms to retrieve the bottom reflectance are applied to simulated data and actual remote sensing imagery and their performance is compared. The available methods are not able to completely eliminate the water column effect, but they can minimize its influence. Choosing the best method depends on the marine environment, available input data and desired outcome or scientific application.
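
    One classical member of the compendium of correction methods reviewed in this literature is Lyzenga's depth-invariant index; a hedged two-band sketch (the deep-water radiances and attenuation ratio below are illustrative assumptions):

      #include <cmath>
      #include <cstdio>

      // Lyzenga-style depth-invariant bottom index for a band pair (i, j):
      // X_b = ln(L_b - L_deep_b) linearizes the exponential attenuation with
      // depth, and X_i - (k_i/k_j) * X_j cancels the depth term, leaving a
      // quantity that depends (ideally) on the bottom type only.
      double depthInvariantIndex(double Li, double Lj,
                                 double LdeepI, double LdeepJ,
                                 double kRatio) {            // k_i / k_j
          return std::log(Li - LdeepI) - kRatio * std::log(Lj - LdeepJ);
      }

      int main() {
          // Illustrative values only: radiances in arbitrary sensor units.
          double idx = depthInvariantIndex(0.120, 0.080, 0.050, 0.030, 0.7);
          std::printf("depth-invariant index = %.4f\n", idx);
      }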

  12. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  13. High-field fMRI unveils orientation columns in humans.

    Science.gov (United States)

    Yacoub, Essa; Harel, Noam; Ugurbil, Kâmil

    2008-07-29

    Functional (f)MRI has revolutionized the field of human brain research. fMRI can noninvasively map the spatial architecture of brain function via localized increases in blood flow after sensory or cognitive stimulation. Recent advances in fMRI have led to enhanced sensitivity and spatial accuracy of the measured signals, indicating the possibility of detecting small neuronal ensembles that constitute fundamental computational units in the brain, such as cortical columns. Orientation columns in visual cortex are perhaps the best known example of such a functional organization in the brain. They cannot be discerned via anatomical characteristics, as with ocular dominance columns. Instead, the elucidation of their organization requires functional imaging methods. However, because of insufficient sensitivity, spatial accuracy, and image resolution of the available mapping techniques, thus far, they have not been detected in humans. Here, we demonstrate, by using high-field (7-T) fMRI, the existence and spatial features of orientation-selective columns in humans. Striking similarities were found with the known spatial features of these columns in monkeys. In addition, we found that a larger number of orientation columns are devoted to processing orientations around 90 degrees (vertical stimuli with horizontal motion), whereas relatively similar fMRI signal changes were observed across any given active column. With the current proliferation of high-field MRI systems and constant evolution of fMRI techniques, this study heralds the exciting prospect of exploring unmapped and/or unknown columnar level functional organizations in the human brain.

  14. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10⁴ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.

  15. TRACING H2 COLUMN DENSITY WITH ATOMIC CARBON (C I) AND CO ISOTOPOLOGS

    International Nuclear Information System (INIS)

    Lo, N.; Bronfman, L.; Cunningham, M. R.; Jones, P. A.; Lowe, V.; Cortes, P. C.; Simon, R.; Fissel, L.; Novak, G.

    2014-01-01

    We present the first results of neutral carbon ([C I] ³P₁-³P₀ at 492 GHz) and carbon monoxide (¹³CO, J = 1-0) mapping in the Vela Molecular Ridge cloud C (VMR-C) and the G333 giant molecular cloud complexes with the NANTEN2 and Mopra telescopes. For the four regions mapped in this work, we find that [C I] has very similar spectral emission profiles to ¹³CO, with comparable line widths. We find that [C I] has an opacity of 0.1-1.3 across the mapped region, while the [C I]/¹³CO peak brightness temperature ratio is between 0.2 and 0.8. The [C I] column density is an order of magnitude lower than that of ¹³CO. The H₂ column density derived from [C I] is comparable to values obtained from ¹²CO. Our maps show that C I is preferentially detected in gas with low temperatures (below 20 K), which possibly explains the comparable H₂ column density calculated from both tracers (both C I and ¹²CO underestimate the column density), as a significant amount of the C I in the warmer gas is likely in the higher energy state transition ([C I] ³P₂-³P₁ at 810 GHz); thus it is likely that observations of both of the above [C I] transitions are needed in order to recover the total H₂ column density.

  16. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  17. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop Parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most of this book.

  18. Derringer desirability and kinetic plot LC-column comparison approach for MS-compatible lipopeptide analysis.

    Science.gov (United States)

    D'Hondt, Matthias; Verbeke, Frederick; Stalmans, Sofie; Gevaert, Bert; Wynendaele, Evelien; De Spiegeleer, Bart

    2014-06-01

    Lipopeptides are currently re-emerging as an interesting subgroup in the peptide research field, having historical applications as antibacterial and antifungal agents and new potential applications as antiviral, antitumor, immune-modulating and cell-penetrating compounds. However, due to their specific structure, chromatographic analysis often requires special buffer systems or the use of trifluoroacetic acid, limiting mass spectrometry detection. Therefore, we used a traditional aqueous/acetonitrile-based gradient system, containing 0.1% (m/v) formic acid, to separate four pharmaceutically relevant lipopeptides (polymyxin B1, caspofungin, daptomycin and gramicidin A1), which were selected based upon hierarchical cluster analysis (HCA) and principal component analysis (PCA). In total, the performance of four different C18 columns, including one UPLC column, was evaluated using two parallel approaches. First, a Derringer desirability function was used, whereby six single and multiple chromatographic response values were rescaled into one overall D-value per column. Using this approach, the YMC Pack Pro C18 column was ranked as the best column for general MS-compatible lipopeptide separation. Secondly, the kinetic plot approach was used to compare the different columns over different flow rate ranges. As the optimal kinetic column performance is obtained at its maximal pressure, the length elongation factor λ (P_max/P_exp) was used to transform the obtained experimental data (retention times and peak capacities) and construct kinetic performance limit (KPL) curves, allowing a direct, visual and unbiased comparison of the selected columns, whereby the YMC Triart C18 UPLC and ACE C18 columns performed best. Finally, differences in column performance and the (dis)advantages of both approaches are discussed.
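
    The Derringer step is essentially a geometric mean of rescaled responses; a minimal sketch with hypothetical responses and bounds (the paper's six chromatographic responses and any weighting are not reproduced here):

      #include <cmath>
      #include <cstdio>
      #include <vector>

      // Larger-is-better Derringer desirability: rescale a response linearly
      // to [0, 1] between a lower (unacceptable) and upper (ideal) bound,
      // then combine the individual d_i as a geometric mean into one D-value.
      double desirability(double y, double lo, double hi) {
          if (y <= lo) return 0.0;
          if (y >= hi) return 1.0;
          return (y - lo) / (hi - lo);
      }

      int main() {
          // Hypothetical column scores: plate count, symmetry score, recovery.
          std::vector<double> d = {desirability(85000, 20000, 120000),
                                   desirability(0.92, 0.5, 1.0),
                                   desirability(0.75, 0.2, 1.0)};
          double D = 1.0;
          for (double di : d) D *= di;              // one unacceptable response
          D = std::pow(D, 1.0 / d.size());          // (d_i = 0) forces D = 0
          std::printf("overall D-value = %.3f\n", D);
      }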

  19. Scalability of Parallel Scientific Applications on the Cloud

    Directory of Open Access Journals (Sweden)

    Satish Narayana Srirama

    2011-01-01

    Full Text Available Cloud computing, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. To study the effects of moving parallel scientific applications onto the cloud, we deployed several benchmark applications, like matrix–vector operations and the NAS parallel benchmarks, as well as DOUG (Domain decomposition On Unstructured Grids), on the cloud. DOUG is an open source software package for the parallel iterative solution of very large sparse systems of linear equations. The detailed analysis of DOUG on the cloud showed that parallel applications benefit a lot and scale reasonably on the cloud. We could also observe the limitations of the cloud and compare it with a cluster in terms of performance. However, for efficiently running scientific applications on the cloud infrastructure, the applications must be reduced to frameworks that can successfully exploit the cloud resources, like the MapReduce framework. Several iterative and embarrassingly parallel algorithms are reduced to the MapReduce model and their performance is measured and analyzed. The analysis showed that Hadoop MapReduce has significant problems with iterative methods, while it suits embarrassingly parallel algorithms well. Scientific computing often uses iterative methods to solve large problems. Thus, for scientific computing on the cloud, this paper raises the necessity for better frameworks or optimizations for MapReduce.
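
    As an illustrative contrast (not the paper's code), an embarrassingly parallel Monte Carlo estimate of pi can be phrased as a single map plus reduce, the shape of computation the analysis found well suited to Hadoop MapReduce; an iterative solver would need one such round per iteration, which is where Hadoop's per-job overhead dominates. The task sizes below are arbitrary.

    ```python
    # Map: count random points inside the unit quarter-circle; Reduce: sum the counts.
    import random
    from functools import reduce
    from multiprocessing import Pool

    def count_hits(n):
        return sum(1 for _ in range(n)
                   if random.random() ** 2 + random.random() ** 2 <= 1.0)

    if __name__ == "__main__":
        tasks = [250_000] * 8                     # 8 independent map tasks
        with Pool() as pool:
            hits = pool.map(count_hits, tasks)    # map phase, in parallel
        total = reduce(lambda a, b: a + b, hits)  # reduce phase
        print(4.0 * total / sum(tasks))           # approaches pi as tasks grow
    ```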

  20. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  1. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
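
    A minimal PyROOT sketch of the implicit multi-threading interface mentioned above; EnableImplicitMT and RDataFrame are the names in recent ROOT releases, while the tree name, file name and branch used here are placeholders, so treat this as illustrative rather than the paper's own code.

    ```python
    # Let ROOT parallelize its internal event loop across threads.
    import ROOT

    ROOT.EnableImplicitMT()  # enable ROOT's implicit multi-threading

    # A declarative analysis; booked actions run in one (parallel) event loop.
    df = ROOT.RDataFrame("Events", "data.root")   # placeholder tree/file names
    h = df.Filter("pt > 20").Histo1D("pt")        # lazily booked
    print(h.GetEntries())                         # triggers the event loop
    ```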

  2. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were...

  3. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  4. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  5. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
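
    As a reminder of what is being parallelized, here is a serial NumPy sketch of k-means++ seeding; the distance update on the last line of the loop is the data-parallel kernel that the GPU, OpenMP and XMT ports distribute. This is not the released C++ code.

    ```python
    import numpy as np

    def kmeans_pp_seeds(X, k, rng=np.random.default_rng(0)):
        n = len(X)
        seeds = [X[rng.integers(n)]]              # first seed: uniform at random
        d2 = np.sum((X - seeds[0]) ** 2, axis=1)  # squared distance to nearest seed
        for _ in range(k - 1):
            seeds.append(X[rng.choice(n, p=d2 / d2.sum())])  # D^2-weighted draw
            # Data-parallel step: refresh every point's nearest-seed distance.
            d2 = np.minimum(d2, np.sum((X - seeds[-1]) ** 2, axis=1))
        return np.array(seeds)

    X = np.random.default_rng(1).normal(size=(1000, 2))
    print(kmeans_pp_seeds(X, 4))
    ```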

  6. Parallel plate detectors

    International Nuclear Information System (INIS)

    Gardes, D.; Volkov, P.

    1981-01-01

    A 5×3 cm² (timing only) and a 15×5 cm² (timing and position) parallel plate avalanche counter (PPAC) are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters [fr]

  7. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.
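
    To give a flavour of direct simulation of light transport, here is an extremely reduced, hypothetical example: photons leave a point light and deposit where they strike a floor, and the histogram converges to the analytic falloff, mirroring how Photon converges to the global illumination solution. It is unrelated to the actual Photon implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    # Point light at (0, 1); sample emission angles over the downward half-space.
    phi = rng.uniform(-np.pi / 2, np.pi / 2, n)   # angle from straight down
    x_hit = np.tan(phi)                           # intersection with the floor y = 0
    flux, _ = np.histogram(x_hit, bins=81, range=(-4, 4))
    # Bin counts converge to the analytic 1/(1 + x^2) falloff as n grows:
    print(flux[40] / flux[0])                     # centre-to-edge ratio, roughly 17
    ```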

  8. Evaluation of Packed Distillation Columns I - Atmospheric Pressure

    National Research Council Canada - National Science Library

    Reynolds, Thaine

    1951-01-01

    .... Four column-packing combinations of the glass columns and four column-packing combinations of the steel columns were investigated at atmospheric pressure using a test mixture of methylcyclohexane...

  9. Oscillating water column structural model

    Energy Technology Data Exchange (ETDEWEB)

    Copeland, Guild [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bull, Diana L [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jepsen, Richard Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gordon, Margaret Ellen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    An oscillating water column (OWC) wave energy converter is a structure with an opening to the ocean below the free surface, i.e. a structure with a moonpool. Two structural models for a non-axisymmetric terminator design OWC, the Backward Bent Duct Buoy (BBDB) are discussed in this report. The results of this structural model design study are intended to inform experiments and modeling underway in support of the U.S. Department of Energy (DOE) initiated Reference Model Project (RMP). A detailed design developed by Re Vision Consulting used stiffeners and girders to stabilize the structure against the hydrostatic loads experienced by a BBDB device. Additional support plates were added to this structure to account for loads arising from the mooring line attachment points. A simplified structure was designed in a modular fashion. This simplified design allows easy alterations to the buoyancy chambers and uncomplicated analysis of resulting changes in buoyancy.

  10. Picobubble enhanced column flotation of fine coal

    Energy Technology Data Exchange (ETDEWEB)

    Tao, D.; Yu, S.; Parekh, B.K. [University of Kentucky, Lexington, KY (United States). Mining Engineering

    2006-07-01

    The purpose is to study the effectiveness of picobubbles in the column flotation of -28 mesh fine coal particles. A flotation column with a picobubble generator was developed and tested for enhancing the recovery of ultrafine coal particles. The picobubble generator was designed using the hydrodynamic cavitation principle. A metallurgical and a steam coal were tested in the apparatus. The results show that the use of picobubbles in a 2 in. flotation column increased the recovery of fine coal by 10 to 30%. The recovery rate varied with feed rate, collector dosage, and other column conditions. 40 refs., 8 figs., 2 tabs.

  11. Thermally stable dexsil-400 glass capillary columns

    International Nuclear Information System (INIS)

    Maskarinec, M.P.; Olerich, G.

    1980-01-01

    The factors affecting efficiency, thermal stability, and reproducibility of Dexsil-400 glass capillary columns for gas chromatography in general, and for polycyclic aromatic hydrocarbons (PAHs) in particular, were investigated. Columns were drawn from Kimble KG-6 (soda-lime) glass or Kimax (borosilicate) glass. All silylation was carried out at 200 °C. Columns were coated according to the static method. Freshly prepared, degassed solutions of Dexsil-400 in pentane or methylene chloride were used. The thermal stability of the Dexsil-400 columns with respect to gas chromatography/mass spectrometry (GC/MS) was tested. Column-to-column variability is a function of each step in the fabrication of the columns: the degree of etching, the extent of silylation, and the stationary phase film thickness must be carefully controlled. The variability of two Dexsil-400 capillary columns prepared by etching, silylation with a solution of hexamethyldisilazane (HMDS), and static coating is shown; the results also indicate the excellent selectivity of Dexsil-400 for the separation of alkylated aromatic compounds. The wide temperature range of Dexsil-400 and the high efficiency of the capillary columns also allow the analysis of complex mixtures with minimal prefractionation. Direct injection of a coal liquefaction product is given. Analysis by GC/MS indicated the presence of parent PAHs, alkylated PAHs, nitrogen and sulfur heterocycles, and their alkylated derivatives. 4 figures

  12. Laser surface wakefield in a plasma column

    International Nuclear Information System (INIS)

    Gorbunov, L.M.; Mora, P.; Ramazashvili, R.R.

    2003-01-01

    The structure of the wakefield in a plasma column, produced by a short intense laser pulse, propagating through a gas affected by tunneling ionization is investigated. It is shown that besides the usual plasma waves in the bulk part of the plasma column [see Andreev et al., Phys. Plasmas 9, 3999 (2002)], the laser pulse also generates electromagnetic surface waves propagating along the column boundary. The length of the surface wake wave substantially exceeds the length of the plasma wake wave and its electromagnetic field extends far outside the plasma column

  13. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
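
    A toy rendering of the two-phase scheme just described, with 1-D grid portions and interval objects; the data layout and helper names are illustrative only.

    ```python
    from multiprocessing import Pool

    GRID_PORTIONS = [(0, 25), (25, 50), (50, 75), (75, 100)]  # n = 4 portions

    def bound_portions(obj):
        """Phase 1: find which grid portions this interval object touches."""
        lo, hi = obj
        return [i for i, (a, b) in enumerate(GRID_PORTIONS) if lo < b and hi > a]

    def populate(item):
        """Phase 2: one processor fills its own portion with its objects."""
        portion_id, objects = item
        return portion_id, sorted(objects)

    if __name__ == "__main__":
        objects = [(3, 30), (40, 42), (60, 90), (10, 12)]
        with Pool(4) as pool:
            hits = pool.map(bound_portions, objects)   # phase 1, in parallel
            buckets = {i: [] for i in range(len(GRID_PORTIONS))}
            for obj, ids in zip(objects, hits):
                for i in ids:
                    buckets[i].append(obj)
            grid = dict(pool.map(populate, buckets.items()))  # phase 2, in parallel
        print(grid)
    ```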

  14. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  15. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give...

  16. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Parallel moving mechanical systems are solid, fast, and accurate. Among parallel systems, Stewart platforms stand out as the oldest, being fast, solid and precise. The work outlines a few main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting a few items of its dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform are then recorded by a rotation-matrix method. If a structural element consists of two moving parts that translate relative to one another, it is more convenient, for the drive train and especially for the dynamics, to represent that element as a single moving component. We thus have seven moving parts (the six motor elements, or feet, plus the mobile platform as the seventh) and one fixed.

  17. Mechanized Sephadex LH-20 multiple column chromatography as a prerequisite for automated multi-steroid radioimmunoassays

    International Nuclear Information System (INIS)

    Sippell, W.G.; Bidlingmaier, F.; Knorr, D.

    1978-01-01

    To establish a procedure for the simultaneous determination of all major corticosteroid hormones and their immediate biological precursors in the same plasma sample, two different mechanized methods for the simultaneous isolation of aldosterone (A), corticosterone (B), 11-deoxycorticosterone (DOC), progesterone (P), 17-hydroxyprogesterone (17-OHP), 11-deoxycortisol (S), cortisol (F) and cortisone (E) from the methylene chloride extracts of 0.1 to 2.0 ml plasma samples have been developed. In method I, steroids are separated with methylene chloride:methanol = 98:2 as solvent system on 60-cm Sephadex LH-20 columns, up to eight of which are eluted in parallel using a multi-channel peristaltic pump and individual flow-rate control (40 ml/h) by capillary valves and micro-flowmeters. Method II, on the other hand, utilizes the same solvent system on ten 75-cm LH-20 columns which are eluted in reversed flow simultaneously by a ten-channel, double-piston pump that precisely maintains an elution flow rate of 40 ml/h in every column. In both methods, eluate fractions of each of the isolated steroids are automatically pooled and collected from all parallel columns by one programmable linear fraction collector. As a result of the high reproducibility of the elution patterns, both between different parallel columns and between 30 to 40 consecutive elutions, mean recoveries of tritiated steroids including extraction are 60 to 84% after a single separation and still over 50% after an additional separation on 40-cm LH-20 columns, with coefficients of variation below 15% (method II). Thus, the eight steroids can be completely isolated from each of ten plasma extracts within 3 to 4 hours, yielding 80 samples readily prepared for subsequent quantitation by radioimmunoassay. (author)

  18. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to (to the extent possible) exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  19. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  20. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  1. Column: Factors Affecting Data Decay

    Directory of Open Access Journals (Sweden)

    Kevin Fairbanks

    2012-06-01

    Full Text Available In nuclear physics, the phrase decay rate is used to denote the rate that atoms and other particles spontaneously decompose. Uranium-235 famously decays into a variety of daughter isotopes including Thorium and Neptunium, which themselves decay to others. Decay rates are widely observed and wildly different depending on many factors, both internal and external. U-235 has a half-life of 703,800,000 years, for example, while free neutrons have a half-life of 611 seconds and neutrons in an atomic nucleus are stable. We posit that data in computer systems also experiences some kind of statistical decay process and thus also has a discernible decay rate. Like atomic decay, data decay fluctuates wildly. But unlike atomic decay, data decay rates are the result of so many different interplaying processes that we currently do not understand them well enough to come up with quantifiable numbers. Nevertheless, we believe that it is useful to discuss some of the factors that impact the data decay rate, for these factors frequently determine whether useful data about a subject can be recovered by forensic investigation. (See PDF for full column.)

  2. Gaseous carbon dioxide absorbing column

    International Nuclear Information System (INIS)

    Harashina, Heihachi.

    1994-01-01

    The absorbing column of the present invention comprises a cyclone to which CO2 gas and Ca(OH)2 are blown to form CaCO3, a water supply means connected to an upper portion of the cyclone for forming a thin water membrane on the inner wall thereof, and a water processing means connected to a lower portion of the cyclone for draining water incorporating CaCO3. If a mixed fluid of CO2 gas and Ca(OH)2 is blown in a state where a flowing water membrane is formed on the inner wall of the cyclone, formation of CaCO3 is promoted also in the inside of the cyclone in addition to the formation of CaCO3 in the course of blowing. Then, formed CaCO3 is discharged from the lower portion of the cyclone together with downwardly flowing water. With such procedures, solid contents such as CaCO3 separated at the inner circumferential wall are sent into the thin water membrane, adsorbed and captured, and the solid contents are successively washed out, so that a phenomenon that the solid contents deposit and grow on the inner wall of the cyclone can be prevented effectively. (T.M.)

  3. 1979-1999 satellite total ozone column measurements over West Africa

    Directory of Open Access Journals (Sweden)

    P. Di Carlo

    2000-06-01

    Full Text Available Total Ozone Mapping Spectrometer (TOMS) instruments have been flown on NASA/GSFC satellites for over 20 years. They provide near real-time ozone data for atmospheric science research. As part of preliminary efforts aimed at developing a lidar station in Nigeria for monitoring atmospheric ozone and aerosol levels, the monthly mean TOMS total column ozone measurements between 1979 and 1999 have been analysed. The trends of the total column ozone showed a spatial and temporal variation with signs of the Quasi-Biennial Oscillation (QBO) during the 20-year study period. The values of the TOMS total ozone column over Nigeria (4-14°N) are within the range of 230-280 Dobson Units, which is consistent with total ozone column data measured since April 1993 with a Dobson Spectrophotometer at Lagos (3°21′E, 6°33′N), Nigeria.

  4. Rasch models with exchangeable rows and columns

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt

    The article studies distributions of doubly infinite binary matrices with exchangeable rows and columns which satisfy the further property that the probability of any $m \times n$ submatrix is a function of the row and column sums of that matrix. We show that any such distribution is a (unique...

  5. The general packed column : an analytical solution

    NARCIS (Netherlands)

    Gielen, J.L.W.

    2000-01-01

    The transient behaviour of a packed column is considered. The column, uniformly packed on a macroscopic scale, is multi-structured on the microscopic level: the solid phase consists of particles, which may differ in incidence, shape or size, and other relevant physical properties. Transport in the

  6. Fringing-field effects in acceleration columns

    International Nuclear Information System (INIS)

    Yavor, M.I.; Weick, H.; Wollnik, H.

    1999-01-01

    Fringing-field effects in acceleration columns are investigated, based on the fringing-field integral method. Transfer matrices at the effective boundaries of the acceleration column are obtained, as well as the general transfer matrix of the region separating two homogeneous electrostatic fields with different field strengths. The accuracy of the fringing-field integral method is investigated

  7. Automatic parallelization of while-Loops using speculative execution

    International Nuclear Information System (INIS)

    Collard, J.F.

    1995-01-01

    Automatic parallelization of imperative sequential programs has focused on nests of for-loops. The most recent techniques consist of finding an affine mapping with respect to the loop indices to simultaneously capture the temporal and spatial properties of the parallelized program. Such a mapping is usually called a "space-time transformation." This work describes an extension of these techniques to while-loops using speculative execution. We show that space-time transformations are a good framework for summing up previous restructuring techniques for while-loops, such as pipelining. Moreover, we show that these transformations can be derived and applied automatically.
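
    A toy illustration of the speculation idea (not the paper's space-time framework): a search loop "while not p(i): i += 1" is executed by evaluating the expensive exit predicate on a block of future iterations in parallel, committing results in order and discarding the mis-speculated tail. The predicate and block size are invented for illustration.

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def p(i):
        """Expensive loop-exit predicate (placeholder workload)."""
        return i * i > 10_000_000

    def speculative_search(block=8):
        i = 0
        with ProcessPoolExecutor() as ex:
            while True:
                block_ids = range(i, i + block)
                # Speculate: evaluate the whole block as if the loop continues.
                for j, hit in zip(block_ids, ex.map(p, block_ids)):
                    if hit:
                        return j      # first exit; work past j is discarded
                i += block            # no exit in this block: keep going

    if __name__ == "__main__":
        print(speculative_search())   # 3163 for this predicate
    ```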

  8. QDP++: Data Parallel Interface for QCD

    Energy Technology Data Exchange (ETDEWEB)

    Robert Edwards

    2003-03-01

    This is a user's guide for the C++ binding for the QDP Data Parallel Applications Programmer Interface developed under the auspices of the US Department of Energy Scientific Discovery through Advanced Computing (SciDAC) program. The QDP Level 2 API has the following features: (1) Provides data parallel operations (logically SIMD) on all sites across the lattice or subsets of these sites. (2) Operates on lattice objects, which have an implementation-dependent data layout that is not visible above this API. (3) Hides details of how the implementation maps onto a given architecture, namely how the logical problem grid (i.e., lattice) is mapped onto the machine architecture. (4) Allows asynchronous (non-blocking) shifts of lattice level objects over any permutation map of sites onto sites. However, from the user's view these instructions appear blocking and in fact may be so in some implementations. (5) Provides broadcast operations (filling a lattice quantity from a scalar value(s)), global reduction operations, and lattice-wide operations on various data-type primitives, such as matrices, vectors, and tensor products of matrices (propagators). (6) Operator syntax that supports complex expression constructions.
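
    A hedged NumPy analogue of the data-parallel style the API defines: a site-wise (logically SIMD) operation over the whole lattice, a shift over a permutation of sites (here a periodic nearest-neighbour shift via np.roll), and a global reduction. QDP++ itself is C++ with an implementation-defined layout, so this is purely illustrative.

    ```python
    import numpy as np

    L = 8
    phi = np.random.default_rng(0).normal(size=(L, L, L, L))  # one value per site

    chi = 2.0 * phi ** 2 - 1.0           # site-wise operation on all sites at once
    phi_xp = np.roll(phi, -1, axis=0)    # shift: +x neighbour, periodic boundary
    print((chi * phi_xp).mean())         # global reduction over the lattice
    ```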

  9. Optimal Operation and Stabilising Control of the Concentric Heat-Integrated Distillation Column

    DEFF Research Database (Denmark)

    Bisgaard, Thomas; Skogestad, Sigurd; Huusom, Jakob Kjøbsted

    2016-01-01

    A systematic control structure design method is applied on the concentric heat integrated distillation column (HIDiC) separating benzene and toluene. A degrees of freedom analysis is provided for identifying potential manipulated and controlled variables. Optimal operation is mapped and active...

  10. Center column design of the PLT

    International Nuclear Information System (INIS)

    Citrolo, J.; Frankenberg, J.

    1975-01-01

    The center column of the PLT machine is a secondary support member for the toroidal field coils. Its purpose is to decrease the bending moment at the nose of the coils. The center column design was to have been a stainless steel casting with the toroidal field coils grouped around the casting at installation, trapping it in place. However, the castings developed cracks during fabrication and were unsuitable for use. Installation of the coils proceeded without the center column. It then became necessary to redesign a center column which would be capable of installation with the toroidal field coils in place. The final design consists of three A-286 forgings. This paper discusses the final center column design and the influence that new knowledge, obtained during the power tests, had on the new design

  11. Experimental study of parallel multi-tungsten wire Z-pinch

    International Nuclear Information System (INIS)

    Huang Xianbin; China Academy of Engineering Physics, Mianyang; Lin Libin; Yang Libing; Deng Jianjun; Gu Yuanchao; Ye Shican; Yue Zhengpu; Zhou Shaotong; Li Fengping; Zhang Siqun

    2005-01-01

    Implosion experiments with loads of three and of five parallel tungsten wires on the accelerator 'Yang' are reported. Tungsten wires (φ17 μm) with a separation of 1 mm were used. The pinch was driven by a 350 kA peak current with an 80 ns 10%-90% rise time. By means of a pinhole camera and X-ray diagnostics, a non-uniform plasma column formed among the wires and soft X-ray pulses were observed. The change of the load current is analyzed; the development of the sausage and kink instabilities, the 'hot spot' effect and the dispersion spot of the plasma column are also discussed. (authors)

  12. Admittance Scanning for Whole Column Detection.

    Science.gov (United States)

    Stamos, Brian N; Dasgupta, Purnendu K; Ohira, Shin-Ichi

    2017-07-05

    Whole column detection (WCD) is as old as chromatography itself. WCD requires an ability to interrogate column contents from the outside. Other than the obvious case of optical detection through a transparent column, admittance (often termed contactless conductance) measurements can also sense changes in the column contents (especially ionic content) from the outside without galvanic contact with the solution. We propose here electromechanically scanned admittance imaging and apply this to open tubular (OT) chromatography. The detector scans across the column; the length resolution depends on the scanning velocity and the data acquisition frequency, ultimately limited by the physical step resolution (40 μm in the present setup). Precision equal to this step resolution was observed for locating an interface between two immiscible liquids inside a 21 μm capillary. Mechanically, the maximum scanning speed was 100 mm/s, but at 1 kHz sampling rate and a time constant of 25 ms, the highest practical scan speed (no peak distortion) was 28 mm/s. At scanning speeds of 0, 4, and 28 mm/s, the S/N for 180 pL (zone length of 1.9 mm in a 11 μm i.d. column) of 500 μM KCl injected into water was 6450, 3850, and 1500, respectively. To facilitate constant and reproducible contact with the column regardless of minor variations in outer diameter, a double quadrupole electrode system was developed. Columns of significant length (>1 m) can be readily scanned. We demonstrate its applicability with both OT and commercial packed columns and explore uniformity of retention along a column, increasing S/N by stopped-flow repeat scans, etc. as unique applications.

  13. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
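
    The tables the article describes can be generated mechanically; here is a small illustrative search (the candidate resistor values are hypothetical) for pairs whose parallel combination is a whole number.

    ```python
    # Find resistor pairs whose parallel combination R1*R2/(R1+R2) is a whole number.
    from fractions import Fraction
    from itertools import combinations

    values = range(1, 25)  # candidate resistances in ohms (illustrative)
    for r1, r2 in combinations(values, 2):
        total = Fraction(r1 * r2, r1 + r2)  # from 1/R = 1/R1 + 1/R2
        if total.denominator == 1:
            print(f"{r1} || {r2} = {total} ohms")  # e.g. 3 || 6 = 2 ohms
    ```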

  14. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of this research is tools to support the development of parallel programs in C/C++. Methods and software which automate the process of designing parallel applications are proposed.

  15. Establishing column batch repeatability according to Quality by Design (QbD) principles using modeling software.

    Science.gov (United States)

    Rácz, Norbert; Kormány, Róbert; Fekete, Jenő; Molnár, Imre

    2015-04-10

    Column technology needs further improvement even today. To get information on batch-to-batch repeatability, intelligent modeling software was applied. Twelve columns from the same production process, but from different batches, were compared in this work. In this paper, the retention parameters of these columns with real-life sample solutes were studied. The following parameters were selected for measurements: gradient time, temperature and pH. Based on the calculated results, the batch-to-batch repeatability of BEH columns was evaluated. Two parallel measurements on two columns from the same batch were performed to obtain information about the quality of packing. Calculating the average of individual working points at the highest critical resolution (Rs,crit), it was found that the robustness, calculated with a newly released robustness module, had a success rate >98% among the predicted 3^6 = 729 experiments for all 12 columns. With the help of retention modeling, all substances could be separated independently of the batch and/or packing, using the same conditions, with high robustness of the experiments. Copyright © 2015 Elsevier B.V. All rights reserved.
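
    For concreteness, the 729 figure is a full three-level factorial over six method parameters (3^6 design points); a quick sketch follows, with parameter names that are illustrative rather than the software's.

    ```python
    from itertools import product

    levels = {  # low / set-point / high for six hypothetical method parameters
        "gradient_time": (-1, 0, +1), "temperature": (-1, 0, +1), "pH": (-1, 0, +1),
        "flow_rate": (-1, 0, +1), "start_B": (-1, 0, +1), "end_B": (-1, 0, +1),
    }
    grid = list(product(*levels.values()))
    print(len(grid))  # 3**6 = 729 virtual experiments per column
    ```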

  16. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  17. Smooth H I Low Column Density Outskirts in Nearby Galaxies

    Science.gov (United States)

    Ianjamasimanana, R.; Walter, Fabian; de Blok, W. J. G.; Heald, George H.; Brinks, Elias

    2018-06-01

    The low column density gas at the outskirts of galaxies as traced by the 21 cm hydrogen line emission (H I) represents the interface between galaxies and the intergalactic medium, i.e., where galaxies are believed to get their supply of gas to fuel future episodes of star formation. Photoionization models predict a break in the radial profiles of H I at a column density of ∼5 × 1019 cm‑2 due to the lack of self-shielding against extragalactic ionizing photons. To investigate the prevalence of such breaks in galactic disks and to characterize what determines the potential edge of the H I disks, we study the azimuthally averaged H I column density profiles of 17 nearby galaxies from the H I Nearby Galaxy Survey and supplemented in two cases with published Hydrogen Accretion in LOcal GAlaxieS data. To detect potential faint H I emission that would otherwise be undetected using conventional moment map analysis, we line up individual profiles to the same reference velocity and average them azimuthally to derive stacked radial profiles. To do so, we use model velocity fields created from a simple extrapolation of the rotation curves to align the profiles in velocity at radii beyond the extent probed with the sensitivity of traditional integrated H I maps. With this method, we improve our sensitivity to outer-disk H I emission by up to an order of magnitude. Except for a few disturbed galaxies, none show evidence of a sudden change in the slope of the H I radial profiles: the alleged signature of ionization by the extragalactic background.
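
    A toy NumPy version of the stacking trick described above: each sightline's spectrum is shifted by its model velocity so any signal aligns at a common channel, and averaging then beats the noise down roughly as the square root of the number of profiles. Shapes and signal strength are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_los, n_chan = 500, 128
    v_model = rng.integers(-30, 30, n_los)           # model velocity per sightline (channels)

    spectra = rng.normal(0.0, 1.0, (n_los, n_chan))  # noise-dominated spectra
    for i, v in enumerate(v_model):                  # bury a weak line at each v_model[i]
        spectra[i, (n_chan // 2 + v) % n_chan] += 0.5

    aligned = np.stack([np.roll(s, -v) for s, v in zip(spectra, v_model)])
    stacked = aligned.mean(axis=0)                   # line now stands out at mid-channel
    print(stacked[n_chan // 2], stacked.std())
    ```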

  18. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Parallel channel interactions are examined. For experimental research on non-stationary flow regimes in three parallel vertical channels, results of the phenomenon analysis and of the mechanisms of parallel channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow are shown. (author)

  19. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  20. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  1. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  2. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased 120,000 times, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...
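
    The packing idea carries over directly to NumPy-style vectorization: instead of fitting one track at a time, many tracks are laid out in arrays and processed simultaneously. Below is a toy vectorized straight-line fit over 10,000 synthetic tracks; it is not the actual Kalman-filter code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_tracks, n_hits = 10_000, 12
    z = np.linspace(0.0, 1.0, n_hits)                 # detector plane positions
    slopes = rng.normal(size=(n_tracks, 1))
    hits = slopes * z + rng.normal(0.0, 0.01, (n_tracks, n_hits))

    # One vectorized least-squares slope estimate for all tracks at once:
    zc = z - z.mean()
    fit = (hits * zc).sum(axis=1) / (zc ** 2).sum()
    print(np.abs(fit - slopes.ravel()).mean())        # small mean residual
    ```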

  3. Collapse of tall granular columns in fluid

    Science.gov (United States)

    Kumar, Krishna; Soga, Kenichi; Delenne, Jean-Yves

    2017-06-01

    Avalanches, landslides, and debris flows are geophysical hazards, which involve rapid mass movement of granular solids, water, and air as a multi-phase system. In order to describe the mechanism of immersed granular flows, it is important to consider both the dynamics of the solid phase and the role of the ambient fluid. In the present study, the collapse of a granular column in fluid is studied using 2D LBM - DEM. The flow kinematics are compared with the dry and buoyant granular collapse to understand the influence of hydrodynamic forces and lubrication on the run-out. In the case of tall columns, the amount of material destabilised above the failure plane is larger than that of short columns. Therefore, the surface area of the mobilised mass that interacts with the surrounding fluid in tall columns is significantly higher than the short columns. This increase in the area of soil - fluid interaction results in an increase in the formation of turbulent vortices thereby altering the deposit morphology. It is observed that the vortices result in the formation of heaps that significantly affects the distribution of mass in the flow. In order to understand the behaviour of tall columns, the run-out behaviour of a dense granular column with an initial aspect ratio of 6 is studied. The collapse behaviour is analysed for different slope angles: 0°, 2.5°, 5° and 7.5°.

  4. Field Applications of Gamma Column Scanning Technology

    International Nuclear Information System (INIS)

    Aquino, Denis D.; Mallilin, Janice P.; Nuñez, Ivy Angelica A.; Bulos, Adelina DM.

    2015-01-01

    The Isotope Techniques Section (ITS) under the Nuclear Service Division (NSD) of the Philippine Nuclear Research Institute (PNRI) conducts services, research and development on radioisotope and sealed source applications in industry. This aims to benefit manufacturing industries such as petroleum, petrochemical, chemical, energy, waste and column treatment plants, etc., through on-line inspection and troubleshooting of process vessels, columns or pipes, which can optimize process operation and increase production efficiency. One of the most common sealed source techniques for industrial applications is gamma column scanning. Gamma column scanning is an established technique for the inspection, analysis and diagnosis of industrial columns for process optimization, solving operational malfunctions and management of resources. It is a convenient, non-intrusive, cost-effective and efficient technique to examine the inner details of an industrial process vessel, such as a distillation column, while it is in operation. PNRI recognizes the importance and benefits of this technology and has implemented activities to make gamma column scanning locally available to benefit Philippine industries. A continuous effort in capacity building is being pursued through in-house and on-the-job training abroad and the upgrading of equipment. (author)

  5. Parallel beam dynamics simulation of linear accelerators

    International Nuclear Information System (INIS)

    Qiang, Ji; Ryne, Robert D.

    2002-01-01

    In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies

  6. THE 'TRUE' COLUMN DENSITY DISTRIBUTION IN STAR-FORMING MOLECULAR CLOUDS

    International Nuclear Information System (INIS)

    Goodman, Alyssa A.; Pineda, Jaime E.; Schnee, Scott L.

    2009-01-01

    We use the COMPLETE Survey's observations of the Perseus star-forming region to assess and intercompare the three methods used for measuring column density in molecular clouds: near-infrared (NIR) extinction mapping; thermal emission mapping in the far-IR; and mapping the intensity of CO isotopologues. Overall, the structures shown by all three tracers are morphologically similar, but important differences exist among the tracers. We find that the dust-based measures (NIR extinction and thermal emission) give similar, log-normal, distributions for the full (∼20 pc scale) Perseus region, once careful calibration corrections are made. We also compare dust- and gas-based column density distributions for physically meaningful subregions of Perseus, and we find significant variations in the distributions for those (smaller, ∼few pc scale) regions. Even though we have used 12CO data to estimate excitation temperatures, and we have corrected for opacity, the 13CO maps seem unable to give column distributions that consistently resemble those from dust measures. We have edited out the effects of the shell around the B-star HD 278942 from the column density distribution comparisons. In that shell's interior and in the parts where it overlaps the molecular cloud, there appears to be a dearth of 13CO, which is likely due either to 13CO not yet having had time to form in this young structure and/or destruction of 13CO in the molecular cloud by HD 278942's wind and/or radiation. We conclude that the use of either dust or gas measures of column density without extreme attention to calibration (e.g., of thermal emission zero-levels) and artifacts (e.g., the shell) is more perilous than even experts might normally admit. And, the use of 13CO data to trace total column density in detail, even after proper calibration, is unavoidably limited in utility due to threshold, depletion, and opacity effects. If one's main aim is to map column density (rather than temperature...

  7. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    Science.gov (United States)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

    Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing brings parallel computing into people's lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.

  8. Novel design for centrifugal counter-current chromatography: VI. Ellipsoid column.

    Science.gov (United States)

    Gu, Dongyu; Yang, Yi; Xin, Xuelei; Aisa, Haji Akber; Ito, Yoichiro

    2015-01-01

    A novel ellipsoid column was designed for centrifugal counter-current chromatography. Performance of the ellipsoid column with a capacity of 3.4 mL was examined with three different solvent systems composed of 1-butanol-acetic acid-water (4:1:5, v/v) (BAW), hexane-ethyl acetate-methanol-0.1 M HCl (1:1:1:1, v/v) (HEMH), and 12.5% (w/w) PEG1000 and 12.5% (w/w) dibasic potassium phosphate in water (PEG-DPP), each with suitable test samples. In the dipeptide separation with the BAW system, both stationary phase retention (Sf) and peak resolution (Rs) of the ellipsoid column were much higher at a 0° column angle (column axis parallel to the centrifugal force) than at a 90° column angle (column axis perpendicular to the centrifugal force), where elution with the lower phase at a low flow rate produced the best separation, yielding an Rs of 2.02 with 27.8% Sf at a flow rate of 0.07 ml/min. In the DNP-amino acid separation with the HEMH system, the best results were obtained at a flow rate of 0.05 ml/min with 31.6% Sf, yielding high Rs values of 2.16 between the DNP-DL-glu and DNP-β-ala peaks and 1.81 between the DNP-β-ala and DNP-L-ala peaks. In the protein separation with the PEG-DPP system, lysozyme and myoglobin were resolved at an Rs of 1.08 at a flow rate of 0.03 ml/min with 38.9% Sf. Most of these Rs values exceed those obtained from the figure-8 column under similar experimental conditions previously reported.

  9. Determination of zearalenone content in cereals and feedstuffs by immunoaffinity column coupled with liquid chromatography.

    Science.gov (United States)

    Fazekas, B; Tar, A

    2001-01-01

    The zearalenone content of maize, wheat, barley, swine feed, and poultry feed samples was determined by immunoaffinity column cleanup followed by liquid chromatography (IAC-LC). Samples were extracted in methanol-water (8 + 2, v/v) solution. The filtered extract was diluted with distilled water and applied to immunoaffinity columns. Zearalenone was eluted with methanol, dried by evaporation, and dissolved in acetonitrile-water (3 + 7, v/v). Zearalenone was separated by isocratic elution with acetonitrile-water (50 + 50, v/v) on a reversed-phase C18 column. Quantitative analysis was performed with a fluorescence detector and confirmation was based on the UV spectrum obtained by a diode array detector. The mean recovery rate of zearalenone was 82-97% (RSD, 1.4-4.1%) on the original (single-use) immunoaffinity columns. The limit of detection of zearalenone by fluorescence was 10 ng/g at a signal-to-noise ratio of 10:1, and 30 ng/g by spectral confirmation in UV. A good correlation was found (R2 = 0.89) between the results obtained by IAC-LC and by the official AOAC-LC method. The specificity of the method was increased by using fluorescence detection in parallel with UV detection. This method was applicable to the determination of zearalenone content in cereals and other kinds of feedstuffs. Reusability of immunoaffinity columns was examined by washing with water after sample elution and allowing the columns to stand for 24 h at room temperature. The zearalenone recovery rate of the regenerated columns varied between 79 and 95% (RSD, 3.2-6.3%). Columns can be regenerated at least 3 times without altering their performance and without affecting the results of repeated determinations.

  10. Dynamic effects of diabatization in distillation columns

    DEFF Research Database (Denmark)

    Bisgaard, Thomas; Huusom, Jakob Kjøbsted; Abildskov, Jens

    2013-01-01

    The dynamic effects of diabatization in distillation columns are investigated in simulation emphasizing the heat-integrated distillation column (HIDiC). A generic, dynamic, first-principle model has been formulated, which is flexible enough to describe various diabatic distillation configurations....... Dynamic Relative Gain Array and Singular Value Analysis have been applied in a comparative study of a conventional distillation column and a HIDiC. The study showed increased input-output coupling due to diabatization. Feasible SISO control structures for the HIDiC were also found and control...

  11. Dynamic Effects of Diabatization in Distillation Columns

    DEFF Research Database (Denmark)

    Bisgaard, Thomas; Huusom, Jakob Kjøbsted; Abildskov, Jens

    2012-01-01

    The dynamic effects of diabatization in distillation columns are investigated in simulation with primary focus on the heat-integrated distillation column (HIDiC). A generic, dynamic, first-principle model has been formulated, which is flexible enough to describe various diabatic distillation configurations.... Dynamic Relative Gain Array and Singular Value Analysis have been applied in a comparative study of a conventional distillation column and a HIDiC. The study showed increased input-output coupling due to diabatization. Feasible SISO control structures for the HIDiC were also found. Control...

  12. Column-oriented database management systems

    OpenAIRE

    Možina, David

    2013-01-01

    In the following thesis I will present column-oriented databases. Among other things, I will answer the question of why there is a need for a column-oriented database. In recent years there has been a lot of attention paid to column-oriented databases, even though columnar database management systems date back to the early seventies of the last century. I will compare both approaches to database management – a column-oriented database system and a row-oriented database system ...

  13. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. The SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than is possible with the individual programs, allowing flexibility to address a wide variety of applications. However, the current system was originally designed for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances have been applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer-term SCALE enhancement is also outlined.

  14. Behavior of chemicals in the seawater column by shadowscopy

    Science.gov (United States)

    Fuhrer, Mélanie; Aprin, Laurent; Le Floch, Stéphane; Slangen, Pierre; Dusserre, Gilles

    2012-10-01

    Ninety percent of the global movement of goods transits by ship. The transportation of HNS (Hazardous and Noxious Substances) in bulk has increased greatly with tanker traffic. The huge volume capacities induce a major risk of accidents involving chemicals. Among the latest accidents, many have led to vessels sinking (Ievoli Sun, 2000 - ECE, 2006). In the case of floating substances, a liquid release at depth entails an ascending two-phase flow. The visualization of that flow is complex. Indeed, most liquid chemicals have a refractive index close to that of water, causing difficulties for the assessment of the two-phase medium behavior. Several physical aspects are of interest: droplet characterization (shape evolution and velocity), dissolution kinetics, and hydrodynamic vortices. Previous works, presented at the 2010 Speckle conference in Brazil, employed Dynamic Speckle Interferometry to study Methyl Ethyl Ketone (MEK) dissolution in a 15 cm high and 1 cm thick water column. This paper deals with experiments achieved with the Cedre Experimental Column (CEC - 5 m high and 0.8 m in diameter). As the water thickness has been increased, the Dynamic Speckle Interferometry results are improved upon by shadowscopic measurements. A laser diode is used to generate parallel light while high-speed imaging records the rising products. Two measurement systems are placed at the bottom and the top of the CEC. The chemical class of the pollutant (floaters; dissolvers forming plumes, trails, or droplets) has then been identified. The physics of the two-phase flow is presented and shows its dependence on chemical properties such as interfacial tension, viscosity, and density. Furthermore, parallel light propagation through this disturbed medium has revealed trailing-edge vortices for some substances (e.g. butanol) presenting low refractive-index changes.

  15. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially separated polarization components of a laser using a digital micromirror device and subsequently beam-combining them. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics, with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
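
    To make the serial-versus-parallel contrast concrete, the following toy Jones-calculus sketch synthesizes an output state of polarization as a coherent, weighted sum of two spatially separated components, in the spirit of the parallel architecture described above. The two-component basis, the weights, and the function names are illustrative assumptions, not the authors' experimental configuration.

        import numpy as np

        # A serial PSG applies a *product* of retarder matrices to one beam;
        # the parallel PSG adds independently intensity-modulated polarization
        # components -- a *sum*. Here: a coherent sum of two Jones components.
        H = np.array([1.0, 0.0], dtype=complex)   # horizontal basis component
        V = np.array([0.0, 1.0], dtype=complex)   # vertical basis component

        def parallel_psg(w_h, w_v, phase):
            """Output SOP from DMD-style intensity weights and a relative phase."""
            E = np.sqrt(w_h) * H + np.sqrt(w_v) * np.exp(1j * phase) * V
            return E / np.linalg.norm(E)

        def stokes(E):
            ex, ey = E
            return [abs(ex) ** 2 + abs(ey) ** 2,
                    abs(ex) ** 2 - abs(ey) ** 2,
                    2 * (ex.conjugate() * ey).real,
                    2 * (ex.conjugate() * ey).imag]

        # Equal weights and a 90-degree relative phase give circular light.
        print(stokes(parallel_psg(0.5, 0.5, np.pi / 2)))  # ~ [1, 0, 0, 1]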

  16. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Characterization of the neutron flux in the Hohlraum of the thermal column of the TRIGA Mark III reactor of the ININ

    International Nuclear Information System (INIS)

    Delfin L, A.; Palacios, J.C.; Alonso, G.

    2006-01-01

    Knowledge of the magnitude of the neutron flux in a reactor's irradiation facilities is important both for reactor operation and for the development of research. In particular, knowing the spectrum and the neutron flux with reasonable precision at the different irradiation positions of a reactor is essential for evaluating the results obtained from a given irradiation experiment. The TRIGA Mark III reactor has irradiation facilities designed for experimentation in which the reactor is used as an intense source of neutrons and gamma radiation, allowing samples or equipment to be irradiated in radiation fields of diverse components and levels; one of these irradiation facilities is the Thermal Column, which houses the Hohlraum. In this work, the neutron flux inside the 'Hohlraum' of the Thermal Column irradiation facility of the TRIGA Mark III reactor of the Nuclear Center of Mexico was characterized at 1 MW of power. The subcadmium and epicadmium neutron fluxes were determined by means of the neutron activation technique using thin gold foils. Maps of the neutron flux distribution for both energy groups at three different positions inside the 'Hohlraum' are presented; these maps were obtained by irradiating bare and cadmium-covered thin gold activation foils in 10 x 12 arrays, placed parallel to the internal graphite wall of the facility at 11.5 cm, 40.5 cm, and 70.5 cm from it, in the direction away from the reactor core. From the neutron flux values obtained, it was found that, over the same irradiation surface of the experimental arrangement, the relative differences among neutron flux values can reach 80%, and that the differences between different positions of the irradiation surfaces can vary by up to one order of magnitude. (Author)

  18. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Science.gov (United States)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that consists of the minimization of a cost function, achieved by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, it is a parallel Jacobi-type algorithm with alternating minimizations. This strategy is known as the chessboard type, where red pixels can be updated in parallel in the same iteration since they are independent. Similarly, black pixels can be updated in parallel in an alternating iteration. We present parallel implementations of our algorithm for different parallel multicore architectures such as CPU multicore, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, we obtain superior performance of our parallel algorithm when compared with the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
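
    The chessboard (red-black) pattern described above can be sketched in a few lines: the grid is split into two interleaved colors, and all pixels of one color depend only on the other color, so each color can be updated in parallel. The sketch below uses an assumed Poisson-like update with periodic boundaries (via np.roll) purely for illustration; it is not the paper's accumulation-of-residual-maps cost function.

        import numpy as np

        def checkerboard_smooth(phi, rho, n_iter=50):
            """Red-black (chessboard) relaxation: phi is the current estimate,
            rho a residual/source map. Every pixel of one color is independent
            of the others of the same color, hence parallelizable."""
            red = np.zeros(phi.shape, dtype=bool)
            red[::2, ::2] = True
            red[1::2, 1::2] = True
            for _ in range(n_iter):
                for mask in (red, ~red):          # alternate the two colors
                    nb = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                          np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
                    phi = np.where(mask, 0.25 * (nb - rho), phi)
            return phi

        rng = np.random.default_rng(1)
        out = checkerboard_smooth(np.zeros((64, 64)), rng.normal(size=(64, 64)))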

  19. Family of columns isospectral to gravity-loaded columns with tip force: A discrete approach

    Science.gov (United States)

    Ramachandran, Nirmal; Ganguli, Ranjan

    2018-06-01

    A discrete model is introduced to analyze transverse vibration of straight, clamped-free (CF) columns of variable cross-sectional geometry under the influence of gravity and a constant axial force at the tip. The discrete model is used to determine critical combinations of loading parameters - a gravity parameter and a tip force parameter - that cause the onset of dynamic instability in the CF column. A methodology, based on matrix factorization, is described to transform the discrete model into a family of models corresponding to weightless and unloaded clamped-free (WUCF) columns, each with a transverse vibration spectrum isospectral to the original model. Characteristics of models in this isospectral family depend on three transformation parameters. A procedure is discussed to convert the isospectral discrete model description into a geometric description of realistic columns, i.e. from the discrete model, we construct isospectral WUCF columns with rectangular cross-sections varying in width and depth. As part of numerical studies to demonstrate the efficacy of the techniques presented, frequency parameters of a uniform column and three types of tapered CF columns under different combinations of loading parameters are obtained from the discrete model. Critical combinations of these parameters for a typical tapered column are derived. These results match published results. Example CF columns under arbitrarily chosen combinations of loading parameters are considered and, for each combination, isospectral WUCF columns are constructed. The role of transformation parameters in determining characteristics of isospectral columns is discussed and optimum values are deduced. Natural frequencies of these WUCF columns computed using the Finite Element Method (FEM) match well with those of the given gravity-loaded CF column with tip force, hence confirming isospectrality.

  20. Unbonded Prestressed Columns for Earthquake Resistance

    Science.gov (United States)

    2012-05-01

    Modern structures are able to survive significant shaking caused by earthquakes. By implementing unbonded post-tensioned tendons in bridge columns, the damage caused by an earthquake can be significantly lower than that of a standard reinforced concr...

  1. PRTR ion exchange vault column sampling

    International Nuclear Information System (INIS)

    Cornwell, B.C.

    1995-01-01

    This report documents ion exchange column sampling and Non-Destructive Assay (NDA) results from activities in 1994 for the Plutonium Recycle Test Reactor (PRTR) ion exchange vault. The objective was to obtain sufficient information to prepare disposal documentation for the ion exchange columns found in the PRTR ion exchange vault. This activity also allowed for the monitoring of the liquid level in the lower vault. The sampling activity comprised five separate activities: (1) sampling an ion exchange column and analyzing the ion exchange media for purposes of waste disposal; (2) gamma and neutron NDA testing on ion exchange columns located in the upper vault; (3) lower vault liquid level measurement; (4) radiological survey of the upper vault; and (5) securing the vault pending waste disposal.

  2. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with these efforts. This paper aims to make a small contribution to them. We propose an overview of parallel programming, parallel execution, and collaborative systems.

  3. Capacity of columns with splice imperfections

    International Nuclear Information System (INIS)

    Popov, E.P.; Stephen, R.M.

    1977-01-01

    To study the behavior of spliced columns subjected to tensile forces simulating situations which may develop in an earthquake, all of the spliced specimens were tested to failure in tension after first having been subjected to large compressive loads. The results of these tests indicate that the lack of perfect contact at compression splices of columns may not be important, provided that the gaps are shimmed and welding is used to maintain the sections in alignment

  4. Scaling up machine learning: parallel and distributed approaches

    National Research Council Canada - National Science Library

    Bekkerman, Ron; Bilenko, Mikhail; Langford, John

    2012-01-01

    ... presented in the book cover a range of parallelization platforms from FPGAs and GPUs to multi-core systems and commodity clusters; concurrent programming frameworks that include CUDA, MPI, MapReduce, and DryadLINQ; and various learning settings: supervised, unsupervised, semi-supervised, and online learning. Extensive coverage of parallelizat...

  5. Gas Chromatograph Method Optimization Trade Study for RESOLVE: 20-meter Column v. 8-meter Column

    Science.gov (United States)

    Huz, Kateryna

    2014-01-01

    RESOLVE is the payload on a Class D mission, Resource Prospector, which will prospect for water and other volatile resources at a lunar pole. The RESOLVE payload's primary scientific purpose includes determining the presence of water on the moon in the lunar regolith. In order to detect the water, a gas chromatograph (GC) will be used in conjunction with a mass spectrometer (MS). The goal of the experiment was to compare two GC column lengths and recommend which would be best for RESOLVE's purposes. Throughout the experiment, an Inficon Fusion GC and an Inficon Micro GC 3000 were used. The Fusion had a 20 m long column with 0.25 mm internal diameter (Id). The Micro GC 3000 had an 8 m long column with a 0.32 mm Id. By varying the column temperature and column pressure while holding all other parameters constant, the ideal conditions for testing with each column length in its individual instrument configuration were determined. The criteria used for determining the optimal method parameters included (in no particular order) (1) quickest run time, (2) peak sharpness, and (3) peak separation. After testing numerous combinations of temperature and pressure, the parameters for each column length that resulted in the optimal data given the three criteria were selected. The ideal temperature and pressure for the 20 m column were 95 °C and 50 psig. At this temperature and pressure, the peaks were separated and the retention times were shorter compared to other combinations. The Inficon Micro GC 3000 operated better at lower temperature, mainly due to the shorter 8 m column. The optimal column temperature and pressure were 70 °C and 30 psig. The Inficon Micro GC 3000 8 m column had worse separation than the Inficon Fusion 20 m column, but was able to separate water within a shorter run time. Therefore, the most significant tradeoff between the two column lengths was peak separation of the sample versus run time. After performing several tests, it was concluded that better

  6. The handedness of historiated spiral columns.

    Science.gov (United States)

    Couzin, Robert

    2017-09-01

    Trajan's Column in Rome (AD 113) was the model for a modest number of other spiral columns decorated with figural, narrative imagery from antiquity to the present day. Most of these wind upwards to the right, often with a congruent spiral staircase within. A brief introductory consideration of antique screw direction in mechanical devices and fluted columns suggests that the former may have been affected by the handedness of designers and the latter by a preference for symmetry. However, for the historiated columns that are the main focus of this article, the determining factor was likely script direction. The manner in which this operated is considered, as well as competing mechanisms that might explain exceptions. A related phenomenon is the reversal of the spiral in a non-trivial number of reproductions of the antique columns, from Roman coinage to Renaissance and baroque drawings and engravings. Finally, the consistent inattention in academic literature to the spiral direction of historiated columns and the repeated publication of erroneous earlier reproductions warrants further consideration.

  7. Interpretation of the lime column penetration test

    International Nuclear Information System (INIS)

    Liyanapathirana, D S; Kelly, R B

    2010-01-01

    Dry soil mix (DSM) columns are used to reduce settlement and to improve the stability of embankments constructed on soft clays. During construction, the shear strength of the columns needs to be confirmed for compliance with technical assumptions. A specialized blade-shaped penetrometer, known as the lime column probe, has been developed for testing DSM columns. This test can be carried out as a pull-out resistance test (PORT) or a push-in resistance test (PIRT). The test is considered to be more representative of average column shear strength than methods that test only a limited area of the column. Both PORT and PIRT tests require empirical correlations of measured resistance to an absolute measure of shear strength, in a similar manner to the cone penetration test. In this paper, the finite element method is used to assess the probe factor, N, for the PORT test. Due to the large soil deformations around the probe, an Arbitrary Lagrangian-Eulerian (ALE) based finite element formulation has been used. Variations of N with rigidity index and with friction at the probe-soil interface are investigated to establish a range for the probe factor.

  8. Distributed and parallel approach for handle and perform huge datasets

    Science.gov (United States)

    Konopko, Joanna

    2015-12-01

    Big Data refers to dynamic, large, and disparate volumes of data coming from many different sources (tools, machines, sensors, mobile devices), uncorrelated with each other. It requires new, innovative, and scalable technology to collect, host, and analytically process the vast amount of data. A proper architecture for a system that processes huge data sets is needed. In this paper, a comparison of distributed and parallel system architectures is presented using the example of the MapReduce (MR) Hadoop platform and a parallel database platform (DBMS). This paper also analyzes the problem of extracting and handling valuable information from petabytes of data. Both paradigms, MapReduce and parallel DBMS, are described and compared. A hybrid architecture approach is also proposed, which could be used to solve the analyzed problem of storing and processing Big Data.
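
    As a point of reference for the MR paradigm compared above, here is a minimal in-memory map/shuffle/reduce over a handful of records. It is a sketch of the programming model only, not Hadoop's distributed implementation; the record set and function names are made up.

        from collections import defaultdict
        from itertools import chain

        def map_fn(record):
            # emit (key, value) pairs; here: word counts
            for word in record.split():
                yield word.lower(), 1

        def reduce_fn(key, values):
            # fold all values that share a key
            return key, sum(values)

        def mapreduce(records):
            groups = defaultdict(list)          # the "shuffle" phase
            for key, value in chain.from_iterable(map(map_fn, records)):
                groups[key].append(value)
            return sorted(reduce_fn(k, vs) for k, vs in groups.items())

        print(mapreduce(["big data big systems", "parallel data systems"]))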

  9. Multi-Column Experimental Test Bed Using CaSDB MOF for Xe/Kr Separation

    Energy Technology Data Exchange (ETDEWEB)

    Welty, Amy Keil [Idaho National Lab. (INL), Idaho Falls, ID (United States); Greenhalgh, Mitchell Randy [Idaho National Lab. (INL), Idaho Falls, ID (United States); Garn, Troy Gerry [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-03-01

    Processing of spent nuclear fuel produces off-gas from which several volatile radioactive components must be separated for further treatment or storage. As part of the Off-gas Sigma Team, parallel research at INL and PNNL has produced several promising sorbents for the selective capture of xenon and krypton from these off-gas streams. In order to design full-scale treatment systems, sorbents that are promising on a laboratory scale must be proven under process conditions to be considered for pilot- and then full-scale use. To that end, a bench-scale multi-column system with the capability to test multiple sorbents was designed and constructed at INL. This report details bench-scale testing of CaSDB MOF, produced at PNNL, and compares the results to those reported last year using INL engineered sorbents. Two multi-column tests were performed with the CaSDB MOF installed in the first column, followed by HZ-PAN installed in the second column. The CaSDB MOF column was placed in a Stirling cryocooler while the cryostat was employed for the HZ-PAN column. Test temperatures of 253 K and 191 K were selected for the first column, while the second column was held at 191 K for both tests. Calibrated-volume sample bombs were utilized for gas stream analyses. At the conclusion of each test, samples were collected from each column and analyzed for gas composition. While CaSDB MOF does appear to have good capacity for Xe, the short time to initial breakthrough would make design of a continuous adsorption/desorption cycle difficult, requiring either very large columns or a large number of smaller columns. Because of the tenacity with which Xe and Kr adhere to the material once adsorbed, this CaSDB MOF may be more suitable for use as a long-term storage solution. Further testing is recommended to determine if CaSDB MOF is suitable for this purpose.

  10. Mass transfer model liquid phase catalytic exchange column simulation applicable to any column composition profile

    Energy Technology Data Exchange (ETDEWEB)

    Busigin, A. [NITEK USA Inc., Ocala, FL (United States)

    2015-03-15

    Liquid Phase Catalytic Exchange (LPCE) is a key technology used in water detritiation systems. Rigorous simulation of LPCE is complicated when a column may have both hydrogen and deuterium present in significant concentrations in different sections of the column. This paper presents a general mass transfer model for a homogeneous packed-bed LPCE column as a set of differential equations describing composition change, together with equilibrium equations that define the mass-transfer driving force within the column. The model is used to show the effect of deuterium buildup in the bottom of an LPCE column arising from a non-negligible D atom fraction in the bottom feed gas to the column. These types of calculations are important in the design of CECE (Combined Electrolysis and Catalytic Exchange) water detritiation systems.
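
    The kind of ODE set the abstract refers to can be pictured with a heavily simplified counter-current exchange column: one transferable species, a linear equilibrium y* = alpha*x, constant molar flows, and alternating Euler sweeps to satisfy the two-point boundary conditions. All parameter values are assumed for illustration; this is a sketch of the model class, not Busigin's multi-isotope model.

        import numpy as np

        alpha, kya = 1.5, 0.5            # assumed equilibrium / transfer parameters
        L, G, H, n = 1.0, 1.2, 5.0, 400  # liquid flow, gas flow, height (m), grid
        dz = H / (n - 1)
        x = np.full(n, 1e-6)             # liquid atom fraction, fed at top (i = 0)
        y = np.zeros(n)                  # gas atom fraction, fed at bottom (i = n-1)

        for _ in range(200):             # alternate sweeps until profiles settle
            for i in range(n - 1):       # liquid marches downward from its feed
                x[i + 1] = x[i] - kya * (alpha * x[i] - y[i]) / L * dz
            for i in range(n - 1, 0, -1):  # gas marches upward from its feed
                y[i - 1] = y[i] + kya * (alpha * x[i] - y[i]) / G * dz

        print("liquid detritiation factor:", x[0] / x[-1])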

  11. Development of spent salt treatment technology by zeolite column system. Performance evaluation of zeolite column

    International Nuclear Information System (INIS)

    Miura, Hidenori; Uozumi, Koichi

    2009-01-01

    At the electrorefining process, fission products (FPs) accumulate in the molten salt. To avoid influence on heating control by decay heat and enlargement of the FP amount in the recovered fuel, FP elements must be removed from the spent salt of the electrorefining process. For the removal of the FPs from the spent salt, we are investigating the availability of a zeolite column system. To obtain basic data on the column system, such as flow properties and ion-exchange performance while high-temperature molten salt passes through the column, an experimental apparatus equipped with a fraction collector was developed. Using this apparatus, the following results were obtained. (1) The flow parameters of a column system with zeolite powder, such as flow rate control by argon pressure, were clarified. (2) Zeolite 4A in the column can absorb cesium, one of the FP elements in molten salt. From these results, we obtained a perspective on the availability of the zeolite column system. (author)

  12. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    Science.gov (United States)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan was (1) to develop highly accurate parallel numerical algorithms, (2) to conduct preliminary testing to verify the effectiveness and potential of these algorithms, and (3) to incorporate the newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested, based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm

  13. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes the work of an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and it must be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
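
    A minimal master-slave skeleton in the spirit of the framework described above, using Python's multiprocessing in place of true message passing: the master scatters each cycle's work to the slaves, gathers the partial results, and reduces them. The names and the stand-in workload are illustrative assumptions, not the framework's actual API.

        from multiprocessing import Pool

        def slave_work(chunk):
            # stand-in for one processing unit's share of a cycle
            # (e.g. a group of ants in ACO, or a band of image rows in SNF)
            return min(x * x - 10 * x for x in chunk)

        def run_cycles(data, n_slaves=4, n_cycles=10):
            chunks = [data[i::n_slaves] for i in range(n_slaves)]
            best = float("inf")
            with Pool(n_slaves) as pool:
                for _ in range(n_cycles):
                    partials = pool.map(slave_work, chunks)  # scatter + gather
                    best = min(best, min(partials))          # master-side reduce
            return best

        if __name__ == "__main__":
            print(run_cycles(list(range(100))))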

  14. A class of parallel algorithms for computation of the manipulator inertia matrix

    Science.gov (United States)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on the composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm was also developed for the linear array, achieving significantly higher efficiency.

  15. Structural Decoupling and Disturbance Rejection in a Distillation Column

    DEFF Research Database (Denmark)

    Bahar, Mehrdad; Jantzen, Jan; Commault, C.

    1996-01-01

    Introduction, distillation column model, input-output decoupling, disturbance rejection, concluding remarks, references.

  16. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD with distributed memory, and a workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved.

  17. Column Selection for Biomedical Analysis Supported by Column Classification Based on Four Test Parameters.

    Science.gov (United States)

    Plenis, Alina; Rekowska, Natalia; Bączek, Tomasz

    2016-01-21

    This article focuses on correlating the column classification obtained from the method created at the Katholieke Universiteit Leuven (KUL) with the chromatographic resolution attained in biomedical separations. In the KUL system, each column is described by four parameters, which enables estimation of the FKUL value characterising the similarity of those parameters to those of a selected reference stationary phase. Thus, a ranking list based on the FKUL value can be calculated for the chosen reference column and then correlated with the results of the column performance test. In this study, the column performance test was based on analysis of moclobemide and its two metabolites in human plasma by liquid chromatography (LC), using 18 columns. The comparative study was performed using traditional correlation of the FKUL values with the retention parameters of the analytes describing the column performance test. In order to deepen the comparative assessment of both data sets, factor analysis (FA) was also used. The obtained results indicated that stationary phase classes closely related according to the KUL method yielded comparable separations of the target substances. Therefore, the column ranking system based on the FKUL values could be considered supportive in the choice of an appropriate column for biomedical analysis.
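
    The ranking step can be pictured with a small sketch: each column is a point in the four-parameter space, and candidates are ordered by a distance-type F value to the reference column. The formula below (an unweighted Euclidean distance) and all parameter values are assumptions for illustration only; the published FKUL definition weights and normalizes the four parameters differently.

        import numpy as np

        # four test parameters per column (values are made up)
        columns = {
            "PhaseA": np.array([0.90, 1.05, 0.25, 0.70]),
            "PhaseB": np.array([0.60, 1.40, 0.45, 0.55]),
            "PhaseC": np.array([0.97, 1.12, 0.18, 0.78]),
        }
        reference = np.array([0.95, 1.10, 0.20, 0.75])

        def f_value(params, ref):
            # smaller F = more similar to the reference stationary phase
            return float(np.sqrt(np.sum((params - ref) ** 2)))

        ranking = sorted((f_value(p, reference), name)
                         for name, p in columns.items())
        for f, name in ranking:
            print(f"{name}: F = {f:.3f}")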

  18. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. ... about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in the case of nucleobases Y and Z, a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one of the phosphate groups in the Watson-Crick strand. It was also shown that the nucleobase Y made good stacking and binding with the other nucleobases in the TFO and the Watson-Crick duplex, respectively. In contrast, the nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, which ...

  19. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
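
    The consensual-combination step described above can be sketched as a weighted sum of stage-network outputs. The toy class scores and weights below are made up; in the paper the weights come from the optimization methods it describes.

        import numpy as np

        def consensual_decision(stage_scores, weights):
            """stage_scores: one (n_samples, n_classes) array per stage network."""
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()                    # normalize consensus weights
            combined = sum(wi * s for wi, s in zip(w, stage_scores))
            return combined.argmax(axis=1)     # consensual class labels

        stages = [np.array([[0.6, 0.4], [0.3, 0.7]]),   # stage 1 class scores
                  np.array([[0.5, 0.5], [0.2, 0.8]]),   # stage 2 class scores
                  np.array([[0.7, 0.3], [0.4, 0.6]])]   # stage 3 class scores
        print(consensual_decision(stages, weights=[0.9, 0.7, 0.8]))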

  20. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J. F.; Fregly, B. J.; Haftka, R. T.; George, A. D.

    2003-01-01

    .... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...

  1. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  2. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    ... adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral ...

  3. Recent advances in column switching sample preparation in bioanalysis.

    Science.gov (United States)

    Kataoka, Hiroyuki; Saito, Keita

    2012-04-01

    Column switching techniques, using two or more stationary phase columns, are useful for trace enrichment and online automated sample preparation. Target fractions from the first column are transferred online to a second column with different properties for further separation. Column switching techniques can be used to determine the analytes in a complex matrix by direct sample injection or by simple sample treatment. Online column switching sample preparation is usually performed in combination with HPLC or capillary electrophoresis. SPE or turbulent flow chromatography using a cartridge column and in-tube solid-phase microextraction using a capillary column have been developed for convenient column switching sample preparation. Furthermore, various micro-/nano-sample preparation devices using new polymer-coating materials have been developed to improve extraction efficiency. This review describes current developments and future trends in novel column switching sample preparation in bioanalysis, focusing on innovative column switching techniques using new extraction devices and materials.

  4. Two-dimensional liquid chromatography consisting of twelve second-dimension columns for comprehensive analysis of intact proteins.

    Science.gov (United States)

    Ren, Jiangtao; Beckner, Matthew A; Lynch, Kyle B; Chen, Huang; Zhu, Zaifang; Yang, Yu; Chen, Apeng; Qiao, Zhenzhen; Liu, Shaorong; Lu, Joann J

    2018-05-15

    A comprehensive two-dimensional liquid chromatography (LCxLC) system consisting of twelve columns in the second dimension was developed for comprehensive analysis of intact proteins in complex biological samples. The system consisted of an ion-exchange column in the first dimension and twelve reverse-phase columns in the second dimension; all thirteen columns were monolithic and prepared inside 250 µm i.d. capillaries. These columns were assembled together through the use of three valves and an innovative configuration. The effluent from the first dimension was continuously fractionated and sequentially transferred into the twelve second-dimension columns, while the second-dimension separations were carried out in a series of batches (six columns per batch). This LCxLC system was tested first using standard proteins, followed by real-world samples from E. coli. Baseline separation was observed for eleven standard proteins, and hundreds of peaks were observed for the real-world sample analysis. Two-dimensional liquid chromatography, often considered an effective tool for mapping proteins, is seen as laborious and time-consuming when configured offline. Our online LCxLC system with an increased number of second-dimension columns promises to provide a solution to overcome these hindrances. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Contributions to reversed-phase column selectivity: III. Column hydrogen-bond basicity.

    Science.gov (United States)

    Carr, P W; Dolan, J W; Dorsey, J G; Snyder, L R; Kirkland, J J

    2015-05-22

    Column selectivity in reversed-phase chromatography (RPC) can be described in terms of the hydrophobic-subtraction model, which recognizes five solute-column interactions that together determine solute retention and column selectivity: hydrophobic, steric, hydrogen bonding of an acceptor solute (i.e., a hydrogen-bond base) by a stationary-phase donor group (i.e., a silanol), hydrogen bonding of a donor solute (e.g., a carboxylic acid) by a stationary-phase acceptor group, and ionic. Of these five interactions, hydrogen bonding between donor solutes (acids) and stationary-phase acceptor groups is the least well understood; the present study aims at resolving this uncertainty, so far as possible. Previous work suggests that there are three distinct stationary-phase sites for hydrogen-bond interaction with carboxylic acids, which we will refer to as column basicity I, II, and III. All RPC columns exhibit a selective retention of carboxylic acids (column basicity I) in varying degree. This now appears to involve an interaction of the solute with a pair of vicinal silanols in the stationary phase. For some type-A columns, an additional basic site (column basicity II) is similar to that for column basicity I in primarily affecting the retention of carboxylic acids. The latter site appears to be associated with metal contamination of the silica. Finally, for embedded-polar-group (EPG) columns, the polar group can serve as a proton acceptor (column basicity III) for acids, phenols, and other donor solutes. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. A stochastic view on column efficiency.

    Science.gov (United States)

    Gritti, Fabrice

    2018-03-09

    A stochastic model of transcolumn eddy dispersion along packed beds was derived. It was based on the calculation of the mean travel time of a single analyte molecule from one radial position to another. The exchange mechanism between two radial positions was governed by the transverse dispersion of the analyte across the column. The radial velocity distribution was obtained by flow simulations in a focused-ion-beam scanning electron microscopy (FIB-SEM) based 3D reconstruction from a 2.1 mm × 50 mm column packed with 2 μm BEH-C18 particles. Accordingly, the packed bed was divided into three coaxial and uniform zones: (1) a 1.4 particle diameter wide, ordered, and loose packing at the column wall (velocity u_w), (2) an intermediate 130 μm wide, random, and dense packing (velocity u_i), and (3) the bulk packing in the center of the column (velocity u_c). First, the validity of this proposed stochastic model was tested by adjusting the predicted to the observed reduced van Deemter plots of a 2.1 mm × 50 mm column packed with 2 μm BEH-C18 fully porous particles (FPPs). An excellent agreement was found for u_i = 0.93u_c, a result fully consistent with the FIB-SEM observation (u_i = 0.95u_c). Next, the model was used to measure u_i = 0.94u_c for a 2.1 mm × 100 mm column packed with 1.6 μm Cortecs-C18 superficially porous particles (SPPs). The relative velocity bias across columns packed with SPPs is then barely smaller than that observed in columns packed with FPPs (+6% versus +7%). u_w = 1.8u_i is measured for a 75 μm × 1 m capillary column packed with 2 μm BEH-C18 particles. Despite this large wall-to-center velocity bias (+80%), the presence of the thin and ordered wall packing layer has no negative impact on the kinetic performance of capillary columns. Finally, the stochastic model of long-range eddy dispersion explains why analytical (2.1-4.6 mm i.d.) and capillary (...) columns can all be

  7. Vertebral Column Resection for Rigid Spinal Deformity.

    Science.gov (United States)

    Saifi, Comron; Laratta, Joseph L; Petridis, Petros; Shillingford, Jamal N; Lehman, Ronald A; Lenke, Lawrence G

    2017-05-01

    Broad narrative review. To review the evolution, operative technique, outcomes, and complications associated with posterior vertebral column resection. A literature review of posterior vertebral column resection was performed. The authors' surgical technique is outlined in detail. The authors' experience and the literature regarding vertebral column resection are discussed at length. Treatment of severe, rigid coronal and/or sagittal malalignment with posterior vertebral column resection results in approximately 50-70% correction, depending on the type of deformity. Surgical site infection rates range from 2.9% to 9.7%. Transient and permanent neurologic injury rates range from 0% to 13.8% and 0% to 6.3%, respectively. Although there are significant variations in estimated blood loss (EBL) throughout the literature, it can be minimized by utilizing tranexamic acid intraoperatively. The ability to correct a rigid deformity in the spine relies on osteotomies. Each osteotomy is associated with a particular magnitude of correction at a single level. Posterior vertebral column resection is the most powerful posterior osteotomy method, providing successful correction of fixed complex deformities. Despite meticulous surgical technique and precision, this robust osteotomy technique can be associated with significant morbidity even in the most experienced hands.

  8. Effect of backmixing on pulse column performance

    International Nuclear Information System (INIS)

    Miao, Y.W.

    1979-05-01

    A critical survey of the published literature concerning dispersed-phase holdup and longitudinal mixing in pulsed sieve-plate extraction columns has been made to assess the present state of the art in predicting these two parameters, both of which are of critical importance in the development of an accurate mathematical model of the pulse column. Although there are many conflicting correlations of these variables as functions of column geometry, operating conditions, and physical properties of the liquid systems involved, it has been possible to develop new correlations which appear to be useful and which are consistent with much of the available data over the limited range of variables most likely to be encountered in plant-sized equipment. The correlations developed were used in a stagewise model of the pulse column to predict product concentrations, solute inventory, and concentration profiles in a column for which limited experimental data were available. Reasonable agreement was obtained between the mathematical model and the experimental data. Complete agreement, however, can only be obtained after a correlation for the extraction efficiency has been developed. The correlation of extraction efficiency was beyond the scope of this work.

  9. The relation between the column density structures and the magnetic field orientation in the Vela C molecular complex

    Science.gov (United States)

    Soler, J. D.; Ade, P. A. R.; Angilè, F. E.; Ashton, P.; Benton, S. J.; Devlin, M. J.; Dober, B.; Fissel, L. M.; Fukui, Y.; Galitzki, N.; Gandilo, N. N.; Hennebelle, P.; Klein, J.; Li, Z.-Y.; Korotkov, A. L.; Martin, P. G.; Matthews, T. G.; Moncelsi, L.; Netterfield, C. B.; Novak, G.; Pascale, E.; Poidevin, F.; Santos, F. P.; Savini, G.; Scott, D.; Shariff, J. A.; Thomas, N. E.; Tucker, C. E.; Tucker, G. S.; Ward-Thompson, D.

    2017-07-01

    We statistically evaluated the relative orientation between gas column density structures, inferred from Herschel submillimetre observations, and the magnetic field projected on the plane of sky, inferred from polarized thermal emission of Galactic dust observed by the Balloon-borne Large-Aperture Submillimetre Telescope for Polarimetry (BLASTPol) at 250, 350, and 500 μm, towards the Vela C molecular complex. First, we find very good agreement between the polarization orientations in the three wavelength bands, suggesting that, at the considered common angular resolution of 3.0 arcmin, which corresponds to a physical scale of approximately 0.61 pc, the inferred magnetic field orientation is not significantly affected by temperature or dust grain alignment effects. Second, we find that the relative orientation between gas column density structures and the magnetic field changes progressively with increasing gas column density, from mostly parallel or having no preferred orientation at low column densities to mostly perpendicular at the highest column densities. This observation is in agreement with previous studies by the Planck collaboration towards more nearby molecular clouds. Finally, we find a correspondence between (a) the trends in relative orientation between the column density structures and the projected magnetic field; and (b) the shape of the column density probability distribution functions (PDFs). In the sub-regions of Vela C dominated by one clear filamentary structure, or "ridges", where the high-column density tails of the PDFs are flatter, we find a sharp transition from preferentially parallel or having no preferred relative orientation at low column densities to preferentially perpendicular at the highest column densities. In the sub-regions of Vela C dominated by several filamentary structures with multiple orientations, or "nests", where the maximum values of the column density are smaller than in the ridge-like sub-regions and the high-column density ...
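
    The relative-orientation statistic used above can be sketched as follows: the tangent of an iso-column-density contour is perpendicular to the gradient of the column density map, and the per-pixel angle between that tangent and the plane-of-sky field orientation is histogrammed. Both input maps below are random stand-ins, and the details of the published analysis (weighting, binning, noise masking) are omitted.

        import numpy as np

        rng = np.random.default_rng(0)
        N = rng.lognormal(sigma=0.5, size=(128, 128))         # toy column density
        psi = rng.uniform(-np.pi / 2, np.pi / 2, (128, 128))  # toy field angles

        gy, gx = np.gradient(N)
        contour = np.arctan2(gy, gx) + np.pi / 2    # contours are perp. to grad N
        phi = contour - psi
        phi = np.arctan2(np.sin(phi), np.cos(phi))  # wrap to (-pi, pi]
        phi[phi > np.pi / 2] -= np.pi               # fold orientations modulo pi
        phi[phi < -np.pi / 2] += np.pi

        counts, _ = np.histogram(phi, bins=25, range=(-np.pi / 2, np.pi / 2))
        print("relative-orientation histogram (center bin = parallel):", counts)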

  10. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is an urgent question today. Legalization of parallel import in Russia is expedient; this statement is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimize them.

  11. Harmonic maps of the bounded symmetric domains

    International Nuclear Information System (INIS)

    Xin, Y.L.

    1994-06-01

    A shrinking property of harmonic maps into R_IV(2) is proved, which is used to classify complete spacelike surfaces of parallel mean curvature in R^4_2 with a reasonable condition on the Gauss image. Liouville-type theorems for harmonic maps from higher-dimensional bounded symmetric domains are also established. (author). 25 refs

  12. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  13. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  14. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.
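
    The decomposition strategy named in both records above -- dividing the plane-wave components of each state among the processors -- can be pictured with a small single-process sketch in which plain array slices stand in for the per-processor data and for the message-passing reduction. Sizes are arbitrary and nothing here reflects the actual FLAPW code.

        import numpy as np

        n_pw, n_states, n_proc = 10_000, 8, 4
        coeffs = np.random.rand(n_states, n_pw)     # plane-wave coefficients

        # each "processor" owns a slice of the plane-wave index
        slices = np.array_split(np.arange(n_pw), n_proc)

        # local partial overlaps, then a global reduction -- the pattern that
        # turns per-state work into independent, distributable pieces
        partial = [coeffs[:, s] @ coeffs[:, s].T for s in slices]
        overlap = np.sum(partial, axis=0)

        assert np.allclose(overlap, coeffs @ coeffs.T)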

  15. Mathematical modeling of alcohol distillation columns

    Directory of Open Access Journals (Sweden)

    Ones Osney Pérez

    2011-04-01

    Full Text Available New evaluation modules are proposed to extend the scope of a modular simulator oriented to the sugar cane industry, called STA 4.0, so that it can be used to carry out calculations and analysis in ethanol distilleries. Calculation modules were developed for the simulation of the columns that are combined in the distillation area. The mathematical models were based on material and energy balances, equilibrium relations, and thermodynamic properties of the ethanol-water system. The Ponchon-Savarit method was used for the evaluation of the theoretical stages in the columns. A comparison between the results obtained using the Ponchon-Savarit method and those obtained applying the McCabe-Thiele method was made for a distillation column. These calculation modules for ethanol distilleries were applied to a real case for validation.
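
    For reference, the McCabe-Thiele stage count used in the comparison above can be stepped numerically. The sketch below assumes a constant relative volatility for the equilibrium curve, which is a crude approximation for the non-ideal ethanol-water system (it ignores the azeotrope); all specifications are invented for illustration, not taken from the paper.

        alpha = 2.3                      # assumed constant relative volatility
        xD, xB, R = 0.80, 0.05, 3.0      # distillate, bottoms, reflux ratio
        xF = 0.40                        # feed composition (saturated liquid)

        def x_eq(y):
            # liquid in equilibrium with vapor y, from y = a*x / (1 + (a-1)*x)
            return y / (alpha - y * (alpha - 1))

        # operating lines: rectifying above the feed, stripping below it
        y_int = R / (R + 1) * xF + xD / (R + 1)
        m_strip = (y_int - xB) / (xF - xB)

        x, y, stages = xD, xD, 0
        while x > xB and stages < 50:
            x = x_eq(y)                  # horizontal step to equilibrium curve
            stages += 1
            if x > xF:                   # vertical step down an operating line
                y = R / (R + 1) * x + xD / (R + 1)
            else:
                y = xB + m_strip * (x - xB)

        print("theoretical stages (including reboiler):", stages)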

  16. Inert carriers for column extraction chromatography

    International Nuclear Information System (INIS)

    Katykhin, G.S.

    1978-01-01

    Inert carriers used in column extraction chromatography are reviewed. Such carriers are divided into two large groups: hydrophilic carriers, which possess high surface energy and are well wetted only by strongly polar liquids (kieselguhrs, silica gels, glasses, cellulose, Al2O3), and water-repellent carriers, which possess low surface energy and are well wetted by various organic solvents (polyethylene, polytetrafluoroethylene, polytrifluorochloroethylene). Properties of the various carriers are presented: structure, chemical and radiation stability, adsorption properties, and capacity for the extracting agent. The effect of the structure and size of particles on the efficiency of chromatography columns is considered, as are ways of depositing the stationary phase on the carrier and regenerating the carrier. Peculiarities of column packing for preparative and continuous chromatography are discussed.

  17. Computational analysis of ozonation in bubble columns

    International Nuclear Information System (INIS)

    Quinones-Bolanos, E.; Zhou, H.; Otten, L.

    2002-01-01

    This paper presents a new computational ozonation model, based on the principles of computational fluid dynamics along with the kinetics of ozone decay and microbial inactivation, to predict the performance of ozone disinfection in fine bubble columns. The model couples a two-phase mixture flow model, which simulates the hydrodynamics of the water flow, with two transport equations that track the concentration profiles of ozone and microorganisms along the height of the column. The applicability of this model was then demonstrated by comparing the simulated ozone concentrations with experimental measurements obtained from a pilot scale fine bubble column. One distinct advantage of this approach is that it predicts the performance of disinfection contactors without prerequisite assumptions such as plug flow, perfect mixing, tanks-in-series, or uniform radial or longitudinal dispersion, and without carrying out expensive and tedious tracer studies. (author)
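
    A drastically simplified, one-dimensional steady-state version of the two transport equations mentioned above can illustrate the coupling (the actual model resolves them on a CFD two-phase flow field; all rate constants below are assumed, illustrative values):

        import numpy as np

        u  = 0.05    # superficial water velocity, m/s (assumed)
        kd = 1e-3    # first-order ozone decay constant, 1/s (assumed)
        ki = 0.05    # inactivation rate constant, L/(mg*s) (assumed)

        z  = np.linspace(0.0, 5.0, 501)    # height along the column, m
        dz = z[1] - z[0]
        C  = np.empty_like(z); C[0] = 2.0  # dissolved ozone, mg/L
        N  = np.empty_like(z); N[0] = 1e6  # viable microorganisms per L

        for i in range(z.size - 1):
            C[i+1] = C[i] - dz * kd * C[i] / u          # ozone self-decay
            N[i+1] = N[i] - dz * ki * C[i] * N[i] / u   # Chick-Watson inactivation

        print(f"outlet O3 {C[-1]:.2f} mg/L, log10 inactivation {np.log10(N[0]/N[-1]):.1f}")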

  18. Operation of the annular pulsed column, (2)

    International Nuclear Information System (INIS)

    Takahashi, Keiki; Tsukada, Takeshi

    1988-01-01

    The heat of reaction generated from the uranium extraction is considered to form the temperature profile inside the pulsed column. A simulation code was developed to estimate the temperature profile, considering heat generation and counter-current heat transfer. The temperature profiles calculated using this code were found to depend on both the position of the extraction zone and the operating conditions. The reported experimental result was fairly well reproduced by this simulation code. We consider that the presented simulation code is capable of providing the temperature profile in the pulsed column and is useful for monitoring the uranium extraction zone. (author)

  19. Distillation columns inspection through gamma scanning

    International Nuclear Information System (INIS)

    Garcia, Marco

    1999-09-01

    The application of nuclear energy is very wide, and it allows the saving of economic resources since the investigation of a certain process can be carried out without stopping the plant. The gamma scanning of oil cracking columns is a practical example; it allows the hydraulic operation of the inspected columns to be determined. A 22 mCi Co-60 source and a detector with a NaI(Tl) crystal are used. This paper shows the results obtained from a profile carried out on a distillation column.

  20. Performance of RC columns with partial length corrosion

    International Nuclear Information System (INIS)

    Wang Xiaohui; Liang Fayun

    2008-01-01

    Experimental and analytical studies on the load capacity of reinforced concrete (RC) columns with partial length corrosion, where only a fraction of the column length is corroded, are presented. Twelve simply supported columns were eccentrically loaded. The primary variables were partial length corrosion in the tensile or compressive zone and the corrosion level within this length. Failure of a corroded column occurs within the corroded length, mainly developing from, located near, or merging with the longitudinal corrosion cracks. For an RC column with large eccentricity, the load capacity is mainly influenced by partial length corrosion in the tensile zone, while for an RC column with small eccentricity, the load capacity greatly decreases due to partial length corrosion in the compressive zone. This great reduction of load capacity results from the destruction of the longitudinal mechanical integrity of the column within the corroded length.

  1. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation, used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
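
    The per-cycle rendezvous described above can be sketched with mpi4py: each rank transports its share of histories, then every rank must stop to assemble the global fission source and the cycle k-estimate before the next cycle can start. The transport itself is replaced by stand-in values; everything here is illustrative.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD

        def run_cycle(local_sites):
            # ... transport this rank's share of histories (stand-in values below) ...
            new_sites = np.random.rand(len(local_sites), 3)  # fission sites produced locally
            local_k = np.random.normal(1.0, 0.01)            # this rank's cycle k estimate

            # The rendezvous point: no rank proceeds until all have arrived.
            k_cycle = comm.allreduce(local_k, op=MPI.SUM) / comm.Get_size()
            full_source = np.concatenate(comm.allgather(new_sites))  # next cycle's source
            return k_cycle, full_source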

  2. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation, used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  3. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. Performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
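
    A sequential sketch of the barrel-sort idea: keys are first routed into coarse "barrels" by key range (on a real machine, one barrel per processor, filled via message passing), and each barrel is then sorted locally. The routing policy below is illustrative, not the paper's exact scheme.

        def barrel_sort(keys, n_barrels, key_max):
            barrels = [[] for _ in range(n_barrels)]
            width = key_max // n_barrels + 1
            for k in keys:
                barrels[k // width].append(k)   # the "communication" phase
            out = []
            for b in barrels:
                out.extend(sorted(b))           # local sort inside each barrel
            return out

        print(barrel_sort([5, 99, 42, 7, 63, 0], n_barrels=4, key_max=100))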

  4. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
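
    A minimal sketch of the template idea, assuming fixed-size blocks and SHA-1 checksums (both illustrative choices, not necessarily those of the patent): checksum each block of a node's checkpoint against the stored template and ship only the blocks that differ.

        import hashlib

        BLOCK = 64 * 1024   # block size, assumed

        def changed_blocks(checkpoint: bytes, template_sums: list[bytes]):
            """Yield (index, block) for blocks whose checksum differs from the template."""
            for i in range(0, len(checkpoint), BLOCK):
                block = checkpoint[i:i + BLOCK]
                digest = hashlib.sha1(block).digest()
                j = i // BLOCK
                if j >= len(template_sums) or template_sums[j] != digest:
                    yield j, block   # only these need to be transmitted and stored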

  5. Scalability of pre-packed preparative chromatography columns with different diameters and lengths taking into account extra column effects.

    Science.gov (United States)

    Schweiger, Susanne; Jungbauer, Alois

    2018-02-16

    Small pre-packed columns are commonly used to estimate the optimum run parameters for pilot and production scale. The question arises whether the results obtained with these columns are scalable, because there are substantial changes in extra column volume when going from a very small scale to a benchtop column. In this study we demonstrate the scalability of pre-packed disposable and non-disposable columns with volumes in the range of 0.2-20 ml, packed with various media, using superficial velocities in the range of 30-500 cm/h. We found that the relative contribution of extra column band broadening to total band broadening was high not only for columns with small diameters, but also for columns with a larger volume, due to their wider diameter. The extra column band broadening can be more than 50% for columns with volumes larger than 10 ml. An increase in column diameter leads to high additional extra column band broadening in the filter, frits, and adapters of the columns. We found a linear relationship between intra column band broadening and column length, which increased stepwise with increases in column diameter. This effect was also corroborated by CFD simulation. The intra column band broadening was the same for columns packed with different media. An empirical engineering equation and the data gained from the extra column effects allowed us to predict the intra, extra, and total column band broadening just from column length, diameter, and flow rate. Copyright © 2018 The Author(s). Published by Elsevier B.V. All rights reserved.
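
    Peak-broadening contributions combine as variances, which is how the extra-column share quoted above can be computed; the numbers below are illustrative, not the paper's data:

        # Band-broadening contributions add as variances.
        sigma2_extra = 40.0   # extra-column variance, uL^2 (filters, frits, tubing; assumed)
        sigma2_intra = 35.0   # intra-column variance, uL^2 (assumed)
        sigma2_total = sigma2_extra + sigma2_intra

        share = sigma2_extra / sigma2_total
        print(f"extra-column contribution: {share:.0%}")   # >50%, as reported for >10 ml columns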

  6. The MAPS based PXL vertex detector for the STAR experiment

    Science.gov (United States)

    Contin, G.; Anderssen, E.; Greiner, L.; Schambach, J.; Silber, J.; Stezelberger, T.; Sun, X.; Szelezniak, M.; Vu, C.; Wieman, H.; Woodmansee, S.

    2015-03-01

    The Heavy Flavor Tracker (HFT) was installed in the STAR experiment for the 2014 heavy ion run of RHIC. Designed to improve the vertex resolution and extend the measurement capabilities in the heavy flavor domain, the HFT is composed of three different silicon detectors based on CMOS monolithic active pixels (MAPS), pads and strips respectively, arranged in four concentric cylinders close to the STAR interaction point. The two innermost HFT layers are placed at a radius of 2.7 and 8 cm from the beam line, respectively, and accommodate 400 ultra-thin (50 μm) high resolution MAPS sensors arranged in 10-sensor ladders to cover a total silicon area of 0.16 m2. Each sensor includes a pixel array of 928 rows and 960 columns with a 20.7 μm pixel pitch, providing a sensitive area of ~ 3.8 cm2. The architecture is based on a column parallel readout with amplification and correlated double sampling inside each pixel. Each column is terminated with a high precision discriminator, is read out in a rolling shutter mode, and the output is processed through an integrated zero suppression logic. The results are stored in two SRAMs with a ping-pong arrangement for continuous readout. The sensor features a 185.6 μs readout time and 170 mW/cm2 power dissipation. The detector is air-cooled, allowing a global material budget as low as 0.39% on the inner layer. A novel mechanical approach to detector insertion enables effective installation and integration of the pixel layers within an 8 hour shift during the on-going STAR run. In addition to a detailed description of the detector characteristics, the experience of the first months of data taking will be presented in this paper, with a particular focus on sensor threshold calibration, latch-up protection procedures and general system operations aimed at stabilizing the running conditions. Issues faced during the 2014 run will be discussed together with the implemented solutions. A preliminary analysis of the detector performance
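
    The data flow of the rolling-shutter, column-parallel readout with zero suppression can be caricatured in a few lines (pure illustration of the logic, not the sensor's hardware implementation):

        def read_frame(pixels, threshold):
            """pixels: rows x cols of sampled values; returns zero-suppressed hits."""
            hits = []
            for r, row in enumerate(pixels):      # rows selected one at a time (rolling shutter)
                for c, v in enumerate(row):       # all columns discriminated in parallel in hardware
                    if v > threshold:             # zero suppression keeps only pixels over threshold
                        hits.append((r, c))
            return hits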

  7. The MAPS based PXL vertex detector for the STAR experiment

    International Nuclear Information System (INIS)

    Contin, G.; Anderssen, E.; Greiner, L.; Silber, J.; Stezelberger, T.; Vu, C.; Wieman, H.; Woodmansee, S.; Schambach, J.; Sun, X.; Szelezniak, M.

    2015-01-01

    The Heavy Flavor Tracker (HFT) was installed in the STAR experiment for the 2014 heavy ion run of RHIC. Designed to improve the vertex resolution and extend the measurement capabilities in the heavy flavor domain, the HFT is composed of three different silicon detectors based on CMOS monolithic active pixels (MAPS), pads and strips respectively, arranged in four concentric cylinders close to the STAR interaction point. The two innermost HFT layers are placed at a radius of 2.7 and 8 cm from the beam line, respectively, and accommodate 400 ultra-thin (50 μm) high resolution MAPS sensors arranged in 10-sensor ladders to cover a total silicon area of 0.16 m2. Each sensor includes a pixel array of 928 rows and 960 columns with a 20.7 μm pixel pitch, providing a sensitive area of ∼ 3.8 cm2. The architecture is based on a column parallel readout with amplification and correlated double sampling inside each pixel. Each column is terminated with a high precision discriminator, is read out in a rolling shutter mode, and the output is processed through an integrated zero suppression logic. The results are stored in two SRAMs with a ping-pong arrangement for continuous readout. The sensor features a 185.6 μs readout time and 170 mW/cm2 power dissipation. The detector is air-cooled, allowing a global material budget as low as 0.39% on the inner layer. A novel mechanical approach to detector insertion enables effective installation and integration of the pixel layers within an 8 hour shift during the on-going STAR run. In addition to a detailed description of the detector characteristics, the experience of the first months of data taking will be presented in this paper, with a particular focus on sensor threshold calibration, latch-up protection procedures and general system operations aimed at stabilizing the running conditions. Issues faced during the 2014 run will be discussed together with the implemented solutions. A preliminary analysis of the detector

  8. Optimization and simulation of tandem column supercritical fluid chromatography separations using column back pressure as a unique parameter.

    Science.gov (United States)

    Wang, Chunlei; Tymiak, Adrienne A; Zhang, Yingru

    2014-04-15

    Tandem column supercritical fluid chromatography (SFC) has been demonstrated to be a useful technique for resolving complex mixtures by serially coupling two columns of different selectivity. The overall selectivity of a tandem column separation is the retention-time-weighted average of the selectivities of the coupled columns. Currently, method development relies merely on extensive screening and is often a hit-or-miss process. No attention is paid to independently adjusting the retention and selectivity contributions of the individual columns. In this study, we show how tandem column SFC selectivity can be optimized by changing the relative dimensions (length or inner diameter) of the coupled columns. Moreover, we apply column back pressure as a unique parameter for SFC optimization. Continuous tuning of tandem column SFC selectivity is illustrated through column back pressure adjustments of the upstream column, for the first time. In addition, we show how and why changing the coupling order of the columns can produce dramatically different separations. Using the empirical mathematical equation derived in our previous study, we also demonstrate a simulation of tandem column separations based on a single retention time measurement on each column. The simulation compares well with experimental results and correctly predicts column order and back pressure effects on the separations. Finally, considerations on instrument and column hardware requirements are discussed.
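
    The stated rule, overall selectivity as a retention-time-weighted average of the individual column selectivities, is direct to compute; the values below are illustrative:

        def tandem_selectivity(alpha1, t1, alpha2, t2):
            """alpha_i: selectivity on column i; t_i: retention time spent on column i."""
            return (alpha1 * t1 + alpha2 * t2) / (t1 + t2)

        # Swapping column order or changing back pressure shifts t1/t2, which is
        # why coupling order and pressure tune the separation:
        print(tandem_selectivity(1.8, 4.0, 1.1, 9.0))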

  9. SU-F-J-146: Experimental Validation of 6 MV Photon PDD in Parallel Magnetic Field Calculated by EGSnrc

    Energy Technology Data Exchange (ETDEWEB)

    Ghila, A; Steciw, S; Fallone, B; Rathee, S [Cross Cancer Institute, Edmonton, AB (Canada)

    2016-06-15

    Purpose: Integrated linac-MR systems are uniquely suited for real time tumor tracking during radiation treatment. Understanding the magnetic field dose effects and incorporating them in treatment planning is paramount for linac-MR clinical implementation. We experimentally validated the EGSnrc dose calculations in the presence of a magnetic field parallel to the direction of radiation beam travel. Methods: Two cylindrical bore electromagnets produced a 0.21 T magnetic field parallel to the central axis of a 6 MV photon beam. A parallel plate ion chamber was used to measure the PDD in a polystyrene phantom, placed inside the bore in two setups: phantom top surface coinciding with the magnet bore center (183 cm SSD), and with the magnet bore's top surface (170 cm SSD). We measured the field of the magnet at several points and included the exact dimensions of the coils to generate a 3D magnetic field map in a finite element model. BEAMnrc and DOSXYZnrc simulated the PDD experiments in a parallel magnetic field (i.e. with the 3D magnetic field included) and with no magnetic field. Results: With the phantom surface at the top of the electromagnet, the surface dose increased by 10% (compared to no magnetic field), due to electrons being focused by the smaller fringe fields of the electromagnet. With the phantom surface at the bore center, the surface dose increased by 30%, since an extra 13 cm of air column was in a relatively higher magnetic field (>0.13 T) in the magnet bore. The EGSnrc Monte Carlo code correctly calculated the radiation dose with and without the magnetic field, and all points passed the 2%, 2 mm gamma criterion when the ion chamber's entrance window and air cavity were included in the simulated phantom. Conclusion: A parallel magnetic field increases the surface and buildup dose during irradiation. The EGSnrc package can model these magnetic field dose effects accurately. Dr. Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi
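
    A one-dimensional sketch of the 2%/2 mm gamma-index test mentioned in the results (global normalization; the curves and tolerances below are stand-ins, not the measured data):

        import numpy as np

        def gamma_1d(z, d_meas, d_calc, dd=0.02, dta=2.0):
            """Return the gamma value at each measurement point (global 2%/2 mm)."""
            dmax = d_calc.max()
            g = np.empty_like(d_meas)
            for i, (zi, di) in enumerate(zip(z, d_meas)):
                dist = (z - zi) / dta                # distance-to-agreement term
                dose = (d_calc - di) / (dd * dmax)   # dose-difference term
                g[i] = np.sqrt(dist**2 + dose**2).min()
            return g

        z = np.linspace(0, 100, 201)                   # depth in mm
        pdd = 100 * np.exp(-0.005 * z)                 # stand-in measured curve
        calc = pdd * (1 + np.random.normal(0, 0.003, z.size))
        print((gamma_1d(z, pdd, calc) <= 1).all())     # True -> all points pass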

  10. A New, Large-scale Map of Interstellar Reddening Derived from H I Emission

    Science.gov (United States)

    Lenz, Daniel; Hensley, Brandon S.; Doré, Olivier

    2017-09-01

    We present a new map of interstellar reddening, covering the 39% of the sky with low H I column densities (N_HI < 4 × 10^20 cm^-2). Our map is consistent with that of Peek and Graves, which is based on observed reddening toward passive galaxies. We therefore argue that our H I-based map provides the most accurate interstellar reddening estimates in the low-column-density regime to date. Our reddening map is made publicly available at doi.org/10.7910/DVN/AFJNWJ.

  11. A privacy-preserving parallel and homomorphic encryption scheme

    Directory of Open Access Journals (Sweden)

    Min Zhaoe

    2017-04-01

    Full Text Available In order to protect data privacy whilst allowing efficient access to data in multi-node cloud environments, a parallel homomorphic encryption (PHE) scheme is proposed based on the additive homomorphism of the Paillier encryption algorithm. In this paper we propose a PHE algorithm, in which the plaintext is divided into several blocks and the blocks are encrypted in parallel. Experimental results demonstrate that the encryption algorithm can reach a speed-up ratio of about 7.1 in a MapReduce environment with 16 cores and 4 nodes.
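
    A process-pool sketch of the block-parallel idea, using the third-party phe Paillier library (assumed available; the block size, key length, and pool size are illustrative, and the paper's scheme runs on MapReduce rather than a single machine):

        from functools import partial
        from multiprocessing import Pool
        from phe import paillier   # third-party Paillier implementation, assumed installed

        def encrypt_block(pub, block):
            return [pub.encrypt(x) for x in block]

        if __name__ == "__main__":
            pub, priv = paillier.generate_paillier_keypair(n_length=1024)
            plaintext = list(range(1000))
            blocks = [plaintext[i:i + 250] for i in range(0, len(plaintext), 250)]
            with Pool(4) as pool:   # each block is encrypted by a separate process
                cipher_blocks = pool.map(partial(encrypt_block, pub), blocks)
            # The additive homomorphism survives the block split:
            total = pub.encrypt(0)
            for c in (c for blk in cipher_blocks for c in blk):
                total = total + c
            assert priv.decrypt(total) == sum(plaintext)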

  12. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  13. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem of equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests.

  14. In situ quantitative characterisation of the ocean water column using acoustic multibeam backscatter data

    Science.gov (United States)

    Lamarche, G.; Le Gonidec, Y.; Lucieer, V.; Lurton, X.; Greinert, J.; Dupré, S.; Nau, A.; Heffron, E.; Roche, M.; Ladroit, Y.; Urban, P.

    2017-12-01

    Detecting liquid, solid or gaseous features in the ocean is generating considerable interest in the geoscience community, because of their potentially high economic value (oil & gas, mining), their significance for environmental management (oil/gas leakage, biodiversity mapping, greenhouse gas monitoring) as well as their potential cultural and traditional values (food, freshwater). Enhancing people's capability to quantify and manage the natural capital present in the ocean water goes hand in hand with the development of marine acoustic technology, as marine echosounders provide the most reliable and technologically advanced means to develop quantitative studies of water column backscatter data. This capability is not developed to its full potential because of (i) the complexity of the physics involved in relation to the constantly changing marine environment, and (ii) the rapid technological evolution of high resolution multibeam echosounder (MBES) water-column imaging systems. The Water Column Imaging Working Group is working on a series of multibeam echosounder (MBES) water column datasets acquired in a variety of environments, using a range of frequencies, and imaging a number of water-column features such as gas seeps, oil leaks, suspended particulate matter, vegetation and freshwater springs. Access to data from different acoustic frequencies and ocean dynamics enables us to discuss and test multifrequency approaches, which are the most promising means to develop a quantitative analysis of the physical properties of acoustic scatterers, providing rigorous cross calibration of the acoustic devices. In addition, the high redundancy of multibeam data, such as is available for some datasets, will allow us to develop data processing techniques leading to quantitative estimates of water column gas seeps. Each of the datasets has supporting ground-truthing data (underwater videos and photos, physical oceanography measurements) which provide information on the origin and

  15. The shapes of column density PDFs. The importance of the last closed contour

    Science.gov (United States)

    Alves, João; Lombardi, Marco; Lada, Charles J.

    2017-10-01

    The probability distribution function of column density (PDF) has become the tool of choice for cloud structure analysis and star formation studies. Its simplicity is attractive, and the PDF could offer access to cloud physical parameters otherwise difficult to measure, but there has been some confusion in the literature on the definition of its completeness limit and shape at the low column density end. In this letter we use the natural definition of the completeness limit of a column density PDF, the last closed column density contour inside a surveyed region, and apply it to a set of large-scale maps of nearby molecular clouds. We conclude that there is no observational evidence for log-normal PDFs in these objects. We find that all studied molecular clouds have PDFs well described by power laws, including the diffuse cloud Polaris. Our results call for a new physical interpretation of the shape of the column density PDFs. We find that the slope of a cloud PDF is invariant to distance but not to the spatial arrangement of cloud material, and as such it is still a useful tool for investigating cloud structure.
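
    The completeness definition used above translates directly into code: a contour level is closed inside the surveyed region exactly when it exceeds every value on the region's boundary, so the last closed contour is the maximum column density found on that boundary. A sketch for a map with NaNs outside the survey (assuming 4-connectivity on a rectangular grid):

        import numpy as np

        def last_closed_contour(colden):
            """Lowest column-density contour that stays closed inside the survey."""
            m = np.isfinite(colden)               # surveyed pixels (NaN outside)
            pad = np.pad(m, 1, constant_values=False)
            interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                        pad[1:-1, :-2] & pad[1:-1, 2:]) & m
            edge = m & ~interior                  # pixels on the survey boundary
            return colden[edge].max()             # any level above this is closed

        # The PDF is then complete only for values above this level, e.g.:
        # counts, bins = np.histogram(colden[colden > last_closed_contour(colden)], bins=100)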

  16. Influence of pressure on the properties of chromatographic columns. II. The column hold-up volume.

    Science.gov (United States)

    Gritti, Fabrice; Martin, Michel; Guiochon, Georges

    2005-04-08

    The effect of the local pressure and of the average column pressure on the column hold-up volume was investigated between 1 and 400 bar, from a theoretical and an experimental point of view. Calculations based upon the elasticity of the solids involved (column wall and packing material) and the compressibility of the liquid phase show that the observed increase of the column hold-up volume with increasing pressure is correlated with (in order of decreasing importance): (1) the compressibility of the mobile phase (+1 to 5%); (2) in RPLC, the compressibility of the C18-bonded layer on the surface of the silica (+0.5 to 1%); and (3) the expansion of the column tube. Experiments were carried out with columns packed with the pure Resolve silica (0% carbon), the derivatized Resolve-C18 (10% carbon) and the Symmetry-C18 (20% carbon) adsorbents, using water, methanol, or n-pentane as the mobile phase. These solvents have different compressibilities. However, 1% of the relative increase of the column hold-up volume that was observed when the pressure was raised is not accounted for by the compressibilities of either the solvent or the C18-bonded phase. It is due to the influence of the pressure on the retention behavior of thiourea, the compound used as tracer to measure the hold-up volume.
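
    The dominant contribution, item (1), can be estimated to order of magnitude from isothermal compressibilities alone; the sketch below uses textbook compressibility values and an assumed average pressure rise, and lands in the +1 to 5% range quoted above:

        # Relative hold-up increase ~ isothermal compressibility x average pressure rise.
        kappa = {"water": 4.6e-5, "methanol": 12.0e-5, "n-pentane": 21.0e-5}  # 1/bar, approximate
        dp_avg = 200.0   # average column pressure above ambient, bar (assumed)

        for solvent, k in kappa.items():
            print(f"{solvent:10s} ~{100 * k * dp_avg:.1f}% hold-up increase")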

  17. Pulsing flow in trickle bed columns

    NARCIS (Netherlands)

    Blok, Jan Rudolf

    1981-01-01

    In the operation of a packed column with cocurrent downflow of gas and liquid (trickle bed), several flow patterns can be observed, depending on the degree of interaction between gas and liquid. At low liquid and gas flow rates - low interaction - gas-continuous flow occurs. In this flow regime, the

  18. Revive your columns with cyclic distillation

    NARCIS (Netherlands)

    Kiss, Anton A.; Bîldea, Costin Sorin

    2015-01-01

    The process intensification (PI) technique involves changing a tower's internals and operating mode and the separate movement of the liquid and vapor phases. This can significantly increase column throughput and reduce energy requirements, while improving separation performance. PI is a set of

  19. Robust Geometric Control of a Distillation Column

    DEFF Research Database (Denmark)

    Kymmel, Mogens; Andersen, Henrik Weisberg

    1987-01-01

    A frequency domain method, which makes it possible to adjust multivariable controllers with respect to both nominal performance and robustness, is presented. The basic idea in the approach is that the designer assigns objectives such as steady-state tracking, maximum resonance peaks, and bandwidth. The method is used to examine and improve geometric control of a binary distillation column.

  20. On Row Rank Equal Column Rank

    Science.gov (United States)

    Khalili, Parviz

    2009-01-01

    We will prove a well-known theorem in Linear Algebra, that is, for any "m x n" matrix the dimension of row space and column space are the same. The proof is based on the subject of "elementary matrices" and "reduced row-echelon" form of a matrix.

  1. On Stability of a Bubble Column

    Czech Academy of Sciences Publication Activity Database

    Růžička, Marek

    2013-01-01

    Vol. 91, No. 2 (2013), pp. 191-203 ISSN 0263-8762 R&D Projects: GA ČR GA104/07/1110 Institutional support: RVO:67985858 Keywords: bubble column * flow regimes * steady solution Subject RIV: CI - Industrial Chemistry, Chemical Engineering Impact factor: 2.281, year: 2013

  2. Thermal Analysis of LANL Ion Exchange Column

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1999-01-01

    This document reports results from an ion exchange column heat transfer analysis requested by Los Alamos National Laboratory (LANL). The object of the analysis is to demonstrate that the decay heat from the Pu-238 will not cause resin bed temperatures to increase to a level where the resin significantly degrades.

  3. Column Stores as an IR Prototyping Tool

    NARCIS (Netherlands)

    H.F. Mühleisen (Hannes); T. Samar (Thaer); J.J.P. Lin (Jimmy); A.P. de Vries (Arjen)

    2014-01-01

    We make the suggestion that instead of implementing custom index structures and query evaluation algorithms, IR researchers should simply store document representations in a column-oriented relational database and write ranking models using SQL. For rapid prototyping, this is

  4. Topographic mapping

    Science.gov (United States)

    ,

    2008-01-01

    The U.S. Geological Survey (USGS) produced its first topographic map in 1879, the same year it was established. Today, more than 100 years and millions of map copies later, topographic mapping is still a central activity for the USGS. The topographic map remains an indispensable tool for government, science, industry, and leisure. Much has changed since early topographers traveled the unsettled West and carefully plotted the first USGS maps by hand. Advances in survey techniques, instrumentation, and design and printing technologies, as well as the use of aerial photography and satellite data, have dramatically improved mapping coverage, accuracy, and efficiency. Yet cartography, the art and science of mapping, may never before have undergone change more profound than today.

  5. Implementation of multidimensional databases in column-oriented NoSQL systems

    OpenAIRE

    Chevalier, Max; El Malki, Mohammed; Kopliku, Arlind; Teste, Olivier; Tournier, Ronan

    2015-01-01

    NoSQL (Not Only SQL) systems are becoming popular due to known advantages such as horizontal scalability and elasticity. In this paper, we study the implementation of multidimensional data warehouses with column-oriented NoSQL systems. We define mapping rules that transform the conceptual multidimensional data model into logical column-oriented models. We consider three different logical models and we use them to instantiate data warehouses. We focus on data loading, mode...

  6. Single column and two-column H-D-T distillation experiments at TSTA

    International Nuclear Information System (INIS)

    Yamanishi, T.; Yoshida, H.; Hirata, S.; Naito, T.; Naruse, Y.; Sherman, R.H.; Bartlit, J.R.; Anderson, J.L.

    1988-01-01

    Cryogenic distillation experiments were performed at TSTA with the H-D-T system using a single column and a two-column cascade. In the single column experiment, fundamental engineering data such as the liquid holdup and the HETP were measured under a variety of operational conditions. The liquid holdup in the packed section was about 10-15% of its superficial volume. The HETP values were from 4 to 6 cm, and increased slightly with the vapor velocity. The reflux ratio had no effect on the HETP. For the two-column experiment, the dynamic behavior of the cascade was observed. 8 refs., 7 figs., 2 tabs
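
    HETP as used here is simply packed height divided by the number of theoretical stages; a sketch with the stage count taken from the Fenske total-reflux relation (the compositions and relative volatility below are assumed, illustrative values, not the TSTA data):

        import math

        def stages_fenske(x_top, x_bot, alpha):
            """Theoretical stages at total reflux from top/bottom compositions."""
            return math.log((x_top / (1 - x_top)) * ((1 - x_bot) / x_bot)) / math.log(alpha)

        def hetp(packed_height_cm, n_stages):
            return packed_height_cm / n_stages

        n = stages_fenske(0.95, 0.05, alpha=1.5)   # illustrative isotopic separation factor
        print(f"{n:.1f} stages -> HETP {hetp(100.0, n):.1f} cm")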

  7. The central column structure in SPHEX

    International Nuclear Information System (INIS)

    Duck, R.C.; French, P.A.; Browning, P.K.; Cunningham, G.; Gee, S.J.; al-Karkhy, A.; Martin, R.; Rusbridge, M.G.

    1994-01-01

    SPHEX is a gun injected spheromak in which a magnetised Marshall gun generates and maintains an approximately axisymmetric toroidal plasma within a topologically spherical flux conserving vessel. The central column has been defined as a region of high mean floating potential, ⟨φf⟩ up to ∼150 V, aligned with the geometric axis of the device. It has been suggested that this region corresponds to the open magnetic flux which is connected directly to the central electrode of the gun and links the toroidal annulus (in which ⟨φf⟩ ∼ 0 V). Poynting vector measurements have shown that the power required to drive toroidal current in the annulus is transmitted out of the column by the coherent 20 kHz mode which pervades the plasma. Measurements of the MHD dynamo in the column indicate an 'antidynamo' electric field due to correlated fluctuations in v and B at the 20 kHz mode frequency which is consistent with the time-averaged Ohm's law. On shorting the gun electrodes, the density in the column region decays rapidly, leaving a 'hole' of radius Rc ∼ 7 cm. This agrees with the estimated dimension of the open flux from mean internal B measurements and axisymmetric force-free equilibrium modelling, but is considerably smaller than the radius of ∼13 cm inferred from the time-averaged potential. In standard operating conditions the gun delivers a current of IG ∼ 60 kA at VG ∼ 500 V for ∼1 ms, driving a toroidal current of It ∼ 60 kA. Ultimately we wish to understand the mechanism which drives toroidal current in the annulus; the central column is of interest because of the crucial role it plays in this process. (author) 8 refs., 6 figs

  8. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, low self-weight/load ratio, good dynamic behavior and easy control; hence its range of application keeps extending. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limits of link length has been introduced. This paper analyses the position and orientation workspace of a six-degrees-of-freedom parallel robot. The results show that changing the lengths of the branches of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but changes its position.
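
    A sketch of the numerical workspace search described above: sample candidate platform positions and keep those whose inverse kinematics yield feasible leg lengths. The geometry is a generic Stewart-platform-style stand-in with assumed dimensions, not the paper's mechanism.

        import numpy as np

        base = np.array([[np.cos(a), np.sin(a), 0.0]
                         for a in np.linspace(0, 2 * np.pi, 6, endpoint=False)])
        plat = 0.5 * base                     # platform joints at the zero pose (assumed)
        L_MIN, L_MAX = 0.8, 1.4               # leg length limits (assumed)

        def reachable(p):
            legs = np.linalg.norm((plat + p) - base, axis=1)   # inverse kinematics
            return np.all((legs >= L_MIN) & (legs <= L_MAX))

        pts = np.random.uniform([-1, -1, 0.5], [1, 1, 1.5], (20000, 3))
        mask = np.array([reachable(p) for p in pts])
        print(mask.sum(), "reachable samples out of", len(pts))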

  9. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)

  10. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs.

  11. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  12. Event monitoring of parallel computations

    Directory of Open Access Journals (Sweden)

    Gruzlikov Alexander M.

    2015-06-01

    Full Text Available The paper considers the monitoring of parallel computations for the detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences.

  13. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  14. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  15. Uranium facilitated transport by water-dispersible colloids in field and soil columns

    Energy Technology Data Exchange (ETDEWEB)

    Crancon, P.; Pili, E. [CEA Bruyeres-le-Chatel, DIF, 91 (France); Charlet, L. [Univ Grenoble 1, Lab Geophys Interne and Tectonophys LGIT OSUG, CNRS, UJF, UMR5559, F-38041 Grenoble 9 (France)

    2010-07-01

    The transport of uranium through a sandy podsolic soil has been investigated in the field and in column experiments. Field monitoring, numerous years after surface contamination by depleted uranium deposits, revealed a 20 cm deep uranium migration in soil. Uranium retention in soil is controlled by the ≤ 50 μm mixed humic and clayey coatings in the first 40 cm, i.e. in the E horizon. Column experiments of uranium transport under various conditions were run using isotopic spiking. After 100 pore volumes of elution, 60% of the total input uranium is retained in the first 2 cm of the column. The retardation factor of uranium on E horizon material ranges from 1300 (column) to 3000 (batch). In parallel to this slow uranium migration, we experimentally observed a fast elution, related to humic colloids, of about 1-5% of the total uranium input, transferred at the mean pore-water velocity through the soil column. In order to understand the effect of rain events, the ionic strength of the input solution was sharply changed. Humic colloids are retarded when ionic strength increases, while a major mobilization of humic colloids and colloid-borne uranium occurs as ionic strength decreases. Isotopic spiking shows that both the 238U initially present in the soil column and the 233U brought by the input solution are desorbed. The mobilization process observed experimentally after a drop of ionic strength may account for a rapid uranium migration in the field after a rainfall event, and for the significant uranium concentrations found in deep soil horizons and in groundwater, 1 km downstream from the pollution source. (authors)

  16. Uranium facilitated transport by water-dispersible colloids in field and soil columns

    Energy Technology Data Exchange (ETDEWEB)

    Crancon, P., E-mail: pierre.crancon@cea.fr [CEA, DAM, DIF, F-91297 Arpajon (France); Pili, E. [CEA, DAM, DIF, F-91297 Arpajon (France); Charlet, L. [Laboratoire de Geophysique Interne et Tectonophysique (LGIT-OSUG), University of Grenoble-I, UMR5559-CNRS-UJF, BP53, 38041 Grenoble cedex 9 (France)

    2010-04-01

    The transport of uranium through a sandy podzolic soil has been investigated in the field and in column experiments. Field monitoring, numerous years after surface contamination by depleted uranium deposits, revealed a 20 cm deep uranium migration in soil. Uranium retention in soil is controlled by the < 50 μm mixed humic and clayey coatings in the first 40 cm i.e. in the E horizon. Column experiments of uranium transport under various conditions were run using isotopic spiking. After 100 pore volumes elution, 60% of the total input uranium is retained in the first 2 cm of the column. Retardation factor of uranium on E horizon material ranges from 1300 (column) to 3000 (batch). In parallel to this slow uranium migration, we experimentally observed a fast elution related to humic colloids of about 1-5% of the total-uranium input, transferred at the mean porewater velocity through the soil column. In order to understand the effect of rain events, ionic strength of the input solution was sharply changed. Humic colloids are retarded when ionic strength increases, while a major mobilization of humic colloids and colloid-borne uranium occurs as ionic strength decreases. Isotopic spiking shows that both 238U initially present in the soil column and 233U brought by input solution are desorbed. The mobilization process observed experimentally after a drop of ionic strength may account for a rapid uranium migration in the field after a rainfall event, and for the significant uranium concentrations found in deep soil horizons and in groundwater, 1 km downstream from the pollution source.

  17. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  18. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of new results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  19. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  20. NeatMap--non-clustering heat map alternatives in R.

    Science.gov (United States)

    Rajaram, Satwik; Oono, Yoshi

    2010-01-22

    The clustered heat map is the most popular means of visualizing genomic data. It compactly displays a large amount of data in an intuitive format that facilitates the detection of hidden structures and relations in the data. However, it is hampered by its use of cluster analysis which does not always respect the intrinsic relations in the data, often requiring non-standardized reordering of rows/columns to be performed post-clustering. This sometimes leads to uninformative and/or misleading conclusions. Often it is more informative to use dimension-reduction algorithms (such as Principal Component Analysis and Multi-Dimensional Scaling) which respect the topology inherent in the data. Yet, despite their proven utility in the analysis of biological data, they are not as widely used. This is at least partially due to the lack of user-friendly visualization methods with the visceral impact of the heat map. NeatMap is an R package designed to meet this need. NeatMap offers a variety of novel plots (in 2 and 3 dimensions) to be used in conjunction with these dimension-reduction techniques. Like the heat map, but unlike traditional displays of such results, it allows the entire dataset to be displayed while visualizing relations between elements. It also allows superimposition of cluster analysis results for mutual validation. NeatMap is shown to be more informative than the traditional heat map with the help of two well-known microarray datasets. NeatMap thus preserves many of the strengths of the clustered heat map while addressing some of its deficiencies. It is hoped that NeatMap will spur the adoption of non-clustering dimension-reduction algorithms.
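
    The non-clustered orderings NeatMap favors can be imitated in a few lines outside R; the following Python sketch orders rows by their angle in the first two principal-component scores before drawing an ordinary heat map (an illustration of the idea, not the package's own code):

        import numpy as np

        def pca_order(data):
            """Order rows by angle in the plane of the first two principal components."""
            x = data - data.mean(axis=0)
            u, s, vt = np.linalg.svd(x, full_matrices=False)
            scores = u[:, :2] * s[:2]                # first two PC scores per row
            return np.argsort(np.arctan2(scores[:, 1], scores[:, 0]))

        data = np.random.rand(50, 20)
        ordered = data[pca_order(data)]              # rows reordered without clustering
        # ordered can now be passed to any standard heat-map plotting routine.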

  1. Mixed Map Labeling

    Directory of Open Access Journals (Sweden)

    Maarten Löffler

    2016-12-01

    Full Text Available Point feature map labeling is a geometric visualization problem, in which a set of input points must be labeled with a set of disjoint rectangles (the bounding boxes of the label texts). It is predominantly motivated by label placement in maps but it also has other visualization applications. Typically, labeling models either use internal labels, which must touch their feature point, or external (boundary) labels, which are placed outside the input image and which are connected to their feature points by crossing-free leader lines. In this paper we study polynomial-time algorithms for maximizing the number of internal labels in a mixed labeling model that combines internal and external labels. The model requires that all leaders are parallel to a given orientation θ ∈ [0, 2π), the value of which influences the geometric properties and hence the running times of our algorithms.

  2. Column, particularly extraction column, for fission and/or breeder materials

    International Nuclear Information System (INIS)

    Vietzke, H.; Pirk, H.

    1980-01-01

    An absorber rod with a B4C insert is situated in the long extraction column for a uranyl nitrate solution or a plutonium nitrate solution. The geometrical dimensions are designed for a high throughput with little corrosion. (DG)

  3. ADVANCED DIAGNOSTIC TECHNIQUES FOR THREE-PHASE SLURRY BUBBLE COLUMN REACTORS (SBCR)

    Energy Technology Data Exchange (ETDEWEB)

    M.H. Al-Dahhan; M.P. Dudukovic; L.S. Fan

    2001-07-25

    This report summarizes the accomplishments made during the second year of this cooperative research effort between Washington University, Ohio State University and Air Products and Chemicals. The technical difficulties that were encountered in implementing Computer Automated Radioactive Particle Tracking (CARPT) in a high pressure SBCR have been successfully resolved. New strategies for data acquisition and the calibration procedure have been implemented. These have been performed as a part of other projects supported by an industrial consortium and by DOE via contract DE-2295PC95051, which are executed in parallel with this grant. CARPT and Computed Tomography (CT) experiments have been performed using air-water-glass beads in a 6 inch high pressure stainless steel slurry bubble column reactor at selected conditions. Data processing for this work is in progress. The overall gas holdup and the hydrodynamic parameters are measured by Laser Doppler Anemometry (LDA) in a 2 inch slurry bubble column using Norpar 15, which mimics at room temperature the Fischer-Tropsch wax at FT reaction conditions of high pressure and temperature. To improve the design and scale-up of bubble columns, new correlations have been developed to predict the radial gas holdup and the time-averaged axial liquid recirculation velocity profiles in bubble columns.

  4. HETP evaluation of structured packing distillation column

    Directory of Open Access Journals (Sweden)

    A. E. Orlando Jr.

    2009-09-01

    Full Text Available Several tests with a hydrocarbon mixture of known composition (C8-C14), obtained from DETEN Chemistry S.A., have been performed in a laboratory distillation column, 40 mm in nominal diameter and 2.2 m high, with internals of Sulzer DX gauze stainless steel structured packing. The main purpose of this work was to evaluate the HETP of a structured packing laboratory scale distillation column operating continuously. Six HETP correlations available in the literature were compared in order to find out which is the most appropriate for structured packing columns working with medium distillates. Prior to the experimental tests, simulation studies using the commercial software PRO/II® were performed in order to establish the optimum operational conditions for the distillation, especially concerning operating pressure, top and bottom temperatures, feed location and reflux ratio. The results of PRO/II® were very similar to the analysis of the products obtained during continuous operation, therefore permitting the use of the properties calculated by that software in the theoretical models investigated. The theoretical models chosen for HETP evaluation were: Bravo, Rocha and Fair (1985); Rocha, Bravo and Fair (1993, 1996); Brunazzi and Pagliant (1997); Carlo, Olujić and Pagliant (2006); and Olujić et al. (2004). Modifications concerning the calculation of specific areas were performed on the correlations in order to fit them for gauze packing HETP evaluation. As the laboratory distillation column was operated continuously, different HETP values were found by the models investigated for each section of the column. The low liquid flow rates in the top section of the column are a source of error for HETP evaluation by the models; therefore, more reliable HETP values were found in the bottom section, in which liquid flow rates were much greater. Among the theoretical models, Olujić et al. (2004) showed good results relative to the experimental tests. In addition, the

  5. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  6. Implementation of a Parallel Protein Structure Alignment Service on Cloud

    Directory of Open Access Journals (Sweden)

    Che-Lun Hung

    2013-01-01

    Full Text Available Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform.

  7. Sequential and parallel image restoration: neural network implementations.

    Science.gov (United States)

    Figueiredo, M T; Leitao, J N

    1994-01-01

    Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high dimension convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
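
    As a toy version of the underlying convex problem, the following numpy sketch minimizes ||y - Hx||² + λ||Dx||² by plain gradient descent; it is only a stand-in for the modified-Hopfield updates, and H, D, λ and the step size are illustrative assumptions.

    ```python
    # Toy stand-in for the MAP/regularization problem described above:
    # minimize ||y - H x||^2 + lam * ||D x||^2 by plain gradient descent.
    import numpy as np

    def restore(y, H, D, lam=0.1, step=0.02, iters=1000):
        x = np.zeros(H.shape[1])
        for _ in range(iters):
            grad = H.T @ (H @ x - y) + lam * (D.T @ (D @ x))
            x -= step * grad          # synchronous (parallel-update) descent
        return x

    rng = np.random.default_rng(0)
    H = rng.standard_normal((20, 10))  # toy blur operator
    D = np.eye(10)                     # toy regularization operator
    x_true = rng.standard_normal(10)
    y = H @ x_true + 0.01 * rng.standard_normal(20)
    print(np.linalg.norm(restore(y, H, D) - x_true))  # should be small
    ```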

  8. Columnar discharge mode between parallel dielectric barrier electrodes in atmospheric pressure helium

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Yanpeng; Zheng, Bin; Liu, Yaoge [School of Electric Power, South China University of Technology, Guangzhou 510640 (China)

    2014-01-15

    Using a fast-gated intensified charge-coupled device, end- and side-view photographs were taken of columnar discharges between parallel dielectric barrier electrodes in atmospheric-pressure helium. Three-dimensional images generated from the end-view photographs show that the number of discharge columns increased, whereas the diameter of each column decreased, as the applied voltage was increased. The side-view photographs indicate that the columnar discharges exhibited a mode transition from Townsend to glow discharge, generated by the same discharge physics as atmospheric pressure glow discharge.

  9. Participatory Maps

    DEFF Research Database (Denmark)

    Salovaara-Moring, Inka

    2016-01-01

    ...practice. In particular, mapping environmental damage, endangered species, and human-made disasters has become one focal point for environmental knowledge production. This type of digital map has been highlighted as a processual turn in critical cartography, whereas in related computational journalism... of a geo-visualization within information mapping that enhances embodiment in the experience of the information. InfoAmazonia is defined as a digitally created map-space within which journalistic practice can be seen as dynamic, performative interactions between journalists, ecosystems, space, and species...

  10. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
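
    For flavor, here is a minimal Newton-Krylov solve on a toy discretized nonlinear problem, assuming SciPy's `newton_krylov` as a stand-in for the PETSc-based NKS machinery (no Schwarz preconditioning in this sketch).

    ```python
    # Toy Newton-Krylov solve of -u'' + u^3 - 1 = 0 with u = 0 at the ends.
    # scipy.optimize.newton_krylov pairs a Newton outer iteration with a
    # Krylov (LGMRES) inner iteration, matrix-free as described above.
    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(u):
        r = np.zeros_like(u)
        r[1:-1] = -(u[2:] - 2*u[1:-1] + u[:-2]) + u[1:-1]**3 - 1.0
        r[0], r[-1] = u[0], u[-1]          # Dirichlet boundary conditions
        return r

    u0 = np.zeros(50)
    sol = newton_krylov(residual, u0, method='lgmres')
    print(abs(residual(sol)).max())        # residual norm after convergence
    ```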

  11. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    Second derivative parallel block backward differentiation type formulas for stiff ODEs. ... The methods are inherently parallel and can be distributed over parallel processors. They are ...

  12. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding time and the effectiveness of the parallelization.

  13. Neural net generated seismic facies map and attribute facies map

    International Nuclear Information System (INIS)

    Addy, S.K.; Neri, P.

    1998-01-01

    The usefulness of 'seismic facies maps' in the analysis of an Upper Wilcox channel system in a 3-D survey shot by CGG in 1995 in Lavaca County in south Texas was discussed. A neural net-generated seismic facies map is a quick hydrocarbon exploration tool that can be applied regionally as well as on a prospect scale. The new technology is used to classify a constant interval parallel to a horizon in a 3-D seismic volume based on the shape of the wiggle traces, using neural network technology. The tool makes it possible to interpret sedimentary features of a petroleum deposit. The same technology can be used in regional mapping by making 'attribute facies maps' in which various forms of amplitude attributes, phase attributes or frequency attributes can be used.

  14. Leveraging Parallel Data Processing Frameworks with Verified Lifting

    Directory of Open Access Journals (Sweden)

    Maaz Bin Safeer Ahmad

    2016-11-01

    Full Text Available Many parallel data frameworks have been proposed in recent years that let sequential programs access parallel processing. To capitalize on the benefits of such frameworks, existing code must often be rewritten to the domain-specific languages that each framework supports. This rewriting, tedious and error-prone, also requires developers to choose the framework that best optimizes performance given a specific workload. This paper describes Casper, a novel compiler that automatically retargets sequential Java code for execution on Hadoop, a parallel data processing framework that implements the MapReduce paradigm. Given a sequential code fragment, Casper uses verified lifting to infer a high-level summary expressed in our program specification language that is then compiled for execution on Hadoop. We demonstrate that Casper automatically translates Java benchmarks into Hadoop. The translated results execute on average 3.3x faster than the sequential implementations and also scale better to larger datasets.
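
    The kind of rewrite Casper automates can be sketched as follows, shown here in Python rather than Java for brevity: a sequential accumulation loop and its equivalent map/reduce formulation that a lifting compiler could target.

    ```python
    # Sequential fragment vs. its map/reduce-style equivalent.
    from functools import reduce

    values = [3, 1, 4, 1, 5, 9, 2, 6]

    # Sequential: total of squared values.
    total = 0
    for v in values:
        total += v * v

    # Map/reduce formulation of the same computation.
    total_mr = reduce(lambda a, b: a + b, map(lambda v: v * v, values), 0)
    assert total == total_mr
    ```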

  15. Mitigating oil spills in the water column

    International Nuclear Information System (INIS)

    Barry, Edward; Libera, Joseph A.; Mane, Anil University; Avila, Jason R.; DeVitis, David

    2017-01-01

    The scale and scope of uncontrolled oil spills can be devastating. Diverse marine environments and fragile ecologies are some of the most susceptible to the many ill effects, while the economic costs can be crippling. A notoriously difficult challenge with no known technological solution is the successful removal of oil dispersed in the water column. Here, we address this problem through cheap and reusable oil sorbents based on the chemical modification of polymer foams. Interfacial chemistry was optimized and subsequently tested in a simulated marine environment at the National Oil Spill Response Research & Renewable Energy Test Facility, Ohmsett. We find favorable performance for surface oil mitigation and, for the first time, demonstrate the advanced sorbent's efficiency and efficacy at pilot scale in extraction of crude oil and refined petroleum products dispersed in the water column. As a result, this is a potentially disruptive technology, opening a new field of environmental science focused on sub-surface pollutant sequestration.

  16. Assembly procedure for column cutting platform

    International Nuclear Information System (INIS)

    Routh, R.D.

    1995-01-01

    This supporting document describes the assembly procedure for the Column Cutting Platform and Elevation Support. The Column Cutting Platform is a component of the 241-SY-101 Equipment Removal System. It is set up on the deck of the Strongback Trailer to provide work access to cut off the upper portion of the Mitigation Pump Assembly (MPA). The Elevation Support supports the front of the Storage Container with the Strongback in an inclined position. The upper portion of the MPA must be cut off to install the Containment Caps on the Storage Container. The Storage Container must be maintained in an inclined position until the Containment Caps are installed to prevent any residual liquids from migrating forward in the Storage Container.

  17. Modeling of Crystalline Silicotitanate Ion Exchange Columns

    International Nuclear Information System (INIS)

    Walker, D.D.

    1999-01-01

    Non-elutable ion exchange is being considered as a potential replacement for the In-Tank Precipitation process for removing cesium from Savannah River Site (SRS) radioactive waste. Crystalline silicotitanate (CST) particles are the reference ion exchange medium for the process. A major factor in the construction cost of this process is the size of the ion exchange column required to meet product specifications for decontaminated waste. To validate SRS column sizing calculations, SRS subcontracted two renowned experts in this field to perform similar calculations: Professor R. G. Anthony, Department of Chemical Engineering, Texas A&M University, and Professor S. W. Wang, Department of Chemical Engineering, Purdue University. The appendices of this document contain reports from the two subcontractors. Definition of the design problem came through several meetings and conference calls between the participants and SRS personnel over the past few months. This document summarizes the problem definition and results from the two reports.

  18. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, at higher throughput, and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  19. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.

  20. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, indicating under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.

  1. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.
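
    The pattern-recognition-plus-replacement idea can be illustrated with a deliberately crude Python sketch: a hypothetical pattern test recognizes a summation loop and substitutes a parallel library routine (the actual tool works on Fortran-style source, so everything here is illustrative).

    ```python
    # Toy illustration: recognize a known computational pattern (a sum
    # reduction) and replace it with a parallel algorithm from a "library".
    from multiprocessing import Pool

    def is_sum_reduction(loop_body):          # hypothetical pattern test
        return loop_body == "acc += a[i]"

    def parallel_sum(a, workers=4):           # substituted parallel routine
        with Pool(workers) as pool:
            return sum(pool.map(sum, [a[i::workers] for i in range(workers)]))

    if __name__ == "__main__":
        a = list(range(10000))
        if is_sum_reduction("acc += a[i]"):
            print(parallel_sum(a))            # 49995000
    ```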

  2. Optimization of the isotope separation in columns

    International Nuclear Information System (INIS)

    Kaminskij, V.A.; Vetsko, V.M.; Tevzadze, G.A.; Devdariani, O.A.; Sulaberidze, G.A.

    1982-01-01

    The general method for the multi-parameter optimization of cascade plants of packed columns is proposed. The net cost of the isotopic product is selected as the optimization effectiveness function. The net cost comprehensively characterizes the total capital costs of manufacturing the product, determines the choice of the most effective directions for capital investment, and sets rational limits on improvement of product quality. The method is based on the main concepts of cascade theory, such as the ideal flow profile and form efficiency, as well as a mathematical model of the packed column specifying the relations between its geometric and operating parameters. As a result, the cost function of the isotopic product can be related to parameters such as the equilibrium stage height, ultimate packing capacity, packing element dimensions, and column diameter. It is concluded that the suggested approach to the optimization of isotope separation processes is rather general. It permits solving a number of special problems, such as assessing the advisability of using heat-pump circuits and determining the rational level of automation. In addition, the suggested method can be used to optimize process conditions with regard to temperature and pressure.

  3. Employing anatomical knowledge in vertebral column labeling

    Science.gov (United States)

    Yao, Jianhua; Summers, Ronald M.

    2009-02-01

    The spinal column constitutes the central axis of the human torso and is often used by radiologists to reference the location of organs in the chest and abdomen. However, visually identifying and labeling vertebrae is not trivial and can be time-consuming. This paper presents an approach to automatically label vertebrae based on two pieces of anatomical knowledge: one vertebra has at most two attached ribs, and ribs are attached only to thoracic vertebrae. The spinal column is first extracted by a hybrid method using the watershed algorithm, directed acyclic graph search and a four-part vertebra model. Then curved reformations in sagittal and coronal directions are computed and aggregated intensity profiles along the spinal cord are analyzed to partition the spinal column into vertebrae. After that, candidates for rib bones are detected using features such as location, orientation, shape, size and density. Then a correspondence matrix is established to match ribs and vertebrae. The last vertebra (from thoracic to lumbar) with attached ribs is identified and labeled as T12. The remaining vertebrae are labeled accordingly. The method was tested on 50 CT scans and successfully labeled 48 of them. The two failed cases were mainly due to rudimentary ribs.
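
    A hedged sketch of the labeling rule, assuming rib detection and vertebra partitioning are already done and that the scan covers the thoracolumbar region; `rib_counts` is a hypothetical per-vertebra rib tally, not the paper's data structure.

    ```python
    # Label the last ribbed vertebra as T12; count thoracic labels backwards
    # from it and lumbar labels forwards. Sketch only: assumes the visible
    # segment lies within the thoracic/lumbar region.
    def label_vertebrae(rib_counts):
        last_ribbed = max(i for i, n in enumerate(rib_counts) if n > 0)
        labels = []
        for i in range(len(rib_counts)):
            if i <= last_ribbed:
                labels.append(f"T{12 - (last_ribbed - i)}")   # ..., T11, T12
            else:
                labels.append(f"L{i - last_ribbed}")          # L1, L2, ...
        return labels

    print(label_vertebrae([2, 2, 2, 2, 1, 0, 0]))
    # ['T8', 'T9', 'T10', 'T11', 'T12', 'L1', 'L2']
    ```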

  4. Local buckling of composite channel columns

    Science.gov (United States)

    Szymczak, Czesław; Kujawa, Marcin

    2018-05-01

    The investigation concerns local buckling of the compressed flanges of axially compressed composite channel columns. Cooperation of the member flange and web is taken into account. The buckling mode of the member flange is defined by the rotation angle of the flange about the line of its connection with the web. The channel column under investigation is made of unidirectional fibre-reinforced laminate. Two approaches to modelling the member's orthotropic material are applied: homogenization with the aid of the theory of mixtures and a periodicity cell, or homogenization based on the Voigt-Reuss hypothesis. The fundamental differential equation of local buckling is derived with the aid of the stationary total potential energy principle. The critical buckling stress, corresponding to a number of buckling half-waves, is taken as the minimum eigenvalue of the equation. Some numerical examples dealing with columns are given. The analytical results are compared with a finite element stability analysis carried out by means of ABAQUS software. The paper focuses on a closed-form analytical solution of the critical buckling stress and the associated buckling mode while web-flange cooperation is taken into account.
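
    In symbols, the selection rule can be restated as follows (a hedged paraphrase of the abstract, where \sigma(m) denotes the eigenvalue of the buckling equation for m half-waves):

    ```latex
    % critical stress = smallest eigenvalue over the half-wave numbers m
    \sigma_{\mathrm{cr}} \;=\; \min_{m \in \mathbb{N}}\, \sigma(m)
    ```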

  5. Electronic Nose using Gas Chromatography Column and Quartz Crystal Microbalance

    Directory of Open Access Journals (Sweden)

    Hari Agus Sujono

    2011-08-01

    Full Text Available The conventional electronic nose usually consists of an array of dissimilar chemical sensors, such as quartz crystal microbalances (QCM), combined with a pattern recognition algorithm such as a neural network. Because of the parallel processing, such a system needs a large number of sensors and circuits, which may give rise to complexity and inter-channel crosstalk problems. In this research, a new type of odor identification which combines gas chromatography (GC) and electronic nose methods has been developed. The system consists of a GC column and a 10-MHz quartz crystal microbalance sensor, producing a unique pattern for an odor in the time domain. This method offers the advantages of substantially reduced size, interference and power consumption in comparison to existing odor identification systems. Several odors of organic compounds were introduced to evaluate the selectivity of the system. Principal component analysis was used to visualize the classification of each odor in two-dimensional space. The system could resolve common organic solvents, including molecules of different classes (aromatics from alcohols) as well as those within a particular class (methanol from ethanol) and also fuels (premium from pertamax). The neural network can be taught to recognize the odors tested in the experiment with an identification rate of 85%. The system may therefore take the place of the human nose, especially for poisonous odor evaluations.
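
    The principal-component visualization step can be sketched in a few lines; the feature vectors below are synthetic placeholders standing in for measured QCM response profiles.

    ```python
    # Project sensor response patterns onto two principal components, as in
    # the paper's 2-D odor classification plot. Data here are synthetic.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    methanol = rng.normal(0.0, 0.1, size=(10, 50))  # fake time profiles
    ethanol = rng.normal(0.5, 0.1, size=(10, 50))
    X = np.vstack([methanol, ethanol])

    scores = PCA(n_components=2).fit_transform(X)
    print(scores[:3])   # 2-D coordinates used for the classification plot
    ```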

  6. Concept Mapping

    Science.gov (United States)

    Technology & Learning, 2005

    2005-01-01

    Concept maps are graphical ways of working with ideas and presenting information. They reveal patterns and relationships and help students to clarify their thinking, and to process, organize and prioritize. Displaying information visually--in concept maps, word webs, or diagrams--stimulates creativity. Being able to think logically teaches…

  7. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  8. Adiabatic packed column supercritical fluid chromatography using a dual-zone still-air column heater.

    Science.gov (United States)

    Helmueller, Shawn C; Poe, Donald P; Kaczmarski, Krzysztof

    2018-02-02

    An approach to conducting SFC separations under pseudo-adiabatic conditions utilizing a dual-zone column heater is described. The heater allows for efficient separations at low pressures above the critical temperature by imposing a temperature profile along the column wall that closely matches that for isenthalpic expansion of the fluid inside the column. As a result, the efficiency loss associated with the formation of radial temperature gradients in this difficult region can be largely avoided in packed analytical scale columns. For elution of n-octadecylbenzene at 60 °C with 5% methanol modifier and a flow rate of 3 mL/min, a 250 × 4.6-mm column packed with 5-micron Kinetex C18 particles began to lose efficiency (8% decrease in the number of theoretical plates) at outlet pressures below 142 bar in a traditional forced air oven. The corresponding outlet pressure for the onset of excess efficiency loss decreased to 121 bar when the column was operated in a commercial HPLC column heater, and to 104 bar in the new dual-zone heater operated in adiabatic mode, with corresponding increases in the retention factor for n-octadecylbenzene from 2.9 to 6.8 and 14, respectively. This approach allows for increased retention and efficient separations of otherwise weakly retained analytes. Applications are described for rapid SFC separation of an alkylbenzene mixture using a pressure ramp, and isobaric separation of a cannabinoid mixture. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Ductility of reinforced concrete columns confined with stapled strips

    International Nuclear Information System (INIS)

    Tahir, M.F.; Khan, Q.U.Z.; Shabbir, F.; Sharif, M.B.; Ijaz, N.

    2015-01-01

    The response of three 150 × 150 × 450 mm short reinforced concrete (RC) columns confined with different types of confining steel was investigated. Standard stirrups, strips and stapled strips, each having the same cross-sectional area, were employed as confining steel around four corner column bars. The experimental work was aimed at probing the effect of stapled strip confinement on post-elastic behavior and ductility level under cyclic axial load. Ductility ratios, strength enhancement factors and core concrete strengths were compared to study the effect of confinement. Results indicate that strength enhancement in RC columns due to strip and stapled strip confinement was not remarkable compared to stirrup-confined columns. It was found that, compared to the stirrup-confined column, stapled strip confinement enhanced the ductility of the RC column by 183%, and the observed axial capacity of the stapled strip confined columns was 41% higher than that of the strip confined columns. (author)

  10. EX0904 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX0904: Water Column Exploration Field...

  11. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  12. Rectification of catalyst separation column at HWP, Thal (Paper No. 5.7)

    International Nuclear Information System (INIS)

    Prakash, R.; Bhaskaran, M.

    1992-01-01

    Heavy Water Plant, Thal is based on the monothermal ammonia-hydrogen process. Liquid ammonia containing potassium amide catalyst is contacted with the synthesis gas, wherein deuterium from the hydrogen is transferred to the liquid phase. There are two parallel streams, A and B, with a common ammonia synthesis unit. The system is provided with an ammonia cracker and ammonia synthesis for providing the reflux gas and liquid for the enrichment process. Parameters such as steam valve opening, column pressure, reflux, condensate valve opening, cooling water valve position, and cracking load of the unit before and after the rectification are discussed. (author). 2 tabs., 2 figs

  13. Cross flow cyclonic flotation column for coal and minerals beneficiation

    Science.gov (United States)

    Lai, Ralph W.; Patton, Robert A.

    2000-01-01

    An apparatus and process for the separation of coal from pyritic impurities using a modified froth flotation system. The froth flotation column incorporates a helical track about the inner wall of the column in a region intermediate between the top and base of the column. A standard impeller located about the central axis of the column is used to generate a centrifugal force, thereby increasing the separation efficiency of coal from the pyritic particles and hydrophilic tailings.

  14. Behaviour of FRP confined concrete in square columns

    OpenAIRE

    Diego Villalón, Ana de; Arteaga Iriarte, Ángel; Fernandez Gomez, Jaime Antonio; Perera Velamazán, Ricardo; Cisneros, Daniel

    2015-01-01

    A significant amount of research has been conducted on FRP-confined circular columns, but much less is known about rectangular/square columns, in which the effectiveness of confinement is much reduced. This paper presents the results of experimental investigations on low strength square concrete columns confined with FRP. Axial compression tests were performed on ten intermediate size columns. The test results indicate that FRP composites can significantly improve the bearing capacity and ductility...

  15. Modalization in the Political Column of Tempo Magazine

    OpenAIRE

    Rahmah, Maria Betti Sinaga and

    2017-01-01

    The study focuses on analyzing the use of modalization in the Political Column of Tempo Magazine. The objectives were to find out the types of modalization and to describe the use of modalization in the Political Column of Tempo magazine. The data were taken from the Political Column of Tempo magazine published in June and July 2017. The source of data was the Political Column in Tempo magazine. The analysis followed a descriptive qualitative approach. There were 135 clauses which contained Modalization...

  16. Numerical Simulations of Settlement of Jet Grouting Columns

    Directory of Open Access Journals (Sweden)

    Juzwa Anna

    2016-03-01

    Full Text Available The paper presents a comparison of the results of numerical analyses of the interaction between a group of jet grouting columns and the subsoil. The analyses were conducted for a single column and for groups of three, seven and nine columns. The simulations are based on full-scale experimental research carried out by the authors. The final goal of the research is to estimate the influence of the interaction between columns working in a group.

  17. Separate the inseparable one-layer mapping

    Science.gov (United States)

    Hu, Chia-Lun J.

    2000-04-01

    When the input-output mapping of a one-layered perceptron (OLP) does NOT meet the PLI condition, which is the if-and-only-if (IFF) condition that the mapping can be realized by an OLP, then no matter what learning rule we use, an OLP simply cannot realize this mapping at all. However, because of the nature of the PLI, one can still construct a parallel-cascaded, two-layered perceptron system to realize this 'illegal' mapping. The theory and a design example of this novel design are reported in detail in this paper.
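
    The classic XOR mapping illustrates the point: it fails linear separability (the analogue of failing the PLI test), yet a two-layer construction realizes it. Below is a small numpy sketch with hand-picked weights; the weights are illustrative, not taken from the paper.

    ```python
    # XOR cannot be realized by any single-layer threshold unit, but a
    # two-layer construction (OR and AND hidden units) realizes it.
    import numpy as np

    step = lambda z: (z > 0).astype(int)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

    # One layer: a typical candidate (w, b) yields OR, not XOR.
    w, b = np.array([1.0, 1.0]), -0.5
    print(step(X @ w + b))                          # [0 1 1 1] = OR

    # Two layers: hidden units compute OR and AND; output = OR AND (NOT AND).
    H = step(X @ np.array([[1.0, 1.0], [1.0, 1.0]]) + np.array([-0.5, -1.5]))
    print(step(H @ np.array([1.0, -1.0]) - 0.5))    # [0 1 1 0] = XOR
    ```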

  18. column frame for design of reinforced concrete sway frames

    African Journals Online (AJOL)

    adminstrator

    Design of slender reinforced concrete columns in sway frames according ... concrete, Ac = gross cross-sectional area of the columns. Step 3: Effective Buckling Length Factors. The effective buckling length factors of columns in a sway frame shall be computed by ... shall have adequate resistance to failure in a sway mode ...

  19. Behavior of reinforced concrete columns strenghtened by partial jacketing

    Directory of Open Access Journals (Sweden)

    D. B. FERREIRA

    Full Text Available This article presents the study of reinforced concrete columns strengthened using a partial jacket consisting of a 35 mm self-compacting concrete layer added to the most compressed face, tested under combined compression and uniaxial bending until rupture. Wedge bolt connectors were used to increase bond at the interface between the two concrete layers of different ages. Seven 2000 mm long columns were tested. Two columns were cast monolithically and named PO (original column) and PR (reference column). The other five columns were strengthened using a new 35 mm thick self-compacting concrete layer attached to the column face subjected to the highest compressive stresses. Column PO had a 120 mm by 250 mm rectangular cross section and the other columns had a 155 mm by 250 mm cross section after the strengthening procedure. Results show that the ultimate resistance of the strengthened columns was more than three times that of the original column PO, indicating the effectiveness of the strengthening procedure. Detachment of the new concrete layer, with concrete crushing and steel yielding, occurred in the strengthened columns.

  20. 46 CFR 174.085 - Flooding on column stabilized units.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Flooding on column stabilized units. 174.085 Section 174... Units § 174.085 Flooding on column stabilized units. (a) Watertight compartments that are outboard of... of the unit, must be assumed to be subject to flooding as follows: (1) When a column is subdivided...

  1. Water hammer with column separation : a historical review

    NARCIS (Netherlands)

    Bergant, A.; Simpson, A.R.; Tijsseling, A.S.

    2006-01-01

    Column separation refers to the breaking of liquid columns in fully filled pipelines. This may occur in a water-hammer event when the pressure in a pipeline drops to the vapor pressure at specific locations such as closed ends, high points or knees (changes in pipe slope). The liquid columns are

  2. Comparison of the performance of full scale pulsed columns vs. mixer-settlers for uranium solvent extraction

    International Nuclear Information System (INIS)

    Movsowitz, R.L.; Kleinberger, R.; Buchalter, E.M.; Grinbaum, B.

    2000-01-01

    A rare opportunity arose to compare the performance of Bateman Pulsed Columns (BPC) vs. mixer-settlers at an industrial site, over a long period, when the Uranium Solvent Extraction Plant of WMC at Olympic Dam, South Australia was upgraded. The original plant was operated for years with two trains of 2-stage mixer-settler batteries for the extraction of uranium. When the company decided to increase the yield of the plant, the existing two trains of mixer-settlers for uranium extraction were arranged in series, giving one 4-stage battery. In parallel, two Bateman Pulsed Columns, of the disc-and-doughnut type, were installed to compare the performance of both types of equipment over an extended period. The plant has been operating in parallel for three years and the results show that the performance of the columns is excellent: the extraction yield is similar to that of the 4 mixer-settlers in series (about 98%), the entrainment of solvent is lower, and there are fewer mechanical failures, fewer problems with crud, smaller solvent losses and simpler operation. The results convinced WMC to install an additional 10 BPCs for the expansion of their uranium plant. These columns were successfully commissioned in early 1999. This paper includes a quantitative comparison of both types of equipment. (author)

  3. Continuous fraction collection of gas chromatographic separations with parallel mass spectrometric detection applied to cell-based bioactivity analysis

    NARCIS (Netherlands)

    Jonker, Willem; Zwart, Nick; Stockl, Jan B.; de Koning, Sjaak; Schaap, Jaap; Lamoree, Marja H.; Somsen, Govert W.; Hamers, Timo; Kool, Jeroen

    2017-01-01

    We describe the development and evaluation of a GC-MS fractionation platform that combines high-resolution fraction collection of full chromatograms with parallel MS detection. A y-split at the column divides the effluent towards the MS detector and towards an inverted y-piece where vaporized trap

  4. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed, and the report shows how these classes can be used...
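
    A hedged Python sketch of the same row-partitioned idea (the report's code is C++ with MPI/OpenMP; here, threads over row blocks stand in purely for illustration).

    ```python
    # Row-partitioned sparse matrix-vector product: each worker owns a block
    # of rows of the CSR matrix and fills its slice of the result vector.
    import numpy as np
    from scipy.sparse import random as sprandom
    from concurrent.futures import ThreadPoolExecutor

    def spmv_parallel(A_csr, x, nthreads=4):
        y = np.zeros(A_csr.shape[0])
        rows = np.array_split(np.arange(A_csr.shape[0]), nthreads)

        def worker(r):                    # each thread owns a block of rows
            y[r] = A_csr[r, :] @ x

        with ThreadPoolExecutor(max_workers=nthreads) as pool:
            list(pool.map(worker, rows))
        return y

    A = sprandom(1000, 1000, density=0.01, format="csr", random_state=0)
    x = np.ones(1000)
    print(np.allclose(spmv_parallel(A, x), A @ x))   # True
    ```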

  5. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e. g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  6. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne

    2015-01-01

    ...for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...

  7. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  8. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x²-y² shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.

  9. Default Parallels Plesk Panel Page

    Science.gov (United States)

    Default page displayed by Parallels Plesk Panel when no website is configured at this address, together with marketing text for Parallels® hosting, SaaS, and cloud-computing automation products for service providers.

  10. Parallel plate transmission line transformer

    NARCIS (Netherlands)

    Voeten, S.J.; Brussaard, G.J.H.; Pemen, A.J.M.

    2011-01-01

    A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the

  11. Matpar: Parallel Extensions for MATLAB

    Science.gov (United States)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  12. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

  13. Mapping racism.

    Science.gov (United States)

    Moss, Donald B

    2006-01-01

    The author uses the metaphor of mapping to illuminate a structural feature of racist thought, locating the degraded object along vertical and horizontal axes. These axes establish coordinates of hierarchy and of distance. With the coordinates in place, racist thought begins to seem grounded in natural processes. The other's identity becomes consolidated, and parochialism results. The use of this kind of mapping is illustrated via two patient vignettes. The author presents Freud's (1905, 1927) views in relation to such a "mapping" process, as well as Adorno's (1951) and Baldwin's (1965). Finally, the author conceptualizes the crucial status of primitivity in the workings of racist thought.

  14. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on the topics most prominent in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, and network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  15. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  16. Blind column selection protocol for two-dimensional high performance liquid chromatography.

    Science.gov (United States)

    Burns, Niki K; Andrighetto, Luke M; Conlan, Xavier A; Purcell, Stuart D; Barnett, Neil W; Denning, Jacquie; Francis, Paul S; Stevenson, Paul G

    2016-07-01

    The selection of two orthogonal columns for two-dimensional high performance liquid chromatography (LC×LC) separation of natural product extracts can be a labour-intensive and time-consuming process and in many cases is an entirely trial-and-error approach. This paper introduces a blind optimisation method for column selection for a black box of constituent components. A data processing pipeline, created in the open source application OpenMS®, was developed to map the components within the mixture of equal mass across a library of HPLC columns; LC×LC separation space utilisation was compared by measuring the fractional surface coverage, fcoverage. It was found that for a test mixture from an opium poppy (Papaver somniferum) extract, the combination of diphenyl and C18 stationary phases provided a predicted fcoverage of 0.48, matched by an actual usage of 0.43. OpenMS®, in conjunction with algorithms designed in house, allowed a significantly quicker selection of two orthogonal columns, optimised for an LC×LC separation of crude extractions of plant material. Copyright © 2016 Elsevier B.V. All rights reserved.
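
    One simple way to compute a coverage measure of this kind, assuming normalized (t1, t2) retention coordinates for each mapped component; the exact fcoverage definition used in the paper may differ (this sketch simply bins the separation space and counts occupied bins).

    ```python
    # Bin predicted 2-D retention coordinates onto a grid over the separation
    # space and report the fraction of occupied bins. Grid size is arbitrary.
    import numpy as np

    def f_coverage(t1, t2, bins=10):
        h, _, _ = np.histogram2d(t1, t2, bins=bins, range=[[0, 1], [0, 1]])
        return np.count_nonzero(h) / h.size

    rng = np.random.default_rng(2)
    t1 = rng.uniform(0, 1, 100)   # normalized first-dimension retention times
    t2 = rng.uniform(0, 1, 100)   # normalized second-dimension retention times
    print(round(f_coverage(t1, t2), 2))
    ```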

  17. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  18. Sorting, Searching, and Simulation in the MapReduce Framework

    DEFF Research Database (Denmark)

    Goodrich, Michael T.; Sitchinava, Nodari; Zhang, Qin

    2011-01-01

    ...usefulness of our approach by designing and analyzing efficient MapReduce algorithms for fundamental sorting, searching, and simulation problems. This study is motivated by a goal of ultimately putting the MapReduce framework on an equal theoretical footing with the well-known PRAM and BSP parallel... in parallel computational geometry for the MapReduce framework, which result in efficient MapReduce algorithms for sorting, 2- and 3-dimensional convex hulls, and fixed-dimensional linear programming. For the case when mappers and reducers have a memory/message-I/O size of M = Θ(N^ε), for a small constant ε > 0...
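
    A toy sample-sort in the MapReduce style the paper analyzes, with plain Python standing in for the framework; splitter sampling is the usual trick, and the details here are illustrative rather than the paper's algorithm.

    ```python
    # "Map": route items to reducers by sampled splitters.
    # "Reduce": sort each value-range partition independently.
    import random

    def mapreduce_sort(data, n_reducers=4, sample_rate=0.05):
        sample = sorted(random.sample(data, max(1, int(len(data) * sample_rate))))
        splitters = [sample[(i + 1) * len(sample) // n_reducers - 1]
                     for i in range(n_reducers - 1)]
        parts = [[] for _ in range(n_reducers)]
        for x in data:                      # map: count splitters below x
            parts[sum(x > s for s in splitters)].append(x)
        out = []
        for p in parts:                     # reduce: sort each partition
            out.extend(sorted(p))
        return out

    data = random.sample(range(100000), 1000)
    assert mapreduce_sort(data) == sorted(data)
    ```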

  19. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers with both shared and distributed memory are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the architecture of parallel computers, using a divide-and-conquer strategy, adjusting the algorithm structure of the program, decoupling the data dependences, identifying parallelizable components and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various high-performance parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup has been obtained.
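
    The large-grain subtask idea can be sketched with independent photon-history batches; this toy Monte Carlo estimate of mean penetration depth is an illustration of the strategy, not the paper's production code.

    ```python
    # Photon histories are independent, so batches of histories make natural
    # large-grain parallel subtasks; one process per batch.
    from multiprocessing import Pool
    import random

    def simulate_batch(args):
        n, seed = args
        rng = random.Random(seed)
        depth = 0.0
        for _ in range(n):                 # exponential free paths, mu = 1
            depth += rng.expovariate(1.0)
        return depth / n

    if __name__ == "__main__":
        batches = [(25000, s) for s in range(4)]
        with Pool(4) as pool:
            means = pool.map(simulate_batch, batches)
        print(sum(means) / len(means))     # ~1.0 mean free path
    ```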

  20. Hydrodynamic Study Of Column Bioleaching Processes

    Directory of Open Access Journals (Sweden)

    Sadowski Zygmunt

    2015-06-01

    Full Text Available The modelling of the flow of leaching solution through porous media is considered. The heap bioleaching process can be tested using column experimental equipment, which was employed here for hydrodynamic studies of copper ore bioleaching. The copper ore (black shale ore) with support inert materials (small glass balls and polyethylene beads) was used for the bioleaching tests. The packed beds had various compositions; the ore/support ratio was varied. The correlation between bed porosity, bioleaching kinetics and copper recovery was investigated.

  1. Column leaching from biomass combustion ashes

    DEFF Research Database (Denmark)

    Maresca, Alberto; Astrup, Thomas Fruergaard

    2015-01-01

    The utilization of biomass combustion ashes for forest soil liming and fertilizing has been addressed in the literature. However, a deep understanding of the ash chemical composition and leaching behavior is necessary to predict potential benefits and environmental risks related to this practice. In this study, a fly ash sample from an operating Danish power plant based on wood biomass was collected, chemically characterized and investigated for its leaching release of nutrients and heavy metals. A column leaching test was employed. The strongly alkaline pH of all the collected eluates suggested

  2. Design of Steel Beam-Column Connections

    Directory of Open Access Journals (Sweden)

    Bogatinoski Z.

    2014-05-01

    Full Text Available In this paper, theoretical and experimental research on steel beam-column connections is presented. Eight types of specimens were investigated, comprising rigid and semi-rigid connections, of which four use an IPE profile and four use a tube section for the beam. From the numerical analysis of the researched models, and especially from the experimental research at the Laboratory for Structures in the Faculty of Mechanical Engineering - Skopje, specific conclusions were obtained that should have theoretical and practical use for researchers in this area.

  3. Buckling driven debonding in sandwich columns

    DEFF Research Database (Denmark)

    Østergaard, Rasmus Christian

    2008-01-01

    A compression loaded sandwich column that contains a debond is analyzed using a geometrically non-linear finite element model. The model includes a cohesive zone along one face sheet/core interface whereby the debond can extend by interface crack growth. Two geometrical imperfections are introduced... results from two mechanisms: (a) interaction of local debond buckling and global buckling and (b) the development of a damaged zone at the debond crack tip. Based on the pronounced imperfection sensitivity, the author predicts that an experimental measurement of the strength of sandwich structures may...

  4. Dynamic Deformation and Collapse of Granular Columns

    Science.gov (United States)

    Uenishi, K.; Tsuji, K.; Doi, S.

    2009-12-01

    Large dynamic deformation of granular materials may be found in nature not only in the failure of slopes and cliffs — due to earthquakes, rock avalanches, debris flows and landslides — but also in earthquake faulting itself. Granular surface flows often consist of solid grains and intergranular fluid, but the effect of the fluid may be usually negligible because the volumetric concentration of grains is in many cases high enough for interparticle forces to dominate momentum transport. Therefore, the investigation of dry granular flow of a mass might assist in further understanding of the above mentioned geophysical events. Here, utilizing a high-speed digital video camera system, we perform a simple yet fully-controlled series of laboratory experiments related to the collapse of granular columns. We record, at an interval of some microseconds, the dynamic transient granular mass flow initiated by abrupt release of a tube that contains dry granular materials. The acrylic tube is partially filled with glass beads and has a cross-section of either a fully- or semi-cylindrical shape. Upon sudden removal of the tube, the granular solid may fragment under the action of its own weight and the particles spread on a rigid horizontal plane. This study is essentially the extension of the previous ones by Lajeunesse et al. (Phys. Fluids 2004) and Uenishi and Tsuji (JPGU 2008), but the striped layers of particles in a semi-cylindrical tube, newly introduced in this contribution, allow us to observe the precise particle movement inside the granular column: The development of slip lines inside the column and the movement of particles against each other can be clearly identified. The major controlling parameters of the spreading dynamics are the initial aspect ratio of the granular (semi-)cylindrical column, the frictional properties of the horizontal plane (substrate) and the size of beads. We show the influence of each parameter on the average flow velocity and final radius

  5. A review of oscillating water columns.

    Science.gov (United States)

    Heath, T V

    2012-01-28

    This paper considers the history of oscillating water column (OWC) systems from whistling buoys to grid-connected power generation systems. The power conversion from the wave resource through to electricity via pneumatic and shaft power is discussed in general terms and with specific reference to Voith Hydro Wavegen's land installed marine energy transformer (LIMPET) plant on the Scottish island of Islay and OWC breakwater systems. A report on the progress of other OWC systems and power take-off units under commercial development is given, and the particular challenges faced by OWC developers reviewed.

  6. Preinjector for Linac 1, accelerating column

    CERN Multimedia

    1974-01-01

    For a description of the Linac 1 preinjector, please see first 7403070X. High up on the wall of the Faraday cage (7403073X) is this drum-shaped container of the ion source (7403083X). It is mounted at the HV end of the accelerating column through which the ions (usually protons; many other types of ions in the course of its long history) proceed through the Faraday cage wall to the low-energy end (at ground potential) of Linac 1. The 520 kV accelerating voltage was supplied by a SAMES generator (7403074X).

  7. The evolution of the cognitive map.

    Science.gov (United States)

    Jacobs, Lucia F

    2003-01-01

    The hippocampal formation of mammals and birds mediates spatial orientation behaviors consistent with a map-like representation, which allows the navigator to construct a new route across unfamiliar terrain. This cognitive map thus appears to underlie long-distance navigation. Its mediation by the hippocampal formation and its presence in birds and mammals suggests that at least one function of the ancestral medial pallium was spatial navigation. Recent studies of the goldfish and certain reptile species have shown that the medial pallium homologue in these species can also play an important role in spatial orientation. It is not yet clear, however, whether one type of cognitive map is found in these groups or indeed in all vertebrates. To answer this question, we need a more precise definition of the map. The recently proposed parallel map theory of hippocampal function provides a new perspective on this question, by unpacking the mammalian cognitive map into two dissociable mapping processes, mediated by different hippocampal subfields. If the cognitive map of non-mammals is constructed in a similar manner, the parallel map theory may facilitate the analysis of homologies, both in behavior and in the function of medial pallium subareas. Copyright 2003 S. Karger AG, Basel

  8. Genetic Mapping

    Science.gov (United States)

    ... greatly advanced genetics research. The improved quality of genetic data has reduced the time required to identify a ... cases, a matter of months or even weeks. Genetic mapping data generated by the HGP's laboratories is freely accessible ...

  9. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with Graphical Processing Units, have broadly enabled parallelism. Compilers are being updated to address the challenges of synchronization and threading. Appropriate program and algorithm classification can greatly help software engineers identify opportunities for effective parallelization. In the present work we investigate current species-based classifications of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms is chosen whose structures match different issues and perform a given task. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant information that is not captured by the original algorithmic species. We implemented these new capabilities in the tool, enabling automatic characterization of program code.
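
    As an illustration of what such species-based classification enables, a loop whose array accesses classify as an element-wise "map" species carries no loop-carried dependence and can be annotated for multicore execution mechanically. The sketch below is hypothetical (the function and array names are not from the paper, and the Bones compiler targets algorithmic skeletons rather than plain OpenMP):

        #include <omp.h>
        #include <stddef.h>

        /* Hypothetical "map" species: each iteration touches only its own
           element, so a classifier can safely emit a parallel annotation. */
        void scale_add(size_t n, const double *a, const double *b, double *c)
        {
            #pragma omp parallel for
            for (size_t i = 0; i < n; i++)
                c[i] = 2.0 * a[i] + b[i];   /* no loop-carried dependence */
        }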

  10. Column studies on BTEX biodegradation under microaerophilic and denitrifying conditions

    International Nuclear Information System (INIS)

    Hutchins, S.R.; Moolenaar, S.W.; Rhodes, D.E.

    1992-01-01

    Two column tests were conducted using aquifer material to simulate the nitrate field demonstration project carried out earlier at Traverse City, Michigan. The objectives were to better define the effect nitrate addition had on biodegradation of benzene, toluene, ethylbenzene, xylenes, and trimethylbenzenes (BTEX) in the field study, and to determine whether BTEX removal can be enhanced by supplying a limited amount of oxygen as a supplemental electron acceptor. Columns were operated using limited oxygen, limited oxygen plus nitrate, and nitrate alone. In the first column study, benzene was generally recalcitrant compared to the alkylbenzenes (TEX), although some removal did occur. In the second column study, nitrate was deleted from the feed to the column originally receiving nitrate alone and added to the feed of the column originally receiving limited oxygen alone. Although the requirement for nitrate for optimum TEX removal was clearly demonstrated in these columns, there were significant contributions by biotic and abiotic processes other than denitrification which could not be quantified

  11. Multilevel Parallelization of AutoDock 4.2

    Directory of Open Access Journals (Sweden)

    Norgan Andrew P

    2011-04-01

    Full Text Available Abstract Background Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Results Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Conclusions Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.

  12. Multilevel Parallelization of AutoDock 4.2.

    Science.gov (United States)

    Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P

    2011-04-28

    Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.
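
    The multilevel scheme lends itself to a compact sketch: MPI ranks take whole docking jobs while OpenMP threads work inside each job, and per-rank grid maps are loaded once and reused. The code below illustrates the pattern only; it is not mpAD4 source, and the job count and work function are placeholders:

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        /* Placeholder for one docking job; internally multithreaded,
           standing in for the OpenMP-threaded Lamarckian GA. */
        static void run_docking_job(int job)
        {
            #pragma omp parallel
            {
                /* scoring-function evaluations would run here */
            }
            printf("job %d done\n", job);
        }

        int main(int argc, char **argv)
        {
            int rank, size, njobs = 64;   /* hypothetical screen size */
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            /* Cyclic distribution of jobs over ranks; grid maps would be
               loaded once per rank here and reused from job to job. */
            for (int job = rank; job < njobs; job += size)
                run_docking_job(job);
            MPI_Finalize();
            return 0;
        }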

  13. Further optimization of a parallel double-effect organosilicon distillation scheme through exergy analysis

    International Nuclear Information System (INIS)

    Sun, Jinsheng; Dai, Leilei; Shi, Ming; Gao, Hong; Cao, Xijia; Liu, Guangxin

    2014-01-01

    In our previous work, a significant improvement in organosilicon monomer distillation using parallel double-effect heat integration between a heavies removal column and six other columns, as well as heat integration between the methyltrichlorosilane and dimethylchlorosilane columns, reduced the total exergy loss of the currently running counterpart by 40.41%. Further research regarding this optimized scheme demonstrated that it was necessary to reduce the higher operating pressure of the methyltrichlorosilane column, which is required for heat integration between the methyltrichlorosilane and dimethylchlorosilane columns. Therefore, in this contribution, a challenger scheme is presented with heat pumps introduced separately from the originally heat-coupled methyltrichlorosilane and dimethylchlorosilane columns in the above-mentioned optimized scheme, which is the prototype for this work. Both schemes are simulated using the same purity requirements used in running industrial units. The thermodynamic properties from the simulation are used to calculate the energy consumption and exergy loss of the two schemes. The results show that the heat pump option further reduces the flowsheet energy consumption and exergy loss by 27.35% and 10.98% relative to the prototype scheme. These results indicate that heat pumps are superior to heat integration in the context of energy savings during organosilicon monomer distillation. - Highlights: • Combines parallel double-effect and heat pump distillation for organosilicon distillation. • Compares double-effect with heat pump distillation in terms of energy savings. • Further cuts the flowsheet energy consumption and exergy loss by 27.35% and 10.98%, respectively.
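
    For reference, the exergy figures compared above are conventionally computed from the simulated stream enthalpies and entropies; in the standard textbook form (not a formulation quoted from the paper), the flow exergy per unit mass and the exergy loss of a unit are

        e = (h - h_0) - T_0 (s - s_0)
        E_{loss} = T_0 \, S_{gen}   (Gouy-Stodola theorem)

    where h and s are the stream enthalpy and entropy, the subscript 0 denotes the ambient dead state at temperature T_0, and S_gen is the entropy generation of the unit.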

  14. Spinal column damage from water ski jumping

    International Nuclear Information System (INIS)

    Horne, J.; Cockshott, W.P.; Shannon, H.S.

    1987-01-01

    We conducted a radiographic survey of 117 competitive water ski jumpers to determine whether this sport can cause spinal column damage and, if so, whether damage is more likely to occur in those who participate during the period of spinal growth and development (age 15 years or younger). We found a high prevalence of two types of abnormality: Scheuermann (adolescent) spondylodystrophy (present in 26% of the skiers) and vertebral body wedging (present in 34%). The prevalence of adolescent spondylodystrophy increased with the number of years of participation in the sport at age 15 years or younger. Of those in this age group who had skied for 5 years or more, 57% showed adolescent spondylodystrophy; of those in the same age group who had skied for 9 years or more, 100% were affected. Wedged vertebrae increased as time of participation increased, regardless of the age at which exposure began. We conclude that competitive water ski jumping may damage the spinal column and that consideration should be given to regulating this sport, particularly for children. (orig.)

  15. Spinal column damage from water ski jumping.

    Science.gov (United States)

    Horne, J; Cockshott, W P; Shannon, H S

    1987-01-01

    We conducted a radiographic survey of 117 competitive water ski jumpers to determine whether this sport can cause spinal column damage and, if so, whether damage is more likely to occur in those who participate during the period of spinal growth and development (age 15 years or younger). We found a high prevalence of two types of abnormality: Scheuermann (adolescent) spondylodystrophy (present in 26% of the skiers) and vertebral body wedging (present in 34%). The prevalence of adolescent spondylodystrophy increased with the number of years of participation in the sport at age 15 years or younger. Of those in this age group who had skied for 5 years or more, 57% showed adolescent spondylodystrophy; of those in the same age group who had skied for 9 years or more, 100% were affected. Wedged vertebrae increased as time of participation increased, regardless of the age at which exposure began. We conclude that competitive water ski jumping may damage the spinal column and that consideration should be given to regulating this sport, particularly for children.

  16. Picobubble column flotation of fine coal

    Energy Technology Data Exchange (ETDEWEB)

    Daniel Tao; Samuel Yu; Xiaohua Zhou; R.Q. Honaker; B.K. Parekh [University of Kentucky, Lexington, KY (United States). Department of Mining Engineering

    2008-01-15

    Froth flotation is widely used in the coal industry to clean -28 mesh (0.6 mm) or -100 mesh (0.15 mm) fine coal. A successful recovery of particles by flotation depends on efficient particle-bubble collision and attachment with minimal subsequent particle detachment from the bubble. Flotation is effective in a narrow size range, nominally 10-100 μm, beyond which the flotation efficiency drops sharply. A fundamental analysis has shown that use of picobubbles can significantly improve the flotation recovery of particles by increasing the probability of collision and attachment and reducing the probability of detachment. A specially designed column with a picobubble generator has been developed for enhanced recovery of fine coal particles. Picobubbles were produced based on the hydrodynamic cavitation principle. Experimental results have shown that the use of picobubbles in a 5-cm diameter column flotation increased the combustible recovery of a highly floatable coal by up to 10% and that of a poorly floatable coal by up to 40%, depending on the feed rate, collector dosage, and other flotation conditions. 14 refs.
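
    The fundamental analysis referred to above rests on the standard flotation probability balance (a textbook relation, not specific to this report), in which the overall collection probability is

        P = P_c \, P_a \, (1 - P_d)

    where P_c, P_a and P_d are the probabilities of particle-bubble collision, attachment and detachment. Picobubbles raise P_c and P_a and lower P_d, which is the stated mechanism for the recovery gains.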

  17. Spinal column damage from water ski jumping

    Energy Technology Data Exchange (ETDEWEB)

    Horne, J.; Cockshott, W.P.; Shannon, H.S.

    1987-11-01

    We conducted a radiographic survey of 117 competitive water ski jumpers to determine whether this sport can cause spinal column damage and, if so, whether damage is more likely to occur in those who participate during the period of spinal growth and development (age 15 years or younger). We found a high prevalence of two types of abnormality: Scheuermann (adolescent) spondylodystrophy (present in 26% of the skiers) and vertebral body wedging (present in 34%). The prevalence of adolescent spondylodystrophy increased with the number of years of participation in the sport at age 15 years or younger. Of those in this age group who had skied for 5 years or more, 57% showed adolescent spondylodystrophy; of those in the same age group who had skied for 9 years or more, 100% were affected. Wedged vertebrae increased as time of participation increased, regardless of the age at which exposure began. We conclude that competitive water ski jumping may damage the spinal column and that consideration should be given to regulating this sport, particularly for children. (orig.)

  18. Hydrogen isotope exchange in metal hydride columns

    International Nuclear Information System (INIS)

    Wiswall, R.; Reilly, J.; Bloch, F.; Wirsing, E.

    1977-01-01

    Several metal hydrides were shown to act as chromatographic media for hydrogen isotopes. The procedure was to equilibrate a column of hydride with flowing hydrogen, inject a small quantity of tritium tracer, and observe its elution behavior. Characteristic retention times were found. From these and the extent of widening of the tritium band, the heights equivalent to a theoretical plate could be calculated. Values of around 1 cm were obtained. The following are the metals whose hydrides were studied, together with the temperature ranges in which chromatographic behavior was observed: vanadium, 0 to 70°C; zirconium, 500 to 600°C; LaNi5, -78 to +30°C; Mg2Ni, 300 to 375°C; palladium, 0 to 70°C. A dual-temperature isotope separation process based on hydride chromatography was demonstrated. In this, a column was caused to cycle between two temperatures while being supplied with a constant stream of tritium-traced hydrogen. Each half-cycle was continued until ''breakthrough,'' i.e., until the tritium concentration in the effluent was the same as that in the feed. Up to that point, the effluent was enriched or depleted in tritium, by up to 20%.
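
    The quoted plate heights follow from standard chromatographic relations. Assuming a Gaussian elution band, the plate number N is obtained from the retention time t_R and the band width at half height w_{1/2}, and the height equivalent to a theoretical plate (HETP) from the column length L:

        N = 5.54 \, (t_R / w_{1/2})^2 , \qquad \mathrm{HETP} = L / N

    which is how values of around 1 cm would be derived from the observed retention times and band widening.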

  19. Improved focusing-and-deflection columns

    International Nuclear Information System (INIS)

    Mui, P.H.; Szilagyi, M.

    1995-01-01

    Our earlier design procedures for constructing quadrupole columns are further expanded to include octupole corrector units and ''octupole'' deflectors with no third-order harmonics. The additional complications are finer partitioning of the plates and increased number of voltage controllers. Two sample designs, one having only the additional octupole deflectors and one having both the deflectors and the correctors, are presented and compared to our previous quadrupole system. The additional octupole components are shown to be capable of increasing the current density from 30% to more than 300% for a four-plate system, designed to focus and scan the electron beam over a circular area of 0.25 mm radius. The electron beam is assumed to have an initial divergence of ±2.3 mrad, an initial energy of 6 kV, a total energy spread of 1 eV, and a final acceleration of 30 keV. These systems are then slightly reoptimized for a superficial comparison with the commercially available column by Micrion Corporation. The numerical results indicate a potential for substantial improvements, demonstrating the power of this design procedure. Finally, a discussion is presented on how the individual components can interact with each other to reduce the various aberrations. copyright 1995 American Vacuum Society

  20. Synthesis of focusing-and-deflection columns

    International Nuclear Information System (INIS)

    Szilagyi, M.; Mui, P.H.

    1995-01-01

    Szilagyi and Szep have demonstrated that focusing lenses of high performance can be constructed from a column of circular plate electrodes. Later, Szilagyi modified that system to include dipole, quadrupole, and octupole components by partitioning each plate into eight equal sectors. It has already been shown that the additional quadrupole components can indeed bring about substantial improvements in the focusing of charged particle beams. In this article, that design procedure is expanded to construct columns capable of both focusing and deflecting particle beams by introducing additional dipole components. In this new design, the geometry of the system remains unchanged. The only extra complication is the demand for more individual controls of the sector voltages. Two sample designs, one for negative ions and one for electrons, are presented showing that in both cases a ±2.3 mrad diverging beam can be focused down to a spot of less than 50 nm in radius over a scanning circular area of radius 0.25 mm. The details of the two systems are given in Sec. IV along with the source conditions. The performance of the negative ion system is found to be comparable to the published data. For the relativistic electron system, the interaction of individual components to reduce various aberrations is investigated. copyright 1995 American Vacuum Society

  1. Okeanos Explorer (EX1402L2): Gulf of Mexico Mapping and Exploration

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Transit mapping operations will collect bathymetry, sub-bottom profiles, water column backscatter, and seafloor backscatter over the continental shelf and Claypile...

  2. NONLINEAR FINITE ELEMENT ANALYSIS OF NONSEISMICALLY DETAILED INTERIOR RC BEAM-COLUMN CONNECTION UNDER REVERSED CYCLIC LOAD

    Directory of Open Access Journals (Sweden)

    Teeraphot Supaviriyakit

    2017-11-01

    Full Text Available This paper presents a nonlinear finite element analysis of non-seismically detailed RC beam-column connections under reversed cyclic load. Tests of half-scale non-ductile reinforced concrete beam-column joints were conducted. The tested specimens represented those of actual mid-rise reinforced concrete frame buildings designed according to the non-seismic provisions of the ACI building code. The test results show that specimens representing small and medium column tributary areas failed in brittle joint shear, while the specimen representing a large column tributary area failed in ductile flexure even though no ductile reinforcement details were provided. The nonlinear finite element analysis was applied to simulate the behavior of the specimens. The finite element analysis employs the smeared crack approach for modeling beam, column and joint, and the discrete crack approach for modeling the interface between beam and joint face. The nonlinear constitutive models of the reinforced concrete elements consist of a coupled tension-compression model for normal forces orthogonal and parallel to the crack, and a shear transfer model to capture the shear sliding mechanism. The finite element model shows good agreement with the test results in terms of load-displacement relations, hysteretic loops, cracking process and the failure mode of the tested specimens. The finite element analysis clarifies that the joint shear failure was caused by the collapse of the principal diagonal concrete strut.

  3. Visualisation of air–water bubbly column flow using array Ultrasonic Velocity Profiler

    Directory of Open Access Journals (Sweden)

    Munkhbat Batsaikhan

    2017-11-01

    Full Text Available In the present work, an experimental study of bubbly two-phase flow in a rectangular bubble column was performed using two ultrasonic array sensors, which can measure the instantaneous velocity of gas bubbles on multiple measurement lines. After the sound pressure distribution of the sensors had been evaluated with a needle hydrophone technique, the array sensors were applied to the two-phase bubble column. To assess the accuracy of the measurement system with array sensors for one- and two-dimensional velocity, a simultaneous measurement was performed with an optical measurement technique called particle image velocimetry (PIV). Experimental results showed that the accuracy of the measurement system with array sensors is under 10% for one-dimensional velocity profile measurement compared with the PIV technique. The accuracy of the system was estimated to be under 20% along the mean flow direction in the case of two-dimensional vector mapping.

  4. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  5. GPU Parallel Bundle Block Adjustment

    Directory of Open Access Journals (Sweden)

    ZHENG Maoteng

    2017-09-01

    Full Text Available To deal with massive data in photogrammetry, we introduce GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton methods are also applied to decrease the number of iterations when solving the normal equation. A brand-new workflow of bundle adjustment is developed to utilize GPU parallel computing technology. Our method avoids the storage and inversion of the big normal matrix, and computes the normal matrix in real time. The proposed method not only largely decreases the memory requirement of the normal matrix, but also largely improves the efficiency of bundle adjustment. It also achieves the same accuracy as the conventional method. Preliminary experimental results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 1.5 minutes while achieving sub-pixel accuracy.
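
    The memory saving claimed above comes from the fact that conjugate-gradient methods touch the normal matrix only through matrix-vector products, so the matrix never needs to be stored or inverted explicitly. A serial sketch of preconditioned conjugate gradient with a Jacobi preconditioner on a toy 3x3 system is given below; it shows the iteration structure only, not the paper's GPU implementation:

        #include <stdio.h>

        #define N 3
        /* y = A*x for a small dense SPD matrix (stand-in for the normal matrix). */
        static void matvec(const double A[N][N], const double *x, double *y)
        {
            for (int i = 0; i < N; i++) {
                y[i] = 0.0;
                for (int j = 0; j < N; j++) y[i] += A[i][j] * x[j];
            }
        }

        int main(void)
        {
            double A[N][N] = {{4,1,0},{1,3,1},{0,1,2}};   /* toy SPD system */
            double b[N] = {1, 2, 3}, x[N] = {0, 0, 0};
            double r[N], z[N], p[N], Ap[N], rz = 0.0;
            for (int i = 0; i < N; i++) {
                r[i] = b[i];              /* r = b - A*x, with x = 0 */
                z[i] = r[i] / A[i][i];    /* Jacobi preconditioner M = diag(A) */
                p[i] = z[i];
                rz += r[i] * z[i];
            }
            for (int k = 0; k < 100 && rz > 1e-20; k++) {
                matvec(A, p, Ap);
                double pAp = 0.0, rz_new = 0.0;
                for (int i = 0; i < N; i++) pAp += p[i] * Ap[i];
                double alpha = rz / pAp;
                for (int i = 0; i < N; i++) {
                    x[i] += alpha * p[i];
                    r[i] -= alpha * Ap[i];
                    z[i]  = r[i] / A[i][i];
                    rz_new += r[i] * z[i];
                }
                for (int i = 0; i < N; i++) p[i] = z[i] + (rz_new / rz) * p[i];
                rz = rz_new;
            }
            printf("x = %f %f %f\n", x[0], x[1], x[2]);
            return 0;
        }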

  6. The effects of carbide column to swelling potential and Atterberg limit on expansive soil with column to soil drainage

    Science.gov (United States)

    Muamar Rifa'i, Alfian; Setiawan, Bambang; Djarwanti, Noegroho

    2017-12-01

    Expansive soil has a potential for swelling and shrinking due to changes in water content. Such behavior can exert enough force on the building above to cause damage. Columns filled with additives such as calcium carbide are used to reduce the negative impact of expansive soil behavior. This study aims to determine the effect of carbide columns on expansive soil. Observations were made on swelling and on the spreading of carbide in the soil. Seven carbide columns with 5 cm diameter and 20 cm height were installed into the soil with an inter-column spacing of 8.75 cm. Wetting was done through a pipe at the center of each carbide column for 20 days. Observations were conducted on expansive soil without carbide columns and on expansive soil with carbide columns. The results show that the addition of carbide columns can reduce the percentage of swelling by 4.42%. Wetting through the center of the carbide column helps spread the carbide into the soil. The use of carbide columns can also decrease the rate of soil expansivity: after the addition of the carbide columns, the plasticity index decreased from 71.76% to 4.3% and the shrinkage index decreased from 95.72% to 9.2%.

  7. A tandem parallel plate analyzer

    International Nuclear Information System (INIS)

    Hamada, Y.; Fujisawa, A.; Iguchi, H.; Nishizawa, A.; Kawasumi, Y.

    1996-11-01

    By a new modification of a parallel plate analyzer, second-order focus is obtained at an arbitrary injection angle. This kind of analyzer with a small injection angle has the advantage of a small operational voltage, compared to the Proca and Green analyzer where the injection angle is 30 degrees. Thus, the newly proposed analyzer will be very useful for the precise energy measurement of high energy particles in the MeV range. (author)
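
    The voltage advantage can be seen from elementary projectile kinematics in an ideal parallel-plate mirror (a generic relation, not the modified geometry of the paper): a particle of kinetic energy E injected at angle \theta into a uniform retarding field E_p travels a distance

        x = \frac{2E}{qE_p} \sin 2\theta

    between entrance and exit, so for a fixed analyzer length the required field, and hence the plate voltage, scales with \sin 2\theta; a small injection angle therefore permits a small operating voltage.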

  8. High-speed parallel counter

    International Nuclear Information System (INIS)

    Gus'kov, B.N.; Kalinnikov, V.A.; Krastev, V.R.; Maksimov, A.N.; Nikityuk, N.M.

    1985-01-01

    This paper describes a high-speed parallel counter that has 31 inputs and 15 outputs and is implemented with series 500 integrated circuits. The counter is designed for fast sampling of events according to the number of particles that pass simultaneously through the hodoscopic plane of the detector. The minimum delay of the output signals relative to the input is 43 nsec. The duration of the output signals can be varied from 75 to 120 nsec.

  9. An anthropologist in parallel structure

    Directory of Open Access Journals (Sweden)

    Noelle Molé Liston

    2016-08-01

    Full Text Available The essay examines the parallels between Molé Liston's studies on labor and precarity in Italy and the United States' anthropology job market. Probing the way economic shifts reshaped the field of the anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value of studying the hardships and daily lives of non-western populations in Europe.

  10. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,...

  11. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we use the Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.
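
    Written out, with F_j the (nonlinear) operator mapping the unknowns to the measured k-space data y_j of coil j, the Landweber-Kaczmarz iteration cycles through the coil equations as

        x_{k+1} = x_k + \omega \, F_{[k]}'(x_k)^{*} \big( y_{[k]} - F_{[k]}(x_k) \big), \qquad [k] = k \bmod n_{coils}

    where \omega is a step size and ^{*} denotes the adjoint. This is the standard form of the iteration, written in our notation rather than the authors'.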

  12. Wakefield calculations on parallel computers

    International Nuclear Information System (INIS)

    Schoessow, P.

    1990-01-01

    The use of parallelism in the solution of wakefield problems is illustrated for two different computer architectures (SIMD and MIMD). Results are given for finite difference codes which have been implemented on a Connection Machine and an Alliant FX/8 and which are used to compute wakefields in dielectric loaded structures. Benchmarks on code performance are presented for both cases. 4 refs., 3 figs., 2 tabs

  13. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  14. Parallel processing of genomics data

    Science.gov (United States)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, and the analysis of this enormous flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face those issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data, able to handle high-dimensional data with good response times. The proposed system is able to find statistically significant biological markers that discriminate classes of patients that respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.

  15. Mechanized sephadex LH-20 multiple column chromatography as a prerequisite to automated multi-steroid radioimmunoassays

    International Nuclear Information System (INIS)

    Sippell, W.G.; Bidlingmaier, F.; Knorr, D.

    1977-01-01

    In order to establish a procedure for the simultaneous determination of all major corticosteroid hormones and their immediate biological precursors in the same plasma sample, two different mechanized methods for the simultaneous isolation of aldosterone (A), corticosterone (B), 11-deoxycorticosterone (DOC), progesterone (P), 17-hydroxyprogesterone (17-OHP), 11-deoxycortisol (S), cortisol (F), and cortisone (E) from the methylene chloride extracts of 0.1 to 2.0 ml plasma samples have been developed. In both methods, eluate fractions of each of the isolated steroids are automatically pooled and collected from all parallel columns by one programmable linear fraction collector. Due to the high reproducibility of the elution patterns both between different parallel columns and between 30 to 40 consecutive elutions, mean recoveries of tritiated steroids including extraction are 60 to 84% after a single elution and still over 50% after an additional chromatography on 40 cm LH-20 columns, with coefficients of variation below 15%. Thus, the eight steroids can be completely isolated from each of ten plasma extracts within 3 to 4 hours, yielding 80 samples readily prepared for subsequent quantitation by radioimmunoassay. (orig./AJ) [de

  16. Heat Transfer Analysis for a Fixed CST Column

    International Nuclear Information System (INIS)

    Lee, S.Y.

    2004-01-01

    In support of a small column ion exchange (SCIX) process for the Savannah River Site waste processing program, a transient two-dimensional heat transfer model that includes conduction but neglects the convective cooling mechanism inside the crystalline silicotitanate (CST) column has been constructed, and heat transfer calculations have been made for the present design configurations. For this situation, a no-process-flow condition through the column was assumed as one of the reference conditions for the simulation of a loss-of-flow accident. A series of modeling calculations has been performed using a computational heat transfer approach. Results for the baseline model indicate that transit times to reach the 130 degrees Celsius maximum temperature of the CST-salt solution column are about 96 hours when the 20-in CST column with a 300 Ci/liter heat generation source and a 25 degrees Celsius initial column temperature is cooled by natural convection of external air as the primary heat transfer mechanism. The modeling results for the 28-in column equipped with a water jacket on the external wall surface of the column and a water coolant pipe at the center of the CST column demonstrate that a column loaded with a 300 Ci/liter heat source can be maintained non-boiling indefinitely. Sensitivity calculations for several alternate column sizes, heat loads of the packed column, engineered cooling systems, and various ambient conditions at the exterior wall of the column have been performed under the reference conditions of the CST-salt solution to assess the impact of those parameters on the peak temperatures of the packed column for a given transient time. The results indicate that a water coolant pipe at the center of the CST column filled with salt solution is the most effective among the potential design parameters related to the thermal dissipation of the decay heat load. It is noted that the cooling mechanism at the wall boundary of the column has significant...
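
    The conduction-only model described above solves, in standard notation (the report's exact formulation may differ in detail), the two-dimensional transient heat equation with a volumetric source,

        \rho c_p \frac{\partial T}{\partial t} = k \left( \frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} \right) + q'''

    where q''' is the volumetric heat generation corresponding to the 300 Ci/liter loading, k is the effective conductivity of the CST-salt solution column, and the boundary conditions represent the natural convection or engineered cooling at the wall.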

  17. Design and implementation of a micron-sized electron column fabricated by focused ion beam milling

    Energy Technology Data Exchange (ETDEWEB)

    Wicki, Flavio, E-mail: flavio.wicki@physik.uzh.ch; Longchamp, Jean-Nicolas; Escher, Conrad; Fink, Hans-Werner

    2016-01-15

    We have designed, fabricated and tested a micron-sized electron column with an overall length of about 700 microns comprising two electron lenses: a micro-lens with a minimal bore of 1 micron followed by a second lens with a bore of up to 50 microns in diameter to shape a coherent low-energy electron wave front. The design criteria follow the notion of scaling down source size, lens dimensions and kinetic electron energy to minimize spherical aberrations and ensure a parallel coherent electron wave front. All lens apertures have been milled employing a focused ion beam and could thus be precisely aligned within a tolerance of about 300 nm from the optical axis. Experimentally, the final column shapes a quasi-planar wave front with a minimal full divergence angle of 4 mrad and electron energies as low as 100 eV. - Highlights: • Electron optics • Scaling laws • Low-energy electrons • Coherent electron beams • Micron-sized electron column.

  18. Investigating the Effect of Column Geometry on Separation Efficiency using 3D Printed Liquid Chromatographic Columns Containing Polymer Monolithic Phases.

    Science.gov (United States)

    Gupta, Vipul; Beirne, Stephen; Nesterenko, Pavel N; Paull, Brett

    2018-01-16

    The effect of column geometry on liquid chromatographic separations using 3D printed liquid chromatographic columns with in-column polymerized monoliths has been studied. Three different liquid chromatographic columns were designed and 3D printed in titanium as 2D serpentine, 3D spiral, and 3D serpentine columns of equal length and i.d. Successful in-column thermal polymerization of mechanically stable poly(BuMA-co-EDMA) monoliths was achieved within each design without any significant structural differences between phases. Van Deemter plots indicated higher efficiencies for the 3D serpentine chromatographic columns with higher-aspect-ratio turns at higher linear velocities and shorter analysis times as compared to their counterpart columns with lower-aspect-ratio turns. Computational fluid dynamic simulations of a basic monolithic structure indicated 44%, 90%, 100%, and 118% higher flow through narrow channels in the curved monolithic configuration as compared to the straight monolithic configuration at linear velocities of 1, 2.5, 5, and 10 mm s⁻¹, respectively. Isocratic RPLC separations with the 3D serpentine column resulted in an average 23% and 245% (8 solutes) increase in the number of theoretical plates as compared to the 3D spiral and 2D serpentine columns, respectively. Gradient RPLC separations with the 3D serpentine column resulted in an average 15% and 82% (8 solutes) increase in the peak capacity as compared to the 3D spiral and 2D serpentine columns, respectively. Use of the 3D serpentine column at a higher flow rate, as compared to the 3D spiral column, provided a 58% reduction in the analysis time and a 74% increase in the peak capacity for the isocratic separations of small molecules and the gradient separations of proteins, respectively.
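
    The van Deemter analysis mentioned above fits the plate height H against the mobile-phase linear velocity u with the usual three-term expression (standard chromatography, not a result derived in the paper):

        H = A + \frac{B}{u} + C u

    where A captures eddy dispersion, B/u longitudinal diffusion, and Cu resistance to mass transfer; the higher efficiencies reported for the high-aspect-ratio serpentine turns at high linear velocities correspond to a flatter C-term branch of this curve.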

  19. Column properties and flow profiles of a flat, wide column for high-pressure liquid chromatography.

    Science.gov (United States)

    Mriziq, Khaled S; Guiochon, Georges

    2008-04-11

    The design and the construction of a pressurized, flat, wide column for high-performance liquid chromatography (HPLC) are described. This apparatus, which is derived from instruments that implement over-pressured thin layer chromatography, can carry out only uni-dimensional chromatographic separations. However, it is intended to be the first step in the development of more powerful instruments that will be able to carry out two-dimensional chromatographic separations, in which case, the first separation would be a space-based separation, LC(x), taking place along one side of the bed and the second separation would be a time-based separation, LC(t), as in classical HPLC but proceeding along the flat column, not along a tube. The apparatus described consists of a pressurization chamber made of a Plexiglas block and a column chamber made of stainless steel. These two chambers are separated by a thin Mylar membrane. The column chamber is a cavity which is filled with a thick layer (ca. 1mm) of the stationary phase. Suitable solvent inlet and outlet ports are located on two opposite sides of the sorbent layer. The design allows the preparation of a homogenous sorbent layer suitable to be used as a chromatographic column, the achievement of effective seals of the stationary phase layer against the chamber edges, and the homogenous flow of the mobile phase along the chamber. The entire width of the sorbent layer area can be used to develop separations or elute samples. The reproducible performance of the apparatus is demonstrated by the chromatographic separations of different dyes. This instrument is essentially designed for testing detector arrays to be used in a two-dimensional LC(x) x LC(t) instrument. The further development of two-dimension separation chromatographs based on the apparatus described is sketched.

  20. Stiffness Analysis and Comparison of 3-PPR Planar Parallel Manipulators with Actuation Compliance

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2012-01-01

    In this paper, the stiffness of the 3-PPR planar parallel manipulator (PPM) is analyzed with consideration of nonlinear actuation compliance. The characteristics of the stiffness matrix pertaining to planar parallel manipulators are analyzed and discussed. Graphic representation of the stiffness characteristics by means of translational and rotational stiffness mapping is developed. The developed method is illustrated with an unsymmetrical 3-PPR PPM, being compared with its structure-symmetrical counterpart.

  1. Strengthening of Steel Columns under Load: Torsional-Flexural Buckling

    Directory of Open Access Journals (Sweden)

    Martin Vild

    2016-01-01

    Full Text Available The paper presents experimental and numerical research into the strengthening of steel columns under load using welded plates. So far, experimental research in this field has been limited mostly to flexural buckling of columns, where preload had little effect on the column load resistance. This paper focuses on the local buckling and torsional-flexural buckling of columns. Three sets of three columns each were tested. Two sets, corresponding to the base section (D) and strengthened section (E), were tested without preloading and were used for comparison. Columns from set (F) were first preloaded to the load corresponding to half of the load resistance of the base section (D). Then the columns were strengthened and, after they cooled, they were loaded to failure. The columns strengthened under load (F) had similar average resistance to the columns welded without preloading (E), meaning the preload affects even members susceptible to local buckling and torsional-flexural buckling only slightly. This is the same behaviour as observed for the columns in previous research into flexural buckling. The study includes results from finite element models of the problem created in ANSYS software. The results obtained from the experiments and numerical simulations were compared.

  2. Materials performance in prototype Thermal Cycling Absorption Process (TCAP) columns

    International Nuclear Information System (INIS)

    Clark, E.A.

    1992-01-01

    Two prototype Thermal Cycling Absorption Process (TCAP) columns have been metallurgically examined after retirement to determine the causes of failure and to evaluate the performance of the column container materials in this application. Leaking of the fluid heating and cooling subsystems caused retirement of both TCAP columns, not leaking of the main hydrogen-containing column. The aluminum block design TCAP column (AHL block TCAP) used in the Advanced Hydride Laboratory, Building 773-A, failed in one nitrogen inlet tube that was crimped during fabrication, which led to fatigue crack growth in the tube and subsequent leaking of nitrogen from this tube. The Third Generation stainless steel design TCAP column (Third Generation TCAP), operated in 773-A room C-061, failed in a braze joint between the freon heating and cooling tubes (made of copper) and the main stainless steel column. In both cases, stresses from thermal cycling and local constraint likely caused the nucleation and growth of fatigue cracks. No materials compatibility problems between palladium-coated kieselguhr (the material contained in the TCAP column) and either aluminum or stainless steel column materials were observed. The aluminum-stainless steel transition junction appeared to be unaffected by service in the AHL block TCAP. Also, no evidence of cracking was observed in the AHL block TCAP in a location expected to experience the highest thermal shock fatigue in this design. It is important to limit thermal stresses caused by constraint in hydride systems designed to work by temperature variation, such as hydride storage beds and TCAP columns.

  3. Refreshment topics II: Design of distillation columns

    Directory of Open Access Journals (Sweden)

    Milojević Svetomir

    2006-01-01

    Full Text Available For distillation column design it is necessary to define all the variable parameters, such as component concentrations in the different streams, temperatures, pressures, and mass and energy flows, which are used to represent the separation process of a specific system. They are related to each other according to specific laws, and if the number of such parameters exceeds the number of their relationships, some of them must be specified in advance or some constraints assumed for the mass balance, the energy balance, phase equilibria or chemical equilibria. The specific elements which constitute a distillation unit must be known in order to define the number of design parameters, as well as any additional apparatus necessary to realize the distillation. Each separate apparatus can be designed and constructed only if all the necessary variable parameters for that unit are defined. This is the right route to solving a distillation unit in many different cases. The construction of a distillation unit requires very good knowledge of mass, heat and momentum transfer phenomena. Moreover, the designer needs to know which kind of apparatus will be used in the distillation unit to realize a specific production process. The most complicated apparatus in a rectification unit is the distillation column. Depending on the complexity of the separation process, one, two or more columns are often used. Additional equipment comprises heat exchangers (reboilers, condensers, cooling systems, heaters), separators, tanks for reflux distribution, and tanks and pumps for feed transportation. Such equipment is connected by pipes and valves, and for the normal operation of a distillation unit, instruments for measuring the flow rate, temperature and pressure are also required. Problems which might arise during the determination and selection of such apparatus and their number require knowledge of the specific systems which must...
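
    The bookkeeping described above is the usual degrees-of-freedom analysis: if the model of the unit contains N_V variables related by N_E independent equations, then

        N_D = N_V - N_E

    design variables must be fixed in advance, as specifications or constraints, before the remaining unknowns can be solved for.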

  4. A New ENSO Index Derived from Satellite Measurements of Column Ozone

    Science.gov (United States)

    Ziemke, J. R.; Chandra, S.; Oman, L. D.; Bhartia, P. K.

    2010-01-01

    Column ozone measured in tropical latitudes from the Nimbus 7 total ozone mapping spectrometer (TOMS), Earth Probe TOMS, solar backscatter ultraviolet (SBUV), and Aura ozone monitoring instrument (OMI) are used to derive an El Nino-Southern Oscillation (ENSO) index. This index, which covers a time period from 1979 to the present, is defined as the Ozone ENSO Index (OEI) and is the first developed from atmospheric trace gas measurements. The OEI is constructed by first averaging monthly mean column ozone over two broad regions in the western and eastern Pacific and then taking their difference. This differencing yields a self-calibrating ENSO index which is independent of individual instrument calibration offsets and drifts in measurements over the long record. The combined Aura OMI and MLS ozone data confirm that zonal variability in total column ozone in the tropics caused by ENSO events lies almost entirely in the troposphere. As a result, the OEI can be derived directly from total column ozone instead of tropospheric column ozone. For clear-sky ozone measurements a +1 K change in the Nino 3.4 index corresponds to a +2.9 Dobson unit (DU) change in the OEI, while a +1 hPa change in the SOI coincides with a -1.7 DU change in the OEI. For ozone measurements under all cloud conditions these numbers are +2.4 DU and -1.4 DU, respectively. As an ENSO index based upon ozone, it is potentially useful in evaluating climate models predicting long term changes in ozone and other trace gases.
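
    In symbols, with \bar{\Omega} denoting the deseasonalized monthly mean column ozone averaged over a region, the index described above is (our notation, not the paper's)

        \mathrm{OEI}(t) = \bar{\Omega}_{west}(t) - \bar{\Omega}_{east}(t)

    and any calibration offset or drift common to both regional averages cancels in the difference, which is the self-calibrating property claimed for the index.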

  5. Projective mapping

    DEFF Research Database (Denmark)

    Dehlholm, Christian; Brockhoff, Per B.; Bredie, Wender Laurentius Petrus

    2012-01-01

    Projective Mapping (Risvik et al., 1994) and its Napping (Pagès, 2003) variations have become increasingly popular in the sensory field for rapid collection of spontaneous product perceptions. It has been applied in variations which are sometimes caused by the purpose of the analysis and sometimes by the practical testing environment. As a result of the changes, a reasonable assumption would be to question the consequences caused by the variations in method procedures. Here, the aim is to highlight the proven or hypothetical consequences of variations of Projective Mapping. The presented variations include... instructions, and influence heavily the product placements and the descriptive vocabulary (Dehlholm et al., 2012b). The type of assessors performing the method influences results, with an extra aspect in Projective Mapping compared to more analytical tests, as the given spontaneous perceptions are much dependent...

  6. Intro to Google Maps and Google Earth

    Directory of Open Access Journals (Sweden)

    Jim Clifford

    2013-12-01

    Full Text Available Google My Maps and Google Earth provide an easy way to start creating digital maps. With a Google Account you can create and edit personal maps by clicking on My Places. In My Maps you can choose between several different base maps (including the standard satellite, terrain, or standard maps) and add points, lines and polygons. It is also possible to import data from a spreadsheet, if you have columns with geographical information (i.e. longitudes and latitudes or place names). This automates a formerly complex task known as geocoding. Not only is this one of the easiest ways to begin plotting your historical data on a map, but it also has the power of Google's search engine. As you read about unfamiliar places in historical documents, journal articles or books, you can search for them using Google Maps. It is then possible to mark numerous locations and explore how they relate to each other geographically. Your personal maps are saved by Google (in their cloud), meaning you can access them from any computer with an internet connection. You can keep them private or embed them in your website or blog. Finally, you can export your points, lines, and polygons as KML files and open them in Google Earth or Quantum GIS.

  7. Retention of nitrous gases in scrubber columns

    International Nuclear Information System (INIS)

    Nakazone, A.K.; Costa, R.C.; Lobao, A.S.T.; Matsuda, H.T.; Araujo, B.F. de

    1988-01-01

    During UO2 dissolution in nitric acid, different species of NOx are released. The off-gas can either be refluxed to the dissolver or be released and retained on special columns. The final composition of the solution is the main parameter to take into account. A process for nitrous gas retention using scrubber columns containing H2O or diluted HNO3 is presented. Chemiluminescence measurement was employed for NOx evaluation before and after scrubbing. Gas flow, temperature and residence time are the main parameters considered in this paper. For the dissolution of 100 g UO2 in 8 M nitric acid, a 6 NL/h O2 flow was the best condition for the NO/NO2 oxidation with maximum absorption in the scrubber columns. (author) [pt

  8. Education and training column: the learning collaborative.

    Science.gov (United States)

    MacDonald-Wilson, Kim L; Nemec, Patricia B

    2015-03-01

    This column describes the key components of a learning collaborative, with examples from the experience of 1 organization. A learning collaborative is a method for management, learning, and improvement of products or processes, and is a useful approach to implementation of a new service design or approach. This description draws from published material on learning collaboratives and the authors' experiences. The learning collaborative approach offers an effective method to improve service provider skills, provide support, and structure environments to result in lasting change for people using behavioral health services. This approach is consistent with psychiatric rehabilitation principles and practices, and serves to increase the overall capacity of the mental health system by structuring a process for discovering and sharing knowledge and expertise across provider agencies. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  9. [Lateral column lengthening osteotomy of calcaneus].

    Science.gov (United States)

    Hintermann, B

    2015-08-01

    Lengthening of the lateral column for adduction of forefoot and restoration of the medial arch. Stabilization of the ankle joint complex. Supple flatfoot deformity (posterior tibial tendon dysfunction stage II). Instability of the medial ankle joint complex (superficial deltoid and spring ligament). Posttraumatic valgus and pronation deformity of the foot. Rigid flatfoot deformity (posterior tibial tendon dysfunction stage III and IV). Talocalcaneal and naviculocalcaneal coalition. Osteoarthritis of calcaneocuboid joint. Exposition of calcaneus at sinus tarsi. Osteotomy through sinus tarsi and widening until desired correction of the foot is achieved. Insertion of bone graft. Screw fixation. Immobilization in a cast for 6 weeks. Weight-bearing as tolerated from the beginning. In the majority of cases, part of hindfoot reconstruction. Reliable and stable correction. Safe procedure with few complications.

  10. Yield stress independent column buckling curves

    DEFF Research Database (Denmark)

    Stan, Tudor‐Cristian; Jönsson, Jeppe

    2017-01-01

    Using GMNIA and shell finite element modelling of steel columns it is ascertained that the buckling curves for given imperfections and residual stresses are not only dependent on the relative slenderness ratio and the cross section shape but also on the magnitude of the yield stress. The influence of the yield stress is to some inadequate degree taken into account in the Eurocode by specifying that steel grades of S460 and higher all belong to a common set of "raised" buckling curves. This is not satisfying as it can be shown theoretically that the current Eurocode formulation misses an epsilon factor in the definition of the normalised imperfection magnitudes. By introducing this factor it seems that the GMNIA analysis and knowledge of the independency of residual stress levels on the yield stress can be brought together and give results showing consistency between numerical modelling and a simple modified...
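
    The epsilon factor in question is the Eurocode 3 material parameter

        \varepsilon = \sqrt{235 / f_y}   (f_y, the yield stress, in MPa)

    and the argument of the paper is that the normalised imperfection magnitudes entering the buckling-curve formulation should carry this factor, so that the resulting curves become independent of the yield stress.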

  11. Calculation of a TBP extraction column

    International Nuclear Information System (INIS)

    Lima Soares, M.L. de.

    1973-01-01

    Problems involving the number of stages in an extraction column and the equipment needed in most aqueous methods of reprocessing nuclear fuels were studied. A solution for the separation of uranium from fission products in a feed solution that contains these components plus nitric acid, thorium and protactinium is obtained. The program has peculiarities such as treatment of tracer components; acceptance of decontamination and recuperation factors better than the set values for the solution; occurrence of maximum concentrations; change of key component; a criterion for ending a section; corrections for interaction; and input data not including concentration estimates of the raffinate and organic extract, with a set of limitations for the concentrations based on input data to help convergence.

  12. Experimental validation of pulsed column inventory estimators

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.; Weh, R.; Eiben, K.; Dander, T.; Hakkila, E.A.

    1991-01-01

    Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. It is difficult to measure the inventory of the solvent contactors; therefore, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and have applied computer codes developed at Clemson University to analyze these data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA. 15 refs., 7 figs

  13. Affective Maps

    DEFF Research Database (Denmark)

    Salovaara-Moring, Inka

    In particular, mapping environmental damage, endangered species, and human-made disasters has become one of the focal points of affective knowledge production. These ‘more-than-human geographies’ practices include notions of species, space and territory, and movement towards a new political ecology. This type of digital cartography has been highlighted as the ‘processual turn’ in critical cartography, whereas in related computational journalism it can be seen as an interactive and iterative process of mapping complex and fragile ecological developments. This paper looks at computer-assisted cartography as part...

  14. A parallel algorithm for 3D dislocation dynamics

    International Nuclear Information System (INIS)

    Wang Zhiqiang; Ghoniem, Nasr; Swaminarayan, Sriram; LeSar, Richard

    2006-01-01

    Dislocation dynamics (DD), a discrete dynamic simulation method in which dislocations are the fundamental entities, is a powerful tool for investigation of plasticity, deformation and fracture of materials at the micron length scale. However, severe computational difficulties arising from complex, long-range interactions between these curvilinear line defects limit the application of DD in the study of large-scale plastic deformation. We present here the development of a parallel algorithm for accelerated computer simulations of DD. By representing dislocations as a 3D set of dislocation particles, we show here that the problem of an interacting ensemble of dislocations can be converted to a problem of a particle ensemble, interacting with a long-range force field. A grid using binary space partitioning is constructed to keep track of node connectivity across domains. We demonstrate the computational efficiency of the parallel micro-plasticity code and discuss how O(N) methods map naturally onto the parallel data structure. Finally, we present results from applications of the parallel code to deformation in single crystal fcc metals
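
    The scheme the abstract describes, converting interacting line defects into a particle ensemble and partitioning space so the work can be distributed, can be illustrated with a toy sketch. The following Python fragment (hypothetical names; not the authors' code, and the long-range part of the interaction is omitted) bins particles into cubic cells and evaluates each cell's short-range forces on a separate worker:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def cell_index(p, cell_size):
            # Map a 3D position to an integer cell key (a simple space partition).
            return tuple((p // cell_size).astype(int))

        def bin_particles(positions, cell_size):
            cells = {}
            for i, p in enumerate(positions):
                cells.setdefault(cell_index(p, cell_size), []).append(i)
            return cells

        def cell_forces(args):
            # Toy pairwise 1/r^2 interaction inside one cell; neighbour cells
            # and the long-range field are deliberately left out.
            positions, idx = args
            f = np.zeros((len(idx), 3))
            for a, i in enumerate(idx):
                for j in idx:
                    if i != j:
                        r = positions[i] - positions[j]
                        f[a] += r / np.linalg.norm(r) ** 3
            return idx, f

        if __name__ == "__main__":
            positions = np.random.default_rng(0).random((1000, 3))
            cells = bin_particles(positions, cell_size=0.25)
            with ProcessPoolExecutor() as ex:
                results = list(ex.map(cell_forces, [(positions, i) for i in cells.values()]))
            forces = np.zeros_like(positions)
            for idx, f in results:
                forces[idx] = f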

  15. Calibrationless Parallel Magnetic Resonance Imaging: A Joint Sparsity Model

    Directory of Open Access Journals (Sweden)

    Angshul Majumdar

    2013-12-01

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity maps for SENSE and SMASH and the interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we have proposed a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis- and synthesis-prior joint-sparsity problems; this work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain and an eight-channel Shepp-Logan phantom. Two sampling methods were used: variable density random sampling and non-Cartesian radial sampling. For the brain data an acceleration factor of 4 was used, and for the phantom an acceleration factor of 6. The reconstruction results were quantitatively evaluated using the normalised mean squared error between the reconstructed image and the original; the qualitative evaluation was based on the reconstructed images themselves. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.
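
    A joint-sparsity prior couples the channels so that each coefficient is kept or discarded across all receivers together. As a minimal illustration of the convex building block behind such formulations (the paper itself solves non-convex variants, so this is a sketch, not the authors' algorithm), row-wise group soft thresholding, the proximal operator of the l2,1 norm, looks like this:

        import numpy as np

        def prox_l21(X, tau):
            # Proximal operator of tau * ||X||_{2,1}: each row (one coefficient
            # across all channels) is shrunk or zeroed jointly.
            norms = np.linalg.norm(X, axis=1, keepdims=True)
            return X * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

        # Coefficients of an 8-channel acquisition; most rows become exactly zero.
        X = np.random.randn(1000, 8)
        X_joint_sparse = prox_l21(X, tau=2.0)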

  16. Novel field emission SEM column with beam deceleration technology

    Energy Technology Data Exchange (ETDEWEB)

    Jiruše, Jaroslav; Havelka, Miloslav; Lopour, Filip

    2014-11-15

    A novel field-emission SEM column has been developed that features Beam Deceleration Mode, high probe current and ultra-fast scanning. A new detection system in the column is introduced to detect the true secondary electron signal. The resolving power at low energy was doubled for conventional SEM optics and moderately improved for immersion optics. Application examples at low landing energies include change of contrast, imaging of non-conductive samples and thin layers. - Highlights: • A novel field-emission SEM column has been developed. • Implemented beam deceleration improves the SEM resolution at 1 keV twofold. • The new column maintains high analytical potential and a wide field of view. • Detectors integrated in the column allow gaining the true SE and BE signals separately. • Performance of the column is demonstrated on low energy applications.

  17. Novel field emission SEM column with beam deceleration technology

    International Nuclear Information System (INIS)

    Jiruše, Jaroslav; Havelka, Miloslav; Lopour, Filip

    2014-01-01

    A novel field-emission SEM column has been developed that features Beam Deceleration Mode, high probe current and ultra-fast scanning. A new detection system in the column is introduced to detect the true secondary electron signal. The resolving power at low energy was doubled for conventional SEM optics and moderately improved for immersion optics. Application examples at low landing energies include change of contrast, imaging of non-conductive samples and thin layers. - Highlights: • A novel field-emission SEM column has been developed. • Implemented beam deceleration improves the SEM resolution at 1 keV twofold. • The new column maintains high analytical potential and a wide field of view. • Detectors integrated in the column allow gaining the true SE and BE signals separately. • Performance of the column is demonstrated on low energy applications.

  18. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    The Force parallel programming language, designed for large-scale shared-memory multiprocessors, is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that have been ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples illustrating parallel programming approaches using the Force are also presented.
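
    A loose Python analogue of two constructs of this kind, a prescheduled parallel loop and a barrier, is sketched below (illustrative only; the Force itself is a macro layer over Fortran, not Python):

        import threading
        from concurrent.futures import ThreadPoolExecutor

        N_WORKERS = 4
        barrier = threading.Barrier(N_WORKERS)
        data = list(range(16))
        out = [0] * len(data)

        def worker(rank):
            # Prescheduled loop: worker `rank` takes iterations rank, rank+P, ...
            for i in range(rank, len(data), N_WORKERS):
                out[i] = data[i] ** 2
            barrier.wait()  # all workers synchronize here before continuing

        with ThreadPoolExecutor(max_workers=N_WORKERS) as ex:
            list(ex.map(worker, range(N_WORKERS)))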

  19. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler's ability to generate loop-parallel code. We use this compilation system to modify two sequential benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...
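
    The flavour of such a refactoring can be shown with a reduction: a single scalar accumulator serializes the loop, while chunked partial sums expose loop-level parallelism. A hypothetical Python sketch (the paper's benchmarks are compiled codes; this only mirrors the transformation):

        from concurrent.futures import ProcessPoolExecutor

        def dot_serial(x, y):
            # The scalar accumulator creates a serial dependence chain.
            s = 0.0
            for a, b in zip(x, y):
                s += a * b
            return s

        def _partial_dot(chunk):
            xs, ys = chunk
            return sum(a * b for a, b in zip(xs, ys))

        def dot_parallel(x, y, workers=4):
            # Refactored as a reduction over independent chunks; the price of
            # parallelism is floating-point reassociation.
            n = len(x)
            bounds = [(i * n // workers, (i + 1) * n // workers) for i in range(workers)]
            chunks = [(x[lo:hi], y[lo:hi]) for lo, hi in bounds]
            with ProcessPoolExecutor(max_workers=workers) as ex:
                return sum(ex.map(_partial_dot, chunks))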

  20. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics: Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  1. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  2. Energetic map

    International Nuclear Information System (INIS)

    2012-01-01

    This report presents the energy map of Uruguay as well as the different systems that delimit political frontiers in the region. The importance of the electrical system is addressed through studies of electricity, oil and derivatives, natural gas, energy potential, biofuels, and wind and solar energy.

  3. Necklace maps

    NARCIS (Netherlands)

    Speckmann, B.; Verbeek, K.A.B.

    2010-01-01

    Statistical data associated with geographic regions is nowadays globally available in large amounts and hence automated methods to visually display these data are in high demand. There are several well-established thematic map types for quantitative data on the ratio-scale associated with regions:

  4. Participatory maps

    DEFF Research Database (Denmark)

    Salovaara-Moring, Inka

    ... towards a new political ecology. This type of digital cartography has been highlighted as the ‘processual turn’ in critical cartography, whereas in related computational journalism it can be seen as an interactive and iterative process of mapping complex and fragile ecological developments. This paper...

  5. Circular Raft Footings Strengthened by Stone Columns under Static Loads

    OpenAIRE

    R. Ziaie Moayed; B. Mohammadi-Haji

    2016-01-01

    Stone columns have been widely employed to improve the load-settlement characteristics of soft soils. The results of two small-scale displacement-controlled loading tests on stone columns were used to validate numerical finite element simulations. Additionally, a series of numerical calculations of static loading were performed on strengthened raft footings to investigate the effects of using stone columns on the bearing capacity of footings. The bearing capacity of single and group of ...

  6. Estimation of bearing capacity of floating group of stone columns

    OpenAIRE

    Fattah, Mohammed Y.; Al-Neami, Mohammed A.; Shamel Al-Suhaily, Ahmed

    2017-01-01

    The stone column is one of the ground improvement techniques. The technique has proven performance, a short time schedule, durability, constructability and low cost, and has been used as a method of reinforcement of soft ground over the past 30 years. The bearing capacity of the stone column still carries a high level of uncertainty because the existing formulas for estimating the bearing capacity are general and do not take into consideration the type of the stone col...

  7. Uncertain Buckling Load and Reliability of Columns with Uncertain Properties

    DEFF Research Database (Denmark)

    Köylüoglu, H. U.; Nielsen, Søren R. K.; Cakmak, A. S.

    Continuous and finite element methods are utilized to determine the buckling load of columns with material and geometrical uncertainties, considering deterministic, stochastic and interval models for the bending rigidity of columns. When the bending rigidity field is assumed to be deterministic, t... for structural design, the lower bound is of crucial interest. The buckling load of fixed-free, simply supported, pinned-fixed and fixed-fixed columns and a sample frame are calculated.

  8. Texture mapping in a distributed environment

    NARCIS (Netherlands)

    Nicolae, Goga; Racovita, Zoea; Telea, Alexandru

    2003-01-01

    This paper presents a tool for texture mapping in a distributed environment. A parallelization method based on the master-slave model is described. The purpose of this work is to lower the image generation time in the synthesis of complex 3D scenes. The experimental results concerning the...

  9. Distributed Parallel Endmember Extraction of Hyperspectral Data Based on Spark

    Directory of Open Access Journals (Sweden)

    Zebin Wu

    2016-01-01

    Due to the increasing dimensionality and volume of remotely sensed hyperspectral data, the development of acceleration techniques for massive hyperspectral image analysis approaches is a very important challenge. Cloud computing offers many possibilities for distributed processing of hyperspectral datasets. This paper proposes a novel distributed parallel endmember extraction method based on iterative error analysis that utilizes cloud computing principles to efficiently process massive hyperspectral data. The proposed method takes advantage of technologies including the MapReduce programming model, the Hadoop Distributed File System (HDFS), and Apache Spark to realize a distributed parallel implementation of hyperspectral endmember extraction, which significantly accelerates the computation of hyperspectral processing and provides high-throughput access to large hyperspectral data. The experimental results, obtained by extracting endmembers of hyperspectral datasets on a cloud computing platform built on a cluster, demonstrate the effectiveness and computational efficiency of the proposed method.
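
    One step of an iterative-error-analysis style endmember search distributes naturally over an RDD: every pixel scores its own reconstruction error, and the worst-reconstructed spectrum becomes the next endmember. A hypothetical PySpark sketch (pixel_list and num_endmembers are assumed inputs; this is not the paper's implementation):

        import numpy as np
        from pyspark import SparkContext

        def reconstruction_error(spectrum, endmembers):
            # Residual of an unconstrained least-squares fit to the current set.
            if not endmembers:
                return float(np.linalg.norm(spectrum))
            E = np.array(endmembers).T                       # bands x endmembers
            coeffs, *_ = np.linalg.lstsq(E, spectrum, rcond=None)
            return float(np.linalg.norm(spectrum - E @ coeffs))

        sc = SparkContext(appName="iea-sketch")
        pixels = sc.parallelize(pixel_list)                  # [(pixel_id, spectrum), ...]
        endmembers = []
        for _ in range(num_endmembers):
            bc = sc.broadcast(endmembers)                    # ship current set to workers
            worst = (pixels
                     .map(lambda kv: (kv[0], kv[1], reconstruction_error(kv[1], bc.value)))
                     .max(key=lambda t: t[2]))
            endmembers.append(worst[1])                      # pixel with largest error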

  10. Particle simulation on a distributed memory highly parallel processor

    International Nuclear Information System (INIS)

    Sato, Hiroyuki; Ikesaka, Morio

    1990-01-01

    This paper describes parallel molecular dynamics simulation of atoms governed by local force interaction. The space in the model is divided into cubic subspaces and mapped to the processor array of the CAP-256, a distributed-memory, highly parallel processor developed at Fujitsu Labs. We developed a new technique to avoid redundant calculation of forces between atoms in different processors. Experiments showed the communication overhead was less than 5%, and the idle time due to load imbalance was less than 11%, for two model problems containing 11,532 and 46,128 argon atoms. From the software simulation, the CAP-II, which is under development, is estimated to be about 45 times faster than the CAP-256 and will be able to run the same problem about 40 times faster than Fujitsu's M-380 mainframe when 256 processors are used. (author)
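
    The subspace-to-processor mapping described here can be sketched in a few lines (a toy illustration with made-up grid parameters; the actual CAP-256 mapping details are not in the record):

        def subspace_of(position, box, cells_per_axis):
            # Which cubic subspace a 3D position falls into.
            cell = box / cells_per_axis
            return tuple(int(c / cell) for c in position)

        def processor_of(cell_idx, cells_per_axis, n_procs):
            # Flatten the 3D cell index and fold it onto the processor array.
            i, j, k = cell_idx
            flat = (i * cells_per_axis + j) * cells_per_axis + k
            return flat % n_procs

        cell = subspace_of((1.2, 3.4, 0.7), box=10.0, cells_per_axis=8)
        rank = processor_of(cell, cells_per_axis=8, n_procs=256)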

  11. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  12. Parallel computation of rotating flows

    DEFF Research Database (Denmark)

    Lundin, Lars Kristian; Barker, Vincent A.; Sørensen, Jens Nørkær

    1999-01-01

    This paper deals with the simulation of 3-D rotating flows based on the velocity-vorticity formulation of the Navier-Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two-step process... is that of solving a singular, large, sparse, over-determined linear system of equations, and the iterative method CGLS is applied for this purpose. We discuss some of the mathematical and numerical aspects of this procedure and report on the performance of our software on a wide range of parallel computers.

  13. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof for the fact that a planar body can only have polynomial parallel volume if it is convex. Extensions to Minkowski spaces and random sets are also discussed.
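
    In standard notation (assumed here, since the record itself gives no formulas), with \lambda_2 the planar Lebesgue measure and B^2 the unit disc, the parallel volume and the stated limit read:

        V_K(r) = \lambda_2\!\left(K \oplus r B^2\right),
        \qquad
        \lim_{r \to \infty} \Bigl( V_{\operatorname{conv}(K)}(r) - V_K(r) \Bigr) = 0 .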

  15. The design of a new concept chromatography column.

    Science.gov (United States)

    Camenzuli, Michelle; Ritchie, Harald J; Ladine, James R; Shalliker, R Andrew

    2011-12-21

    Active Flow Management is a new separation technique whereby the flow of mobile phase and the injection of sample are introduced to the column in a manner that allows migration according to the principles of the infinite diameter column. A segmented flow outlet fitting allows for the separation of solvent or solute that elutes along the central radial section of the column from that of the sample or solvent that elutes along the wall region of the column. Separation efficiency on the analytical scale is increased by 25% with an increase in sensitivity by as much as 52% compared to conventional separations.

  16. Simulation of startup period of hydrogen isotope separation distillation column

    International Nuclear Information System (INIS)

    Sazonov, A.B.; Kagramanov, Z.G.; Magomedbekov, Eh.P.

    2003-01-01

    A kinetic procedure for the mathematical simulation of the start-up regime of rectification columns for molecular hydrogen isotope separation was developed. The nonstationary state (start-up period) of a separating column for rectification of a multi-component mixture was calculated, using full information on the equilibrium and kinetic physicochemical properties of the components of the separated mixtures. The concentration profile of the components along the column height at a given moment of time was calculated by means of the differential equations of nonstationary mass transfer. Calculated results are illustrated for the nonstationary state of a column of 2 m height and 30 mm diameter during separation of the mixture 5% protium, 70% deuterium, 25% tritium [ru]

  17. Response of steel box columns in fire conditions

    Directory of Open Access Journals (Sweden)

    Mahmood Yahyai

    2017-05-01

    The effect of elevated temperatures on the mechanical properties of steel makes it important to investigate the behavior of steel structures exposed to fire. Columns, as the main load-carrying members of a structure, can be highly vulnerable to fire. In this study, the behavior of steel gravity columns with box cross sections exposed to fire has been investigated; such columns are widely used in common steel structure design in Iran. Their behavior in fire conditions is investigated through the finite element method. To this end, a finite element model of a steel column previously tested under fire conditions was prepared, subjected to the experimental loading and boundary conditions, and analyzed. The results were validated against the experimental data, and various gravity box column specimens designed according to Iran's steel buildings code were modeled and analyzed using the Abaqus software. The effects of the width-to-thickness ratio of the column plates, the load ratio and the slenderness on the ultimate strength of the column were investigated, and the endurance time was estimated under the ISO 834 standard fire curve. The results revealed that an increase in width-to-thickness ratio and load ratio leads to a reduction in endurance time, and that the effect of the width-to-thickness ratio on the ultimate strength of the column decreases as the temperature increases.

  18. Applicability of hydroxylamine nitrate reductant in pulse-column contactors

    International Nuclear Information System (INIS)

    Reif, D.J.

    1983-05-01

    Uranium and plutonium separations were made from simulated breeder reactor spent fuel dissolver solution with laboratory-sized pulse column contactors. Hydroxylamine nitrate (HAN) was used for reduction of plutonium(IV). An integrated extraction-partition system, simulating a breeder fuel reprocessing flowsheet, carried out a partial partition of uranium and plutonium in the second contactor. Tests have shown that acceptable coprocessing can be obtained using HAN as a plutonium reductant. Pulse column performance was stable even though gaseous HAN oxidation products were present in the column. Gas evolution rates up to 0.27 cfm/ft² of column cross section were tested and found acceptable

  19. Evaluation of Controller Tuning Methods Applied to Distillation Column Control

    DEFF Research Database (Denmark)

    Nielsen, Kim; W. Andersen, Henrik; Kümmel, Professor Mogens

    A frequency domain approach is used to compare the nominal performance and robustness of dual composition distillation column control tuned according to Ziegler-Nichols (ZN) and Biggest Log Modulus Tuning (BLT) for three binary distillation columns, WOBE, LUVI and TOFA. The scope of this is to ex...

  20. Performance of zeolite scavenge column in Xe monitoring system

    International Nuclear Information System (INIS)

    Wang Qian; Wang Hongxia; Li Wei; Bian Zhishang

    2010-01-01

    In order to improve the performance of the zeolite scavenge column, its ability to remove humidity and carbon dioxide was studied by both static and dynamic approaches. The experimental results show that various factors, including the column length and diameter, the mass of zeolite, the content of water in air, the temperature rise during adsorption, and the activation effectiveness, all affect the performance of the zeolite column in scavenging humidity and carbon dioxide. Based on these results and previous experience, an optimized design of the zeolite column is made for use in the xenon monitoring system. (authors)

  1. Partial strengthening of R.C square columns using CFRP

    Directory of Open Access Journals (Sweden)

    Ahmed Shaban Abdel-Hay

    2014-12-01

    An experimental program was undertaken, testing ten square columns of 200 × 200 × 2000 mm. One was a control specimen and the other nine specimens were strengthened with CFRP. The main parameters studied in this research were the compressive strength of the upper part, the height of the upper poor-concrete part, and the height of the CFRP-wrapped part of the column. The experimental results, including mode of failure, ultimate load, concrete strain, and fiber strains, were analyzed. The main conclusion of this research was that partial strengthening of square columns using CFRP is permissible and gives good results for the column carrying capacity.

  2. Mini-columns for Conducting Breakthrough Experiments. Design and Construction

    Energy Technology Data Exchange (ETDEWEB)

    Dittrich, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Reimus, Paul William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ware, Stuart Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-06-11

    Experiments with moderately and strongly sorbing radionuclides (i.e., U, Cs, Am) have shown that sorption between experimental solutions and traditional column materials must be accounted for to accurately determine stationary phase or porous media sorption properties (i.e., sorption site density, sorption site reaction rate coefficients, and partition coefficients or Kd values). This report details the materials and construction of mini-columns for use in breakthrough experiments, to allow for accurate measurement and modeling of sorption parameters. Material selection, construction techniques, wet packing of columns, tubing connections, and lessons learned are addressed.

  3. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to the coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding times and the effectiveness of parallelization.
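
    The coding stage parallelizes naturally because each range block searches the domain pool independently. A minimal sketch (illustrative only; the affine contrast/brightness fitting of a real fractal coder is omitted):

        import numpy as np
        from multiprocessing import Pool

        def best_domain(args):
            # Find the domain block with the lowest mean squared error
            # against one range block.
            range_block, domains = args
            errors = [float(np.mean((range_block - d) ** 2)) for d in domains]
            return int(np.argmin(errors))

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            ranges = [rng.random((8, 8)) for _ in range(64)]
            domains = [rng.random((8, 8)) for _ in range(256)]
            with Pool() as pool:
                matches = pool.map(best_domain, [(r, domains) for r in ranges])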

  4. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  5. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

  6. Comparison of parallel viscosity with neoclassical theory

    International Nuclear Information System (INIS)

    Ida, K.; Nakajima, N.

    1996-04-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for the plasma heated with tangential NBI in the CHS heliotron/torsatron device to estimate the parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s). (author)

  7. Comparison and Analysis of Steel Frame Based on High Strength Column and Normal Strength Column

    Science.gov (United States)

    Liu, Taiyu; An, Yuwei

    2018-01-01

    The seismic performance of high-strength steel has restricted its industrialization in civil buildings. In order to study the influence of high-strength steel columns on frame structures, three models were designed with the MIDAS/GEN finite element software. By comparing the seismic and economic performance of the three models, the three different structures are comprehensively evaluated, providing a reference for the development of high-strength steel in steel structures.

  8. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance, both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: in the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
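
    The quicksort example generalizes: choosing the pivot at random turns the adversarially triggerable O(n²) worst case into a merely unlikely one, with O(n log n) expected time on any input. A minimal sketch:

        import random

        def quicksort(xs):
            # Random pivot choice gives O(n log n) expected time regardless
            # of the input order; no fixed input can force the worst case.
            if len(xs) <= 1:
                return xs
            pivot = random.choice(xs)
            return (quicksort([x for x in xs if x < pivot])
                    + [x for x in xs if x == pivot]
                    + quicksort([x for x in xs if x > pivot]))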

  9. Xyce parallel electronic simulator design.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed-memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus, and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long-term projects for there to be a certain amount of staff turnover as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document in one place a number of the software quality practices followed by the Xyce team. It is also hoped that this document will be a good source of information for new developers.

  10. The life of the cortical column: opening the domain of functional architecture of the cortex (1955-1981).

    Science.gov (United States)

    Haueis, Philipp

    2016-09-01

    The concept of the cortical column refers to vertical cell bands with similar response properties, which were initially observed in Vernon Mountcastle's mapping of single cell recordings in the cat somatic cortex. It has subsequently guided over 50 years of neuroscientific research, in which fundamental questions about the modularity of the cortex and basic principles of sensory information processing were empirically investigated. Nevertheless, the status of the column remains controversial today, as skeptical commentators proclaim that the vertical cell bands are a functionally insignificant by-product of ontogenetic development. This paper inquires how the column came to be viewed as an elementary unit of the cortex, from Mountcastle's discovery in 1955 until David Hubel and Torsten Wiesel's reception of the Nobel Prize in 1981. I first argue that Mountcastle's vertical electrode recordings served as criteria for applying the column concept to electrophysiological data. In contrast to previous authors, I claim that this move from electrophysiological data to the phenomenon of columnar responses was concept-laden, but not theory-laden. In the second part of the paper, I argue that Mountcastle's criteria provided Hubel and Wiesel with a conceptual outlook, i.e. they allowed them to anticipate columnar patterns in the cat and macaque visual cortex. I argue that in the late 1970s this outlook only briefly took a form that one could call a 'theory' of the cerebral cortex before new experimental techniques started to diversify column research. I end by showing how this account of early column research fits into a larger project that follows the conceptual development of the column into the present.

  11. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    ... dynamic modeling etc. Next, the first-order differential equation of the kinematic closure equation of the planar parallel manipulator is obtained to develop its error model in both polar and Cartesian coordinate systems. The established error model contains the error sources of actuation error/backlash, manufacturing and assembly errors, and joint clearances. From the error prediction model, the distributions of the pose errors due to joint clearances are mapped within its constant-orientation workspace and the correctness of the developed model is validated experimentally. Additionally, using the screw...

  12. Circum-North Pacific tectonostratigraphic terrane map

    Science.gov (United States)

    Nokleberg, Warren J.; Parfenov, Leonid M.; Monger, James W.H.; Baranov, Boris B.; Byalobzhesky, Stanislav G.; Bundtzen, Thomas K.; Feeney, Tracey D.; Fujita, Kazuya; Gordey, Steven P.; Grantz, Arthur; Khanchuk, Alexander I.; Natal'in, Boris A.; Natapov, Lev M.; Norton, Ian O.; Patton, William W.; Plafker, George; Scholl, David W.; Sokolov, Sergei D.; Sosunov, Gleb M.; Stone, David B.; Tabor, Rowland W.; Tsukanov, Nickolai V.; Vallier, Tracy L.; Wakita, Koji

    1994-01-01

    The companion tectonostratigraphic terrane and overlap assemblage map of the Circum-North Pacific presents a modern description of the major geologic and tectonic units of the region. The map illustrates both the onshore terranes and overlap volcanic assemblages of the region, and the major offshore geologic features. The map is the first collaborative compilation of the geology of the region at a scale of 1:5,000,000 by geologists from the Russian Far East, Japan, Alaska, Canada, and the U.S.A. Pacific Northwest. The map is designed to be a source of geologic information for all scientists interested in the region, and is designed to be used for several purposes, including regional tectonic analyses, mineral resource and metallogenic analyses (Nokleberg and others, 1993, 1994a), petroleum analyses, neotectonic analyses, and analyses of seismic and volcanic hazards. This text contains an introduction, tectonic definitions, acknowledgments, descriptions of postaccretion stratified rock units, descriptions and stratigraphic columns for tectonostratigraphic terranes in onshore areas, and references for the companion map (Sheets 1 to 5). This map is the result of extensive geologic mapping and associated tectonic studies in the Russian Far East, Hokkaido Island of Japan, Alaska, the Canadian Cordillera, and the U.S.A. Pacific Northwest in the last few decades. Geologic mapping suggests that most of this region can be interpreted as a collage of fault-bounded tectonostratigraphic terranes that were accreted onto continental margins around the Circum-North Pacific.

  13. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
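
    A rough NumPy analogue of the array syntax, FORALL and WHERE constructs that such a model parallelizes (illustrative only; PDDP itself targets Fortran on distributed-memory machines):

        import numpy as np

        a = np.arange(10.0)
        b = np.zeros_like(a)

        # FORALL-style update: every element is computed independently,
        # so the iterations can be distributed freely.
        b[1:-1] = 0.5 * (a[:-2] + a[2:])

        # WHERE construct: masked elementwise assignment.
        b = np.where(a > 5.0, b, 0.0)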

  14. Parallelization of quantum molecular dynamics simulation code

    International Nuclear Information System (INIS)

    Kato, Kaori; Kunugi, Tomoaki; Shibahara, Masahiko; Kotake, Susumu

    1998-02-01

    A quantum molecular dynamics simulation code has been developed for the analysis of the thermalization of photon energies in molecules or materials at the Kansai Research Establishment. The simulation code has been parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up was obtained on both parallel computers by distributing particle groups across the processor units. By distributing work not only by particle group but also by the fine-grained per-particle calculations, high parallelization performance is achieved on the Intel Paragon XP/S75. (author)

  15. Implementation and performance of parallelized elegant

    International Nuclear Information System (INIS)

    Wang, Y.; Borland, M.

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  16. MAPPING INNOVATION

    DEFF Research Database (Denmark)

    Thuesen, Christian Langhoff; Koch, Christian

    2011-01-01

    By adopting a theoretical framework from strategic niche management research (SNM), this paper presents an analysis of the innovation system of the Danish construction industry. The analysis shows a multifaceted landscape of innovation around an existing regime, built around existing ways of working and developed over generations. The regime is challenged by various niches and by the socio-technical landscape through trends such as globalization. Three niches (Lean Construction, BIM and System Deliveries) are subject to a detailed analysis showing partly incompatible rationales and various degrees of innovation potential. The paper further discusses how existing policymaking operates in a number of tensions, one being between government and governance. Based on the concepts from SNM, the paper introduces an innovation map in order to support the development of meta-governance policymaking. By mapping some...

  17. Mapping filmmaking

    DEFF Research Database (Denmark)

    Gilje, Øystein; Frølunde, Lisbeth; Lindstrand, Fredrik

    2010-01-01

    This chapter concerns mapping patterns with regard to how young filmmakers (age 15–20) in the Scandinavian countries learn about filmmaking. To uncover the patterns, we present portraits of four young filmmakers who participated in the Scandinavian research project Making a filmmaker. The focus is on their learning practices and how they create ‘learning paths’ in relation to resources in diverse learning contexts, whether formal, non-formal or informal.

  18. Temperature of Steel Columns under Natural Fire

    Directory of Open Access Journals (Sweden)

    F. Wald

    2004-01-01

    Current fire design models for the time-temperature development within structural elements, as well as for structural behaviour, are based on isolated member tests subjected to standard fire regimes, which serve as a reference heating but do not model natural fire. Only tests on a real structure under a natural fire can evaluate future models of the temperature development in a fire compartment, of the transfer of heat into the structure, and of the overall structural behaviour under fire. To study overall structural behaviour, a research project was conducted on an eight-storey steel frame building at the Cardington Building Research Establishment laboratory on January 16, 2003. A fire compartment of 11 × 7 m was prepared on the fourth floor. A fire load of 40 kg/m² was applied, together with 100% of the permanent mechanical load and 65% of the imposed load. The paper summarises the experimental programme and shows the temperature development of the gas in the fire compartment and of the fire-protected columns bearing the unprotected floors.

  19. Selective detachment process in column flotation froth

    Energy Technology Data Exchange (ETDEWEB)

    Honaker, R.Q.; Ozsever, A.V.; Parekh, B.K. [University of Kentucky, Lexington, KY (United States). Dept. of Mining Engineering

    2006-05-15

    The selectivity in flotation columns involving the separation of particles of varying degrees of floatability is based on differential flotation rates in the collection zone, reflux action between the froth and collection zones, and differential detachment rates in the froth zone. Using well-known theoretical models describing the separation process and experimental data, froth zone and overall flotation recovery values were quantified for particles in an anthracite coal that have a wide range of floatability potential. For highly floatable particles, froth recovery had a very minimal impact on overall recovery while the recovery of weakly floatable material was decreased substantially by reductions in froth recovery values. In addition, under carrying-capacity limiting conditions, selectivity was enhanced by the preferential detachment of the weakly floatable material. Based on this concept, highly floatable material was added directly into the froth zone when treating the anthracite coal. The enriched froth phase reduced the product ash content of the anthracite product by five absolute percentage points while maintaining a constant recovery value.
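
    The reflux interplay between the froth and collection zones that drives this selectivity is commonly captured by a standard two-zone recovery model (assumed here, e.g., the Finch-Dobby formulation; the paper may use a different variant): material rejected by the froth drops back into the collection zone. A short sketch:

        def overall_recovery(r_collection, r_froth):
            # Two-zone column model with internal reflux: particles rejected
            # by the froth re-enter the collection zone.
            return (r_collection * r_froth) / (r_collection * r_froth + 1.0 - r_collection)

        # A weakly floatable species suffers far more from froth drop-back:
        print(overall_recovery(0.95, 0.5))   # strongly floatable: ~0.90
        print(overall_recovery(0.40, 0.5))   # weakly floatable:  0.25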

  20. SPR Hydrostatic Column Model Verification and Validation.

    Energy Technology Data Exchange (ETDEWEB)

    Bettin, Giorgia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lord, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rudeen, David Keith [Gram, Inc. Albuquerque, NM (United States)

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen when testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that had been placed under nitrogen for an extended period of time. This report describes the HCM model, its functional requirements, the model structure, and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.
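
    At its core, a hydrostatic column calculation subtracts the weight of the stacked fluid columns from the pressure at depth. A minimal sketch (illustrative numbers and names; not the SNL implementation):

        G = 9.81  # gravitational acceleration, m/s^2

        def wellhead_pressure(p_at_depth, segments):
            # segments: (density kg/m^3, vertical height m) from depth up to
            # the wellhead; each fluid segment removes its hydrostatic head.
            return p_at_depth - sum(rho * G * h for rho, h in segments)

        # Brine, crude oil and nitrogen stacked in the well (made-up values):
        p_wh = wellhead_pressure(12.0e6, [(1200.0, 300.0), (850.0, 400.0), (180.0, 300.0)])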