WorldWideScience

Sample records for parallel columns mapping

  1. Column-Oriented Storage Techniques for MapReduce

    OpenAIRE

    Floratou, Avrilia; Patel, Jignesh; Shekita, Eugene; Tata, Sandeep

    2011-01-01

    Users of MapReduce often run into performance problems when they scale up their workloads. Many of the problems they encounter can be overcome by applying techniques learned from over three decades of research on parallel DBMSs. However, translating these techniques to a MapReduce implementation such as Hadoop presents unique challenges that can lead to new design choices. This paper describes how column-oriented storage techniques can be incorporated in Hadoop in a way that preserves its pop...
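
The benefit the paper builds on can be illustrated with a toy sketch (this is not Hadoop's actual file format, just the row- vs column-layout idea): in a columnar layout, scanning a single field touches only that field's data instead of every record.

```python
# Toy sketch of row- vs column-oriented layout (illustrative, not Hadoop's format).

rows = [
    {"id": 1, "name": "a", "score": 10},
    {"id": 2, "name": "b", "score": 20},
    {"id": 3, "name": "c", "score": 30},
]

# Row-oriented: whole records stored together; scanning one field reads every record.
row_store = list(rows)

# Column-oriented: one sequence per field; scanning "score" reads only that sequence.
col_store = {k: [r[k] for r in rows] for k in rows[0]}

def sum_scores_row(store):
    return sum(r["score"] for r in store)

def sum_scores_col(store):
    return sum(store["score"])

assert sum_scores_row(row_store) == sum_scores_col(col_store) == 60
```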

  2. Adaptive query parallelization in multi-core column stores

    NARCIS (Netherlands)

    M.M. Gawade (Mrunal); M.L. Kersten (Martin)

    2016-01-01

    With the rise of multi-core CPU platforms, their optimal utilization for in-memory OLAP workloads using column store databases has become one of the biggest challenges. Some of the inherent limitations in the achievable query parallelism are due to the degree of parallelism

  3. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
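
The problem the paper improves on can be sketched with the classic contiguous-partition dynamic program: assign a chain of m pipeline modules (with given workloads) to n processors so that the bottleneck (maximum per-processor load) is minimized. This is the naive O(nm²) formulation, not the faster algorithms the paper develops.

```python
# Hedged sketch: contiguous-partition DP for mapping a chain of m modules onto
# n processors, minimizing the bottleneck load. Naive O(n*m^2) version.

def map_chain(work, n):
    m = len(work)
    prefix = [0] * (m + 1)
    for i, w in enumerate(work):
        prefix[i + 1] = prefix[i] + w
    INF = float("inf")
    # best[p][i] = minimal bottleneck for the first i modules on p processors
    best = [[INF] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0
    for p in range(1, n + 1):
        for i in range(1, m + 1):
            for j in range(i):  # last processor runs modules j..i-1
                load = prefix[i] - prefix[j]
                cand = max(best[p - 1][j], load)
                if cand < best[p][i]:
                    best[p][i] = cand
    return best[n][m]

assert map_chain([1, 2, 3, 4, 5], 2) == 9  # best split: [1,2,3] | [4,5] -> max(6, 9)
```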

  4. Multi-core parallelism in a column-store

    NARCIS (Netherlands)

    Gawade, M.M.

    2017-01-01

    The research reported in this thesis addresses several challenges of improving the efficiency and effectiveness of parallel processing of analytical database queries on modern multi- and many-core systems, using an open-source column-oriented analytical database management system, MonetDB, for

  5. Unpacking the cognitive map: the parallel map theory of hippocampal function.

    Science.gov (United States)

    Jacobs, Lucia F; Schenk, Françoise

    2003-04-01

    In the parallel map theory, the hippocampus encodes space with 2 mapping systems. The bearing map is constructed primarily in the dentate gyrus from directional cues such as stimulus gradients. The sketch map is constructed within the hippocampus proper from positional cues. The integrated map emerges when data from the bearing and sketch maps are combined. Because the component maps work in parallel, the impairment of one can reveal residual learning by the other. Such parallel function may explain paradoxes of spatial learning, such as learning after partial hippocampal lesions, taxonomic and sex differences in spatial learning, and the function of hippocampal neurogenesis. By integrating evidence from physiology to phylogeny, the parallel map theory offers a unified explanation for hippocampal function.

  6. Topology in Synthetic Column Density Maps for Interstellar Turbulence

    Science.gov (United States)

    Putko, Joseph; Burkhart, B. K.; Lazarian, A.

    2013-01-01

    We show how the topology tool known as the genus statistic can be utilized to characterize magnetohydrodynamic (MHD) turbulence in the ISM. The genus is measured with respect to a given density threshold and varying the threshold produces a genus curve, which can suggest an overall “meatball,” neutral, or “Swiss cheese” topology through its integral. We use synthetic column density maps made from three-dimensional 512³ compressible MHD isothermal simulations performed for different sonic and Alfvénic Mach numbers (Ms and MA respectively). We study eight different Ms values each with one sub- and one super-Alfvénic counterpart. We consider sight-lines both parallel (x) and perpendicular (y and z) to the mean magnetic field. We find that the genus integral shows a dependence on both Mach numbers, and this is still the case even after adding beam smoothing and Gaussian noise to the maps to mimic observational data. The genus integral increases with higher Ms values (but saturates after about Ms = 4) for all lines of sight. This is consistent with greater values of Ms resulting in stronger shocks, which results in a clumpier topology. We observe a larger genus integral for the sub-Alfvénic cases along the perpendicular lines of sight due to increased compression from the field lines and enhanced anisotropy. Application of the genus integral to column density maps should allow astronomers to infer the Mach numbers and thus learn about the environments of interstellar turbulence. This work was supported by the National Science Foundation’s REU program through NSF Award AST-1004881.
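
A minimal sketch of the statistic, under one common convention (conventions vary in the literature): for a threshold t, genus(t) = (number of connected regions above t) minus (number of connected regions below t), so positive values suggest a "meatball" topology and negative values a "Swiss cheese" topology. Pure-Python 4-connected labeling on a toy 2D map.

```python
# Hedged sketch of the genus statistic for a 2D column density map.

from collections import deque

def count_regions(mask):
    # count 4-connected regions of True cells
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                regions += 1
                q = deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return regions

def genus(density, t):
    above = [[v > t for v in row] for row in density]
    below = [[v < t for v in row] for row in density]
    return count_regions(above) - count_regions(below)

# Two bright clumps on a uniform background: (2 clumps) - (1 background region) = 1.
field = [[0, 0, 0, 0, 0],
         [0, 9, 0, 9, 0],
         [0, 0, 0, 0, 0]]
assert genus(field, 5) == 1
```

Sweeping t over the map's density range yields the genus curve whose integral the paper analyzes.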

  7. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  8. A Design of a New Column-Parallel Analog-to-Digital Converter Flash for Monolithic Active Pixel Sensor

    Directory of Open Access Journals (Sweden)

    Mostafa Chakir

    2017-01-01

    The CMOS Monolithic Active Pixel Sensor (MAPS) for the International Linear Collider (ILC) vertex detector (VXD) imposes stringent requirements on its analog readout electronics, specifically on the analog-to-digital converter (ADC). This paper concerns designing and optimizing a new architecture of a low-power, high-speed, and small-area 4-bit column-parallel flash ADC. Later in this study, we propose to interpose an S/H block in the converter. This integration of the S/H block increases the sensitivity of the converter to the very small amplitude of the input signal from the sensor and gives the converter sufficient time to code the input signal. This ADC is developed in a 0.18 μm CMOS process with a pixel pitch of 35 μm. The proposed ADC meets the power dissipation, size, and speed constraints of a MAPS composed of a matrix of 64 rows and 48 columns, where each column ADC covers a small area of 35 × 336.76 μm². The proposed ADC consumes low power at a 1.8 V supply and a 100 MS/s sampling rate with a dynamic range of 125 mV. Its DNL and INL are 0.0812/−0.0787 LSB and 0.0811/−0.0787 LSB, respectively. Furthermore, this ADC achieves a high speed of more than 5 GHz.

  9. A Design of a New Column-Parallel Analog-to-Digital Converter Flash for Monolithic Active Pixel Sensor.

    Science.gov (United States)

    Chakir, Mostafa; Akhamal, Hicham; Qjidaa, Hassan

    2017-01-01

    The CMOS Monolithic Active Pixel Sensor (MAPS) for the International Linear Collider (ILC) vertex detector (VXD) imposes stringent requirements on its analog readout electronics, specifically on the analog-to-digital converter (ADC). This paper concerns designing and optimizing a new architecture of a low-power, high-speed, and small-area 4-bit column-parallel flash ADC. Later in this study, we propose to interpose an S/H block in the converter. This integration of the S/H block increases the sensitivity of the converter to the very small amplitude of the input signal from the sensor and gives the converter sufficient time to code the input signal. This ADC is developed in a 0.18 μm CMOS process with a pixel pitch of 35 μm. The proposed ADC meets the power dissipation, size, and speed constraints of a MAPS composed of a matrix of 64 rows and 48 columns, where each column ADC covers a small area of 35 × 336.76 μm². The proposed ADC consumes low power at a 1.8 V supply and a 100 MS/s sampling rate with a dynamic range of 125 mV. Its DNL and INL are 0.0812/−0.0787 LSB and 0.0811/−0.0787 LSB, respectively. Furthermore, this ADC achieves a high speed of more than 5 GHz.

  10. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  11. Automated integration of genomic physical mapping data via parallel simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Slezak, T.

    1994-06-01

    The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques including: Fluorescence In Situ Hybridization (FISH) at 3 levels of resolution, EcoRI restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, BAC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which "intersect" M larger objects by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps on the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
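
The optimization step can be sketched as follows: rearrange clone "columns" of a binary intersection matrix so that the 1s in each object "row" become contiguous, i.e. minimize the total number of gaps per row. This serial simulated-annealing toy (the paper runs annealing on 40+ Unix machines in parallel) uses an assumed linear cooling schedule and random column swaps.

```python
# Hedged sketch of gap-minimizing column ordering via simulated annealing.

import math, random

def gaps(matrix, order):
    # total number of gaps: for each row, (runs of 1s under this order) - 1
    total = 0
    for row in matrix:
        runs, prev = 0, 0
        for j in order:
            if row[j] and not prev:
                runs += 1
            prev = row[j]
        if runs:
            total += runs - 1
    return total

def anneal(matrix, steps=20000, t0=2.0, seed=0):
    rng = random.Random(seed)
    n = len(matrix[0])
    order = list(range(n))
    cost = gaps(matrix, order)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9       # linear cooling (toy choice)
        i, j = rng.randrange(n), rng.randrange(n)
        order[i], order[j] = order[j], order[i]  # propose a column swap
        new = gaps(matrix, order)
        if new <= cost or rng.random() < math.exp((cost - new) / t):
            cost = new
        else:
            order[i], order[j] = order[j], order[i]  # reject: undo the swap
    return order, cost

# Rows are larger objects (contigs/probes), columns are clones, scrambled on purpose.
m = [[1, 0, 1, 0, 1],
     [0, 1, 0, 1, 0],
     [1, 0, 0, 0, 1]]
order, cost = anneal(m)
assert cost == 0  # a gap-free ordering exists, e.g. columns [2, 0, 4, 1, 3]
```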

  12. A 10-bit column-parallel cyclic ADC for high-speed CMOS image sensors

    International Nuclear Information System (INIS)

    Han Ye; Li Quanliang; Shi Cong; Wu Nanjian

    2013-01-01

    This paper presents a high-speed column-parallel cyclic analog-to-digital converter (ADC) for a CMOS image sensor. A correlated double sampling (CDS) circuit is integrated in the ADC, which avoids a stand-alone CDS circuit block. An offset cancellation technique is also introduced, which reduces the column fixed-pattern noise (FPN) effectively. One single-channel ADC with an area less than 0.02 mm² was implemented in a 0.13 μm CMOS image sensor process. The resolution of the proposed ADC is 10-bit, and the conversion rate is 1.6 MS/s. The measured differential nonlinearity and integral nonlinearity are 0.89 LSB and 6.2 LSB together with CDS, respectively. The power consumption from a 3.3 V supply is only 0.66 mW. An array of 48 10-bit column-parallel cyclic ADCs was integrated into an array of CMOS image sensor pixels. The measured results indicated that the ADC circuit is suitable for high-speed CMOS image sensors. (semiconductor integrated circuits)

  13. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  14. Implementing Parallel Google Map-Reduce in Eden

    DEFF Research Database (Denmark)

    Berthold, Jost; Dieterle, Mischa; Loogen, Rita

    2009-01-01

    Recent publications have emphasised map-reduce as a general programming model (labelled Google map-reduce), and described existing high-performance implementations for large data sets. We present two parallel implementations for this Google map-reduce skeleton, one following earlier work, and one...... of the Google map-reduce skeleton in usage and performance, and deliver runtime analyses for example applications. Although very flexible, the Google map-reduce skeleton is often too general, and typical examples reveal a better runtime behaviour using alternative skeletons....
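
The skeleton the paper implements (in the parallel Haskell dialect Eden) is a higher-order function; a plain Python analogue is sketched below. The reduction operator is assumed associative with an identity element, so the per-chunk partial reductions can run in parallel, e.g. via multiprocessing.Pool.

```python
# Hedged sketch of the map-reduce skeleton as a higher-order function.

from functools import reduce

def map_reduce(f, op, identity, chunks):
    # map phase: apply f to every element of every chunk (parallelizable per chunk)
    partials = [reduce(op, (f(x) for x in chunk), identity) for chunk in chunks]
    # reduce phase: combine the per-chunk partial results
    return reduce(op, partials, identity)

# Word-count-style usage: total characters across chunks of words.
chunks = [["map", "reduce"], ["skeleton"]]
assert map_reduce(len, lambda a, b: a + b, 0, chunks) == 17
```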

  15. DIMACS Workshop on Interconnection Networks and Mapping, and Scheduling Parallel Computations

    CERN Document Server

    Rosenberg, Arnold L; Sotteau, Dominique; NSF Science and Technology Center in Discrete Mathematics and Theoretical Computer Science

    1995-01-01

    The interconnection network is one of the most basic components of a massively parallel computer system. Such systems consist of hundreds or thousands of processors interconnected to work cooperatively on computations. One of the central problems in parallel computing is the task of mapping a collection of processes onto the processors and routing network of a parallel machine. Once this mapping is done, it is critical to schedule computations within, and communication among, processors so that inputs for a process are available where and when the process is scheduled to be computed. Focusing on interconnection networks of parallel architectures of today and of the near future, the book includes topics such as network topologies, network properties, message routing, network embeddings, network emulation, mappings, and efficient scheduling. Contributors include researchers from universities and laboratories, as well as practitioners involved in the design, implementation, and application of massively parallel systems. This book contains the refereed pro...

  16. Parallel image encryption algorithm based on discretized chaotic map

    International Nuclear Information System (INIS)

    Zhou Qing; Wong Kwokwo; Liao Xiaofeng; Xiang Tao; Hu Yue

    2008-01-01

    Recently, a variety of chaos-based algorithms were proposed for image encryption. Nevertheless, none of them works efficiently in a parallel computing environment. In this paper, we propose a framework for parallel image encryption. Based on this framework, a new algorithm is designed using the discretized Kolmogorov flow map. It fulfills all the requirements for a parallel image encryption algorithm. Moreover, it is secure and fast. These properties make it a good choice for image encryption on parallel computing platforms

  17. Column ratio mapping: a processing technique for atomic resolution high-angle annular dark-field (HAADF) images.

    Science.gov (United States)

    Robb, Paul D; Craven, Alan J

    2008-12-01

    An image processing technique is presented for atomic resolution high-angle annular dark-field (HAADF) images that have been acquired using scanning transmission electron microscopy (STEM). This technique is termed column ratio mapping and involves the automated process of measuring atomic column intensity ratios in high-resolution HAADF images. This technique was developed to provide a fuller analysis of HAADF images than the usual method of drawing single intensity line profiles across a few areas of interest. For instance, column ratio mapping reveals the compositional distribution across the whole HAADF image and allows a statistical analysis and an estimation of errors. This has proven to be a very valuable technique as it can provide a more detailed assessment of the sharpness of interfacial structures from HAADF images. The technique of column ratio mapping is described in terms of a [110]-oriented zinc-blende structured AlAs/GaAs superlattice using the 1 Å-scale resolution capability of the aberration-corrected SuperSTEM 1 instrument.
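
The core measurement can be sketched as follows: integrate HAADF intensity in a small window around each atomic-column position and form ratios of neighbouring column pairs (e.g. dumbbell partners), giving a map of composition-sensitive ratios rather than a single line profile. The window size and the pairing scheme here are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of column-ratio measurement on a 2D intensity image.

def column_intensity(img, cy, cx, r=1):
    # integrated intensity in a (2r+1)x(2r+1) window centred on a column position
    return sum(img[y][x]
               for y in range(cy - r, cy + r + 1)
               for x in range(cx - r, cx + r + 1))

def ratio_map(img, pairs, r=1):
    # pairs: list of ((y1, x1), (y2, x2)) column-position pairs
    return [column_intensity(img, *a, r) / column_intensity(img, *b, r)
            for a, b in pairs]

# Toy 5x5 "image" with two columns of different brightness.
img = [[0] * 5 for _ in range(5)]
img[1][1] = 8   # brighter column (heavier species)
img[3][3] = 4   # dimmer column (lighter species)
ratios = ratio_map(img, [((1, 1), (3, 3))])
assert ratios == [2.0]
```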

  18. Column ratio mapping: A processing technique for atomic resolution high-angle annular dark-field (HAADF) images

    International Nuclear Information System (INIS)

    Robb, Paul D.; Craven, Alan J.

    2008-01-01

    An image processing technique is presented for atomic resolution high-angle annular dark-field (HAADF) images that have been acquired using scanning transmission electron microscopy (STEM). This technique is termed column ratio mapping and involves the automated process of measuring atomic column intensity ratios in high-resolution HAADF images. This technique was developed to provide a fuller analysis of HAADF images than the usual method of drawing single intensity line profiles across a few areas of interest. For instance, column ratio mapping reveals the compositional distribution across the whole HAADF image and allows a statistical analysis and an estimation of errors. This has proven to be a very valuable technique as it can provide a more detailed assessment of the sharpness of interfacial structures from HAADF images. The technique of column ratio mapping is described in terms of a [110]-oriented zinc-blende structured AlAs/GaAs superlattice using the 1 Å-scale resolution capability of the aberration-corrected SuperSTEM 1 instrument.

  19. Nearly auto-parallel maps and conservation laws on curved spaces

    International Nuclear Information System (INIS)

    Vacaru, S.

    1994-01-01

    The theory of nearly auto-parallel maps (na-maps, a generalization of conformal transforms) of Einstein-Cartan spaces is formulated. The transformation laws of geometrical objects and gravitational and matter field equations under superpositions of na-maps are considered. Special attention is paid to the very important problem of the definition of conservation laws for gravitational fields. (Author)

  20. Column-Parallel Single Slope ADC with Digital Correlated Multiple Sampling for Low Noise CMOS Image Sensors

    NARCIS (Netherlands)

    Chen, Y.; Theuwissen, A.J.P.; Chae, Y.

    2011-01-01

    This paper presents a low noise CMOS image sensor (CIS) using 10/12 bit configurable column-parallel single slope ADCs (SS-ADCs) and digital correlated multiple sampling (CMS). The sensor used is a conventional 4T active pixel with a pinned-photodiode as photon detector. The test sensor was

  1. Long Read Alignment with Parallel MapReduce Cloud Platform

    Directory of Open Access Journals (Sweden)

    Ahmed Abdulhakim Al-Absi

    2015-01-01

    Genomic sequence alignment is an important technique for decoding genome sequences in bioinformatics. Next-Generation Sequencing technologies produce genomic data with longer reads. Cloud platforms are adopted to address the problems arising from the storage and analysis of large genomic data. Existing gene sequencing tools for cloud platforms predominantly consider short-read gene sequences and adopt the Hadoop MapReduce framework for computation. However, serial execution of the map and reduce phases is a problem in such systems. Therefore, in this paper, we introduce the Burrows-Wheeler Aligner’s Smith-Waterman Alignment on Parallel MapReduce (BWASW-PMR) cloud platform for long sequence alignment. The proposed cloud platform adopts the widely accepted and accurate BWA-SW algorithm for long sequence alignment. A custom MapReduce platform is developed to overcome the drawbacks of the Hadoop framework. A parallel execution strategy for the MapReduce phases and an optimization of the Smith-Waterman algorithm are considered. Performance evaluation results exhibit an average speed-up of 6.7 for BWASW-PMR compared with the state-of-the-art Bwasw-Cloud. An average reduction of 30% in map-phase makespan is reported across all experiments comparing BWASW-PMR with Bwasw-Cloud. Optimization of Smith-Waterman reduces the execution time by 91.8%. The experimental study proves the efficiency of BWASW-PMR for aligning long genomic sequences on cloud platforms.
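
For readers unfamiliar with the building block, here is the basic Smith-Waterman local-alignment score that BWA-SW builds on (the O(len(a)·len(b)) dynamic program with a linear gap penalty; BWA-SW itself combines this with an FM-index and heuristics not shown here, and the scoring parameters below are illustrative).

```python
# Hedged sketch of Smith-Waterman local alignment (score only, linear gaps).

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # local alignment: scores are clamped at zero
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

# A shared local match of 4+ bases scores at least 4 * match = 8.
assert smith_waterman("ACGTACGT", "TTACGTAA") >= 8
```

In a MapReduce setting, each map task would score a batch of reads against the reference with a kernel like this, and the reduce phase would collect the best alignments.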

  2. Long Read Alignment with Parallel MapReduce Cloud Platform

    Science.gov (United States)

    Al-Absi, Ahmed Abdulhakim; Kang, Dae-Ki

    2015-01-01

    Genomic sequence alignment is an important technique for decoding genome sequences in bioinformatics. Next-Generation Sequencing technologies produce genomic data with longer reads. Cloud platforms are adopted to address the problems arising from the storage and analysis of large genomic data. Existing gene sequencing tools for cloud platforms predominantly consider short-read gene sequences and adopt the Hadoop MapReduce framework for computation. However, serial execution of the map and reduce phases is a problem in such systems. Therefore, in this paper, we introduce the Burrows-Wheeler Aligner's Smith-Waterman Alignment on Parallel MapReduce (BWASW-PMR) cloud platform for long sequence alignment. The proposed cloud platform adopts the widely accepted and accurate BWA-SW algorithm for long sequence alignment. A custom MapReduce platform is developed to overcome the drawbacks of the Hadoop framework. A parallel execution strategy for the MapReduce phases and an optimization of the Smith-Waterman algorithm are considered. Performance evaluation results exhibit an average speed-up of 6.7 for BWASW-PMR compared with the state-of-the-art Bwasw-Cloud. An average reduction of 30% in map-phase makespan is reported across all experiments comparing BWASW-PMR with Bwasw-Cloud. Optimization of Smith-Waterman reduces the execution time by 91.8%. The experimental study proves the efficiency of BWASW-PMR for aligning long genomic sequences on cloud platforms. PMID:26839887

  3. Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine

    Science.gov (United States)

    Lee, C. S. G.; Lin, C. T.

    1989-01-01

    The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme on the mapping of these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirement, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well suited to implementation on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual-network SIMD machine with internal direct feedback is introduced. A systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.

  4. A Parallel Encryption Algorithm Based on Piecewise Linear Chaotic Map

    Directory of Open Access Journals (Sweden)

    Xizhong Wang

    2013-01-01

    We introduce a parallel chaos-based encryption algorithm that takes advantage of multicore processors. The chaotic cryptosystem is generated by the piecewise linear chaotic map (PWLCM). The parallel algorithm is designed with a master/slave communication model using the Message Passing Interface (MPI). The algorithm is suitable not only for multicore processors but also for single-processor architectures. The experimental results show that the chaos-based cryptosystem possesses good statistical properties. The parallel algorithm provides much better performance than the serial one and would be useful for encrypting/decrypting large files or multimedia.
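
The keystream idea can be sketched as follows: iterate the PWLCM to produce bytes and XOR them with the plaintext. Because each chunk can be given its own seed, chunks can be handled by independent workers (the paper uses an MPI master/slave scheme); the chunking, seed choice, and byte-extraction rule below are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch of PWLCM-based XOR stream encryption (serial reference version).

def pwlcm(x, p):
    # piecewise linear chaotic map on [0, 1] with control parameter 0 < p < 0.5
    if x > 0.5:
        x = 1.0 - x          # symmetric upper branch
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def keystream(seed, p, n, burn_in=100):
    x = seed
    for _ in range(burn_in):  # discard transient iterations
        x = pwlcm(x, p)
    out = bytearray()
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_crypt(data, seed=0.1234, p=0.3):
    ks = keystream(seed, p, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"parallel chaotic encryption"
assert xor_crypt(xor_crypt(msg)) == msg  # XOR stream cipher is its own inverse
```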

  5. Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.

    Science.gov (United States)

    Bhandarkar, S M; Chirravuri, S; Arnold, J

    1996-01-01

    Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048 processor MasPar MP-2 system which is an SIMD 2-D toroidal mesh architecture whereas the MIMD algorithms are implemented on an 8 processor Intel iPSC/860 which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs the best. For a coarse-grained MIMD architecture with high synchronization overhead such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.

  6. Parallel pipeline algorithm of real time star map preprocessing

    Science.gov (United States)

    Wang, Hai-yong; Qin, Tian-mu; Liu, Jia-qi; Li, Zhi-feng; Li, Jian-hua

    2016-03-01

    To improve the preprocessing speed of star maps and reduce the resource consumption of a star tracker's embedded system, a parallel pipeline real-time preprocessing algorithm is presented. Two characteristics, the mean and the noise standard deviation of the background gray level of a star map, are first obtained dynamically, with the interference of the star images themselves on the background removed in advance. A criterion for whether subsequent noise filtering is needed is established, and the extraction threshold is then assigned according to the level of background noise, so that centroiding accuracy is guaranteed. In the processing algorithm, as few as two lines of pixel data are buffered, and only 100 shift registers are used to record connected-domain labels, which solves the problems of wasted resources and connected-domain overflow. Simulation results show that the necessary data of the selected bright stars can be accessed within a delay as short as 10 μs after the pipeline processing of a 496×496 star map at 50 Mb/s is finished, and the required memory and registers total less than 80 kb. To verify the accuracy of the proposed algorithm, different levels of background noise are added to the processed ideal star map, and the statistical centroiding error is smaller than 1/23 pixel when the signal-to-noise ratio is greater than 1. The parallel pipeline algorithm for real-time star map preprocessing helps to increase the data output speed and the anti-dynamic performance of a star tracker.
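
The two background characteristics can be sketched in software: estimate the background mean and noise standard deviation with the star pixels' influence removed (here via simple sigma clipping, an assumed stand-in for the paper's hardware method), then set the extraction threshold from the noise level. The clipping constants are illustrative.

```python
# Hedged sketch of background estimation and threshold selection for a star map.

import statistics

def background_stats(pixels, k=3.0, iters=3):
    # iteratively reject pixels far from the mean (bright stars), then re-estimate
    data = list(pixels)
    for _ in range(iters):
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)
        data = [v for v in data if abs(v - mu) <= k * sigma] or data
    return mu, sigma

def extraction_threshold(pixels, k=5.0):
    mu, sigma = background_stats(pixels)
    return mu + k * sigma

# Flat background around gray level 10 with two bright star pixels.
frame = [10] * 9 + [11] * 6 + [9] * 5 + [255, 250]
mu, sigma = background_stats(frame)
assert 9 <= mu <= 11   # star pixels no longer bias the background estimate
assert sigma < 5
```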

  7. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    Science.gov (United States)

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.
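
The data-parallel pattern described above can be sketched for a single linear neuron with squared loss (the paper's networks and cluster setup differ; the model, learning rate, and shard split here are illustrative): the map phase computes gradients on separate data shards, and the reduce phase averages them into one update.

```python
# Hedged sketch of MapReduce-style data-parallel gradient training.

def shard_gradient(w, shard):
    # map task: mean gradient of (w*x - y)^2 over one shard, plus shard size
    g = sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
    return g, len(shard)

def reduce_gradients(partials):
    # reduce task: size-weighted average of the shard gradients
    total = sum(n for _, n in partials)
    return sum(g * n for g, n in partials) / total

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

data = [(x, 3 * x) for x in range(1, 9)]   # target weight is 3
shards = [data[:4], data[4:]]              # two "map" shards
w = 0.0
for _ in range(50):
    grads = [shard_gradient(w, s) for s in shards]  # map phase (parallelizable)
    w -= 0.01 * reduce_gradients(grads)             # reduce phase + update
assert abs(w - 3) < 0.1
```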

  8. TME (Task Mapping Editor): tool for executing distributed parallel computing. TME user's manual

    International Nuclear Information System (INIS)

    Takemiya, Hiroshi; Yamagishi, Nobuhiro; Imamura, Toshiyuki

    2000-03-01

    At the Center for Promotion of Computational Science and Engineering, a software environment PPExe has been developed to support scientific computing on a parallel computer cluster (distributed parallel scientific computing). TME (Task Mapping Editor) is one of the components of PPExe and provides a visual programming environment for distributed parallel scientific computing. Users can specify data dependence among tasks (programs) visually as a data flow diagram and map these tasks onto computers interactively through the TME GUI. The specified tasks are processed by other components of PPExe such as the Meta-scheduler, RIM (Resource Information Monitor), and EMS (Execution Management System) according to the execution order of these tasks determined by TME. In this report, we describe the usage of TME. (author)

  9. An image-space parallel convolution filtering algorithm based on shadow map

    Science.gov (United States)

    Li, Hua; Yang, Huamin; Zhao, Jianping

    2017-07-01

    Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method for generating soft shadows from planar area lights. First, the method generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as shadow boundaries. These areas are then encoded as binary values in a texture map called the binary light-visibility map, and a GPU-based parallel convolution filtering algorithm is applied to smooth the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time, with more detailed shadow boundaries than previous works.
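
The core smoothing step described above — a box filter applied over a binary light-visibility map — can be sketched in NumPy. This is a minimal CPU sketch of the filtering idea only (the map contents and filter radius are illustrative; the paper's GPU implementation and the shadow-map generation are not shown):

```python
import numpy as np

def box_filter(visibility, radius):
    """Smooth a binary light-visibility map with a (2*radius+1)^2 box filter.

    A box filter is separable, so we average along rows, then columns;
    this per-pixel independence is also what makes it easy to parallelize
    on a GPU, one thread per pixel.
    """
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    # 'same'-size 1-D convolution along each axis in turn
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, visibility)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

# Hard shadow edge: left half lit (1), right half shadowed (0)
vis = np.zeros((8, 8))
vis[:, :4] = 1.0
soft = box_filter(vis, radius=1)
```

Pixels away from the edge keep their 0 or 1 value, while pixels straddling the boundary take intermediate values, which is what produces the penumbra.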

  10. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2015-01-01

    Full Text Available Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the data are large. Nowadays, big data has gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for data-intensive applications. Three data-intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated on an experimental MapReduce computer cluster in terms of classification accuracy and computational efficiency.

  11. Cryptanalysis on a parallel keyed hash function based on chaotic maps

    International Nuclear Information System (INIS)

    Guo Wei; Wang Xiaoming; He Dake; Cao Yang

    2009-01-01

    This Letter analyzes the security of a novel parallel keyed hash function based on chaotic maps, proposed by Xiao et al. to improve efficiency in parallel computing environments. We first show how to devise forgery attacks on Xiao's scheme using differential cryptanalysis and give experimental results for two kinds of forgery attacks. Furthermore, we discuss the problem of weak keys in the scheme and demonstrate how weak keys can be used to construct collisions.

  12. Massively parallel read mapping on GPUs with the q-group index and PEANUT

    NARCIS (Netherlands)

    J. Köster (Johannes); S. Rahmann (Sven)

    2014-01-01

    We present the q-group index, a novel data structure for read mapping tailored towards graphics processing units (GPUs) with a small memory footprint and efficient parallel algorithms for querying and building. On top of the q-group index we introduce PEANUT, a highly parallel GPU-based

  13. Optimal task mapping in safety-critical real-time parallel systems

    International Nuclear Information System (INIS)

    Aussagues, Ch.

    1998-01-01

    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance command and control systems found in the nuclear domain and, more generally, in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution lies mainly in the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator, the synchronized product of state-machine task graphs; and the validation of the approach by its implementation and evaluation. The work particularly addresses the main problem of optimally mapping tasks onto a parallel architecture such that the temporal constraints are globally guaranteed, i.e., the timeliness property holds. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements. These criteria are connected with operational constraints of the application domain. Our approach is based on off-line analysis of the feasibility of the deadline-driven dynamic scheduling used to schedule tasks within one processor. From the synchronized product, a system of linear constraints is automatically generated, which allows the maximum load of a group of tasks to be calculated and their timeliness constraints to be verified. Communications, their timeliness verification, and their incorporation into the mapping problem are the second main contribution of this thesis. Finally, the global solving technique, dealing with both task and communication aspects, has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author)
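
The thesis's off-line feasibility analysis for deadline-driven scheduling can be illustrated, in a much simplified form, by the classical EDF utilization test for periodic tasks on one processor. This is a hedged sketch of the general idea only; the thesis's synchronized-product constraint system is far richer than this textbook bound:

```python
def edf_feasible(tasks):
    """Deadline-driven (EDF) feasibility on one processor for periodic
    tasks with deadlines equal to periods: the task set is schedulable
    if and only if total utilization does not exceed 1.

    Each task is a (computation_time, period) pair.
    """
    utilization = sum(c / t for c, t in tasks)
    return utilization <= 1.0

# Three periodic tasks: (C=1, T=4), (C=2, T=8), (C=3, T=12)
tasks = [(1, 4), (2, 8), (3, 12)]
ok = edf_feasible(tasks)   # utilization = 0.25 + 0.25 + 0.25 = 0.75
```

An off-line mapping procedure would run a check of this kind for each candidate assignment of tasks to processors and keep only assignments whose per-processor loads pass.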

  14. Parallel Access of Out-Of-Core Dense Extendible Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow J; Rotem, Doron

    2007-07-26

    Datasets used in scientific and engineering applications are often modeled as dense multi-dimensional arrays. For very large datasets, the corresponding array models are typically stored out-of-core as array files. The array elements are mapped onto linear consecutive locations that correspond to the linear ordering of the multi-dimensional indices. The two conventional mappings used are the row-major order and the column-major order of multi-dimensional arrays. Such conventional mappings of dense array files severely limit the performance of applications and the extendibility of the dataset. Firstly, an array file organized in, say, row-major order causes applications that subsequently access the data in column-major order to have abysmal performance. Secondly, any subsequent expansion of the array file is limited to only one dimension. Expansions of such out-of-core conventional arrays along arbitrary dimensions require storage reorganization that can be very expensive. We present a solution for storing out-of-core dense extendible arrays that resolves these two limitations. The method uses a mapping function F*(), together with information maintained in axial vectors, to compute the linear address of an extendible array element when passed its k-dimensional index. We also give the inverse function, F*-1(), for deriving the k-dimensional index when given the linear address. We show how the mapping function, in combination with MPI-IO and a parallel file system, allows the extendible array to grow without reorganization and without significant performance degradation for applications accessing elements in any desired order. We give methods for reading and writing sub-arrays into and out of parallel applications that run on a cluster of workstations. The axial vectors are replicated and maintained in each node that accesses sub-array elements.
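
The conventional row-major mapping that the paper contrasts with, and its inverse, can be sketched as follows. This is only the baseline address arithmetic; the paper's F*() with axial vectors, which additionally supports extension along any dimension without reorganization, is not reproduced here:

```python
def row_major_address(index, shape):
    """Map a k-dimensional index to its linear address in row-major order."""
    addr = 0
    for i, n in zip(index, shape):
        addr = addr * n + i  # Horner-style accumulation over dimensions
    return addr

def row_major_index(addr, shape):
    """Inverse mapping: recover the k-dimensional index from a linear address."""
    index = []
    for n in reversed(shape):
        index.append(addr % n)
        addr //= n
    return tuple(reversed(index))

shape = (3, 4, 5)
a = row_major_address((2, 1, 3), shape)   # 2*20 + 1*5 + 3 = 48
```

Because every stride depends on the full shape, growing any dimension but the slowest-varying one invalidates all existing addresses — exactly the reorganization cost that the paper's extendible mapping avoids.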

  15. Supercritical Fluid Chromatography of Drugs: Parallel Factor Analysis for Column Testing in a Wide Range of Operational Conditions

    Science.gov (United States)

    Al-Degs, Yahya; Andri, Bertyl; Thiébaut, Didier; Vial, Jérôme

    2017-01-01

    Retention mechanisms involved in supercritical fluid chromatography (SFC) are influenced by interdependent parameters (temperature, pressure, chemistry of the mobile phase, and nature of the stationary phase), a complexity which makes the selection of a proper stationary phase for a given separation a challenging step. For the first time in SFC studies, Parallel Factor Analysis (PARAFAC) was employed to evaluate the chromatographic behavior of eight different stationary phases in a wide range of chromatographic conditions (temperature, pressure, and gradient elution composition). Design of Experiment was used to optimize experiments involving 14 pharmaceutical compounds present in biological and/or environmental samples and with dissimilar physicochemical properties. The results showed the superiority of PARAFAC for the analysis of the three-way (column × drug × condition) data array over unfolding the multiway array to matrices and performing several classical principal component analyses. Thanks to the PARAFAC components, similarity in columns' function, chromatographic trend of drugs, and correlation between separation conditions could be simply depicted: columns were grouped according to their H-bonding forces, while gradient composition was dominating for condition classification. Also, the number of drugs could be efficiently reduced for columns classification as some of them exhibited a similar behavior, as shown by hierarchical clustering based on PARAFAC components. PMID:28695040

  16. Selective loss of orientation column maps in visual cortex during brief elevation of intraocular pressure.

    Science.gov (United States)

    Chen, Xin; Sun, Chao; Huang, Luoxiu; Shou, Tiande

    2003-01-01

    The purpose of this study was to compare the orientation column maps elicited by gratings of different spatial frequencies in cortical area 17 of cats before and during brief elevation of intraocular pressure (IOP). IOP was elevated by injecting saline into the anterior chamber of a cat's eye through a syringe needle, sufficiently to reduce the retinal perfusion pressure (arterial pressure minus IOP) to approximately 30 mm Hg. The visual stimulus gratings were varied in spatial frequency, whereas the other parameters were kept constant. The orientation column maps of cortical area 17 were monocularly elicited by drifting gratings of different spatial frequencies and revealed by a brain intrinsic signal optical imaging system. These maps were compared before and during short-term elevation of IOP. The response amplitude of the orientation maps in area 17 decreased during brief elevation of IOP. This decrease depended on the retinal perfusion pressure, not on the absolute IOP. The location of the most visible maps was spatial-frequency dependent. The blurring or loss of the pattern of the orientation maps was most severe when high-spatial-frequency gratings were used and appeared most prominently on the posterior part of the exposed cortex while IOP was elevated. However, the basic patterns of the maps remained unchanged. Changes in the cortical signal were not due to changes in the optics of the eye with elevation of IOP. A stable normal IOP is essential for maintaining normal visual cortical function. During a brief, high elevation of IOP, cortical processing of high-spatial-frequency visual information was diminished because of a selective functional decline of the retino-geniculo-cortical X pathway through a mechanism of retinal circulation origin.

  17. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    Science.gov (United States)

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.

  18. Improving the security of a parallel keyed hash function based on chaotic maps

    Energy Technology Data Exchange (ETDEWEB)

    Xiao Di, E-mail: xiaodi_cqu@hotmail.co [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China); Liao Xiaofeng [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China); Wang Yong [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China)] [College of Economy and Management, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China)

    2009-11-23

    In this Letter, we analyze the cause of vulnerability of the original parallel keyed hash function based on chaotic maps in detail, and then propose the corresponding enhancement measures. Theoretical analysis and computer simulation indicate that the modified hash function is more secure than the original one. At the same time, it can keep the parallel merit and satisfy the other performance requirements of hash function.

  19. Improving the security of a parallel keyed hash function based on chaotic maps

    International Nuclear Information System (INIS)

    Xiao Di; Liao Xiaofeng; Wang Yong

    2009-01-01

    In this Letter, we analyze the cause of vulnerability of the original parallel keyed hash function based on chaotic maps in detail, and then propose the corresponding enhancement measures. Theoretical analysis and computer simulation indicate that the modified hash function is more secure than the original one. At the same time, it can keep the parallel merit and satisfy the other performance requirements of hash function.

  20. NOAA JPSS Ozone Mapping and Profiler Suite (OMPS) Nadir Total Column Sensor Data Record (SDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Ozone Mapping and Profiler Suite (OMPS) onboard the Suomi NPP satellite monitors ozone from space. OMPS will collect total column and vertical profile ozone data...

  1. Signal-to-noise ratio measurement in parallel MRI with subtraction mapping and consecutive methods

    International Nuclear Information System (INIS)

    Imai, Hiroshi; Miyati, Tosiaki; Ogura, Akio; Doi, Tsukasa; Tsuchihashi, Toshio; Machida, Yoshio; Kobayashi, Masato; Shimizu, Kouzou; Kitou, Yoshihiro

    2008-01-01

    When measuring the signal-to-noise ratio (SNR) of images acquired with parallel magnetic resonance imaging, it was confirmed that conventional SNR measurement methods are problematic. With the method of measuring noise from the background signal, the SNR with parallel imaging was higher than that without parallel imaging. With the subtraction method (NEMA standard), which uses a wide region of interest, the white noise was not evaluated correctly, although the SNR was close to the theoretical value. We propose two techniques, because the SNR in parallel imaging is not uniform owing to inhomogeneity of the coil sensitivity distribution and the geometry factor. In the first method (subtraction mapping), two images were scanned with identical parameters. The SNR in each pixel was obtained by dividing the running mean (7 × 7 neighboring pixels) by the standard deviation/√2 in the same region of interest. In the second (consecutive) method, more than fifty consecutive scans of a uniform phantom were obtained with identical scan parameters. The SNR was then calculated from the ratio of the mean signal intensity to the standard deviation in each pixel over the series of images. Moreover, geometry factors were calculated from the SNRs with and without parallel imaging. The SNR and geometry factor obtained with the subtraction mapping method agreed with those of the consecutive method. Both methods make it possible to determine the SNR in parallel imaging in more detail and to calculate the geometry factor. (author)
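
The subtraction-mapping computation described above can be sketched in NumPy under one reading of the abstract: per pixel, the local mean of the averaged image is divided by the local standard deviation of the difference image over √2 (the window handling at image edges and the synthetic phantom data are assumptions, not details from the paper):

```python
import numpy as np

def snr_map(img1, img2, half=3):
    """Subtraction-mapping SNR: per pixel, the 7x7 running mean of the
    averaged image divided by the 7x7 local standard deviation of the
    difference image / sqrt(2).

    Two acquisitions with identical parameters are assumed; the sqrt(2)
    accounts for the noise variance doubling in the subtraction.
    """
    mean_img = (img1 + img2) / 2.0
    diff = img1 - img2
    h, w = img1.shape
    snr = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            ys = slice(max(0, y - half), y + half + 1)
            xs = slice(max(0, x - half), x + half + 1)
            noise = diff[ys, xs].std(ddof=1) / np.sqrt(2)
            snr[y, x] = mean_img[ys, xs].mean() / noise
    return snr

# Synthetic uniform phantom: signal 100, noise sigma 5, so true SNR = 20
rng = np.random.default_rng(0)
img1 = 100.0 + rng.normal(0, 5.0, (32, 32))
img2 = 100.0 + rng.normal(0, 5.0, (32, 32))
snr = snr_map(img1, img2)
```

The geometry factor would then follow as SNR(without parallel imaging) / (SNR(with parallel imaging) × √R) for reduction factor R, computed pixel-wise from two such maps.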

  2. MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control

    Science.gov (United States)

    Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming

    2017-09-01

    The increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches under dynamic production conditions. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, the large volumes of varied quality-related data generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification, and reasoning, are embedded into the Reduce process. Relying on the ability of the Bayesian network to deal with dynamic and uncertain problems and on the parallel computing power of MapReduce, Bayesian networks of factors impacting quality are built based on prior probability distributions and modified with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed increases almost in direct proportion to the number of computing nodes. It is also shown that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision making for precision problem solving. The integration of big data analytics and the Bayesian network method offers a whole new perspective on manufacturing quality control.

  3. Using Hadoop MapReduce for Parallel Genetic Algorithms: A Comparison of the Global, Grid and Island Models.

    Science.gov (United States)

    Ferrucci, Filomena; Salza, Pasquale; Sarro, Federica

    2017-06-29

    The need to improve the scalability of Genetic Algorithms (GAs) has motivated the research on Parallel Genetic Algorithms (PGAs), and different technologies and approaches have been used. Hadoop MapReduce represents one of the most mature technologies to develop parallel algorithms. Based on the fact that parallel algorithms introduce communication overhead, the aim of the present work is to understand if, and possibly when, the parallel GA solutions using Hadoop MapReduce show better performance than sequential versions in terms of execution time. Moreover, we are interested in understanding which PGA model can be most effective among the global, grid, and island models. We empirically assessed the performance of these three parallel models with respect to a sequential GA on a software engineering problem, evaluating the execution time and the achieved speedup. We also analysed the behaviour of the parallel models in relation to the overhead produced by the use of Hadoop MapReduce and the GAs' computational effort, which gives a more machine-independent measure of these algorithms. We exploited three problem instances to differentiate the computation load and three cluster configurations based on 2, 4, and 8 parallel nodes. Moreover, we estimated the costs of the execution of the experimentation on a potential cloud infrastructure, based on the pricing of the major commercial cloud providers. The empirical study revealed that the use of PGA based on the island model outperforms the other parallel models and the sequential GA for all the considered instances and clusters. Using 2, 4, and 8 nodes, the island model achieves an average speedup over the three datasets of 1.8, 3.4, and 7.0 times, respectively. Hadoop MapReduce has a set of different constraints that need to be considered during the design and the implementation of parallel algorithms. The overhead of data store (i.e., HDFS) accesses, communication, and latency requires solutions that reduce data store

  4. Biomechanical properties of orthogonal plate configuration versus parallel plate configuration using the same locking plate system for intra-articular distal humeral fractures under radial or ulnar column axial load.

    Science.gov (United States)

    Kudo, Toshiya; Hara, Akira; Iwase, Hideaki; Ichihara, Satoshi; Nagao, Masashi; Maruyama, Yuichiro; Kaneko, Kazuo

    2016-10-01

    Previous reports have questioned whether an orthogonal or a parallel configuration is superior for distal humeral articular fractures. In previous clinical and biomechanical studies, implant failure of the posterolateral plate has been reported with orthogonal configurations; however, the reason for screw loosening in the posterolateral plate is unclear. The purpose of this study was to evaluate the biomechanical properties and clarify the causes of posterolateral plate loosening using a humeral fracture model under axial compression applied to the radial or ulnar column separately, changing only the plate setup: parallel or orthogonal. We used artificial bone to create an Association for the Study of Internal Fixation type 13-C2.3 intra-articular fracture model with a 1-cm supracondylar gap. We used an anatomically preshaped distal humerus locking compression plate system (Synthes GmbH, Solothurn, Switzerland). Although this is originally an orthogonal plate system, we devised a mediolateral parallel configuration by using the contralateral medial plate instead of the posterolateral plate in the system. We calculated the stiffness of the radial and ulnar columns and the anterior movement of the condylar fragment in the lateral view. The parallel configuration was superior to the orthogonal configuration regarding radial column stiffness under axial compression. There were significant differences between the two configurations regarding anterior movement of the capitellum during axial loading of the radial column. The posterolateral plate tended to bend anteriorly under axial compression compared with the medial or lateral plate. We believe that in the orthogonal configuration axial compression induced more anterior displacement of the capitellum than of the trochlea, which eventually induced secondary fragment or screw dislocation on the posterolateral plate, or nonunion at the supracondylar level. In the parallel configuration, anterior movement of the capitellum or

  5. Optimising parallel R correlation matrix calculations on gene expression data using MapReduce.

    Science.gov (United States)

    Wang, Shicai; Pandis, Ioannis; Johnson, David; Emam, Ibrahim; Guitton, Florian; Oehmichen, Axel; Guo, Yike

    2014-11-05

    High-throughput molecular profiling data has been used to improve clinical decision making by stratifying subjects based on their molecular profiles. Unsupervised clustering algorithms can be used for stratification purposes. However, the current speed of the clustering algorithms cannot meet the requirements of large-scale molecular data, owing to the poor performance of the correlation matrix calculation. With high-throughput sequencing technologies promising to produce even larger datasets per subject, we expect the performance of state-of-the-art statistical algorithms to be further impacted unless efforts towards optimisation are carried out. MapReduce is a widely used high-performance parallel framework that can solve this problem. In this paper, we evaluate the current parallel modes for correlation calculation methods and introduce an efficient data distribution and parallel calculation algorithm based on MapReduce to optimise the correlation calculation. We studied the performance of our algorithm using two gene expression benchmarks. In the micro-benchmark, our implementation using MapReduce, based on the R package RHIPE, demonstrates a 3.26-5.83 fold increase compared to the default Snowfall and a 1.56-1.64 fold increase compared to the basic RHIPE in the Euclidean, Pearson, and Spearman correlations. Though vanilla R and the optimised Snowfall outperform our optimised RHIPE in the micro-benchmark, they do not scale well with the macro-benchmark. In the macro-benchmark the optimised RHIPE performs 2.03-16.56 times faster than vanilla R. Benefiting from the 3.30-5.13 times faster data preparation, the optimised RHIPE performs 1.22-1.71 times faster than the optimised Snowfall. Both the optimised RHIPE and the optimised Snowfall successfully perform the Kendall correlation with the TCGA dataset within 7 hours, more than 30 times faster than the estimated vanilla R time. The performance evaluation found that the new MapReduce algorithm and its
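
The data-distribution idea behind such an approach — split the gene rows into blocks, compute each block's correlations against the full matrix independently, and stack the partial results — can be sketched without Hadoop. This is a generic illustration of block-parallel Pearson correlation, not the paper's RHIPE implementation:

```python
import numpy as np

def corr_block(block, data):
    """'Map' step: Pearson correlation of one block of genes against all genes."""
    bz = (block - block.mean(axis=1, keepdims=True)) / block.std(axis=1, keepdims=True)
    dz = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    return bz @ dz.T / block.shape[1]

def parallel_corr(data, n_blocks=4):
    """'Reduce' step: stack the per-block partial correlation matrices.

    Each corr_block call is independent, so in a real deployment the
    blocks would be distributed across MapReduce workers.
    """
    blocks = np.array_split(data, n_blocks, axis=0)
    return np.vstack([corr_block(b, data) for b in blocks])

rng = np.random.default_rng(1)
expr = rng.normal(size=(20, 50))   # 20 genes x 50 samples
C = parallel_corr(expr)
```

Each block's cost is proportional only to its own row count, which is what makes the distribution across workers effective as the number of genes grows.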

  6. Supercritical Fluid Chromatography of Drugs: Parallel Factor Analysis for Column Testing in a Wide Range of Operational Conditions

    Directory of Open Access Journals (Sweden)

    Ramia Z. Al Bakain

    2017-01-01

    Full Text Available Retention mechanisms involved in supercritical fluid chromatography (SFC) are influenced by interdependent parameters (temperature, pressure, chemistry of the mobile phase, and nature of the stationary phase), a complexity which makes the selection of a proper stationary phase for a given separation a challenging step. For the first time in SFC studies, Parallel Factor Analysis (PARAFAC) was employed to evaluate the chromatographic behavior of eight different stationary phases in a wide range of chromatographic conditions (temperature, pressure, and gradient elution composition). Design of Experiment was used to optimize experiments involving 14 pharmaceutical compounds present in biological and/or environmental samples and with dissimilar physicochemical properties. The results showed the superiority of PARAFAC for the analysis of the three-way (column × drug × condition) data array over unfolding the multiway array to matrices and performing several classical principal component analyses. Thanks to the PARAFAC components, similarity in columns' function, chromatographic trend of drugs, and correlation between separation conditions could be simply depicted: columns were grouped according to their H-bonding forces, while gradient composition was dominating for condition classification. Also, the number of drugs could be efficiently reduced for columns classification as some of them exhibited a similar behavior, as shown by hierarchical clustering based on PARAFAC components.

  7. SPSS and SAS programs for determining the number of components using parallel analysis and velicer's MAP test.

    Science.gov (United States)

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
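
The logic of parallel analysis — retain only components whose observed eigenvalues exceed those obtained from random data of the same dimensions — can be sketched in Python. This is a generic illustration of Horn's procedure with synthetic data; O'Connor's SPSS/SAS programs themselves are not reproduced:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis: count the components whose observed
    correlation-matrix eigenvalues exceed the chosen percentile of
    eigenvalues from random normal data of the same n x p shape."""
    n, p = data.shape
    rng = np.random.default_rng(seed)
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.normal(size=(n, p))
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    thresholds = np.percentile(rand, percentile, axis=0)
    return int(np.sum(obs > thresholds))

# Synthetic data with exactly two latent factors driving six variables
rng = np.random.default_rng(42)
f = rng.normal(size=(300, 2))
loadings = np.array([[1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]], dtype=float)
X = f @ loadings + 0.3 * rng.normal(size=(300, 6))
k = parallel_analysis(X)
```

Unlike the eigenvalues-greater-than-one rule, the retention threshold here adapts to the sample size and number of variables, which is why the procedure is the recommended one.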

  8. Optimal task mapping in safety-critical real-time parallel systems; Placement optimal de taches pour les systemes paralleles temps-reel critiques

    Energy Technology Data Exchange (ETDEWEB)

    Aussagues, Ch

    1998-12-11

    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance command and control systems found in the nuclear domain and, more generally, in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution lies mainly in the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator, the synchronized product of state-machine task graphs; and the validation of the approach by its implementation and evaluation. The work particularly addresses the main problem of optimally mapping tasks onto a parallel architecture such that the temporal constraints are globally guaranteed, i.e., the timeliness property holds. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements. These criteria are connected with operational constraints of the application domain. Our approach is based on off-line analysis of the feasibility of the deadline-driven dynamic scheduling used to schedule tasks within one processor. From the synchronized product, a system of linear constraints is automatically generated, which allows the maximum load of a group of tasks to be calculated and their timeliness constraints to be verified. Communications, their timeliness verification, and their incorporation into the mapping problem are the second main contribution of this thesis. Finally, the global solving technique, dealing with both task and communication aspects, has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author) 96 refs.

  9. Encoding methods for B1+ mapping in parallel transmit systems at ultra high field

    Science.gov (United States)

    Tse, Desmond H. Y.; Poole, Michael S.; Magill, Arthur W.; Felder, Jörg; Brenner, Daniel; Jon Shah, N.

    2014-08-01

    Parallel radiofrequency (RF) transmission, either in the form of RF shimming or pulse design, has been proposed as a solution to the B1+ inhomogeneity problem in ultra high field magnetic resonance imaging. As a prerequisite, accurate B1+ maps from each of the available transmit channels are required. In this work, four different encoding methods for B1+ mapping, namely 1-channel-on, all-channels-on-except-1, all-channels-on-1-inverted and Fourier phase encoding, were evaluated using dual refocusing acquisition mode (DREAM) at 9.4 T. Fourier phase encoding was demonstrated in both phantom and in vivo to be the least susceptible to artefacts caused by destructive RF interference at 9.4 T. Unlike the other two interferometric encoding schemes, Fourier phase encoding showed negligible dependency on the initial RF phase setting and therefore no prior B1+ knowledge is required. Fourier phase encoding also provides a flexible way to increase the number of measurements to increase SNR, and to allow further reduction of artefacts by weighted decoding. These advantages of Fourier phase encoding suggest that it is a good choice for B1+ mapping in parallel transmit systems at ultra high field.
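
The Fourier phase-encoding scheme can be illustrated with a toy decode: if in measurement m each channel j transmits with phase 2πjm/N, the measured combined fields at a voxel form a DFT-like sum of the per-channel B1+ values, and an inverse transform recovers them. The field values below are synthetic and the actual DREAM acquisition is not modeled; this shows only the encoding/decoding algebra assumed from the description:

```python
import numpy as np

N = 8                                               # number of transmit channels
rng = np.random.default_rng(3)
b1 = rng.normal(size=N) + 1j * rng.normal(size=N)   # per-channel complex B1+ at one voxel

# Encoding: in measurement m, channel ch is driven with phase exp(+2i*pi*ch*m/N);
# the voxel sees the superposition of all channels.
ch = np.arange(N)
measurements = np.array(
    [np.sum(b1 * np.exp(2j * np.pi * ch * m / N)) for m in range(N)]
)

# Decoding: the measurement vector is an inverse-DFT-style sum of the channel
# fields, so a forward FFT with 1/N scaling recovers each channel separately.
decoded = np.fft.fft(measurements) / N
```

Because every measurement drives all channels at full amplitude, no single measurement suffers the low signal of a 1-channel-on acquisition, which is one intuition for the scheme's robustness to destructive interference.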

  10. Column-by-column compositional mapping by Z-contrast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Molina, S.I. [Departamento de Ciencia de los Materiales e I.M. y Q.I., Facultad de Ciencias, Universidad de Cadiz, Campus Rio San Pedro, s/n, 11510 Puerto Real, Cadiz (Spain)], E-mail: sergio.molina@uca.es; Sales, D.L. [Departamento de Ciencia de los Materiales e I.M. y Q.I., Facultad de Ciencias, Universidad de Cadiz, Campus Rio San Pedro, s/n, 11510 Puerto Real, Cadiz (Spain); Galindo, P.L. [Departamento de Lenguajes y Sistemas Informaticos, CASEM, Universidad de Cadiz, Campus Rio San Pedro, s/n, 11510 Puerto Real, Cadiz (Spain); Fuster, D.; Gonzalez, Y.; Alen, B.; Gonzalez, L. [Instituto de Microelectronica de Madrid (CNM, CSIC), Isaac Newton 8, 28760 Tres Cantos, Madrid (Spain); Varela, M.; Pennycook, S.J. [Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)

    2009-01-15

    A phenomenological method is developed to determine the composition of materials, with atomic column resolution, by analysis of integrated intensities of aberration-corrected Z-contrast scanning transmission electron microscopy images. The method is exemplified for InAs_xP_{1-x} alloys using epitaxial thin films with calibrated compositions as standards. Using this approach we have determined the composition of the two-dimensional wetting layer formed between self-assembled InAs quantum wires on InP(0 0 1) substrates.
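
    The calibration idea, mapping an integrated column intensity to a composition x via reference films of known composition, can be sketched as a piecewise-linear lookup (names and data are hypothetical; the paper's actual treatment of Z-contrast intensities is more involved):

```python
def composition_from_intensity(intensity, standards):
    # standards: (integrated_intensity, composition_x) pairs measured on
    # calibrated reference films. Values outside the calibrated range are
    # clamped to the nearest standard.
    pts = sorted(standards)
    if intensity <= pts[0][0]:
        return pts[0][1]
    for (i0, x0), (i1, x1) in zip(pts, pts[1:]):
        if intensity <= i1:
            # linear interpolation between the two bracketing standards
            return x0 + (x1 - x0) * (intensity - i0) / (i1 - i0)
    return pts[-1][1]
```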

  11. An Efficient MapReduce-Based Parallel Clustering Algorithm for Distributed Traffic Subarea Division

    Directory of Open Access Journals (Sweden)

    Dawen Xia

    2015-01-01

    Full Text Available Traffic subarea division is vital for traffic system management and traffic network analysis in intelligent transportation systems (ITSs). Since existing methods may not be suitable for big traffic data processing, this paper presents a MapReduce-based Parallel Three-Phase K-Means (Par3PKM) algorithm for solving the traffic subarea division problem on a widely adopted Hadoop distributed computing platform. Specifically, we first modify the distance metric and initialization strategy of K-Means and then employ a MapReduce paradigm to redesign the optimized K-Means algorithm for parallel clustering of large-scale taxi trajectories. Moreover, we propose a boundary-identifying method to connect the borders of the clustering results for each cluster. Finally, we divide the traffic subareas of Beijing, based on real-world trajectory data sets generated by 12,000 taxis over one month, using the proposed approach. Experimental results indicate that, compared with K-Means, Par2PK-Means, and ParCLARA, Par3PKM achieves higher efficiency and accuracy and better scalability, and can effectively divide traffic subareas with big taxi trajectory data.
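
    The map/reduce structure of one K-Means iteration of this kind can be sketched in plain Python (illustrative only; the function names and the plain squared-Euclidean metric are our assumptions, not the paper's modified distance metric, and the driver loop that repeats the iteration is omitted):

```python
from collections import defaultdict

def kmeans_map(points, centroids):
    # map phase: each mapper assigns its points to the nearest centroid
    # and emits (cluster_id, point) pairs
    pairs = []
    for p in points:
        cid = min(range(len(centroids)),
                  key=lambda i: sum((a - b) ** 2
                                    for a, b in zip(p, centroids[i])))
        pairs.append((cid, p))
    return pairs

def kmeans_reduce(pairs):
    # reduce phase: average the points assigned to each cluster id,
    # producing the updated centroids
    groups = defaultdict(list)
    for cid, p in pairs:
        groups[cid].append(p)
    return {cid: tuple(sum(c) / len(pts) for c in zip(*pts))
            for cid, pts in groups.items()}
```

    In a real Hadoop job the map output would be shuffled by cluster id across machines; here the reduce simply groups in memory.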

  12. Parallel definition of tear film maps on distributed-memory clusters for the support of dry eye diagnosis.

    Science.gov (United States)

    González-Domínguez, Jorge; Remeseiro, Beatriz; Martín, María J

    2017-02-01

    The analysis of the interference patterns on the tear film lipid layer is a useful clinical test to diagnose dry eye syndrome. This task can be automated with a high degree of accuracy by means of the use of tear film maps. However, the time required by the existing applications to generate them prevents a wider acceptance of this method by medical experts. Multithreading has previously been successfully employed by the authors to accelerate the tear film map definition on multicore single-node machines. In this work, we propose a hybrid message-passing and multithreading parallel approach that further accelerates the generation of tear film maps by exploiting the computational capabilities of distributed-memory systems such as multicore clusters and supercomputers. The algorithm for drawing tear film maps is parallelized using the Message Passing Interface (MPI) for inter-node communications and the multithreading support available in the C++11 standard for intra-node parallelization. The original algorithm is modified to reduce the communications and increase the scalability. The hybrid method has been tested on 32 nodes of an Intel cluster (with two 12-core Haswell 2680v3 processors per node) using 50 representative images. Results show that the maximum runtime is reduced from almost two minutes using the previous multithreaded-only approach to less than ten seconds using the hybrid method. The hybrid MPI/multithreaded implementation can be used by medical experts to obtain tear film maps in only a few seconds, which will significantly accelerate and facilitate the diagnosis of dry eye syndrome. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
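
    Structurally, the hybrid scheme distributes images across nodes and splits each image across threads within a node. A toy sketch of that two-level decomposition in Python (illustrative only; the names, the round-robin image distribution, and the per-pixel "classification" are our stand-ins, and Python threads do not give true CPU parallelism under the GIL the way C++11 threads do):

```python
from concurrent.futures import ThreadPoolExecutor

def classify_pixel(value):
    # hypothetical stand-in for the per-pixel texture classification
    return 0 if value < 128 else 1

def process_image(image, n_threads=2):
    # intra-node parallelism: threads process row blocks of one image
    def do_rows(rows):
        return [[classify_pixel(v) for v in r] for r in rows]
    chunk = max(1, len(image) // n_threads)
    blocks = [image[i:i + chunk] for i in range(0, len(image), chunk)]
    with ThreadPoolExecutor(max_workers=n_threads) as ex:
        results = list(ex.map(do_rows, blocks))
    return [row for block in results for row in block]

def process_dataset(images, n_nodes=2):
    # inter-node parallelism: a static round-robin distribution of images,
    # standing in for the MPI rank decomposition
    maps = {}
    for rank in range(n_nodes):
        for idx in range(rank, len(images), n_nodes):
            maps[idx] = process_image(images[idx])
    return [maps[i] for i in range(len(images))]
```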

  13. Convolutional Codes with Maximum Column Sum Rank for Network Streaming

    OpenAIRE

    Mahmood, Rafid; Badr, Ahmed; Khisti, Ashish

    2015-01-01

    The column Hamming distance of a convolutional code determines the error correction capability when streaming over a class of packet erasure channels. We introduce a metric known as the column sum rank that parallels the column Hamming distance when streaming over a network with link failures. We prove rank analogues of several known column Hamming distance properties and introduce a new family of convolutional codes that maximize the column sum rank up to the code memory. Our construction invol...

  14. DCODE: A Distributed Column-Oriented Database Engine for Big Data Analytics

    OpenAIRE

    Liu, Yanchen; Cao, Fang; Mortazavi, Masood; Chen, Mengmeng; Yan, Ning; Ku, Chi; Adnaik, Aniket; Morgan, Stephen; Shi, Guangyu; Wang, Yuhu; Fang, Fan

    2015-01-01

    Part 10: Big Data and Text Mining; International audience; We propose a novel Distributed Column-Oriented Database Engine (DCODE) for efficient analytic query processing that combines advantages of both column storage and parallel processing. In DCODE, we enhance an existing open-source columnar database engine by adding the capability for handling queries over a cluster. Specifically, we studied parallel query execution and optimization techniques such as horizontal partitioning, exchange op...

  15. A MapReduce-Based Parallel Frequent Pattern Growth Algorithm for Spatiotemporal Association Analysis of Mobile Trajectory Big Data

    Directory of Open Access Journals (Sweden)

    Dawen Xia

    2018-01-01

    Full Text Available Frequent pattern mining is an effective approach for spatiotemporal association analysis of mobile trajectory big data in data-driven intelligent transportation systems. While existing parallel algorithms have been successfully applied to frequent pattern mining of large-scale trajectory data, two major challenges remain: how to overcome the inherent defects of Hadoop when coping with taxi trajectory big data that includes massive small files, and how to discover implicit spatiotemporal frequent patterns with MapReduce. To conquer these challenges, this paper presents a MapReduce-based Parallel Frequent Pattern growth (MR-PFP) algorithm to analyze the spatiotemporal characteristics of taxi operations using large-scale taxi trajectories, with massive-small-file processing strategies on a Hadoop platform. More specifically, we first implement three methods, namely Hadoop Archives (HAR), CombineFileInputFormat (CFIF), and Sequence Files (SF), to overcome the existing defects of Hadoop, and then propose two strategies based on their performance evaluations. Next, we incorporate SF into the Frequent Pattern growth (FP-growth) algorithm and implement the optimized FP-growth algorithm on a MapReduce framework. Finally, we analyze the characteristics of taxi operations in both spatial and temporal dimensions with MR-PFP in parallel. The results demonstrate that MR-PFP is superior to the existing Parallel FP-growth (PFP) algorithm in efficiency and scalability.
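
    FP-growth itself builds a compact prefix tree; as a minimal, hypothetical stand-in for the counting phase that a MapReduce job performs over trajectory records, support counting of co-occurring item pairs can be sketched as:

```python
from collections import Counter
from itertools import combinations

def map_pairs(transactions):
    # map phase: emit (pattern, 1) for every item pair in each record
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            yield pair, 1

def reduce_counts(pairs, min_support):
    # reduce phase: sum counts per pattern, keep the frequent ones
    counts = Counter()
    for key, n in pairs:
        counts[key] += n
    return {k: v for k, v in counts.items() if v >= min_support}
```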

  16. Topographic shear and the relation of ocular dominance columns to orientation columns in primate and cat visual cortex.

    Science.gov (United States)

    Wood, Richard J.; Schwartz, Eric L.

    1999-03-01

    Shear has been known to exist for many years in the topographic structure of the primary visual cortex, but has received little attention in the modeling literature. Although the topographic map of V1 is largely conformal (i.e. zero shear), several groups have observed topographic shear in the region of the V1/V2 border. Furthermore, shear has also been revealed by anisotropy of cortical magnification factor within a single ocular dominance column. In the present paper, we make a functional hypothesis: the major axis of the topographic shear tensor provides cortical neurons with a preferred direction of orientation tuning. We demonstrate that isotropic neuronal summation of a sheared topographic map, in the presence of additional random shear, can provide the major features of cortical functional architecture with the ocular dominance column system acting as the principal source of the shear tensor. The major principal axis of the shear tensor determines the direction and its eigenvalues the relative strength of cortical orientation preference. This hypothesis is then shown to be qualitatively consistent with a variety of experimental results on cat and monkey orientation column properties obtained from optical recording and from other anatomical and physiological techniques. In addition, we show that a recent result of Das and Gilbert (Das, A., & Gilbert, C. D., 1997. Distortions of visuotopic map match orientation singularities in primary visual cortex. Nature, 387, 594-598) is consistent with an infinite set of parameterized solutions for the cortical map. We exploit this freedom to choose a particular instance of the Das-Gilbert solution set which is consistent with the full range of local spatial structure in V1. These results suggest that further relationships between ocular dominance columns, orientation columns, and local topography may be revealed by experimental testing.

  17. Hardware-Oblivious Parallelism for In-Memory Column-Stores

    NARCIS (Netherlands)

    M. Heimel; M. Saecker; H. Pirk (Holger); S. Manegold (Stefan); V. Markl

    2013-01-01

    htmlabstractThe multi-core architectures of today’s computer systems make parallelism a necessity for performance critical applications. Writing such applications in a generic, hardware-oblivious manner is a challenging problem: Current database systems thus rely on labor-intensive and error-prone

  18. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...

  19. THE 'TRUE' COLUMN DENSITY DISTRIBUTION IN STAR-FORMING MOLECULAR CLOUDS

    International Nuclear Information System (INIS)

    Goodman, Alyssa A.; Pineda, Jaime E.; Schnee, Scott L.

    2009-01-01

    We use the COMPLETE Survey's observations of the Perseus star-forming region to assess and intercompare the three methods used for measuring column density in molecular clouds: near-infrared (NIR) extinction mapping; thermal emission mapping in the far-IR; and mapping the intensity of CO isotopologues. Overall, the structures shown by all three tracers are morphologically similar, but important differences exist among the tracers. We find that the dust-based measures (NIR extinction and thermal emission) give similar, log-normal, distributions for the full (∼20 pc scale) Perseus region, once careful calibration corrections are made. We also compare dust- and gas-based column density distributions for physically meaningful subregions of Perseus, and we find significant variations in the distributions for those (smaller, ∼few pc scale) regions. Even though we have used ¹²CO data to estimate excitation temperatures, and we have corrected for opacity, the ¹³CO maps seem unable to give column distributions that consistently resemble those from dust measures. We have edited out the effects of the shell around the B-star HD 278942 from the column density distribution comparisons. In that shell's interior and in the parts where it overlaps the molecular cloud, there appears to be a dearth of ¹³CO, which is likely due either to ¹³CO not yet having had time to form in this young structure and/or to destruction of ¹³CO in the molecular cloud by HD 278942's wind and/or radiation. We conclude that the use of either dust or gas measures of column density without extreme attention to calibration (e.g., of thermal emission zero-levels) and artifacts (e.g., the shell) is more perilous than even experts might normally admit. And, the use of ¹³CO data to trace total column density in detail, even after proper calibration, is unavoidably limited in utility due to threshold, depletion, and opacity effects. If one's main aim is to map column density (rather than temperature

  20. High-field fMRI unveils orientation columns in humans.

    Science.gov (United States)

    Yacoub, Essa; Harel, Noam; Ugurbil, Kâmil

    2008-07-29

    Functional (f)MRI has revolutionized the field of human brain research. fMRI can noninvasively map the spatial architecture of brain function via localized increases in blood flow after sensory or cognitive stimulation. Recent advances in fMRI have led to enhanced sensitivity and spatial accuracy of the measured signals, indicating the possibility of detecting small neuronal ensembles that constitute fundamental computational units in the brain, such as cortical columns. Orientation columns in visual cortex are perhaps the best known example of such a functional organization in the brain. They cannot be discerned via anatomical characteristics, as with ocular dominance columns. Instead, the elucidation of their organization requires functional imaging methods. However, because of insufficient sensitivity, spatial accuracy, and image resolution of the available mapping techniques, thus far, they have not been detected in humans. Here, we demonstrate, by using high-field (7-T) fMRI, the existence and spatial features of orientation-selective columns in humans. Striking similarities were found with the known spatial features of these columns in monkeys. In addition, we found that a larger number of orientation columns are devoted to processing orientations around 90 degrees (vertical stimuli with horizontal motion), whereas relatively similar fMRI signal changes were observed across any given active column. With the current proliferation of high-field MRI systems and constant evolution of fMRI techniques, this study heralds the exciting prospect of exploring unmapped and/or unknown columnar level functional organizations in the human brain.

  1. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed-memory highly parallel computer. The parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique was applied that dynamically maps each history to a processor and maps the control process to a designated processor. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)
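
    Dynamic history-to-processor mapping of the kind described can be sketched as a greedy scheduler that always hands the next history to the processor that frees up first (a hypothetical illustration with our own names and cost model, not MCNP4 code):

```python
import heapq

def dynamic_map(histories, n_procs):
    # histories: (history_id, estimated_cost) pairs.
    # Maintain a min-heap of (accumulated_busy_time, proc_id); each new
    # history goes to the least-loaded processor, which balances work
    # even when per-history costs vary widely.
    heap = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    assignment = {}
    for hid, cost in histories:
        busy, proc = heapq.heappop(heap)
        assignment[hid] = proc
        heapq.heappush(heap, (busy + cost, proc))
    return assignment
```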

  2. Experimental study of parallel multi-tungsten wire Z-pinch

    International Nuclear Information System (INIS)

    Huang Xianbin; China Academy of Engineering Physics, Mianyang; Lin Libin; Yang Libing; Deng Jianjun; Gu Yuanchao; Ye Shican; Yue Zhengpu; Zhou Shaotong; Li Fengping; Zhang Siqun

    2005-01-01

    Implosion experiments with three and five parallel tungsten-wire loads on the accelerator 'Yang' are reported. Tungsten wires (φ17 μm) with a separation of 1 mm were used. The pinch was driven by a 350 kA peak current with an 80 ns 10%-90% rise time. By means of a pinhole camera and X-ray diagnostics, a non-uniform plasma column forming among the wires and soft X-ray pulses were observed. Changes in the load current are analyzed, and the development of sausage and kink instabilities, the 'hot spot' effect, and the dispersion spot of the plasma column are also discussed. (authors)

  3. Simulation Exploration through Immersive Parallel Planes

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Smith, Steve [Los Alamos Visualization Associates

    2017-05-25

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
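
    The brushing-and-filtering step described above reduces, at its core, to a selection predicate over observations: a rectangle on one plane plus an optional time window. A hypothetical sketch (names and data layout ours; the real system operates on streaming simulation output inside a virtual environment):

```python
def brush(observations, plane, region, t_range=None):
    # plane: the pair of dimensions shown on one parallel rectangle
    # region: (xmin, xmax, ymin, ymax) of the brushed area
    # t_range: optional (tmin, tmax) from the time 'slider'
    xdim, ydim = plane
    xmin, xmax, ymin, ymax = region
    hits = []
    for obs in observations:
        if t_range and not (t_range[0] <= obs["t"] <= t_range[1]):
            continue
        if xmin <= obs[xdim] <= xmax and ymin <= obs[ydim] <= ymax:
            hits.append(obs)
    return hits
```

    The selected observations are what the system would both highlight and use to seed additional simulation runs.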

  4. TRACING H2 COLUMN DENSITY WITH ATOMIC CARBON (C I) AND CO ISOTOPOLOGS

    International Nuclear Information System (INIS)

    Lo, N.; Bronfman, L.; Cunningham, M. R.; Jones, P. A.; Lowe, V.; Cortes, P. C.; Simon, R.; Fissel, L.; Novak, G.

    2014-01-01

    We present the first results of neutral carbon ([C I] ³P₁-³P₀ at 492 GHz) and carbon monoxide (¹³CO, J = 1-0) mapping in the Vela Molecular Ridge cloud C (VMR-C) and the G333 giant molecular cloud complexes with the NANTEN2 and Mopra telescopes. For the four regions mapped in this work, we find that [C I] has very similar spectral emission profiles to ¹³CO, with comparable line widths. We find that [C I] has an opacity of 0.1-1.3 across the mapped region, while the [C I]/¹³CO peak brightness temperature ratio is between 0.2 and 0.8. The [C I] column density is an order of magnitude lower than that of ¹³CO. The H₂ column density derived from [C I] is comparable to values obtained from ¹²CO. Our maps show that C I is preferentially detected in gas with low temperatures (below 20 K), which possibly explains the comparable H₂ column density calculated from both tracers (both C I and ¹²CO underestimate column density), as a significant amount of the C I in the warmer gas is likely in the higher-energy transition ([C I] ³P₂-³P₁ at 810 GHz), and thus it is likely that observations of both of the above [C I] transitions are needed in order to recover the total H₂ column density.

  5. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  6. Scalability of Parallel Scientific Applications on the Cloud

    Directory of Open Access Journals (Sweden)

    Satish Narayana Srirama

    2011-01-01

    Full Text Available Cloud computing, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. To study the effects of moving parallel scientific applications onto the cloud, we deployed several benchmark applications, such as matrix-vector operations, the NAS parallel benchmarks, and DOUG (Domain decomposition On Unstructured Grids), on the cloud. DOUG is an open-source software package for the parallel iterative solution of very large sparse systems of linear equations. The detailed analysis of DOUG on the cloud showed that parallel applications benefit considerably and scale reasonably well on the cloud. We could also observe the limitations of the cloud and compare it with a cluster in terms of performance. However, to run scientific applications efficiently on cloud infrastructure, the applications must be reduced to frameworks that can successfully exploit the cloud resources, such as the MapReduce framework. Several iterative and embarrassingly parallel algorithms were reduced to the MapReduce model and their performance was measured and analyzed. The analysis showed that Hadoop MapReduce has significant problems with iterative methods, while it suits embarrassingly parallel algorithms well. Scientific computing often uses iterative methods to solve large problems. Thus, for scientific computing on the cloud, this paper raises the necessity for better frameworks or optimizations for MapReduce.
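
    The friction between iterative solvers and MapReduce comes from each sweep being a separate job, with its startup and I/O overhead paid per iteration. A single Jacobi sweep written in map/reduce style illustrates the shape (a sketch with hypothetical names, not DOUG or Hadoop code):

```python
def jacobi_step(A, b, x):
    # map phase: one task per row i emits (i, updated value); in Hadoop,
    # each such sweep would be a separate job, which is where the
    # per-iteration overhead comes from
    n = len(x)
    mapped = [(i, (b[i] - sum(A[i][j] * x[j]
                              for j in range(n) if j != i)) / A[i][i])
              for i in range(n)]
    # reduce phase: reassemble the updated vector
    x_new = [0.0] * n
    for i, v in mapped:
        x_new[i] = v
    return x_new
```

    An embarrassingly parallel workload, by contrast, finishes in one such job, which matches the paper's observation about where MapReduce fits well.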

  7. CUB DI (Deionization) column control system

    International Nuclear Information System (INIS)

    Seino, K.C.

    1999-01-01

    For the old MR (Main Ring), deionization was done with two columns in CUB, using an ion exchange process. Typically 65 GPM of LCW flowed through a column, and the resistivity was raised from 3 Mohm-cm to over 12 Mohm-cm. After a few weeks, the columns lost their effectiveness and had to be regenerated in a process involving backwashing and adding hydrochloric acid and sodium hydroxide. For normal MR operations, LCW returned from the ring and passed through the two columns in parallel for deionization, although the system could have been operated satisfactorily with only one in use. A 3000 gallon reservoir (the Spheres) provided a reserve of LCW to allow for water leaks and expansions in the MR. During the MI (Main Injector) construction period, a third DI column was added to satisfy requirements for the MI. When the third column was added, the old regeneration controller was replaced with a new controller based on an Allen-Bradley PLC (i.e., an SLC-5/04). The PLC is widely used and well documented, and therefore may allow us to modify the regeneration programs in the future. In addition to the above regeneration controller, the old control panels (which were used to manipulate pumps and valves to supply LCW in Normal mode and to do Int. Recir. (Internal Recirculation) and Makeup) were replaced with a new control system based on Sixtrak Gateway and I/O modules. For simplicity, the new regeneration controller is referred to as the US Filter system and the new control system as the Fermilab system in this paper.

  8. ON THE ORIGIN OF THE HIGH COLUMN DENSITY TURNOVER IN THE H I COLUMN DENSITY DISTRIBUTION

    International Nuclear Information System (INIS)

    Erkal, Denis; Gnedin, Nickolay Y.; Kravtsov, Andrey V.

    2012-01-01

    We study the high-column-density regime of the H I column density distribution function and argue that there are two distinct features: a turnover at N_HI ≈ 10²¹ cm⁻², which is present at both z = 0 and z ≈ 3, and a lack of systems above N_HI ≈ 10²² cm⁻² at z = 0. Using observations of the column density distribution, we argue that the H I-H₂ transition does not cause the turnover at N_HI ≈ 10²¹ cm⁻² but can plausibly explain the turnover at N_HI ≳ 10²² cm⁻². We compute the H I column density distribution of individual galaxies in the THINGS sample and show that the turnover column density depends only weakly on metallicity. Furthermore, we show that the column density distribution of galaxies, corrected for inclination, is insensitive to the resolution of the H I map or to averaging in radial shells. Our results indicate that the similarity of H I column density distributions at z = 3 and 0 is due to the similarity of the maximum H I surface densities of high-z and low-z disks, set presumably by universal processes that shape the properties of the gaseous disks of galaxies. Using fully cosmological simulations, we explore other candidate physical mechanisms that could produce a turnover in the column density distribution. We show that while turbulence within giant molecular clouds cannot affect the damped Lyα column density distribution, stellar feedback can affect it significantly if the feedback is sufficiently effective in removing gas from the central 2-3 kpc of high-redshift galaxies. Finally, we argue that it is meaningful to compare column densities averaged over ∼kpc scales with those estimated from quasar spectra that probe sub-pc scales, due to the steep power spectrum of H I column density fluctuations observed in nearby galaxies.
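
    At heart, a column density distribution function of this kind is a histogram of log-column densities over a map or sample; a minimal sketch (names and binning convention ours, normalization omitted):

```python
import math

def column_density_distribution(columns, bins_per_dex=2):
    # Count column densities (cm^-2) in fixed-width bins of log10(N);
    # returned keys are the lower edges of the log10 bins.
    hist = {}
    for n in columns:
        k = math.floor(math.log10(n) * bins_per_dex)
        hist[k] = hist.get(k, 0) + 1
    return {k / bins_per_dex: v for k, v in hist.items()}
```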

  9. Language constructs for modular parallel programs

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.

    1996-03-01

    We describe programming language constructs that facilitate the application of modular design techniques in parallel programming. These constructs allow us to isolate resource management and processor scheduling decisions from the specification of individual modules, which can themselves encapsulate design decisions concerned with concurrency, communication, process mapping, and data distribution. This approach permits the development of libraries of reusable parallel program components and the reuse of these components in different contexts. In particular, alternative mapping strategies can be explored without modifying other aspects of program logic. We describe how these constructs are incorporated in two practical parallel programming languages, PCN and Fortran M. Compilers have been developed for both languages, allowing experimentation in substantial applications.

  10. Simulation Exploration through Immersive Parallel Planes: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny; Smith, Steve

    2016-03-01

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  11. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    Energy Technology Data Exchange (ETDEWEB)

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

    MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm, based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
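
    The solver family at the core of MADmap can be illustrated by a preconditioned conjugate gradient on a tiny dense symmetric positive-definite system (a toy stand-in with our own names; the real code never forms the matrix explicitly and applies it via FFTs and sparse pointing operations):

```python
def pcg(A, b, precond, tol=1e-10, max_iter=100):
    # Preconditioned conjugate gradient with a diagonal (Jacobi)
    # preconditioner: precond[i] approximates 1 / A[i][i].
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n))
                           for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with x = 0
    z = [ri * pi for ri, pi in zip(r, precond)]
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) < tol:
            break
        z = [ri * pi for ri, pi in zip(r, precond)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x
```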

  12. Mechanized Sephadex LH-20 multiple column chromatography as a prerequisite for automated multi-steroid radioimmunoassays

    International Nuclear Information System (INIS)

    Sippell, W.G.; Bidlingmaier, F.; Knorr, D.

    1978-01-01

    To establish a procedure for the simultaneous determination of all major corticosteroid hormones and their immediate biological precursors in the same plasma sample, two different mechanized methods for the simultaneous isolation of aldosterone (A), corticosterone (B), 11-deoxycorticosterone (DOC), progesterone (P), 17-hydroxyprogesterone (17-OHP), 11-deoxycortisol (S), cortisol (F) and cortisone (E) from the methylene chloride extracts of 0.1 to 2.0ml plasma samples have been developed. In method I, steroids are separated with methylene chloride:methanol=98:2 as solvent system on 60-cm Sephadex LH-20 columns, up to eight of which are eluted in parallel using a multi-channel peristaltic pump and individual flow-rate control (40ml/h) by capillary valves and micro-flowmeters. Method II, on the other hand, utilizes the same solvent system on ten 75-cm LH-20 columns which are eluted in reversed flow simultaneously by a ten-channel, double-piston pump that precisely maintains an elution flow rate of 40ml/h in every column. In both methods, eluate fractions of each of the isolated steroids are automatically pooled and collected from all parallel columns by one programmable linear fraction collector. As a result of the high reproducibility of the elution patterns, both between different parallel columns and between 30 to 40 consecutive elutions, mean recoveries of tritiated steroids including extraction are 60 to 84% after a single separation and still over 50% after an additional separation on 40-cm LH-20 columns, with coefficients of variation below 15% (method II). Thus, the eight steroids can be completely isolated from each of ten plasma extracts within 3 to 4 hours, yielding 80 samples readily prepared for subsequent quantitation by radioimmunoassay. (author)

  13. Map-Based Power-Split Strategy Design with Predictive Performance Optimization for Parallel Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jixiang Fan

    2015-09-01

    In this paper, a map-based optimal energy management strategy is proposed to improve the consumption economy of a plug-in parallel hybrid electric vehicle. The maps, which provide both the torque split between engine and motor and the gear shift, consider not only the current vehicle speed and power demand but also optimality based on the predicted trajectory of the vehicle dynamics. To seek this optimality, the equivalent consumption, which trades off fuel and electricity usage, is chosen as the cost function. Moreover, to reduce model errors in the optimization conducted in the discrete time domain, a variational integrator is employed to calculate the evolution of the vehicle dynamics. To evaluate the proposed energy management strategy, simulation results obtained on a professional GT-Suite simulator are presented, and a comparison with a real-time optimization method is given to show the advantage of the proposed off-line optimization approach.
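The equivalent-consumption idea behind such map design can be sketched as follows. The engine and motor models, the equivalence factor `s` and the demand grid are all toy assumptions for illustration, not the paper's calibrated vehicle maps; the point is only how an off-line optimal torque-split map is tabulated.

```python
import numpy as np

def fuel_rate(torque_engine):
    """Toy convex engine fuel model (illustrative units, not calibrated)."""
    return 0.004 * torque_engine + 0.00005 * torque_engine ** 2

def equivalent_cost(u, torque_demand, s=2.5):
    """Equivalent consumption: fuel use plus s-weighted electricity use."""
    te = u * torque_demand          # engine share of the demanded torque
    tm = (1 - u) * torque_demand    # motor share
    return fuel_rate(te) + s * 0.003 * tm

# Off-line map design: tabulate the best split u (engine fraction)
# over a grid of torque demands
demands = np.linspace(10, 200, 20)
grid = np.linspace(0.0, 1.0, 101)
split_map = {
    float(d): grid[np.argmin([equivalent_cost(u, d) for u in grid])]
    for d in demands
}
```

With these toy models the tabulated split is all-engine at low demand, with an interior optimum (around u ≈ 0.35 at a demand of 100) emerging as demand grows; the paper's maps additionally depend on vehicle speed and include the gear choice.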

  14. On-column reduction of catecholamine quinones in stainless steel columns during liquid chromatography.

    Science.gov (United States)

    Xu, R; Huang, X; Kramer, K J; Hawley, M D

    1995-10-10

    The chromatographic behavior of quinones derived from the oxidation of dopamine and N-acetyldopamine has been studied using liquid chromatography (LC) with both a diode array detector and an electrochemical detector that has parallel dual working electrodes. When stainless steel columns are used, an anodic peak for the oxidation of the catecholamine is observed at the same retention time as a cathodic peak for the reduction of the catecholamine quinone. In addition, the anodic peak exhibits a tail that extends to a second anodic peak for the catecholamine. The latter peak occurs at the normal retention time of the catecholamine. The origin of this phenomenon has been studied and metallic iron in the stainless steel components of the LC system has been found to reduce the quinones to their corresponding catecholamines. The simultaneous appearance of a cathodic peak for the reduction of catecholamine quinone and an anodic peak for the oxidation of the corresponding catecholamine occurs when metallic iron in the exit frit reduces some of the quinones as the latter exits the column. This phenomenon is designated as the "concurrent anodic-cathodic response." It is also observed for quinones of 3,4-dihydroxybenzoic acid and probably occurs with o- or p-quinones of other dihydroxyphenyl compounds. The use of nonferrous components in LC systems is recommended to eliminate possible on-column reduction of quinones.

  15. Streaming nested data parallelism on multicores

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2016-01-01

    The paradigm of nested data parallelism (NDP) allows a variety of semi-regular computation tasks to be mapped onto SIMD-style hardware, including GPUs and vector units. However, some care is needed to keep down space consumption in situations where the available parallelism may vastly exceed...

  16. Column-to-column packing variation of disposable pre-packed columns for protein chromatography.

    Science.gov (United States)

    Schweiger, Susanne; Hinterberger, Stephan; Jungbauer, Alois

    2017-12-08

    In the biopharmaceutical industry, pre-packed columns are the standard for process development, but they must be qualified before use in experimental studies to confirm the required performance of the packed bed. Column qualification is commonly done by pulse response experiments and depends highly on the experimental testing conditions. Additionally, the peak analysis method, the variation in the 3D packing structure of the bed, and the measurement precision of the workstation influence the outcome of qualification runs. While a full body of literature on these factors is available for HPLC columns, no comparable studies exist for preparative columns for protein chromatography. We quantified the influence of these parameters for commercially available pre-packed and self-packed columns of disposable and non-disposable design. Pulse response experiments were performed on 105 preparative chromatography columns with volumes of 0.2-20 ml. The analyte acetone was studied at six different superficial velocities (30, 60, 100, 150, 250 and 500 cm/h). The column-to-column packing variation between disposable pre-packed columns of different diameter-length combinations varied by 10-15%, which was acceptable for the intended use. The column-to-column variation cannot be explained by the packing density, but is interpreted as a difference in particle arrangement in the column. Since it was possible to determine differences in the column-to-column performance, we concluded that the columns were well-packed. The measurement precision of the chromatography workstation was independent of the column volume and was in a range of ±0.01 ml for the first peak moment and ±0.007 ml² for the second moment. The measurement precision must be considered for small columns in the range of 2 ml or less. The efficiency of disposable pre-packed columns was equal to or better than that of self-packed columns. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
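The first and second peak moments used for qualification above can be computed from a pulse-response curve by simple numerical integration. The Gaussian test peak below is synthetic; a real qualification run would use the recorded acetone trace.

```python
import numpy as np

def peak_moments(t, c):
    """First moment and second central moment of a pulse response c
    sampled on a uniform time (or volume) grid t; dt cancels in the ratios."""
    area = c.sum()
    m1 = (t * c).sum() / area                  # first moment: mean retention
    mu2 = ((t - m1) ** 2 * c).sum() / area     # second central moment: peak variance
    return m1, mu2

# Synthetic Gaussian peak centred at 5.0 with sigma 0.4
t = np.linspace(0.0, 10.0, 2001)
c = np.exp(-0.5 * ((t - 5.0) / 0.4) ** 2)
m1, mu2 = peak_moments(t, c)
plates = m1 ** 2 / mu2    # apparent plate count from the two moments
```

The quoted workstation precisions (±0.01 ml on the first moment, ±0.007 ml² on the second) are uncertainties on exactly these two quantities.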

  17. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the partial efficiency of the algorithm is studied.

  18. Spatial frequency-dependent feedback of visual cortical area 21a modulating functional orientation column maps in areas 17 and 18 of the cat.

    Science.gov (United States)

    Huang, Luoxiu; Chen, Xin; Shou, Tiande

    2004-02-20

    The feedback effect of activity of area 21a on orientation maps of areas 17 and 18 was investigated in cats using intrinsic signal optical imaging. A spatial frequency-dependent decrease in response amplitude of orientation maps to grating stimuli was observed in areas 17 and 18 when area 21a was inactivated by local injection of GABA, or by a lesion induced by liquid nitrogen freezing. The decrease in response amplitude of orientation maps of areas 17 and 18 after the area 21a inactivation paralleled the normal response without the inactivation. Application in area 21a of bicuculline, a GABA-A receptor antagonist, caused an increase in response amplitude of orientation maps of area 17. The results indicate a positive feedback from the higher-order visual cortical area 21a to lower-order areas underlying a spatial frequency-dependent mechanism.

  19. Analyzing thematic maps and mapping for accuracy

    Science.gov (United States)

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification.
The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
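The contingency-table bookkeeping described above can be sketched numerically. The 3×3 counts below are invented for illustration; rows are the interpretation and columns the verification, as in the text.

```python
import numpy as np

# Hypothetical classification error matrix: rows = interpretation (map),
# columns = verification (reference); the diagonal holds correct classifications.
error_matrix = np.array([
    [45,  3,  2],
    [ 4, 38,  5],
    [ 1,  4, 48],
])

total = error_matrix.sum()
correct = np.trace(error_matrix)
overall_accuracy = correct / total          # 131 correct out of 150 samples

# Errors of commission: off-diagonal share of each row (map category)
commission = 1 - np.diag(error_matrix) / error_matrix.sum(axis=1)
# Errors of omission: off-diagonal share of each column (reference category)
omission = 1 - np.diag(error_matrix) / error_matrix.sum(axis=0)
```

Binomial confidence limits on `overall_accuracy` and per-category rates would then complete the accuracy estimation component described in the abstract.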

  20. Commissioning and operation of distillation column at Madras Atomic Power Station (Paper No. 1.10)

    International Nuclear Information System (INIS)

    Neelakrishnan, G.; Subramanian, N.

    1992-01-01

    In Madras Atomic Power Station (MAPS), an upgrading plant based on vacuum distillation was constructed to upgrade the downgraded heavy water collected in vapor recovery dryers. There are two distillation columns, each with a capacity of 77.5 tonnes per annum of reactor-grade heavy water at an average feed concentration of 30% IP. The performance of the distillation columns has been very good: columns I and II have achieved operating factors of 92% and 90%, respectively. The commissioning activities and subsequent improvements carried out in the distillation columns are described. (author)

  1. EX0801 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX0801: Mapping Operations Shakedown...

  2. EX1701 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1701: Kingman/Palmyra, Jarvis (Mapping)...

  3. Parallel processing for fluid dynamics applications

    International Nuclear Information System (INIS)

    Johnson, G.M.

    1989-01-01

    The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several examples are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices.

  4. EX1403 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1403: East Coast Mapping and Exploration...

  5. EX0905 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX0905: Mapping Field Trials II Mendocino...

  6. Automation of column-based radiochemical separations. A comparison of fluidic, robotic, and hybrid architectures

    Energy Technology Data Exchange (ETDEWEB)

    Grate, J.W.; O'Hara, M.J.; Farawila, A.F.; Ozanich, R.M.; Owsley, S.L. [Pacific Northwest National Laboratory, Richland, WA (United States)

    2011-07-01

    Two automated systems have been developed to perform column-based radiochemical separation procedures. These new systems are compared with past fluidic column separation architectures, with emphasis on using disposable components so that no sample contacts any surface that any other sample has contacted, and setting up samples and columns in parallel for subsequent automated processing. In the first new approach, a general purpose liquid handling robot has been modified and programmed to perform anion exchange separations using 2 mL bed columns in 6 mL plastic disposable column bodies. In the second new approach, a fluidic system has been developed to deliver clean reagents through disposable manual valves to six disposable columns, with a mechanized fraction collector that positions one of four rows of six vials below the columns. The samples are delivered to each column via a manual 3-port disposable valve from disposable syringes. This second approach, a hybrid of fluidic and mechanized components, is a simpler more efficient approach for performing anion exchange procedures for the recovery and purification of plutonium from samples. The automation architectures described can also be adapted to column-based extraction chromatography separations. (orig.)

  7. Parallel keyed hash function construction based on chaotic maps

    International Nuclear Information System (INIS)

    Xiao Di; Liao Xiaofeng; Deng Shaojiang

    2008-01-01

    Recently, a variety of chaos-based hash functions have been proposed. Nevertheless, none of them works efficiently in a parallel computing environment. In this Letter, an algorithm for parallel keyed hash function construction is proposed, whose structure can ensure the uniform sensitivity of the hash value to the message. By means of the mechanisms of both changeable parameters and self-synchronization, the keystream establishes a close relation with the algorithm key, the content, and the order of each message block. The entire message is modulated into the chaotic iteration orbit, and the coarse-graining trajectory is extracted as the hash value. Theoretical analysis and computer simulation indicate that the proposed algorithm can satisfy the performance requirements of a hash function. It is simple, efficient, practicable, and reliable. These properties make it a good choice for hashing on parallel computing platforms.
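The block-parallel structure such schemes exploit can be illustrated with a toy keyed hash built on the logistic map. Everything below (the map parameter range, block size, seeding and combining rules) is an invented illustration of the structure, not the Letter's algorithm, and it is in no way cryptographically secure.

```python
def logistic_bits(x, r, n=64):
    """Iterate the logistic map x <- r*x*(1-x) and coarse-grain the orbit
    to one bit per step, yielding an n-bit integer."""
    bits = 0
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits = (bits << 1) | (1 if x > 0.5 else 0)
    return bits

def chaotic_hash(message: bytes, key: float) -> int:
    """Toy parallel-structured keyed hash (key in [0, 1]). Each 8-byte block
    is modulated into its own chaotic orbit; the block index enters the seed
    (order sensitivity) and the key perturbs the map parameter."""
    blocks = [message[i:i + 8] for i in range(0, len(message), 8)] or [b""]
    digest = 0
    for idx, block in enumerate(blocks):   # iterations are independent -> parallelizable
        x0 = (sum(block) + idx + 1) / (8 * 255 + len(blocks) + 2)  # seed in (0, 1)
        r = 3.99 + 0.009 * key             # keep r inside the chaotic regime
        digest ^= (logistic_bits(x0, r) * (idx + 1)) % (1 << 64)
    return digest
```

Because each block's orbit is computed independently and only folded together at the end, the per-block work can be distributed across cores, which is the property the Letter's construction is designed around.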

  8. Parallel Tracking and Mapping for Controlling VTOL Airframe

    Directory of Open Access Journals (Sweden)

    Michal Jama

    2011-01-01

    This work presents a vision-based system for navigation of a vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV). It is a monocular, simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using a video stream from a single camera. This differs from past SLAM solutions on UAVs, which use sensors that measure depth, such as LIDAR, stereoscopic cameras or depth cameras. The solution presented in this paper extends and significantly modifies a recent open-source algorithm that solves the SLAM problem using an approach fundamentally different from the traditional one. The proposed modifications provide the position measurements necessary for the navigation solution on a UAV. The main contributions of this work include: (1) extension of the map-building algorithm to enable it to be used realistically while controlling a UAV and simultaneously building the map; (2) improved performance of the SLAM algorithm for lower camera frame rates; and (3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible.

  9. Further optimization of a parallel double-effect organosilicon distillation scheme through exergy analysis

    International Nuclear Information System (INIS)

    Sun, Jinsheng; Dai, Leilei; Shi, Ming; Gao, Hong; Cao, Xijia; Liu, Guangxin

    2014-01-01

    In our previous work, a significant improvement in organosilicon monomer distillation using parallel double-effect heat integration between a heavies removal column and six other columns, as well as heat integration between methyltrichlorosilane and dimethylchlorosilane columns, reduced the total exergy loss of the currently running counterpart by 40.41%. Further research regarding this optimized scheme demonstrated that it was necessary to reduce the higher operating pressure of the methyltrichlorosilane column, which is required for heat integration between the methyltrichlorosilane and dimethylchlorosilane columns. Therefore, in this contribution, a challenger scheme is presented with heat pumps introduced separately from the originally heat-coupled methyltrichlorosilane and dimethylchlorosilane columns in the above-mentioned optimized scheme, which is the prototype for this work. Both schemes are simulated using the same purity requirements used in running industrial units. The thermodynamic properties from the simulation are used to calculate the energy consumption and exergy loss of the two schemes. The results show that the heat pump option further reduces the flowsheet energy consumption and exergy loss by 27.35% and 10.98% relative to the prototype scheme. These results indicate that the heat pumps are superior to heat integration in the context of energy-savings during organosilicon monomer distillation. - Highlights: • Combine the paralleled double-effect and heat pump distillation to organosilicon distillation. • Compare the double-effect with the heat pump in saving energy. • Further cut down the flowsheet energy consumption and exergy loss by 27.35% and 10.98% respectively

  10. The dorsal tectal longitudinal column (TLCd): a second longitudinal column in the paramedian region of the midbrain tectum.

    Science.gov (United States)

    Aparicio, M-Auxiliadora; Saldaña, Enrique

    2014-03-01

    The tectal longitudinal column (TLC) is a longitudinally oriented, long and narrow nucleus that spans the paramedian region of the midbrain tectum of a large variety of mammals (Saldaña et al. in J Neurosci 27:13108-13116, 2007). Recent analysis of the organization of this region revealed another novel nucleus located immediately dorsal, and parallel, to the TLC. Because the name "tectal longitudinal column" also seems appropriate for this novel nucleus, we suggest the TLC described in 2007 be renamed the "ventral tectal longitudinal column (TLCv)", and the newly discovered nucleus termed the "dorsal tectal longitudinal column (TLCd)". This work represents the first characterization of the rat TLCd. A constellation of anatomical techniques was used to demonstrate that the TLCd differs from its surrounding structures (TLCv and superior colliculus) cytoarchitecturally, myeloarchitecturally, neurochemically and hodologically. The distinct expression of vesicular amino acid transporters suggests that TLCd neurons are GABAergic. The TLCd receives major projections from various areas of the cerebral cortex (secondary visual mediomedial area, and granular and dysgranular retrosplenial cortices) and from the medial pretectal nucleus. It densely innervates the ipsilateral lateral posterior and laterodorsal nuclei of the thalamus. Thus, the TLCd is connected with vision-related neural centers. The TLCd may be unique as it constitutes the only known nucleus made of GABAergic neurons dedicated to providing massive inhibition to higher order thalamic nuclei of a specific sensory modality.

  11. Probabilistic global maps of the CO2 column at daily and monthly scales from sparse satellite measurements

    Science.gov (United States)

    Chevallier, Frédéric; Broquet, Grégoire; Pierangelo, Clémence; Crisp, David

    2017-07-01

    The column-average dry air-mole fraction of carbon dioxide in the atmosphere (XCO2) is measured by scattered satellite measurements like those from the Orbiting Carbon Observatory (OCO-2). We show that global continuous maps of XCO2 (corresponding to level 3 of the satellite data) at daily or coarser temporal resolution can be inferred from these data with a Kalman filter built on a model of persistence. Our application of this approach on 2 years of OCO-2 retrievals indicates that the filter provides better information than a climatology of XCO2 at both daily and monthly scales. Provided that the assigned observation uncertainty statistics are tuned in each grid cell of the XCO2 maps from an objective method (based on consistency diagnostics), the errors predicted by the filter at daily and monthly scales represent the true error statistics reasonably well, except for a bias in the high latitudes of the winter hemisphere and a lack of resolution (i.e., a too small discrimination skill) of the predicted error standard deviations. Due to the sparse satellite sampling, the broad-scale patterns of XCO2 described by the filter seem to lag behind the real signals by a few weeks. Finally, the filter offers interesting insights into the quality of the retrievals, both in terms of random and systematic errors.
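The persistence-model Kalman filter at the heart of this approach can be sketched in scalar form for a single grid cell. The process noise `q`, the initial state and the pseudo-observations below are invented for illustration; the real system runs one such filter per cell of the XCO2 map, with tuned observation uncertainties.

```python
import numpy as np

def persistence_kalman(obs, obs_var, q=0.5, x0=400.0, p0=100.0):
    """Scalar Kalman filter with a persistence (random-walk) forecast model:
    the forecast equals the previous analysis, with variance inflated by q.
    np.nan in obs marks days without a satellite sounding."""
    x, p = x0, p0
    analyses, variances = [], []
    for y, r in zip(obs, obs_var):
        p = p + q                    # forecast step: persistence + process noise
        if not np.isnan(y):
            k = p / (p + r)          # Kalman gain
            x = x + k * (y - x)      # analysis pulled toward the observation
            p = (1 - k) * p
        analyses.append(x)
        variances.append(p)
    return np.array(analyses), np.array(variances)

# Sparse pseudo-XCO2 series (ppm): soundings on days 0, 3 and 5 only
obs = np.array([401.0, np.nan, np.nan, 402.5, np.nan, 403.0])
xa, pa = persistence_kalman(obs, obs_var=np.full(6, 1.0))
```

Between soundings the analysis is simply carried forward while its variance grows, which is exactly why the filter's broad-scale patterns can lag the real signals when sampling is sparse.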

  12. Row-column visibility graph approach to two-dimensional landscapes

    International Nuclear Information System (INIS)

    Xiao Qin; Pan Xue; Li Xin-Li; Stephen Mutua; Yang Hui-Jie; Jiang Yan; Wang Jian-Yong; Zhang Qing-Jun

    2014-01-01

    A new concept, called the row-column visibility graph, is proposed to map two-dimensional landscapes to complex networks. A cluster coverage is introduced to describe the extensive property of node clusters on a Euclidean lattice. Graphs mapped from fractals generated with the probability redistribution model behave scale-free. They have pattern-induced hierarchical organizations and comparatively much more extensive structures. The scale-free exponent has a negative correlation with the Hurst exponent; however, there is no deterministic relation between them. Graphs for fractals generated with the midpoint displacement model are exponential networks. When the Hurst exponent is large enough (e.g., H > 0.5), the degree distribution decays much more slowly, the average coverage becomes significantly large, and the initially hierarchical structure at H < 0.5 is destroyed completely. Hence, the row-column visibility graph can be used to detect the pattern-related new characteristics of two-dimensional landscapes.
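One plausible reading of the row-column construction (apply the 1D natural visibility criterion along every row and every column, then take the union of the edges) can be sketched as follows. The exact construction in the paper may differ, and the 3×3 landscape is an invented example.

```python
import numpy as np

def visible(h, i, j):
    """Natural visibility criterion between samples i < j of a 1D profile h:
    every intermediate sample must lie below the line joining h[i] and h[j]."""
    return all(
        h[k] < h[j] + (h[i] - h[j]) * (j - k) / (j - i)
        for k in range(i + 1, j)
    )

def line_edges(h, nodes):
    """Visibility edges of one row or column, labelled with 2D node ids."""
    n = len(h)
    return {
        (nodes[i], nodes[j])
        for i in range(n) for j in range(i + 1, n)
        if visible(h, i, j)
    }

def row_column_visibility_graph(z):
    """Union of visibility edges along every row and column of landscape z."""
    edges = set()
    rows, cols = z.shape
    for r in range(rows):
        edges |= line_edges(z[r, :], [(r, c) for c in range(cols)])
    for c in range(cols):
        edges |= line_edges(z[:, c], [(r, c) for r in range(rows)])
    return edges

# Invented 3x3 landscape: the central peak blocks visibility along row 1 and column 1
z = np.array([[3.0, 1.0, 2.0],
              [0.5, 4.0, 0.5],
              [2.0, 1.0, 3.0]])
edges = row_column_visibility_graph(z)
```

The resulting edge set is the network whose degree distribution and cluster coverage the abstract analyses.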

  13. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  14. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  15. 1979-1999 satellite total ozone column measurements over West Africa

    Directory of Open Access Journals (Sweden)

    P. Di Carlo

    2000-06-01

    Total Ozone Mapping Spectrometer (TOMS) instruments have been flown on NASA/GSFC satellites for over 20 years. They provide near real-time ozone data for atmospheric science research. As part of preliminary efforts aimed at developing a lidar station in Nigeria for monitoring atmospheric ozone and aerosol levels, the monthly mean TOMS total column ozone measurements between 1979 and 1999 have been analysed. The trends of the total column ozone showed spatial and temporal variation with signs of the Quasi-Biennial Oscillation (QBO) during the 20-year study period. The TOMS total ozone column over Nigeria (4-14°N) is within the range of 230-280 Dobson Units, which is consistent with total ozone column data measured since April 1993 with a Dobson spectrophotometer at Lagos (3°21′E, 6°33′N), Nigeria.

  16. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    Directory of Open Access Journals (Sweden)

    Min-Kyu Kim

    2015-12-01

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
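The 1/sqrt(N) noise scaling the paper exploits can be checked with a quick Monte Carlo of repeated reads. The noise level, sample count and pixel count below are arbitrary illustrations, not the sensor's parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
true_level = 0.8       # nominal pixel output (arbitrary units)
sigma = 0.05           # per-conversion random noise
n_samples = 16         # repeated A/D conversions per pixel
n_pixels = 100_000

single = true_level + sigma * rng.standard_normal(n_pixels)
multi = true_level + sigma * rng.standard_normal((n_pixels, n_samples))
averaged = multi.mean(axis=1)      # multiple sampling: average the N reads

noise_single = single.std()
noise_multi = averaged.std()
reduction = noise_single / noise_multi   # expected to approach sqrt(16) = 4
```

The measured 848.3 μV to 270.4 μV improvement corresponds to a factor of about 3.1, somewhat below the ideal sqrt(N) because fixed noise sources do not average down.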

  17. Decreasing Data Analytics Time: Hybrid Architecture MapReduce-Massive Parallel Processing for a Smart Grid

    Directory of Open Access Journals (Sweden)

    Abdeslam Mehenni

    2017-03-01

    As our populations grow in a world of limited resources, enterprises seek ways to lighten our load on the planet. The idea of modifying consumer behavior appears as a foundation for smart grids. Enterprises demonstrate the value available from deep analysis of electricity consumption histories, consumers' messages, outage alerts, etc., mining massive structured and unstructured data. In a nutshell, smart grids result in a flood of data that needs to be analyzed, to better adjust to demand and give customers more ability to delve into their power consumption. Simply put, smart grids will increasingly have a flexible data warehouse attached to them. The key driver for the adoption of data management strategies is clearly the need to handle and analyze the large amounts of information utilities are now faced with. New approaches to data integration are gaining momentum; Hadoop is in fact now being used by utilities to help manage the huge growth in data whilst maintaining coherence of the data warehouse. In this paper we define a new Meter Data Management System (MDMS) architecture repository that differs from three leading MDMSs in that we use the MapReduce programming model for ETL and a parallel DBMS for query statements (massively parallel processing, MPP).

  18. Novel design for centrifugal counter-current chromatography: VI. Ellipsoid column.

    Science.gov (United States)

    Gu, Dongyu; Yang, Yi; Xin, Xuelei; Aisa, Haji Akber; Ito, Yoichiro

    2015-01-01

    A novel ellipsoid column was designed for centrifugal counter-current chromatography. Performance of the ellipsoid column with a capacity of 3.4 mL was examined with three different solvent systems composed of 1-butanol-acetic acid-water (4:1:5, v/v) (BAW), hexane-ethyl acetate-methanol-0.1 M HCl (1:1:1:1, v/v) (HEMH), and 12.5% (w/w) PEG1000 and 12.5% (w/w) dibasic potassium phosphate in water (PEG-DPP), each with suitable test samples. In dipeptide separation with the BAW system, both stationary phase retention (Sf) and peak resolution (Rs) of the ellipsoid column were much higher at 0° column angle (column axis parallel to the centrifugal force) than at 90° column angle (column axis perpendicular to the centrifugal force), where elution with the lower phase at a low flow rate produced the best separation, yielding an Rs of 2.02 with 27.8% Sf at a flow rate of 0.07 mL/min. In the DNP-amino acid separation with the HEMH system, the best results were obtained at a flow rate of 0.05 mL/min with 31.6% Sf, yielding high Rs values of 2.16 between the DNP-DL-glu and DNP-β-ala peaks and 1.81 between the DNP-β-ala and DNP-L-ala peaks. In protein separation with the PEG-DPP system, lysozyme and myoglobin were resolved at an Rs of 1.08 at a flow rate of 0.03 mL/min with 38.9% Sf. Most of these Rs values exceed those obtained from the figure-8 column under similar experimental conditions previously reported.

  19. The relation between the column density structures and the magnetic field orientation in the Vela C molecular complex

    Science.gov (United States)

    Soler, J. D.; Ade, P. A. R.; Angilè, F. E.; Ashton, P.; Benton, S. J.; Devlin, M. J.; Dober, B.; Fissel, L. M.; Fukui, Y.; Galitzki, N.; Gandilo, N. N.; Hennebelle, P.; Klein, J.; Li, Z.-Y.; Korotkov, A. L.; Martin, P. G.; Matthews, T. G.; Moncelsi, L.; Netterfield, C. B.; Novak, G.; Pascale, E.; Poidevin, F.; Santos, F. P.; Savini, G.; Scott, D.; Shariff, J. A.; Thomas, N. E.; Tucker, C. E.; Tucker, G. S.; Ward-Thompson, D.

    2017-07-01

    We statistically evaluated the relative orientation between gas column density structures, inferred from Herschel submillimetre observations, and the magnetic field projected on the plane of sky, inferred from polarized thermal emission of Galactic dust observed by the Balloon-borne Large-Aperture Submillimetre Telescope for Polarimetry (BLASTPol) at 250, 350, and 500 μm, towards the Vela C molecular complex. First, we find very good agreement between the polarization orientations in the three wavelength bands, suggesting that, at the considered common angular resolution of 3.0 arcmin, which corresponds to a physical scale of approximately 0.61 pc, the inferred magnetic field orientation is not significantly affected by temperature or dust grain alignment effects. Second, we find that the relative orientation between gas column density structures and the magnetic field changes progressively with increasing gas column density, from mostly parallel or having no preferred orientation at low column densities to mostly perpendicular at the highest column densities. This observation is in agreement with previous studies by the Planck collaboration towards more nearby molecular clouds. Finally, we find a correspondence between (a) the trends in relative orientation between the column density structures and the projected magnetic field; and (b) the shape of the column density probability distribution functions (PDFs). In the sub-regions of Vela C dominated by one clear filamentary structure, or "ridges", where the high-column density tails of the PDFs are flatter, we find a sharp transition from preferentially parallel or having no preferred relative orientation at low column densities to preferentially perpendicular at the highest column densities. In the sub-regions of Vela C dominated by several filamentary structures with multiple orientations, or "nests", where the maximum values of the column density are smaller than in the ridge-like sub-regions and the high-column density
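The relative-orientation statistic underlying this analysis can be sketched compactly. The helper below is an illustrative reconstruction, not the authors' code: it computes the angle between a column-density structure (which runs perpendicular to the local density gradient) and the projected field orientation, folded into [0°, 90°] because both orientations are headless:

```python
def relative_angle_deg(grad_angle_deg, field_angle_deg):
    """Acute angle between an iso-column-density structure and the
    plane-of-sky magnetic field. The structure is taken perpendicular
    to the density gradient; orientations are headless, so the result
    is folded into [0, 90] degrees."""
    structure = grad_angle_deg + 90.0  # structures run along contours
    d = abs((structure - field_angle_deg) % 180.0)
    return min(d, 180.0 - d)

# Gradient perpendicular to the field: structure parallel to the field.
print(relative_angle_deg(90.0, 0.0))
# Gradient parallel to the field: structure perpendicular to the field.
print(relative_angle_deg(0.0, 0.0))
```

A histogram of this angle over all map pixels in a column-density bin is what shifts from peaking near 0° (parallel) at low column densities to peaking near 90° (perpendicular) at the highest ones.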

  20. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    Science.gov (United States)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

    Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing brings parallel computing into people's lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
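The OpenMP-style shared-memory model the paper discusses can be imitated in miniature with threads: statically decompose the data, let each worker reduce its own chunk, then combine the partial results. This sketch is illustrative only (Python threads standing in for OpenMP threads):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker reduces its own chunk, analogous to an OpenMP
    # parallel-for with a reduction clause.
    return sum(x * x for x in chunk)

data = list(range(1, 10_001))
chunks = [data[i::4] for i in range(4)]  # static 4-way decomposition

with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = sum(pool.map(partial_sum, chunks))

serial = sum(x * x for x in data)
print(parallel == serial)
```

MPI would distribute the chunks across address spaces instead of threads, and MapReduce would express the same computation as a map (square) followed by a reduce (sum) over a distributed file system.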

  1. Comparison of the performance of full scale pulsed columns vs. mixer-settlers for uranium solvent extraction

    International Nuclear Information System (INIS)

    Movsowitz, R.L.; Kleinberger, R.; Buchalter, E.M.; Grinbaum, B.

    2000-01-01

    A rare opportunity arose to compare the performance of Bateman Pulsed Columns (BPC) vs. mixer-settlers at an industrial site, over a long period, when the Uranium Solvent Extraction Plant of WMC at Olympic Dam, South Australia was upgraded. The original plant was operated for years with two trains of 2-stage mixer-settler batteries for the extraction of uranium. When the company decided to increase the yield of the plant, the existing two trains of mixer-settlers for uranium extraction were arranged in series, giving one 4-stage battery. In parallel, two Bateman Pulsed Columns, of the disc-and-doughnut type, were installed to compare the performance of both types of equipment over an extended period. The plant has been operating in parallel for three years and the results show that the performance of the columns is excellent: the extraction yield is similar to that of the 4 mixer-settlers in series (about 98%), the entrainment of solvent is lower, there are fewer mechanical failures, fewer problems with crud, smaller solvent losses, and the operation is simpler. The results convinced WMC to install an additional 10 BPCs for the expansion of their uranium plant. These columns were successfully commissioned in early 1999. This paper includes a quantitative comparison of both types of equipment. (author)

  2. Exhaust properties of centre-column-limited plasmas on MAST

    International Nuclear Information System (INIS)

    Maddison, G.P.; Akers, R.J.; Brickley, C.; Gryaznevich, M.P.; Lott, F.C.; Patel, A.; Sykes, A.; Turner, A.; Valovic, M.

    2007-01-01

    The lowest aspect ratio possible in a spherical tokamak is defined by limiting the plasma on its centre column, which might therefore maximize many physics benefits of this fusion approach. A key issue for such discharges is whether loads exhausted onto the small surface area of the column remain acceptable. A first series of centre-column-limited pulses has been examined on MAST using fast infra-red thermography to infer incident power densities as neutral-beam heating was scanned from 0 to 2.5 MW. Simple mapping shows that efflux distributions on the column armour are governed mostly by magnetic geometry, which moreover spreads them advantageously over almost the whole vertical length. Hence steady peak power densities between sawteeth remained low, comparable with the target strike-point value in a reference diverted plasma at lower power. Plasma purity and normalized thermal energy confinement through the centre-column-limited (CCL) series were also similar to the properties of MAST diverted cases. A major bonus of CCL geometry is a propensity for exhaust to penetrate through its inner scrape-off layer connecting to the column into an expanding outer plume, which forms a 'natural divertor'. The effectiveness of this process may even increase with plasma heating, owing to rising Shafranov shift and/or toroidal rotation. A larger CCL device could potentially offer a simpler, more economic next-step design.

  3. Parallel segmented outlet flow high performance liquid chromatography with multiplexed detection

    International Nuclear Information System (INIS)

    Camenzuli, Michelle; Terry, Jessica M.; Shalliker, R. Andrew; Conlan, Xavier A.; Barnett, Neil W.; Francis, Paul S.

    2013-01-01

    Graphical abstract: -- Highlights: •Multiplexed detection for liquid chromatography. •‘Parallel segmented outlet flow’ distributes inner and outer portions of the analyte zone. •Three detectors were used simultaneously for the determination of opiate alkaloids. -- Abstract: We describe a new approach to multiplex detection for HPLC, exploiting parallel segmented outlet flow – a new column technology that provides pressure-regulated control of eluate flow through multiple outlet channels, which minimises the additional dead volume associated with conventional post-column flow splitting. Using three detectors: one UV-absorbance and two chemiluminescence systems (tris(2,2′-bipyridine)ruthenium(III) and permanganate), we examine the relative responses for six opium poppy (Papaver somniferum) alkaloids under conventional and multiplexed conditions, where approximately 30% of the eluate was distributed to each detector and the remaining solution directed to a collection vessel. The parallel segmented outlet flow mode of operation offers advantages in terms of solvent consumption, waste generation, total analysis time and solute band volume when applying multiple detectors to HPLC, but the manner in which each detection system is influenced by changes in solute concentration and solution flow rates must be carefully considered

  4. Parallel segmented outlet flow high performance liquid chromatography with multiplexed detection

    Energy Technology Data Exchange (ETDEWEB)

    Camenzuli, Michelle [Australian Centre for Research on Separation Science (ACROSS), School of Science and Health, University of Western Sydney (Parramatta), Sydney, NSW (Australia); Terry, Jessica M. [Centre for Chemistry and Biotechnology, School of Life and Environmental Sciences, Deakin University, Geelong, Victoria 3216 (Australia); Shalliker, R. Andrew, E-mail: r.shalliker@uws.edu.au [Australian Centre for Research on Separation Science (ACROSS), School of Science and Health, University of Western Sydney (Parramatta), Sydney, NSW (Australia); Conlan, Xavier A.; Barnett, Neil W. [Centre for Chemistry and Biotechnology, School of Life and Environmental Sciences, Deakin University, Geelong, Victoria 3216 (Australia); Francis, Paul S., E-mail: paul.francis@deakin.edu.au [Centre for Chemistry and Biotechnology, School of Life and Environmental Sciences, Deakin University, Geelong, Victoria 3216 (Australia)

    2013-11-25

    Graphical abstract: -- Highlights: •Multiplexed detection for liquid chromatography. •‘Parallel segmented outlet flow’ distributes inner and outer portions of the analyte zone. •Three detectors were used simultaneously for the determination of opiate alkaloids. -- Abstract: We describe a new approach to multiplex detection for HPLC, exploiting parallel segmented outlet flow – a new column technology that provides pressure-regulated control of eluate flow through multiple outlet channels, which minimises the additional dead volume associated with conventional post-column flow splitting. Using three detectors: one UV-absorbance and two chemiluminescence systems (tris(2,2′-bipyridine)ruthenium(III) and permanganate), we examine the relative responses for six opium poppy (Papaver somniferum) alkaloids under conventional and multiplexed conditions, where approximately 30% of the eluate was distributed to each detector and the remaining solution directed to a collection vessel. The parallel segmented outlet flow mode of operation offers advantages in terms of solvent consumption, waste generation, total analysis time and solute band volume when applying multiple detectors to HPLC, but the manner in which each detection system is influenced by changes in solute concentration and solution flow rates must be carefully considered.

  5. EX1103L1 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1103: Exploration and Mapping, Galapagos...

  6. Establishing column batch repeatability according to Quality by Design (QbD) principles using modeling software.

    Science.gov (United States)

    Rácz, Norbert; Kormány, Róbert; Fekete, Jenő; Molnár, Imre

    2015-04-10

    Column technology needs further improvement even today. To get information on batch-to-batch repeatability, intelligent modeling software was applied. Twelve columns from the same production process, but from different batches, were compared in this work. In this paper, the retention parameters of these columns with real-life sample solutes were studied. The following parameters were selected for measurement: gradient time, temperature and pH. Based on the calculated results, the batch-to-batch repeatability of BEH columns was evaluated. Two parallel measurements on two columns from the same batch were performed to obtain information about the quality of packing. Calculating the average of individual working points at the highest critical resolution (Rs,crit), it was found that the robustness, calculated with a newly released robustness module, had a success rate >98% among the predicted 3^6 = 729 experiments for all 12 columns. With the help of retention modeling, all substances could be separated independently of the batch and/or packing, using the same conditions, with high robustness of the experiments. Copyright © 2015 Elsevier B.V. All rights reserved.
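The 3^6 = 729 figure is the full three-level factorial over the six method parameters. A hypothetical sketch of counting a robustness success rate over such a design (the pass/fail criterion below is invented, standing in for the actual Rs,crit check performed by the modeling software):

```python
from itertools import product

# Six method parameters, each at three levels (low / setpoint / high).
# The factor count follows the paper; the pass criterion is illustrative.
levels = (-1, 0, +1)
design = list(product(levels, repeat=6))

def run_passes(point):
    # Hypothetical stand-in for "Rs,crit stays above its limit":
    # fail only when every parameter sits at an extreme simultaneously.
    return any(v == 0 for v in point)

success_rate = 100.0 * sum(map(run_passes, design)) / len(design)
print(len(design), f"{success_rate:.1f}%")
```

Enumerating the design this way makes clear why a >98% success rate across all 729 predicted runs is a strong robustness claim: only a handful of extreme-corner combinations are allowed to fail.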

  7. EX0909L4 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX0909L4: Mapping Field Trials -...

  8. EX1502L1 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1502L1: Caribbean Exploration (Mapping)...

  9. EX1502L2 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1502L2: Caribbean Exploration (Mapping)...

  10. A New, Large-scale Map of Interstellar Reddening Derived from H I Emission

    Science.gov (United States)

    Lenz, Daniel; Hensley, Brandon S.; Doré, Olivier

    2017-09-01

    We present a new map of interstellar reddening, covering the 39% of the sky with low H I column densities (N_HI), benchmarked against the map of Peek and Graves based on observed reddening toward passive galaxies. We therefore argue that our H I-based map provides the most accurate interstellar reddening estimates in the low-column-density regime to date. Our reddening map is made publicly available at doi.org/10.7910/DVN/AFJNWJ.

  11. EX1503L1 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1503L1: Tropical Exploration (Mapping I)...

  12. EX0909L3 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX0909L3: Mapping Field Trials - Hawaiian...

  13. EX0909L2 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX0909L2: Mapping Field Trials - Necker...

  14. Columnar discharge mode between parallel dielectric barrier electrodes in atmospheric pressure helium

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Yanpeng; Zheng, Bin; Liu, Yaoge [School of Electric Power, South China University of Technology, Guangzhou 510640 (China)

    2014-01-15

    Using a fast-gated intensified charge-coupled device, end- and side-view photographs were taken of columnar discharge between parallel dielectric barrier electrodes in atmospheric pressure helium. Three-dimensional images generated from the end-view photographs show that the number of discharge columns increased, whereas the diameter of each column decreased, as the applied voltage was increased. Side-view photographs indicate that the columnar discharges exhibited a mode transition from Townsend to glow discharge, generated by the same discharge physics as atmospheric pressure glow discharge.

  15. Automatic parallelization of while-Loops using speculative execution

    International Nuclear Information System (INIS)

    Collard, J.F.

    1995-01-01

    Automatic parallelization of imperative sequential programs has focused on nests of for-loops. The most recent techniques consist of finding an affine mapping with respect to the loop indices to simultaneously capture the temporal and spatial properties of the parallelized program. Such a mapping is usually called a "space-time transformation." This work describes an extension of these techniques to while-loops using speculative execution. We show that space-time transformations are a good framework for subsuming previous restructuring techniques for while-loops, such as pipelining. Moreover, we show that these transformations can be derived and applied automatically.
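The speculation idea can be illustrated on a toy while-loop whose exit depends on a running total: compute iterations eagerly (work that parallel workers could perform ahead of the exit test), then commit only those before the true exit and squash the rest. A hedged sketch, not the paper's formalism:

```python
from itertools import accumulate

def find_exit(xs, limit):
    """Sequential while-loop: consume xs while the running total stays
    within limit; return the index at which the loop exits."""
    total, i = 0, 0
    while i < len(xs) and total + xs[i] <= limit:
        total += xs[i]
        i += 1
    return i

def find_exit_speculative(xs, limit):
    """Speculative variant: every prefix sum is computed eagerly, then
    only the iterations before the first violation are committed; the
    speculative work past the exit point is discarded."""
    for i, total in enumerate(accumulate(xs)):
        if total > limit:
            return i
    return len(xs)

xs = [3, 1, 4, 1, 5, 9, 2, 6]
print(find_exit(xs, 10), find_exit_speculative(xs, 10))
```

The speculative version does more arithmetic but exposes all iterations at once, which is exactly the trade a space-time transformation of a while-loop makes.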

  16. Modeling Stone Columns.

    Science.gov (United States)

    Castro, Jorge

    2017-07-11

    This paper reviews the main modeling techniques for stone columns, both ordinary stone columns and geosynthetic-encased stone columns. The paper aims to encompass the most recent advances and recommendations in the topic. Regarding the geometrical model, the main options are the "unit cell", longitudinal gravel trenches in plane strain conditions, cylindrical rings of gravel in axial symmetry conditions, equivalent homogeneous soil with improved properties, and three-dimensional models, either a full three-dimensional model or just a three-dimensional row or slice of columns. Some guidelines for obtaining these simplified geometrical models are provided, and the particular case of groups of columns under footings is also analyzed. For the latter case, there is a column critical length that is around twice the footing width for non-encased columns in a homogeneous soft soil. In the literature, the column critical length is sometimes given as a function of the column length, which leads to some disparities in its value. Here it is shown that the column critical length mainly depends on the footing dimensions. Some other features related to column modeling are also briefly presented, such as the influence of column installation. Finally, some guidance and recommendations are provided on parameter selection for the study of stone columns.
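The critical-length rule of thumb reduces to a one-line helper. A minimal sketch, with the factor of two from the review exposed as a parameter because it applies specifically to non-encased columns in a homogeneous soft soil:

```python
def critical_column_length(footing_width, factor=2.0):
    """Rule-of-thumb critical length for stone columns under a footing:
    roughly `factor` times the footing width (factor ~2 for non-encased
    columns in homogeneous soft soil, per the review above)."""
    if footing_width <= 0:
        raise ValueError("footing width must be positive")
    return factor * footing_width

# Columns longer than this add little further settlement reduction.
print(critical_column_length(1.5))  # 1.5 m footing -> 3.0 m
```

Expressing the length as a function of footing width, rather than column length, avoids the circular definitions the review criticizes in the literature.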

  17. EX1402L1 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1402L1: Gulf of Mexico Mapping and...

  18. EX1402L2 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1402L2: Gulf of Mexico Mapping and...

  19. Hemifield columns co-opt ocular dominance column structure in human achiasma.

    Science.gov (United States)

    Olman, Cheryl A; Bao, Pinglei; Engel, Stephen A; Grant, Andrea N; Purington, Chris; Qiu, Cheng; Schallmo, Michael-Paul; Tjan, Bosco S

    2018-01-01

    In the absence of an optic chiasm, visual input to the right eye is represented in primary visual cortex (V1) in the right hemisphere, while visual input to the left eye activates V1 in the left hemisphere. Retinotopic mapping in V1 reveals that in each hemisphere left and right visual hemifield representations are overlaid (Hoffmann et al., 2012). To explain how overlapping hemifield representations in V1 do not impair vision, we tested the hypothesis that visual projections from nasal and temporal retina create interdigitated left and right visual hemifield representations in V1, similar to the ocular dominance columns observed in neurotypical subjects (Victor et al., 2000). We used high-resolution fMRI at 7T to measure the spatial distribution of responses to left- and right-hemifield stimulation in one achiasmic subject. T2-weighted 2D spin-echo images were acquired at 0.8 mm isotropic resolution. The left eye was occluded. To the right eye, a presentation of flickering checkerboards alternated between the left and right visual fields in a blocked stimulus design. The participant performed a demanding orientation-discrimination task at fixation. A general linear model was used to estimate the preference of voxels in V1 to left- and right-hemifield stimulation. The spatial distribution of voxels with significant preference for each hemifield showed interdigitated clusters which densely packed V1 in the right hemisphere. The spatial distribution of hemifield-preference voxels in the achiasmic subject was stable between two days of testing and comparable in scale to that of human ocular dominance columns. These results are the first in vivo evidence showing that visual hemifield representations interdigitate in achiasmic V1 following a similar developmental course to that of ocular dominance columns in V1 with intact optic chiasm. Copyright © 2017 Elsevier Inc. All rights reserved.
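The voxel-wise GLM step amounts to a least-squares fit of each voxel's time course against left- and right-hemifield block regressors, with the larger weight defining the voxel's preference. A toy reconstruction (invented data; ordinary least squares via the 2x2 normal equations, not the authors' pipeline):

```python
def glm_preference(y, left_reg, right_reg):
    """Least-squares fit of y ~ bL*left_reg + bR*right_reg for one voxel;
    return which hemifield regressor carries the larger weight."""
    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))
    # 2x2 normal equations: [[a, b], [b, c]] @ [bL, bR] = [d1, d2]
    a, b, c = dot(left_reg, left_reg), dot(left_reg, right_reg), dot(right_reg, right_reg)
    d1, d2 = dot(left_reg, y), dot(right_reg, y)
    det = a * c - b * b
    bL = (c * d1 - b * d2) / det
    bR = (a * d2 - b * d1) / det
    return "left" if bL > bR else "right"

# Alternating blocked design: left-field on, then right-field on.
left_reg = [1, 1, 0, 0] * 3
right_reg = [0, 0, 1, 1] * 3
voxel = [1.0, 0.9, 0.1, 0.0] * 3  # responds during left-field blocks
print(glm_preference(voxel, left_reg, right_reg))
```

Mapping this label across the cortical sheet is what reveals the interdigitated hemifield clusters described above.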

  20. High-resolution 2-deoxyglucose mapping of functional cortical columns in mouse barrel cortex.

    Science.gov (United States)

    McCasland, J S; Woolsey, T A

    1988-12-22

    Cortical columns associated with barrels in layer IV of the somatosensory cortex were characterized by high-resolution 2-deoxy-D-glucose (2DG) autoradiography in freely behaving mice. The method demonstrates a more exact match between columnar labeling and cytoarchitectonic barrel boundaries than previously reported. The pattern of cortical activation seen with stimulation of a single whisker (third whisker in the middle row of large hairs--C3) was compared with the patterns from two control conditions--normal animals with all whiskers present ("positive control")--and with all large whiskers clipped ("negative control"). Two types of measurements were made from 2DG autoradiograms of tangential cortical sections: 1) labeled cells were identified by eye and tabulated with a computer, and 2) grain densities were obtained automatically with a computer-controlled microscope and image processor. We studied the fine-grained patterns of 2DG labeling in a nine-barrel grid with the C3 barrel in the center. From the analysis we draw five major conclusions. 1. Approximately 30-40% of the total number of neurons in the C3 barrel column are activated when only the C3 whisker is stimulated. This is about twice the number of neurons labeled in the C3 column when all whiskers are stimulated and about ten times the number of neurons labeled when all large whiskers are clipped. 2. There is evidence for a vertical functional organization within a barrel-related whisker column which has smaller dimensions in the tangential direction than a barrel. There are densely labeled patches within a barrel which are unique to an individual cortex. The same patchy pattern is found in the appropriate regions of sections above and below the barrels through the full thickness of the cortex. This functional arrangement could be considered to be a "minicolumn" or more likely a group of "minicolumns" (Mountcastle: In G.M. Edelman and U.B. Mountcastle (eds): The Material Brain: Cortical Organization

  1. Post column derivatisation analyses review. Is post-column derivatisation incompatible with modern HPLC columns?

    Science.gov (United States)

    Jones, Andrew; Pravadali-Cekic, Sercan; Dennis, Gary R; Shalliker, R Andrew

    2015-08-19

    Post-column derivatisation (PCD) coupled with high performance liquid chromatography or ultra-high performance liquid chromatography is a powerful tool in the modern analytical laboratory, or at least it should be. One drawback with PCD techniques is the extra post-column dead volume due to the reaction coils used to enable adequate reaction time and the mixing of reagents, which causes peak broadening and hence a loss of separation power. This loss of efficiency is counter-productive to modern HPLC technologies such as UHPLC. We reviewed 87 PCD methods published between 2009 and 2014, a window chosen because we were interested in the uptake of PCD methods in UHPLC environments. Our review focused on a range of system parameters including: column dimensions, stationary phase and particle size, as well as the geometry of the reaction loop. The most commonly used column in the methods investigated was not in fact a modern UHPLC version with sub-2-micron (or even sub-3-micron) particles, but rather workhorse columns, such as 250 × 4.6 mm i.d. columns packed with 5 μm C18 particles. Reaction loops were varied, even within the same type of analysis, but the majority of methods employed loop systems with volumes greater than 500 μL. A second part of this review illustrated briefly the effect of dead volume on column performance. The experiment evaluated the change in resolution and separation efficiency of some weakly to moderately retained solutes on a 250 × 4.6 mm i.d. column packed with 5 μm particles. The data showed that reaction loops beyond 100 μL resulted in a very serious loss of performance. Our study concluded that practitioners of PCD methods largely avoid the use of UHPLC-type column formats, so yes, very much, PCD is incompatible with the modern HPLC column. Copyright © 2015. Published by Elsevier B.V.
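The performance loss from reaction-loop dead volume follows from the additivity of peak variances, σ²_observed = σ²_column + σ²_extra. A hedged sketch with illustrative numbers (the 35 µL peak standard deviation and plate count are assumptions for a 250 × 4.6 mm, 5 µm column, not values from the review):

```python
def apparent_plates(n_col, sigma2_col_uL2, sigma2_extra_uL2):
    """Observed plate count when extra-column (e.g. reaction-loop)
    variance adds to the column's own peak volume variance (in uL^2)."""
    return n_col * sigma2_col_uL2 / (sigma2_col_uL2 + sigma2_extra_uL2)

# Illustrative: a column delivering N = 20000 with a peak volume
# standard deviation of ~35 uL, fed through loops of growing dispersion.
n_col, sigma_col = 20000, 35.0
for loop_sigma in (10.0, 35.0, 100.0):  # hypothetical loop dispersion, uL
    n_obs = apparent_plates(n_col, sigma_col**2, loop_sigma**2)
    print(f"loop sigma = {loop_sigma:5.1f} uL -> N_obs ~ {n_obs:7.0f}")
```

Once the loop's dispersion matches the column's own peak standard deviation, half the plate count is gone, which is why large loops hurt narrow UHPLC peaks far more than the broad peaks of a 250 × 4.6 mm column.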

  2. EX1402L3 Water Column Summary Report and Profile Data Collection

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A complete set of water column profile data and CTD Summary Report (if generated) generated by the Okeanos Explorer during EX1402L3: Gulf of Mexico Mapping and ROV...

  3. Acoustic mapping of shallow water gas releases using shipborne multibeam systems

    Science.gov (United States)

    Urban, Peter; Köser, Kevin; Weiß, Tim; Greinert, Jens

    2015-04-01

    Water column imaging (WCI) shipborne multibeam systems are effective tools for investigating marine free gas (bubble) release. Like single- and split-beam systems they are very sensitive to gas bubbles in the water column, and have the advantage of a wide swath opening angle of 120° or more, allowing better mapping and possible 3D investigation of targets in the water column. On the downside, WCI data are degraded by specific noise from side-lobe effects and are usually not calibrated for target backscattering strength analysis. Most approaches so far have concentrated on manual investigation of bubbles in the water column data. Such investigations allow the detection of bubble streams (flares) and make it possible to get an impression of the strength of detected flares and of the gas release. Because of the subjective character of these investigations, it is difficult to know how well an area has been covered by a flare mapping survey, and subjective impressions of flare strength can easily be fooled by the many acoustic effects multibeam systems create. Here we present a semi-automated approach that uses the behavior of bubble streams in varying water currents to detect and map their exact source positions. The focus of the method is the application of objective rules for flare detection, which makes it possible to extract information about the quality of the seepage mapping survey, perform automated noise reduction, and create acoustic maps with quality discriminators indicating how well an area has been mapped.

  4. Optimal Operation and Stabilising Control of the Concentric Heat-Integrated Distillation Column

    DEFF Research Database (Denmark)

    Bisgaard, Thomas; Skogestad, Sigurd; Huusom, Jakob Kjøbsted

    2016-01-01

    A systematic control structure design method is applied on the concentric heat integrated distillation column (HIDiC) separating benzene and toluene. A degrees of freedom analysis is provided for identifying potential manipulated and controlled variables. Optimal operation is mapped and active...

  5. Assembly for connecting the column ends of two capillary columns

    International Nuclear Information System (INIS)

    Kolb, B.; Auer, M.; Pospisil, P.

    1984-01-01

    In gas chromatography, the column ends of two capillary columns are inserted into a straight capillary from both sides, forming annular gaps. The capillary is located in a tee out of which the capillary columns are sealingly guided, and to which carrier gas is supplied by means of a flushing flow conduit. A "straight-forward operation" having capillary columns connected in series and a "flush-back operation" are possible. The dead volume between the capillary columns can be kept small

  6. Multilevel Parallelization of AutoDock 4.2

    Directory of Open Access Journals (Sweden)

    Norgan Andrew P

    2011-04-01

    Full Text Available Abstract Background Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Results Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Conclusions Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.

  7. Multilevel Parallelization of AutoDock 4.2.

    Science.gov (United States)

    Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P

    2011-04-28

    Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.
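
    The MPI-plus-OpenMP layering described above can be imitated in miniature with Python's standard library: processes stand in for MPI ranks (one docking job each) and threads stand in for OpenMP workers inside a job. All names and numbers below are illustrative, not part of mpAD4.

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def score_pose(pose):
    # Placeholder for one energy evaluation of a candidate ligand pose.
    return sum(x * x for x in pose)

def dock_one_ligand(poses, n_threads=2):
    # Node-level parallelism (the OpenMP analogue): threads share the
    # in-memory grid map and score candidate poses concurrently.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return min(pool.map(score_pose, poses))

def virtual_screen(ligand_pose_sets, n_procs=2):
    # System-level parallelism (the MPI analogue): each process runs
    # whole docking jobs independently of the others.
    with ProcessPoolExecutor(max_workers=n_procs) as pool:
        return list(pool.map(dock_one_ligand, ligand_pose_sets))

if __name__ == "__main__":
    poses = [[0.1 * i, 0.2 * i] for i in range(1, 5)]
    print(virtual_screen([poses, poses]))
```

    Tuning `n_procs` against `n_threads` mirrors the system-level versus node-level balancing the authors describe.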

  8. The MAPS based PXL vertex detector for the STAR experiment

    Science.gov (United States)

    Contin, G.; Anderssen, E.; Greiner, L.; Schambach, J.; Silber, J.; Stezelberger, T.; Sun, X.; Szelezniak, M.; Vu, C.; Wieman, H.; Woodmansee, S.

    2015-03-01

    The Heavy Flavor Tracker (HFT) was installed in the STAR experiment for the 2014 heavy ion run of RHIC. Designed to improve the vertex resolution and extend the measurement capabilities in the heavy flavor domain, the HFT is composed of three different silicon detectors based on CMOS monolithic active pixels (MAPS), pads and strips respectively, arranged in four concentric cylinders close to the STAR interaction point. The two innermost HFT layers are placed at radii of 2.7 and 8 cm from the beam line, respectively, and accommodate 400 ultra-thin (50 μm) high resolution MAPS sensors arranged in 10-sensor ladders to cover a total silicon area of 0.16 m². Each sensor includes a pixel array of 928 rows and 960 columns with a 20.7 μm pixel pitch, providing a sensitive area of ~3.8 cm². The architecture is based on a column-parallel readout with amplification and correlated double sampling inside each pixel. Each column is terminated with a high precision discriminator, is read out in a rolling shutter mode, and the output is processed through an integrated zero suppression logic. The results are stored in two SRAMs with a ping-pong arrangement for continuous readout. The sensor features a 185.6 μs readout time and 170 mW/cm² power dissipation. The detector is air-cooled, allowing a global material budget as low as 0.39% on the inner layer. A novel mechanical approach to detector insertion enables effective installation and integration of the pixel layers within an 8 hour shift during the on-going STAR run. In addition to a detailed description of the detector characteristics, the experience of the first months of data taking will be presented in this paper, with a particular focus on sensor threshold calibration, latch-up protection procedures and general system operations aimed at stabilizing the running conditions. Issues faced during the 2014 run will be discussed together with the implemented solutions. A preliminary analysis of the detector performance
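
    The column-parallel, zero-suppressed rolling-shutter readout can be caricatured in a few lines: one row is latched per time step, every column is discriminated in parallel, and only over-threshold pixel addresses are emitted. The values below are toy numbers, not the PXL design.

```python
def rolling_shutter_readout(frame, threshold):
    # frame: 2-D list of pixel values; rows are read one shutter step
    # at a time, and columns within a row are discriminated in parallel.
    hits = []
    for row_idx, row in enumerate(frame):
        for col_idx, value in enumerate(row):
            if value > threshold:                # per-column discriminator
                hits.append((row_idx, col_idx))  # zero-suppressed output
    return hits

print(rolling_shutter_readout([[0, 5], [7, 0]], threshold=3))  # [(0, 1), (1, 0)]
```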

  9. Parallel processing of structural integrity analysis codes

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.

    1996-01-01

    Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the Finite Element Method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising such high speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes plays an important role in exploiting the parallel processing system capabilities. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. Codes in the first category, such as those used for harmonic analysis and mechanistic fuel performance codes, do not require parallelisation of individual modules. Codes in the second category, such as conventional FEM codes, require parallelisation of individual modules. In this category, parallelisation of the equation solution module poses major difficulties. Different solution schemes such as the domain decomposition method (DDM), the parallel active column solver and the substructuring method are currently used on parallel processing systems. Two codes, FAIR and TABS, one belonging to each of these categories, have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab

  10. QDP++: Data Parallel Interface for QCD

    Energy Technology Data Exchange (ETDEWEB)

    Robert Edwards

    2003-03-01

    This is a user's guide for the C++ binding for the QDP Data Parallel Applications Programmer Interface developed under the auspices of the US Department of Energy Scientific Discovery through Advanced Computing (SciDAC) program. The QDP Level 2 API has the following features: (1) Provides data parallel operations (logically SIMD) on all sites across the lattice or subsets of these sites. (2) Operates on lattice objects, which have an implementation-dependent data layout that is not visible above this API. (3) Hides details of how the implementation maps onto a given architecture, namely how the logical problem grid (i.e., the lattice) is mapped onto the machine architecture. (4) Allows asynchronous (non-blocking) shifts of lattice level objects over any permutation map of sites onto sites. However, from the user's view these instructions appear blocking and in fact may be so in some implementations. (5) Provides broadcast operations (filling a lattice quantity from a scalar value(s)), global reduction operations, and lattice-wide operations on various data-type primitives, such as matrices, vectors, and tensor products of matrices (propagators). (6) Operator syntax that supports complex expression constructions.
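
    In the same spirit as features (1)-(4), though not the QDP++ API itself, a data-parallel lattice container can hide its site layout and expose whole-lattice arithmetic and shifts; this sketch is purely illustrative.

```python
class Lattice:
    """Toy data-parallel container: one value per lattice site."""

    def __init__(self, values):
        self.values = list(values)

    def __add__(self, other):
        # Logically SIMD: the same operation is applied at every site.
        return Lattice(a + b for a, b in zip(self.values, other.values))

    def shift(self, offset):
        # Move the object over a permutation of sites (cyclic here);
        # to the caller this looks like a blocking operation.
        n = len(self.values)
        return Lattice(self.values[(i + offset) % n] for i in range(n))

print((Lattice([1, 2, 3]) + Lattice([4, 5, 6])).values)  # [5, 7, 9]
```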

  11. Fast phase processing in off-axis holography by CUDA including parallel phase unwrapping.

    Science.gov (United States)

    Backoach, Ohad; Kariv, Saar; Girshovitz, Pinhas; Shaked, Natan T

    2016-02-22

    We present a parallel processing implementation for rapid extraction of the quantitative phase maps from off-axis holograms on the Graphics Processing Unit (GPU) of the computer using Compute Unified Device Architecture (CUDA) programming. To obtain an efficient implementation, we parallelized both the wrapped phase map extraction algorithm and the two-dimensional phase unwrapping algorithm. In contrast to previous implementations, we utilized an unweighted least squares phase unwrapping algorithm that better suits parallelism. We compared the proposed algorithm run times on the CPU and the GPU of the computer for various sizes of off-axis holograms. Using the GPU implementation, we extracted the unwrapped phase maps from the recorded off-axis holograms at 35 frames per second (fps) for 4 megapixel holograms, and at 129 fps for 1 megapixel holograms, which represents the fastest processing frame rates obtained so far, to the best of our knowledge. We then used common-path off-axis interferometric imaging to quantitatively capture the phase maps of a micro-organism with rapid flagellum movements.
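
    The unwrapping step can be illustrated in one dimension (serial here; the paper's algorithm is a 2-D unweighted least-squares unwrapper parallelized on the GPU): whenever a neighbouring phase difference jumps by more than π, a multiple of 2π is added so the result varies smoothly.

```python
import math

def unwrap_1d(phase):
    # Undo 2*pi wrapping along a 1-D profile of phase values (radians).
    out = [phase[0]]
    offset = 0.0
    for prev, cur in zip(phase, phase[1:]):
        d = cur - prev
        if d > math.pi:          # wrapped downward jump
            offset -= 2.0 * math.pi
        elif d < -math.pi:       # wrapped upward jump
            offset += 2.0 * math.pi
        out.append(cur + offset)
    return out
```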

  12. Distributed and parallel approach for handle and perform huge datasets

    Science.gov (United States)

    Konopko, Joanna

    2015-12-01

    Big Data refers to the dynamic, large and disparate volumes of data coming from many different sources (tools, machines, sensors, mobile devices) uncorrelated with each other. It requires new, innovative and scalable technology to collect, host and analytically process the vast amount of data. A proper architecture for a system that processes huge data sets is needed. In this paper, the comparison of distributed and parallel system architectures is presented on the example of the MapReduce (MR) Hadoop platform and a parallel database platform (DBMS). This paper also analyzes the problem of extracting and handling valuable information from petabytes of data. Both paradigms, MapReduce and parallel DBMS, are described and compared. A hybrid architecture approach is also proposed and could be used to solve the analyzed problem of storing and processing Big Data.
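
    The MapReduce side of the comparison reduces to a map -> shuffle -> reduce pattern, sketched here on a word count with the standard library only (the classic MR teaching example, not code from the paper).

```python
from collections import defaultdict

def map_phase(records):
    # map: emit (key, 1) for every word in every input record
    for record in records:
        for word in record.split():
            yield word, 1

def shuffle(pairs):
    # shuffle: group all emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: aggregate each key independently (hence parallelizable)
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big sets"])))
print(counts)  # {'big': 2, 'data': 1, 'sets': 1}
```

    A parallel DBMS would instead express the same aggregation declaratively (e.g. `SELECT word, COUNT(*) ... GROUP BY word`) and let the optimizer distribute it.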

  13. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    Science.gov (United States)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high order accuracy in numerical discretization.
The previously developed Parallel Diagonal Dominant (PDD) algorithm

  14. ADVANCED DIAGNOSTIC TECHNIQUES FOR THREE-PHASE SLURRY BUBBLE COLUMN REACTORS (SBCR)

    Energy Technology Data Exchange (ETDEWEB)

    M.H. Al-Dahhan; M.P. Dudukovic; L.S. Fan

    2001-07-25

    This report summarizes the accomplishments made during the second year of this cooperative research effort between Washington University, Ohio State University and Air Products and Chemicals. The technical difficulties that were encountered in implementing Computer Automated Radioactive Particle Tracking (CARPT) in a high pressure SBCR have been successfully resolved. New strategies for data acquisition and the calibration procedure have been implemented. These have been performed as a part of other projects supported by the Industrial Consortium and DOE via contract DE-2295PC95051, which are executed in parallel with this grant. CARPT and Computed Tomography (CT) experiments have been performed using air-water-glass beads in a 6 inch high pressure stainless steel slurry bubble column reactor at selected conditions. Data processing of this work is in progress. The overall gas holdup and the hydrodynamic parameters are measured by Laser Doppler Anemometry (LDA) in a 2 inch slurry bubble column using Norpar 15, which mimics at room temperature the Fischer-Tropsch wax at FT reaction conditions of high pressure and temperature. To improve the design and scale-up of bubble columns, new correlations have been developed to predict the radial gas holdup and the time averaged axial liquid recirculation velocity profiles in bubble columns.

  15. Water column correction for coral reef studies by remote sensing.

    Science.gov (United States)

    Zoffoli, Maria Laura; Frouin, Robert; Kampel, Milton

    2014-09-11

    Human activity and natural climate trends constitute a major threat to coral reefs worldwide. Models predict a significant reduction in reef spatial extension together with a decline in biodiversity in the relatively near future. In this context, monitoring programs to detect changes in reef ecosystems are essential. In recent years, coral reef mapping using remote sensing data has benefited from instruments with better resolution and computational advances in storage and processing capabilities. However, the water column represents an additional complexity when extracting information from submerged substrates by remote sensing that demands a correction of its effect. In this article, the basic concepts of bottom substrate remote sensing and water column interference are presented. A compendium of methodologies developed to reduce water column effects in coral ecosystems studied by remote sensing that include their salient features, advantages and drawbacks is provided. Finally, algorithms to retrieve the bottom reflectance are applied to simulated data and actual remote sensing imagery and their performance is compared. The available methods are not able to completely eliminate the water column effect, but they can minimize its influence. Choosing the best method depends on the marine environment, available input data and desired outcome or scientific application.
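
    One classical family of such corrections is the Lyzenga-style depth-invariant index: after deep-water subtraction and a log transform, a band-pair combination is largely free of depth effects. The sketch below assumes that form; the variable names and any numbers used with it are illustrative, not data from the article.

```python
import math

def depth_invariant_index(L_i, L_j, L_deep_i, L_deep_j, k_ratio):
    # L_i, L_j: radiances of two bands over the same bottom pixel;
    # L_deep_*: deep-water radiances; k_ratio: attenuation ratio k_i/k_j.
    X_i = math.log(L_i - L_deep_i)   # depth-linearized band i
    X_j = math.log(L_j - L_deep_j)   # depth-linearized band j
    return X_i - k_ratio * X_j       # approximately depth-independent
```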

  16. Derringer desirability and kinetic plot LC-column comparison approach for MS-compatible lipopeptide analysis.

    Science.gov (United States)

    D'Hondt, Matthias; Verbeke, Frederick; Stalmans, Sofie; Gevaert, Bert; Wynendaele, Evelien; De Spiegeleer, Bart

    2014-06-01

    Lipopeptides are currently re-emerging as an interesting subgroup in the peptide research field, having historical applications as antibacterial and antifungal agents and new potential applications as antiviral, antitumor, immune-modulating and cell-penetrating compounds. However, due to their specific structure, chromatographic analysis often requires special buffer systems or the use of trifluoroacetic acid, limiting mass spectrometry detection. Therefore, we used a traditional aqueous/acetonitrile based gradient system, containing 0.1% (m/v) formic acid, to separate four pharmaceutically relevant lipopeptides (polymyxin B1, caspofungin, daptomycin and gramicidin A1), which were selected based upon hierarchical cluster analysis (HCA) and principal component analysis (PCA). In total, the performance of four different C18 columns, including one UPLC column, was evaluated using two parallel approaches. First, a Derringer desirability function was used, whereby six single and multiple chromatographic response values were rescaled into one overall D-value per column. Using this approach, the YMC Pack Pro C18 column was ranked as the best column for general MS-compatible lipopeptide separation. Secondly, the kinetic plot approach was used to compare the different columns at different flow rate ranges. As the optimal kinetic column performance is obtained at its maximal pressure, the length elongation factor λ (Pmax/Pexp) was used to transform the obtained experimental data (retention times and peak capacities) and construct kinetic performance limit (KPL) curves, allowing a direct visual and unbiased comparison of the selected columns, whereby the YMC Triart C18 UPLC and ACE C18 columns performed best. Finally, differences in column performance and the (dis)advantages of both approaches are discussed.
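
    The Derringer step can be sketched as follows, assuming the common form in which each response is rescaled to [0, 1] and the overall D-value is the geometric mean; the paper's actual responses and weights are not reproduced here.

```python
import math

def desirability(value, low, high):
    # Larger-is-better rescaling of one response, clipped to [0, 1].
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def overall_D(desirabilities):
    # Overall Derringer desirability: geometric mean of the individual
    # desirabilities, so a single zero response vetoes the column.
    return math.prod(desirabilities) ** (1.0 / len(desirabilities))
```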

  17. Contributions to reversed-phase column selectivity: III. Column hydrogen-bond basicity.

    Science.gov (United States)

    Carr, P W; Dolan, J W; Dorsey, J G; Snyder, L R; Kirkland, J J

    2015-05-22

    Column selectivity in reversed-phase chromatography (RPC) can be described in terms of the hydrophobic-subtraction model, which recognizes five solute-column interactions that together determine solute retention and column selectivity: hydrophobic, steric, hydrogen bonding of an acceptor solute (i.e., a hydrogen-bond base) by a stationary-phase donor group (i.e., a silanol), hydrogen bonding of a donor solute (e.g., a carboxylic acid) by a stationary-phase acceptor group, and ionic. Of these five interactions, hydrogen bonding between donor solutes (acids) and stationary-phase acceptor groups is the least well understood; the present study aims at resolving this uncertainty, so far as possible. Previous work suggests that there are three distinct stationary-phase sites for hydrogen-bond interaction with carboxylic acids, which we will refer to as column basicity I, II, and III. All RPC columns exhibit a selective retention of carboxylic acids (column basicity I) in varying degree. This now appears to involve an interaction of the solute with a pair of vicinal silanols in the stationary phase. For some type-A columns, an additional basic site (column basicity II) is similar to that for column basicity I in primarily affecting the retention of carboxylic acids. The latter site appears to be associated with metal contamination of the silica. Finally, for embedded-polar-group (EPG) columns, the polar group can serve as a proton acceptor (column basicity III) for acids, phenols, and other donor solutes. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. JCE Feature Columns

    Science.gov (United States)

    Holmes, Jon L.

    1999-05-01

    The Features area of JCE Online is now readily accessible through a single click from our home page. In the Features area each column is linked to its own home page. These column home pages also have links to them from the online Journal Table of Contents pages or from any article published as part of that feature column. Using these links you can easily find abstracts of additional articles that are related by topic. Of course, JCE Online+ subscribers are then just one click away from the entire article. Finding related articles is easy because each feature column "site" contains links to the online abstracts of all the articles that have appeared in the column. In addition, you can find the mission statement for the column and the email link to the column editor that I mentioned above. At the discretion of its editor, a feature column site may contain additional resources. As an example, the Chemical Information Instructor column edited by Arleen Somerville will have a periodically updated bibliography of resources for teaching and using chemical information. Due to the increase in the number of these resources available on the WWW, it only makes sense to publish this information online so that you can get to these resources with a simple click of the mouse. We expect that there will soon be additional information and resources at several other feature column sites. Following in the footsteps of the Chemical Information Instructor, up-to-date bibliographies and links to related online resources can be made available. We hope to extend the online component of our feature columns with moderated online discussion forums. If you have a suggestion for an online resource you would like to see included, let the feature editor or JCE Online (jceonline@chem.wisc.edu) know about it. 
JCE Internet Features JCE Internet also has several feature columns: Chemical Education Resource Shelf, Conceptual Questions and Challenge Problems, Equipment Buyers Guide, Hal's Picks, Mathcad

  19. Multi-Column Experimental Test Bed Using CaSDB MOF for Xe/Kr Separation

    Energy Technology Data Exchange (ETDEWEB)

    Welty, Amy Keil [Idaho National Lab. (INL), Idaho Falls, ID (United States); Greenhalgh, Mitchell Randy [Idaho National Lab. (INL), Idaho Falls, ID (United States); Garn, Troy Gerry [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-03-01

    Processing of spent nuclear fuel produces off-gas from which several volatile radioactive components must be separated for further treatment or storage. As part of the Off-gas Sigma Team, parallel research at INL and PNNL has produced several promising sorbents for the selective capture of xenon and krypton from these off-gas streams. In order to design full-scale treatment systems, sorbents that are promising on a laboratory scale must be proven under process conditions to be considered for pilot and then full-scale use. To that end, a bench-scale multi-column system with the capability to test multiple sorbents was designed and constructed at INL. This report details bench-scale testing of CaSDB MOF, produced at PNNL, and compares the results to those reported last year using INL engineered sorbents. Two multi-column tests were performed with the CaSDB MOF installed in the first column, followed by HZ-PAN installed in the second column. The CaSDB MOF column was placed in a Stirling cryocooler while the cryostat was employed for the HZ-PAN column. Test temperatures of 253 K and 191 K were selected for the first column while the second column was held at 191 K for both tests. Calibrated volume sample bombs were utilized for gas stream analyses. At the conclusion of each test, samples were collected from each column and analyzed for gas composition. While CaSDB MOF does appear to have good capacity for Xe, the short time to initial breakthrough would make the design of a continuous adsorption/desorption cycle difficult, requiring either very large columns or a large number of smaller columns. Because of the tenacity with which Xe and Kr adhere to the material once adsorbed, this CaSDB MOF may be more suitable for use as a long-term storage solution. Further testing is recommended to determine if CaSDB MOF is suitable for this purpose.

  20. Uranium facilitated transport by water-dispersible colloids in field and soil columns

    Energy Technology Data Exchange (ETDEWEB)

    Crancon, P., E-mail: pierre.crancon@cea.fr [CEA, DAM, DIF, F-91297 Arpajon (France); Pili, E. [CEA, DAM, DIF, F-91297 Arpajon (France); Charlet, L. [Laboratoire de Geophysique Interne et Tectonophysique (LGIT-OSUG), University of Grenoble-I, UMR5559-CNRS-UJF, BP53, 38041 Grenoble cedex 9 (France)

    2010-04-01

    The transport of uranium through a sandy podzolic soil has been investigated in the field and in column experiments. Field monitoring, numerous years after surface contamination by depleted uranium deposits, revealed a 20 cm deep uranium migration in soil. Uranium retention in soil is controlled by the <50 μm mixed humic and clayey coatings in the first 40 cm, i.e. in the E horizon. Column experiments of uranium transport under various conditions were run using isotopic spiking. After 100 pore volumes elution, 60% of the total input uranium is retained in the first 2 cm of the column. The retardation factor of uranium on E horizon material ranges from 1300 (column) to 3000 (batch). In parallel to this slow uranium migration, we experimentally observed a fast elution related to humic colloids of about 1-5% of the total-uranium input, transferred at the mean porewater velocity through the soil column. In order to understand the effect of rain events, the ionic strength of the input solution was sharply changed. Humic colloids are retarded when ionic strength increases, while a major mobilization of humic colloids and colloid-borne uranium occurs as ionic strength decreases. Isotopic spiking shows that both ²³⁸U initially present in the soil column and ²³³U brought by the input solution are desorbed. The mobilization process observed experimentally after a drop of ionic strength may account for a rapid uranium migration in the field after a rainfall event, and for the significant uranium concentrations found in deep soil horizons and in groundwater, 1 km downstream from the pollution source.

  1. Water Column Correction for Coral Reef Studies by Remote Sensing

    Directory of Open Access Journals (Sweden)

    Maria Laura Zoffoli

    2014-09-01

    Full Text Available Human activity and natural climate trends constitute a major threat to coral reefs worldwide. Models predict a significant reduction in reef spatial extension together with a decline in biodiversity in the relatively near future. In this context, monitoring programs to detect changes in reef ecosystems are essential. In recent years, coral reef mapping using remote sensing data has benefited from instruments with better resolution and computational advances in storage and processing capabilities. However, the water column represents an additional complexity when extracting information from submerged substrates by remote sensing that demands a correction of its effect. In this article, the basic concepts of bottom substrate remote sensing and water column interference are presented. A compendium of methodologies developed to reduce water column effects in coral ecosystems studied by remote sensing that include their salient features, advantages and drawbacks is provided. Finally, algorithms to retrieve the bottom reflectance are applied to simulated data and actual remote sensing imagery and their performance is compared. The available methods are not able to completely eliminate the water column effect, but they can minimize its influence. Choosing the best method depends on the marine environment, available input data and desired outcome or scientific application.

  2. Water Column Correction for Coral Reef Studies by Remote Sensing

    Science.gov (United States)

    Zoffoli, Maria Laura; Frouin, Robert; Kampel, Milton

    2014-01-01

    Human activity and natural climate trends constitute a major threat to coral reefs worldwide. Models predict a significant reduction in reef spatial extension together with a decline in biodiversity in the relatively near future. In this context, monitoring programs to detect changes in reef ecosystems are essential. In recent years, coral reef mapping using remote sensing data has benefited from instruments with better resolution and computational advances in storage and processing capabilities. However, the water column represents an additional complexity when extracting information from submerged substrates by remote sensing that demands a correction of its effect. In this article, the basic concepts of bottom substrate remote sensing and water column interference are presented. A compendium of methodologies developed to reduce water column effects in coral ecosystems studied by remote sensing that include their salient features, advantages and drawbacks is provided. Finally, algorithms to retrieve the bottom reflectance are applied to simulated data and actual remote sensing imagery and their performance is compared. The available methods are not able to completely eliminate the water column effect, but they can minimize its influence. Choosing the best method depends on the marine environment, available input data and desired outcome or scientific application. PMID:25215941

  3. Blind column selection protocol for two-dimensional high performance liquid chromatography.

    Science.gov (United States)

    Burns, Niki K; Andrighetto, Luke M; Conlan, Xavier A; Purcell, Stuart D; Barnett, Neil W; Denning, Jacquie; Francis, Paul S; Stevenson, Paul G

    2016-07-01

    The selection of two orthogonal columns for two-dimensional high performance liquid chromatography (LC×LC) separation of natural product extracts can be a labour intensive and time consuming process and in many cases is an entirely trial-and-error approach. This paper introduces a blind optimisation method for column selection of a black box of constituent components. A data processing pipeline, created in the open source application OpenMS®, was developed to map the components within the mixture of equal mass across a library of HPLC columns; LC×LC separation space utilisation was compared by measuring the fractional surface coverage, f_coverage. It was found that for a test mixture from an opium poppy (Papaver somniferum) extract, the combination of diphenyl and C18 stationary phases provided a predicted f_coverage of 0.48 and was matched with an actual usage of 0.43. OpenMS®, in conjunction with algorithms designed in house, has allowed for a significantly quicker selection of two orthogonal columns, optimised for a LC×LC separation of crude extracts of plant material. Copyright © 2016 Elsevier B.V. All rights reserved.
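
    The fractional surface coverage can be estimated by gridding the normalized 2-D retention space and counting occupied bins; the bin count below is an arbitrary illustrative choice, not the paper's setting.

```python
def f_coverage(peaks, nbins=10):
    # peaks: (x, y) retention coordinates normalized to [0, 1).
    # A bin counts as covered if at least one peak falls inside it.
    occupied = {(int(x * nbins), int(y * nbins)) for x, y in peaks}
    return len(occupied) / (nbins * nbins)

print(f_coverage([(0.05, 0.05), (0.05, 0.06), (0.55, 0.55)]))  # 0.02
```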

  4. Okeanos Explorer (EX1402L3): Gulf of Mexico Mapping and ROV Exploration

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Multibeam mapping, single beam, water column sonar, sub-bottom profile, water column profile, ship sensor, ROV sensor, video and image data will all be collected...

  5. Column Liquid Chromatography.

    Science.gov (United States)

    Majors, Ronald E.; And Others

    1984-01-01

    Reviews literature covering developments of column liquid chromatography during 1982-83. Areas considered include: books and reviews; general theory; columns; instrumentation; detectors; automation and data handling; multidimensional chromatographic and column switching techniques; liquid-solid chromatography; normal bonded-phase, reversed-phase,…

  6. Sorting, Searching, and Simulation in the MapReduce Framework

    DEFF Research Database (Denmark)

    Goodrich, Michael T.; Sitchinava, Nodari; Zhang, Qin

    2011-01-01

    usefulness of our approach by designing and analyzing efficient MapReduce algorithms for fundamental sorting, searching, and simulation problems. This study is motivated by a goal of ultimately putting the MapReduce framework on an equal theoretical footing with the well-known PRAM and BSP parallel...... in parallel computational geometry for the MapReduce framework, which result in efficient MapReduce algorithms for sorting, 2- and 3-dimensional convex hulls, and fixed-dimensional linear programming. For the case when mappers and reducers have a memory/message-I/O size of M = Θ(N^ε), for a small constant ε > 0...

  7. The shapes of column density PDFs. The importance of the last closed contour

    Science.gov (United States)

    Alves, João; Lombardi, Marco; Lada, Charles J.

    2017-10-01

    The probability distribution function of column density (PDF) has become the tool of choice for cloud structure analysis and star formation studies. Its simplicity is attractive, and the PDF could offer access to cloud physical parameters otherwise difficult to measure, but there has been some confusion in the literature on the definition of its completeness limit and shape at the low column density end. In this letter we use the natural definition of the completeness limit of a column density PDF, the last closed column density contour inside a surveyed region, and apply it to a set of large-scale maps of nearby molecular clouds. We conclude that there is no observational evidence for log-normal PDFs in these objects. We find that all studied molecular clouds have PDFs well described by power laws, including the diffuse cloud Polaris. Our results call for a new physical interpretation of the shape of the column density PDFs. We find that the slope of a cloud PDF is invariant to distance but not to the spatial arrangement of cloud material, and as such it is still a useful tool for investigating cloud structure.
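
    A power-law PDF appears as a straight line in log-log space, so its slope can be estimated by least squares on the binned column densities; this sketch of that fit uses synthetic values, not the cloud data.

```python
import math

def loglog_slope(xs, ys):
    # Least-squares slope of log(y) versus log(x): for a power law
    # y = c * x**a this recovers the exponent a.
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den
```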

  8. The evolution of the cognitive map.

    Science.gov (United States)

    Jacobs, Lucia F

    2003-01-01

    The hippocampal formation of mammals and birds mediates spatial orientation behaviors consistent with a map-like representation, which allows the navigator to construct a new route across unfamiliar terrain. This cognitive map thus appears to underlie long-distance navigation. Its mediation by the hippocampal formation and its presence in birds and mammals suggests that at least one function of the ancestral medial pallium was spatial navigation. Recent studies of the goldfish and certain reptile species have shown that the medial pallium homologue in these species can also play an important role in spatial orientation. It is not yet clear, however, whether one type of cognitive map is found in these groups or indeed in all vertebrates. To answer this question, we need a more precise definition of the map. The recently proposed parallel map theory of hippocampal function provides a new perspective on this question, by unpacking the mammalian cognitive map into two dissociable mapping processes, mediated by different hippocampal subfields. If the cognitive map of non-mammals is constructed in a similar manner, the parallel map theory may facilitate the analysis of homologies, both in behavior and in the function of medial pallium subareas. Copyright 2003 S. Karger AG, Basel

  9. Uranium facilitated transport by water-dispersible colloids in field and soil columns

    Energy Technology Data Exchange (ETDEWEB)

    Crancon, P.; Pili, E. [CEA Bruyeres-le-Chatel, DIF, 91 (France); Charlet, L. [Univ Grenoble 1, Lab Geophys Interne and Tectonophys LGIT OSUG, CNRS, UJF, UMR5559, F-38041 Grenoble 9 (France)

    2010-07-01

The transport of uranium through a sandy podsolic soil has been investigated in the field and in column experiments. Field monitoring, many years after surface contamination by depleted uranium deposits, revealed a 20 cm deep uranium migration in soil. Uranium retention in soil is controlled by the ≤50 μm mixed humic and clayey coatings in the first 40 cm, i.e. in the E horizon. Column experiments of uranium transport under various conditions were run using isotopic spiking. After 100 pore volumes of elution, 60% of the total input uranium is retained in the first 2 cm of the column. The retardation factor of uranium on E horizon material ranges from 1300 (column) to 3000 (batch). In parallel to this slow uranium migration, we experimentally observed a fast elution, related to humic colloids, of about 1-5% of the total uranium input, transferred at the mean pore-water velocity through the soil column. In order to understand the effect of rain events, the ionic strength of the input solution was sharply changed. Humic colloids are retarded when ionic strength increases, while a major mobilization of humic colloids and colloid-borne uranium occurs as ionic strength decreases. Isotopic spiking shows that both the ²³⁸U initially present in the soil column and the ²³³U brought by the input solution are desorbed. The mobilization process observed experimentally after a drop in ionic strength may account for a rapid uranium migration in the field after a rainfall event, and for the significant uranium concentrations found in deep soil horizons and in groundwater, 1 km downstream from the pollution source. (authors)

  10. A class of parallel algorithms for computation of the manipulator inertia matrix

    Science.gov (United States)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on the composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm with significantly higher efficiency was also developed for the linear array.

  11. Parallel simulated annealing algorithms for cell placement on hypercube multiprocessors

    Science.gov (United States)

    Banerjee, Prithviraj; Jones, Mark Howard; Sargent, Jeff S.

    1990-01-01

    Two parallel algorithms for standard cell placement using simulated annealing are developed to run on distributed-memory message-passing hypercube multiprocessors. The cells can be mapped in a two-dimensional area of a chip onto processors in an n-dimensional hypercube in two ways, such that both small and large cell exchange and displacement moves can be applied. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support the parallel cost evaluation. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. A dynamic parallel annealing schedule estimates the errors due to interacting parallel moves and adapts the rate of synchronization automatically. Two novel approaches in controlling error in parallel algorithms are described: heuristic cell coloring and adaptive sequence control.
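The serial core that such a parallel annealer distributes, exchange moves accepted by the Metropolis rule under a cooling schedule, can be sketched as follows (a single-processor toy without the hypercube decomposition or parallel-move error control; the cost model and schedule are illustrative):

```python
import math
import random

def wirelength(placement, nets):
    # Half-perimeter wirelength: bounding box of each net's cell positions.
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal_placement(cells, nets, grid, t0=10.0, alpha=0.9, moves=2000, seed=0):
    rng = random.Random(seed)
    slots = [(x, y) for x in range(grid) for y in range(grid)]
    placement = {c: slots[i] for i, c in enumerate(cells)}
    cost = wirelength(placement, nets)
    t = t0
    for step in range(moves):
        a, b = rng.sample(cells, 2)                # exchange move: swap two cells
        placement[a], placement[b] = placement[b], placement[a]
        new_cost = wirelength(placement, nets)     # real annealers update cost incrementally
        delta = new_cost - cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cost = new_cost                        # accept (Metropolis criterion)
        else:
            placement[a], placement[b] = placement[b], placement[a]  # undo move
        if step % 100 == 99:
            t *= alpha                             # cooling schedule
    return placement, cost
```

The parallel versions in the paper run many such moves concurrently on different processors, which is what makes error estimation and synchronization control necessary.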

  12. Two-dimensional liquid chromatography consisting of twelve second-dimension columns for comprehensive analysis of intact proteins.

    Science.gov (United States)

    Ren, Jiangtao; Beckner, Matthew A; Lynch, Kyle B; Chen, Huang; Zhu, Zaifang; Yang, Yu; Chen, Apeng; Qiao, Zhenzhen; Liu, Shaorong; Lu, Joann J

    2018-05-15

A comprehensive two-dimensional liquid chromatography (LCxLC) system consisting of twelve columns in the second dimension was developed for comprehensive analysis of intact proteins in complex biological samples. The system consisted of an ion-exchange column in the first dimension and twelve reverse-phase columns in the second dimension; all thirteen columns were monolithic and prepared inside 250 µm i.d. capillaries. These columns were assembled through the use of three valves and an innovative configuration. The effluent from the first dimension was continuously fractionated and sequentially transferred into the twelve second-dimension columns, while the second-dimension separations were carried out in a series of batches (six columns per batch). This LCxLC system was tested first using standard proteins, followed by real-world samples from E. coli. Baseline separation was observed for eleven standard proteins, and hundreds of peaks were observed in the real-world sample analysis. Two-dimensional liquid chromatography, often considered an effective tool for mapping proteins, is seen as laborious and time-consuming when configured offline. Our online LCxLC system with an increased number of second-dimension columns promises to provide a solution to overcome these hindrances. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Smooth H I Low Column Density Outskirts in Nearby Galaxies

    Science.gov (United States)

    Ianjamasimanana, R.; Walter, Fabian; de Blok, W. J. G.; Heald, George H.; Brinks, Elias

    2018-06-01

The low column density gas at the outskirts of galaxies as traced by the 21 cm hydrogen line emission (H I) represents the interface between galaxies and the intergalactic medium, i.e., where galaxies are believed to get their supply of gas to fuel future episodes of star formation. Photoionization models predict a break in the radial profiles of H I at a column density of ∼5 × 10¹⁹ cm⁻² due to the lack of self-shielding against extragalactic ionizing photons. To investigate the prevalence of such breaks in galactic disks and to characterize what determines the potential edge of the H I disks, we study the azimuthally averaged H I column density profiles of 17 nearby galaxies from the H I Nearby Galaxy Survey and supplemented in two cases with published Hydrogen Accretion in LOcal GAlaxieS data. To detect potential faint H I emission that would otherwise be undetected using conventional moment map analysis, we line up individual profiles to the same reference velocity and average them azimuthally to derive stacked radial profiles. To do so, we use model velocity fields created from a simple extrapolation of the rotation curves to align the profiles in velocity at radii beyond the extent probed with the sensitivity of traditional integrated H I maps. With this method, we improve our sensitivity to outer-disk H I emission by up to an order of magnitude. Except for a few disturbed galaxies, none show evidence of a sudden change in the slope of the H I radial profiles: the alleged signature of ionization by the extragalactic background.
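The stacking step, aligning each line-of-sight spectrum to a model velocity before averaging, can be sketched in a few lines (a simplified, hypothetical illustration; real H I cubes need interpolation, masking and weighting):

```python
import numpy as np

def stack_profiles(spectra, vel_axis, v_ref):
    # Shift each line-of-sight spectrum so its model reference velocity
    # lands in channel 0, then average: aligned signal adds coherently
    # while noise averages down, boosting sensitivity to faint emission.
    dv = vel_axis[1] - vel_axis[0]
    stacked = np.zeros_like(np.asarray(spectra[0], dtype=float))
    for spec, v in zip(spectra, v_ref):
        shift = int(round((v - vel_axis[0]) / dv))
        stacked += np.roll(spec, -shift)
    return stacked / len(spectra)
```

With three Gaussian lines at different velocities, the stacked profile peaks at channel 0 with the full line amplitude preserved.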

  14. Differential receptive field organizations give rise to nearly identical neural correlations across three parallel sensory maps in weakly electric fish.

    Science.gov (United States)

    Hofmann, Volker; Chacron, Maurice J

    2017-09-01

    Understanding how neural populations encode sensory information thereby leading to perception and behavior (i.e., the neural code) remains an important problem in neuroscience. When investigating the neural code, one must take into account the fact that neural activities are not independent but are actually correlated with one another. Such correlations are seen ubiquitously and have a strong impact on neural coding. Here we investigated how differences in the antagonistic center-surround receptive field (RF) organization across three parallel sensory maps influence correlations between the activities of electrosensory pyramidal neurons. Using a model based on known anatomical differences in receptive field center size and overlap, we initially predicted large differences in correlated activity across the maps. However, in vivo electrophysiological recordings showed that, contrary to modeling predictions, electrosensory pyramidal neurons across all three segments displayed nearly identical correlations. To explain this surprising result, we incorporated the effects of RF surround in our model. By systematically varying both the RF surround gain and size relative to that of the RF center, we found that multiple RF structures gave rise to similar levels of correlation. In particular, incorporating known physiological differences in RF structure between the three maps in our model gave rise to similar levels of correlation. Our results show that RF center overlap alone does not determine correlations which has important implications for understanding how RF structure influences correlated neural activity.

  15. The MAPS based PXL vertex detector for the STAR experiment

    International Nuclear Information System (INIS)

    Contin, G.; Anderssen, E.; Greiner, L.; Silber, J.; Stezelberger, T.; Vu, C.; Wieman, H.; Woodmansee, S.; Schambach, J.; Sun, X.; Szelezniak, M.

    2015-01-01

The Heavy Flavor Tracker (HFT) was installed in the STAR experiment for the 2014 heavy ion run of RHIC. Designed to improve the vertex resolution and extend the measurement capabilities in the heavy flavor domain, the HFT is composed of three different silicon detectors based on CMOS monolithic active pixels (MAPS), pads and strips respectively, arranged in four concentric cylinders close to the STAR interaction point. The two innermost HFT layers are placed at a radius of 2.7 and 8 cm from the beam line, respectively, and accommodate 400 ultra-thin (50 μm) high resolution MAPS sensors arranged in 10-sensor ladders to cover a total silicon area of 0.16 m². Each sensor includes a pixel array of 928 rows and 960 columns with a 20.7 μm pixel pitch, providing a sensitive area of ∼3.8 cm². The architecture is based on a column-parallel readout with amplification and correlated double sampling inside each pixel. Each column is terminated with a high precision discriminator, is read out in a rolling shutter mode, and the output is processed through an integrated zero suppression logic. The results are stored in two SRAMs with a ping-pong arrangement for continuous readout. The sensor features a 185.6 μs readout time and 170 mW/cm² power dissipation. The detector is air-cooled, allowing a global material budget as low as 0.39% on the inner layer. A novel mechanical approach to detector insertion enables effective installation and integration of the pixel layers within an 8 hour shift during the on-going STAR run. In addition to a detailed description of the detector characteristics, the experience of the first months of data taking will be presented in this paper, with a particular focus on sensor threshold calibration, latch-up protection procedures and general system operations aimed at stabilizing the running conditions. Issues faced during the 2014 run will be discussed together with the implemented solutions. A preliminary analysis of the detector

  16. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520
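The map/reduce decomposition described, weak classifiers scored in Map tasks and weighted votes combined in Reduce tasks, can be sketched without Hadoop (stub classifiers; the ±1 votes and alpha weights follow the standard AdaBoost combination rule rather than the paper's exact code):

```python
from collections import defaultdict

def map_phase(images, classifiers):
    # Map tasks: each weak classifier scores its split of the images and
    # emits (image_id, alpha * vote) pairs, vote being +1 or -1.
    for clf in classifiers:
        for img_id, features in images:
            yield img_id, clf["alpha"] * clf["predict"](features)

def reduce_phase(pairs):
    # Reduce tasks: sum the weighted votes per image; the sign of the total
    # is the strong classifier's decision (the AdaBoost combination rule).
    scores = defaultdict(float)
    for img_id, vote in pairs:
        scores[img_id] += vote
    return {img_id: (1 if s >= 0 else -1) for img_id, s in scores.items()}
```

Because each (image, classifier) pair is scored independently, the map phase parallelizes trivially across a Hadoop cluster.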

  17. Determination of zearalenone content in cereals and feedstuffs by immunoaffinity column coupled with liquid chromatography.

    Science.gov (United States)

    Fazekas, B; Tar, A

    2001-01-01

The zearalenone content of maize, wheat, barley, swine feed, and poultry feed samples was determined by immunoaffinity column cleanup followed by liquid chromatography (IAC-LC). Samples were extracted in methanol-water (8 + 2, v/v) solution. The filtered extract was diluted with distilled water and applied to immunoaffinity columns. Zearalenone was eluted with methanol, dried by evaporation, and dissolved in acetonitrile-water (3 + 7, v/v). Zearalenone was separated by isocratic elution with acetonitrile-water (50 + 50, v/v) on a reversed-phase C18 column. Quantitative analysis was performed with a fluorescence detector, and confirmation was based on the UV spectrum obtained by a diode array detector. The mean recovery rate of zearalenone was 82-97% (RSD, 1.4-4.1%) on the original (single-use) immunoaffinity columns. The limit of detection of zearalenone by fluorescence was 10 ng/g at a signal-to-noise ratio of 10:1, and 30 ng/g by spectral confirmation in UV. A good correlation was found (R² = 0.89) between the results obtained by IAC-LC and by the official AOAC-LC method. The specificity of the method was increased by using fluorescence detection in parallel with UV detection. The method was applicable to the determination of zearalenone content in cereals and other kinds of feedstuffs. The reusability of the immunoaffinity columns was examined by washing them with water after sample elution and allowing the columns to stand for 24 h at room temperature. The zearalenone recovery rate of the regenerated columns varied between 79 and 95% (RSD, 3.2-6.3%). Columns can be regenerated at least 3 times without altering their performance and without affecting the results of repeated determinations.

  18. Parallel Mappings as a Key for Understanding the Bioinorganic Materials

    International Nuclear Information System (INIS)

    Kuczumow, A.; Nowak, J.; Chalas, R.

    2009-01-01

Important bioinorganic objects, both living and fossilized, are as a rule characterized by a complex microscopic structure. For biological samples, cell-like, laminar, and growth-ring structures are among the most significant. Moreover, these objects belong to a now widely studied category of biominerals with a composite, inorganic-organic structure. Such materials are composed of a limited number of inorganic compounds and several natural organic polymers. This apparently simple composition leads to an abnormal variety of constructions significant from the medical (repairs and implants), natural (ecological effectiveness) and materials science (biomimetic synthesis) points of view. The analysis of an image obtained in an optical microscope, optionally in a scanning electron microscope, is a topographical reference for further investigations. For the characterization of the distribution of chemical elements and compounds in a material, techniques such as X-ray, electron or proton microprobes are applied. Essentially, elemental mappings are collected in this stage. The need for the application of an X-ray diffraction microprobe is obvious, and our experience indicates the necessity of using synchrotron-based devices due to their better spatial resolution and good X-ray intensity. To examine the presence of organic compounds, Raman microprobe measurements are a good option. They deliver information about the spatial distribution of functional groups and oscillating fragments of molecules. For the comprehensive investigation of the structural and chemical features of bioinorganic materials, we propose the following sequence of methods: optical imaging, elemental mapping, crystallographic mapping, organic mapping and micromechanical mapping. Examples of such an approach are given for petrified wood, human teeth, and an ammonite shell. (authors)

  19. Sequential and parallel image restoration: neural network implementations.

    Science.gov (United States)

    Figueiredo, M T; Leitao, J N

    1994-01-01

    Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high dimension convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
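The underlying optimization, a quadratic MAP objective of the form ||y − Hx||² + λ||Dx||², can be minimized with plain gradient iterations, which is in spirit what the graded Hopfield dynamics compute. A minimal NumPy sketch, assuming a 1-D signal and a first-difference smoothness prior (not the paper's network implementation):

```python
import numpy as np

def restore_map(y, H, lam=0.01, lr=0.1, iters=2000):
    # Gradient descent on the convex MAP objective ||y - Hx||^2 + lam*||Dx||^2,
    # with D a first-difference (smoothness) operator. Each update is a step
    # along the negative gradient of this quadratic energy.
    n = H.shape[1]
    D = (np.eye(n, k=1) - np.eye(n))[:-1]
    x = np.zeros(n)
    for _ in range(iters):
        grad = 2 * H.T @ (H @ x - y) + 2 * lam * D.T @ (D @ x)
        x -= lr * grad
    return x
```

On a step signal blurred by a 3-tap moving average, the iterations drive the energy well below its starting value and reproduce the blurred observation closely.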

  20. Implementation of multidimensional databases in column-oriented NoSQL systems

    OpenAIRE

    Chevalier, Max; El Malki, Mohammed; Kopliku, Arlind; Teste, Olivier; Tournier, Ronan

    2015-01-01

NoSQL (Not Only SQL) systems are becoming popular due to known advantages such as horizontal scalability and elasticity. In this paper, we study the implementation of multidimensional data warehouses with column-oriented NoSQL systems. We define mapping rules that transform the conceptual multidimensional data model to logical column-oriented models. We consider three different logical models and we use them to instantiate data warehouses. We focus on data loading, mode...
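One plausible shape of such a mapping rule can be sketched in Python (illustrative only; the paper defines and compares three logical models, not necessarily this one):

```python
def map_fact(fact, dim_keys, measures):
    # One simple logical rule: the row key concatenates the dimension keys,
    # and attributes are grouped into column families, one family for the
    # dimensions and one for the measures, mirroring a wide-column layout.
    row_key = ":".join(str(fact[k]) for k in dim_keys)
    return row_key, {
        "dimensions": {k: fact[k] for k in dim_keys},
        "measures": {m: fact[m] for m in measures},
    }
```

A sales fact keyed on date and store, for example, maps to one wide row whose key concatenates both dimension values.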

  1. Behavior of chemicals in the seawater column by shadowscopy

    Science.gov (United States)

    Fuhrer, Mélanie; Aprin, Laurent; Le Floch, Stéphane; Slangen, Pierre; Dusserre, Gilles

    2012-10-01

Ninety percent of the global movement of goods transits by ship. The transportation of HNS (Hazardous and Noxious Substances) in bulk is increasing sharply with tanker traffic. The huge volume capacities involved induce a major risk of accidents involving chemicals. Among the latest accidents, many have led to vessels sinking (Ievoli Sun, 2000 - ECE, 2006). In the case of floating substances, a liquid release at depth entails an ascending two-phase flow. Visualization of that flow is complex: most liquid chemicals have a refractive index close to that of water, making it difficult to assess the behavior of the two-phase medium. Several physical aspects are of interest: droplet characterization (shape evolution and velocity), dissolution kinetics and hydrodynamic vortices. Previous work, presented at the 2010 Speckle conference in Brazil, employed Dynamic Speckle Interferometry to study Methyl Ethyl Ketone (MEK) dissolution in a 15 cm high and 1 cm thick water column. This paper deals with experiments carried out in the Cedre Experimental Column (CEC - 5 m high and 0.8 m in diameter). As the water thickness has been increased, the Dynamic Speckle Interferometry results are improved by shadowscopic measurements. A laser diode is used to generate parallel light while high speed imaging records the rising products. Two measurement systems are placed at the bottom and the top of the CEC. The chemical class of pollutants, such as floaters and dissolvers (plume, trails or droplets), has then been identified. The physics of the two-phase flow is presented and shows the dependence on chemical properties such as interfacial tension, viscosity and density. Furthermore, parallel light propagation through this disturbed medium has revealed trailing edge vortices for some substances (e.g. butanol) presenting low refractive index changes.

  2. Design and implementation of a micron-sized electron column fabricated by focused ion beam milling

    Energy Technology Data Exchange (ETDEWEB)

    Wicki, Flavio, E-mail: flavio.wicki@physik.uzh.ch; Longchamp, Jean-Nicolas; Escher, Conrad; Fink, Hans-Werner

    2016-01-15

We have designed, fabricated and tested a micron-sized electron column with an overall length of about 700 microns comprising two electron lenses: a micro-lens with a minimal bore of 1 micron followed by a second lens with a bore of up to 50 microns in diameter to shape a coherent low-energy electron wave front. The design criteria follow the notion of scaling down source size, lens dimensions and kinetic electron energy to minimize spherical aberrations and ensure a parallel coherent electron wave front. All lens apertures have been milled employing a focused ion beam and could thus be precisely aligned within a tolerance of about 300 nm from the optical axis. Experimentally, the final column shapes a quasi-planar wave front with a minimal full divergence angle of 4 mrad and electron energies as low as 100 eV. - Highlights: • Electron optics • Scaling laws • Low-energy electrons • Coherent electron beams • Micron-sized electron column.

  3. Column-Oriented Database Systems (Tutorial)

    NARCIS (Netherlands)

    D. Abadi; P.A. Boncz (Peter); S. Harizopoulos

    2009-01-01

Column-oriented database systems (column-stores) have attracted a lot of attention in the past few years. Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as
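The core idea, storing each column contiguously so a scan reads only the attributes it needs, fits in a few lines (a toy Python illustration, not any particular system's storage format):

```python
# Row-oriented layout: one tuple per record; scanning a single attribute
# still drags every other field through memory.
rows = [(1, "alice", 34), (2, "bob", 29), (3, "carol", 41)]

# Column-oriented layout: each attribute stored contiguously, so a scan
# touches only the column it needs, and the dense array compresses well.
columns = {
    "id":   [1, 2, 3],
    "name": ["alice", "bob", "carol"],
    "age":  [34, 29, 41],
}

def avg_age_rowstore(rows):
    return sum(r[2] for r in rows) / len(rows)

def avg_age_colstore(columns):
    ages = columns["age"]   # one contiguous array; names and ids never read
    return sum(ages) / len(ages)
```

Both queries return the same answer, but the column-store scan reads only one third of the data, which is the source of the I/O savings column-stores exploit.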

  4. Mechanized sephadex LH-20 multiple column chromatography as a prerequisite to automated multi-steroid radioimmunoassays

    International Nuclear Information System (INIS)

    Sippell, W.G.; Bidlingmaier, F.; Knorr, D.

    1977-01-01

In order to establish a procedure for the simultaneous determination of all major corticosteroid hormones and their immediate biological precursors in the same plasma sample, two different mechanized methods have been developed for the simultaneous isolation of aldosterone (A), corticosterone (B), 11-deoxycorticosterone (DOC), progesterone (P), 17-hydroxyprogesterone (17-OHP), 11-deoxycortisol (S), cortisol (F), and cortisone (E) from the methylene chloride extracts of 0.1 to 2.0 ml plasma samples. In both methods, eluate fractions of each of the isolated steroids are automatically pooled and collected from all parallel columns by one programmable linear fraction collector. Due to the high reproducibility of the elution patterns, both between different parallel columns and between 30 to 40 consecutive elutions, mean recoveries of tritiated steroids including extraction are 60 to 84% after a single elution, and still over 50% after an additional chromatography on 40 cm LH-20 columns, with coefficients of variation below 15%. Thus, the eight steroids can be completely isolated from each of ten plasma extracts within 3 to 4 hours, yielding 80 samples readily prepared for subsequent quantitation by radioimmunoassay. (orig./AJ)

  5. Relationship between surface, free tropospheric and total column ozone in 2 contrasting areas in South-Africa

    CSIR Research Space (South Africa)

    Combrink, J

    1995-04-01

Measurements of surface ozone in two contrasting areas of South Africa are compared with free tropospheric and Total Ozone Mapping Spectrometer (TOMS) total column ozone data. Cape Point is representative of a background monitoring station which...

  6. Quantitative atomic resolution elemental mapping via absolute-scale energy dispersive X-ray spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z. [School of Physics and Astronomy, Monash University, Clayton, Victoria 3800 (Australia); Weyland, M. [Monash Centre for Electron Microscopy, Monash University, Clayton, Victoria 3800 (Australia); Department of Materials Science and Engineering, Monash University, Clayton, Victoria 3800 (Australia); Sang, X.; Xu, W.; Dycus, J.H.; LeBeau, J.M. [Department of Materials Science and Engineering, North Carolina State University, Raleigh, NC 27695 (United States); D' Alfonso, A.J.; Allen, L.J. [School of Physics, University of Melbourne, Parkville, Victoria 3010 (Australia); Findlay, S.D., E-mail: scott.findlay@monash.edu [School of Physics and Astronomy, Monash University, Clayton, Victoria 3800 (Australia)

    2016-09-15

    Quantitative agreement on an absolute scale is demonstrated between experiment and simulation for two-dimensional, atomic-resolution elemental mapping via energy dispersive X-ray spectroscopy. This requires all experimental parameters to be carefully characterized. The agreement is good, but some discrepancies remain. The most likely contributing factors are identified and discussed. Previous predictions that increasing the probe forming aperture helps to suppress the channelling enhancement in the average signal are confirmed experimentally. It is emphasized that simple column-by-column analysis requires a choice of sample thickness that compromises between being thick enough to yield a good signal-to-noise ratio while being thin enough that the overwhelming majority of the EDX signal derives from the column on which the probe is placed, despite strong electron scattering effects. - Highlights: • Absolute scale quantification of 2D atomic-resolution EDX maps is demonstrated. • Factors contributing to remaining small quantitative discrepancies are identified. • Experiment confirms large probe-forming apertures suppress channelling enhancement. • The thickness range suitable for reliable column-by-column analysis is discussed.

  7. Family of columns isospectral to gravity-loaded columns with tip force: A discrete approach

    Science.gov (United States)

    Ramachandran, Nirmal; Ganguli, Ranjan

    2018-06-01

A discrete model is introduced to analyze the transverse vibration of straight, clamped-free (CF) columns of variable cross-sectional geometry under the influence of gravity and a constant axial force at the tip. The discrete model is used to determine critical combinations of loading parameters - a gravity parameter and a tip force parameter - that cause the onset of dynamic instability in the CF column. A methodology, based on matrix factorization, is described to transform the discrete model into a family of models corresponding to weightless and unloaded clamped-free (WUCF) columns, each with a transverse vibration spectrum isospectral to the original model. Characteristics of models in this isospectral family depend on three transformation parameters. A procedure is discussed to convert the isospectral discrete model description into a geometric description of realistic columns, i.e. from the discrete model, we construct isospectral WUCF columns with rectangular cross-sections varying in width and depth. As part of numerical studies to demonstrate the efficacy of the techniques presented, frequency parameters of a uniform column and three types of tapered CF columns under different combinations of loading parameters are obtained from the discrete model. Critical combinations of these parameters for a typical tapered column are derived. These results match published results. Example CF columns under arbitrarily chosen combinations of loading parameters are considered, and for each combination, isospectral WUCF columns are constructed. The role of the transformation parameters in determining the characteristics of isospectral columns is discussed and optimum values are deduced. Natural frequencies of these WUCF columns computed using the Finite Element Method (FEM) match well with those of the given gravity-loaded CF column with tip force, hence confirming isospectrality.

  8. A privacy-preserving parallel and homomorphic encryption scheme

    Directory of Open Access Journals (Sweden)

    Min Zhaoe

    2017-04-01

In order to protect data privacy whilst allowing efficient access to data in multi-node cloud environments, a parallel homomorphic encryption (PHE) scheme is proposed based on the additive homomorphism of the Paillier encryption algorithm. In this paper we propose a PHE algorithm in which the plaintext is divided into several blocks and the blocks are encrypted in parallel. Experimental results demonstrate that the encryption algorithm can reach a speed-up ratio of about 7.1 in a MapReduce environment with 16 cores and 4 nodes.
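The additive homomorphism that lets each block be encrypted independently can be demonstrated with a textbook Paillier sketch (toy 17-bit modulus for illustration only, never a real key size; a thread pool stands in for the paper's MapReduce mappers):

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def keygen(p=293, q=433):
    # Textbook Paillier with toy primes. Choosing g = n + 1 simplifies the
    # decryption constant mu to the modular inverse of lambda.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)
    return (n, n + 1), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    while True:
        r = random.randrange(1, n)      # fresh randomness per ciphertext
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

def parallel_encrypt(pub, blocks, workers=4):
    # Blocks are independent, so encryption maps cleanly onto a worker pool
    # (standing in here for MapReduce mappers).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda m: encrypt(pub, m), blocks))
```

Multiplying two ciphertexts modulo n² yields an encryption of the sum of the plaintexts, which is the additive homomorphism the scheme relies on.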

  9. Distillation Column Flooding Predictor

    Energy Technology Data Exchange (ETDEWEB)

    George E. Dzyacky

    2010-11-23

    The Flooding Predictor™ is a patented advanced control technology proven in research at the Separations Research Program, University of Texas at Austin, to increase distillation column throughput by over 6%, while also increasing energy efficiency by 10%. The research was conducted under a U. S. Department of Energy Cooperative Agreement awarded to George Dzyacky of 2ndpoint, LLC. The Flooding Predictor™ works by detecting the incipient flood point and controlling the column closer to its actual hydraulic limit than historical practices have allowed. Further, the technology uses existing column instrumentation, meaning no additional refining infrastructure is required. Refiners often push distillation columns to maximize throughput, improve separation, or simply to achieve day-to-day optimization. Attempting to achieve such operating objectives is a tricky undertaking that can result in flooding. Operators and advanced control strategies alike rely on the conventional use of delta-pressure instrumentation to approximate the column’s approach to flood. But column delta-pressure is more an inference of the column’s approach to flood than it is an actual measurement of it. As a consequence, delta pressure limits are established conservatively in order to operate in a regime where the column is never expected to flood. As a result, there is much “left on the table” when operating in such a regime, i.e. the capacity difference between controlling the column to an upper delta-pressure limit and controlling it to the actual hydraulic limit. The Flooding Predictor™, an innovative pattern recognition technology, controls columns at their actual hydraulic limit, which research shows leads to a throughput increase of over 6%. Controlling closer to the hydraulic limit also permits operation in a sweet spot of increased energy-efficiency. In this region of increased column loading, the Flooding Predictor is able to exploit the benefits of higher liquid

  10. Annular pulse column development studies

    International Nuclear Information System (INIS)

    Benedict, G.E.

    1980-01-01

    The capacity of critically safe cylindrical pulse columns limits the size of nuclear fuel solvent extraction plants because of the limited cross-sectional area of plutonium, U-235, or U-233 processing columns. Thus, there is a need to increase the cross-sectional area of these columns. This can be accomplished through the use of a column having an annular cross section. The preliminary testing of a pilot-plant-scale annular column has been completed and is reported herein. The column is made from 152.4-mm (6-in.) glass pipe sections with an 89-mm (3.5-in.) o.d. internal tube, giving an annular width of 32 mm (1.25 in.). Louver plates are used to swirl the column contents to prevent channeling of the phases. The data from this testing indicate that this approach can successfully provide larger-cross-section critically safe pulse columns. While the capacity is only 70% of that of a cylindrical column of similar cross section, the efficiency is almost identical to that of a cylindrical column. No evidence was seen of any non-uniform pulsing action from one side of the column to the other.

  11. An Effective NoSQL-Based Vector Map Tile Management Approach

    Directory of Open Access Journals (Sweden)

    Lin Wan

    2016-11-01

    Within a digital map service environment, the rapid growth of spatial big data is driving new requirements for effective mechanisms for massive online vector map tile processing. The emergence of Not Only SQL (NoSQL) databases has resulted in a new data storage and management model for scalable spatial data deployments and fast tracking. They suit the scenario of high-volume, low-latency network map services better than traditional standalone high-performance computer (HPC) or relational databases. In this paper, we propose a flexible storage framework that provides feasible methods for parallel clipping and retrieval operations on tiled map data within a distributed NoSQL database environment. We illustrate the parallel vector tile generation and querying algorithms with the MapReduce programming model. Three different processing approaches, including local caching, distributed file storage, and the NoSQL-based method, are compared by analyzing the concurrent load and calculation time. An online geological vector tile map service prototype was developed to embed our processing framework in the China Geological Survey Information Grid. Experimental results show that our NoSQL-based parallel tile management framework can support applications that process huge volumes of vector tile data and improve the performance of the tiled map service.
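
    The parallel clipping step described in the abstract can be sketched in the MapReduce style the paper adopts. This is an illustrative, hypothetical sketch (the tile size, feature schema, and function names are invented here), not the authors' implementation:

```python
from collections import defaultdict

# Hypothetical sketch of MapReduce-style vector tile clipping: map each
# feature to every (x, y) tile its bounding box touches, then reduce the
# per-tile groups into tile records. Tile size and schema are invented.

TILE_SIZE = 10.0  # map units per tile edge at one zoom level (illustrative)

def map_feature(feature):
    """Map phase: emit (tile_key, feature) for every tile the bbox touches."""
    min_x, min_y, max_x, max_y = feature["bbox"]
    for tx in range(int(min_x // TILE_SIZE), int(max_x // TILE_SIZE) + 1):
        for ty in range(int(min_y // TILE_SIZE), int(max_y // TILE_SIZE) + 1):
            yield (tx, ty), feature

def reduce_tile(tile_key, features):
    """Reduce phase: combine one tile's features into a stored tile record."""
    return {"tile": tile_key, "count": len(features),
            "ids": sorted(f["id"] for f in features)}

features = [
    {"id": 1, "bbox": (2.0, 2.0, 8.0, 8.0)},    # falls in a single tile
    {"id": 2, "bbox": (8.0, 8.0, 12.0, 12.0)},  # straddles four tiles
]

groups = defaultdict(list)  # shuffle: group mapper output by tile key
for f in features:
    for key, feat in map_feature(f):
        groups[key].append(feat)

tiles = {k: reduce_tile(k, v) for k, v in groups.items()}
print(sorted(tiles))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

    In a real deployment the reduce output would be written as rows of a distributed NoSQL table keyed by tile coordinates, so serving a tile becomes a single key lookup.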

  12. Column-Oriented Database Systems (Tutorial)

    OpenAIRE

    Abadi, D.; Boncz, Peter; Harizopoulos, S.

    2009-01-01

    Column-oriented database systems (column-stores) have attracted a lot of attention in the past few years. Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as opposed to traditional database systems that store entire records (rows) one after the other. Reading a subset of a table’s columns becomes faster, at the potential expense of excessive disk-head s...
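
    The core idea can be made concrete with a toy sketch (plain Python, not any particular column-store engine): the same table in row and column layouts, where a single-attribute scan touches only one packed array in the columnar case.

```python
# Toy contrast of row vs column layouts. Scanning one attribute touches
# every record in the row layout, but only one densely packed array in
# the columnar layout -- the advantage described in the abstract.

rows = [  # row store: whole records stored one after the other
    {"id": 1, "name": "a", "price": 10},
    {"id": 2, "name": "b", "price": 20},
    {"id": 3, "name": "c", "price": 30},
]

columns = {  # column store: each attribute stored contiguously
    "id": [1, 2, 3],
    "name": ["a", "b", "c"],
    "price": [10, 20, 30],
}

# SELECT SUM(price): the row scan drags along id and name ...
total_row = sum(r["price"] for r in rows)
# ... while the column scan reads only the price array.
total_col = sum(columns["price"])
assert total_row == total_col == 60
```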

  13. Nuclear reactor control column

    International Nuclear Information System (INIS)

    Bachovchin, D.M.

    1982-01-01

    The nuclear reactor control column comprises a column disposed within the nuclear reactor core having a variable cross-section hollow channel and containing balls whose vertical location is determined by the flow of the reactor coolant through the column. The control column is divided into three basic sections, wherein each section has a different cross-sectional area. The uppermost section of the control column has the greatest cross-sectional area, the intermediate section has the smallest cross-sectional area, and the lowermost section has the intermediate cross-sectional area. In this manner, the area of the uppermost section can be established such that when the reactor coolant is flowing under normal conditions therethrough, the absorber balls will be lifted and suspended in a fluidized-bed manner in the upper section. However, when the reactor coolant flow falls below a predetermined value, the absorber balls will fall through the intermediate section and into the lowermost section, thereby reducing the reactivity of the reactor core and shutting down the reactor.

  14. Column Selection for Biomedical Analysis Supported by Column Classification Based on Four Test Parameters.

    Science.gov (United States)

    Plenis, Alina; Rekowska, Natalia; Bączek, Tomasz

    2016-01-21

    This article focuses on correlating the column classification obtained from the method created at the Katholieke Universiteit Leuven (KUL), with the chromatographic resolution attained in biomedical separation. In the KUL system, each column is described with four parameters, which enables estimation of the FKUL value characterising similarity of those parameters to the selected reference stationary phase. Thus, a ranking list based on the FKUL value can be calculated for the chosen reference column, then correlated with the results of the column performance test. In this study, the column performance test was based on analysis of moclobemide and its two metabolites in human plasma by liquid chromatography (LC), using 18 columns. The comparative study was performed using traditional correlation of the FKUL values with the retention parameters of the analytes describing the column performance test. In order to deepen the comparative assessment of both data sets, factor analysis (FA) was also used. The obtained results indicated that the stationary phase classes, closely related according to the KUL method, yielded comparable separation for the target substances. Therefore, the column ranking system based on the FKUL-values could be considered supportive in the choice of the appropriate column for biomedical analysis.

  15. Improvements in solvent extraction columns

    International Nuclear Information System (INIS)

    Aughwane, K.R.

    1987-01-01

    Solvent extraction columns are used in the reprocessing of irradiated nuclear fuel. For an effective reprocessing operation, a solvent extraction column is required which is capable of distributing the feed over most of the column. The patent describes improvements in solvent extraction columns which allow the feed to be distributed over a greater length of the column than was previously possible. (U.K.)

  16. Neural net generated seismic facies map and attribute facies map

    International Nuclear Information System (INIS)

    Addy, S.K.; Neri, P.

    1998-01-01

    The usefulness of 'seismic facies maps' in the analysis of an Upper Wilcox channel system, in a 3-D survey shot by CGG in 1995 in Lavaca County in south Texas, was discussed. A neural net-generated seismic facies map is a quick hydrocarbon exploration tool that can be applied regionally as well as on a prospect scale. The new technology is used to classify a constant interval parallel to a horizon in a 3-D seismic volume, based on the shape of the wiggle traces, using neural network technology. The tool makes it possible to interpret sedimentary features of a petroleum deposit. The same technology can be used in regional mapping by making 'attribute facies maps', in which various forms of amplitude, phase, or frequency attributes can be used.

  17. A New ENSO Index Derived from Satellite Measurements of Column Ozone

    Science.gov (United States)

    Ziemke, J. R.; Chandra, S.; Oman, L. D.; Bhartia, P. K.

    2010-01-01

    Column ozone measured at tropical latitudes by the Nimbus 7 total ozone mapping spectrometer (TOMS), Earth Probe TOMS, the solar backscatter ultraviolet (SBUV) instrument, and the Aura ozone monitoring instrument (OMI) is used to derive an El Nino-Southern Oscillation (ENSO) index. This index, which covers a time period from 1979 to the present, is defined as the Ozone ENSO Index (OEI) and is the first developed from atmospheric trace gas measurements. The OEI is constructed by first averaging monthly mean column ozone over two broad regions in the western and eastern Pacific and then taking their difference. This differencing yields a self-calibrating ENSO index which is independent of individual instrument calibration offsets and drifts in measurements over the long record. The combined Aura OMI and MLS ozone data confirm that zonal variability in total column ozone in the tropics caused by ENSO events lies almost entirely in the troposphere. As a result, the OEI can be derived directly from total column ozone instead of tropospheric column ozone. For clear-sky ozone measurements, a +1 K change in the Nino 3.4 index corresponds to a +2.9 Dobson unit (DU) change in the OEI, while a +1 hPa change in the SOI coincides with a -1.7 DU change in the OEI. For ozone measurements under all cloud conditions these numbers are +2.4 DU and -1.4 DU, respectively. As an ENSO index based upon ozone, it is potentially useful in evaluating climate models predicting long-term changes in ozone and other trace gases.
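
    A minimal sketch of the OEI construction as described above. The regional boxes and ozone values are invented for illustration; only the +2.9 DU/K clear-sky slope comes from the abstract.

```python
# Sketch of the OEI: average monthly-mean column ozone over a western and
# an eastern Pacific box, then take the difference. Values are made up.

def oei(west_ozone_du, east_ozone_du):
    """Ozone ENSO Index: west-minus-east mean column ozone, in Dobson Units."""
    west = sum(west_ozone_du) / len(west_ozone_du)
    east = sum(east_ozone_du) / len(east_ozone_du)
    return west - east

def oei_from_nino34(nino34_k, slope_du_per_k=2.9):
    """Clear-sky linear relation quoted above: +1 K in Nino 3.4 ~ +2.9 DU."""
    return slope_du_per_k * nino34_k

west = [262.0, 264.0, 263.0]  # monthly means, western box (made up)
east = [258.0, 259.0, 257.0]  # monthly means, eastern box (made up)

# A constant calibration bias added to every measurement cancels in the
# difference, which is why the index is self-calibrating.
bias = 5.0
assert abs(oei(west, east)
           - oei([w + bias for w in west], [e + bias for e in east])) < 1e-9

print(oei(west, east))  # 5.0
```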

  18. Scaling up machine learning: parallel and distributed approaches

    National Research Council Canada - National Science Library

    Bekkerman, Ron; Bilenko, Mikhail; Langford, John

    2012-01-01

    ... presented in the book cover a range of parallelization platforms from FPGAs and GPUs to multi-core systems and commodity clusters; concurrent programming frameworks that include CUDA, MPI, MapReduce, and DryadLINQ; and various learning settings: supervised, unsupervised, semi-supervised, and online learning. Extensive coverage of parallelizat...

  19. SU-F-J-146: Experimental Validation of 6 MV Photon PDD in Parallel Magnetic Field Calculated by EGSnrc

    Energy Technology Data Exchange (ETDEWEB)

    Ghila, A; Steciw, S; Fallone, B; Rathee, S [Cross Cancer Institute, Edmonton, AB (Canada)

    2016-06-15

    Purpose: Integrated linac-MR systems are uniquely suited for real-time tumor tracking during radiation treatment. Understanding the magnetic field dose effects and incorporating them in treatment planning is paramount for linac-MR clinical implementation. We experimentally validated EGSnrc dose calculations in the presence of a magnetic field parallel to the direction of radiation beam travel. Methods: Two cylindrical bore electromagnets produced a 0.21 T magnetic field parallel to the central axis of a 6 MV photon beam. A parallel plate ion chamber was used to measure the PDD in a polystyrene phantom, placed inside the bore in two setups: phantom top surface coinciding with the magnet bore center (183 cm SSD), and with the magnet bore’s top surface (170 cm SSD). We measured the field of the magnet at several points and included the exact dimensions of the coils to generate a 3D magnetic field map in a finite element model. BEAMnrc and DOSXYZnrc simulated the PDD experiments in the parallel magnetic field (i.e., with the 3D magnetic field included) and with no magnetic field. Results: With the phantom surface at the top of the electromagnet, the surface dose increased by 10% (compared to no magnetic field), due to electrons being focused by the smaller fringe fields of the electromagnet. With the phantom surface at the bore center, the surface dose increased by 30%, since an extra 13 cm of air column was in a relatively higher magnetic field (>0.13 T) in the magnet bore. The EGSnrc Monte Carlo code correctly calculated the radiation dose with and without the magnetic field, and all points passed the 2%, 2 mm gamma criterion when the ion chamber’s entrance window and air cavity were included in the simulated phantom. Conclusion: A parallel magnetic field increases the surface and buildup dose during irradiation. The EGSnrc package can model these magnetic field dose effects accurately. Dr. Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi

  20. (Anogeissus leiocarpus) timber columns

    African Journals Online (AJOL)

    A procedure for designing axially loaded Ayin (Anogeissus leiocarpus) wood columns or struts has been investigated. Instead of the usual categorization of columns into short, intermediate, and slender according to the value of the slenderness ratio, a continuous column formula representing the three categories was derived.

  1. LIQUID-LIQUID EXTRACTION COLUMNS

    Science.gov (United States)

    Thornton, J.D.

    1957-12-31

    This patent relates to liquid-liquid extraction columns having a means for pulsing the liquid in the column to give it an oscillatory up-and-down movement, and consists of a packed column, an inlet pipe for the dispersed liquid phase and an outlet pipe for the continuous liquid phase located in direct communication with the liquid in the lower part of said column, an inlet pipe for the continuous liquid phase and an outlet pipe for the dispersed liquid phase located in direct communication with the liquid in the upper part of said column, a tube having one end communicating with liquid in the lower part of said column and having its upper end located above the level of said outlet pipe for the dispersed phase, and a piston and cylinder connected to the upper end of said tube for applying a pulsating pneumatic pressure to the surface of the liquid in said tube so that said surface rises and falls in said tube.

  2. Wall modified photonic crystal fibre capillaries as porous layer open tubular columns for in-capillary micro-extraction and capillary chromatography

    International Nuclear Information System (INIS)

    Kazarian, Artaches A.; Sanz Rodriguez, Estrella; Deverell, Jeremy A.; McCord, James; Muddiman, David C.; Paull, Brett

    2016-01-01

    Wall modified photonic crystal fibre capillary columns for in-capillary micro-extraction and liquid chromatographic separations are presented. Columns contained 126 internal parallel 4 μm channels, each containing a wall bonded porous monolithic type polystyrene-divinylbenzene layer in open tubular column format (PLOT). Modification longitudinal homogeneity was monitored using scanning contactless conductivity detection and scanning electron microscopy. The multichannel open tubular capillary column showed channel diameter and polymer layer consistency of 4.2 ± 0.1 μm and 0.26 ± 0.02 μm respectively, and modification of 100% of the parallel channels with the monolithic polymer. The modified multi-channel capillaries were applied to the in-capillary micro-extraction of water samples. 500 μL of water samples containing single μg L−1 levels of polyaromatic hydrocarbons were extracted at a flow rate of 10 μL min−1, and eluted in 50 μL of acetonitrile for analysis using HPLC with fluorescence detection. HPLC LODs were 0.08, 0.02 and 0.05 μg L−1 for acenaphthene, anthracene and pyrene, respectively, with extraction recoveries of between 77 and 103%. The modified capillaries were also investigated briefly for direct application to liquid chromatographic separations, with the retention and elution of a standard protein (cytochrome c) under isocratic conditions demonstrated, proving the chromatographic potential of the new column format, with run-to-run retention time reproducibility of below 1%. - Highlights: • Novel PS-DVB modified photonic crystal fibres for in-capillary micro-extraction. • New method for micro-extraction of PAHs and HPLC-FL detection at sub-ppb levels. • Demonstration of PS-DVB modified photonic crystal fibres for capillary bioseparations.

  3. Wall modified photonic crystal fibre capillaries as porous layer open tubular columns for in-capillary micro-extraction and capillary chromatography

    Energy Technology Data Exchange (ETDEWEB)

    Kazarian, Artaches A. [Australian Centre for Research on Separation Science, School of Physical Sciences, University of Tasmania, Private Bag 75, Hobart, Tasmania 7001 (Australia); W.M. Keck FT-ICR-MS Laboratory, Department of Chemistry, North Carolina State University, Raleigh, NC (United States); Sanz Rodriguez, Estrella; Deverell, Jeremy A. [Australian Centre for Research on Separation Science, School of Physical Sciences, University of Tasmania, Private Bag 75, Hobart, Tasmania 7001 (Australia); McCord, James; Muddiman, David C. [W.M. Keck FT-ICR-MS Laboratory, Department of Chemistry, North Carolina State University, Raleigh, NC (United States); Paull, Brett, E-mail: Brett.Paull@utas.edu.au [Australian Centre for Research on Separation Science, School of Physical Sciences, University of Tasmania, Private Bag 75, Hobart, Tasmania 7001 (Australia); ARC Centre of Excellence for Electromaterials Science, School of Physical Sciences, University of Tasmania, Private Bag 75, Hobart, Tasmania 7001 (Australia)

    2016-01-28

    Wall modified photonic crystal fibre capillary columns for in-capillary micro-extraction and liquid chromatographic separations are presented. Columns contained 126 internal parallel 4 μm channels, each containing a wall bonded porous monolithic type polystyrene-divinylbenzene layer in open tubular column format (PLOT). Modification longitudinal homogeneity was monitored using scanning contactless conductivity detection and scanning electron microscopy. The multichannel open tubular capillary column showed channel diameter and polymer layer consistency of 4.2 ± 0.1 μm and 0.26 ± 0.02 μm respectively, and modification of 100% of the parallel channels with the monolithic polymer. The modified multi-channel capillaries were applied to the in-capillary micro-extraction of water samples. 500 μL of water samples containing single μg L−1 levels of polyaromatic hydrocarbons were extracted at a flow rate of 10 μL min−1, and eluted in 50 μL of acetonitrile for analysis using HPLC with fluorescence detection. HPLC LODs were 0.08, 0.02 and 0.05 μg L−1 for acenaphthene, anthracene and pyrene, respectively, with extraction recoveries of between 77 and 103%. The modified capillaries were also investigated briefly for direct application to liquid chromatographic separations, with the retention and elution of a standard protein (cytochrome c) under isocratic conditions demonstrated, proving the chromatographic potential of the new column format, with run-to-run retention time reproducibility of below 1%. - Highlights: • Novel PS-DVB modified photonic crystal fibres for in-capillary micro-extraction. • New method for micro-extraction of PAHs and HPLC-FL detection at sub-ppb levels. • Demonstration of PS-DVB modified photonic crystal fibres for capillary bioseparations.

  4. Mobile and replicated alignment of arrays in data-parallel programs

    Science.gov (United States)

    Chatterjee, Siddhartha; Gilbert, John R.; Schreiber, Robert

    1993-01-01

    When a data-parallel language like FORTRAN 90 is compiled for a distributed-memory machine, aggregate data objects (such as arrays) are distributed across the processor memories. The mapping determines the amount of residual communication needed to bring operands of parallel operations into alignment with each other. A common approach is to break the mapping into two stages: first, an alignment that maps all the objects to an abstract template, and then a distribution that maps the template to the processors. We solve two facets of the problem of finding alignments that reduce residual communication: we determine alignments that vary in loops, and objects that should have replicated alignments. We show that loop-dependent mobile alignment is sometimes necessary for optimum performance, and we provide algorithms with which a compiler can determine good mobile alignments for objects within do loops. We also identify situations in which replicated alignment is either required by the program itself (via spread operations) or can be used to improve performance. We propose an algorithm based on network flow that determines which objects to replicate so as to minimize the total amount of broadcast communication in replication. This work on mobile and replicated alignment extends our earlier work on determining static alignment.

  5. Speeding Up the String Comparison of the IDS Snort using Parallel Programming: A Systematic Literature Review on the Parallelized Aho-Corasick Algorithm

    Directory of Open Access Journals (Sweden)

    SILVA JUNIOR,J. B.

    2016-12-01

    The Intrusion Detection System (IDS) needs to compare the contents of all packets arriving at the network interface with a set of signatures indicating possible attacks, a task that consumes much CPU processing time. In order to alleviate this problem, some researchers have tried to parallelize the IDS's comparison engine, transferring execution from the CPU to the GPU. This paper identifies and maps the parallelization features of the Aho-Corasick algorithm, which is used in Snort to compare patterns, in order to show the algorithm's implementation and execution issues, as well as optimization techniques for the Aho-Corasick machine. We found 147 papers in major computer science publication databases and mapped them; we then selected 22 and analyzed them to obtain our results. Our analysis showed, among other results, that parallelization of the AC algorithm is a recent undertaking and that authors have focused on the State Transition Table as the most common way to implement the algorithm on the GPU. Furthermore, we found that techniques which speed up the algorithm and reduce its storage requirements are widely used, such as running the algorithm out of the fastest memories and mechanisms for reducing the number of nodes and bit mapping.
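
    For reference, a compact serial Aho-Corasick sketch using the State Transition Table representation the surveyed papers favor: once failure links are folded into a dense table, matching is a pure table walk, which is what makes per-chunk GPU parallelization straightforward. (Illustrative code, not Snort's implementation.)

```python
from collections import deque

# Aho-Corasick with a dense State Transition Table (STT). Failure links
# are folded into the table during a BFS over the pattern trie, so the
# matcher never follows failure chains at scan time.

def build_stt(patterns, alphabet):
    goto, out = [{}], [set()]        # trie edges; patterns ending per state
    for p in patterns:
        s = 0
        for ch in p:
            nxt = goto[s].get(ch)
            if nxt is None:
                goto.append({}); out.append(set())
                nxt = goto[s][ch] = len(goto) - 1
            s = nxt
        out[s].add(p)
    fail = [0] * len(goto)
    stt = [dict() for _ in goto]
    for ch in alphabet:              # root row: missing edges loop to root
        stt[0][ch] = goto[0].get(ch, 0)
    q = deque(goto[0].values())
    while q:                         # BFS guarantees fail[s] is built first
        s = q.popleft()
        out[s] |= out[fail[s]]       # inherit matches reachable via failure
        for ch in alphabet:
            t = goto[s].get(ch)
            if t is not None:
                fail[t] = stt[fail[s]][ch]
                stt[s][ch] = t
                q.append(t)
            else:
                stt[s][ch] = stt[fail[s]][ch]
    return stt, out

def match(text, stt, out):
    s, hits = 0, []
    for i, ch in enumerate(text):    # one table lookup per input character
        s = stt[s][ch]
        hits.extend((i - len(p) + 1, p) for p in out[s])
    return sorted(hits)

patterns = ["he", "she", "his", "hers"]
stt, out = build_stt(patterns, set("ushers") | set("".join(patterns)))
print(match("ushers", stt, out))  # [(1, 'she'), (2, 'he'), (2, 'hers')]
```

    On a GPU, each thread can scan its own chunk of the input against the same read-only table, with a small overlap between chunks to catch patterns that cross chunk boundaries.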

  6. NeatMap--non-clustering heat map alternatives in R.

    Science.gov (United States)

    Rajaram, Satwik; Oono, Yoshi

    2010-01-22

    The clustered heat map is the most popular means of visualizing genomic data. It compactly displays a large amount of data in an intuitive format that facilitates the detection of hidden structures and relations in the data. However, it is hampered by its use of cluster analysis which does not always respect the intrinsic relations in the data, often requiring non-standardized reordering of rows/columns to be performed post-clustering. This sometimes leads to uninformative and/or misleading conclusions. Often it is more informative to use dimension-reduction algorithms (such as Principal Component Analysis and Multi-Dimensional Scaling) which respect the topology inherent in the data. Yet, despite their proven utility in the analysis of biological data, they are not as widely used. This is at least partially due to the lack of user-friendly visualization methods with the visceral impact of the heat map. NeatMap is an R package designed to meet this need. NeatMap offers a variety of novel plots (in 2 and 3 dimensions) to be used in conjunction with these dimension-reduction techniques. Like the heat map, but unlike traditional displays of such results, it allows the entire dataset to be displayed while visualizing relations between elements. It also allows superimposition of cluster analysis results for mutual validation. NeatMap is shown to be more informative than the traditional heat map with the help of two well-known microarray datasets. NeatMap thus preserves many of the strengths of the clustered heat map while addressing some of its deficiencies. It is hoped that NeatMap will spur the adoption of non-clustering dimension-reduction algorithms.
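
    The dimension-reduction ordering idea can be sketched in a few lines: order heat-map rows by their first principal component rather than by dendrogram order, so the display follows the data's low-dimensional structure. (NumPy-based Python sketch of the general principle; NeatMap itself is an R package with richer 2D and 3D displays.)

```python
import numpy as np

# Order heat-map rows by their projection onto the first principal
# component instead of by hierarchical-clustering dendrogram order.
# Random data stands in for a real expression matrix.

rng = np.random.default_rng(0)
data = rng.normal(size=(20, 6))            # e.g. 20 genes x 6 conditions

centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1_scores = centered @ vt[0]              # scores on the 1st component

row_order = np.argsort(pc1_scores)         # display order for the heat map
reordered = data[row_order]

assert reordered.shape == data.shape
assert np.all(np.diff(pc1_scores[row_order]) >= 0)  # monotone along display
```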

  7. Scalability of pre-packed preparative chromatography columns with different diameters and lengths taking into account extra column effects.

    Science.gov (United States)

    Schweiger, Susanne; Jungbauer, Alois

    2018-02-16

    Small pre-packed columns are commonly used to estimate the optimum run parameters for pilot and production scale. The question arises whether results obtained with these columns are scalable, because there are substantial changes in extra column volume when going from a very small scale to a benchtop column. In this study we demonstrate the scalability of pre-packed disposable and non-disposable columns with volumes in the range of 0.2-20 ml, packed with various media, using superficial velocities in the range of 30-500 cm/h. We found that the relative contribution of extra column band broadening to total band broadening was high not only for columns with small diameters, but also for columns with a larger volume, due to their wider diameter. The extra column band broadening can be more than 50% for columns with volumes larger than 10 ml. An increase in column diameter leads to high additional extra column band broadening in the filters, frits, and adapters of the columns. We found a linear relationship between intra column band broadening and column length, which increased stepwise with increases in column diameter. This effect was also corroborated by CFD simulation. The intra column band broadening was the same for columns packed with different media. An empirical engineering equation and the data gained from the extra column effects allowed us to predict the intra, extra, and total column band broadening just from column length, diameter, and flow rate. Copyright © 2018 The Author(s). Published by Elsevier B.V. All rights reserved.
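
    Independent band-broadening contributions add as variances, so the extra-column share quoted above can be estimated from two peak-variance measurements. A minimal sketch with invented numbers (the paper's empirical engineering equation is not reproduced here):

```python
# Variance additivity: sigma^2(total) = sigma^2(extra) + sigma^2(intra).
# The extra-column contribution is typically measured by replacing the
# column with a zero-dead-volume union. All numbers below are made up.

def extra_column_fraction(sigma2_extra, sigma2_intra):
    """Share of total peak variance generated outside the column."""
    return sigma2_extra / (sigma2_extra + sigma2_intra)

sigma2_extra = 6.0  # uL^2 from filters, frits, adapters, tubing (made up)
sigma2_intra = 5.0  # uL^2 from the packed bed itself (made up)

frac = extra_column_fraction(sigma2_extra, sigma2_intra)
print(f"extra-column share: {frac:.0%}")  # over 50%, as reported for >10 ml columns
```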

  8. NONLINEAR FINITE ELEMENT ANALYSIS OF NONSEISMICALLY DETAILED INTERIOR RC BEAM-COLUMN CONNECTION UNDER REVERSED CYCLIC LOAD

    Directory of Open Access Journals (Sweden)

    Teeraphot Supaviriyakit

    2017-11-01

    This paper presents a nonlinear finite element analysis of non-seismically detailed RC beam-column connections under reversed cyclic load. A test of half-scale nonductile reinforced concrete beam-column joints was conducted. The tested specimens represented those of actual mid-rise reinforced concrete frame buildings designed according to the non-seismic provisions of the ACI building code. The test results show that the specimens representing small and medium column tributary areas failed in brittle joint shear, while the specimen representing a large column tributary area failed in ductile flexure even though no ductile reinforcement details were provided. Nonlinear finite element analysis was applied to simulate the behavior of the specimens. The analysis employs the smeared crack approach for modeling the beam, column, and joint, and the discrete crack approach for modeling the interface between the beam and the joint face. The nonlinear constitutive models of the reinforced concrete elements consist of a coupled tension-compression model, for normal forces orthogonal and parallel to the crack, and a shear transfer model, to capture the shear sliding mechanism. The FEM results compare well with the test results in terms of load-displacement relations, hysteretic loops, the cracking process, and the failure modes of the tested specimens. The finite element analysis clarifies that the joint shear failure was caused by the collapse of the principal diagonal concrete strut.

  9. Separate the inseparable one-layer mapping

    Science.gov (United States)

    Hu, Chia-Lun J.

    2000-04-01

    When the input-output mapping of a one-layered perceptron (OLP) does NOT meet the PLI condition, which is the if-and-only-if (IFF) condition under which the mapping can be realized by an OLP, then no matter what learning rule we use, an OLP simply cannot realize this mapping at all. However, because of the nature of the PLI, one can still construct a parallel-cascaded, two-layered perceptron system to realize this 'illegal' mapping. The theory and a design example of this novel design are reported in detail in this paper.
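
    The canonical example of a mapping a single-layer perceptron cannot realize is XOR, which fails linear separability (the property the PLI condition formalizes); a two-layer cascade recovers it. A brute-force sketch (illustrative only, not Hu's construction):

```python
import itertools

# XOR is not linearly separable, so no single threshold unit realizes it,
# but a small two-layer cascade does.

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def unit(w1, w2, b, x):
    """One threshold unit: fires iff w.x + b > 0."""
    return 1 if w1 * x[0] + w2 * x[1] + b > 0 else 0

# Exhaustive search over a weight grid finds no single-unit solution
# (none exists for any real weights, by non-separability).
grid = [-2, -1, -0.5, 0, 0.5, 1, 2]
single_layer_ok = any(
    all(unit(w1, w2, b, x) == y for x, y in XOR.items())
    for w1, w2, b in itertools.product(grid, repeat=3)
)
assert not single_layer_ok

def two_layer(x):
    h_or = unit(1, 1, -0.5, x)    # OR-like hidden unit
    h_and = unit(1, 1, -1.5, x)   # AND-like hidden unit
    return unit(1, -2, -0.5, (h_or, h_and))  # OR but not AND = XOR

assert all(two_layer(x) == y for x, y in XOR.items())
```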

  10. Functional Connectivity of Resting Hemodynamic Signals in Submillimeter Orientation Columns of the Visual Cortex.

    Science.gov (United States)

    Vasireddi, Anil K; Vazquez, Alberto L; Whitney, David E; Fukuda, Mitsuhiro; Kim, Seong-Gi

    2016-09-07

    Resting-state functional magnetic resonance imaging has been increasingly used for examining connectivity across brain regions. The spatial scale at which hemodynamic imaging can resolve functional connections at rest remains unknown. To examine this issue, deoxyhemoglobin-weighted intrinsic optical imaging data were acquired from the visual cortex of lightly anesthetized ferrets. The neural activity of orientation domains, which span a distance of 0.7-0.8 mm, has been shown to be correlated during evoked activity and at rest. We performed separate analyses to assess the degree to which the spatial and temporal characteristics of spontaneous hemodynamic signals depend on the known functional organization of orientation columns. As a control, artificial orientation column maps were generated. Spatially, resting hemodynamic patterns showed a higher spatial resemblance to iso-orientation maps than artificially generated maps. Temporally, a correlation analysis was used to establish whether iso-orientation domains are more correlated than orthogonal orientation domains. After accounting for a significant decrease in correlation as a function of distance, a small but significant temporal correlation between iso-orientation domains was found, which decreased with increasing difference in orientation preference. This dependence was abolished when using artificially synthesized orientation maps. Finally, the temporal correlation coefficient as a function of orientation difference at rest showed a correspondence with that calculated during visual stimulation, suggesting that the strength of resting connectivity is related to the strength of the visual stimulation response. Our results suggest that temporal coherence of hemodynamic signals measured by optical imaging of intrinsic signals exists at a submillimeter columnar scale in the resting state.

  11. Optimization and simulation of tandem column supercritical fluid chromatography separations using column back pressure as a unique parameter.

    Science.gov (United States)

    Wang, Chunlei; Tymiak, Adrienne A; Zhang, Yingru

    2014-04-15

    Tandem column supercritical fluid chromatography (SFC) has been demonstrated to be a useful technique for resolving complex mixtures by serially coupling two columns of different selectivity. The overall selectivity of a tandem column separation is the retention-time-weighted average of the selectivity of each coupled column. Currently, method development relies merely on extensive screening and is often a hit-or-miss process; no attention is paid to independently adjusting the retention and selectivity contributions from the individual columns. In this study, we show how tandem column SFC selectivity can be optimized by changing the relative dimensions (length or inner diameter) of the coupled columns. Moreover, we apply column back pressure as a unique parameter for SFC optimization. Continuous tuning of tandem column SFC selectivity is illustrated, for the first time, through column back pressure adjustments of the upstream column. In addition, we show how and why changing the coupling order of the columns can produce dramatically different separations. Using the empirical mathematical equation derived in our previous study, we also demonstrate a simulation of tandem column separations based on a single retention time measurement on each column. The simulation compares well with experimental results and correctly predicts column order and back pressure effects on the separations. Finally, considerations on instrument and column hardware requirements are discussed.
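
    The weighted-average rule stated in the abstract can be expressed directly. This is a sketch with invented retention times and selectivities; the paper's exact weighting convention may differ:

```python
# Overall tandem-column selectivity as the retention-time-weighted average
# of each coupled column's selectivity. Numbers are invented.

def tandem_selectivity(t1, alpha1, t2, alpha2):
    """Retention-time-weighted average selectivity of two coupled columns."""
    return (t1 * alpha1 + t2 * alpha2) / (t1 + t2)

# Column 1 dominates retention, so it dominates the overall selectivity;
# trimming column dimensions or raising back pressure shifts these weights.
overall = tandem_selectivity(t1=6.0, alpha1=1.30, t2=2.0, alpha2=1.05)
print(overall)  # 1.2375
```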

  12. NETRA: A parallel architecture for integrated vision systems 2: Algorithms and performance evaluation

    Science.gov (United States)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    In Part 1, the architecture of NETRA is presented. Here, a performance evaluation of NETRA using several common vision algorithms is presented. The performance of algorithms when they are mapped onto one cluster is described first. It is shown that SIMD, MIMD, and systolic algorithms can be easily mapped onto processor clusters, and almost linear speedups are possible. For some algorithms, analytical performance results are compared with implementation results, and the analysis is observed to be very accurate. A performance analysis of parallel algorithms mapped across clusters is then presented. Mappings across clusters illustrate the importance and use of shared as well as distributed memory in achieving high performance. The parameters for evaluation are derived from the characteristics of the parallel algorithms, and these parameters are used to evaluate the alternative communication strategies in NETRA. Furthermore, the effect of communication interference from other processors in the system on the execution of an algorithm is studied. Using the analysis, the performance of many algorithms with different characteristics is presented. It is observed that if communication speeds are matched with computation speeds, good speedups are possible when algorithms are mapped across clusters.

  13. Dimensional synthesis of a 3-DOF parallel manipulator with full circle rotation

    Science.gov (United States)

    Ni, Yanbing; Wu, Nan; Zhong, Xueyong; Zhang, Biao

    2015-07-01

    Parallel robots are widely used in academic and industrial fields. In spite of numerous achievements in the design and dimensional synthesis of low-mobility parallel robots, few research efforts have been directed towards asymmetric 3-DOF parallel robots whose end-effector can realize 2 translational and 1 rotational (2T1R) motions. In order to develop a manipulator capable of full-circle rotation, and thereby enlarge the workspace, a new 2T1R parallel mechanism is proposed. The modeling approach and kinematic analysis of the proposed mechanism are investigated. Using the method of vector analysis, the inverse kinematic equations are established. This is followed by a rigorous proof that the mechanism attains an annular workspace through its circular rotation and 2-dimensional translations. Taking the first-order perturbation of the kinematic equations, the error Jacobian matrix, which represents the mapping between the error sources of the geometric parameters and the end-effector position errors, is derived. With consideration of the constraint conditions on pressure angles and the feasible workspace, the dimensional synthesis is conducted with the goal of minimizing a global comprehensive performance index. The dimension parameters giving the mechanism optimal error mapping and kinematic performance are obtained through the optimization algorithm. These research achievements lay the foundation for building prototypes of this kind of parallel robot.

  14. Continuous fraction collection of gas chromatographic separations with parallel mass spectrometric detection applied to cell-based bioactivity analysis

    NARCIS (Netherlands)

    Jonker, Willem; Zwart, Nick; Stockl, Jan B.; de Koning, Sjaak; Schaap, Jaap; Lamoree, Marja H.; Somsen, Govert W.; Hamers, Timo; Kool, Jeroen

    2017-01-01

    We describe the development and evaluation of a GC-MS fractionation platform that combines high-resolution fraction collection of full chromatograms with parallel MS detection. A y-split at the column divides the effluent towards the MS detector and towards an inverted y-piece where vaporized trap

  15. Gas Chromatograph Method Optimization Trade Study for RESOLVE: 20-meter Column v. 8-meter Column

    Science.gov (United States)

    Huz, Kateryna

    2014-01-01

    RESOLVE is the payload on a Class D mission, Resource Prospector, which will prospect for water and other volatile resources at a lunar pole. The RESOLVE payload's primary scientific purpose includes determining the presence of water in the lunar regolith. In order to detect the water, a gas chromatograph (GC) will be used in conjunction with a mass spectrometer (MS). The goal of the experiment was to compare two GC column lengths and recommend which would be best for RESOLVE's purposes. Throughout the experiment, an Inficon Fusion GC and an Inficon Micro GC 3000 were used. The Fusion had a 20 m long column with a 0.25 mm internal diameter (ID). The Micro GC 3000 had an 8 m long column with a 0.32 mm ID. By varying the column temperature and column pressure while holding all other parameters constant, the ideal conditions for testing with each column length in its individual instrument configuration were determined. The criteria used for determining the optimal method parameters included (in no particular order): (1) quickest run time, (2) peak sharpness, and (3) peak separation. After testing numerous combinations of temperature and pressure, the parameters for each column length that resulted in the most optimal data given the three criteria were selected. The ideal temperature and pressure for the 20 m column were 95 C and 50 psig. At this temperature and pressure, the peaks were separated and the retention times were shorter compared with other combinations. The Inficon Micro GC 3000 operated better at lower temperature, mainly due to the shorter 8 m column. Its optimal column temperature and pressure were 70 C and 30 psig. The Inficon Micro GC 3000 8 m column had worse separation than the Inficon Fusion 20 m column, but was able to separate water within a shorter run time. Therefore, the most significant tradeoff between the two column lengths was peak separation of the sample versus run time. After performing several tests, it was concluded that better

  16. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    Science.gov (United States)

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not
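    The error-impact metric described above, the percentage difference between model versions with and without unintentional errors, flagged as "material" outside +/-5%, can be sketched as follows. The function names and input values are hypothetical, not drawn from the study's data:

```python
# Sketch of the error-impact metric used to compare parallel model versions:
# percentage difference of a projection with errors against the error-free
# version, with a +/-5% threshold defining a "material" error.
def pct_difference(with_error, error_free):
    """Percentage difference of the erroneous projection vs. the error-free one."""
    return 100.0 * (with_error - error_free) / error_free

def is_material(with_error, error_free, threshold=5.0):
    """True when the discrepancy exceeds the +/-5% materiality band."""
    return abs(pct_difference(with_error, error_free)) > threshold

print(pct_difference(1160.0, 1000.0))  # 16.0
print(is_material(1160.0, 1000.0))     # True
print(is_material(1030.0, 1000.0))     # False
```

    Concordant output across all parallel versions (differences of 0%) is what the study treats as evidence that a version is error-free.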

  17. Small Column Ion Exchange

    International Nuclear Information System (INIS)

    Huff, Thomas

    2010-01-01

    Small Column Ion Exchange (SCIX) leverages a suite of technologies developed by DOE across the complex to achieve lifecycle savings. The technologies are applicable to multiple sites, and early testing supported multiple sites; the balance of SRS SCIX testing supports SRS deployment. A formal Systems Engineering Evaluation (SEE) was performed and selected Small Column Ion Exchange columns containing Crystalline Silicotitanate (CST) in a 2-column lead/lag configuration. The SEE also considered the use of Spherical Resorcinol-Formaldehyde (sRF). Advantages of the approach at SRS include: (1) no new buildings; (2) a low volume of Cs waste in solid form compared to aqueous strip effluent; and (3) the availability of downstream processing facilities for immediate processing of spent resin.

  18. Mass transfer model liquid phase catalytic exchange column simulation applicable to any column composition profile

    Energy Technology Data Exchange (ETDEWEB)

    Busigin, A. [NITEK USA Inc., Ocala, FL (United States)

    2015-03-15

    Liquid Phase Catalytic Exchange (LPCE) is a key technology used in water detritiation systems. Rigorous simulation of LPCE is complicated when a column may have both hydrogen and deuterium present in significant concentrations in different sections of the column. This paper presents a general mass transfer model for a homogeneous packed-bed LPCE column as a set of differential equations describing composition change, together with equilibrium equations defining the mass transfer driving force within the column. The model is used to show the effect of deuterium buildup at the bottom of an LPCE column arising from a non-negligible D atom fraction in the bottom feed gas to the column. These types of calculations are important in the design of CECE (Combined Electrolysis and Catalytic Exchange) water detritiation systems.

  19. Characterization of the neutron flux in the Hohlraum of the thermal column of the TRIGA Mark III reactor of the ININ

    International Nuclear Information System (INIS)

    Delfin L, A.; Palacios, J.C.; Alonso, G.

    2006-01-01

    Knowing the magnitude of the neutron flux in a reactor's irradiation facilities is of great importance both for the operation of the reactor and for the research carried out in it. In particular, knowing the spectrum and the neutron flux with reasonable precision at the different irradiation positions of a reactor is essential for evaluating the results obtained from a given irradiation experiment. The TRIGA Mark III reactor has irradiation facilities designed for experimentation in which the reactor serves as an intense source of neutrons and gamma radiation, allowing samples or equipment to be irradiated in radiation fields of diverse components and levels. One of these irradiation facilities is the Thermal Column, where the Hohlraum is located. In this work, the neutron flux inside the 'Hohlraum' of the Thermal Column irradiation facility of the TRIGA Mark III reactor of the Nuclear Center of Mexico was characterized at 1 MW of power. The subcadmium and epicadmium neutron fluxes were determined by means of the neutron activation technique using thin gold foils. Maps of the neutron flux distribution for both energy groups at three different positions inside the 'Hohlraum' are presented; these maps were obtained by irradiating bare and cadmium-covered thin gold activation foils in 10 x 12 arrays placed parallel to the internal graphite wall of the facility at 11.5 cm, 40.5 cm and 70.5 cm, in the direction away from the reactor core. From the neutron flux values obtained, it was found that, for the same irradiation surface of the experimental arrangement, the relative differences among neutron flux values can reach 80%, and that the differences between different positions of the irradiation surfaces can vary by up to an order of magnitude. (Author)

  20. A parallel algorithm for 3D dislocation dynamics

    International Nuclear Information System (INIS)

    Wang Zhiqiang; Ghoniem, Nasr; Swaminarayan, Sriram; LeSar, Richard

    2006-01-01

    Dislocation dynamics (DD), a discrete dynamic simulation method in which dislocations are the fundamental entities, is a powerful tool for investigation of plasticity, deformation and fracture of materials at the micron length scale. However, severe computational difficulties arising from complex, long-range interactions between these curvilinear line defects limit the application of DD in the study of large-scale plastic deformation. We present here the development of a parallel algorithm for accelerated computer simulations of DD. By representing dislocations as a 3D set of dislocation particles, we show here that the problem of an interacting ensemble of dislocations can be converted to a problem of a particle ensemble, interacting with a long-range force field. A grid using binary space partitioning is constructed to keep track of node connectivity across domains. We demonstrate the computational efficiency of the parallel micro-plasticity code and discuss how O(N) methods map naturally onto the parallel data structure. Finally, we present results from applications of the parallel code to deformation in single crystal fcc metals

  1. Implementation of a Parallel Protein Structure Alignment Service on Cloud

    Directory of Open Access Journals (Sweden)

    Che-Lun Hung

    2013-01-01

    Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform.

  2. Analysis of programming properties and the row-column generation method for 1-norm support vector machines.

    Science.gov (United States)

    Zhang, Li; Zhou, WeiDa

    2013-12-01

    This paper deals with fast methods for training a 1-norm support vector machine (SVM). First, we define a specific class of linear programming with many sparse constraints, i.e., row-column sparse constraint linear programming (RCSC-LP). By nature, the 1-norm SVM is a kind of RCSC-LP. In order to construct subproblems for RCSC-LP and solve them, a family of row-column generation (RCG) methods is introduced. RCG methods belong to the category of decomposition techniques and perform row and column generation in a parallel fashion. Specifically, for the 1-norm SVM, the maximum size of the subproblems of RCG is identical to the number of support vectors (SVs). We also introduce a semi-deleting rule for RCG methods and prove the convergence of RCG methods when using this rule. Experimental results on toy data and real-world datasets illustrate that it is efficient to use RCG to train the 1-norm SVM, especially when the number of SVs is small. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Adaptive proxy map server for efficient vector spatial data rendering

    Science.gov (United States)

    Sayar, Ahmet

    2013-01-01

    The rapid transmission of vector map data over the Internet is becoming a bottleneck of spatial data delivery and visualization in web-based environments because of increasing data volumes and limited network bandwidth. In order to improve both the transmission and rendering performance of vector spatial data over the Internet, we propose a proxy map server enabling parallel vector data fetching as well as caching to improve the performance of web-based map servers in a dynamic environment. The proxy map server is placed seamlessly anywhere between the client and the final services, intercepting users' requests. It employs an efficient parallelization technique based on spatial proximity and data density in case distributed replicas exist for the same spatial data. The effectiveness of the proposed technique is demonstrated at the end of the article through an application that creates map images enriched with earthquake seismic data records.

  4. Two generalizations of column-convex polygons

    International Nuclear Information System (INIS)

    Feretic, Svjetlan; Guttmann, Anthony J

    2009-01-01

    Column-convex polygons were first counted by area several decades ago, and the result was found to be a simple, rational, generating function. In this work we generalize that result. Let a p-column polyomino be a polyomino whose columns can have 1, 2, ..., p connected components. Then column-convex polygons are equivalent to 1-convex polyominoes. The area generating function of even the simplest generalization, namely 2-column polyominoes, is unlikely to be solvable. We therefore define two classes of polyominoes which interpolate between column-convex polygons and 2-column polyominoes. We derive the area generating functions of those two classes, using extensions of existing algorithms. The growth constants of both classes are greater than the growth constant of column-convex polyominoes. Rather tight lower bounds on the growth constants complement a comprehensive asymptotic analysis.

  5. The simplified spherical harmonics (SPL) methodology with space and moment decomposition in parallel environments

    International Nuclear Information System (INIS)

    Gianluca, Longoni; Alireza, Haghighat

    2003-01-01

    In recent years, the SPL (simplified spherical harmonics) equations have received renewed interest for the simulation of nuclear systems. We have derived the SPL equations starting from the even-parity form of the SN equations. The SPL equations form a system of (L+1)/2 second-order partial differential equations that can be solved with standard iterative techniques such as the Conjugate Gradient (CG) method. We discretized the SPL equations with the finite-volume approach in 3-D Cartesian space. We developed a new 3-D general-purpose code, PenspL (Parallel Environment Neutral-particle SPL). PenspL solves both fixed-source and criticality-eigenvalue problems. In order to optimize memory management, we implemented Compressed Diagonal Storage (CDS) to store the SPL matrices. PenspL includes parallel algorithms for space- and moment-domain decomposition. The computational load is distributed over different processors using a mapping function that maps the 3-D Cartesian space and the moments onto processors. The code is written in Fortran 90, using the Message Passing Interface (MPI) libraries for the parallel implementation of the algorithm. The code has been tested on the Pcpen cluster, and the parallel performance has been assessed in terms of speed-up and parallel efficiency. (author)
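    The Compressed Diagonal Storage scheme mentioned in this abstract can be sketched generically: a banded matrix is stored as a few diagonals plus their offsets, and the matrix-vector product (the kernel inside a CG solver) loops over those diagonals. This is an illustrative sketch only; the names and data layout are assumptions, not PenspL's actual code:

```python
# Minimal sketch of Compressed Diagonal Storage (CDS): only the nonzero
# diagonals of a banded matrix are kept, each with its offset from the
# main diagonal. The matvec below is the building block of a CG iteration.
def cds_matvec(diagonals, offsets, x):
    """diagonals[k][i] holds A[i, i + offsets[k]] (zero-padded at the ends)."""
    n = len(x)
    y = [0.0] * n
    for diag, off in zip(diagonals, offsets):
        for i in range(n):
            j = i + off
            if 0 <= j < n:
                y[i] += diag[i] * x[j]
    return y

# Tridiagonal example: A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
diags = [[2.0, 2.0, 2.0], [-1.0, -1.0, 0.0], [0.0, -1.0, -1.0]]
offsets = [0, 1, -1]
print(cds_matvec(diags, offsets, [1.0, 1.0, 1.0]))  # [1.0, 0.0, 1.0]
```

    Storing only the populated diagonals is what makes the memory footprint proportional to the bandwidth rather than to the full matrix size.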

  6. GIS integration of the 1:75,000 Romanian topographic map series from the World War I

    Science.gov (United States)

    Timár, G.; Mugnier, C. J.

    2009-04-01

    During WWI, the Kingdom of Romania developed a 1:75,000 topographic map series covering not only the then-current territory of the country (the former Danubian Principalities and Dobrogea) but also Bessarabia (now the Republic of Moldova), which was under Russian rule. The map sheets were issued between 1914 and 1917. The whole map consists of two zones: Columns A-F form the western zone, while Columns G-Q belong to the eastern one. To integrate the scanned map sheets into a geographic information system (GIS), the parameters of the map projection and the geodetic datum must be defined, as well as the sheet labelling system. The sheets have no grid lines indicated; most of them have latitude and longitude lines, but some have no coordinate descriptions at all. The sheets can nevertheless be rectified using their four corners as virtual control points, together with the following grid and datum parameters.
    Eastern zone:
    • Projection type: Bonne.
    • Projection center: latitude = 46d 30m; longitude = 27d 20m 13.35s (from Greenwich).
    • Base ellipsoid: Bessel 1841.
    • Datum parameters (from local to WGS84): dX = +875 m; dY = -119 m; dZ = +313 m.
    • Sheet size: 40*40 kilometers; the projection center is the NW corner of the 779 (Column L; Row VII) sheet.
    Western zone:
    • Projection type: Bonne.
    • Projection center: latitude = 45d; longitude = 26d 6m 41.18s (from Greenwich).
    • Base ellipsoid: Bessel 1841.
    • Datum parameters (from local to WGS84): dX = +793 m; dY = +364 m; dZ = +173 m.
    • Sheet size: 0.6*0.4 grad (new degrees), except Column F, which extends further east to fill the territory up to the zone boundary.
    In Columns E and F, geographic coordinates are indicated in new degrees, with the prime meridian of Bucharest. Apart from the system of columns and rows, each sheet has its own label of three or four digits. The last two digits correspond to the column number (69 for Column A, going up to 84 for Column Q), while the first digit(s) refer directly to the row number (1-15). During the
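    The sheet-labelling rule described here (last two digits encode the column, 69 = A up to 84 = Q; leading digit(s) encode the row, 1-15) can be sketched as a small decoder. Note the assumption below: the 16 column letters between A and Q must omit one letter for the 69-84 range to fit, and omitting "J" makes the example sheet 779 decode to Column L, Row VII, consistent with the abstract:

```python
# Hedged sketch of the 1:75,000 Romanian sheet-label decoder.
# Assumption: the column sequence A..Q omits "J" (16 letters for codes 69-84),
# which reproduces the documented example 779 -> Column L, Row VII.
COLUMNS = "ABCDEFGHIKLMNOPQ"  # 69 -> A, ..., 84 -> Q

def decode_sheet_label(label):
    """Split a 3- or 4-digit label into (column letter, row number)."""
    text = str(label)
    row = int(text[:-2])          # leading digit(s): row 1-15
    col_code = int(text[-2:])     # trailing two digits: column code 69-84
    return COLUMNS[col_code - 69], row

print(decode_sheet_label(779))  # ('L', 7)
```

    A decoder like this is handy when batch-georeferencing scanned sheets, since the label alone places a sheet in the column/row grid.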

  7. Harmonic maps of the bounded symmetric domains

    International Nuclear Information System (INIS)

    Xin, Y.L.

    1994-06-01

    A shrinking property of harmonic maps into R_IV(2) is proved and used to classify complete spacelike surfaces of parallel mean curvature in R^4_2 under a reasonable condition on the Gauss image. Liouville-type theorems for harmonic maps from higher-dimensional bounded symmetric domains are also established. (author). 25 refs

  8. A Servicewide Benthic Mapping Program for National Parks

    Science.gov (United States)

    Moses, Christopher S.; Nayegandhi, Amar; Beavers, Rebecca; Brock, John

    2010-01-01

    In 2007, the National Park Service (NPS) Inventory and Monitoring Program directed the initiation of a benthic habitat mapping program in ocean and coastal parks in alignment with the NPS Ocean Park Stewardship 2007-2008 Action Plan. With 74 ocean and Great Lakes parks stretching over more than 5,000 miles of coastline across 26 States and territories, this Servicewide Benthic Mapping Program (SBMP) is essential. This program will deliver benthic habitat maps and their associated inventory reports to NPS managers in a consistent, servicewide format to support informed management and protection of 3 million acres of submerged National Park System natural and cultural resources. The NPS and the U.S. Geological Survey (USGS) convened a workshop June 3-5, 2008, in Lakewood, Colo., to discuss the goals and develop the design of the NPS SBMP with an assembly of experts (Moses and others, 2010) who identified park needs and suggested best practices for inventory and mapping of bathymetry, benthic cover, geology, geomorphology, and some water-column properties. The recommended SBMP protocols include servicewide standards (such as gap analysis, minimum accuracy, final products) as well as standards that can be adapted to fit network and park unit needs (for example, minimum mapping unit, mapping priorities). SBMP Mapping Process. The SBMP calls for a multi-step mapping process for each park, beginning with a gap assessment and data mining to determine data resources and needs. An interagency announcement of intent to acquire new data will provide opportunities to leverage partnerships. Prior to new data acquisition, all involved parties should be included in a scoping meeting held at network scale. Data collection will be followed by processing and interpretation, and finally expert review and publication. After publication, all digital materials will be archived in a common format. SBMP Classification Scheme. The SBMP will map using the Coastal and Marine Ecological

  9. In situ quantitative characterisation of the ocean water column using acoustic multibeam backscatter data

    Science.gov (United States)

    Lamarche, G.; Le Gonidec, Y.; Lucieer, V.; Lurton, X.; Greinert, J.; Dupré, S.; Nau, A.; Heffron, E.; Roche, M.; Ladroit, Y.; Urban, P.

    2017-12-01

    Detecting liquid, solid or gaseous features in the ocean is generating considerable interest in the geoscience community because of their potentially high economic value (oil & gas, mining), their significance for environmental management (oil/gas leakage, biodiversity mapping, greenhouse gas monitoring), as well as their potential cultural and traditional value (food, freshwater). Enhancing our capability to quantify and manage the natural capital present in ocean water goes hand in hand with the development of marine acoustic technology, as marine echosounders provide the most reliable and technologically advanced means of developing quantitative studies of water column backscatter data. This capability is not yet fully developed because of (i) the complexity of the physics involved in relation to the constantly changing marine environment, and (ii) the rapid technological evolution of high-resolution multibeam echosounder (MBES) water-column imaging systems. The Water Column Imaging Working Group is working on a series of multibeam echosounder (MBES) water column datasets acquired in a variety of environments, using a range of frequencies, and imaging a number of water-column features such as gas seeps, oil leaks, suspended particulate matter, vegetation and freshwater springs. Access to data from different acoustic frequencies and ocean dynamics enables us to discuss and test multifrequency approaches, the most promising means of developing a quantitative analysis of the physical properties of acoustic scatterers, providing rigorous cross-calibration of the acoustic devices. In addition, the high redundancy of multibeam data, available for some datasets, will allow us to develop data processing techniques leading to quantitative estimates of water column gas seeps. Each of the datasets has supporting ground-truthing data (underwater videos and photos, physical oceanography measurements) which provide information on the origin and

  10. Sea surface temperature mapping using a thermal infrared scanner

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R; Pandya, R; Mathur, K.M.; Charyulu, R; Rao, L.V.G.

    1 metre water column below the sea surface. A thermal infrared scanner developed by the Space Applications Centre (ISRO), Ahmedabad was operated on board R.V. Gaveshani in April/May 1984 for mapping SST over the eastern Arabian Sea. SST values...

  11. TOMS/Nimbus-7 Total Column Ozone Monthly L3 Global 1x1.25 deg Lat/Lon Grid V008

    Data.gov (United States)

    National Aeronautics and Space Administration — This data product contains TOMS/Nimbus-7 Total Column Ozone Monthly L3 Global 1x1.25 deg Lat/Lon Grid Version 8 data in ASCII format. The Total Ozone Mapping...

  12. TOMS/Nimbus-7 Total Column Ozone Daily L3 Global 1x1.25 deg Lat/Lon Grid V008

    Data.gov (United States)

    National Aeronautics and Space Administration — This data product contains TOMS/Nimbus-7 Total Column Ozone Daily L3 Global 1x1.25 deg Lat/Lon Grid Version 8 data in ASCII format. The Total Ozone Mapping...

  13. Development of spent salt treatment technology by zeolite column system. Performance evaluation of zeolite column

    International Nuclear Information System (INIS)

    Miura, Hidenori; Uozumi, Koichi

    2009-01-01

    In the electrorefining process, fission products (FPs) accumulate in the molten salt. To avoid their influence on heating control through decay heat, and to limit the FP content in the recovered fuel, FP elements must be removed from the spent salt of the electrorefining process. For the removal of the FPs from the spent salt, we are investigating the suitability of a zeolite column system. To obtain basic data on the column system, such as the flow properties and ion-exchange performance while high-temperature molten salt passes through the column, an experimental apparatus equipped with a fraction collector was developed. Using this apparatus, the following results were obtained: (1) we established the flow parameters of the column system with zeolite powder, such as flow rate control by argon pressure; (2) zeolite 4A in the column can absorb cesium, one of the FP elements in molten salt. From these results, we obtained a perspective on the applicability of the zeolite column system. (author)

  14. A new all-sky map of Galactic high-velocity clouds from the 21-cm HI4PI survey

    Science.gov (United States)

    Westmeier, Tobias

    2018-02-01

    High-velocity clouds (HVCs) are neutral or ionized gas clouds in the vicinity of the Milky Way that are characterized by high radial velocities inconsistent with participation in the regular rotation of the Galactic disc. Previous attempts to create a homogeneous all-sky H I map of HVCs have been hampered by a combination of poor angular resolution, limited surface brightness sensitivity and suboptimal sampling. Here, a new and improved H I map of Galactic HVCs based on the all-sky HI4PI survey is presented. The new map is fully sampled and provides significantly better angular resolution (16.2 versus 36 arcmin) and column density sensitivity (2.3 versus 3.7 × 1018 cm-2 at the native resolution) than the previously available LAB survey. The new HVC map resolves many of the major HVC complexes in the sky into an intricate network of narrow H I filaments and clumps that were not previously resolved by the LAB survey. The resulting sky coverage fraction of high-velocity H I emission above a column density level of 2 × 1018 cm-2 is approximately 15 per cent, which reduces to about 13 per cent when the Magellanic Clouds and other non-HVC emission are removed. The differential sky coverage fraction as a function of column density obeys a truncated power law with an exponent of -0.93 and a turnover point at about 5 × 1019 cm-2. H I column density and velocity maps of the HVC sky are made publicly available as FITS images for scientific use by the community.
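    The truncated power law quoted above (exponent -0.93, turnover near 5 × 10^19 cm^-2) can be illustrated with a simple functional sketch. The normalization constant and the exponential form of the cutoff are assumptions for illustration; the paper reports only the exponent and turnover point:

```python
import math

# Illustrative truncated power law for the differential sky coverage
# fraction vs. HI column density. Exponent -0.93 and turnover ~5e19 cm^-2
# follow the abstract; the amplitude a and exponential cutoff are assumed.
def coverage(n_hi, a=1.0, slope=-0.93, n_turn=5e19):
    return a * n_hi**slope * math.exp(-n_hi / n_turn)

# Well below the turnover, the curve behaves like a pure power law:
# doubling the column density divides the coverage by roughly 2**0.93.
ratio = coverage(2e18) / coverage(4e18)
print(round(ratio, 2))  # 1.98
```

    Above the turnover the cutoff dominates, so coverage drops much faster than the power law alone would predict.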

  15. Leveraging Parallel Data Processing Frameworks with Verified Lifting

    Directory of Open Access Journals (Sweden)

    Maaz Bin Safeer Ahmad

    2016-11-01

    Many parallel data frameworks have been proposed in recent years that let sequential programs access parallel processing. To capitalize on the benefits of such frameworks, existing code must often be rewritten in the domain-specific languages that each framework supports. This rewriting, tedious and error-prone, also requires developers to choose the framework that best optimizes performance for a given workload. This paper describes Casper, a novel compiler that automatically retargets sequential Java code for execution on Hadoop, a parallel data processing framework that implements the MapReduce paradigm. Given a sequential code fragment, Casper uses verified lifting to infer a high-level summary expressed in our program specification language, which is then compiled for execution on Hadoop. We demonstrate that Casper automatically translates Java benchmarks into Hadoop. The translated results execute on average 3.3x faster than the sequential implementations and also scale better to larger datasets.

  16. Visualisation of air–water bubbly column flow using array Ultrasonic Velocity Profiler

    Directory of Open Access Journals (Sweden)

    Munkhbat Batsaikhan

    2017-11-01

Full Text Available In the present work, an experimental study of bubbly two-phase flow in a rectangular bubble column was performed using two ultrasonic array sensors, which can measure the instantaneous velocity of gas bubbles along multiple measurement lines. After the sound pressure distribution of the sensors had been evaluated with a needle hydrophone technique, the array sensors were applied to the two-phase bubble column. To assess the accuracy of the measurement system for one- and two-dimensional velocity, a simultaneous measurement was performed with an optical measurement technique called particle image velocimetry (PIV). Experimental results showed that the discrepancy of the measurement system with array sensors is under 10% for one-dimensional velocity profile measurement compared with the PIV technique. The discrepancy was estimated to be under 20% along the mean flow direction in the case of two-dimensional vector mapping.

  17. Thermoelectric properties of Ba3Co2O6(CO3)0.7 containing one-dimensional CoO6 octahedral columns

    OpenAIRE

    Iwasaki, Kouta; Yamamoto, Teruhisa; Yamane, Hisanori; Takeda, Takashi; Arai, Shigeo; Miyazaki, Hidetoshi; Tatsumi, Kazuyoshi; Yoshino, Masahito; Ito, Tsuyoshi; Arita, Yuji; Muto, Shunsuke; Nagasaki, Takanori; Matsui, Tsuneo

    2009-01-01

The thermoelectric properties of Ba3Co2O6(CO3)0.7 have been investigated using prismatic single crystals elongated along the c axis. Ba3Co2O6(CO3)0.7 has a pseudo-one-dimensional structure similar to that of 2H perovskite-type BaCoO3 and contains CoO6 octahedral columns running parallel to the c axis. The prismatic crystals are grown by a flux method using a K2CO3–BaCl2 flux. The electrical conductivity (σ) along the columns (c axis) exhibits a metallic behavior (670–320 S cm−1 in the temperat...

  18. Performance evaluation of a rectifier column using gamma column scanning

    Directory of Open Access Journals (Sweden)

    Aquino Denis D.

    2017-12-01

Full Text Available Rectifier columns are considered a critical component in petroleum refineries and petrochemical processing installations, as they affect the overall performance of these facilities. It is necessary to monitor the operational conditions of such vessels to optimize processes and prevent anomalies that could degrade product quality and lead to large financial losses. A rectifier column was subjected to gamma scanning using a 10-mCi Co-60 source and a 2-inch-long detector moved in tandem. Several scans were performed to gather information on the operating conditions of the column under different sets of operating parameters. The scan profiles revealed unexpected decreases in radiation intensity at vapour levels between trays 2 and 3, and between trays 4 and 5. Flooding also occurred during several scans, which could be attributed to parametric settings.

  19. Okeanos Explorer (EX1402L2): Gulf of Mexico Mapping and Exploration

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Transit mapping operations will collect bathymetry, sub-bottom profiles, water column backscatter, and seafloor backscatter over the continental shelf and Claypile...

  20. Single column and two-column H-D-T distillation experiments at TSTA

    International Nuclear Information System (INIS)

    Yamanishi, T.; Yoshida, H.; Hirata, S.; Naito, T.; Naruse, Y.; Sherman, R.H.; Bartlit, J.R.; Anderson, J.L.

    1988-01-01

Cryogenic distillation experiments were performed at TSTA with the H-D-T system using a single column and a two-column cascade. In the single-column experiment, fundamental engineering data such as the liquid holdup and the HETP were measured under a variety of operational conditions. The liquid holdup in the packed section was approximately 10-15% of its superficial volume. The HETP values were from 4 to 6 cm, and increased slightly with the vapor velocity. The reflux ratio had no effect on the HETP. For the two-column experiment, the dynamic behavior of the cascade was observed. 8 refs., 7 figs., 2 tabs

  1. Stiffness Analysis and Comparison of 3-PPR Planar Parallel Manipulators with Actuation Compliance

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2012-01-01

In this paper, the stiffness of the 3-PPR planar parallel manipulator (PPM) is analyzed with consideration of nonlinear actuation compliance. The characteristics of the stiffness matrix pertaining to planar parallel manipulators are analyzed and discussed. A graphic representation of the stiffness characteristics by means of translational and rotational stiffness mapping is developed. The developed method is illustrated with an unsymmetrical 3-PPR PPM and compared with its structurally symmetrical counterpart.

  2. Adiabatic packed column supercritical fluid chromatography using a dual-zone still-air column heater.

    Science.gov (United States)

    Helmueller, Shawn C; Poe, Donald P; Kaczmarski, Krzysztof

    2018-02-02

An approach to conducting SFC separations under pseudo-adiabatic conditions utilizing a dual-zone column heater is described. The heater allows for efficient separations at low pressures above the critical temperature by imposing a temperature profile along the column wall that closely matches that for isenthalpic expansion of the fluid inside the column. As a result, the efficiency loss associated with the formation of radial temperature gradients in this difficult region can be largely avoided in packed analytical scale columns. For elution of n-octadecylbenzene at 60 °C with 5% methanol modifier and a flow rate of 3 mL/min, a 250 × 4.6-mm column packed with 5-micron Kinetex C18 particles began to lose efficiency (8% decrease in the number of theoretical plates) at outlet pressures below 142 bar in a traditional forced air oven. The corresponding outlet pressure for onset of excess efficiency loss was decreased to 121 bar when the column was operated in a commercial HPLC column heater, and to 104 bar in the new dual-zone heater operated in adiabatic mode, with corresponding increases in the retention factor for n-octadecylbenzene from 2.9 to 6.8 and 14, respectively. This approach allows for increased retention and efficient separations of otherwise weakly retained analytes. Applications are described for rapid SFC separation of an alkylbenzene mixture using a pressure ramp, and isobaric separation of a cannabinoid mixture. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Rectification of catalyst separation column at HWP, Thal (Paper No. 5.7)

    International Nuclear Information System (INIS)

    Prakash, R.; Bhaskaran, M.

    1992-01-01

Heavy Water Plant, Thal is based on the monothermal ammonia-hydrogen process. Liquid ammonia containing potassium amide catalyst is contacted with the synthesis gas, wherein deuterium from hydrogen is transferred to the liquid phase. There are two parallel streams, A and B, with a common ammonia synthesis unit. The system is provided with an ammonia cracker and an ammonia synthesis unit for providing the reflux gas and liquid for the enrichment process. Parameters such as steam valve opening, column pressure, reflux, condensate valve opening, cooling water valve position, cracking load of the unit before and after the rectification, etc., are discussed. (author). 2 tabs., 2 figs

  4. Influence of pressure on the properties of chromatographic columns. II. The column hold-up volume.

    Science.gov (United States)

    Gritti, Fabrice; Martin, Michel; Guiochon, Georges

    2005-04-08

The effect of the local pressure and of the average column pressure on the column hold-up volume was investigated between 1 and 400 bar, from both a theoretical and an experimental point of view. Calculations based upon the elasticity of the solids involved (column wall and packing material) and the compressibility of the liquid phase show that the observed increase of the column hold-up volume with increasing pressure is correlated with (in order of decreasing importance): (1) the compressibility of the mobile phase (+1 to 5%); (2) in RPLC, the compressibility of the C18-bonded layer on the surface of the silica (+0.5 to 1%); and (3) the expansion of the column tube. Measurements were made on columns packed with the pure Resolve silica (0% carbon), the derivatized Resolve-C18 (10% carbon) and the Symmetry-C18 (20% carbon) adsorbents, using water, methanol, or n-pentane as the mobile phase. These solvents have different compressibilities. However, 1% of the relative increase of the column hold-up volume that was observed when the pressure was raised is not accounted for by the compressibilities of either the solvent or the C18-bonded phase. It is due to the influence of the pressure on the retention behavior of thiourea, the compound used as tracer to measure the hold-up volume.
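The dominant contribution above, the mobile-phase compressibility, can be estimated to first order as ΔV/V ≈ κ·ΔP. The sketch below uses approximate textbook compressibility values (assumed, not taken from the paper) and reproduces the order of magnitude of the reported +1 to 5% range over the 1-400 bar window:

```python
# First-order estimate of the pressure-induced increase in column
# hold-up volume from mobile-phase compression: dV/V ~ kappa * dP.
# Compressibility values are approximate textbook figures (assumed).
KAPPA_PER_BAR = {           # isothermal compressibility, bar^-1
    "water":    4.6e-5,
    "methanol": 1.2e-4,
}

def relative_volume_change(solvent, delta_p_bar):
    """Fractional volume change of the solvent over a pressure rise."""
    return KAPPA_PER_BAR[solvent] * delta_p_bar

for solvent in KAPPA_PER_BAR:
    pct = 100 * relative_volume_change(solvent, 400)
    print(f"{solvent}: {pct:.1f}% over 400 bar")   # water: 1.8%, methanol: 4.8%
```

The more compressible the solvent, the larger the apparent hold-up volume change, consistent with the ordering discussed in the abstract.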

  5. Compact electron beam focusing column

    Science.gov (United States)

    Persaud, Arun; Leung, Ka-Ngo; Reijonen, Jani

    2001-12-01

A novel design for an electron beam focusing column has been developed at LBNL. The design is based on a low-energy-spread multicusp plasma source which is used as a cathode for electron beam production. The focusing column is 10 mm in length. The electron beam is focused by means of electrostatic fields. The column is designed for a maximum voltage of 50 kV. Simulations of the electron trajectories have been performed using the 2D simulation codes IGUN and EGUN. The electron temperature has also been incorporated into the simulations. The electron beam simulations, column design and fabrication will be discussed in this presentation.

  6. Domain decomposition parallel computing for transient two-phase flow of nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Yoon, Han Young [KAERI, Daejeon (Korea, Republic of); Choi, Hyoung Gwon [Seoul National University, Seoul (Korea, Republic of)

    2016-05-15

KAERI (Korea Atomic Energy Research Institute) has been developing a multi-dimensional two-phase flow code named CUPID for multi-physics and multi-scale thermal hydraulics analysis of Light water reactors (LWRs). The CUPID code has been validated against a set of conceptual problems and experimental data. In this work, the CUPID code has been parallelized based on the domain decomposition method with the Message passing interface (MPI) library. For domain decomposition, the CUPID code provides both manual and automatic methods with the METIS library. For effective memory management, the Compressed sparse row (CSR) format is adopted, which is one of the methods to represent a sparse asymmetric matrix. The CSR format stores only the non-zero values together with their positions (column indices and row pointers). By performing the verification for the fundamental problem set, the parallelization of CUPID has been successfully confirmed. Since the scalability of a parallel simulation is generally known to be better for fine mesh systems, three different scales of mesh system are considered: 40000 meshes for the coarse mesh system, 320000 meshes for the mid-size mesh system, and 2560000 meshes for the fine mesh system. In the given geometry, both single- and two-phase calculations were conducted. In addition, two types of preconditioners for the matrix solver were compared: diagonal and incomplete LU preconditioners. To enhance the parallel performance further, hybrid OpenMP/MPI parallel computing for the pressure solver was examined. It is revealed that the scalability of the hybrid calculation was enhanced for multi-core parallel computation.
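As a minimal illustration of the storage scheme mentioned above, the following sketch builds the three CSR arrays (values, column indices, row pointers) and uses them in a sparse matrix-vector product, the core kernel of iterative pressure solvers. This is a generic CSR sketch, not CUPID's implementation:

```python
import numpy as np

def to_csr(dense):
    """Convert a dense matrix to CSR arrays: values, column indices,
    and row pointers. Only non-zero entries are stored, which is the
    memory saving a sparse solver relies on."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def csr_matvec(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x using the CSR arrays."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```

For an n x n matrix with nnz non-zeros, CSR needs nnz values, nnz column indices, and n+1 row pointers instead of n² entries.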

  7. A Method to Analyze the Potential of Optical Remote Sensing for Benthic Habitat Mapping

    Directory of Open Access Journals (Sweden)

    Rodrigo A. Garcia

    2015-10-01

Full Text Available Quantifying the number and type of benthic classes that are able to be spectrally identified in shallow water remote sensing is important in understanding its potential for habitat mapping. Factors that impact the effectiveness of shallow water habitat mapping include water column turbidity, depth, sensor and environmental noise, spectral resolution of the sensor and spectral variability of the benthic classes. In this paper, we present a simple hierarchical clustering method coupled with a shallow water forward model to generate water-column-specific spectral libraries. This technique requires no prior decision on the number of classes to output: the resultant classes are optically separable above the spectral noise introduced by the sensor, image-based radiometric corrections, the benthos' natural spectral variability and the attenuating properties of a variable water column at depth. The modeling reveals the effect that reducing the spectral resolution has on the number and type of classes that are optically distinct. We illustrate the potential of this clustering algorithm in an analysis of the conditions, including clustering accuracy, sensor spectral resolution, water column optical properties and depth, that enabled the spectral distinction of the seagrass Amphibolis antarctica from benthic algae.
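The clustering idea, merging spectra until all remaining classes are separated by more than a noise-derived distance threshold, can be sketched with a generic single-linkage agglomeration. This is an illustrative reconstruction, not the authors' exact algorithm; the threshold stands in for the combined sensor and water-column noise:

```python
import numpy as np

def cluster_spectra(spectra, threshold):
    """Greedy single-linkage agglomerative clustering: repeatedly merge
    the two closest clusters until all inter-cluster distances exceed
    `threshold`. The number of output classes is not fixed in advance,
    mirroring the idea that classes emerge from spectral separability."""
    clusters = [[i] for i in range(len(spectra))]
    dist = lambda a, b: min(np.linalg.norm(spectra[i] - spectra[j])
                            for i in a for j in b)
    while len(clusters) > 1:
        pairs = [(dist(clusters[p], clusters[q]), p, q)
                 for p in range(len(clusters))
                 for q in range(p + 1, len(clusters))]
        d, p, q = min(pairs)
        if d > threshold:
            break            # remaining classes are separable above the noise
        clusters[p] += clusters.pop(q)
    return clusters
```

Raising the threshold (more noise, coarser spectral resolution) merges more clusters and yields fewer distinguishable classes, which is the trade-off the paper quantifies.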

  8. A Method of Generating Indoor Map Spatial Data Automatically from Architectural Plans

    Directory of Open Access Journals (Sweden)

    SUN Weixin

    2016-06-01

Full Text Available Taking architectural plans as the data source, we propose a method that can automatically generate indoor map spatial data. Firstly, referring to the spatial data demands of indoor maps, we analyzed the basic characteristics of architectural plans and introduced the concepts of wall segment, adjoining node and adjoining wall segment, based on which the basic flow of automatic indoor map spatial data generation was established. Then, according to the adjoining relation between wall lines at their intersection with a column, we constructed a repair method for wall connectivity in relation to the column. Utilizing gradual expansion and graphic reasoning to judge the local wall symbol feature type at both sides of a door or window, and by updating the enclosing rectangle of the door or window, we developed a repair method for wall connectivity in relation to the door or window, together with a method to transform doors and windows into indoor map point features. Finally, on the basis of the geometric relation between adjoining wall segment median lines, a wall center-line extraction algorithm is presented. Taking one exhibition hall's architectural plan as an example, we performed an experiment; the results show that the proposed methods handle various complex situations well and realize automatic extraction of indoor map spatial data effectively.

  9. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Science.gov (United States)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that minimizes a cost function by means of a Gauss-Seidel-type iteration. Our algorithm also optimizes the original cost function, but unlike the original work, it is a parallel Jacobi-class algorithm with alternated minimizations. This strategy is known as the chessboard type: red pixels are mutually independent and can be updated in parallel within one iteration, and black pixels can likewise be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different multicore architectures such as CPU-multicore, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, we obtain superior performance of our parallel algorithm when compared with the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
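The chessboard (red-black) scheme described above can be sketched on a generic Laplace-type smoothing problem: pixels of one colour depend only on neighbours of the other colour, so each colour can be updated simultaneously. The cost function here is a stand-in, not the accumulation-of-residual-maps functional:

```python
import numpy as np

def red_black_smooth(phi, iterations=200):
    """Chessboard (red-black) relaxation for a Laplace-type update.
    All 'red' pixels (i+j even) depend only on 'black' neighbours and
    vice versa, so each colour can be updated fully in parallel; here
    that parallelism is expressed with vectorized boolean masks."""
    phi = phi.copy()
    i, j = np.indices(phi.shape)
    red, black = (i + j) % 2 == 0, (i + j) % 2 == 1
    for _ in range(iterations):
        for mask in (red, black):
            # 4-neighbour average; boundary cells are held fixed below.
            avg = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                   np.roll(phi, 1, 1) + np.roll(phi, -1, 1)) / 4.0
            interior = np.zeros_like(mask)
            interior[1:-1, 1:-1] = mask[1:-1, 1:-1]
            phi[interior] = avg[interior]
    return phi
```

On a GPU or multicore CPU, each masked update maps to one fully parallel kernel launch, which is exactly the structure the paper exploits.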

  10. Admittance Scanning for Whole Column Detection.

    Science.gov (United States)

    Stamos, Brian N; Dasgupta, Purnendu K; Ohira, Shin-Ichi

    2017-07-05

Whole column detection (WCD) is as old as chromatography itself. WCD requires an ability to interrogate column contents from the outside. Other than the obvious case of optical detection through a transparent column, admittance (often termed contactless conductance) measurements can also sense changes in the column contents (especially ionic content) from the outside without galvanic contact with the solution. We propose here electromechanically scanned admittance imaging and apply this to open tubular (OT) chromatography. The detector scans across the column; the length resolution depends on the scanning velocity and the data acquisition frequency, ultimately limited by the physical step resolution (40 μm in the present setup). Precision equal to this step resolution was observed for locating an interface between two immiscible liquids inside a 21 μm capillary. Mechanically, the maximum scanning speed was 100 mm/s, but at 1 kHz sampling rate and a time constant of 25 ms, the highest practical scan speed (no peak distortion) was 28 mm/s. At scanning speeds of 0, 4, and 28 mm/s, the S/N for 180 pL (zone length of 1.9 mm in an 11 μm i.d. column) of 500 μM KCl injected into water was 6450, 3850, and 1500, respectively. To facilitate constant and reproducible contact with the column regardless of minor variations in outer diameter, a double quadrupole electrode system was developed. Columns of significant length (>1 m) can be readily scanned. We demonstrate its applicability with both OT and commercial packed columns and explore unique applications such as probing uniformity of retention along a column and increasing S/N by stopped-flow repeat scans.
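The length-resolution argument above is simple arithmetic: the distance travelled between consecutive samples is the scan speed divided by the acquisition rate. A quick check against the figures reported in the abstract (28 mm/s at 1 kHz) gives a per-sample spacing finer than the 40 μm physical step resolution:

```python
def sample_spacing_um(scan_speed_mm_s, sampling_rate_hz):
    """Column length covered between consecutive samples, in micrometres."""
    return scan_speed_mm_s * 1000.0 / sampling_rate_hz

# Figures from the abstract: 28 mm/s at 1 kHz sampling -> 28 um per
# sample, i.e. finer than the 40 um physical step resolution.
print(sample_spacing_um(28, 1000))   # -> 28.0
```

So at the highest practical scan speed, the spatial sampling is still limited by the positioner's 40 μm step rather than by the data rate.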

  11. The effects of carbide column to swelling potential and Atterberg limit on expansive soil with column to soil drainage

    Science.gov (United States)

    Muamar Rifa'i, Alfian; Setiawan, Bambang; Djarwanti, Noegroho

    2017-12-01

Expansive soil is soil that has a potential for swelling and shrinking due to changes in water content. Such behavior can exert enough force on an overlying building to cause damage. Columns filled with additives such as calcium carbide are used to reduce the negative impact of expansive soil behavior. This study aims to determine the effect of carbide columns on expansive soil. Observations were made on swelling and on the spreading of carbide in the soil. Seven carbide columns with 5 cm diameter and 20 cm height were installed into the soil with an inter-column spacing of 8.75 cm. Wetting was done through a pipe at the center of each carbide column for 20 days. Observations were conducted on expansive soil without carbide columns and on expansive soil with carbide columns. The results showed that the addition of carbide columns could reduce the percentage of swelling by 4.42%. Wetting through the center of the carbide column can help spread the carbide into the soil. The use of carbide columns can also decrease the rate of soil expansivity. After the addition of the carbide columns, the plasticity index decreased from 71.76% to 4.3% and the shrinkage index decreased from 95.72% to 9.2%.

  12. Thermal process of an air column

    International Nuclear Information System (INIS)

    Lee, F.T.

    1994-01-01

The thermal process of a hot air column is discussed based on the laws of thermodynamics. The kinetic motion of the air mass in the column can be used for power generation. Alternatively, the column can also function as an exhaust/cooler

  13. A new scheduling algorithm for parallel sparse LU factorization with static pivoting

    Energy Technology Data Exchange (ETDEWEB)

    Grigori, Laura; Li, Xiaoye S.

    2002-08-20

In this paper we present a static scheduling algorithm for parallel sparse LU factorization with static pivoting. The algorithm is divided into mapping and scheduling phases, using the symmetric pruned graphs of L' and U to represent dependencies. The scheduling algorithm is designed for driving the parallel execution of the factorization on a distributed-memory architecture. Experimental results and comparisons with SuperLU_DIST are reported after applying this algorithm on real world application matrices on an IBM SP RS/6000 distributed memory machine.
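A common way to organize such a two-phase map/schedule, shown here generically rather than as the paper's actual algorithm, is level scheduling: tasks are grouped into levels of a dependency DAG so that every level contains only mutually independent tasks, which can then be mapped to processors:

```python
from collections import defaultdict, deque

def level_schedule(deps):
    """Group tasks into levels so that every task appears after all of
    its prerequisites; tasks within one level are mutually independent
    and may be factorized in parallel. `deps` maps every task to the
    set of its prerequisite tasks (a DAG, e.g. derived from the pruned
    graphs of L and U); every task must appear as a key."""
    indegree = {t: len(d) for t, d in deps.items()}
    children = defaultdict(list)
    for t, d in deps.items():
        for p in d:
            children[p].append(t)
    frontier = deque(t for t, n in indegree.items() if n == 0)
    levels = []
    while frontier:
        levels.append(sorted(frontier))   # one parallel step
        nxt = deque()
        for t in frontier:
            for c in children[t]:
                indegree[c] -= 1
                if indegree[c] == 0:
                    nxt.append(c)
        frontier = nxt
    return levels
```

Each level is a set of supernodes or panels that no dependency orders among themselves, so a scheduler is free to assign them to different processors.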

  14. Solvent extraction columns

    International Nuclear Information System (INIS)

    Middleton, P.; Smith, J.R.

    1979-01-01

    In pulsed columns for use in solvent extraction processes, e.g. the reprocessing of nuclear fuel, the horizontal perforated plates inside the column are separated by interplate spacers manufactured from metallic neutron absorbing material. The spacer may be in the form of a spiral or concentric circles separated by radial limbs, or may be of egg-box construction. Suitable neutron absorbing materials include stainless steel containing boron or gadolinium, hafnium metal or alloys of hafnium. (UK)

  15. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    Science.gov (United States)

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

To improve the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it has become a trend to adopt an automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by an evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt cloud computing as a promising solution; the most popular approach is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation.
By coupling the parallel model population-based optimization method and the parallel computational framework, high
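The parallel pattern described, farming out the expensive fitness evaluations of a population-based optimizer, can be sketched with Python's standard thread pool in place of Hadoop. The toy fitness function and all parameter names here are illustrative, not the GA-PSO hybrid of the paper:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def fitness(candidate):
    """Toy stand-in for the expensive network-simulation fitness call."""
    return -sum((g - 0.5) ** 2 for g in candidate)

def evolve(pop_size=20, genes=5, generations=30, workers=4, seed=1):
    """Minimal parallel evolutionary loop: fitness evaluation, the
    dominant cost in gene-network inference, is farmed out via map(),
    the same split a MapReduce implementation exploits."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genes)] for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            scores = list(pool.map(fitness, pop))      # parallel "map" phase
            ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[:pop_size // 2]           # selection ("reduce")
            pop = parents + [[g + rng.gauss(0, 0.05) for g in p]
                             for p in parents]         # mutation
    return max(pop, key=fitness)
```

Because the parents are carried over unchanged (elitism), the best fitness is non-decreasing across generations; in a MapReduce deployment the `pool.map` line becomes the mapper and the selection step the reducer.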

  16. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    Science.gov (United States)

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
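The MEE idea can be illustrated with its usual quadratic-entropy surrogate: a Parzen (Gaussian-kernel) estimate of the "information potential" of the errors, which is higher when the errors are concentrated. This is a generic sketch of the criterion, not the paper's filter or hardware implementation; the kernel width is an assumed parameter:

```python
import numpy as np

def information_potential(errors, sigma=0.5):
    """Parzen estimate of the integral of the squared error density,
    used as a surrogate for (negative) quadratic Renyi entropy in MEE
    training. Maximizing it concentrates the errors without assuming
    they are Gaussian. The O(n^2) pairwise kernel evaluation is the
    computational cost that motivates a parallel implementation."""
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]            # all pairwise differences
    kernel = np.exp(-diff ** 2 / (4 * sigma ** 2))
    return kernel.mean()                      # higher = lower error entropy

# A tight error distribution yields a higher information potential
# (lower entropy) than a dispersed one.
```

The pairwise kernel matrix is exactly the independent, regular computation that maps naturally onto the FPGA blocks discussed above.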

  17. PULSE COLUMN

    Science.gov (United States)

    Grimmett, E.S.

    1964-01-01

This patent covers a continuous countercurrent liquid-solids contactor column having a number of contactor stages, each comprising a perforated plate, a layer of balls, and a downcomer tube; a liquid-pulsing piston; and a solids discharger formed of a conical section at the bottom of the column and a tubular extension on the lowest downcomer terminating in the conical section. Between the conical section and the downcomer extension a small annular opening is formed, through which fall solids coming through the perforated plate of the lowest contactor stage. This annular opening is small enough that the pressure drop across it is greater than the pressure drop upward through the lowest contactor stage. (AEC)

  18. Concurrent and Accurate Short Read Mapping on Multicore Processors.

    Science.gov (United States)

    Martínez, Héctor; Tárraga, Joaquín; Medina, Ignacio; Barrachina, Sergio; Castillo, Maribel; Dopazo, Joaquín; Quintana-Ortí, Enrique S

    2015-01-01

We introduce a parallel aligner with a work-flow organization for fast and accurate mapping of RNA sequences on servers equipped with multicore processors. Our software, HPG Aligner SA (an open-source application available at http://www.opencb.org), exploits a suffix array to rapidly map a large fraction of the RNA fragments (reads), and leverages the accuracy of the Smith-Waterman algorithm to deal with conflictive reads. The aligner is enhanced with a careful strategy to detect splice junctions based on an adaptive division of RNA reads into small segments (or seeds), which are then mapped onto a number of candidate alignment locations, providing crucial information for the successful alignment of the complete reads. The experimental results on a platform with Intel multicore technology report the parallel performance of HPG Aligner SA on RNA reads of 100-400 nucleotides, which excels in execution time/sensitivity over state-of-the-art aligners such as TopHat 2+Bowtie 2, MapSplice, and STAR.
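The first stage mentioned above, rapid exact location of read seeds via a suffix array, can be sketched as follows. This toy version materializes the suffixes for clarity and is in no way HPG Aligner SA's implementation:

```python
from bisect import bisect_left, bisect_right

def build_suffix_array(text):
    """Sorted starting positions of all suffixes of `text`.
    This O(n^2 log n) toy build is for illustration; production
    aligners use linear-time construction algorithms."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, sa, read):
    """Binary search over the suffix array for every exact occurrence
    of `read`: the fast first stage of a seed-and-extend aligner.
    Assumes '\xff' does not occur in `text`."""
    suffixes = [text[i:] for i in sa]       # materialized for clarity only
    lo = bisect_left(suffixes, read)
    hi = bisect_right(suffixes, read + "\xff")
    return sorted(sa[lo:hi])
```

Seeds that map this way anchor the read; only the conflictive remainder is handed to the expensive Smith-Waterman stage.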

  19. Performance of RC columns with partial length corrosion

    International Nuclear Information System (INIS)

    Wang Xiaohui; Liang Fayun

    2008-01-01

Experimental and analytical studies of the load capacity of reinforced concrete (RC) columns with partial-length corrosion are presented, where only a fraction of the column length was corroded. Twelve simply supported columns were eccentrically loaded. The primary variables were partial-length corrosion in the tensile or compressive zone and the corrosion level within this length. Failure of a corroded column occurs in the partial length, mainly developing from, located near, or merging with the longitudinal corrosion cracks. For an RC column with large eccentricity, the load capacity is mainly influenced by partial-length corrosion in the tensile zone, while for an RC column with small eccentricity, the load capacity greatly decreases due to partial-length corrosion in the compressive zone. The destruction of the longitudinal mechanical integrity of the column over the partial length leads to this great reduction of the load capacity of the RC column

  20. A stochastic view on column efficiency.

    Science.gov (United States)

    Gritti, Fabrice

    2018-03-09

A stochastic model of transcolumn eddy dispersion along packed beds was derived. It was based on the calculation of the mean travel time of a single analyte molecule from one radial position to another. The exchange mechanism between two radial positions was governed by the transverse dispersion of the analyte across the column. The radial velocity distribution was obtained by flow simulations in a focused-ion-beam scanning electron microscopy (FIB-SEM) based 3D reconstruction from a 2.1 mm × 50 mm column packed with 2 μm BEH-C18 particles. Accordingly, the packed bed was divided into three coaxial and uniform zones: (1) a 1.4 particle diameter wide, ordered, and loose packing at the column wall (velocity u_w), (2) an intermediate 130 μm wide, random, and dense packing (velocity u_i), and (3) the bulk packing in the center of the column (velocity u_c). First, the validity of the proposed stochastic model was tested by adjusting the predicted to the observed reduced van Deemter plots of a 2.1 mm × 50 mm column packed with 2 μm BEH-C18 fully porous particles (FPPs). An excellent agreement was found for u_i = 0.93u_c, a result fully consistent with the FIB-SEM observation (u_i = 0.95u_c). Next, the model was used to measure u_i = 0.94u_c for a 2.1 mm × 100 mm column packed with 1.6 μm Cortecs-C18 superficially porous particles (SPPs). The relative velocity bias across columns packed with SPPs is then barely smaller than that observed in columns packed with FPPs (+6% versus +7%). u_w = 1.8u_i is measured for a 75 μm × 1 m capillary column packed with 2 μm BEH-C18 particles. Despite this large wall-to-center velocity bias (+80%), the presence of the thin and ordered wall packing layer has no negative impact on the kinetic performance of capillary columns. Finally, the stochastic model of long-range eddy dispersion explains why analytical (2.1-4.6 mm i.d.) and capillary columns can all be

  1. The life of the cortical column: opening the domain of functional architecture of the cortex (1955-1981).

    Science.gov (United States)

    Haueis, Philipp

    2016-09-01

The concept of the cortical column refers to vertical cell bands with similar response properties, which were initially observed by Vernon Mountcastle's mapping of single cell recordings in the cat somatic cortex. It has subsequently guided over 50 years of neuroscientific research, in which fundamental questions about the modularity of the cortex and basic principles of sensory information processing were empirically investigated. Nevertheless, the status of the column remains controversial today, as skeptical commentators proclaim that the vertical cell bands are a functionally insignificant by-product of ontogenetic development. This paper inquires how the column came to be viewed as an elementary unit of the cortex from Mountcastle's discovery in 1955 until David Hubel and Torsten Wiesel's reception of the Nobel Prize in 1981. I first argue that Mountcastle's vertical electrode recordings served as criteria for applying the column concept to electrophysiological data. In contrast to previous authors, I claim that this move from electrophysiological data to the phenomenon of columnar responses was concept-laden, but not theory-laden. In the second part of the paper, I argue that Mountcastle's criteria provided Hubel and Wiesel with a conceptual outlook, i.e. it allowed them to anticipate columnar patterns in the cat and macaque visual cortex. I argue that in the late 1970s, this outlook only briefly took a form that one could call a 'theory' of the cerebral cortex, before new experimental techniques started to diversify column research. I end by showing how this account of early column research fits into a larger project that follows the conceptual development of the column into the present.

  2. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform.

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up the system by approximately 3.4 times when processing large-scale datasets, which demonstrates the clear advantage of our method. The proposed algorithm thus delivers both better edge detection performance and improved runtime.
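    The Otsu step that supplies the Canny dual threshold can be sketched in pure NumPy. This is a generic between-class-variance Otsu implementation with a common low = high/2 heuristic for the Canny thresholds, not the authors' Hadoop code:

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level that maximizes between-class variance (Otsu's method)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # gray-level probabilities
    omega = np.cumsum(p)                   # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))     # class-0 first moment up to level t
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b2, nan=0.0)))

# synthetic bimodal image: dark and bright halves
img = np.concatenate([np.full(500, 50, np.uint8), np.full(500, 200, np.uint8)])
high = otsu_threshold(img)   # falls between the two modes
low = high // 2              # common heuristic for the Canny low threshold
```

In an Otsu-Canny pipeline, `high` and `low` would then be passed to the hysteresis-thresholding stage of the Canny edge detector.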

  3. 29 CFR 1926.755 - Column anchorage.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 8 2010-07-01 2010-07-01 false Column anchorage. 1926.755 Section 1926.755 Labor... (CONTINUED) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Steel Erection § 1926.755 Column anchorage. (a) General requirements for erection stability. (1) All columns shall be anchored by a minimum of 4 anchor...

  4. Adsorption columns for use in radioimmunoassays

    International Nuclear Information System (INIS)

    1976-01-01

    Adsorption columns are provided which can be utilized in radioimmunoassay systems such as those involving the separation of antibody-antigen complexes from free antigens. The preparation of the columns includes the treatment of retaining substrate material to render it hydrophilic, preparation and degassing of the separation material and loading the column

  5. Seep Detection using E/V Nautilus Integrated Seafloor Mapping and Remotely Operated Vehicles on the United States West Coast

    Science.gov (United States)

    Gee, L. J.; Raineault, N.; Kane, R.; Saunders, M.; Heffron, E.; Embley, R. W.; Merle, S. G.

    2017-12-01

    Exploration Vessel (E/V) Nautilus has been mapping the seafloor off the west coast of the United States, from Washington to California, for the past three years with a Kongsberg EM302 multibeam sonar. This system simultaneously collects bathymetry, seafloor and water column backscatter data, allowing an integrated approach to mapping to more completely characterize a region, and has identified over 1,000 seafloor seeps. Hydrographic multibeam sonars like the EM302 were designed for mapping the bathymetry. It is only in the last decade that major mapping projects have included an integrated approach that utilizes the seabed and water column backscatter information in addition to the bathymetry. Nautilus mapping in the Eastern Pacific over the past three years has included a number of seep-specific expeditions, and utilized and adapted the preliminary mapping guidelines that have emerged from research. The likelihood of seep detection is affected by many factors: the environment (seabed geomorphology, surficial sediment, seep location/depth, regional oceanography and biology); the nature of the seeps themselves (size variation, varying flux, depth, and transience); the detection system (the design of hydrographic multibeam sonars limits their use for water column detection); and the platform (variations in the vessel and operations such as noise, speed, and swath overlap). Nautilus integrated seafloor mapping provided multiple indicators of seep locations, but it remains difficult to assess the probability of seep detection. Even when seeps were detected, they have not always been located during ROV dives. However, the presence of associated features (methane hydrate and bacterial mats) serves as evidence of potential seep activity and reinforces the transient nature of the seeps.
Not detecting a seep in the water column data does not necessarily indicate that there is not a seep at a given location, but with multiple passes over an area and by the use of other contextual data, an area may

  6. Performance Comparison of OpenMP, MPI, and MapReduce in Practical Problems

    Directory of Open Access Journals (Sweden)

    Sol Ji Kang

    2015-01-01

    Full Text Available With problem size and complexity increasing, several parallel and distributed programming models and frameworks have been developed to efficiently handle such problems. This paper briefly reviews the parallel computing models and describes three widely recognized parallel programming frameworks: OpenMP, MPI, and MapReduce. OpenMP is the de facto standard for parallel programming on shared memory systems. MPI is the de facto industry standard for distributed memory systems. The MapReduce framework has become the de facto standard for large scale data-intensive applications. Qualitative pros and cons of each framework are known, but quantitative performance indexes help give a clear picture of which framework to use for a given application. As benchmark problems to compare the frameworks, two problems are chosen: the all-pairs-shortest-path problem and a data join problem. This paper presents the parallel programs for these problems implemented on the three frameworks, respectively. It shows the experimental results on a cluster of computers. It also discusses which is the right tool for the job by analyzing the characteristics and performance of the paradigms.
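    As a concrete reference point for the all-pairs-shortest-path benchmark mentioned above, here is a sketch of the standard Floyd-Warshall algorithm vectorized with NumPy. It is a sequential baseline for illustration, not the paper's OpenMP, MPI, or MapReduce implementations:

```python
import numpy as np

def floyd_warshall(dist):
    """All-pairs shortest paths; dist[i, j] holds the edge weight or np.inf."""
    d = dist.copy()
    for k in range(len(d)):
        # relax every pair (i, j) through intermediate vertex k in one vectorized step
        d = np.minimum(d, d[:, k:k+1] + d[k:k+1, :])
    return d

INF = np.inf
graph = np.array([[0.0, 3.0, 10.0],
                  [INF, 0.0, 4.0],
                  [INF, INF, 0.0]])
paths = floyd_warshall(graph)   # the 0 -> 2 path improves to 7 via vertex 1
```

The outer k-loop is the part that the benchmarked frameworks parallelize (the inner relaxation is independent across (i, j) pairs).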

  7. Fine structure of Galactic foreground ISM towards high-redshift AGN - utilizing Herschel PACS and SPIRE data

    Science.gov (United States)

    Perger, K.; Pinter, S.; Frey, S.; Tóth, L. V.

    2018-05-01

    One of the most certain ways to determine the star formation rate in galaxies is based on far infrared (FIR) measurements. To decide the origin of the observed FIR emission, subtracting the Galactic foreground is a crucial step. We utilized Herschel photometric data to determine the hydrogen column densities in three galactic latitude regions, at b = 27°, 50° and -80°. We applied a pixel-by-pixel fit to the spectral energy distribution (SED) for the images acquired from parallel PACS-SPIRE observations in all three sky areas. We determined the column densities at resolutions of 45'' and 6', and compared the results with values estimated from the IRAS dust maps. Column densities at 27° and 50° galactic latitudes determined from the Herschel data are in good agreement with the literature values. However, at the highest galactic latitude we found that the column densities from the Herschel data exceed those derived from the IRAS dust map.

  8. NMFS Water Column Sonar Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Water column sonar data are an important component of fishery independent surveys, habitat studies and other research. NMFS water column sonar data are archived here.

  9. Task mapping for non-contiguous allocations.

    Energy Technology Data Exchange (ETDEWEB)

    Leung, Vitus Joseph; Bunde, David P.; Ebbers, Johnathan; Price, Nicholas W.; Swank, Matthew.; Feer, Stefan P.; Rhodes, Zachary D.

    2013-02-01

    This paper examines task mapping algorithms for non-contiguously allocated parallel jobs. Several studies have shown that task placement affects job running time for both contiguously and non-contiguously allocated jobs. Traditionally, work on task mapping either uses a very general model where the job has an arbitrary communication pattern or assumes that jobs are allocated contiguously, making them completely isolated from each other. A middle ground between these two cases is the mapping problem for non-contiguous jobs having a specific communication pattern. We propose several task mapping algorithms for jobs with a stencil communication pattern and evaluate them using experiments and simulations. Our strategies improve the running time of a MiniApp by as much as 30% over a baseline strategy. Furthermore, this improvement increases markedly with the job size, demonstrating the importance of task mapping as systems grow toward exascale.
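    The effect of task placement on a stencil job can be made concrete with a toy cost model: tasks on an 8×8 grid with a 5-point stencil, processors on a line, and communication cost measured as hop distance between neighboring tasks. The grid size, processor topology, and the two mappings below are illustrative, not the algorithms from the paper:

```python
def stencil_cost(proc_of, rows, cols):
    """Total hop distance summed over all nearest-neighbor (stencil) task pairs."""
    cost = 0
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (0, 1)):      # down and right neighbors only
                r2, c2 = r + dr, c + dc
                if r2 < rows and c2 < cols:
                    cost += abs(proc_of(r, c) - proc_of(r2, c2))
    return cost

# 8x8 task grid mapped onto 4 processors arranged in a line
blocked = stencil_cost(lambda r, c: r // 2, 8, 8)  # 2 consecutive rows per processor
striped = stencil_cost(lambda r, c: r % 4, 8, 8)   # round-robin rows across processors
```

The blocked mapping keeps stencil neighbors on the same or adjacent processors (cost 24 here), while the round-robin mapping forces every vertical edge across processors (cost 72), which is the kind of gap the paper's mapping strategies aim to close for non-contiguous allocations.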

  10. Investigating the Effect of Column Geometry on Separation Efficiency using 3D Printed Liquid Chromatographic Columns Containing Polymer Monolithic Phases.

    Science.gov (United States)

    Gupta, Vipul; Beirne, Stephen; Nesterenko, Pavel N; Paull, Brett

    2018-01-16

    The effect of column geometry on liquid chromatographic separations using 3D printed liquid chromatographic columns with in-column polymerized monoliths has been studied. Three different liquid chromatographic columns were designed and 3D printed in titanium as 2D serpentine, 3D spiral, and 3D serpentine columns of equal length and i.d. Successful in-column thermal polymerization of mechanically stable poly(BuMA-co-EDMA) monoliths was achieved within each design without any significant structural differences between phases. Van Deemter plots indicated higher efficiencies for the 3D serpentine chromatographic columns with higher aspect ratio turns at higher linear velocities and smaller analysis times as compared to their counterpart columns with lower aspect ratio turns. Computational fluid dynamic simulations of a basic monolithic structure indicated 44%, 90%, 100%, and 118% higher flow through narrow channels in the curved monolithic configuration as compared to the straight monolithic configuration at linear velocities of 1, 2.5, 5, and 10 mm s⁻¹, respectively. Isocratic RPLC separations with the 3D serpentine column resulted in an average 23% and 245% (8 solutes) increase in the number of theoretical plates as compared to the 3D spiral and 2D serpentine columns, respectively. Gradient RPLC separations with the 3D serpentine column resulted in an average 15% and 82% (8 solutes) increase in the peak capacity as compared to the 3D spiral and 2D serpentine columns, respectively. Use of the 3D serpentine column at a higher flow rate, as compared to the 3D spiral column, provided a 58% reduction in the analysis time and a 74% increase in the peak capacity for the isocratic separations of small molecules and the gradient separations of proteins, respectively.
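    The efficiency comparisons above rest on the standard half-height plate-count formula, N = 5.54·(t_R/w_1/2)². A small sketch with hypothetical retention times and peak widths (illustrative numbers, not the paper's data):

```python
def plates(t_r, w_half):
    """Theoretical plates from retention time and peak width at half height."""
    return 5.54 * (t_r / w_half) ** 2

def pct_gain(new, old):
    """Percentage improvement of `new` over `old`."""
    return 100.0 * (new - old) / old

# hypothetical solute eluting at 120 s with different peak widths per geometry
n_serpentine = plates(120.0, 1.0)   # narrower peak -> more plates
n_spiral = plates(120.0, 1.2)       # broader peak -> fewer plates
gain = pct_gain(n_serpentine, n_spiral)
```

With these assumed widths the serpentine geometry shows a 44% plate-count gain, illustrating how strongly N scales with the inverse square of peak width.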

  11. Evaluation of Packed Distillation Columns I - Atmospheric Pressure

    National Research Council Canada - National Science Library

    Reynolds, Thaine

    1951-01-01

    .... Four column-packing combinations of the glass columns and four column-packing combinations of the steel columns were investigated at atmospheric pressure using a test mixture of methylcyclohexane...

  12. Center column design of the PLT

    International Nuclear Information System (INIS)

    Citrolo, J.; Frankenberg, J.

    1975-01-01

    The center column of the PLT machine is a secondary support member for the toroidal field coils. Its purpose is to decrease the bending moment at the nose of the coils. The center column design was to have been a stainless steel casting with the toroidal field coils grouped around the casting at installation, trapping it in place. However, the castings developed cracks during fabrication and were unsuitable for use. Installation of the coils proceeded without the center column. It then became necessary to redesign a center column which would be capable of installation with the toroidal field coils in place. The final design consists of three A-286 forgings. This paper discusses the final center column design and the influence that new knowledge, obtained during the power tests, had on the new design

  13. The handedness of historiated spiral columns.

    Science.gov (United States)

    Couzin, Robert

    2017-09-01

    Trajan's Column in Rome (AD 113) was the model for a modest number of other spiral columns decorated with figural, narrative imagery from antiquity to the present day. Most of these wind upwards to the right, often with a congruent spiral staircase within. A brief introductory consideration of antique screw direction in mechanical devices and fluted columns suggests that the former may have been affected by the handedness of designers and the latter by a preference for symmetry. However, for the historiated columns that are the main focus of this article, the determining factor was likely script direction. The manner in which this operated is considered, as well as competing mechanisms that might explain exceptions. A related phenomenon is the reversal of the spiral in a non-trivial number of reproductions of the antique columns, from Roman coinage to Renaissance and baroque drawings and engravings. Finally, the consistent inattention in academic literature to the spiral direction of historiated columns and the repeated publication of erroneous earlier reproductions warrants further consideration.

  14. Column-oriented database management systems

    OpenAIRE

    Možina, David

    2013-01-01

    In the following thesis I will present column-oriented databases. Among other things, I will answer the question of why there is a need for a column-oriented database. In recent years there has been a lot of attention on column-oriented databases, even though the existence of columnar database management systems dates back to the early seventies of the last century. I will compare both systems for database management - a column-oriented database system and a row-oriented database system ...
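    The core idea behind a column store can be sketched in a few lines: the same table stored row-wise and column-wise, where an aggregate over one attribute needs to touch only that attribute's array. This is a minimal illustration of the storage layouts, not any particular DBMS:

```python
# row-oriented layout: one tuple per record
rows = [(1, "alice", 120), (2, "bob", 80), (3, "carol", 200)]

# column-oriented layout: one array per attribute
columns = {
    "id": [1, 2, 3],
    "name": ["alice", "bob", "carol"],
    "amount": [120, 80, 200],
}

# a row store must visit every tuple even to aggregate a single attribute
total_rows = sum(r[2] for r in rows)

# a column store scans one contiguous array, skipping the other attributes
total_cols = sum(columns["amount"])
```

Both layouts give the same answer; the columnar one reads less data per analytic query, which is the chief reason column stores excel at OLAP workloads.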

  15. Mapping urban geology of the city of Girona, Catalonia

    Science.gov (United States)

    Vilà, Miquel; Torrades, Pau; Pi, Roser; Monleon, Ona

    2016-04-01

    lines of the top of the pre-Quaternary basement surface. The most representative complementary maps are the quaternary map, the subsurface bedrock map and the isopach map of thickness of superficial deposits (Quaternary and anthropogenic). The map sheets also include charts and tables of relevant physico-chemical parameters of the geological materials, harmonized downhole lithological columns from selected boreholes, stratigraphic columns, and photographs and figures illustrating the geology of the mapped area and how urbanization has changed the natural environment. The development of systematic urban geological mapping projects, such as this Girona case, provides valuable resources to address targeted studies related to urban planning, geoengineering works, soil pollution and other important environmental issues that society will have to deal with in the future.

  16. Radiotracer Imaging of Sediment Columns

    Science.gov (United States)

    Moses, W. W.; O'Neil, J. P.; Boutchko, R.; Nico, P. S.; Druhan, J. L.; Vandehey, N. T.

    2010-12-01

    Nuclear medical PET and SPECT cameras routinely image radioactivity concentration of gamma ray emitting isotopes (PET - 511 keV; SPECT - 75-300 keV). We have used nuclear medical imaging technology to study contaminant transport in sediment columns. Specifically, we use Tc-99m (T1/2 = 6 h, Eγ = 140 keV) and a SPECT camera to image the bacteria mediated reduction of pertechnetate, [Tc(VII)O4]- + Fe(II) → Tc(IV)O2 + Fe(III). A 45 mL bolus of Tc-99m (32 mCi) labeled sodium pertechnetate was infused into a column (35 cm × 10 cm Ø) containing uranium-contaminated subsurface sediment from the Rifle, CO site. A flow rate of 1.25 ml/min of artificial groundwater was maintained in the column. Using a GE Millennium VG camera, we imaged the column for 12 hours, acquiring 44 frames. As the microbes in the sediment were inactive, we expected most of the iron to be Fe(III). The images were consistent with this hypothesis, and the Tc-99m pertechnetate acted like a conservative tracer. Virtually no binding of the Tc-99m was observed, and while the bolus of activity propagated fairly uniformly through the column, some inhomogeneity attributed to sediment packing was observed. We expect that after augmentation by acetate, the bacteria will metabolically reduce Fe(III) to Fe(II), leading to significant Tc-99m binding. Imaging sediment columns using nuclear medicine techniques has many attractive features: trace quantities of the radiolabeled compounds are used (micro- to nanomolar) and the half-lives of many of these tracers are short. (Figure: Tc-99m distribution in a column containing Rifle sediment at four time points.)
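    The feasibility of the 12-hour acquisition follows from simple decay arithmetic for Tc-99m (T1/2 = 6 h). A quick check of the activity remaining from the 32 mCi bolus at the end of imaging:

```python
def activity(a0, t_hours, half_life_hours):
    """Remaining activity after time t: A = A0 * 2**(-t / T_half)."""
    return a0 * 2.0 ** (-t_hours / half_life_hours)

# 32 mCi of Tc-99m after a 12 h acquisition = two half-lives
remaining = activity(32.0, 12.0, 6.0)
```

Two half-lives leave a quarter of the initial activity (8 mCi), still ample signal for SPECT imaging.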

  17. Thermally stable dexsil-400 glass capillary columns

    International Nuclear Information System (INIS)

    Maskarinec, M.P.; Olerich, G.

    1980-01-01

    The factors affecting efficiency, thermal stability, and reproducibility of Dexsil-400 glass capillary columns for gas chromatography in general, and for polycyclic aromatic hydrocarbons (PAHs) in particular, were investigated. Columns were drawn from Kimble KG-6 (soda-lime) glass or Kimox (borosilicate) glass. All silylation was carried out at 200 °C. Columns were coated according to the static method. Freshly prepared, degassed solutions of Dexsil-400 in pentane or methylene chloride were used. The thermal stability of the Dexsil-400 columns with respect to gas chromatography/mass spectrometry (GC/MS) was tested. Column-to-column variability is a function of each step in the fabrication of the columns. The degree of etching, extent of silylation, and stationary phase film thickness must be carefully controlled. The variability in two Dexsil-400 capillary columns prepared by etching, silylation with a solution of hexamethyldisilazane (HMDS), and static coating is shown, and also indicates the excellent selectivity of Dexsil-400 for the separation of alkylated aromatic compounds. The wide temperature range of Dexsil-400 and the high efficiency of the capillary columns also allow the analysis of complex mixtures with minimal prefractionation. Direct injection of a coal liquefaction product is given. Analysis by GC/MS indicated the presence of parent PAHs, alkylated PAHs, nitrogen and sulfur heterocycles, and their alkylated derivatives. 4 figures

  18. The simplified spherical harmonics (SP_L) methodology with space and moment decomposition in parallel environments

    Energy Technology Data Exchange (ETDEWEB)

    Gianluca, Longoni; Alireza, Haghighat [Florida University, Nuclear and Radiological Engineering Department, Gainesville, FL (United States)

    2003-07-01

    In recent years, the SP_L (simplified spherical harmonics) equations have received renewed interest for the simulation of nuclear systems. We have derived the SP_L equations starting from the even-parity form of the S_N equations. The SP_L equations form a system of (L+1)/2 second order partial differential equations that can be solved with standard iterative techniques such as the Conjugate Gradient (CG). We discretized the SP_L equations with the finite-volume approach in a 3-D Cartesian space. We developed a new 3-D general code, Pensp_L (Parallel Environment Neutral-particle SP_L). Pensp_L solves both fixed source and criticality eigenvalue problems. In order to optimize the memory management, we implemented a Compressed Diagonal Storage (CDS) to store the SP_L matrices. Pensp_L includes parallel algorithms for space and moment domain decomposition. The computational load is distributed on different processors, using a mapping function, which maps the 3-D Cartesian space and moments onto processors. The code is written in Fortran 90 using the Message Passing Interface (MPI) libraries for the parallel implementation of the algorithm. The code has been tested on the Pcpen cluster and the parallel performance has been assessed in terms of speed-up and parallel efficiency. (author)
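    The Compressed Diagonal Storage scheme mentioned above keeps only the nonzero diagonals of the banded matrices that arise from finite-volume discretization. Below is a sketch of a CDS matrix-vector product (the kernel inside each CG iteration), using the convention diag[i] = A[i, i+offset]; it is an illustration in Python, not the Fortran 90 Pensp_L code:

```python
import numpy as np

def cds_matvec(diagonals, offsets, x):
    """y = A @ x with A stored by diagonals: diagonals[d][i] = A[i, i + offsets[d]]."""
    n = len(x)
    y = np.zeros(n)
    for diag, off in zip(diagonals, offsets):
        for i in range(n):
            j = i + off
            if 0 <= j < n:          # skip entries that fall outside the matrix
                y[i] += diag[i] * x[j]
    return y

# tridiagonal test matrix: 2 on the main diagonal, -1 on both off-diagonals
n = 5
main = np.full(n, 2.0)
upper = np.full(n, -1.0)    # upper[i] = A[i, i+1]; last entry unused
lower = np.full(n, -1.0)    # lower[i] = A[i, i-1]; first entry unused
x = np.arange(1.0, n + 1)
y = cds_matvec([main, upper, lower], [0, 1, -1], x)

# dense reference for comparison
A = np.diag(main) + np.diag(upper[:-1], 1) + np.diag(lower[1:], -1)
```

Storing only the populated diagonals reduces memory from O(n²) to O(n) per diagonal, which is what makes large 3-D SP_L systems tractable.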

  19. InterMap3D: predicting and visualizing co-evolving protein residues

    DEFF Research Database (Denmark)

    Oliveira, Rodrigo Gouveia; Roque, francisco jose sousa simôes almeida; Wernersson, Rasmus

    2009-01-01

    InterMap3D predicts co-evolving protein residues and plots them on the 3D protein structure. Starting with a single protein sequence, InterMap3D automatically finds a set of homologous sequences, generates an alignment and fetches the most similar 3D structure from the Protein Data Bank (PDB). It can also accept a user-generated alignment. Based on the alignment, co-evolving residues are then predicted using three different methods: Row and Column Weighing of Mutual Information, Mutual Information/Entropy and Dependency. Finally, InterMap3D generates high-quality images of the protein
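    The mutual-information criterion underlying such co-evolution prediction can be sketched directly from two alignment columns. This is a plain MI calculation in bits, not InterMap3D's row/column-weighted variant:

```python
from collections import Counter
from math import log2

def mutual_information(col_a, col_b):
    """Mutual information (in bits) between two alignment columns of equal length."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)          # marginal residue counts
    pab = Counter(zip(col_a, col_b))                 # joint residue-pair counts
    return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

coupled = mutual_information("AABB", "CCDD")         # perfectly co-varying columns
independent = mutual_information("ABAB", "CCDD")     # no covariation
```

Perfectly co-varying two-symbol columns yield 1 bit of MI, while independent columns yield 0, which is the signal the prediction methods rank residue pairs by.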

  20. Accounting for surface reflectance in the derivation of vertical column densities of NO2 from airborne imaging DOAS

    Science.gov (United States)

    Meier, Andreas Carlos; Schönhardt, Anja; Richter, Andreas; Bösch, Tim; Seyler, André; Constantin, Daniel Eduard; Shaiganfar, Reza; Merlaud, Alexis; Ruhtz, Thomas; Wagner, Thomas; van Roozendael, Michel; Burrows, John. P.

    2016-04-01

    Nitrogen oxides, NOx (NOx = NO + NO2) play a key role in tropospheric chemistry. In addition to their directly harmful effects on the respiratory system of living organisms, they influence the levels of tropospheric ozone and contribute to acid rain and eutrophication of ecosystems. As they are produced in combustion processes, they can serve as an indicator for anthropogenic air pollution. In the late summers of 2014 and 2015, two extensive measurement campaigns were conducted in Romania by several European research institutes, with financial support from ESA. The AROMAT / AROMAT-2 campaigns (Airborne ROmanian Measurements of Aerosols and Trace gases) were dedicated to measurements of air quality parameters utilizing newly developed, state-of-the-art instrumentation. The experience gained will help to calibrate and validate the measurements taken by the upcoming Sentinel-5P mission scheduled for launch in 2016. The IUP Bremen contributed to these campaigns with its airborne imaging DOAS (Differential Optical Absorption Spectroscopy) instrument AirMAP (Airborne imaging DOAS instrument for Measurements of Atmospheric Pollution). AirMAP allows retrieval of spatial distributions of trace gas column densities in a stripe below the aircraft. The measurements have a high spatial resolution of approximately 30 × 80 m² (along × across track) at a typical flight altitude of 3000 m. Supported by the instrumental setup and the large swath, gapless maps of trace gas distributions above a large city, like Bucharest or Berlin, can be acquired within a time window of approximately two hours. These properties make AirMAP a valuable tool for the validation of trace gas measurements from space. DOAS retrievals yield the density of absorbers integrated along the light path of the measurement. The light path is altered by a changing surface reflectance, leading to enhanced or reduced slant column densities of NO2 depending on surface properties.
This effect must be considered in

  1. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  2. Safety barriers and lighting columns.

    NARCIS (Netherlands)

    Schreuder, D.A.

    1972-01-01

    Problems arising from the siting of lighting columns on the central reserve are reviewed, and remedial measures such as break-away lighting supports and the installation of safety fences on the central reserve on both sides of the lighting columns are examined.

  3. Temperature-assisted On-column Solute Focusing: A General Method to Reduce Pre-column Dispersion in Capillary High Performance Liquid Chromatography

    Science.gov (United States)

    Groskreutz, Stephen R.; Weber, Stephen G.

    2014-01-01

    Solvent-based on-column focusing is a powerful and well known approach for reducing the impact of pre-column dispersion in liquid chromatography. Here we describe an orthogonal temperature-based approach to focusing called temperature-assisted on-column solute focusing (TASF). TASF is founded on the same principles as the more commonly used solvent-based method, wherein transient conditions are created that lead to high solute retention at the column inlet. Combining the low thermal mass of capillary columns and the temperature dependence of solute retention, TASF is used effectively to compress injection bands at the head of the column through the transient reduction in column temperature to 5 °C for a defined 7 mm segment of a 6 cm long 150 μm I.D. column. Following the 30 second focusing time, the column temperature is increased rapidly to the separation temperature of 60 °C, releasing the focused band of analytes. We developed a model to simulate TASF separations based on solute retention enthalpies, focusing temperature, focusing time, and column parameters. This model guides the systematic study of the influence of sample injection volume on column performance. All samples have solvent compositions matching the mobile phase. Over the 45 to 1050 nL injection volume range evaluated, TASF reduces the peak width for all solutes with k' greater than or equal to 2.5, relative to controls. Peak widths resulting from injection volumes up to 1.3 times the column fluid volume with TASF are less than 5% larger than peak widths from a 45 nL injection without TASF (0.07 times the column liquid volume). The TASF approach reduced concentration detection limits by a factor of 12.5 relative to a small volume injection for low concentration samples. TASF is orthogonal to the solvent focusing method. Thus, it can be used where on-column focusing is required, but where implementation of solvent-based focusing is difficult. PMID:24973805
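    The focusing effect rests on the van 't Hoff temperature dependence of retention, ln k = -ΔH/(RT) + const, so the retention gain from cooling the inlet can be estimated directly. The transfer enthalpy below is an assumed illustrative value, not a parameter fitted in the paper:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def retention_ratio(delta_h, t_cold, t_hot):
    """k(T_cold)/k(T_hot) from the van 't Hoff relation ln k = -dH/(R T) + const."""
    return math.exp(-delta_h / R * (1.0 / t_cold - 1.0 / t_hot))

# hypothetical solute with retention enthalpy -20 kJ/mol,
# focused at 5 C (278.15 K) and separated at 60 C (333.15 K)
ratio = retention_ratio(-20e3, 278.15, 333.15)
```

For this assumed enthalpy the solute is retained roughly four times more strongly at the 5 °C focusing zone than at the 60 °C separation temperature, which is what compresses the injected band at the column inlet.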

  4. Quantifying Methane Flux from a Prominent Seafloor Crater with Water Column Imagery Filtering and Bubble Quantification Techniques

    Science.gov (United States)

    Mitchell, G. A.; Gharib, J. J.; Doolittle, D. F.

    2015-12-01

    Methane gas flux from the seafloor to atmosphere is an important variable for global carbon cycle and climate models, yet is poorly constrained. Methodologies used to estimate seafloor gas flux commonly employ a combination of acoustic and optical techniques. These techniques often use hull-mounted multibeam echosounders (MBES) to quickly ensonify large volumes of the water column for acoustic backscatter anomalies indicative of gas bubble plumes. Detection of these water column anomalies with a MBES provides information on the lateral distribution of the plumes, the midwater dimensions of the plumes, and their positions on the seafloor. Seafloor plume locations are targeted for visual investigations using a remotely operated vehicle (ROV) to determine bubble emission rates, venting behaviors, bubble sizes, and ascent velocities. Once these variables are measured in-situ, an extrapolation of gas flux is made over the survey area using the number of remotely-mapped flares. This methodology was applied to a geophysical survey conducted in 2013 over a large seafloor crater that developed in response to an oil well blowout in 1983 offshore Papua New Guinea. The site was investigated by multibeam and sidescan mapping, sub-bottom profiling, 2-D high-resolution multi-channel seismic reflection, and ROV video and coring operations. Numerous water column plumes were detected in the data suggesting vigorously active vents within and near the seafloor crater (Figure 1). This study uses dual-frequency MBES datasets (Reson 7125, 200/400 kHz) and ROV video imagery of the active hydrocarbon seeps to estimate total gas flux from the crater. Plumes of bubbles were extracted from the water column data using threshold filtering techniques. Analysis of video images of the seep emission sites within the crater provided estimates on bubble size, expulsion frequency, and ascent velocity. 
The average gas flux characteristics derived from ROV video observations are then extrapolated over the number
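    The flux extrapolation described above is, at its core, bubble-size arithmetic scaled by the number of acoustically mapped plumes. A sketch with assumed per-vent parameters (hypothetical numbers, not the survey's measurements, and ignoring bubble expansion during ascent):

```python
import math

def bubble_volume_m3(diameter_mm):
    """Volume of a spherical bubble of the given diameter, in cubic meters."""
    r = diameter_mm / 2000.0            # mm diameter -> m radius
    return 4.0 / 3.0 * math.pi * r ** 3

# assumed per-vent observations from ROV video (hypothetical values)
bubbles_per_second = 10.0
bubble_diameter_mm = 5.0
n_vents = 50                            # plumes detected in the multibeam data

per_vent = bubbles_per_second * bubble_volume_m3(bubble_diameter_mm)  # m^3/s
total_m3_per_day = n_vents * per_vent * 86400.0
```

With these assumptions the site-wide flux comes to a few cubic meters of gas per day; the real estimate would substitute measured bubble-size distributions, rates, and plume counts.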

  5. Heat Transfer Analysis for a Fixed CST Column

    International Nuclear Information System (INIS)

    Lee, S.Y.

    2004-01-01

    In support of a small column ion exchange (SCIX) process for the Savannah River Site waste processing program, a transient two-dimensional heat transfer model that includes the conduction process neglecting the convection cooling mechanism inside the crystalline silicotitanate (CST) column has been constructed and heat transfer calculations made for the present design configurations. For this situation, a no process flow condition through the column was assumed as one of the reference conditions for the simulation of a loss-of-flow accident. A series of the modeling calculations has been performed using a computational heat transfer approach. Results for the baseline model indicate that transit times to reach 130 degrees Celsius maximum temperature of the CST-salt solution column are about 96 hours when the 20-in CST column with 300 Ci/liter heat generation source and 25 degrees Celsius initial column temperature is cooled by natural convection of external air as a primary heat transfer mechanism. The modeling results for the 28-in column equipped with water jacket systems on the external wall surface of the column and water coolant pipe at the center of the CST column demonstrate that the column loaded with 300 Ci/liter heat source can be maintained non-boiling indefinitely. Sensitivity calculations for several alternate column sizes, heat loads of the packed column, engineered cooling systems, and various ambient conditions at the exterior wall of the column have been performed under the reference conditions of the CST-salt solution to assess the impact of those parameters on the peak temperatures of the packed column for a given transient time. The results indicate that a water-coolant pipe at the center of the CST column filled with salt solution is the most effective one among the potential design parameters related to the thermal energy dissipation of decay heat load. It is noted that the cooling mechanism at the wall boundary of the column has significant
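    The conduction-only transient described above can be sketched with an explicit finite-difference scheme. This toy model is a 1D slab with uniform volumetric heating and fixed wall temperature (illustrative property values, not the SCIX column's geometry or heat load); at steady state the peak matches the analytic T_wall + qL²/(8k):

```python
import numpy as np

# 1D slab with volumetric heating, both walls held at T_wall (assumed values)
n, L = 51, 0.5                 # grid nodes, slab width (m)
dx = L / (n - 1)
alpha = 1e-7                   # thermal diffusivity (m^2/s)
k = 0.6                        # thermal conductivity (W/m K)
q = 50.0                       # volumetric heat source (W/m^3)
t_wall = 25.0                  # wall temperature (C)

T = np.full(n, t_wall)
dt = 0.4 * dx * dx / alpha     # below the explicit stability limit 0.5*dx^2/alpha
for _ in range(20000):         # march far past the diffusion time scale
    T[1:-1] += dt * (alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
                     + alpha * q / k)
    T[0] = T[-1] = t_wall      # fixed-temperature boundaries

peak = T.max()                 # analytic steady peak: t_wall + q*L**2/(8*k)
```

The same explicit update generalizes to the 2D axisymmetric geometry of the actual column model; the key practical constraint is the stability limit on the time step.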

  6. NON-LINEAR ANALYSIS OF AN EXPERIMENTAL JOINT OF COLUMN AND BEAMS OF ARMED CONCRETE-STEEL COLUMN FOR FRAME

    Directory of Open Access Journals (Sweden)

    Nelson López

    2017-12-01

    Full Text Available In this research, the nonlinear behavior of a real-scale experimental joint (node is studied, consisting of three reinforced concrete elements, one column and two beams joined to a structural steel column at the upper level. In the numerical analysis the model of the union was analyzed in the inelastic range, this model was elaborated with the finite element program based on fibers, SeismoStruct to analyze as a function of time, the traction and compression efforts in the confined area and not confined area of the concrete column and in the longitudinal reinforcement steel, as well as verification of the design of the base plate that joins the two columns. The results showed that tensile stresses in the unconfined zone surpassed the concrete breaking point, with cracking occurring just below the lower edge of the beams; in the confined area the traction efforts were much lower, with cracks occurring later than in the non-confined area. The concrete column-steel column joint behaved as a rigid node, so the elastic design was consistent with the calculation methodology of base plates for steel columns.

  7. A fast image encryption algorithm based on chaotic map

    Science.gov (United States)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine ICMIC modulation map (2D-SIMM) is proposed based on a close-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of phase diagrams, the Lyapunov exponent spectrum and complexity. The analysis shows that this map has good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed. In this algorithm, the confusion and diffusion processes are combined into one stage. A chaotic shift transform (CST) is proposed to efficiently change the image pixel positions, and row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the ability to resist statistical, differential, brute-force, known-plaintext and chosen-plaintext attacks.
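
    The one-stage confusion-diffusion scheme can be illustrated with a small sketch. The classic sine map below is a stand-in for the paper's 2D-SIMM, and the permutation-plus-XOR step is a generic version of its CST scrambling and pixel substitution; all function names and key values are illustrative assumptions, not the published algorithm:

    ```python
    import numpy as np

    def sine_map_sequence(x0, n, a=0.99):
        """Generate a chaotic sequence with the classic sine map (a simple
        stand-in for the paper's 2D-SIMM map); x0 must lie in (0, 1)."""
        seq = np.empty(n)
        x = x0
        for i in range(n):
            x = a * np.sin(np.pi * x)  # sine map iteration
            seq[i] = x
        return seq

    def scramble_image(img, key=0.3):
        """Permute rows and columns with chaotic orderings and XOR the
        pixels with a chaotic keystream (confusion + diffusion in one pass)."""
        h, w = img.shape
        rows = np.argsort(sine_map_sequence(key, h))
        cols = np.argsort(sine_map_sequence(key / 2, w))
        stream = (sine_map_sequence(key / 3, h * w) * 255).astype(np.uint8)
        out = img[rows][:, cols] ^ stream.reshape(h, w)
        return out, (rows, cols, stream)

    def unscramble_image(enc, state):
        """Invert the XOR first, then undo the row/column permutations."""
        rows, cols, stream = state
        h, w = enc.shape
        dec = enc ^ stream.reshape(h, w)
        inv_rows, inv_cols = np.argsort(rows), np.argsort(cols)
        return dec[inv_rows][:, inv_cols]
    ```

    In a real cipher the permutation and keystream would be derived from a secret key through the 2D-SIMM iterations rather than from a single seed value.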

  8. Fabrication of a micrometer Ni–Cu alloy column coupled with a Cu micro-column for thermal measurement

    International Nuclear Information System (INIS)

    Lin, J C; Chang, T K; Yang, J H; Jeng, J H; Lee, D L; Jiang, S B

    2009-01-01

    Micrometer Ni–Cu alloy columns have been fabricated by the micro-anode-guided electroplating (MAGE) process in a citrate bath. The surface morphology and chemical composition of the micro-columns were determined by the copper concentration in the bath and by the electrical bias of MAGE. When fabricated in a bath of dilute copper (i.e. 4 mM) at lower voltages (e.g. 3.8 and 4.0 V), the alloy micro-columns showed uniform diameter and a smooth appearance. The alloy composition demonstrated an increase in the wt% ratio of Ni/Cu from 75/25, 80/20, 83/17 to 87/13 with increasing electrical bias from 3.8, 4.0, 4.2 to 4.4 V. However, it decreased from 75/25, 57/43 to 47/53 with increasing copper concentration from 4, 8 to 12 mM in the bath. Citrate plays a role in forming complexes with nickel and copper at similar reduction potentials, so that they reduce simultaneously to a Ni–Cu alloy. The mechanism for fabricating alloy micro-columns could be delineated on the basis of cathodic polarization of the complexes. A micro-column couple was fabricated using MAGE by constructing a pure copper micro-column on top of a Ni/Cu (47/53) alloy micro-column. This micro-thermocouple provides satisfactory measurement with good sensitivity and precision.

  9. Mapping Hurricane Rita inland storm tide

    Science.gov (United States)

    Berenbrock, Charles; Mason, Jr., Robert R.; Blanchard, Stephen F.; Simonovic, Slobodan P.

    2009-01-01

    Flood-inundation data are most useful for decision makers when presented in the context of maps of affected communities and (or) areas. But because the data are scarce and rarely cover the full extent of the flooding, interpolation and extrapolation of the information are needed. Many geographic information systems (GIS) provide various interpolation tools, but these tools often ignore the effects of the topographic and hydraulic features that influence flooding. A barrier mapping method was developed to improve maps of storm tide produced by Hurricane Rita. Maps were developed for the maximum storm tide and at 3-hour intervals from midnight (0000 hour) through noon (1200 hour) on September 24, 2005. The improved maps depict storm-tide elevations and the extent of flooding. The extent of storm-tide inundation from the improved maximum storm-tide map was compared to the extent of flood inundation from a map prepared by the Federal Emergency Management Agency (FEMA). The boundaries from these two maps generally compared quite well, especially along the Calcasieu River. Also, a cross-section profile that parallels the Louisiana coast was developed from the maximum storm-tide map and included FEMA high-water marks.

  10. Development of a fully automated open-column chemical-separation system—COLUMNSPIDER—and its application to Sr-Nd-Pb isotope analyses of igneous rock samples

    Science.gov (United States)

    Miyazaki, Takashi; Vaglarov, Bogdan Stefanov; Takei, Masakazu; Suzuki, Masahiro; Suzuki, Hiroaki; Ohsawa, Kouzou; Chang, Qing; Takahashi, Toshiro; Hirahara, Yuka; Hanyu, Takeshi; Kimura, Jun-Ichi; Tatsumi, Yoshiyuki

    A fully automated open-column resin-bed chemical-separation system, named COLUMNSPIDER, has been developed. The system consists of a programmable micropipetting robot that dispenses chemical reagents and sample solutions into an open-column resin bed for elemental separation. After the initial set up of resin columns, chemical reagents, and beakers for the separated chemical components, all separation procedures are automated. As many as ten samples can be eluted in parallel in a single automated run. Many separation procedures, such as radiogenic isotope ratio analyses for Sr and Nd, involve the use of multiple column separations with different resin columns, chemical reagents, and beakers of various volumes. COLUMNSPIDER completes these separations using multiple runs. Programmable functions, including the positioning of the micropipetter, reagent volume, and elution time, enable flexible operation. Optimized movements for solution take-up and high-efficiency column flushing allow the system to perform as precisely as when carried out manually by a skilled operator. Procedural blanks, examined for COLUMNSPIDER separations of Sr, Nd, and Pb, are low and negligible. The measured Sr, Nd, and Pb isotope ratios for JB-2 and Nd isotope ratios for JB-3 and BCR-2 rock standards all fall within the ranges reported previously in high-accuracy analyses. COLUMNSPIDER is a versatile tool for the efficient elemental separation of igneous rock samples, a process that is both labor intensive and time consuming.

  11. Vectoring of parallel synthetic jets: A parametric study

    Science.gov (United States)

    Berk, Tim; Gomit, Guillaume; Ganapathisubramani, Bharathram

    2016-11-01

    The vectoring of a pair of parallel synthetic jets can be described using five dimensionless parameters: the aspect ratio of the slots, the Strouhal number, the Reynolds number, the phase difference between the jets and the spacing between the slots. In the present study, the influence of the latter four on the vectoring behaviour of the jets is examined experimentally using particle image velocimetry. Time-averaged velocity maps are used to study the variations in vectoring behaviour for a parametric sweep of each of the four parameters independently. A topological map is constructed for the full four-dimensional parameter space. The vectoring behaviour is described both qualitatively and quantitatively. A vectoring mechanism is proposed, based on measured vortex positions. We acknowledge the financial support from the European Research Council (ERC Grant Agreement No. 277472).

  12. Benthic Habitat Mapping by Combining Lyzenga’s Optical Model and Relative Water Depth Model in Lintea Island, Southeast Sulawesi

    Science.gov (United States)

    Hafizt, M.; Manessa, M. D. M.; Adi, N. S.; Prayudha, B.

    2017-12-01

    Benthic habitat mapping using satellite data is a challenging task for practitioners and academics, as benthic objects are covered by a light-attenuating water column that obscures object discrimination. One common method to reduce this water-column effect is to use a depth-invariant index (DII) image. However, applying the correction in shallow coastal areas is difficult, as a dark object such as seagrass can have a very low pixel value, preventing its reliable identification and classification. This limitation can be addressed by applying the classification process separately to areas with different water depth levels. The water depth level can be extracted from satellite imagery using a Relative Water Depth Index (RWDI). This study proposes a new approach to improve mapping accuracy, particularly for dark benthic objects, by combining the DII of Lyzenga's water-column correction method and the RWDI of Stumpf's method. The research was conducted in Lintea Island, which has a high variation of benthic cover, using Sentinel-2A imagery. To assess the effectiveness of the proposed new approach for benthic habitat mapping, two different classification procedures were implemented. The first is the commonly applied method in benthic habitat mapping, where the DII image is used as input data for the whole coastal area regardless of depth variation. The second is the proposed new approach, which begins by separating the study area into shallow and deep waters using the RWDI image. The shallow area was then classified using the sunglint-corrected image as input data, and the deep area was classified using the DII image as input data. The classification maps of the two areas were merged into a single benthic habitat map. A confusion matrix was then applied to evaluate the mapping accuracy of the final map. The result shows that the new proposed mapping approach can be used to map all benthic objects in

  13. Texture mapping in a distributed environment

    NARCIS (Netherlands)

    Nicolae, Goga; Racovita, Zoea; Telea, Alexandru

    2003-01-01

    This paper presents a tool for texture mapping in a distributed environment. A parallelization method based on the master-slave model is described. The purpose of this work is to lower the image generation time in the complex 3D scenes synthesis process. The experimental results concerning the

  14. Accelerating Lattice QCD Multigrid on GPUs Using Fine-Grained Parallelization

    Energy Technology Data Exchange (ETDEWEB)

    Clark, M. A. [NVIDIA Corp., Santa Clara; Joó, Bálint [Jefferson Lab; Strelchenko, Alexei [Fermilab; Cheng, Michael [Boston U., Ctr. Comp. Sci.; Gambhir, Arjun [William-Mary Coll.; Brower, Richard [Boston U.

    2016-12-22

    The past decade has witnessed a dramatic acceleration of lattice quantum chromodynamics calculations in nuclear and particle physics. This has been due to both significant progress in accelerating the iterative linear solvers using multi-grid algorithms, and due to the throughput improvements brought by GPUs. Deploying hierarchical algorithms optimally on GPUs is non-trivial owing to the lack of parallelism on the coarse grids, and as such, these advances have not proved multiplicative. Using the QUDA library, we demonstrate that by exposing all sources of parallelism that the underlying stencil problem possesses, and through appropriate mapping of this parallelism to the GPU architecture, we can achieve high efficiency even for the coarsest of grids. Results are presented for the Wilson-Clover discretization, where we demonstrate up to 10x speedup over present state-of-the-art GPU-accelerated methods on Titan. Finally, we look to the future, and consider the software implications of our findings.

  15. Physics Structure Analysis of Parallel Waves Concept of Physics Teacher Candidate

    International Nuclear Information System (INIS)

    Sarwi, S; Linuwih, S; Supardi, K I

    2017-01-01

    The aim of this research was to find the parallel structure of wave physics concepts and the factors that influence the formation of parallel conceptions among physics teacher candidates. The method used was qualitative research of the cross-sectional design type. The subjects were five third-semester students in basic physics and six fifth-semester students in the wave course. Data collection techniques used think-aloud protocols and written tests. Quantitative data were analysed with a descriptive percentage technique. The data analysis technique for belief and awareness of answers uses an explanatory analysis. Results of the research include: 1) the structure of the concept can be displayed through the illustration of a map containing the theoretical core, supplements to the theory and phenomena that occur daily; 2) a trend of parallel conceptions of wave physics was identified for stationary waves, resonance of sound and the propagation of transverse electromagnetic waves; 3) the parallel conceptions were influenced by less-than-comprehensive reading of textbooks and by partial understanding of the knowledge forming the structure of the theory. (paper)

  16. Behavior of reinforced concrete columns strengthened by partial jacketing

    Directory of Open Access Journals (Sweden)

    D. B. FERREIRA

    Full Text Available This article presents the study of reinforced concrete columns strengthened using a partial jacket consisting of a 35 mm self-compacting concrete layer added to the most compressed face and tested in combined compression and uniaxial bending until rupture. Wedge bolt connectors were used to increase bond at the interface between the two concrete layers of different ages. Seven 2000 mm long columns were tested. Two columns were cast monolithically and named PO (original column) and PR (reference column). The other five columns were strengthened using a new 35 mm thick self-compacting concrete layer attached to the column face subjected to the highest compressive stresses. Column PO had a 120 mm by 250 mm rectangular cross section, and the other columns had a 155 mm by 250 mm cross section after the strengthening procedure. Results show that the ultimate resistance of the strengthened columns was more than three times that of the original column PO, indicating the effectiveness of the strengthening procedure. Detachment of the new concrete layer with concrete crushing and steel yielding occurred in the strengthened columns.

  17. Theory for the alignment of cortical feature maps during development.

    Science.gov (United States)

    Bressloff, Paul C; Oster, Andrew M

    2010-08-01

    We present a developmental model of ocular dominance column formation that takes into account the existence of an array of intrinsically specified cytochrome oxidase blobs. We assume that there is some molecular substrate for the blobs early in development, which generates a spatially periodic modulation of experience-dependent plasticity. We determine the effects of such a modulation on a competitive Hebbian mechanism for the modification of the feedforward afferents from the left and right eyes. We show how alternating left and right eye dominated columns can develop, in which the blobs are aligned with the centers of the ocular dominance columns and receive a greater density of feedforward connections, thus becoming defined extrinsically. More generally, our results suggest that the presence of periodically distributed anatomical markers early in development could provide a mechanism for the alignment of cortical feature maps.

  18. The ETLMR MapReduce-Based ETL Framework

    DEFF Research Database (Denmark)

    Xiufeng, Liu; Thomsen, Christian; Pedersen, Torben Bach

    2011-01-01

    This paper presents ETLMR, a parallel Extract--Transform--Load (ETL) programming framework based on MapReduce. It has built-in support for high-level ETL-specific constructs including star schemas, snowflake schemas, and slowly changing dimensions (SCDs). ETLMR gives both high programming...

  19. Recent advances in column switching sample preparation in bioanalysis.

    Science.gov (United States)

    Kataoka, Hiroyuki; Saito, Keita

    2012-04-01

    Column switching techniques, using two or more stationary phase columns, are useful for trace enrichment and online automated sample preparation. Target fractions from the first column are transferred online to a second column with different properties for further separation. Column switching techniques can be used to determine the analytes in a complex matrix by direct sample injection or by simple sample treatment. Online column switching sample preparation is usually performed in combination with HPLC or capillary electrophoresis. SPE or turbulent flow chromatography using a cartridge column and in-tube solid-phase microextraction using a capillary column have been developed for convenient column switching sample preparation. Furthermore, various micro-/nano-sample preparation devices using new polymer-coating materials have been developed to improve extraction efficiency. This review describes current developments and future trends in novel column switching sample preparation in bioanalysis, focusing on innovative column switching techniques using new extraction devices and materials.

  20. Intro to Google Maps and Google Earth

    Directory of Open Access Journals (Sweden)

    Jim Clifford

    2013-12-01

    Full Text Available Google My Maps and Google Earth provide an easy way to start creating digital maps. With a Google Account you can create and edit personal maps by clicking on My Places. In My Maps you can choose between several different base maps (including the standard satellite, terrain, or standard maps and add points, lines and polygons. It is also possible to import data from a spreadsheet, if you have columns with geographical information (i.e. longitudes and latitudes or place names. This automates a formerly complex task known as geocoding. Not only is this one of the easiest ways to begin plotting your historical data on a map, but it also has the power of Google’s search engine. As you read about unfamiliar places in historical documents, journal articles or books, you can search for them using Google Maps. It is then possible to mark numerous locations and explore how they relate to each other geographically. Your personal maps are saved by Google (in their cloud, meaning you can access them from any computer with an internet connection. You can keep them private or embed them in your website or blog. Finally, you can export your points, lines, and polygons as KML files and open them in Google Earth or Quantum GIS.
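
    The spreadsheet import and KML export steps can be sketched directly, since a KML point file is plain XML. The helper below (its name and column layout are assumptions, not part of Google's tooling) turns geocoded rows into a document that Google Earth or Quantum GIS can open:

    ```python
    KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <Document>
    {placemarks}
      </Document>
    </kml>"""

    def rows_to_kml(rows):
        """Turn (name, latitude, longitude) rows -- the kind of columns a
        geocoded spreadsheet provides -- into a minimal KML document."""
        placemarks = []
        for name, lat, lon in rows:
            placemarks.append(
                "    <Placemark>\n"
                f"      <name>{name}</name>\n"
                # note: KML coordinates go longitude first, then latitude
                f"      <Point><coordinates>{lon},{lat}</coordinates></Point>\n"
                "    </Placemark>"
            )
        return KML_TEMPLATE.format(placemarks="\n".join(placemarks))
    ```

    Writing the returned string to a `.kml` file makes it importable into Google Earth, where each row appears as a placemark.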

  1. Parallel algorithms for interactive manipulation of digital terrain models

    Science.gov (United States)

    Davis, E. W.; Mcallister, D. F.; Nagaraj, V.

    1988-01-01

    Interactive three-dimensional graphics applications, such as terrain data representation and manipulation, require extensive arithmetic processing. Massively parallel machines are attractive for this application since they offer high computational rates, and grid connected architectures provide a natural mapping for grid based terrain models. Presented here are algorithms for data movement on the massive parallel processor (MPP) in support of pan and zoom functions over large data grids. It is an extension of earlier work that demonstrated real-time performance of graphics functions on grids that were equal in size to the physical dimensions of the MPP. When the dimensions of a data grid exceed the processing array size, data is packed in the array memory. Windows of the total data grid are interactively selected for processing. Movement of packed data is needed to distribute items across the array for efficient parallel processing. Execution time for data movement was found to exceed that for arithmetic aspects of graphics functions. Performance figures are given for routines written in MPP Pascal.

  2. Chromatographic properties PLOT multicapillary columns.

    Science.gov (United States)

    Nikolaeva, O A; Patrushev, Y V; Sidelnikov, V N

    2017-03-10

    Multicapillary columns (MCCs) for gas chromatography make it possible to perform high-speed analysis of mixtures of gaseous and volatile substances at a relatively large amount of loaded sample. The study was performed using PLOT MCCs for gas-solid chromatography (GSC) with different stationary phases (SP) based on alumina, silica and poly-(1-trimethylsilyl-1-propyne) (PTMSP) polymer, as well as the porous polymers divinylbenzene-styrene (DVB-St), divinylbenzene-vinylimidazole (DVB-VIm) and divinylbenzene-ethylene glycol dimethacrylate (DVB-EGD). These MCCs have an efficiency of 4000-10000 theoretical plates per meter (TP/m) and, at a column length of 25-30 cm, can separate within 10-20 s multicomponent mixtures of substances belonging to different classes of chemical compounds. The sample amount not overloading the column is 0.03-1 μg and depends on the features of the porous layer. Examples of separations on some of the studied columns are considered. Copyright © 2017 Elsevier B.V. All rights reserved.
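
    The efficiency figures above are in theoretical plates per meter. Under the common half-height formula N = 5.54 (t_R / w_1/2)^2 (whether the authors used this exact formula is an assumption; the function name is illustrative), the conversion is a one-liner:

    ```python
    def plates_per_meter(t_r, w_half, column_length_m):
        """Plate count from the half-height formula N = 5.54 * (t_R / w_1/2)**2,
        normalized by column length to give theoretical plates per meter."""
        n_plates = 5.54 * (t_r / w_half) ** 2
        return n_plates / column_length_m
    ```

    For example, a peak eluting at t_R = 10 s with a half-height width of 0.43 s on a 0.30 m column comes out near the top of the quoted 4000-10000 TP/m range.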

  3. Development of immobilized membrane-based affinity columns for use in the online characterization of membrane bound proteins and for targeted affinity isolations

    International Nuclear Information System (INIS)

    Moaddel, Ruin; Wainer, Irving W.

    2006-01-01

    Membranes obtained from cell lines that express or do not express a target membrane bound protein have been immobilized on a silica-based liquid chromatographic support or on the surface of an activated glass capillary. The resulting chromatographic columns have been placed in liquid chromatographic systems and used to characterize the target proteins and to identify small molecules that bind to the target. Membranes containing ligand gated ion channels, G-protein coupled receptors and drug transporters have been prepared and characterized. If a marker ligand has been identified for the target protein, frontal or zonal displacement chromatographic techniques can be used to determine binding affinities (K_d values) and non-linear chromatography can be used to assess the association (k_on) and dissociation (k_off) rate constants and the thermodynamics of the binding process. Membrane-based affinity columns have been created using membranes from a cell line that does not express the target protein (control) and the same cell line that expresses the target protein (experimental) after genomic transfection. The resulting columns can be placed in a parallel chromatography system and the differential retention between the control and experimental columns can be used to identify small molecules and proteins that bind to the target protein. These applications will be illustrated using columns created using cellular membranes containing nicotinic acetylcholine receptors and the drug transporter P-glycoprotein.

  4. Development of immobilized membrane-based affinity columns for use in the online characterization of membrane bound proteins and for targeted affinity isolations

    Energy Technology Data Exchange (ETDEWEB)

    Moaddel, Ruin [Gerontology Research Center, National Institute on Aging, National Institutes of Health, 5600 Nathan Shock Drive, Baltimore, MD 21224-6825 (United States); Wainer, Irving W. [Gerontology Research Center, National Institute on Aging, National Institutes of Health, 5600 Nathan Shock Drive, Baltimore, MD 21224-6825 (United States)]. E-mail: Wainerir@grc.nia.nih.gov

    2006-03-30

    Membranes obtained from cell lines that express or do not express a target membrane bound protein have been immobilized on a silica-based liquid chromatographic support or on the surface of an activated glass capillary. The resulting chromatographic columns have been placed in liquid chromatographic systems and used to characterize the target proteins and to identify small molecules that bind to the target. Membranes containing ligand gated ion channels, G-protein coupled receptors and drug transporters have been prepared and characterized. If a marker ligand has been identified for the target protein, frontal or zonal displacement chromatographic techniques can be used to determine binding affinities (K_d values) and non-linear chromatography can be used to assess the association (k_on) and dissociation (k_off) rate constants and the thermodynamics of the binding process. Membrane-based affinity columns have been created using membranes from a cell line that does not express the target protein (control) and the same cell line that expresses the target protein (experimental) after genomic transfection. The resulting columns can be placed in a parallel chromatography system and the differential retention between the control and experimental columns can be used to identify small molecules and proteins that bind to the target protein. These applications will be illustrated using columns created using cellular membranes containing nicotinic acetylcholine receptors and the drug transporter P-glycoprotein.

  5. Biaxial bending of slender HSC columns and tubes filled with concrete under short- and long-term loads: I Theory

    Directory of Open Access Journals (Sweden)

    Jose A. Rodríguez-Gutiérrez

    2014-05-01

    Full Text Available An analytical method is presented that calculates both the short- and long-term response of slender columns made of high-strength concrete (HSC) and of tubes filled with concrete, with generalized end conditions, subjected to transverse loads along the span and axial load at the ends (causing single or double curvature under uniaxial or biaxial bending). The proposed method, which is an extension of a method previously developed by the authors, is capable of predicting not only the complete load-rotation and load-deflection curves (both the ascending and descending parts) but also the maximum load capacity. The columns that can be analyzed include solid and hollow cross sections (rectangular, circular, oval, C-, T-, L-, or any arbitrary shape) and columns made of circular and rectangular steel tubes filled with HSC. The fiber method is used to calculate the moment-curvature diagrams at different levels of the applied axial load (i.e., the M-P-φ curves), and the Gauss method of integration (for the sum of the contributions of the fibers parallel to the neutral axis) is used to calculate the lateral rotations and deflections along the column span. Long-term effects, such as creep and shrinkage of the concrete, are also included. However, the effects of shear deformations and torsion along the member are not included. The validity of the proposed method is presented in a companion paper, where it is compared against experimental results for over seventy column specimens reported in the technical literature by different researchers.
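
    The fiber method mentioned above can be sketched for the simplest case: a rectangular section under the plane-sections assumption, with the material law passed in as a function. The rectangular shape, the linear strain field, and all names here are illustrative assumptions; the paper handles arbitrary shapes and nonlinear concrete behavior:

    ```python
    def section_forces(curvature, eps0, width, depth, n_fibers, stress_fn):
        """Fiber method for one (axial strain, curvature) state: slice the
        rectangular section into horizontal fibers, get each fiber's strain
        from the plane-sections assumption, and sum the stress contributions
        to the axial force and the bending moment about mid-depth."""
        dz = depth / n_fibers
        axial, moment = 0.0, 0.0
        for i in range(n_fibers):
            z = -depth / 2 + (i + 0.5) * dz    # fiber centroid, from mid-depth
            eps = eps0 + curvature * z         # plane sections remain plane
            sigma = stress_fn(eps)             # material law (fiber stress)
            area = width * dz
            axial += sigma * area
            moment += sigma * area * z
        return axial, moment
    ```

    Sweeping the curvature while adjusting `eps0` so the axial force matches the applied load traces one of the M-P-φ curves the abstract refers to.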

  6. Using parallel factor analysis modeling (PARAFAC) and self-organizing maps to track senescence-induced patterns in leaf litter leachate

    Science.gov (United States)

    Wheeler, K. I.; Levia, D. F., Jr.; Hudson, J. E.

    2017-12-01

    As trees undergo autumnal processes such as resorption, senescence, and leaf abscission, the dissolved organic matter (DOM) contribution of leaf litter leachate to streams changes. However, little research has investigated how the fluorescent DOM (FDOM) changes throughout the autumn and how this differs inter- and intraspecifically. Two of the major impacts of global climate change on forested ecosystems are altered phenology and restructured forest community species and subspecies composition. We examined changes in FDOM in leachate from American beech (Fagus grandifolia Ehrh.) leaves in Maryland, Rhode Island, Vermont, and North Carolina and yellow poplar (Liriodendron tulipifera L.) leaves from Maryland throughout three different phenophases: green, senescing, and freshly abscised. Beech leaves from Maryland and Rhode Island have previously been identified as belonging to one distinct genetic cluster, and beech trees from Vermont and the study site in North Carolina to the other. FDOM in samples was characterized using excitation-emission matrices (EEMs), and a six-component parallel factor analysis (PARAFAC) model was created to identify components. Self-organizing maps (SOMs) were used to visualize variation and patterns in the PARAFAC component proportions of the leachate samples. Phenophase and species had the greatest influence on where a sample mapped on the SOM, compared to genetic cluster and geographic origin. Throughout senescence, FDOM from all the trees transitioned from more protein-like components to more humic-like ones. Percent greenness of the sampled leaves and the proportion of the tyrosine-like component 1 were found to differ significantly between the two genetic beech clusters. This suggests possible differences in photosynthesis and resorption between the two genetic clusters of beech. The use of SOMs to visualize differences in patterns of senescence between the different species and genetic

  7. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    Science.gov (United States)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
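
    A serial toy version of such a placer illustrates the two move types from the abstract (cell exchange when the target site is occupied, cell displacement when it is empty) and the Metropolis acceptance rule. The half-perimeter wirelength cost and every name below are illustrative assumptions rather than the paper's exact formulation:

    ```python
    import math
    import random

    def wirelength(pos, nets):
        """Half-perimeter wirelength: bounding-box size of each net's cells."""
        total = 0
        for net in nets:
            xs = [pos[c][0] for c in net]
            ys = [pos[c][1] for c in net]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
        return total

    def anneal_placement(cells, nets, grid, t0=5.0, cooling=0.95, iters=200, seed=1):
        """Toy simulated-annealing placer with cell exchanges and displacements."""
        rng = random.Random(seed)
        sites = [(x, y) for x in range(grid) for y in range(grid)]
        pos = dict(zip(cells, sites))         # initial placement, one site per cell
        occupied = {site: cell for cell, site in pos.items()}
        cost = wirelength(pos, nets)
        t = t0
        while t > 0.01:
            for _ in range(iters):
                c = rng.choice(cells)
                old, target = pos[c], rng.choice(sites)
                other = occupied.get(target)  # occupant of the target site, if any
                pos[c] = target               # tentatively apply the move
                if other is not None and other != c:
                    pos[other] = old          # exchange
                new_cost = wirelength(pos, nets)
                # Metropolis criterion: accept improvements, and accept uphill
                # moves with probability exp(-delta / t)
                if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
                    cost = new_cost
                    occupied[target] = c
                    if other is not None and other != c:
                        occupied[old] = other
                    elif old != target:
                        del occupied[old]
                else:                         # reject: undo the move
                    pos[c] = old
                    if other is not None and other != c:
                        pos[other] = target
            t *= cooling
        return pos, cost
    ```

    The parallel version in the paper distributes the cells over hypercube nodes and evaluates the cost function cooperatively; the acceptance rule itself is unchanged.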

  8. GRAPES: a software for parallel searching on biological graphs targeting multi-core architectures.

    Directory of Open Access Journals (Sweden)

    Rosalba Giugno

    Full Text Available Biological applications, from genomics to ecology, deal with graphs that represent the structure of interactions. Analyzing such data requires searching for subgraphs in collections of graphs. This task is computationally expensive. Even though multicore architectures, from commodity computers to more advanced symmetric multiprocessing (SMP) systems, offer scalable computing power, currently published software implementations for indexing and graph matching are fundamentally sequential. As a consequence, such implementations (i) do not fully exploit available parallel computing power and (ii) do not scale with respect to the size of graphs in the database. We present GRAPES, a software for parallel searching on databases of large biological graphs. GRAPES implements a parallel version of well-established graph searching algorithms, and introduces new strategies which naturally lead to a faster parallel searching system, especially for large graphs. GRAPES decomposes graphs into subcomponents that can be efficiently searched in parallel. We show the performance of GRAPES on representative biological datasets containing antiviral chemical compounds, DNA, RNA, proteins, protein contact maps and protein interaction networks.

  9. Circum-North Pacific tectonostratigraphic terrane map

    Science.gov (United States)

    Nokleberg, Warren J.; Parfenov, Leonid M.; Monger, James W.H.; Baranov, Boris B.; Byalobzhesky, Stanislav G.; Bundtzen, Thomas K.; Feeney, Tracey D.; Fujita, Kazuya; Gordey, Steven P.; Grantz, Arthur; Khanchuk, Alexander I.; Natal'in, Boris A.; Natapov, Lev M.; Norton, Ian O.; Patton, William W.; Plafker, George; Scholl, David W.; Sokolov, Sergei D.; Sosunov, Gleb M.; Stone, David B.; Tabor, Rowland W.; Tsukanov, Nickolai V.; Vallier, Tracy L.; Wakita, Koji

    1994-01-01

    The companion tectonostratigraphic terrane and overlap assemblage map of the Circum-North Pacific presents a modern description of the major geologic and tectonic units of the region. The map illustrates both the onshore terranes and overlap volcanic assemblages of the region, and the major offshore geologic features. The map is the first collaborative compilation of the geology of the region at a scale of 1:5,000,000 by geologists of the Russian Far East, Japan, Alaska, Canada, and the U.S.A. Pacific Northwest. The map is designed to be a source of geologic information for all scientists interested in the region, and is designed to be used for several purposes, including regional tectonic analyses, mineral resource and metallogenic analyses (Nokleberg and others, 1993, 1994a), petroleum analyses, neotectonic analyses, and analyses of seismic hazards and volcanic hazards. This text contains an introduction, tectonic definitions, acknowledgments, descriptions of postaccretion stratified rock units, descriptions and stratigraphic columns for tectonostratigraphic terranes in onshore areas, and references for the companion map (Sheets 1 to 5). This map is the result of extensive geologic mapping and associated tectonic studies in the Russian Far East, Hokkaido Island of Japan, Alaska, the Canadian Cordillera, and the U.S.A. Pacific Northwest in the last few decades. Geologic mapping suggests that most of this region can be interpreted as a collage of fault-bounded tectonostratigraphic terranes that were accreted onto continental margins around the Circum-

  10. Experiments to Distribute Map Generalization Processes

    Science.gov (United States)

    Berli, Justin; Touya, Guillaume; Lokhat, Imran; Regnauld, Nicolas

    2018-05-01

    Automatic map generalization requires computationally intensive processes that are often unable to deal with large datasets. Distributing the generalization process is the only way to make it scalable and usable in practice. But map generalization is a highly contextual process: the surroundings of a map feature need to be known in order to generalize it, which is a problem because distribution may partition the dataset and parallelize the processing of each part. This paper proposes experiments to evaluate past propositions for distributing map generalization and to identify the main remaining issues. The past propositions are first discussed, and then the experiment hypotheses and apparatus are described. The experiments confirmed that regular partitioning was the quickest strategy, but also the least effective at taking context into account. Geographical partitioning, though less effective for now, is quite promising regarding the quality of the results, as it better integrates the geographical context.
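
The regular partitioning strategy compared above can be illustrated with a toy sketch; the function name, feature representation, and cell size are hypothetical, not taken from the paper's apparatus.

```python
# Minimal sketch: regular grid partitioning of map features (points) for
# distributed generalization. Names and cell size are illustrative.

def regular_partition(features, cell_size):
    """Assign each (x, y) feature to a square grid cell; returns dict cell -> features."""
    cells = {}
    for x, y in features:
        key = (int(x // cell_size), int(y // cell_size))
        cells.setdefault(key, []).append((x, y))
    return cells

features = [(0.5, 0.5), (1.5, 0.2), (0.1, 1.9), (1.8, 1.8)]
parts = regular_partition(features, cell_size=1.0)
# Each partition can now be generalized on a separate worker; features near
# cell borders lose their geographic context, which is the issue the paper studies.
```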

  11. SharkDB: an in-memory column-oriented storage for trajectory analysis

    KAUST Repository

    Zheng, Bolong; Wang, Haozhou; Zheng, Kai; Su, Han; Liu, Kuien; Shang, Shuo

    2017-01-01

    The last decade has witnessed the prevalence of sensor and GPS technologies that produce a high volume of trajectory data representing the motion history of moving objects. However, some characteristics of trajectories, such as variable lengths and asynchronous sampling rates, make them difficult to fit into traditional database systems that are disk-based and tuple-oriented. Motivated by the success of column stores and recent developments in in-memory databases, we explore the potential of boosting the performance of trajectory data processing by designing a novel trajectory storage within main memory. In contrast to most existing trajectory indexing methods, which keep consecutive samples of the same trajectory in the same disk page, we partition the database into frames in which the positions of all moving objects at the same time instant are stored together and aligned in main memory. We found this column-wise storage to be surprisingly well suited for in-memory computing, since most frames can be stored in highly compressed form, which is pivotal for increasing memory throughput and reducing CPU-cache misses. The independence between frames also makes them natural working units when parallelizing data processing in a multi-core environment. Lastly, we run a variety of common trajectory queries on both real and synthetic datasets in order to demonstrate the advantages and study the limitations of our proposed storage.
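
The frame layout described above can be sketched roughly as follows; this is an assumed, simplified layout for illustration, not SharkDB's actual code.

```python
# Sketch of the frame idea: instead of storing each trajectory contiguously,
# store one "frame" per time instant holding every object's position then.

trajectories = {               # object id -> [(t, x, y), ...]
    "a": [(0, 0.0, 0.0), (1, 1.0, 0.5)],
    "b": [(0, 5.0, 5.0), (1, 5.5, 5.2)],
}

frames = {}                    # t -> {object id: (x, y)}
for oid, samples in trajectories.items():
    for t, x, y in samples:
        frames.setdefault(t, {})[oid] = (x, y)

# A snapshot query ("where was everything at t=1?") now touches one frame,
# and independent frames can be processed on different cores.
positions_at_1 = frames[1]
```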

  13. PRTR ion exchange vault column sampling

    International Nuclear Information System (INIS)

    Cornwell, B.C.

    1995-01-01

    This report documents ion exchange column sampling and Non-Destructive Assay (NDA) results from 1994 activities for the Plutonium Recycle Test Reactor (PRTR) ion exchange vault. The objective was to obtain sufficient information to prepare disposal documentation for the ion exchange columns found in the PRTR ion exchange vault. This activity also allowed for monitoring of the liquid level in the lower vault. The sampling campaign comprised five separate activities: (1) sampling an ion exchange column and analyzing the ion exchange media for the purpose of waste disposal; (2) gamma and neutron NDA testing of ion exchange columns located in the upper vault; (3) lower vault liquid level measurement; (4) radiological survey of the upper vault; and (5) securing the vault pending waste disposal

  14. Implementation of QR up- and downdating on a massively parallel computer

    DEFF Research Database (Denmark)

    Bendtsen, Claus; Hansen, Per Christian; Madsen, Kaj

    1995-01-01

    We describe an implementation of QR up- and downdating on a massively parallel computer (the Connection Machine CM-200) and show that the algorithm maps well onto the computer. In particular, we show how the use of corrected semi-normal equations for downdating can be efficiently implemented. We also illustrate the use of our algorithms in a new LP algorithm.
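
The corrected semi-normal equations mentioned above can be sketched serially with NumPy: solve R^T R x = A^T b using only the R factor, then apply one correction step with the residual. This is only the numerical idea under simple assumptions, not the CM-200 implementation.

```python
# Sketch of least squares via (corrected) semi-normal equations.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

R = np.linalg.qr(A, mode="r")                  # only R is needed
x = np.linalg.solve(R.T @ R, A.T @ b)          # semi-normal equations
r = b - A @ x                                  # residual
dx = np.linalg.solve(R.T @ R, A.T @ r)         # one correction step
x_corrected = x + dx

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]   # reference solution
```

The correction step restores the accuracy lost by working with R^T R instead of the orthogonal factor Q, which is what makes the semi-normal approach attractive when Q is expensive to store or update.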

  15. Laser surface wakefield in a plasma column

    International Nuclear Information System (INIS)

    Gorbunov, L.M.; Mora, P.; Ramazashvili, R.R.

    2003-01-01

    The structure of the wakefield in a plasma column, produced by a short intense laser pulse, propagating through a gas affected by tunneling ionization is investigated. It is shown that besides the usual plasma waves in the bulk part of the plasma column [see Andreev et al., Phys. Plasmas 9, 3999 (2002)], the laser pulse also generates electromagnetic surface waves propagating along the column boundary. The length of the surface wake wave substantially exceeds the length of the plasma wake wave and its electromagnetic field extends far outside the plasma column

  16. Field Applications of Gamma Column Scanning Technology

    International Nuclear Information System (INIS)

    Aquino, Denis D.; Mallilin, Janice P.; Nuñez, Ivy Angelica A.; Bulos, Adelina DM.

    2015-01-01

    The Isotope Techniques Section (ITS) under the Nuclear Service Division (NSD) of the Philippine Nuclear Research Institute (PNRI) conducts services, research, and development on radioisotope and sealed-source applications in industry. These aim to benefit manufacturing industries such as petroleum, petrochemical, chemical, energy, waste, and treatment plants through on-line inspection and troubleshooting of process vessels, columns, or pipes, which can optimize process operation and increase production efficiency. One of the most common sealed-source techniques for industrial applications is gamma column scanning. Gamma column scanning is an established technique for the inspection, analysis, and diagnosis of industrial columns for process optimization, solving operational malfunctions, and management of resources. It is a convenient, non-intrusive, and cost-effective technique for examining the inner details of an industrial process vessel, such as a distillation column, while it is in operation. The PNRI recognizes the importance and benefits of this technology and has implemented activities to make gamma column scanning locally available to benefit Philippine industries. A continuous effort for capacity building is being pursued through in-house and on-the-job training abroad and the upgrading of equipment. (author)

  17. A Self Consistent Multiprocessor Space Charge Algorithm that is Almost Embarrassingly Parallel

    International Nuclear Information System (INIS)

    Nissen, Edward; Erdelyi, B.; Manikonda, S.L.

    2012-01-01

    We present a space charge code that is self-consistent, massively parallelizable, and requires very little communication between computer nodes, making the calculation almost embarrassingly parallel. The method is implemented in the code COSY Infinity, where the differential algebras used in this code are important to the algorithm's proper functioning. The method works by calculating the self-consistent space charge distribution from the statistical moments of the test particles and converting them into polynomial series coefficients. These coefficients are combined with differential algebraic integrals to form the potential and electric fields. The result is a map which contains the effects of space charge. This method allows for massive parallelization, since its statistics-based solver does not require any binning of particles and only requires passing a vector containing the partial sums of the statistical moments from the different nodes. All other calculations are done independently. The resulting maps can be used to analyze the system using normal form analysis, as well as to advance particles in numbers and at speeds that were previously impossible.
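
The statistics step described above can be sketched with a toy 1-D moment computation; the short partial-sum vectors are the only data that would cross node boundaries. All names are illustrative, and the "nodes" are simulated with array chunks.

```python
# Toy sketch of the "almost embarrassingly parallel" statistics step:
# each node computes partial sums of moments over its own particles,
# and only the partial-sum vectors are combined.
import numpy as np

def partial_moments(x):
    # partial sums for count, mean and variance: (n, sum x, sum x^2)
    return np.array([len(x), x.sum(), (x * x).sum()])

rng = np.random.default_rng(1)
particles = rng.standard_normal(1000)
chunks = np.array_split(particles, 4)             # 4 simulated "nodes"

total = sum(partial_moments(c) for c in chunks)   # the only "communication"
n, s1, s2 = total
mean = s1 / n
var = s2 / n - mean ** 2
```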

  18. Theory for the alignment of cortical feature maps during development

    KAUST Repository

    Bressloff, Paul C.

    2010-08-23

    We present a developmental model of ocular dominance column formation that takes into account the existence of an array of intrinsically specified cytochrome oxidase blobs. We assume that there is some molecular substrate for the blobs early in development, which generates a spatially periodic modulation of experience-dependent plasticity. We determine the effects of such a modulation on a competitive Hebbian mechanism for the modification of the feedforward afferents from the left and right eyes. We show how alternating left and right eye dominated columns can develop, in which the blobs are aligned with the centers of the ocular dominance columns and receive a greater density of feedforward connections, thus becoming defined extrinsically. More generally, our results suggest that the presence of periodically distributed anatomical markers early in development could provide a mechanism for the alignment of cortical feature maps. © 2010 The American Physical Society.

  19. Comparing MapReduce and Pipeline Implementations for Counting Triangles

    Directory of Open Access Journals (Sweden)

    Edelmira Pasarella

    2017-01-01

    A common method to define a parallel solution for a computational problem consists in finding a way to use the Divide and Conquer paradigm so that processors act on their own data and are scheduled in a parallel fashion. MapReduce is a programming model that follows this paradigm, and allows for the definition of efficient solutions by both decomposing a problem into steps on subsets of the input data and combining the results of each step to produce final results. Although used to implement a wide variety of computational problems, MapReduce performance can be negatively affected whenever the replication factor grows or the size of the input is larger than the resources available at each processor. In this paper we show an alternative approach to implementing the Divide and Conquer paradigm, named the dynamic pipeline. The main features of dynamic pipelines are illustrated on a parallel implementation of the well-known problem of counting triangles in a graph. This problem is especially interesting when the input graph either does not fit in memory or is dynamically generated. To evaluate the properties of the pipeline, a dynamic pipeline of processes and an ad-hoc version of MapReduce are implemented in the language Go, exploiting its ability to deal with channels and spawned processes. An empirical evaluation is conducted on graphs of different topologies, sizes, and densities. The observed results suggest that dynamic pipelines allow for an efficient implementation of the problem of counting triangles in a graph, particularly in large, dense graphs, drastically reducing the execution time with respect to the MapReduce implementation.
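
The triangle-counting problem itself can be sketched in a few lines. This is a serial, illustrative version only; the paper's implementations are in Go, using channels and spawned processes.

```python
# Map/reduce-style triangle counting on a tiny undirected graph.
from itertools import combinations

edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}

def neighbours(v):
    return {u for e in edges for u in e if v in e} - {v}

# "map": for each vertex, emit candidate triangles from pairs of neighbours;
# "reduce": keep candidates whose closing edge exists, dedup by sorted key.
triangles = set()
for v in {u for e in edges for u in e}:
    for a, b in combinations(sorted(neighbours(v)), 2):
        if (a, b) in edges or (b, a) in edges:
            triangles.add(tuple(sorted((v, a, b))))
```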

  20. Particle simulation on a distributed memory highly parallel processor

    International Nuclear Information System (INIS)

    Sato, Hiroyuki; Ikesaka, Morio

    1990-01-01

    This paper describes parallel molecular dynamics simulation of atoms governed by local force interaction. The space in the model is divided into cubic subspaces and mapped to the processor array of the CAP-256, a distributed memory, highly parallel processor developed at Fujitsu Labs. We developed a new technique to avoid redundant calculation of forces between atoms in different processors. Experiments showed the communication overhead was less than 5%, and the idle time due to load imbalance was less than 11% for two model problems which contain 11,532 and 46,128 argon atoms. From the software simulation, the CAP-II which is under development is estimated to be about 45 times faster than CAP-256 and will be able to run the same problem about 40 times faster than Fujitsu's M-380 mainframe when 256 processors are used. (author)
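
In serial form, the redundant-force-avoidance idea reduces to visiting each particle pair exactly once and applying Newton's third law; here is a 1-D toy with a hypothetical linear-spring force (positions, cutoff, and force law are all illustrative, not from the CAP-256 code).

```python
# Sketch: compute each pair force once, apply it to both particles.

positions = [0.0, 0.9, 2.0, 2.1]
cutoff = 1.2
forces = [0.0] * len(positions)

for i in range(len(positions)):
    for j in range(i + 1, len(positions)):       # each pair visited once
        d = positions[j] - positions[i]
        if abs(d) < cutoff:
            f = -d                               # toy spring force on particle i
            forces[i] += f
            forces[j] -= f                       # Newton's third law, no recompute
```

In the distributed setting the same idea means that a pair straddling two subspaces is computed by only one processor, which then communicates the result rather than having both processors repeat the work.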

  1. Fraud Detection in Credit Card Transactions; Using Parallel Processing of Anomalies in Big Data

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Taghva

    2016-10-01

    In parallel with the increasing use of electronic cards, especially in the banking industry, the volume of transactions using these cards has grown rapidly. Moreover, the financial nature of these cards has made them an attractive target for fraud. The present study applied the Kohonen neural network model, with a MapReduce approach and parallel processing, to detect anomalies in bank card transactions. For this purpose, it was first proposed to classify all transactions as fraudulent or legitimate, which showed better performance compared with other methods. In the next step, we transformed the Kohonen model into a parallel task, which demonstrated appropriate performance in terms of time and, as expected, can be implemented well for transactions under Big Data assumptions.

  2. Parallel beam dynamics simulation of linear accelerators

    International Nuclear Information System (INIS)

    Qiang, Ji; Ryne, Robert D.

    2002-01-01

    In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies

  3. Picobubble enhanced column flotation of fine coal

    Energy Technology Data Exchange (ETDEWEB)

    Tao, D.; Yu, S.; Parekh, B.K. [University of Kentucky, Lexington, KY (United States). Mining Engineering

    2006-07-01

    The purpose is to study the effectiveness of picobubbles in the column flotation of -28 mesh fine coal particles. A flotation column with a picobubble generator was developed and tested for enhancing the recovery of ultrafine coal particles. The picobubble generator was designed using the hydrodynamic cavitation principle. A metallurgical and a steam coal were tested in the apparatus. The results show that the use of picobubbles in a 2in. flotation column increased the recovery of fine coal by 10 to 30%. The recovery rate varied with feed rate, collector dosage, and other column conditions. 40 refs., 8 figs., 2 tabs.

  4. Calibrationless Parallel Magnetic Resonance Imaging: A Joint Sparsity Model

    Directory of Open Access Journals (Sweden)

    Angshul Majumdar

    2013-12-01

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity map for SENSE and SMASH, and interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we propose a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis and synthesis prior joint-sparsity problems; this work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain and an eight-channel Shepp-Logan phantom. Two sampling methods were used: Variable Density Random sampling and non-Cartesian Radial sampling. An acceleration factor of 4 was used for the brain data and a factor of 6 for the phantom. The reconstruction results were quantitatively evaluated using the Normalised Mean Squared Error between the reconstructed image and the original, and qualitatively evaluated on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods, CS SENSE and l1SPIRiT, and two calibration-free techniques, Distributed CS and SAKE. Our method yields better reconstruction results than all of them.

  5. Attenuation of pyrite oxidation with a fly ash pre-barrier: Reactive transport modelling of column experiments

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Lopez, R.; Cama, J.; Nieto, J.M.; Ayora, C.; Saaltink, M.W. [University of Huelva, Huelva (Spain). Dept. of Geology

    2009-09-15

    Conventional permeable reactive barriers (PRBs) for passive treatment of groundwater contaminated by acid mine drainage (AMD) use limestone as a reactive material that neutralizes water acidity. However, the limestone-alkalinity potential ceases as inevitable precipitation of secondary metal phases on grain surfaces occurs, limiting its efficiency. In the present study, fly ash derived from coal combustion is investigated as an alternative alkalinity-generating material for the passive treatment of AMD, using solution-saturated column experiments. Unlike conventional systems, the fly ash is used in a pre-barrier that intercepts the non-polluted recharge water before this water reacts with pyrite-rich wastes. Chemical variation in the columns was interpreted with the reactive transport code RETRASO. In parallel, the kinetics of fly ash dissolution at alkaline pH were studied using flow-through experiments and incorporated into the model. In a saturated column filled solely with pyritic sludge and quartz sand (1:10), oxidation took place under acidic conditions (pH 3.7). According to SO{sub 4}{sup 2-} release and pH, pyrite dissolution proceeded favourably in the solution-saturated porous medium until dissolved O{sub 2} was totally consumed. In a second saturated column, pyrite oxidation took place under alkaline conditions (pH 10.45), as acidity was neutralized by fly ash dissolution in a preceding level. At this pH, Fe released by pyrite dissolution was immediately depleted as Fe-oxy(hydroxide) phases precipitated on the pyrite grains, forming Fe coatings (microencapsulation). With time, pyrite microencapsulation inhibited oxidation in practically 97% of the pyritic sludge. Rapid pyrite-surface passivation decreased its reactivity, preventing AMD production in the relatively short term.

  6. Classification of hyperspectral imagery using MapReduce on a NVIDIA graphics processing unit (Conference Presentation)

    Science.gov (United States)

    Ramirez, Andres; Rahnemoonfar, Maryam

    2017-04-01

    A hyperspectral image provides a multidimensional, data-rich representation consisting of hundreds of spectral dimensions. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational time. To overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that helps analyze a hyperspectral image through parallel hardware and a parallel programming model, which is simpler to handle than other low-level parallel programming models. Additionally, Hadoop was used as an open-source implementation of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU systems and tested them against the following test cases: a CPU-and-GPU case, a CPU-only case, and a case where no dimensionality reduction was applied.

  7. Exploring the effect of mesopore size reduction on the column performance of silica-based open tubular capillary columns.

    Science.gov (United States)

    Hara, Takeshi; Futagami, Shunta; De Malsche, Wim; Baron, Gino V; Desmet, Gert

    2018-06-01

    We report on a modification of the hydrothermal treatment process for the monolithic silica layers used in porous-layered open tubular (PLOT) columns. Lowering the temperature from the customary 95 °C to 80 °C reduced the mesopore size by approximately 35%, from 12-13.5 nm to 7.5-9 nm, while the specific pore volume remained essentially unaltered. This led to an increase in the specific surface area (SA) of about 40%, quasi-independent of the porous layer thickness. The increased surface area provided a corresponding increase in retention, somewhat more (48%) than expected from the increase in SA for the thin-layer columns, and somewhat less than expected (34%) for the thick-layer columns. The recipes were applied in 5 μm i.d. capillaries with a length of 60 cm. Efficiencies under retained conditions amounted up to N = 137,000 for the PLOT column with a layer thickness (d f ) of 300 nm and to N = 109,000 for the PLOT column with d f  = 550 nm. Working under conditions of similar retention, the narrow-pore/high-SA columns produced with the new 80 °C recipe generated the same number of theoretical plates as the wide-pore/low-SA columns produced with the 95 °C recipe. This shows that the 80 °C hydrothermal treatment process allows for an increase in the phase ratio of PLOT columns without affecting their intrinsic mass transfer properties and separation kinetics. This is further corroborated by the fact that the plate height curves generated with the new and former recipes can both be well fitted with the Golay-Aris equation without having to change the intra-layer diffusion coefficient.
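
For reference, the Golay-Aris plate-height equation for an open-tubular column mentioned above is commonly written, for a retained solute with retention factor k, as:

```latex
H = \frac{2 D_m}{u}
  + \frac{1 + 6k + 11k^2}{96\,(1+k)^2}\,\frac{d_c^2}{D_m}\,u
  + \frac{2k}{3\,(1+k)^2}\,\frac{d_f^2}{D_s}\,u
```

where u is the mobile-phase velocity, d_c the column diameter, d_f the stationary-layer thickness, and D_m and D_s the diffusion coefficients in the mobile and stationary phases; D_s corresponds to the intra-layer diffusion coefficient the authors hold fixed when fitting.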

  8. Construction of a digital elevation model: methods and parallelization

    International Nuclear Information System (INIS)

    Mazzoni, Christophe

    1995-01-01

    The aim of this work is to reduce the computation time needed to produce Digital Elevation Models (DEM) by using a parallel machine. It was carried out in collaboration between the French 'Institut Geographique National' (IGN) and the Laboratoire d'Electronique de Technologie et d'Instrumentation (LETI) of the French Atomic Energy Commission (CEA). The IGN has developed a system which produces DEMs used to make topographic maps. The kernel of this system is the correlator, a software component which automatically matches pairs of homologous points in a stereo pair of photographs. Nevertheless, the correlator is expensive in computing time. In order to reduce computation time and produce DEMs with the same accuracy as the current system, we have parallelized the IGN's correlator on the OPENVISION system. This hardware solution uses SYMPATI-2, a SIMD (Single Instruction, Multiple Data) parallel machine developed by the LETI, which is involved in parallel architecture and image processing. Our analysis of the implementation has demonstrated the difficulty of efficient coupling between scalar and parallel structures, so we propose solutions to reinforce this coupling. To accelerate the processing further, we evaluate SYMPHONIE, a SIMD computer and successor of SYMPATI-2. We also developed a multi-agent approach for which a MIMD (Multiple Instruction, Multiple Data) architecture is suitable. Finally, we describe a multi-SIMD architecture that reconciles our two approaches. This architecture offers the capacity to handle multi-level image processing efficiently. It is flexible through its modularity, and its communication network supplies the reliability required by sensitive systems. (author) [fr
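
The correlation kernel at the heart of such a stereo matcher can be sketched as sliding a window along a scanline and picking the shift with the highest normalized cross-correlation. Pure NumPy, illustrative only; the IGN correlator is far more elaborate.

```python
# Toy 1-D stereo matching by normalized cross-correlation.
import numpy as np

def best_shift(left, right, win):
    """Return the shift at which left[:win] best matches inside right."""
    a = left[:win].astype(float)
    a = (a - a.mean()) / (a.std() + 1e-12)
    scores = []
    for d in range(len(right) - win + 1):
        b = right[d:d + win].astype(float)
        b = (b - b.mean()) / (b.std() + 1e-12)
        scores.append((a * b).mean())      # normalized cross-correlation
    return int(np.argmax(scores))

left = np.array([1, 5, 9, 5, 1, 0, 0, 0])
right = np.array([0, 0, 1, 5, 9, 5, 1, 0])   # same pattern shifted by 2
```

The recovered shift (disparity) of each homologous point pair is what a DEM pipeline converts into elevation.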

  9. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment. Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages.

  10. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure rather than a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of the traditional HPC cluster.
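
The de Bruijn graph that graph-based assemblers like GiGA are built around can be sketched in a few lines; this is an illustrative construction from k-mers, not GiGA's actual code.

```python
# Tiny de Bruijn graph: nodes are (k-1)-mers, edges come from k-mers.

def de_bruijn(reads, k):
    """Map each read to its (k-1)-mer overlap edges."""
    graph = {}
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph.setdefault(kmer[:-1], set()).add(kmer[1:])
    return graph

g = de_bruijn(["ACGT", "CGTA"], k=3)
# AC -> CG, CG -> GT, GT -> TA (the shared CG -> GT edge is deduplicated)
```

Walking paths through this graph is what reconstructs contigs; distributing the graph across nodes is where the Giraph-style vertex-centric model pays off.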

  11. [Online enrichment ability of restricted-access column coupled with high performance liquid chromatography by column switching technique for benazepril hydrochloride].

    Science.gov (United States)

    Zhang, Xiaohui; Wang, Rong; Xie, Hua; Yin, Qiang; Li, Xiaoyun; Jia, Zhengping; Wu, Xiaoyu; Zhang, Juanhong; Li, Wenbin

    2013-05-01

    The online enrichment ability of a restricted-access media (RAM) column coupled with high performance liquid chromatography by the column switching technique was studied for benazepril hydrochloride in plasma. The RAM-HPLC system consisted of an RAM column as the enrichment column and a C18 column as the analytical column, coupled via the column switching technique. The effects of the injection volume on the peak area and the system pressure were studied. When the injection volume was less than 100 microL, the peak area increased with the injection volume. However, when the injection volume was more than 80 microL, the pressure of the whole system increased markedly. In order to protect the system, 80 microL was chosen as the maximum injection volume. The peak areas of ordinary injection and large-volume injection showed a good linear relationship. The enrichment ability of the RAM-HPLC system was satisfactory. The system was successfully used for the separation and detection of trace benazepril hydrochloride in rat plasma after its administration. The sensitivity of HPLC can be improved by RAM pre-enrichment; it is a simple and economical method.

  12. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  13. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    /backlash, manufacturing and assembly errors and joint clearances. From the error prediction model, the distributions of the pose errors due to joint clearances are mapped within its constant-orientation workspace and the correctness of the developed model is validated experimentally. Additionally, using the screw......, dynamic modeling etc. Next, the first-order differential equation of the kinematic closure equation of the planar parallel manipulator is obtained to develop its error model in both Polar and Cartesian coordinate systems. The established error model contains the error sources of actuation error....

  14. Response of steel box columns in fire conditions

    Directory of Open Access Journals (Sweden)

    Mahmood Yahyai

    2017-05-01

    The effect of elevated temperatures on the mechanical properties of steel underscores the importance of investigating the behavior of steel structures in fire. Columns, as the main load-carrying members of a structure, can be highly vulnerable to fire. In this study, the behavior of steel gravity columns with box cross-sections exposed to fire has been investigated. These kinds of columns are widely used in common steel structure design in Iran. Their behavior in fire conditions is investigated through the finite element method. To this end, a finite element model of a steel column previously tested under fire conditions was prepared; experimental loading and boundary conditions were applied, and the model was analyzed. Results were validated against experimental data, and various gravity box column specimens were designed according to Iran's steel building code, then modeled and analyzed using Abaqus software. The effects of the width-to-thickness ratio of the column plates, the load ratio, and the slenderness on the ultimate strength of the column were investigated, and the endurance time was estimated under the ISO 834 standard fire curve. The results revealed that increases in the width-to-thickness ratio and the load ratio reduce the endurance time, and that the effect of the width-to-thickness ratio on the ultimate strength of the column decreases as temperature increases.

  15. Ductility of reinforced concrete columns confined with stapled strips

    International Nuclear Information System (INIS)

    Tahir, M.F.; Khan, Q.U.Z.; Shabbir, F.; Sharif, M.B.; Ijaz, N.

    2015-01-01

    The response of three 150x150x450 mm short reinforced concrete (RC) columns confined with different types of confining steel was investigated. Standard stirrups, strips and stapled strips, each having the same cross-sectional area, were employed as confining steel around the four corner column bars. The experimental work was aimed at probing the effect of stapled strip confinement on post-elastic behavior and ductility level under cyclic axial load. Ductility ratios, strength enhancement factors and core concrete strengths were compared to study the effect of confinement. Results indicate that strength enhancement in RC columns due to strip and stapled strip confinement was not remarkable compared to the stirrup confined column. It was found that, compared to the stirrup confined column, stapled strip confinement enhanced the ductility of the RC column by 183%, and the observed axial capacity of stapled strip confined columns was 41% higher than that of the strip confined columns. (author)

  16. Retrofit of distillation columns in biodiesel production plants

    International Nuclear Information System (INIS)

    Nguyen, Nghi; Demirel, Yasar

    2010-01-01

    Column grand composite curves and the exergy loss profiles produced by the Column-Targeting Tool of the Aspen Plus simulator are used to assess the performance of the existing distillation columns, and to reduce the costs of operation by appropriate retrofits in a biodiesel production plant. The effectiveness of the retrofits is assessed by means of thermodynamic and economic improvements. We have considered a biodiesel plant utilizing three distillation columns to purify biodiesel (fatty acid methyl ester) and the byproduct glycerol, as well as to reduce the waste. The assessments of the base case simulation have indicated the need for modifications to the distillation columns. For column T202, the retrofits consisting of feed preheating and a reflux ratio modification have reduced the total exergy loss by 47%, while the exergy losses of columns T301 and T302 decreased by 61% and 52%, respectively. After the retrofits, the overall exergy loss for the three columns decreased from 7491.86 kW to 3627.97 kW. The retrofits required a fixed capital cost of approximately $239,900 and saved approximately $1,900,000/year worth of electricity. The retrofits have reduced the consumption of energy considerably, and led to a more environmentally friendly operation for the biodiesel plant considered.

  17. Benchmark of 6D SLAM (6D Simultaneous Localisation and Mapping Algorithms with Robotic Mobile Mapping Systems

    Directory of Open Access Journals (Sweden)

    Bedkowski Janusz

    2017-09-01

    Full Text Available This work concerns the study of 6D SLAM algorithms with an application to robotic mobile mapping systems. The architecture of the 6D SLAM algorithm is designed for the evaluation of different data registration strategies. The algorithm is composed of an iterative registration component, for which ICP (Iterative Closest Point), ICP (point to projection), ICP with semantic discrimination of points, LS3D (Least Square Surface Matching) or NDT (Normal Distribution Transform) can be chosen. Loop closing is based on LUM and LS3D. The main research goal was to investigate the semantic discrimination of measured points, which improves the accuracy of the final map, especially in demanding scenarios such as multi-level maps (e.g., climbing stairs). Parallel implementations of the nearest-neighbor search (point to point, point to projection, and with semantic discrimination of points) are used. The 6D SLAM framework is based on modified 3DTK and PCL open source libraries and parallel programming techniques using NVIDIA CUDA. The paper presents experiments demonstrating the advantages of the proposed approach in relation to practical applications. The major added value of the presented research is the qualitative and quantitative evaluation based on realistic scenarios, including ground truth data obtained by geodetic survey. From the perspective of mobile robotics, the research novelty is the evaluation of the LS3D algorithm, well known in geodesy.

  18. Automatic Thread-Level Parallelization in the Chombo AMR Library

    Energy Technology Data Exchange (ETDEWEB)

    Christen, Matthias; Keen, Noel; Ligocki, Terry; Oliker, Leonid; Shalf, John; Van Straalen, Brian; Williams, Samuel

    2011-05-26

    The increasing on-chip parallelism has substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite difference type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms a ready-made target language for the automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to an auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.

  19. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  20. Interpretation of the lime column penetration test

    International Nuclear Information System (INIS)

    Liyanapathirana, D S; Kelly, R B

    2010-01-01

    Dry soil mix (DSM) columns are used to reduce the settlement and improve the stability of embankments constructed on soft clays. During construction, the shear strength of the columns needs to be confirmed for compliance with technical assumptions. A specialized blade-shaped penetrometer, known as the lime column probe, has been developed for testing DSM columns. This test can be carried out as a pull-out resistance test (PORT) or a push-in resistance test (PIRT). The test is considered more representative of average column shear strength than methods that test only a limited area of the column. Both PORT and PIRT tests require empirical correlations of measured resistance to an absolute measure of shear strength, in a similar manner to the cone penetration test. In this paper, the finite element method is used to assess the probe factor, N, for the PORT test. Due to the large soil deformations around the probe, an Arbitrary Lagrangian Eulerian (ALE) based finite element formulation has been used. The variation of N with rigidity index and with the friction at the probe-soil interface is investigated to establish a range for the probe factor.
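The PORT interpretation described above parallels the cone-factor relation used for CPT interpretation. A minimal sketch of that back-calculation follows; all numbers are illustrative, and realistic values of N are exactly what the paper's finite element study is meant to supply:

```python
def undrained_shear_strength(q_measured, sigma_v0, n_factor):
    """Back-calculate undrained shear strength from a penetration
    resistance via a probe factor, analogous to the CPT cone-factor
    relation s_u = (q - sigma_v0) / N. Units: kPa throughout."""
    return (q_measured - sigma_v0) / n_factor

# Illustrative only: a PORT resistance of 1500 kPa, an overburden
# stress of 50 kPa and an assumed probe factor N = 10
su = undrained_shear_strength(1500.0, 50.0, 10.0)  # -> 145.0 kPa
```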

  1. Behaviour of FRP confined concrete in square columns

    OpenAIRE

    Diego Villalón, Ana de; Arteaga Iriarte, Ángel; Fernandez Gomez, Jaime Antonio; Perera Velamazán, Ricardo; Cisneros, Daniel

    2015-01-01

    A significant amount of research has been conducted on FRP-confined circular columns, but much less is known about rectangular/square columns in which the effectiveness of confinement is much reduced. This paper presents the results of experimental investigations on low strength square concrete columns confined with FRP. Axial compression tests were performed on ten intermediate size columns. The tests results indicate that FRP composites can significantly improve the bearing capacity and duc...

  2. Numerical Simulations of Settlement of Jet Grouting Columns

    Directory of Open Access Journals (Sweden)

    Juzwa Anna

    2016-03-01

    Full Text Available The paper presents a comparison of the results of numerical analyses of the interaction between a group of jet grouting columns and the subsoil. The analyses were conducted for a single column and for groups of three, seven and nine columns. The simulations are based on real-scale experimental research carried out by the authors. The final goal of the research is to estimate the influence of the interaction between columns working in a group.

  3. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.

    2012-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover and access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within Sci
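The bundle-of-named-arrays map/reduce model described for SciReduce can be sketched in plain Python. The granule structure, key names and bias computation below are invented for illustration and are not the actual SciReduce API:

```python
from functools import reduce
import numpy as np

def map_matchup(bundle):
    """Map step: per-granule partial statistics of the AIRS-minus-MODIS
    temperature difference (keys 'airs_temp'/'modis_temp' are made up)."""
    diff = bundle["airs_temp"] - bundle["modis_temp"]
    return {"sum": diff.sum(), "count": diff.size}

def reduce_bias(a, b):
    """Reduce step: merge partial sums into a running aggregate."""
    return {"sum": a["sum"] + b["sum"], "count": a["count"] + b["count"]}

granules = [
    {"airs_temp": np.array([280.0, 281.0]), "modis_temp": np.array([279.5, 280.0])},
    {"airs_temp": np.array([275.0]), "modis_temp": np.array([276.0])},
]
partials = [map_matchup(g) for g in granules]     # parallelizable step
total = reduce(reduce_bias, partials)              # aggregation step
mean_bias = total["sum"] / total["count"]          # overall mean difference
```

The point of the pattern is that each Map output is a small, mergeable summary, so the Reduce step can combine partial results in any order across nodes.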

  4. Strengthening of Steel Columns under Load: Torsional-Flexural Buckling

    Directory of Open Access Journals (Sweden)

    Martin Vild

    2016-01-01

    Full Text Available The paper presents experimental and numerical research into the strengthening of steel columns under load using welded plates. So far, the experimental research in this field has been limited mostly to flexural buckling of columns, where the preload had little effect on the column load resistance. This paper focuses on the local buckling and torsional-flexural buckling of columns. Three sets of three columns each were tested. Two sets, corresponding to the base section (D and strengthened section (E, were tested without preloading and were used for comparison. Columns from set (F were first preloaded to the load corresponding to half of the load resistance of the base section (D. The columns were then strengthened and, after they cooled, were loaded to failure. The columns strengthened under load (F had a similar average resistance to the columns welded without preloading (E, meaning that the preload only slightly affects even members susceptible to local buckling and torsional-flexural buckling. This is the same behaviour as that of the tested columns from previous research into flexural buckling. The study includes results gained from finite element models of the problem created in ANSYS software. The results obtained from the experiments and numerical simulations were compared.

  5. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  6. Precision Column CO2 Measurement from Space Using Broad Band LIDAR

    Science.gov (United States)

    Heaps, William S.

    2009-01-01

    In order to better understand the budget of carbon dioxide in the Earth's atmosphere it is necessary to develop a global, high-precision understanding of the carbon dioxide column. To uncover the "missing sink" that is responsible for the large discrepancies in the budget as we presently understand it, calculations have indicated that a measurement accuracy of 1 ppm is necessary. Because the typical column average CO2 has now reached 380 ppm, this represents a precision on the order of 0.25% for these column measurements. No species has ever been measured from space at such a precision. In recognition of the importance of understanding the CO2 budget to evaluate its impact on global warming, the National Research Council, in its decadal survey report to NASA, recommended planning for a laser-based total CO2 mapping mission in the near future. The extreme measurement accuracy requirements of this mission place very strong constraints on the laser system used for the measurement. This work presents an overview of the characteristics necessary in a laser system used to make this measurement. Consideration is given to the temperature dependence, pressure broadening, and pressure shift of the CO2 lines themselves and how these impact the laser system characteristics. We are examining the possibility of making precise measurements of atmospheric carbon dioxide using a broad band source of radiation. This means that many of the difficulties in wavelength control can be treated in the detector portion of the system rather than the laser source. It also greatly reduces the number of individual lasers required to make a measurement. Simplifications such as these are extremely desirable for systems designed to operate from space.

  7. Study on two phase flow characteristics in annular pulsed extraction column with different ratio of annular width to column diameter

    International Nuclear Information System (INIS)

    Qin Wei; Dai Youyuan; Wang Jiading

    1994-01-01

    Annular pulsed extraction columns can successfully provide large throughput and can be made criticality safe for fuel reprocessing. This investigation studies the two-phase flow characteristics in an annular pulsed extraction column with four different annular widths. 30% TBP (in kerosene)-water is used (water as the continuous phase). Results show that the modified Pratt correlation is valid under the experimental operating conditions for the annular pulsed extraction column. The characteristic velocity U_K decreased with increasing energy input and increased with increasing ratio of annular width to column diameter. A flooding velocity correlation is suggested. The deviation of the calculated values from the experimental data is within ±20% for all four annular widths in a pulsed extraction column
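The Pratt-type slip-velocity relation referred to here is commonly written as V_d/x + V_c/(1 - x) = U_K(1 - x), where x is the dispersed-phase holdup. A hedged sketch that solves this for the lower (physically stable) root by bisection; the superficial velocities and U_K below are arbitrary illustrative values, not the paper's data:

```python
def holdup(v_d, v_c, u_k, lo=1e-6, hi=0.5, tol=1e-10):
    """Solve the Pratt slip-velocity relation
        V_d/x + V_c/(1 - x) = U_K * (1 - x)
    for the dispersed-phase holdup x (velocities in consistent units).
    Bisection on [lo, hi] brackets the lower, stable operating root."""
    f = lambda x: v_d / x + v_c / (1.0 - x) - u_k * (1.0 - x)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid      # sign change in [lo, mid]
        else:
            lo = mid      # root lies in [mid, hi]
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# illustrative superficial velocities (m/s) and characteristic velocity
x = holdup(v_d=0.002, v_c=0.004, u_k=0.05)
```

Near flooding, the two roots of this equation merge, which is how flooding velocity correlations of the kind the paper suggests are typically derived.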

  8. Collapse of tall granular columns in fluid

    Science.gov (United States)

    Kumar, Krishna; Soga, Kenichi; Delenne, Jean-Yves

    2017-06-01

    Avalanches, landslides, and debris flows are geophysical hazards, which involve rapid mass movement of granular solids, water, and air as a multi-phase system. In order to describe the mechanism of immersed granular flows, it is important to consider both the dynamics of the solid phase and the role of the ambient fluid. In the present study, the collapse of a granular column in fluid is studied using 2D LBM - DEM. The flow kinematics are compared with the dry and buoyant granular collapse to understand the influence of hydrodynamic forces and lubrication on the run-out. In the case of tall columns, the amount of material destabilised above the failure plane is larger than that of short columns. Therefore, the surface area of the mobilised mass that interacts with the surrounding fluid in tall columns is significantly higher than the short columns. This increase in the area of soil - fluid interaction results in an increase in the formation of turbulent vortices thereby altering the deposit morphology. It is observed that the vortices result in the formation of heaps that significantly affects the distribution of mass in the flow. In order to understand the behaviour of tall columns, the run-out behaviour of a dense granular column with an initial aspect ratio of 6 is studied. The collapse behaviour is analysed for different slope angles: 0°, 2.5°, 5° and 7.5°.

  9. Parallel eigenanalysis of finite element models in a completely connected architecture

    Science.gov (United States)

    Akl, F. A.; Morel, M. R.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, (K)(phi) = (M)(phi)(omega), where (K) and (M) are of order N, and (omega) is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, the Cray X-MP. A finite element model is divided into m domains, each of which is assumed to process n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effects of the number of domains, the number of degrees-of-freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm are investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in a parallel environment is analyzed.
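The generalized eigenproblem (K)(phi) = (M)(phi)(omega) can be illustrated on a small dense system with SciPy. A production multifrontal/subspace solver would exploit sparsity and extract only the q lowest modes, so this is a correctness sketch only, with made-up 2x2 matrices:

```python
import numpy as np
from scipy.linalg import eigh

# Toy symmetric positive-definite stiffness and (lumped) mass matrices
K = np.array([[4.0, -1.0],
              [-1.0, 2.0]])
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# Generalized eigenvalues (ascending) and M-orthonormal eigenvectors
eigvals, eigvecs = eigh(K, M)

# The returned modes satisfy K v = lambda M v and v.T M v = I,
# the same properties a subspace-iteration FE solver converges to.
```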

  10. Scalable Parallel Distributed Coprocessor System for Graph Searching Problems with Massive Data

    Directory of Open Access Journals (Sweden)

    Wanrong Huang

    2017-01-01

    Full Text Available Internet applications, such as network searching, electronic commerce, and modern medical applications, produce and process massive data. Considerable data parallelism exists in the computation processes of data-intensive applications. A traversal algorithm, breadth-first search (BFS, is fundamental in many graph processing applications and metrics when a graph grows in scale. A variety of scientific programming methods have been proposed for accelerating and parallelizing BFS because of the poor temporal and spatial locality caused by inherent irregular memory access patterns. However, new parallel hardware can provide better improvement for scientific methods. To address small-world graph problems, we propose a scalable and novel field-programmable gate array-based heterogeneous multicore system for scientific programming. The core is multithreaded for streaming processing, and the InfiniBand communication network is adopted for scalability. We design a binary search algorithm for address mapping to unify all processor addresses. Within the limits permitted by the Graph500 test bench, after testing a 1D parallel hybrid BFS algorithm, our 8-core and 8-thread-per-core system achieved superior performance and efficiency compared with prior work under the same degree of parallelism. Our system is efficient not as a special acceleration unit but as a processor platform that deals with graph searching applications.
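The level-synchronous frontier expansion that underlies 1D parallel hybrid BFS can be shown sequentially in a few lines; in the parallel version each processor owns a partition of the vertices and the frontiers are exchanged between levels. The toy graph is invented for illustration:

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: expand the whole frontier one level at a
    time. This per-level structure is what 1D-partitioned parallel BFS
    distributes across processors; here it runs sequentially."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:
            for v in adj[u]:
                if v not in level:          # first visit fixes the level
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

# small toy graph as adjacency lists
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
levels = bfs_levels(adj, 0)  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```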

  11. NOx retention in scrubbing column

    International Nuclear Information System (INIS)

    Nakazone, A.K.; Costa, R.E.; Lobao, A.S.T.; Matsuda, H.T.; Araujo, B.F.

    1988-07-01

    During UO2 dissolution in nitric acid, several different species of NOx are released. The off-gas can either be refluxed to the dissolver or be released and retained on special columns. The final composition of the solution is the main parameter to take into account. A process for nitrous gas retention using scrubber columns containing H2O or diluted HNO3 is presented. Chemiluminescence measurement was employed for NOx evaluation before and after scrubbing. Gas flow, temperature and residence time are the main parameters considered in this paper. For the dissolution of 100 g UO2 in 8 M nitric acid, a 6 NL/h O2 flow was the best condition for the NO/NO2 oxidation with maximum adsorption in the scrubber columns. (author) [pt

  12. Mapping of synchronous dataflow graphs on MPSoCs based on parallelism enhancement

    NARCIS (Netherlands)

    Tang, Q.; Basten, T.; Geilen, M.; Stuijk, S.; Wei, J.B.

    2017-01-01

    Multi-processor systems-on-chips are widely adopted in implementing modern streaming applications to satisfy the ever increasing computation requirements. To take advantage of this kind of platform, it is necessary to map tasks of the application properly to different processors, so as to fully

  13. High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

    Science.gov (United States)

    Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon

    2017-01-01

    With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on the time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
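The row-wise decomposition that makes point-cloud gridding amenable to all three parallel models can be sketched with a thread pool. Inverse-distance weighting stands in here for the paper's kriging interpolation to keep the example short, and all coordinates and values are made up:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def idw_row(args):
    """Interpolate one grid row by inverse-distance weighting
    (a simple stand-in for kriging). Rows are independent, so they
    can be farmed out to threads, processes, MPI ranks, or mappers."""
    y, xs, pts, vals = args
    row = np.empty(len(xs))
    for j, x in enumerate(xs):
        d2 = (pts[:, 0] - x) ** 2 + (pts[:, 1] - y) ** 2
        if d2.min() < 1e-12:           # grid node coincides with a sample
            row[j] = vals[d2.argmin()]
        else:
            w = 1.0 / d2               # weights ~ 1/d^2
            row[j] = (w * vals).sum() / w.sum()
    return row

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])
xs = ys = np.linspace(0.0, 1.0, 5)
with ThreadPoolExecutor() as pool:     # one task per grid row
    grid = np.vstack(list(pool.map(idw_row, [(y, xs, pts, vals) for y in ys])))
```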

  14. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.

    Science.gov (United States)

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-12-01

    Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT∕CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT∕CT reconstruction algorithm. The speedup of reconstruction time was found to be roughly linear with the number of nodes employed. For instance, a greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. The root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10^-7. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed
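The Map/Reduce split described here works because backprojection is additive across projection subsets. A toy sketch of that decomposition; the FDK filtering, weighting, and cone-beam geometry the real implementation performs are deliberately omitted:

```python
from functools import reduce
import numpy as np

def backproject(projection_subset, shape=(4, 4)):
    """Map: accumulate a toy 'backprojection' of a subset of
    (already filtered) projections into a partial volume."""
    vol = np.zeros(shape)
    for p in projection_subset:
        vol += p   # stand-in for the geometric backprojection step
    return vol

def combine(v1, v2):
    """Reduce: partial volumes simply sum, since backprojection
    is linear in the projections."""
    return v1 + v2

# eight toy 'projections'; two mappers each take four of them
projections = [np.full((4, 4), float(i)) for i in range(8)]
subsets = [projections[:4], projections[4:]]
volume = reduce(combine, (backproject(s) for s in subsets))
# every voxel equals 0 + 1 + ... + 7 = 28, regardless of the split
```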

  15. Modalization in the Political Column of Tempo Magazine

    OpenAIRE

    Rahmah, Maria Betti Sinaga and

    2017-01-01

    The study focuses on analyzing the use of modalization in the Political Column of Tempo Magazine. The objectives were to find out the types of modalization and to describe the use of modalization in the Political Column of Tempo magazine. The data were taken from the Political Column of Tempo magazine published in June and July 2017. The source of the data was the Political Column in Tempo magazine. The data analysis applied descriptive qualitative research. There were 135 clauses which contained Modaliza...

  16. Column, particularly extraction column, for fission and/or breeder materials

    International Nuclear Information System (INIS)

    Vietzke, H.; Pirk, H.

    1980-01-01

    An absorber rod with a B4C insert is situated in the long extraction column for a uranyl nitrate solution or a plutonium nitrate solution. The geometrical dimensions are designed for a high throughput with little corrosion. (DG) [de

  17. Posterior column reconstruction improves fusion rates at the level of osteotomy in three-column posterior-based osteotomies.

    Science.gov (United States)

    Lewis, Stephen J; Mohanty, Chandan; Gazendam, Aaron M; Kato, So; Keshen, Sam G; Lewis, Noah D; Magana, Sofia P; Perlmutter, David; Cape, Jennifer

    2018-03-01

    To determine the incidence of pseudarthrosis at the osteotomy site after three-column spinal osteotomies (3-COs) with posterior column reconstruction. 82 consecutive adult 3-COs (66 patients) with a minimum of 2-year follow-up were retrospectively reviewed. All cases underwent posterior 3-COs with two-rod constructs. The inferior facets of the proximal level were reduced to the superior facets of the distal level. If that was not possible, a structural piece of bone graft either from the local resection or a local rib was slotted in the posterior column defect to re-establish continual structural posterior bone across the lateral margins of the resection. No interbody cages were used at the level of the osteotomy. There were 34 thoracic osteotomies, 47 lumbar osteotomies and one sacral osteotomy with a mean follow-up of 52 (24-126) months. All cases underwent posterior column reconstructions described above and the addition of interbody support or additional posterior rods was not performed for fusion at the osteotomy level. Among them, 29 patients underwent one or more revision surgeries. There were three definite cases of pseudarthrosis at the osteotomy site (4%). Six revisions were also performed for pseudarthrosis at other levels. Restoration of the structural integrity of the posterior column in three-column posterior-based osteotomies was associated with > 95% fusion rate at the level of the osteotomy. Pseudarthrosis at other levels was the second most common reason for revision following adjacent segment disease in the long-term follow-up.

  18. Effect of pre- and post-column band broadening on the performance of high-speed chromatography columns under isocratic and gradient conditions.

    Science.gov (United States)

    Vanderlinden, Kim; Broeckhoven, Ken; Vanderheyden, Yoachim; Desmet, Gert

    2016-04-15

    We report on the results of an experimental and theoretical study of the effect of extra-column band broadening (ECBB) on the performance of narrow-bore columns filled with the smallest particles that are currently commercially available. Emphasis is on the difference between the effect of ECBB under gradient and isocratic conditions, as well as on the ability to model and predict the ECBB effects using well-established band broadening expressions available from the theory of chromatography. The fine details and assumptions that need to be taken into account when using these expressions are discussed. The experiments showed that the steeper the gradient, the more pronounced the extra-column band broadening losses become. Whereas pre-column band broadening can, in both isocratic and gradient elution, be avoided by exploiting the possibility of focusing the analytes on top of the column (e.g. by using the POISe injection method when running isocratic separations), post-column band broadening is inescapable in both cases. Inducing extra-column band broadening by changing the inner diameter of the post-column tubing from 65 to 250 μm, we found that all peaks in the chromatogram are strongly affected (around a factor of 1.9 increase in relative peak width) when running steep gradients, while usually only the first eluting peak was affected in the isocratic mode or when running shallow gradients (factor 1.6-1.8 increase in relative peak width for the first eluting analyte). Copyright © 2016 Elsevier B.V. All rights reserved.
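Assuming the usual model in which independent band-broadening contributions add in variance, the reported relative peak-width increases can be related to the ratio of extra-column to column variance. The numbers below are illustrative, not fitted to the paper's data:

```python
import math

def width_inflation(sigma2_col, sigma2_ec):
    """Ratio of observed to column-only peak width, assuming
    independent contributions add in variance:
        sigma_obs^2 = sigma_col^2 + sigma_ec^2
    """
    return math.sqrt(1.0 + sigma2_ec / sigma2_col)

# Illustrative: an extra-column variance ~2.6x the column variance
# corresponds to roughly the factor ~1.9 width increase reported
# for steep gradients.
factor = width_inflation(sigma2_col=1.0, sigma2_ec=2.6)  # ~1.90
```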

  19. Advanced Intensity-Modulation Continuous-Wave Lidar Techniques for Column CO2 Measurements

    Science.gov (United States)

    Campbell, J. F.; Lin, B.; Obland, M. D.; Liu, Z.; Kooi, S. A.; Fan, T. F.; Nehrir, A. R.; Meadows, B.; Browell, E. V.

    2016-12-01

    Global and regional atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission and the Atmospheric Carbon and Transport (ACT) - America project are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space and airborne platforms to meet the ASCENDS and ACT-America science measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud returns. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby minimizing bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new sub-meter hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. These techniques are used in a new data processing

  20. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
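The SENSE unfolding described above reduces, for a rate-2 acquisition, to solving a small linear system per aliased pixel: each folded pixel is a coil-weighted sum of two true pixels. A minimal 1-D sketch in NumPy with made-up coil sensitivities (an illustration of the idea, not a clinical reconstruction):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # full image size (1-D for clarity)
rho = rng.random(N)                      # "true" object (hypothetical test data)
S = rng.random((2, N)) + 0.5             # two hypothetical coil sensitivity profiles

# R = 2 undersampling folds pixel y onto pixel y + N/2
half = N // 2
aliased = np.stack([S[c, :half] * rho[:half] + S[c, half:] * rho[half:]
                    for c in range(2)])              # (coils, N/2) folded images

# SENSE unfold: per aliased pixel, solve a 2x2 (coils x aliases) system
recon = np.empty(N)
for y in range(half):
    A = np.stack([S[:, y], S[:, y + half]], axis=1)  # sensitivity matrix
    sol, *_ = np.linalg.lstsq(A, aliased[:, y], rcond=None)
    recon[y], recon[y + half] = sol

assert np.allclose(recon, rho)           # exact when sensitivities are known
```

With perfectly known sensitivities the unfolding is exact; in practice noise amplification (the g-factor) limits how far the acceleration can be pushed.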

  1. Column Number Density Expressions Through M = 0 and M = 1 Point Source Plumes Along Any Straight Path

    Science.gov (United States)

    Woronowicz, Michael

    2016-01-01

    Analytical expressions for column number density (CND) are developed for optical line of sight paths through a variety of steady free molecule point source models including directionally-constrained effusion (Mach number M = 0) and flow from a sonic orifice (M = 1). Sonic orifice solutions are approximate, developed using a fair simulacrum fitted to the free molecule solution. Expressions are also developed for a spherically-symmetric thermal expansion (M = 0). CND solutions are found for the most general paths relative to these sources and briefly explored. It is determined that the maximum CND from a distant location through directed effusion and sonic orifice cases occurs along the path parallel to the source plane that intersects the plume axis. For the effusive case this value is exactly twice the CND found along the ray originating from that point of intersection and extending to infinity along the plume's axis. For sonic plumes this ratio is reduced to about 4/3. For high Mach number cases the maximum CND will be found along the axial centerline path. Keywords: column number density, plume flows, outgassing, free molecule flow.
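The stated factor-of-two relation for the effusive (M = 0) case can be checked numerically, assuming the standard cosine-law effusion density n ∝ cos θ / r² (an assumption; the paper's exact source models are not reproduced here):

```python
import numpy as np

def trapz(y, x):
    # simple trapezoid rule (avoids NumPy-version differences in np.trapz)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

z = 3.0                                        # height of the transverse path above the source
r = np.linspace(z, 5000.0, 1_000_000)          # axial path: intersection point out to "infinity"
x = np.linspace(-5000.0, 5000.0, 1_000_000)    # path parallel to the source plane through the axis

# free-molecule effusive plume: number density ~ cos(theta) / r^2 (unit source strength)
cnd_axial = trapz(1.0 / r**2, r)               # cos(theta) = 1 on the axis
rho = np.sqrt(x**2 + z**2)
cnd_parallel = trapz((z / rho) / rho**2, x)    # cos(theta) = z / rho off-axis

ratio = cnd_parallel / cnd_axial
assert abs(ratio - 2.0) < 1e-2                 # the paper's "exactly twice" for effusion
```

Analytically, the axial integral gives 1/z while the transverse integral gives 2/z, reproducing the ratio of exactly 2 quoted in the abstract.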

  2. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
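The parallel-E idea, computing per-observation posterior quantities independently in each worker and pooling sufficient statistics for the M step, can be sketched on a toy Gaussian mixture (this is the generic pattern only, not the report's latent variable models or its code):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(3, 1, 5000)])
mu, sigma, w = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

def e_step_chunk(chunk):
    # responsibilities for one block of observations: the parallel unit of work
    logp = (-0.5 * ((chunk[:, None] - mu) / sigma) ** 2
            - np.log(sigma) + np.log(w))
    logp -= logp.max(axis=1, keepdims=True)        # stabilize before exponentiating
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)

# parallel E step: split the data, fan out, stack the results back together
chunks = np.array_split(x, 8)
with ThreadPoolExecutor(max_workers=4) as pool:
    resp = np.vstack(list(pool.map(e_step_chunk, chunks)))

# M step (component means) from the pooled sufficient statistics
mu_new = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
assert np.allclose(resp.sum(axis=1), 1.0)
```

Because observations are conditionally independent given the parameters, the E step parallelizes with no communication beyond the final reduction of sufficient statistics, which is what makes the parallel-E parallel-M scheme scale.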

  3. Water hammer with column separation : a historical review

    NARCIS (Netherlands)

    Bergant, A.; Simpson, A.R.; Tijsseling, A.S.

    2006-01-01

    Column separation refers to the breaking of liquid columns in fully filled pipelines. This may occur in a water-hammer event when the pressure in a pipeline drops to the vapor pressure at specific locations such as closed ends, high points or knees (changes in pipe slope). The liquid columns are

  4. Reconstruction of multiple line source attenuation maps

    International Nuclear Information System (INIS)

    Celler, A.; Sitek, A.; Harrop, R.

    1996-01-01

    A simple transmission-source configuration for single photon emission computed tomography (SPECT) was proposed, which utilizes a series of collimated line sources parallel to the camera's axis of rotation. The detector is equipped with a standard parallel-hole collimator. We have demonstrated that this type of source configuration can generate sufficient data for the reconstruction of the attenuation map when using 8-10 line sources spaced by 3.5-4.5 cm for a 30 x 40 cm detector at 65 cm distance from the sources. Transmission data for a nonuniform thorax phantom were simulated, then binned and reconstructed using filtered backprojection (FBP) and iterative methods. The optimum maps are obtained with data binned into 2-3 bins and FBP reconstruction. The activity in the source was investigated for uniform and exponential activity distributions, as was the effect of gaps and overlaps of the neighboring fan beams. A prototype of the line source has been built and experimental verification of the technique has started.

  5. Mixed Map Labeling

    Directory of Open Access Journals (Sweden)

    Maarten Löffler

    2016-12-01

    Full Text Available Point feature map labeling is a geometric visualization problem, in which a set of input points must be labeled with a set of disjoint rectangles (the bounding boxes of the label texts. It is predominantly motivated by label placement in maps but it also has other visualization applications. Typically, labeling models either use internal labels, which must touch their feature point, or external (boundary labels, which are placed outside the input image and which are connected to their feature points by crossing-free leader lines. In this paper we study polynomial-time algorithms for maximizing the number of internal labels in a mixed labeling model that combines internal and external labels. The model requires that all leaders are parallel to a given orientation θ ∈ [0, 2π), the value of which influences the geometric properties and hence the running times of our algorithms.
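In one dimension, choosing a maximum number of pairwise-disjoint labels is the classic interval-scheduling problem, solvable by a greedy sweep; this is only a simplified analogue of the internal-label maximization above (the paper's algorithms handle the richer 2-D mixed model):

```python
def max_disjoint_labels(intervals):
    """Greedy interval scheduling: pick a maximum set of pairwise-disjoint labels.

    Sorting by right endpoint and always taking the earliest-finishing
    compatible interval is provably optimal for this 1-D problem.
    """
    chosen = []
    last_end = float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen

# hypothetical label extents along one axis
labels = [(0, 3), (2, 5), (4, 7), (6, 9), (8, 10)]
picked = max_disjoint_labels(labels)
assert len(picked) == 3        # (0,3), (4,7), (8,10)
```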

  6. HPLC-CUPRAC post-column derivatization method for the determination of antioxidants: a performance comparison between porous silica and core-shell column packing.

    Science.gov (United States)

    Haque, Syed A; Cañete, Socrates Jose P

    2018-01-01

    An HPLC method employing a post-column derivatization strategy with the cupric reducing antioxidant capacity (CUPRAC) reagent for determining antioxidants in plant-based materials leverages the separation capability of regular HPLC approaches while providing detection specificity for antioxidants. Three column types, a porous silica C18 column and two chemically different core-shell columns (phenyl-hexyl and C18), were evaluated to assess the potential improvements attainable by changing from a porous silica matrix to a core-shell matrix. Tea extracts were used as sample matrices for the evaluation, focusing on catechin and epigallocatechin gallate (EGCG). Both the C18 and phenyl-hexyl core-shell columns outperformed the porous silica C18 column in terms of separation, peak shape, and retention time, and of the two core-shell materials the phenyl-hexyl column showed the better resolving power. The CUPRAC post-column derivatization method can thus be improved using core-shell columns and is suitable for quantifying antioxidants, exemplified by catechin and EGCG, in tea samples.

  7. Parallel deposition, sorting, and reordering methods in the Hybrid Ordered Plasma Simulation (HOPS) code

    International Nuclear Information System (INIS)

    Anderson, D.V.; Shumaker, D.E.

    1993-01-01

    From a computational standpoint, particle simulation calculations for plasmas have not adapted well to the transitions from scalar to vector processing nor from serial to parallel environments. They have suffered from inordinate and excessive accessing of computer memory and have been hobbled by relatively inefficient gather-scatter constructs resulting from the use of indirect indexing. Lastly, the many-to-one mapping characteristic of the deposition phase has made it difficult to perform this in parallel. The authors' code sorts and reorders the particles in a spatial order. This allows them to greatly reduce the memory references, to run in directly indexed vector mode, and to employ domain decomposition to achieve parallelization. In this hybrid simulation the electrons are modeled as a fluid and the field equations solved are obtained from the electron momentum equation together with the pre-Maxwell equations (displacement current neglected). Either zero or finite electron mass can be used in the electron model. The resulting field equations are solved with an iteratively explicit procedure which is thus trivial to parallelize. Likewise, the field interpolations and the particle pushing are simple to parallelize. The deposition, sorting, and reordering phases are less simple and it is for these that the authors present detailed algorithms. They have now successfully tested the parallel version of HOPS in serial mode and it is now being readied for parallel execution on the Cray C-90. They will then port HOPS to a massively parallel computer in the next year.
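The sort-then-deposit strategy can be illustrated in NumPy: once particles are ordered by cell index, deposition becomes a segmented sum over contiguous runs, avoiding the many-to-one scatter and letting each contiguous segment map to one domain or processor. A sketch of the idea only, not the HOPS code:

```python
import numpy as np

rng = np.random.default_rng(2)
ncells, npart = 16, 100_000
cell = rng.integers(0, ncells, npart)     # cell index of each particle
charge = rng.random(npart)                # quantity to deposit (e.g. charge)

# naive deposition: many-to-one scatter with indirect indexing
rho_scatter = np.zeros(ncells)
np.add.at(rho_scatter, cell, charge)

# sorted deposition: order particles by cell, then take segmented sums;
# each contiguous segment can be handled independently by one processor
order = np.argsort(cell, kind="stable")
sorted_cell, sorted_q = cell[order], charge[order]
starts = np.searchsorted(sorted_cell, np.arange(ncells + 1))  # segment bounds
csum = np.concatenate(([0.0], np.cumsum(sorted_q)))
rho_sorted = csum[starts[1:]] - csum[starts[:-1]]

assert np.allclose(rho_scatter, rho_sorted)
```

The sorted variant also yields unit-stride memory access within each segment, which is exactly the property the abstract credits for vectorization and parallelization.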

  8. Eigensolution of finite element problems in a completely connected parallel architecture

    Science.gov (United States)

    Akl, Fred A.; Morel, Michael R.

    1989-01-01

    A parallel algorithm for the solution of the generalized eigenproblem in linear elastic finite element analysis, Kφ = MφΩ, where K and M are of order N and Ω is of order q, is presented. The parallel algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm has been successfully implemented on a tightly coupled multiple-instruction-multiple-data (MIMD) parallel processing computer, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macro-tasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18 and 3.61 are achieved on two, four, six and eight processors, respectively.
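The generalized eigenproblem Kφ = MφΩ can be reduced to a standard symmetric eigenproblem via a Cholesky factorization of the mass matrix, the usual serial building block behind subspace methods like the one parallelized here. A small NumPy sketch with random symmetric positive definite matrices standing in for assembled stiffness and mass:

```python
import numpy as np

rng = np.random.default_rng(3)
N, q = 50, 4
A = rng.random((N, N))
K = A @ A.T + N * np.eye(N)      # SPD stand-in for the stiffness matrix
B = rng.random((N, N))
M = B @ B.T + N * np.eye(N)      # SPD stand-in for the mass matrix

# reduce K phi = M phi omega to C y = omega y via Cholesky M = L L^T
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
C = Linv @ K @ Linv.T            # symmetric, shares eigenvalues with the pencil
omega, y = np.linalg.eigh(C)
phi = Linv.T @ y                 # back-transform eigenvectors

# verify the q lowest modes satisfy the generalized eigenproblem
for i in range(q):
    assert np.allclose(K @ phi[:, i], omega[i] * (M @ phi[:, i]), atol=1e-6)
```

In a production finite element code one would use a banded or sparse factorization and subspace iteration rather than a dense full solve, but the reduction step is the same.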

  9. Parallel optical control of spatiotemporal neuronal spike activity using high-frequency digital light processing technology

    Directory of Open Access Journals (Sweden)

    Jason eJerome

    2011-08-01

    Full Text Available Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited due to the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses Digital Light Processing (DLP) technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 µm) and temporal (>13 kHz) resolution. Light is projected through the quartz-glass bottom of the perfusion chamber, providing access to a large area (2.76 × 2.07 mm²) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales.

  10. Application of a Fast Separation Method for Anti-diabetics in Pharmaceuticals Using Monolithic Column: Comparative Study With Silica Based C-18 Particle Packed Column.

    Science.gov (United States)

    Hemdan, A; Abdel-Aziz, Omar

    2018-04-01

    Run time is a predominant factor in HPLC for quality control laboratories, especially when a large number of samples has to be analyzed. High flow rates cannot be attained with silica-based particle-packed columns due to elevated backpressure. The use of a monolithic column as an alternative to a traditional C-18 column was tested for fast separation of pharmaceuticals, and the results were very competitive. The performance of both columns was compared for the separation of an anti-diabetic combination containing Metformin, Pioglitazone and Glimepiride, using Gliclazide as an internal standard. High flow rates with much lower backpressure were obtained with the monolithic column, where the run time was reduced from 6 min on the traditional column to only 1 min with acceptable resolution. The structure of the monolith contains many pores which can accommodate the high flow rate of the mobile phase. Moreover, peak symmetry and equilibration time were better with the monolithic column.

  11. Implementation and analysis of a Navier-Stokes algorithm on parallel computers

    Science.gov (United States)

    Fatoohi, Raad A.; Grosch, Chester E.

    1988-01-01

    The results of the implementation of a Navier-Stokes algorithm on three parallel/vector computers are presented. The object of this research is to determine how well, or poorly, a single numerical algorithm would map onto three different architectures. The algorithm is a compact difference scheme for the solution of the incompressible, two-dimensional, time-dependent Navier-Stokes equations. The computers were chosen so as to encompass a variety of architectures. They are the following: the MPP, an SIMD machine with 16K bit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. The basic comparison is among SIMD instruction parallelism on the MPP, MIMD process parallelism on the Flex/32, and vectorization of a serial code on the Cray/2. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.

  12. A SPECT reconstruction method for extending parallel to non-parallel geometries

    International Nuclear Information System (INIS)

    Wen Junhai; Liang Zhengrong

    2010-01-01

    Due to its simplicity, parallel-beam geometry is usually assumed for the development of image reconstruction algorithms. The established reconstruction methodologies are then extended to fan-beam, cone-beam and other non-parallel geometries for practical application. This situation occurs for quantitative SPECT (single photon emission computed tomography) imaging in inverting the attenuated Radon transform. Novikov reported an explicit parallel-beam formula for the inversion of the attenuated Radon transform in 2000. Thereafter, a formula for fan-beam geometry was reported by Bukhgeim and Kazantsev (2002 Preprint N. 99 Sobolev Institute of Mathematics). At the same time, we presented a formula for varying focal-length fan-beam geometry. Sometimes, the reconstruction formula is so implicit that we cannot obtain the explicit reconstruction formula in the non-parallel geometries. In this work, we propose a unified reconstruction framework for extending parallel-beam geometry to any non-parallel geometry using ray-driven techniques. Studies by computer simulations demonstrated the accuracy of the presented unified reconstruction framework for extending parallel-beam to non-parallel geometries in inverting the attenuated Radon transform.
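A ray-driven projector in the sense used above simply marches along each measurement ray and accumulates interpolated image values, which is what makes the framework geometry-agnostic: only the ray's origin and direction change between parallel, fan and cone beams. A minimal first-order sampling sketch (an illustration, not the paper's inversion formulas):

```python
import numpy as np

def ray_driven_projection(image, x0, y0, dx, dy, step=0.1):
    """March along a ray, summing bilinearly interpolated image samples."""
    h, w = image.shape
    norm = np.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm            # unit direction
    total, t = 0.0, 0.0
    # accumulate until the ray leaves the interpolable region
    while 0 <= x0 + t * dx < w - 1 and 0 <= y0 + t * dy < h - 1:
        x, y = x0 + t * dx, y0 + t * dy
        i, j = int(y), int(x)
        fy, fx = y - i, x - j
        val = (image[i, j] * (1 - fx) * (1 - fy) + image[i, j + 1] * fx * (1 - fy)
               + image[i + 1, j] * (1 - fx) * fy + image[i + 1, j + 1] * fx * fy)
        total += val * step                  # Riemann sum of the line integral
        t += step
    return total

img = np.ones((64, 64))
# a horizontal ray across a unit-valued image: integral ~ traversed path length
p = ray_driven_projection(img, 0.0, 32.0, 1.0, 0.0)
assert abs(p - 63.0) < 1.0
```

The same marching loop evaluates parallel-beam, fan-beam or cone-beam projections; the attenuated transform adds an exponential weight along the ray but does not change the traversal.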

  13. NOAA JPSS Ozone Mapping and Profiler Suite (OMPS) Nadir Profile Science Sensor Data Record (SDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Ozone Mapping and Profiler Suite (OMPS) onboard the Suomi-NPP satellite monitors ozone from space. OMPS will collect total column and vertical profile ozone data...

  14. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  15. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.
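Force decomposition, one of the three molecular dynamics strategies named above, partitions the pair list across processors and then reduces the partial force arrays. A serial mock-up with a toy pair force, shown only to make the decomposition concrete (not a production MD kernel):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60
pos = rng.random((n, 3))                 # hypothetical particle positions

def pair_forces(pairs, pos):
    """Forces from one subset of pairs: the unit of work in force decomposition."""
    f = np.zeros_like(pos)
    for i, j in pairs:
        r = pos[j] - pos[i]
        d = np.linalg.norm(r)
        fij = r / d**3                   # toy inverse-square pair force
        f[i] += fij                      # Newton's third law: equal and opposite
        f[j] -= fij
    return f

all_pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
serial = pair_forces(all_pairs, pos)

# force decomposition: split the pair list over "processors", then reduce
blocks = np.array_split(np.array(all_pairs), 4)
parallel = sum(pair_forces(block, pos) for block in blocks)

assert np.allclose(serial, parallel)
```

Each block of pairs is independent, so the only communication is the final sum of force arrays; spatial decomposition instead partitions particles by region and communicates only across domain boundaries.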

  16. Preparation and evaluation of 400μm I.D. polymer-based hydrophilic interaction chromatography monolithic columns with high column efficiency.

    Science.gov (United States)

    Liu, Chusheng; Li, Haibin; Wang, Qiqin; Crommen, Jacques; Zhou, Haibo; Jiang, Zhengjin

    2017-08-04

    The quest for higher column efficiency is one of the major research areas in polymer-based monolithic column fabrication. In this research, two novel polymer-based HILIC monolithic columns with 400 μm I.D. × 800 μm O.D. were prepared based on the thermally initiated co-polymerization of N,N-dimethyl-N-(3-methacrylamidopropyl)-N-(3-sulfopropyl) ammonium betaine (SPP) and ethylene glycol dimethacrylate (EDMA) or N,N'-methylenebisacrylamide (MBA). In order to obtain a satisfactory performance in terms of column permeability, mechanical stability, efficiency and selectivity, the polymerization parameters were systematically optimized. Column efficiencies as high as 142,000 plates/m and 120,000 plates/m were observed for the analysis of neutral compounds at 0.6 mm/s on the poly(SPP-co-MBA) and poly(SPP-co-EDMA) monoliths, respectively. Furthermore, the Van Deemter plots for thiourea on the two monoliths were compared with that obtained on a commercial silica-based ZIC-HILIC column (3.5 μm, 200 Å, 150 mm × 300 μm I.D.) using ACN/H2O (90/10, v/v) as the mobile phase at room temperature. It was noticeable that the Van Deemter curves for both monoliths, particularly the poly(SPP-co-MBA) monolith, are significantly flatter than that obtained for the ZIC-HILIC column, which indicates that in spite of their larger internal diameters, they yield better overall efficiency, with less peak dispersion, across a much wider range of usable linear velocities. A clearly better separation performance was also observed for nucleobases, nucleosides, nucleotides and small peptides on the poly(SPP-co-MBA) monolith compared to the ZIC-HILIC column. It is particularly worth mentioning that these 400 μm I.D. polymer-based HILIC monolithic columns exhibit enhanced mechanical strength owing to the thicker capillary wall of the fused-silica capillaries. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Easy 3D Mapping for Indoor Navigation of Micro UAVs

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Totu, Luminita Cristiana; La Cour-Harbo, Anders

    2017-01-01

    Indoor operation of micro air vehicles (UAS or UAV) is significantly simplified with the availability of some means of indoor localization as well as a sufficiently precise 3D map of the facility. Creation of 3D maps based on the available architectural information should on the one hand provide... a map of sufficient precision and on the other limit complexity to a manageable level. This paper presents a box-based approach for easy generation of 3D maps to serve as the basis for indoor navigation of UAS. The basic building block employed is a 3D axis-parallel box (APB). Unions of APBs constitute... with arguments for pivotal design choices and a selection of examples....
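A box-based map of this kind supports very cheap point-membership queries: a waypoint is in free space iff it lies inside at least one axis-parallel box. A tiny sketch with hypothetical boxes (the paper's map construction and navigation layers are not reproduced here):

```python
# minimal axis-parallel box (APB) map: free space is a union of boxes
boxes = [
    ((0.0, 0.0, 0.0), (5.0, 4.0, 3.0)),    # (min corner, max corner): a "room"
    ((4.0, 1.0, 0.0), (9.0, 3.0, 3.0)),    # overlapping box modelling a corridor
]

def in_box(p, box):
    """Per-axis interval test: O(1) per box, no geometry library needed."""
    lo, hi = box
    return all(l <= c <= h for c, l, h in zip(p, lo, hi))

def in_free_space(p, boxes):
    """A point is navigable iff some APB in the union contains it."""
    return any(in_box(p, b) for b in boxes)

assert in_free_space((4.5, 2.0, 1.0), boxes)      # inside the overlap region
assert not in_free_space((8.0, 3.5, 1.0), boxes)  # outside both boxes
```

Because every test is a per-axis interval comparison, the representation stays simple enough for onboard use while the union of overlapping boxes captures connectivity between rooms and corridors.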

  18. HEAT TRANSFER ANALYSIS FOR FIXED CST AND RF COLUMNS

    International Nuclear Information System (INIS)

    Lee, S

    2007-01-01

    In support of a small column ion exchange (SCIX) process for the Savannah River Site waste processing program, transient and steady state two-dimensional heat transfer models have been constructed for columns loaded with cesium-saturated crystalline silicotitanate (CST) or spherical Resorcinol-Formaldehyde (RF) beads and 6 molar sodium tank waste supernate. Radiolytic decay of sorbed cesium results in heat generation within the columns. The models consider conductive heat transfer only with no convective cooling and no process flow within the columns (assumed column geometry: 27.375 in ID with a 6.625 in OD center-line cooling pipe). Heat transfer at the column walls was assumed to occur by natural convection cooling with 35 C air. A number of modeling calculations were performed using this computational heat transfer approach. Minimal additional calculations were also conducted to predict temperature increases expected for salt solution processed through columns of various heights at the slowest expected operational flow rate of 5 gpm. Results for the bounding model with no process flow and no active cooling indicate that the time required to reach the boiling point of ∼130 C for a CST-salt solution mixture containing 257 Ci/liter of Cs-137 heat source (maximum expected loading for SCIX applications) at 35 C initial temperature is about 6 days. Modeling results for a column actively cooled with external wall jackets and the internal coolant pipe (inlet coolant water temperature: 25 C) indicate that the CST column can be maintained non-boiling under these conditions indefinitely. The results also show that the maximum temperature of an RF-salt solution column containing 133 Ci/liter of Cs-137 (maximum expected loading) will never reach boiling under any conditions (maximum predicted temperature without cooling: 88 C). The results indicate that a 6-in cooling pipe at the center of the column provides the most effective cooling mechanism for reducing the maximum
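The conduction-with-heat-generation balance underlying these column models can be illustrated on a 1-D slab with uniform volumetric heating, where the steady solution is the classic parabola T(x) = T_wall + q x (L − x) / (2k). A sketch with made-up property values (not the report's SCIX geometry, decay heat loads or 2-D model):

```python
import numpy as np

# steady 1-D slab, uniform volumetric heating q, both walls held at T_wall;
# the numbers below are illustrative, not the CST/RF column parameters
L, k, q, t_wall = 0.5, 0.6, 200.0, 35.0     # m, W/(m K), W/m^3, deg C
n = 101
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

# march an explicit transient conduction scheme to steady state
T = np.full(n, t_wall)
c = 0.25 * dx**2                            # stable pseudo-time step (alpha*dt)
for _ in range(60_000):
    T[1:-1] += c * ((T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2 + q / k)
    T[0] = T[-1] = t_wall                   # cooled-wall boundary condition

T_exact = t_wall + q / (2.0 * k) * x * (L - x)   # analytic steady profile
assert np.max(np.abs(T - T_exact)) < 0.05
```

The peak temperature scales as q L² / (8k), which is why the report's center-line cooling pipe is so effective: it halves the effective conduction length and quarters the temperature rise.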

  19. Novel field emission SEM column with beam deceleration technology

    International Nuclear Information System (INIS)

    Jiruše, Jaroslav; Havelka, Miloslav; Lopour, Filip

    2014-01-01

    A novel field-emission SEM column has been developed that features Beam Deceleration Mode, high-probe current and ultra-fast scanning. New detection system in the column is introduced to detect true secondary electron signal. The resolution power at low energy was doubled for conventional SEM optics and moderately improved for immersion optics. Application examples at low landing energies include change of contrast, imaging of non-conductive samples and thin layers. - Highlights: • A novel field-emission SEM column has been developed. • Implemented beam deceleration improves the SEM resolution at 1 keV two times. • New column maintains high analytical potential and wide field of view. • Detectors integrated in the column allow gaining true SE and BE signal separately. • Performance of the column is demonstrated on low energy applications

  20. Novel field emission SEM column with beam deceleration technology

    Energy Technology Data Exchange (ETDEWEB)

    Jiruše, Jaroslav; Havelka, Miloslav; Lopour, Filip

    2014-11-15

    A novel field-emission SEM column has been developed that features Beam Deceleration Mode, high-probe current and ultra-fast scanning. New detection system in the column is introduced to detect true secondary electron signal. The resolution power at low energy was doubled for conventional SEM optics and moderately improved for immersion optics. Application examples at low landing energies include change of contrast, imaging of non-conductive samples and thin layers. - Highlights: • A novel field-emission SEM column has been developed. • Implemented beam deceleration improves the SEM resolution at 1 keV two times. • New column maintains high analytical potential and wide field of view. • Detectors integrated in the column allow gaining true SE and BE signal separately. • Performance of the column is demonstrated on low energy applications.

  1. The design of a new concept chromatography column.

    Science.gov (United States)

    Camenzuli, Michelle; Ritchie, Harald J; Ladine, James R; Shalliker, R Andrew

    2011-12-21

    Active Flow Management is a new separation technique whereby the flow of mobile phase and the injection of sample are introduced to the column in a manner that allows migration according to the principles of the infinite diameter column. A segmented flow outlet fitting allows for the separation of solvent or solute that elutes along the central radial section of the column from that of the sample or solvent that elutes along the wall region of the column. Separation efficiency on the analytical scale is increased by 25% with an increase in sensitivity by as much as 52% compared to conventional separations.

  2. Inert carriers for column extraction chromatography

    International Nuclear Information System (INIS)

    Katykhin, G.S.

    1978-01-01

    Inert carriers used in column extraction chromatography are reviewed. Such carriers are divided into two large groups: hydrophilic carriers, which possess high surface energy and are well wetted only by strongly polar liquids (kieselguhrs, silica gels, glasses, cellulose, Al2O3), and water-repellent carriers, which possess low surface energy and are well wetted by various organic solvents (polyethylene, polytetrafluoroethylene, polytrifluorochloroethylene). Properties of the various carriers are presented: structure, chemical and radiation stability, adsorption properties, and extractant capacity. The effect of the structure and size of the particles on the efficiency of chromatography columns is considered, along with ways of depositing the stationary phase on the carrier and regenerating the carrier afterwards. Peculiarities of column packing for preparative and continuous chromatography are discussed.

  3. Representing and computing regular languages on massively parallel networks

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M.I.; O'Sullivan, J.A. (Electronic Systems and Research Lab., Dept. of Electrical Engineering, Washington Univ., St. Louis, MO (US)); Boysam, B. (Dept. of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Inst., Troy, NY (US)); Smith, K.R. (Dept. of Electrical Engineering, Southern Illinois Univ., Edwardsville, IL (US))

    1991-01-01

    This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes Shannon's original work on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum-entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor, consisting of 1024 mesh-connected bit-serial processing elements, for performing automated segmentation of electron-micrograph images.

  4. CloudAligner: A fast and full-featured MapReduce based tool for sequence mapping

    Directory of Open Access Journals (Sweden)

    Shi Weisong

    2011-06-01

    Full Text Available Abstract Background Research in genetics has developed rapidly recently due to the aid of next generation sequencing (NGS). However, massively-parallel NGS produces enormous amounts of data, which leads to storage, compatibility, scalability, and performance issues. The Cloud Computing and MapReduce framework, which utilizes hundreds or thousands of shared computers to map sequencing reads quickly and efficiently to reference genome sequences, appears to be a very promising solution for these issues. Consequently, it has been adopted by many organizations recently, and the initial results are very promising. However, since these are only initial steps toward this trend, the developed software does not provide adequate primary functions like bisulfite, pair-end mapping, etc., in on-site software such as RMAP or BS Seeker. In addition, existing MapReduce-based applications were not designed to process the long reads produced by the most recent second-generation and third-generation NGS instruments and, therefore, are inefficient. Last, it is difficult for a majority of biologists untrained in programming skills to use these tools because most were developed on Linux with a command line interface. Results To urge the trend of using Cloud technologies in genomics and prepare for advances in second- and third-generation DNA sequencing, we have built a Hadoop MapReduce-based application, CloudAligner, which achieves higher performance, covers most primary features, is more accurate, and has a user-friendly interface. It was also designed to be able to deal with long sequences. The performance gain of CloudAligner over Cloud-based counterparts (35 to 80%) mainly comes from the omission of the reduce phase. In comparison to local-based approaches, the performance gain of CloudAligner is from the partition and parallel processing of the huge reference genome as well as the reads. The source code of CloudAligner is available at http

  5. CloudAligner: A fast and full-featured MapReduce based tool for sequence mapping.

    Science.gov (United States)

    Nguyen, Tung; Shi, Weisong; Ruden, Douglas

    2011-06-06

    Research in genetics has developed rapidly recently due to the aid of next generation sequencing (NGS). However, massively-parallel NGS produces enormous amounts of data, which leads to storage, compatibility, scalability, and performance issues. The Cloud Computing and MapReduce framework, which utilizes hundreds or thousands of shared computers to map sequencing reads quickly and efficiently to reference genome sequences, appears to be a very promising solution for these issues. Consequently, it has been adopted by many organizations recently, and the initial results are very promising. However, since these are only initial steps toward this trend, the developed software does not provide adequate primary functions like bisulfite, pair-end mapping, etc., in on-site software such as RMAP or BS Seeker. In addition, existing MapReduce-based applications were not designed to process the long reads produced by the most recent second-generation and third-generation NGS instruments and, therefore, are inefficient. Last, it is difficult for a majority of biologists untrained in programming skills to use these tools because most were developed on Linux with a command line interface. To urge the trend of using Cloud technologies in genomics and prepare for advances in second- and third-generation DNA sequencing, we have built a Hadoop MapReduce-based application, CloudAligner, which achieves higher performance, covers most primary features, is more accurate, and has a user-friendly interface. It was also designed to be able to deal with long sequences. The performance gain of CloudAligner over Cloud-based counterparts (35 to 80%) mainly comes from the omission of the reduce phase. In comparison to local-based approaches, the performance gain of CloudAligner is from the partition and parallel processing of the huge reference genome as well as the reads. The source code of CloudAligner is available at http://cloudaligner.sourceforge.net/ and its web version is at http
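The map-only pattern that the abstract credits for much of CloudAligner's speedup (each read aligns independently, so no reduce phase is needed) can be sketched in a few lines. The exact-seed matcher below is a hypothetical simplification of this idea; the real tool also handles mismatches, bisulfite reads, and pair-end mapping.

```python
# Sketch of a map-only alignment pass in the spirit of CloudAligner's
# design (reduce phase omitted). Exact seed matching is a deliberate
# simplification, not the tool's actual algorithm.
from collections import defaultdict

def build_seed_index(reference, k=4):
    """Index every k-mer of the reference by its start positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def map_read(read, reference, index, k=4):
    """Mapper: emit exact-match positions for one read. Each read's
    result is independent, so no reduce step is required."""
    return [pos for pos in index.get(read[:k], [])
            if reference[pos:pos + len(read)] == read]

reference = "ACGTACGTGGACGT"
index = build_seed_index(reference)
print(map_read("ACGTGG", reference, index))  # → [4]
```

In an actual Hadoop job, `map_read` would be the body of the `Mapper` and the output would be written directly, with the number of reducers set to zero.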

  6. Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems

    International Nuclear Information System (INIS)

    BAER, THOMAS A.; SACKINGER, PHILIP A.; SUBIA, SAMUEL R.

    1999-01-01

Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations and a "pseudo-solid" mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Other issues discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to equally distribute computational work across the processors of an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large scale systems. Parallel computations will be demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speed-ups for fixed problem size, a class of problems of immediate practical importance.

  7. Monitoring changes in soil water content on adjustable soil slopes of a soil column using time domain reflectometry (TDR) techniques

    International Nuclear Information System (INIS)

    Wan Zakaria Wan Muhd Tahir; Lakam Anak Mejus; Johari Abdul Latif

    2004-01-01

Time Domain Reflectometry (TDR) is one of the non-destructive methods widely used in hydrology and soil science for accurate and flexible measurement of soil water content. The TDR technique is based on measuring the dielectric constant of soil from the propagation of an electromagnetic pulse traveling along installed probe rods (parallel wire transmission line). An adjustable soil column, i.e., 80 cm (L) x 35 cm (H) x 44 cm (W), instrumented with six pairs of vertically installed CS615 reflectometer probes (TDR rods), was developed and wetted under laboratory-simulated rainfall, and the sub-surface moisture variations as the slope changed were monitored using the TDR method. Soil samples for gravimetric determination of water content, converted to a volume basis, were taken at selected times and locations after the final TDR reading for every slope change made to the soil column. Comparisons of water contents by TDR with those from gravimetric samples at different slopes of the soil column were examined. The accuracy was found to be comparable and to some extent dependent upon the variability of the soil. This study also suggests that the response of slope (above 20 degrees) to the gradual increase in water content profile may cause faster soil saturation and increased overland flow (runoff), especially on weak soil conditions.
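The conversion from a TDR-measured apparent dielectric constant to volumetric water content is commonly done with the empirical Topp et al. (1980) calibration. The study does not state which calibration was used, so the function below is purely illustrative.

```python
# Volumetric water content from the apparent dielectric constant Ka,
# using the widely cited empirical Topp et al. (1980) polynomial.
# Site-specific calibrations may differ.
def topp_water_content(ka):
    """Return volumetric water content (m3/m3) for dielectric constant Ka."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3

# Dry sand (Ka ~ 3) versus a wet soil (Ka ~ 25):
print(round(topp_water_content(3.0), 3))   # roughly 0.03 m3/m3
print(round(topp_water_content(25.0), 3))  # roughly 0.40 m3/m3
```

This is how a pair of CS615 waveguide readings would be reduced to the water contents that are compared against the gravimetric samples.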

  8. Dynamic effects of diabatization in distillation columns

    DEFF Research Database (Denmark)

    Bisgaard, Thomas; Huusom, Jakob Kjøbsted; Abildskov, Jens

    2013-01-01

The dynamic effects of diabatization in distillation columns are investigated in simulation, emphasizing the heat-integrated distillation column (HIDiC). A generic, dynamic, first-principle model has been formulated, which is flexible enough to describe various diabatic distillation configurations. Dynamic Relative Gain Array and Singular Value Analysis have been applied in a comparative study of a conventional distillation column and a HIDiC. The study showed increased input-output coupling due to diabatization. Feasible SISO control structures for the HIDiC were also found and control...

  9. Dynamic Effects of Diabatization in Distillation Columns

    DEFF Research Database (Denmark)

    Bisgaard, Thomas; Huusom, Jakob Kjøbsted; Abildskov, Jens

    2012-01-01

The dynamic effects of diabatization in distillation columns are investigated in simulation with primary focus on the heat-integrated distillation column (HIDiC). A generic, dynamic, first-principle model has been formulated, which is flexible enough to describe various diabatic distillation configurations. Dynamic Relative Gain Array and Singular Value Analysis have been applied in a comparative study of a conventional distillation column and a HIDiC. The study showed increased input-output coupling due to diabatization. Feasible SISO control structures for the HIDiC were also found. Control...

  10. Fringing-field effects in acceleration columns

    International Nuclear Information System (INIS)

    Yavor, M.I.; Weick, H.; Wollnik, H.

    1999-01-01

    Fringing-field effects in acceleration columns are investigated, based on the fringing-field integral method. Transfer matrices at the effective boundaries of the acceleration column are obtained, as well as the general transfer matrix of the region separating two homogeneous electrostatic fields with different field strengths. The accuracy of the fringing-field integral method is investigated

  11. The First Global Geological Map of Mercury

    Science.gov (United States)

    Prockter, L. M.; Head, J. W., III; Byrne, P. K.; Denevi, B. W.; Kinczyk, M. J.; Fassett, C.; Whitten, J. L.; Thomas, R.; Ernst, C. M.

    2015-12-01

Geological maps are tools with which to understand the distribution and age relationships of surface geological units and structural features on planetary surfaces. Regional and limited global mapping of Mercury has already yielded valuable science results, elucidating the history and distribution of several types of units and features, such as regional plains, tectonic structures, and pyroclastic deposits. To date, however, no global geological map of Mercury exists, and there is currently no commonly accepted set of standardized unit descriptions and nomenclature. With MESSENGER monochrome image data, we are undertaking the global geological mapping of Mercury at the 1:15M scale, applying standard U.S. Geological Survey mapping guidelines. This map will enable the development of the first global stratigraphic column of Mercury, will facilitate comparisons among surface units distributed discontinuously across the planet, and will provide guidelines for mappers so that future mapping efforts will be consistent and broadly interpretable by the scientific community. To date we have incorporated three major datasets into the global geological map: smooth plains units, tectonic structures, and impact craters and basins >20 km in diameter. We have classified most of these craters by relative age on the basis of the state of preservation of morphological features and standard classification schemes first applied to Mercury by the Mariner 10 imaging team. Additional datasets to be incorporated include intercrater plains units and crater ejecta deposits. In some regions MESSENGER color data are used to supplement the monochrome data, to help distinguish different plains units. The final map will be published online, together with a peer-reviewed publication. Further, a digital version of the map, containing individual map layers, will be made publicly available for use within geographic information systems (GISs).

  12. Column properties and flow profiles of a flat, wide column for high-pressure liquid chromatography.

    Science.gov (United States)

    Mriziq, Khaled S; Guiochon, Georges

    2008-04-11

The design and the construction of a pressurized, flat, wide column for high-performance liquid chromatography (HPLC) are described. This apparatus, which is derived from instruments that implement over-pressured thin layer chromatography, can carry out only uni-dimensional chromatographic separations. However, it is intended to be the first step in the development of more powerful instruments that will be able to carry out two-dimensional chromatographic separations, in which case, the first separation would be a space-based separation, LC(x), taking place along one side of the bed and the second separation would be a time-based separation, LC(t), as in classical HPLC but proceeding along the flat column, not along a tube. The apparatus described consists of a pressurization chamber made of a Plexiglas block and a column chamber made of stainless steel. These two chambers are separated by a thin Mylar membrane. The column chamber is a cavity which is filled with a thick layer (ca. 1 mm) of the stationary phase. Suitable solvent inlet and outlet ports are located on two opposite sides of the sorbent layer. The design allows the preparation of a homogeneous sorbent layer suitable to be used as a chromatographic column, the achievement of effective seals of the stationary phase layer against the chamber edges, and the homogeneous flow of the mobile phase along the chamber. The entire width of the sorbent layer area can be used to develop separations or elute samples. The reproducible performance of the apparatus is demonstrated by the chromatographic separations of different dyes. This instrument is essentially designed for testing detector arrays to be used in a two-dimensional LC(x) x LC(t) instrument. The further development of two-dimensional separation chromatographs based on the apparatus described is sketched.

  13. Cesium ion exchange using actual waste: Column size considerations

    International Nuclear Information System (INIS)

    Brooks, K.P.

    1996-04-01

It is presently planned to remove cesium from Hanford tank waste supernates and sludge wash solutions using ion exchange. To support the development of a cesium ion exchange process, laboratory experiments produced column breakthrough curves using waste simulants in 200 mL columns. To verify the validity of the simulant tests, column runs with actual supernatants are being planned. The purpose of these actual waste tests is two-fold. First, the tests will verify that use of the simulant accurately reflects the equilibrium and rate behavior of the resin compared to actual wastes. Batch tests and column tests will be used to compare equilibrium behaviors and rate behaviors, respectively. Second, the tests will assist in clarifying the negative interactions between the actual waste and the ion exchange resin, which cannot be effectively tested with simulant. Such interactions include organic fouling of the resin and salt precipitation in the column. These effects may affect the shape of the column breakthrough curve. The reduction in column size also may change the shape of the curve, making the individual effects even more difficult to sort out. To simplify the evaluation, the changes due to column size must be either understood or eliminated. This report describes the determination of the column size for actual waste testing that best minimizes the effect of scale-down. This evaluation will provide a theoretical basis for the dimensions of the column. Experimental testing is still required before the final decision can be made. This evaluation is confined to the study of CS-100 and R-F resins with NCAW simulant and, to a limited extent, DSSF waste simulant. Only the cesium loading phase has been considered.

  14. A novel gridding algorithm to create regional trace gas maps from satellite observations

    Science.gov (United States)

    Kuhlmann, G.; Hartl, A.; Cheung, H. M.; Lam, Y. F.; Wenig, M. O.

    2014-02-01

    The recent increase in spatial resolution for satellite instruments has made it feasible to study distributions of trace gas column densities on a regional scale. For this application a new gridding algorithm was developed to map measurements from the instrument's frame of reference (level 2) onto a longitude-latitude grid (level 3). The algorithm is designed for the Ozone Monitoring Instrument (OMI) and can easily be employed for similar instruments - for example, the upcoming TROPOspheric Monitoring Instrument (TROPOMI). Trace gas distributions are reconstructed by a continuous parabolic spline surface. The algorithm explicitly considers the spatially varying sensitivity of the sensor resulting from the instrument function. At the swath edge, the inverse problem of computing the spline coefficients is very sensitive to measurement errors and is regularised by a second-order difference matrix. Since this regularisation corresponds to the penalty term for smoothing splines, it similarly attenuates the effect of measurement noise over the entire swath width. Monte Carlo simulations are conducted to study the performance of the algorithm for different distributions of trace gas column densities. The optimal weight of the penalty term is found to be proportional to the measurement uncertainty and the width of the instrument function. A comparison with an established gridding algorithm shows improved performance for small to moderate measurement errors due to better parametrisation of the distribution. The resulting maps are smoother and extreme values are more accurately reconstructed. The performance improvement is further illustrated with high-resolution distributions obtained from a regional chemistry model. The new algorithm is applied to tropospheric NO2 column densities measured by OMI. Examples of regional NO2 maps are shown for densely populated areas in China, Europe and the United States of America. This work demonstrates that the newly developed gridding

  15. A novel gridding algorithm to create regional trace gas maps from satellite observations

    Directory of Open Access Journals (Sweden)

    G. Kuhlmann

    2014-02-01

Full Text Available The recent increase in spatial resolution for satellite instruments has made it feasible to study distributions of trace gas column densities on a regional scale. For this application a new gridding algorithm was developed to map measurements from the instrument's frame of reference (level 2) onto a longitude–latitude grid (level 3). The algorithm is designed for the Ozone Monitoring Instrument (OMI) and can easily be employed for similar instruments – for example, the upcoming TROPOspheric Monitoring Instrument (TROPOMI). Trace gas distributions are reconstructed by a continuous parabolic spline surface. The algorithm explicitly considers the spatially varying sensitivity of the sensor resulting from the instrument function. At the swath edge, the inverse problem of computing the spline coefficients is very sensitive to measurement errors and is regularised by a second-order difference matrix. Since this regularisation corresponds to the penalty term for smoothing splines, it similarly attenuates the effect of measurement noise over the entire swath width. Monte Carlo simulations are conducted to study the performance of the algorithm for different distributions of trace gas column densities. The optimal weight of the penalty term is found to be proportional to the measurement uncertainty and the width of the instrument function. A comparison with an established gridding algorithm shows improved performance for small to moderate measurement errors due to better parametrisation of the distribution. The resulting maps are smoother and extreme values are more accurately reconstructed. The performance improvement is further illustrated with high-resolution distributions obtained from a regional chemistry model. The new algorithm is applied to tropospheric NO2 column densities measured by OMI. Examples of regional NO2 maps are shown for densely populated areas in China, Europe and the United States of America. This work demonstrates that the newly
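The second-order difference regularisation described in these abstracts behaves like the smoothing-spline penalty. A one-dimensional toy version (pure Python, not the authors' implementation; the actual algorithm fits a parabolic spline surface weighted by the instrument function) makes the mechanics concrete: minimise ||x - y||^2 + lam * ||D2 x||^2, which leads to the linear system (I + lam * D2^T D2) x = y.

```python
# Toy 1-D version of the penalised fit. Each row of the second-order
# difference matrix D2 holds the stencil 1, -2, 1; the penalty weight
# lam plays the role of the smoothing-spline parameter in the abstract.
def solve(a, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def smooth(y, lam):
    """Return the minimiser of ||x - y||^2 + lam * ||D2 x||^2."""
    n = len(y)
    a = [[float(i == j) for j in range(n)] for i in range(n)]
    for r in range(n - 2):
        d = [0.0] * n
        d[r], d[r + 1], d[r + 2] = 1.0, -2.0, 1.0
        for i in range(n):
            if d[i]:
                for j in range(n):
                    if d[j]:
                        a[i][j] += lam * d[i] * d[j]
    return solve(a, [float(v) for v in y])

print(smooth([0.0, 1.0, 0.0], 0.0))  # no penalty: data returned unchanged
```

With `lam = 1e6` the same data relax towards their second-difference-free trend (all three values near 1/3), which is exactly how the penalty attenuates measurement noise at the swath edge.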

  16. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.

  17. Stability of embankments over cement deep soil mixing columns

    International Nuclear Information System (INIS)

    Morilla Moar, P.; Melentijevic, S.

    2014-01-01

The deep soil mixing (DSM) method is one of the ground improvement techniques used for the construction of embankments over soft soils. DSM column-supported embankments are constructed over soft soils to accelerate construction, improve embankment stability, increase bearing capacity, and control total and differential settlements. There are two traditional design methods, the Japanese (rigid columns) and the Scandinavian (soft and semi-rigid columns). Based on laboratory and numerical analyses, these traditional approaches have been questioned by several authors because they overestimate embankment stability, as the most common failure types are not considered. This paper presents a brief review of traditional design methods for embankments on DSM columns constructed in soft soils; studies carried out to determine the most likely failure types of DSM columns; methods to reduce the overestimation when using limit equilibrium methods; and numerical analysis methods that can detect the appropriate failure modes in DSM columns. Finally, a case study was assessed using both limit equilibrium and finite element methods, which confirmed the overestimation of the factors of safety for embankment stability over DSM columns. (Author)

  18. Vertebral Column Resection for Rigid Spinal Deformity.

    Science.gov (United States)

    Saifi, Comron; Laratta, Joseph L; Petridis, Petros; Shillingford, Jamal N; Lehman, Ronald A; Lenke, Lawrence G

    2017-05-01

Broad narrative review. To review the evolution, operative technique, outcomes, and complications associated with posterior vertebral column resection. A literature review of posterior vertebral column resection was performed. The authors' surgical technique is outlined in detail. The authors' experience and the literature regarding vertebral column resection are discussed at length. Treatment of severe, rigid coronal and/or sagittal malalignment with posterior vertebral column resection results in approximately 50-70% correction depending on the type of deformity. Surgical site infection rates range from 2.9% to 9.7%. Transient and permanent neurologic injury rates range from 0% to 13.8% and 0% to 6.3%, respectively. Although there are significant variations in estimated blood loss (EBL) throughout the literature, it can be minimized by utilizing tranexamic acid intraoperatively. The ability to correct a rigid deformity in the spine relies on osteotomies. Each osteotomy is associated with a particular magnitude of correction at a single level. Posterior vertebral column resection is the most powerful posterior osteotomy method providing a successful correction of fixed complex deformities. Despite meticulous surgical technique and precision, this robust osteotomy technique can be associated with significant morbidity even in the most experienced hands.

  19. Effect of backmixing on pulse column performance

    International Nuclear Information System (INIS)

    Miao, Y.W.

    1979-05-01

A critical survey of the published literature concerning dispersed phase holdup and longitudinal mixing in pulsed sieve-plate extraction columns has been made to assess the present state-of-the-art in predicting these two parameters, both of which are of critical importance in the development of an accurate mathematical model of the pulse column. Although there are many conflicting correlations of these variables as a function of column geometry, operating conditions, and physical properties of the liquid systems involved, it has been possible to develop new correlations which appear to be useful and which are consistent with much of the available data over the limited range of variables most likely to be encountered in plant-sized equipment. The correlations developed were used in a stagewise model of the pulse column to predict product concentrations, solute inventory, and concentration profiles in a column for which limited experimental data were available. Reasonable agreement was obtained between the mathematical model and the experimental data. Complete agreement, however, can only be obtained after a correlation for the extraction efficiency has been developed. The correlation of extraction efficiency was beyond the scope of this work.

  20. Interaction diagrams for composite columns exposed to fire

    Directory of Open Access Journals (Sweden)

    Milanović Milivoje

    2014-01-01

Full Text Available The bearing capacity of the cross section of a composite column under fire conditions was analyzed in this paper through changes in the 'bending moment-axial force' interaction diagram. The M-N interaction diagram presents the relationship between the intensities of the bending moment and the axial force as actions on the column cross section, or the relationship between the design value of the plastic resistance to axial compression of the total cross-section Npl,Rd and the design value of the bending moment resistance Mpl,Rd. It is well known that a temperature increase causes a decrease of the load-bearing characteristics of the constitutive materials. This effect directly reflects on the reduction of the axial force and the bending moment that can be accepted by the column cross section. Interaction diagrams were defined for three different types of column cross sections at five different maximum temperatures developed during the fire. For that purpose the software package SAFIR was used. The columns, materials and load characteristics, as well as all other terms and conditions, were taken in accordance with the relevant Eurocodes and the theory of composite columns.
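The shrinking of the M-N capacity domain with temperature can be illustrated with a deliberately crude sketch. The reduction factor `k_theta` and all numbers below are made-up placeholders, and the linear boundary N/Npl + M/Mpl <= 1 is a simplification of the polygonal interaction diagram that SAFIR actually computes.

```python
# Hypothetical illustration: the same load pair passes the capacity check
# at ambient strength but fails once fire-reduced material resistances
# shrink the interaction domain.
def capacity_ok(n_ed, m_ed, npl_rd, mpl_rd, k_theta):
    """Check a load pair (N_Ed, M_Ed) against a temperature-reduced
    linear interaction boundary (crude stand-in for the real diagram)."""
    return n_ed / (k_theta * npl_rd) + m_ed / (k_theta * mpl_rd) <= 1.0

for k in (1.0, 0.8, 0.5):  # decreasing strength with rising temperature
    print(k, capacity_ok(1200.0, 150.0, 3000.0, 400.0, k))
```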

  1. Recovery of deuterium from H-D gas mixture by thermal diffusion in a multi-concentric-tube column device of fixed total sum of column heights with transverse sampling streams

    International Nuclear Information System (INIS)

    Yeh, H.-M.

    2010-01-01

The effect of the increment in the number of concentric-tube thermal diffusion columns on the recovery of deuterium from the H2-HD-D2 system with a fixed total sum of column heights has been investigated. The equations for predicting the degrees of separation in single-column, double-column and triple-column devices have been derived. Considerable improvement in recovery can be achieved if a multi-column device with a larger number of columns is employed, instead of a single-column device with column height equal to the same total sum of column heights, especially for the case of higher flow-rate operation and larger total sum of column heights.

  2. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with the efforts made. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  3. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  4. Fast Mapping Across Time: Memory Processes Support Children's Retention of Learned Words

    Directory of Open Access Journals (Sweden)

Haley Vlach

    2012-02-01

Full Text Available Children's remarkable ability to map linguistic labels to objects in the world is referred to as fast mapping. The current study examined children's (N = 216) and adults' (N = 54) retention of fast-mapped words over time (immediately, after a 1-week delay, and after a 1-month delay). The fast mapping literature often characterizes children's retention of words as consistently high across timescales. However, the current study demonstrates that learners forget word mappings at a rapid rate. Moreover, these patterns of forgetting parallel forgetting functions of domain general memory processes. Memory processes are critical to children's word learning and the role of one such process, forgetting, is discussed in detail: forgetting supports both word mapping and the generalization of words and categories.

  5. Fast Mapping Across Time: Memory Processes Support Children's Retention of Learned Words.

    Science.gov (United States)

    Vlach, Haley A; Sandhofer, Catherine M

    2012-01-01

    Children's remarkable ability to map linguistic labels to referents in the world is commonly called fast mapping. The current study examined children's (N = 216) and adults' (N = 54) retention of fast-mapped words over time (immediately, after a 1-week delay, and after a 1-month delay). The fast mapping literature often characterizes children's retention of words as consistently high across timescales. However, the current study demonstrates that learners forget word mappings at a rapid rate. Moreover, these patterns of forgetting parallel forgetting functions of domain-general memory processes. Memory processes are critical to children's word learning and the role of one such process, forgetting, is discussed in detail - forgetting supports extended mapping by promoting the memory and generalization of words and categories.

  6. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    Science.gov (United States)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.
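A minimal sketch of the genetic encoding such mappers typically use: a chromosome is a per-task node assignment and fitness is the makespan (the most loaded node's finish time). The paper's hierarchical scheduler tree and distributed fitness evaluation are not modelled here, and all task costs and node speeds are hypothetical.

```python
# Toy genetic algorithm for mapping tasks onto heterogeneous nodes.
# Elitist survivors, one-point crossover, and point mutation; a sketch
# of the general approach, not the authors' algorithm.
import random

def makespan(assign, task_cost, node_speed):
    """Fitness: finish time of the most loaded node."""
    load = [0.0] * len(node_speed)
    for task, node in enumerate(assign):
        load[node] += task_cost[task] / node_speed[node]
    return max(load)

def ga_map(task_cost, node_speed, pop_size=30, gens=60, seed=0):
    rng = random.Random(seed)
    n_tasks, n_nodes = len(task_cost), len(node_speed)
    pop = [[rng.randrange(n_nodes) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: makespan(a, task_cost, node_speed))
        pop = pop[:pop_size // 2]              # elitist survivors
        while len(pop) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)
            cut = rng.randrange(1, n_tasks)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            if rng.random() < 0.3:             # point mutation
                child[rng.randrange(n_tasks)] = rng.randrange(n_nodes)
            pop.append(child)
    return min(pop, key=lambda a: makespan(a, task_cost, node_speed))
```

Distributing the fitness evaluations of the population across grid nodes is what turns this serial loop into the scalable scheme the abstract describes.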

  7. Enrichment of heavy water in thermal-diffusion columns connected in series

    International Nuclear Information System (INIS)

    Yeh, Ho-Ming; Chen, Liu Yi

    2009-01-01

The separation equations for the enrichment of heavy water from a water isotope mixture by thermal diffusion in multiple columns connected in series have been derived, based on the single-column design developed in previous work. An improvement in separation is achievable by operating a double-column device, instead of a single-column device, with the same total column length. It is also found that further improvement in separation is obtainable if a triple-column device is employed, except when operating with a small total column length and low flow rate.

  8. Multi-Column Experimental Test Bed for Xe/Kr Separation

    International Nuclear Information System (INIS)

    Greenhalgh, Mitchell Randy; Garn, Troy Gerry; Welty, Amy Keil; Lyon, Kevin Lawrence; Watson, Tony Leroy

    2015-01-01

Previous research studies have shown that INL-developed engineered form sorbents are capable of capturing both Kr and Xe from various composite gas streams. The previous experimental test bed provided single column testing for capacity evaluations over a broad temperature range. To advance research capabilities, the employment of an additional column to study selective capture of target species to provide a defined final gas composition for waste storage was warranted. The second column addition also allows for compositional analyses of the final gas product to provide for final storage determinations. The INL krypton capture system was modified by adding an additional adsorption column in order to create a multi-column test bed. The purpose of this modification was to investigate the separation of xenon from krypton supplied as a mixed gas feed. The extra column was placed in a Stirling Ultra-low Temperature Cooler, capable of controlling temperatures between 190 and 253 K. Additional piping and valves were incorporated into the system to allow for a variety of flow path configurations. The new column was filled with the AgZ-PAN sorbent which was utilized as the capture medium for xenon while allowing the krypton to pass through. The xenon-free gas stream was then routed to the cryostat filled with the HZ-PAN sorbent to capture the krypton at 191 K. Selectivities of xenon over krypton were determined using the new column to verify the system performance and to establish the operating conditions required for multi-column testing. Results of these evaluations verified that the system was operating as designed and also demonstrated that AgZ-PAN exhibits excellent selectivity for xenon over krypton in air at or near room temperature. Two separation tests were performed utilizing a feed gas consisting of 1000 ppmv xenon and 150 ppmv krypton with the balance being made up of air. The AgZ-PAN temperature was held at 295 or 253 K while the HZ-PAN was held at 191 K for both
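The figure of merit in such evaluations is typically the ideal selectivity: the ratio of adsorbed-phase to gas-phase mole ratios. The capacity values in the example below are placeholders for illustration, not measured INL data.

```python
# Ideal Xe/Kr selectivity from adsorbed-phase capacities (q, any
# consistent unit) and gas-phase mole fractions (y). Capacities here are
# made-up placeholders.
def selectivity(q_xe, q_kr, y_xe, y_kr):
    """alpha(Xe/Kr) = (q_Xe / q_Kr) / (y_Xe / y_Kr)."""
    return (q_xe / q_kr) / (y_xe / y_kr)

# Feed from the tests above: 1000 ppmv Xe, 150 ppmv Kr
print(round(selectivity(q_xe=0.30, q_kr=0.02, y_xe=1000e-6, y_kr=150e-6), 2))  # 2.25
```

A value above 1 means the sorbent holds proportionally more xenon than the feed supplies, which is the behavior reported for AgZ-PAN.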

  9. Distributed Parallel Endmember Extraction of Hyperspectral Data Based on Spark

    Directory of Open Access Journals (Sweden)

    Zebin Wu

    2016-01-01

    Full Text Available Due to the increasing dimensionality and volume of remotely sensed hyperspectral data, the development of acceleration techniques for massive hyperspectral image analysis approaches is a very important challenge. Cloud computing offers many possibilities of distributed processing of hyperspectral datasets. This paper proposes a novel distributed parallel endmember extraction method based on iterative error analysis that utilizes cloud computing principles to efficiently process massive hyperspectral data. The proposed method takes advantage of technologies including the MapReduce programming model, the Hadoop Distributed File System (HDFS), and Apache Spark to realize a distributed parallel implementation of hyperspectral endmember extraction, which significantly accelerates the computation of hyperspectral processing and provides high-throughput access to large hyperspectral data. The experimental results, obtained by extracting endmembers of hyperspectral datasets on a cloud computing platform built on a cluster, demonstrate the effectiveness and computational efficiency of the proposed method.
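    Iterative error analysis itself is simple to state: start from the mean spectrum, and repeatedly add as the next endmember the pixel that is worst reconstructed by the current endmember set. The Spark version parallelizes the per-pixel residual computation over data partitions; the following is a minimal serial sketch under that reading (our own illustrative code and function names, not the paper's implementation):

```python
import numpy as np

def iea_endmembers(pixels, k):
    """Iterative error analysis sketch.

    pixels: (n_pixels, n_bands) array. Returns the indices of the k
    selected endmember pixels and their spectra.
    """
    E = [pixels.mean(axis=0)]          # seed with the mean spectrum
    endmember_idx = []
    for _ in range(k):
        A = np.stack(E, axis=1)        # bands x current endmembers
        # Unconstrained least-squares abundances for every pixel at once.
        coeffs, *_ = np.linalg.lstsq(A, pixels.T, rcond=None)
        residual = pixels.T - A @ coeffs
        errors = np.einsum("bn,bn->n", residual, residual)
        i = int(np.argmax(errors))     # worst-reconstructed pixel
        endmember_idx.append(i)
        E.append(pixels[i])
    return endmember_idx, np.stack(E[1:])
```

A distributed variant would broadcast the current endmember matrix, evaluate `errors` independently on each partition, and reduce only the per-partition argmax, which is the map/reduce structure the paper exploits.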

  10. Mapping brain structure and function: cellular resolution, global perspective.

    Science.gov (United States)

    Zupanc, Günther K H

    2017-04-01

    A comprehensive understanding of the brain requires analysis from a global perspective, yet with cellular, and even subcellular, resolution. An important step towards this goal involves the establishment of three-dimensional high-resolution brain maps, incorporating brain-wide information about the cells and their connections, as well as the chemical architecture. The progress made in such anatomical brain mapping in recent years has been paralleled by the development of physiological techniques that enable investigators to generate global neural activity maps, also with cellular resolution, while simultaneously recording the organism's behavioral activity. Combination of the high-resolution anatomical and physiological maps, followed by theoretical systems analysis of the deduced network, will offer unprecedented opportunities for a better understanding of how the brain, as a whole, processes sensory information and generates behavior.

  11. Global Dimming and Brightening Versus Atmospheric Column Transparency, Europe 1906-2007

    Energy Technology Data Exchange (ETDEWEB)

    Ohvril, H.; Teral, H.; Neiman, L.; Kannel, Martin; Uustare, M.; Tee, M.; Russak, V.; Okulov, O.; Joeveer, A.; Kallis, A.; Ohvril, Tiiu; Terez, E.; Terez, G.; Gushchin, G.; Abakumova, G. M.; Gorbarenko, Ekaterina V.; Tsvetkov, Anatoly V.; Laulainen, Nels S.

    2009-05-09

    Multiannual changes in atmospheric column transparency based on measurements of direct solar radiation allow us to assess various tendencies in climatic changes. Variability of the atmospheric integral (broadband) transparency coefficient, calculated according to the Bouguer-Lambert law and transformed to a solar elevation of 30°, is used for two Russian locations, Pavlovsk and Moscow, one Ukrainian location, Feodosiya, and three Estonian locations, Tartu, Tõravere, and Tiirikoja, covering together a 102-year period, 1906–2007. The comparison of time series revealed significant parallelism. Multiannual trends demonstrate a decrease in transparency during the postwar period until 1983/1984. The trend ends with a steep decline of transparency after a series of four volcanic eruptions of Soufriere (1979), Saint Helens (1980), Alaid (1981), and El Chichón (1982). From 1984/1985 to 1990 the atmosphere remarkably restored its clarity, which almost reached again the level of the 1960s. Following the eruption of Mount Pinatubo (June 1991), there was the most significant reduction in column transparency of the postwar period. However, from the end of the 1990s, the atmosphere in all considered locations is characterized by high values of transparency. The clearing of the atmosphere (from 1993) evidently indicates a decrease in the content of aerosol particles and, besides the decline of volcanic activity, may therefore also be traced to environmentally oriented changes in technology (pollution prevention), to the general industrial and agricultural decline in the territory of the former USSR and Eastern Europe after deep political changes in 1991, and in part to migration of some industries out of Europe.
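    The transparency coefficient used here follows from the Bouguer-Lambert law, S = S0·p^m, where S is the measured direct irradiance, S0 the extraterrestrial irradiance, and m the optical air mass (approximately 1/sin h for solar elevation h away from the horizon). A minimal sketch of the inversion (illustrative only; the paper's reduction of p to the common elevation of 30° involves an additional empirical transformation that is omitted here):

```python
import math

def transparency_coefficient(S, S0, elevation_deg):
    """Broadband transparency p from the Bouguer-Lambert law S = S0 * p**m,
    using the plane-parallel air-mass approximation m = 1 / sin(h)."""
    m = 1.0 / math.sin(math.radians(elevation_deg))
    return (S / S0) ** (1.0 / m)
```

For example, at h = 30° the air mass is 2, so a direct irradiance of 56.25% of S0 corresponds to p = 0.75.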

  12. Global dimming and brightening versus atmospheric column transparency, Europe, 1906-2007

    Science.gov (United States)

    Ohvril, Hanno; Teral, Hilda; Neiman, Lennart; Kannel, Martin; Uustare, Marika; Tee, Mati; Russak, Viivi; Okulov, Oleg; Jõeveer, Anne; Kallis, Ain; Ohvril, Tiiu; Terez, Edward I.; Terez, Galina A.; Gushchin, Gennady K.; Abakumova, Galina M.; Gorbarenko, Ekaterina V.; Tsvetkov, Anatoly V.; Laulainen, Nels

    2009-05-01

    Multiannual changes in atmospheric column transparency based on measurements of direct solar radiation allow us to assess various tendencies in climatic changes. Variability of the atmospheric integral (broadband) transparency coefficient, calculated according to the Bouguer-Lambert law and transformed to a solar elevation of 30°, is used for two Russian locations, Pavlovsk and Moscow, one Ukrainian location, Feodosiya, and three Estonian locations, Tartu, Tõravere, and Tiirikoja, covering together a 102-year period, 1906-2007. The comparison of time series revealed significant parallelism. Multiannual trends demonstrate a decrease in transparency during the postwar period until 1983/1984. The trend ends with a steep decline of transparency after a series of four volcanic eruptions of Soufriere (1979), Saint Helens (1980), Alaid (1981), and El Chichón (1982). From 1984/1985 to 1990 the atmosphere remarkably restored its clarity, which almost reached again the level of the 1960s. Following the eruption of Mount Pinatubo (June 1991), there was the most significant reduction in column transparency of the postwar period. However, from the end of the 1990s, the atmosphere in all considered locations is characterized by high values of transparency. The clearing of the atmosphere (from 1993) evidently indicates a decrease in the content of aerosol particles and, besides the decline of volcanic activity, may therefore also be traced to environmentally oriented changes in technology (pollution prevention), to the general industrial and agricultural decline in the territory of the former USSR and Eastern Europe after deep political changes in 1991, and in part to migration of some industries out of Europe.

  13. Mapping caribou habitat north of the 51st parallel in Québec using Landsat imagery

    Directory of Open Access Journals (Sweden)

    Stéphanie Chalifoux

    2003-04-01

    Full Text Available A methodology using Landsat Thematic Mapper (TM) images and vegetation typology, based on lichens as the principal component of the caribou winter diet, was developed to map caribou habitat over a large and diversified area of Northern Québec. This approach includes field validation by aerial surveys (helicopter), classification of vegetation types, image enhancement, visual interpretation and computer-assisted mapping. Measurements from more than 1500 field sites collected over six field campaigns from 1989 to 1996 represented the data analysed in this study. As the study progressed, 14 vegetation classes were defined and retained for analyses. Vegetation classes denoting important caribou habitat included six classes of upland lichen communities (Lichen, Lichen-Shrub, Shrub-Lichen, Lichen-Graminoid-Shrub, Lichen-Woodland, Lichen-Shrub-Woodland). Two classes (Burnt-over area, Regenerating burnt-over area) are related to forest fire and, as they develop towards lichen communities, will become important for caribou. The last six classes are retained to depict the remaining vegetation cover types. A total of 37 Landsat TM scenes were geocoded and enhanced using two methods: the Taylor method and the false colour composite method (band combination and stretching). Visual interpretation was chosen as the most efficient and reliable method to map vegetation types related to caribou habitat. The 43 maps produced at the scale of 1:250 000 and the synthesis map (1:2 000 000) provide a regional perspective of caribou habitat over 1,200,000 km2 covering the entire range of the George River herd. The numerical nature of the data allows rapid spatial analysis and map updating.

  14. An iterative algorithm for solving the multidimensional neutron diffusion nodal method equations on parallel computers

    International Nuclear Information System (INIS)

    Kirk, B.L.; Azmy, Y.Y.

    1992-01-01

    In this paper the one-group, steady-state neutron diffusion equation in two-dimensional Cartesian geometry is solved using the nodal integral method. The discrete variable equations comprise loosely coupled sets of equations representing the nodal balance of neutrons, as well as neutron current continuity along rows or columns of computational cells. An iterative algorithm that is more suitable for solving large problems concurrently is derived based on the decomposition of the spatial domain and is accelerated using successive overrelaxation. This algorithm is very well suited for parallel computers, especially since the spatial domain decomposition occurs naturally, so that the number of iterations required for convergence does not depend on the number of processors participating in the calculation. Implementation of the authors' algorithm on the Intel iPSC/2 hypercube and Sequent Balance 8000 parallel computers is presented, and measured speedup and efficiency for test problems are reported. The results suggest that the efficiency of the hypercube quickly deteriorates when many processors are used, while the Sequent Balance retains very high efficiency for a comparable number of participating processors. This leads to the conjecture that message-passing parallel computers are not as well suited for this algorithm as shared-memory machines.
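    The parallel structure of such an iteration can be illustrated with a red-black successive-overrelaxation sweep for a one-group diffusion equation on a uniform grid: cells of one checkerboard colour depend only on cells of the other, so each half-sweep updates all its cells independently. This is an illustrative finite-difference analogue, not the nodal integral method of the paper:

```python
import numpy as np

def redblack_sor(S, D, siga, h, omega=1.5, iters=2000):
    """Solve -D*lap(phi) + siga*phi = S on a uniform n x m grid with
    zero-flux... rather, zero-phi (vacuum-like) boundaries via ghost cells.
    Red-black ordering makes each half-sweep embarrassingly parallel."""
    n, m = S.shape
    phi = np.zeros((n + 2, m + 2))              # ghost cells hold phi = 0
    ii, jj = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    denom = 4.0 * D / h**2 + siga
    for _ in range(iters):                      # fixed count; in practice
        for color in (0, 1):                    # check a residual norm
            nb = (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                  phi[1:-1, :-2] + phi[1:-1, 2:])
            gs = (S + D / h**2 * nb) / denom    # Gauss-Seidel value
            mask = (ii + jj) % 2 == color
            phi[1:-1, 1:-1][mask] = ((1 - omega) * phi[1:-1, 1:-1][mask]
                                     + omega * gs[mask])
    return phi[1:-1, 1:-1]
```

Because the black half-sweep re-reads the freshly updated red cells, the method keeps Gauss-Seidel convergence while remaining processor-count independent, in the spirit of the algorithm described above.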

  15. Permanent magnet finger-size scanning electron microscope columns

    Energy Technology Data Exchange (ETDEWEB)

    Nelliyan, K., E-mail: elenk@nus.edu.sg [Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, Singapore 117576 (Singapore); Khursheed, A. [Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, Singapore 117576 (Singapore)

    2011-07-21

    This paper presents permanent magnet scanning electron microscope (SEM) designs for both tungsten and field emission guns. Each column makes use of permanent magnet technology and operates at a fixed primary beam voltage. A prototype column operating at a beam voltage of 15 kV was made and tested inside the specimen chamber of a conventional SEM. A small electrostatic stigmator unit and dedicated scanning coils were integrated into the column. The scan coils were wound directly around the objective lens iron core in order to reduce its size. Preliminary experimental images of a test grid specimen were obtained through the prototype finger-size column, demonstrating that it is in principle feasible.

  16. Permanent magnet finger-size scanning electron microscope columns

    International Nuclear Information System (INIS)

    Nelliyan, K.; Khursheed, A.

    2011-01-01

    This paper presents permanent magnet scanning electron microscope (SEM) designs for both tungsten and field emission guns. Each column makes use of permanent magnet technology and operates at a fixed primary beam voltage. A prototype column operating at a beam voltage of 15 kV was made and tested inside the specimen chamber of a conventional SEM. A small electrostatic stigmator unit and dedicated scanning coils were integrated into the column. The scan coils were wound directly around the objective lens iron core in order to reduce its size. Preliminary experimental images of a test grid specimen were obtained through the prototype finger-size column, demonstrating that it is in principle feasible.

  17. Dynamic effects on cyclotron scattering in pulsar accretion columns

    International Nuclear Information System (INIS)

    Brainerd, J.J.; Meszaros, P.

    1991-01-01

    A resonant scattering model for photon reprocessing in a pulsar accretion column is presented. The accretion column is optically thin to Thomson scattering and optically thick to resonant scattering at the cyclotron frequency. Radiation from the neutron star surface propagates freely through the column until the photon energy equals the local cyclotron frequency, at which point the radiation is scattered, much of it back toward the star. The radiation pressure in this regime is insufficient to stop the infall. Some of the scattered radiation heats the stellar surface around the base of the column, which adds a softer component to the spectrum. The partial blocking by the accretion column of X-rays from the surface produces a fan beam emission pattern. X-rays above the surface cyclotron frequency freely escape and are characterized by a pencil beam. Gravitational light bending produces a pencil beam pattern of column-scattered radiation in the antipodal direction, resulting in a strongly angle-dependent cyclotron feature. 31 refs

  18. Aspartic acid incorporated monolithic columns for affinity glycoprotein purification.

    Science.gov (United States)

    Armutcu, Canan; Bereli, Nilay; Bayram, Engin; Uzun, Lokman; Say, Rıdvan; Denizli, Adil

    2014-02-01

    Novel aspartic acid incorporated monolithic columns were prepared to efficiently affinity purify immunoglobulin G (IgG) from human plasma. The monolithic columns were synthesised in a stainless steel HPLC column (20 cm × 5 mm id) by in situ bulk polymerisation of N-methacryloyl-L-aspartic acid (MAAsp), a polymerisable derivative of L-aspartic acid, and 2-hydroxyethyl methacrylate (HEMA). Monolithic columns [poly(2-hydroxyethyl methacrylate-N-methacryloyl-L-aspartic acid) (PHEMAsp)] were characterised by swelling studies, Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM). The monolithic columns were used for IgG adsorption/desorption from aqueous solutions and human plasma. The IgG adsorption depended on the buffer type, and the maximum IgG adsorption from aqueous solution in phosphate buffer was 0.085 mg/g at pH 6.0. The monolithic columns allowed for one-step IgG purification with a negligible capacity decrease after ten adsorption-desorption cycles. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  20. Fast mapping of the local environment of an autonomous mobile robot

    International Nuclear Information System (INIS)

    Fanton, Herve

    1989-01-01

    The construction of a map of the local world for the navigation of an autonomous mobile robot leads to the following problem: how to extract, from the sensor data, information accurate and reliable enough to plan a path, in a way that still permits a reasonable displacement speed. The choice was made not to tele-operate the vehicle nor to design any custom architecture, so the only way to meet the computational cost is to look for the most efficient sensor-algorithms-architecture combination. A good solution is described in this study, using a laser range-finder, a grid model of the world, and both SIMD and MIMD parallel processors. A short review of some possible approaches is made first; the mapping algorithms are then described, as well as their parallel implementations with the corresponding speedup and efficiency factors. (author) [fr
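    The core of a range-finder-to-grid update is easy to sketch: each range reading traces a ray through the grid, marking the traversed cells free and the endpoint occupied. The sketch below uses Bresenham's line algorithm and a plain dictionary grid; it is illustrative only, not the author's SIMD/MIMD implementation:

```python
def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the line from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy; x += sx
        if e2 < dx:
            err += dx; y += sy
    return cells

def update_grid(grid, robot, hit):
    """Mark cells along the beam free (0) and the hit cell occupied (1)."""
    ray = bresenham(*robot, *hit)
    for cell in ray[:-1]:
        grid[cell] = 0
    grid[ray[-1]] = 1
```

Independent beams touch mostly disjoint cells, which is what makes this update amenable to the SIMD/MIMD parallelization discussed in the abstract.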

  1. Hydrogen peroxide stabilization in one-dimensional flow columns

    Science.gov (United States)

    Schmidt, Jeremy T.; Ahmad, Mushtaque; Teel, Amy L.; Watts, Richard J.

    2011-09-01

    Rapid hydrogen peroxide decomposition is the primary limitation of catalyzed H2O2 propagations in situ chemical oxidation (CHP ISCO) remediation of the subsurface. Two stabilizers of hydrogen peroxide, citrate and phytate, were investigated for their effectiveness in one-dimensional columns of iron oxide-coated and manganese oxide-coated sand. Hydrogen peroxide (5%) with and without 25 mM citrate or phytate was applied to the columns and samples were collected at 8 ports spaced 13 cm apart. Citrate was not an effective stabilizer for hydrogen peroxide in iron-coated sand; however, phytate was highly effective, increasing hydrogen peroxide residuals two orders of magnitude over unstabilized hydrogen peroxide. Both citrate and phytate were effective stabilizers for manganese-coated sand, increasing hydrogen peroxide residuals by four-fold over unstabilized hydrogen peroxide. Phytate and citrate did not degrade and were not retarded in the sand columns; furthermore, the addition of the stabilizers increased column flow rates relative to unstabilized columns. These results demonstrate that citrate and phytate are effective stabilizers of hydrogen peroxide under the dynamic conditions of one-dimensional columns, and suggest that citrate and phytate can be added to hydrogen peroxide before injection to the subsurface as an effective means for increasing the radius of influence of CHP ISCO.

  2. Optimizing Crawler4j using MapReduce Programming Model

    Science.gov (United States)

    Siddesh, G. M.; Suresh, Kavya; Madhuri, K. Y.; Nijagal, Madhushree; Rakshitha, B. R.; Srinivasa, K. G.

    2017-06-01

    The World Wide Web is a decentralized system that consists of a repository of information in the form of web pages. These web pages act as a source of information or data in the present analytics world. Web crawlers are used for extracting useful information from web pages for different purposes. Firstly, they are used in web search engines, where the web pages are indexed to form a corpus of information that allows users to query the web pages. Secondly, they are used for web archiving, where the web pages are stored for later analysis phases. Thirdly, they can be used for web mining, where the web pages are monitored for copyright purposes. The amount of information processed by a web crawler needs to be improved by using the capabilities of modern parallel processing technologies. In order to solve the problem of parallelism and the throughput of crawling, this work proposes to optimize Crawler4j using the Hadoop MapReduce programming model by parallelizing the processing of large input data. Crawler4j is a web crawler that retrieves useful information about the pages that it visits. The crawler Crawler4j, coupled with the data and computational parallelism of the Hadoop MapReduce programming model, improves the throughput and accuracy of web crawling. The experimental results demonstrate that the proposed solution achieves significant improvements with respect to performance and throughput. Hence the proposed approach intends to carve out a new methodology towards optimizing web crawling by achieving significant performance gain.
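    The MapReduce decomposition that Hadoop applies to a crawl can be illustrated with a toy in-memory version: mappers emit (key, value) pairs per fetched page, a shuffle groups them by key, and reducers aggregate each group independently. This is illustrative only; neither the Crawler4j nor the Hadoop API is shown:

```python
from collections import defaultdict
from urllib.parse import urlparse

def run_mapreduce(records, mapper, reducer):
    # Map: emit intermediate (key, value) pairs for every input record.
    pairs = (kv for rec in records for kv in mapper(rec))
    # Shuffle: group values by key (Hadoop does this across the cluster).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Reduce: aggregate each key's group independently -- hence in parallel.
    return {key: reducer(key, values) for key, values in groups.items()}

# Example job: count crawled out-links per host.
def link_mapper(page):
    for link in page["links"]:
        yield urlparse(link).netloc, 1

def count_reducer(host, ones):
    return sum(ones)
```

In the real system the map tasks would wrap the per-page extraction done by Crawler4j, and the framework, not this driver, would perform the shuffle and scheduling.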

  3. Practical enhancement factor model based on GM for multiple parallel reactions: Piperazine (PZ) CO2 capture

    DEFF Research Database (Denmark)

    Gaspar, Jozsef; Fosbøl, Philip Loldrup

    2017-01-01

    Reactive absorption is a key process for gas separation and purification and it is the main technology for CO2 capture. Thus, reliable and simple mathematical models for mass transfer rate calculation are essential. Models which apply to parallel interacting and non-interacting reactions, for all … desorption and pinch conditions. In this work, we apply the GM model to multiple parallel reactions. We deduce the model for piperazine (PZ) CO2 capture and we validate it against wetted-wall column measurements using 2, 5 and 8 molal PZ for temperatures between 40 °C and 100 °C and CO2 loadings between 0.23 and 0.41 mol CO2/2 mol PZ. We show that overall second order kinetics describes well the reaction between CO2 and PZ accounting for the carbamate and bicarbamate reactions. Here we prove the GM model for piperazine and MEA but we expect that this practical approach is applicable for various amines…
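    For a second-order reaction in the pseudo-first-order regime, the standard mass-transfer result is that the enhancement factor approaches the Hatta number, E ≈ Ha = √(k₂·C_amine·D_CO2)/kL (valid roughly for 3 < Ha ≪ E_inf). This is textbook theory, not the GM model itself, and the parameter values below are illustrative orders of magnitude, not the paper's data:

```python
import math

def enhancement_factor_pfo(k2, c_amine, d_co2, kl):
    """Pseudo-first-order enhancement factor for an overall second-order
    reaction: E ~= Ha = sqrt(k2 * C_amine * D_CO2) / kL.

    k2      [m3/(mol s)]  second-order rate constant (illustrative)
    c_amine [mol/m3]      bulk amine concentration
    d_co2   [m2/s]        CO2 diffusivity in the liquid
    kl      [m/s]         physical liquid-side mass-transfer coefficient
    """
    return math.sqrt(k2 * c_amine * d_co2) / kl
```

Models such as GM then interpolate between this fast-reaction limit and the instantaneous-reaction limit set by reactant depletion at the interface.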

  4. Mathematical modeling of alcohol distillation columns

    Directory of Open Access Journals (Sweden)

    Ones Osney Pérez

    2011-04-01

    Full Text Available New evaluation modules are proposed to extend the scope of a modular simulator oriented to the sugar cane industry, called STA 4.0, so that it can be used to carry out calculations and analyses in ethanol distilleries. Calculation modules were developed for the simulation of the columns that are combined in the distillation area. The mathematical models were based on material and energy balances, equilibrium relations and thermodynamic properties of the ethanol-water system. The Ponchon-Savarit method was used for the evaluation of the theoretical stages in the columns. A comparison between the results using the Ponchon-Savarit method and those obtained applying the McCabe-Thiele method was done for a distillation column. These calculation modules for ethanol distilleries were applied to a real case for validation.
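    At total reflux the McCabe-Thiele construction collapses to stepping between the equilibrium curve and the y = x line, which makes a compact sanity check for any column model: with a constant relative volatility the stage count reproduces the Fenske result. A sketch with illustrative compositions (not the STA 4.0 modules):

```python
def stages_total_reflux(x_distillate, x_bottoms, alpha):
    """Step off theoretical stages between the equilibrium curve
    y = alpha*x / (1 + (alpha - 1)*x) and the total-reflux operating line
    y = x, from the distillate down to the bottoms composition.
    Assumes constant relative volatility alpha > 1."""
    stages = 0
    x = x_distillate
    while x > x_bottoms + 1e-9:
        y = x                                  # operating line at total reflux
        x = y / (alpha - (alpha - 1) * y)      # inverse of equilibrium curve
        stages += 1
    return stages
```

For x_D = 0.9, x_B = 0.1 and alpha = 3 the loop steps off exactly four stages, matching the Fenske minimum N = ln[(x_D/(1-x_D))·((1-x_B)/x_B)]/ln(alpha) = ln(81)/ln(3) = 4.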

  5. Hollow-Core FRP–Concrete–Steel Bridge Columns under Torsional Loading

    Directory of Open Access Journals (Sweden)

    Sujith Anumolu

    2017-11-01

    Full Text Available This paper presents the behavior of hollow-core fiber-reinforced polymer–concrete–steel (HC-FCS columns under cyclic torsional loading combined with constant axial load. The HC-FCS consists of an outer fiber-reinforced polymer (FRP tube and an inner steel tube, with a concrete shell sandwiched between the two tubes. The FRP tube was stopped at the surface of the footing, and provided confinement to the concrete shell from the outer direction. The steel tube was embedded into the footing to a length of 1.8 times the diameter of the steel tube. The longitudinal and transversal reinforcements of the column were provided by the steel tube only. A large-scale HC-FCS column with a diameter of 24 in. (610 mm and applied load height of 96 in. (2438 mm with an aspect ratio of four was investigated during this study. The study revealed that the torsional behavior of the HC-FCS column mainly depended on the stiffness of the steel tube and the interactions among the column components (concrete shell, steel tube, and FRP tube. A brief comparison of torsional behavior was made between the conventional reinforced concrete columns and the HC-FCS column. The comparison illustrated that both column types showed high initial stiffness under torsional loading. However, the HC-FCS column maintained the torsion strength until a high twist angle, while the conventional reinforced concrete column did not.

  6. Early construction and operation of the highly contaminated water treatment system in Fukushima Daiichi Nuclear Power Station (3). A unique simulation code to evaluate time-dependent Cs adsorption/desorption behavior in column system

    International Nuclear Information System (INIS)

    Inagaki, Kenta; Hijikata, Takatoshi; Tsukada, Takeshi; Koyama, Tadafumi; Ishikawa, Keiji; Ono, Shoichi; Suzuki, Shunichi

    2014-01-01

    A simulation code was developed to evaluate the performance of the cesium adsorption instrument operating in Fukushima Daiichi Nuclear Power Station. Since contaminated water contains seawater whose salinity is not constant, a new model was introduced to the conventional zeolite column simulation code to deal with the variable salinity of the seawater. Another feature of the cesium adsorption instrument is that it consists of several columns arranged in both series and parallel. The spent columns are replaced in a unique manner using a merry-go-round system. The code is designed by taking those factors into account. Consequently, it enables the evaluation of the performance characteristics of the cesium adsorption instrument, such as the time history of the decontamination factor, the cesium adsorption amount in each column, and the axial distribution of the adsorbed cesium in the spent columns. The simulation is conducted for different operation patterns and its results are given to Tokyo Electric Power Company (TEPCO) to support the optimization of the operation schedule. The code is also used to investigate the cause of some events that actually occurred in the operation of the cesium adsorption instrument. (author)
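    The core of such a column code is a coupled advection-adsorption balance per axial cell. The following one-column sketch uses a linear isotherm and linear-driving-force kinetics on an explicit upwind grid; it is far simpler than the code described above (no salinity dependence, no merry-go-round column replacement), and all parameter values are illustrative:

```python
import numpy as np

def column_breakthrough(n=50, steps=8000, dt=0.01, v=1.0, dz=0.1,
                        k=1.0, K=5.0, c_feed=1.0):
    """Explicit 1-D fixed-bed model: upwind advection of the fluid phase
    plus linear-driving-force uptake dq/dt = k*(K*c - q).
    Returns the outlet concentration history (breakthrough curve)."""
    c = np.zeros(n)                 # fluid-phase concentration per cell
    q = np.zeros(n)                 # adsorbed-phase loading per cell
    outlet = np.empty(steps)
    for t in range(steps):
        dq = k * (K * c - q) * dt                   # mass moved to sorbent
        c_up = np.concatenate(([c_feed], c[:-1]))   # upwind neighbours
        c = c + v * dt / dz * (c_up - c) - dq
        q = q + dq
        outlet[t] = c[-1]
    return outlet
```

From such a curve one reads off the quantities the abstract mentions: the decontamination factor at a given time (c_feed divided by outlet), the loading held in each column (the q profile), and the axial distribution of the captured species.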

  7. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    Directory of Open Access Journals (Sweden)

    Cieślik Marcin

    2011-02-01

    Full Text Available Abstract Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and
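    The dataflow idea, reusable components joined by data-pipes into nested map functions with adjustable batching, can be miniaturized in a few lines of Python. This is a toy in the spirit of PaPy, not its API:

```python
from functools import reduce
from itertools import islice

def batched(iterable, size):
    """Yield successive batches of the given size; batching is the knob
    that trades parallelism against memory, and evaluation stays lazy."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

def pipeline(*stages):
    """Compose per-item stages into one lazy dataflow (a nested map)."""
    def run(items):
        for item in items:
            yield reduce(lambda value, stage: stage(value), stages, item)
    return run

# A two-component workflow: strip raw sequence lines, then measure length.
flow = pipeline(str.strip, len)
```

A framework like PaPy additionally dispatches each batch to a local or remote worker pool and lets stages branch and merge in a directed acyclic graph rather than a single chain.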

  8. The general packed column : an analytical solution

    NARCIS (Netherlands)

    Gielen, J.L.W.

    2000-01-01

    The transient behaviour of a packed column is considered. The column, uniformly packed on a macroscopic scale, is multi-structured on the microscopic level: the solid phase consists of particles, which may differ in incidence, shape or size, and other relevant physical properties. Transport in the

  9. Revised Thermal Analysis of LANL Ion Exchange Column

    Energy Technology Data Exchange (ETDEWEB)

    Laurinat, J

    2006-04-11

    This document updates a previous calculation of the temperature distributions in a Los Alamos National Laboratory (LANL) ion exchange column.1 LANL operates two laboratory-scale anion exchange columns, in series, to extract Pu-238 from nitric acid solutions. The Defense Nuclear Facilities Safety Board has requested an updated analysis to calculate maximum temperatures for higher resin loading capacities obtained with a new formulation of the Reillex HPQ anion exchange resin. The increased resin loading capacity will not exceed 118 g plutonium per L of resin bed. Calculations were requested for normal operation of the resin bed at the minimum allowable solution feed rate of 30 mL/min and after an interruption of flow at the end of the feed stage, when one of the columns is fully loaded. The object of the analysis is to demonstrate that the decay heat from the Pu-238 will not cause resin bed temperatures to increase to a level where the resin significantly degrades. At low temperatures, resin bed temperatures increase primarily due to decay heat. At approximately 70 °C a Low Temperature Exotherm (LTE) resulting from the reaction between 8-12 M HNO3 and the resin has been observed. The LTE has been attributed to an irreversible oxidation of pendant ethyl benzene groups at the termini of the resin polymer chains by nitric acid. The ethyl benzene groups are converted to benzoic acid moieties. The resin can be treated to permanently remove the LTE by heating a resin suspension in 8 M HNO3 for 30-45 minutes. No degradation of the resin performance is observed after the LTE removal treatment. In fact, resin heated in boiling (approximately 115-120 °C) 12 M HNO3 for 3 hr displays thermal stability analogous to resin that has been treated to remove the LTE. The analysis is based on a previous study of the SRS Frames Waste Recovery (FWR) column, performed in support of the Pu-238 production campaign for NASA's Cassini mission. In that study, temperature transients
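    The governing estimate in a decay-heat analysis of this kind is steady conduction with uniform volumetric heating; for a long cylindrical bed the centerline-to-wall temperature rise is ΔT = q‴R²/(4·k_eff). A sketch with illustrative values only, not those of the LANL column (Pu-238's specific power is about 0.57 W/g; the radius and effective conductivity below are assumed):

```python
def centerline_temperature_rise(q_vol, radius, k_eff):
    """Steady-state rise (K) at the axis of a long, uniformly heated
    cylinder: dT = q''' * R**2 / (4 * k_eff), with q''' in W/m3,
    R in m, and k_eff in W/(m K)."""
    return q_vol * radius ** 2 / (4.0 * k_eff)

# Illustrative: 118 g Pu-238 per litre at ~0.57 W/g gives
# q''' ~ 6.7e4 W/m3; with an assumed 2.5 cm bed radius and
# k_eff ~ 0.6 W/(m K) the rise is on the order of 18 K.
```

The actual analysis must also treat convective removal by the 30 mL/min feed and the no-flow transient, which is why the bounding case above is only a first check.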

  10. Dopamine-imprinted monolithic column for capillary electrochromatography.

    Science.gov (United States)

    Aşır, Süleyman; Sarı, Duygu; Derazshamshir, Ali; Yılmaz, Fatma; Şarkaya, Koray; Denizli, Adil

    2017-11-01

    A dopamine-imprinted monolithic column was prepared and used in capillary electrochromatography as a stationary phase for the first time. Dopamine was selectively separated from aqueous solution containing the competitor molecule norepinephrine, which is similar in size and shape to the template molecule. The morphology of the dopamine-imprinted column was observed by scanning electron microscopy. The influence of the organic solvent content of the mobile phase, the applied pressure and the pH of the mobile phase on the recognition of dopamine by the imprinted monolithic column was evaluated, and the imprinting effect in the dopamine-imprinted monolithic polymer was verified. The developed dopamine-imprinted monolithic column resulted in excellent separation of dopamine from the structurally related competitor molecule, norepinephrine. Separation was achieved in a short period of 10 min, with an electrophoretic mobility of 5.81 × 10⁻⁵ m² V⁻¹ s⁻¹ at pH 5.0 and 500 mbar pressure. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. column frame for design of reinforced concrete sway frames

    African Journals Online (AJOL)

    administrator

    design of slender reinforced concrete columns in sway frames according … concrete; Ac = gross cross-sectional area of the columns. Step 3: Effective Buckling Length Factors. The effective buckling length factors of columns in a sway frame shall be computed by … shall have adequate resistance to failure in a sway mode …

  12. Dynamics and Control of Distillation Columns - A Critical Survey

    Directory of Open Access Journals (Sweden)

    Sigurd Skogestad

    1997-07-01

    Full Text Available Distillation column dynamics and control have been viewed by many as a very mature or even dead field. However, as is discussed in this paper, significant new results have appeared over the last 5-10 years. These results include multiple steady states and instability in simple columns with ideal thermodynamics (which was believed to be impossible), the understanding of the difference between various control configurations and the systematic transformation between these, the feasibility of using the distillate-bottom structure for control (which was believed to be impossible), the importance of flow dynamics for control studies, the fundamental problems in identifying models from open-loop responses, the use of simple regression estimators to estimate composition from temperatures, and an improved general understanding of the dynamic behavior of distillation columns, which includes a better understanding of the fundamental difference between internal and external flow, simple formulas for estimating the dominant time constant, and a derivation of the linearizing effect of logarithmic transformations. These issues apply to all columns, even for ideal mixtures and simple columns with only two products. In addition, there have been significant advances for cases with complex thermodynamics and complex column configurations. These include the behavior and control of azeotropic distillation columns, and the possible complex dynamics of nonideal mixtures and of interlinked columns. However, both for the simple and more complex cases there are still a number of areas where further research is needed.

  13. Slender CRC Columns

    DEFF Research Database (Denmark)

    Aarup, Bendt; Jensen, Lars Rom; Ellegaard, Peter

    2005-01-01

    CRC is a high-performance steel fibre reinforced concrete with a typical compressive strength of 150 MPa. Design methods for a number of structural elements have been developed since CRC was invented in 1986, but the current project set out to further investigate the range of columns for which...

  14. Kemari: A Portable High Performance Fortran System for Distributed Memory Parallel Processors

    Directory of Open Access Journals (Sweden)

    T. Kamachi

    1997-01-01

    Full Text Available We have developed a compilation system which extends High Performance Fortran (HPF) in various aspects. We support the parallelization of well-structured problems with loop distribution and alignment directives similar to HPF's data distribution directives. Such directives give both additional control to the user and simplify the compilation process. For the support of unstructured problems, we provide directives for dynamic data distribution through user-defined mappings. The compiler also allows integration of message-passing interface (MPI) primitives. The system is part of a complete programming environment which also comprises a parallel debugger and a performance monitor and analyzer. After an overview of the compiler, we describe the language extensions and related compilation mechanisms in detail. Performance measurements demonstrate the compiler's applicability to a variety of application classes.

  15. Parallel exploitation of a spatial-spectral classification approach for hyperspectral images on RVC-CAL

    Science.gov (United States)

    Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.

    2017-10-01

    Hyperspectral Imaging (HI) assembles high resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, thus generating 3D data cubes in which each spatial pixel gathers the spectral reflectance information across all bands. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three different stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores yields a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
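    The dataflow decomposition described above — separate functional blocks (PCA, SVM, KNN filtering) connected so each block can be mapped to its own processing unit — can be sketched in plain Python with queues and threads. The three stage functions below are hypothetical placeholders standing in for the actual RVC-CAL actors, not the published implementation:

```python
import queue
import threading

def stage(func, q_in, q_out):
    """One functional unit: consume tokens, apply func, emit results."""
    while True:
        item = q_in.get()
        if item is None:      # end-of-stream token
            q_out.put(None)
            return
        q_out.put(func(item))

# Placeholder blocks: one-band projection, pixel-wise classifier, filter.
pca = lambda px: px * 0.5
svm = lambda px: 1 if px > 1.0 else 0
knn = lambda label: label

def run_pipeline(pixels):
    """Map each block onto its own thread, connected by FIFO queues."""
    q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
    threads = [
        threading.Thread(target=stage, args=(pca, q0, q1)),
        threading.Thread(target=stage, args=(svm, q1, q2)),
        threading.Thread(target=stage, args=(knn, q2, q3)),
    ]
    for t in threads:
        t.start()
    for px in pixels:
        q0.put(px)
    q0.put(None)
    out = []
    while (item := q3.get()) is not None:
        out.append(item)
    for t in threads:
        t.join()
    return out
```

    Because each stage owns a core (thread) and communicates only through token queues, stages operate concurrently on different pixels, which is the source of the reported pipeline speedup.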

  16. Copper (II) Removal In Anaerobic Continuous Column Reactor System By Using Sulfate Reducing Bacteria

    Science.gov (United States)

    Bilgin, A.; Jaffe, P. R.

    2017-12-01

    Copper is an essential element for the synthesis of a number of electron-carrying proteins and enzymes; however, it is highly toxic at elevated levels. This study aims to remove copper in an anaerobic environment using an anaerobic continuous column reactor. A sulfate-reducing bacteria culture was obtained in anaerobic medium using the enrichment culture method, and the column reactor experiments were carried out with this bacterial culture obtained from soil. The system was operated with continuous feeding, with two columns run in parallel. In both reactors, only sand was used as packing material. The first column reactor was fed with the bacterial nutrient medium alone. The same solution was passed through the second reactor, and copper removal was investigated by continuously feeding 15-600 mg/L of copper solution at the inlet of the second reactor. When the experiment was carried out with an initial copper concentration of 10 mg/L, copper removal of 45-75% was obtained. To determine the use of the carbon source during copper removal by mixed bacterial cultures under anaerobic conditions, total organic carbon (TOC) analysis was used to calculate the change in carbon content, which was found to be between 28% and 75%. The amount of sulfate changed by 28-46%. During copper removal, the molar amounts of sulfate and carbon were equalized, and more sulfate was added by changing the nutrient medium in order to determine whether sulfate or carbon was being consumed. Accordingly, when the concentration of added sulfate was increased, 35-57% of the sulfate was consumed. In this system, copper concentrations of 15-600 mg/L were studied.

  17. Performance of zeolite scavenge column in Xe monitoring system

    International Nuclear Information System (INIS)

    Wang Qian; Wang Hongxia; Li Wei; Bian Zhishang

    2010-01-01

    In order to improve the performance of the zeolite scavenge column, its ability to remove humidity and carbon dioxide was studied by both static and dynamic approaches. The experimental results show that various factors, including the column length and diameter, the mass of zeolite, the content of water in air, the temperature rise during adsorption, and the activation effectiveness, all affect the performance of the zeolite column in scavenging humidity and carbon dioxide. Based on these results and previous experience, an optimized design of the zeolite column is made for use in a xenon monitoring system. (authors)

  18. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. VIII. A WIDE-AREA, HIGH-RESOLUTION MAP OF DUST EXTINCTION IN M31

    Energy Technology Data Exchange (ETDEWEB)

    Dalcanton, Julianne J.; Fouesneau, Morgan; Weisz, Daniel R.; Williams, Benjamin F. [Department of Astronomy, Box 351580, University of Washington, Seattle, WA 98195 (United States); Hogg, David W. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Pl #424, New York, NY 10003 (United States); Lang, Dustin [McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213 (United States); Leroy, Adam K. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Gordon, Karl D.; Gilbert, Karoline M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Sandstrom, Karin [Steward Observatory, University of Arizona, 933 N Cherry Ave, Tucson, AZ 85721 (United States); Bell, Eric F. [Department of Astronomy, University of Michigan, 500 Church St., Ann Arbor, MI 48109 (United States); Dong, Hui; Lauer, Tod R. [National Optical Astronomy Observatory, 950 North Cherry Avenue, Tucson, AZ 85719 (United States); Gouliermis, Dimitrios A. [Max Planck Institute für Astronomie, Königstuhl 17, D-69117, Heidelberg (Germany); Guhathakurta, Puragra [Department of Astronomy and Astrophysics, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Schruba, Andreas [California Institute of Technology, Cahill Center for Astrophysics, 1200 E. California Blvd, Pasadena, CA 91125 (United States); Seth, Anil C. [University of Utah, Salt Lake City, UT (United States); Skillman, Evan D., E-mail: jd@astro.washington.edu [Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States)

    2015-11-20

    We map the distribution of dust in M31 at 25 pc resolution using stellar photometry from the Panchromatic Hubble Andromeda Treasury survey. The map is derived with a new technique that models the near-infrared color–magnitude diagram (CMD) of red giant branch (RGB) stars. The model CMDs combine an unreddened foreground of RGB stars with a reddened background population viewed through a log-normal column density distribution of dust. Fits to the model constrain the median extinction, the width of the extinction distribution, and the fraction of reddened stars in each 25 pc cell. The resulting extinction map has a factor of ≳4 times better resolution than maps of dust emission, while providing a more direct measurement of the dust column. There is superb morphological agreement between the new map and maps of the extinction inferred from dust emission by Draine et al. However, the widely used Draine and Li dust models overpredict the observed extinction by a factor of ∼2.5, suggesting that M31's true dust mass is lower and that dust grains are significantly more emissive than assumed in Draine et al. The observed factor of ∼2.5 discrepancy is consistent with similar findings in the Milky Way by the Planck Collaboration et al., but we find a more complex dependence on parameters from the Draine and Li dust models. We also show that the discrepancy with the Draine et al. map is lowest where the current interstellar radiation field has a harder spectrum than average. We discuss possible improvements to the CMD dust mapping technique, and explore further applications in both M31 and other galaxies.

  19. Behaviour of normal reinforced concrete columns exposed to different soils

    Directory of Open Access Journals (Sweden)

    Rasheed Laith

    2018-01-01

    Full Text Available Concrete resistance to sulfate attack is one of the most important characteristics for maintaining the durability of concrete. In this study, the effect of sulfate salt attack on normal reinforced concrete columns was investigated by burying the columns in two types of soil (sandy and clayey) in two pits at a depth of 3 m in an agricultural area of the holy city of Karbala, one pit containing sandy soil (SO3 = 10.609%) and the other containing clayey soil (SO3 = 2.61%). The tests used were a pure axial compression test of the reinforced concrete columns, a compressive strength test, a splitting tensile strength test, and measurements of absorption, voids ratio, and density. It was found that the strength of the RC columns decreased by 12.51% at an age of 240 days for columns buried in clayey soil, whereas the strength increased by 11.71% over the same period for columns buried in sandy soil, relative to the reference column.

  20. Column studies on BTEX biodegradation under microaerophilic and denitrifying conditions

    International Nuclear Information System (INIS)

    Hutchins, S.R.; Moolenaar, S.W.; Rhodes, D.E.

    1992-01-01

    Two column tests were conducted using aquifer material to simulate the nitrate field demonstration project carried out earlier at Traverse City, Michigan. The objectives were to better define the effect nitrate addition had on biodegradation of benzene, toluene, ethylbenzene, xylenes, and trimethylbenzenes (BTEX) in the field study, and to determine whether BTEX removal can be enhanced by supplying a limited amount of oxygen as a supplemental electron acceptor. Columns were operated using limited oxygen, limited oxygen plus nitrate, and nitrate alone. In the first column study, benzene was generally recalcitrant compared to the alkylbenzenes (TEX), although some removal did occur. In the second column study, nitrate was deleted from the feed to the column originally receiving nitrate alone and added to the feed of the column originally receiving limited oxygen alone. Although the requirement for nitrate for optimum TEX removal was clearly demonstrated in these columns, there were significant contributions by biotic and abiotic processes other than denitrification which could not be quantified

  1. Rasch models with exchangeable rows and columns

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt

    The article studies distributions of doubly infinite binary matrices with exchangeable rows and columns which satisfy the further property that the probability of any $m \times n$ submatrix is a function of the row- and column sums of that matrix. We show that any such distribution is a (unique...

  2. Strength degradation of oxidized graphite support column in VHTR

    International Nuclear Information System (INIS)

    Park, Byung Ha; No, Hee Cheon

    2010-01-01

    Air-ingress events caused by large pipe breaks are important accidents considered in the design of Very High Temperature Gas-Cooled Reactors (VHTRs). A main safety concern for this type of event is the possibility of core collapse following the failure of the graphite support column, which can be oxidized by ingressed air. In this study, the main target is to predict the strength of the oxidized graphite support column. Through compression tests on fresh and oxidized graphite columns, the compressive strength of IG-110 was obtained. The buckling strength of the IG-110 column is expressed using the following empirical straight-line formula: σ_cr,buckling = 91.34 - 1.01(L/r). Graphite oxidation in Zone 1 is a volume reaction and that in Zone 3 is a surface reaction. We note that the ultimate strength of a graphite column oxidized in Zones 1 and 3 depends only on the slenderness ratio and bulk density. The strength degradation for Zone 1 oxidation is expressed in the following nondimensional form: σ/σ_0 = exp(-kd), k = 0.114. We found that the strength degradation of a graphite column oxidized in Zone 3 follows the above empirical buckling formula as the slenderness of the column changes. (author)
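    As a quick numerical illustration, the two empirical relations quoted in the abstract can be evaluated directly. The function names are ours, the strength unit is assumed to be MPa, and d is taken as the oxidation burn-off variable of the source (a sketch, not the paper's code):

```python
import math

def buckling_strength(slenderness_ratio):
    """Empirical buckling strength of an IG-110 graphite column (MPa):
    sigma_cr = 91.34 - 1.01 * (L/r)."""
    return 91.34 - 1.01 * slenderness_ratio

def zone1_strength(sigma_0, d, k=0.114):
    """Strength after Zone 1 (in-volume) oxidation, from the
    nondimensional form sigma / sigma_0 = exp(-k * d)."""
    return sigma_0 * math.exp(-k * d)
```

    For example, a column with slenderness ratio L/r = 20 has a predicted unoxidized buckling strength of 91.34 - 20.2 = 71.14 MPa, which then decays exponentially with burn-off under Zone 1 oxidation.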

  3. Investigation of Gas Holdup in a Vibrating Bubble Column

    Science.gov (United States)

    Mohagheghian, Shahrouz; Elbing, Brian

    2015-11-01

    Synthetic fuels are part of the solution to the world's energy crisis and climate change. Liquefaction of coal during the Fischer-Tropsch process in a bubble column reactor (BCR) is a key step in the production of synthetic fuel. It has been known since the 1960s that vibration improves mass transfer in bubble columns. The current study experimentally investigates the effect that vibration frequency and amplitude have on gas holdup and bubble size distribution within a bubble column. Air (disperse phase) was injected into water (continuous phase) through a needle-shaped injector near the bottom of the column, which was open to atmospheric pressure. The air volumetric flow rate was measured with a variable area flow meter. Vibrations were generated with a custom-made shaker table, which oscillated the entire column with independently specified amplitude and frequency (0-30 Hz). Geometric dependencies can be investigated with four cast acrylic columns with aspect ratios ranging from 4.36 to 24, and injector needle internal diameters between 0.32 and 1.59 mm. The gas holdup within the column was measured with a flow visualization system, and a PIV system was used to measure phase velocities. Preliminary results for the non-vibrating and vibrating cases will be presented.

  4. Multi-column adsorption systems with condenser for tritiated water vapor removal

    International Nuclear Information System (INIS)

    Kotoh, Kenji; Kudo, Kazuhiko

    1996-01-01

    Two types of multi-column adsorption system are proposed for the removal of tritiated moisture from tritium process gases and/or handling-room atmospheres. Both types recycle their adsorption columns, and are composed of twin or triplet columns plus one condenser, which is used for collecting the adsorbed moisture from columns during the desorption process. The systems use the dry gas from a working column as the purge gas for regenerating a saturated column, and dedicate an active column to recovering the tritiated moisture that passes through the condenser. Each column hence needs an additional amount of adsorbent for collecting the moisture from the condenser. In the modeling and design of an adsorption column, the primary task is to estimate the necessary amount of a candidate adsorbent for its packed bed. The performance of the proposed systems is examined here by analyzing how the necessary amount of adsorbent for their columns depends on process operating conditions and on the adsorbent's moisture-adsorption characteristics. The result shows that the necessary amount is sensitive to the type of adsorption isotherm, and suggests that these systems should employ adsorbents that exhibit Langmuir-type isotherms. (author)
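    The closing point — that required bed mass is sensitive to the isotherm type — can be illustrated with a Langmuir-type isotherm. The function names and all numerical parameters below are illustrative placeholders, not data from the paper:

```python
def langmuir_uptake(p, q_max, b):
    """Langmuir-type isotherm: equilibrium uptake q = q_max * b*p / (1 + b*p),
    where p is the moisture partial pressure and q_max the monolayer capacity."""
    return q_max * b * p / (1.0 + b * p)

def adsorbent_mass(moisture_load, p_feed, q_max, b):
    """Packed-bed adsorbent mass needed to hold a given moisture load
    at the feed-side partial pressure (mass units follow moisture_load)."""
    return moisture_load / langmuir_uptake(p_feed, q_max, b)
```

    A strongly favorable Langmuir adsorbent (large b) keeps its uptake close to q_max even at low partial pressure, which is why such isotherms minimize the extra adsorbent each column needs for the condenser off-gas duty.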

  5. Seismic Performance of High-Ductile Fiber-Reinforced Concrete Short Columns

    Directory of Open Access Journals (Sweden)

    Mingke Deng

    2018-01-01

    Full Text Available This study mainly aims to investigate the effectiveness of high-ductile fiber-reinforced concrete (HDC as a means to enhance the seismic performance of short columns. Six HDC short columns and one reinforced concrete (RC short column were designed and tested under lateral cyclic loading. The influence of the material type (concrete or HDC, axial load, stirrup ratio, and shear span ratio on crack patterns, hysteresis behavior, shear strength, deformation capacity, energy dissipation, and stiffness degradation was presented and discussed, respectively. The test results show that the RC short column failed in brittle shear with poor energy dissipation, while using HDC to replace concrete can effectively improve the seismic behavior of the short columns. Compared with the RC short column, the shear strength of HDC specimens was improved by 12.6–30.2%, and the drift ratio and the energy dissipation increases were 56.9–88.5% and 237.7–336.7%, respectively, at the ultimate displacement. Additionally, the prediction model of the shear strength for RC columns based on GB50010-2010 (Chinese code can be safely adopted to evaluate the shear strength of HDC short columns.

  6. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  7. Data driven parallelism in experimental high energy physics applications

    International Nuclear Information System (INIS)

    Pohl, M.

    1987-01-01

    I present global design principles for the implementation of high energy physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of high energy physics tasks is identified with granularity varying from a few times 10^8 instructions all the way down to a few times 10^4 instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this - yet preserving the necessary portability of the code - I propose a computational model with purely data driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme allows to map the internal layout of the data structure closely onto the layout of local and shared memory in a parallel architecture. It thus allows to optimize the application with respect to synchronization as well as data transport overheads. I present a coarse top level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms). (orig.)
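    The coordination scheme this abstract describes — identical SPMD workers claiming parts of a central data structure through simple locks, with load balancing emerging naturally — can be mimicked in a few lines. The event records and the doubling "reconstruction" step are stand-ins of our own, not the paper's design:

```python
import threading

# Central data structure: one lock per event, so workers synchronize
# only on the part of the structure they are touching.
events = [{"raw": i, "reco": None} for i in range(100)]
locks = [threading.Lock() for _ in events]

def worker():
    """SPMD: every worker runs the same program and claims unprocessed
    events asynchronously via non-blocking locks."""
    for i, ev in enumerate(events):
        if locks[i].acquire(blocking=False):
            try:
                if ev["reco"] is None:
                    ev["reco"] = ev["raw"] * 2  # stand-in for reconstruction
            finally:
                locks[i].release()
        # else: another worker holds this event and will process it

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    No central scheduler assigns work: whichever worker reaches an unclaimed event first processes it, so faster workers naturally absorb more of the load.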

  8. Data driven parallelism in experimental high energy physics applications

    Science.gov (United States)

    Pohl, Martin

    1987-08-01

    I present global design principles for the implementation of High Energy Physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of High Energy Physics tasks is identified with granularity varying from a few times 10^8 instructions all the way down to a few times 10^4 instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this - yet preserving the necessary portability of the code - I propose a computational model with purely data driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme allows to map the internal layout of the data structure closely onto the layout of local and shared memory in a parallel architecture. It thus allows to optimize the application with respect to synchronization as well as data transport overheads. I present a coarse top level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms).

  9. Insights into Tikhonov regularization: application to trace gas column retrieval and the efficient calculation of total column averaging kernels

    Directory of Open Access Journals (Sweden)

    T. Borsdorff

    2014-02-01

    Full Text Available Insights are given into Tikhonov regularization and its application to the retrieval of vertical column densities of atmospheric trace gases from remote sensing measurements. The study builds upon the equivalence of the least-squares profile-scaling approach and Tikhonov regularization method of the first kind with an infinite regularization strength. Here, the vertical profile is expressed relative to a reference profile. On the basis of this, we propose a new algorithm as an extension of the least-squares profile scaling which permits the calculation of total column averaging kernels on arbitrary vertical grids using an analytic expression. Moreover, we discuss the effective null space of the retrieval, which comprises those parts of a vertical trace gas distribution which cannot be inferred from the measurements. Numerically the algorithm can be implemented in a robust and efficient manner. In particular for operational data processing with challenging demands on processing time, the proposed inversion method in combination with highly efficient forward models is an asset. For demonstration purposes, we apply the algorithm to CO column retrieval from simulated measurements in the 2.3 μm spectral region and to O3 column retrieval from the UV. These represent ideal measurements of a series of spaceborne spectrometers such as SCIAMACHY, TROPOMI, GOME, and GOME-2. For both spectral ranges, we consider clear-sky and cloudy scenes where clouds are modelled as an elevated Lambertian surface. Here, the smoothing error for the clear-sky and cloudy atmosphere is significant and reaches several percent, depending on the reference profile which is used for scaling. This underlines the importance of the column averaging kernel for a proper interpretation of retrieved column densities. Furthermore, we show that the smoothing due to regularization can be underestimated by calculating the column averaging kernel on a too coarse vertical grid. For both
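    The equivalence invoked above — least-squares profile scaling as the infinite-strength limit of first-kind Tikhonov regularization relative to a reference profile — reduces the column retrieval to a single scale factor. A minimal sketch follows, with K, y, and x_ref as a toy linear kernel, measurement vector, and reference profile (not the operational SCIAMACHY/TROPOMI quantities):

```python
def profile_scaling_retrieval(K, y, x_ref):
    """Retrieve a trace-gas column by scaling a reference profile x_ref.

    Assumes a linear forward model y ≈ K @ (c * x_ref). The least-squares
    scale factor is c = (f·y) / (f·f) with f = K @ x_ref, and the retrieved
    total column is c times the reference column sum."""
    f = [sum(row[j] * x_ref[j] for j in range(len(x_ref))) for row in K]
    c = sum(fi * yi for fi, yi in zip(f, y)) / sum(fi * fi for fi in f)
    return c, c * sum(x_ref)

# Toy example: identity kernel, measurement exactly twice the reference.
c, column = profile_scaling_retrieval([[1, 0], [0, 1]], [2.0, 4.0], [1.0, 2.0])
```

    Because only the scale factor is estimated, the vertical shape of x_ref is imposed on the solution; the parts of the profile orthogonal to that shape form the effective null space the abstract discusses, and the column averaging kernel describes how they are smoothed away.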

  10. Materials performance in prototype Thermal Cycling Absorption Process (TCAP) columns

    International Nuclear Information System (INIS)

    Clark, E.A.

    1992-01-01

    Two prototype Thermal Cycling Absorption Process (TCAP) columns have been metallurgically examined after retirement, to determine the causes of failure and to evaluate the performance of the column container materials in this application. Leaking of the fluid heating and cooling subsystems caused retirement of both TCAP columns, not leaking of the main hydrogen-containing column. The aluminum block design TCAP column (AHL block TCAP) used in the Advanced Hydride Laboratory, Building 773-A, failed in one nitrogen inlet tube that was crimped during fabrication, which led to fatigue crack growth in the tube and subsequent leaking of nitrogen from this tube. The Third Generation stainless steel design TCAP column (Third Generation TCAP), operated in 773-A room C-061, failed in a braze joint between the freon heating and cooling tubes (made of copper) and the main stainless steel column. In both cases, stresses from thermal cycling and local constraint likely caused the nucleation and growth of fatigue cracks. No materials compatibility problems between palladium coated kieselguhr (the material contained in the TCAP column) and either aluminum or stainless steel column materials were observed. The aluminum-stainless steel transition junction appeared to be unaffected by service in the AHL block TCAP. Also, no evidence of cracking was observed in the AHL block TCAP in a location expected to experience the highest thermal shock fatigue in this design. It is important to limit thermal stresses caused by constraint in hydride systems designed to work by temperature variation, such as hydride storage beds and TCAP columns.

  11. Columns in Clay

    Science.gov (United States)

    Leenhouts, Robin

    2010-01-01

    This article describes a clay project for students studying Greece and Rome. It provides a wonderful way to learn slab construction techniques by making small clay column capitols. With this lesson, students learn architectural vocabulary and history, understand the importance of classical architectural forms and their influence on today's…

  12. 46 CFR 174.085 - Flooding on column stabilized units.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Flooding on column stabilized units. 174.085 Section 174... Units § 174.085 Flooding on column stabilized units. (a) Watertight compartments that are outboard of... of the unit, must be assumed to be subject to flooding as follows: (1) When a column is subdivided...

  13. Tracking senescence-induced patterns in leaf litter leachate using parallel factor analysis (PARAFAC) modeling and self-organizing maps

    Science.gov (United States)

    Wheeler, K. I.; Levia, D. F.; Hudson, J. E.

    2017-09-01

    In autumn, the dissolved organic matter (DOM) contribution of leaf litter leachate to streams in forested watersheds changes as trees undergo resorption, senescence, and leaf abscission. Despite its biogeochemical importance, little work has investigated how leaf litter leachate DOM changes throughout autumn and how any changes might differ interspecifically and intraspecifically. Since climate change is expected to cause vegetation migration, it is necessary to learn how changes in forest composition could affect DOM inputs via leaf litter leachate. We examined changes in leaf litter leachate fluorescent DOM (FDOM) from American beech (Fagus grandifolia Ehrh.) leaves in Maryland, Rhode Island, Vermont, and North Carolina and from yellow poplar (Liriodendron tulipifera L.) leaves from Maryland. FDOM in leachate samples was characterized by excitation-emission matrices (EEMs). A six-component parallel factor analysis (PARAFAC) model was created to identify components that accounted for the majority of the variation in the data set. Self-organizing maps (SOM) compared the PARAFAC component proportions of leachate samples. Phenophase and species exerted much stronger influence on the determination of a sample's SOM placement than geographic origin. As expected, FDOM from all trees transitioned from more protein-like components to more humic-like components with senescence. Percent greenness of sampled leaves and the proportion of tyrosine-like component 1 were found to be significantly different between the two genetic beech clusters, suggesting differences in photosynthesis and resorption. Our results highlight the need to account for interspecific and intraspecific variations in leaf litter leachate FDOM throughout autumn when examining the influence of allochthonous inputs to streams.

  14. A study of pulse columns for thorium fuel reprocessing

    International Nuclear Information System (INIS)

    Fumoto, H.

    1982-03-01

    Two 5 m pulse columns with the same cartridge geometry were installed to investigate their performance. The characteristic differences between the aqueous-continuous and the organic-continuous columns were investigated experimentally. A ternary system of 30% TBP in dodecane-acetic acid-water was adopted for the mass-transfer study. It was concluded that the overall mass-transfer coefficient was independent of whether mass transfer proceeds from the dispersed to the continuous phase or from the continuous to the dispersed phase. Thorium nitrate was extracted and re-extracted using both modes of operation, and both HETS and HTU were obtained. In extraction, the aqueous-continuous column gave a much shorter HTU than the organic-continuous column; in re-extraction, the organic-continuous column gave the shorter HTU. The Thorex processes for uranium and thorium co-extraction, co-stripping, and partitioning were studied, with both acid and acid-deficient feed solutions investigated. Concentration profiles along the column height were obtained, and the data were analysed with McCabe-Thiele diagrams to evaluate HETS. (orig./HP) [de

  15. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    "…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  16. Influence of plant roots on electrical resistivity measurements of cultivated soil columns

    Science.gov (United States)

    Maloteau, Sophie; Blanchy, Guillaume; Javaux, Mathieu; Garré, Sarah

    2016-04-01

    Electrical resistivity methods have been widely used for the last 40 years in many fields: groundwater investigation, soil and water pollution, engineering applications for subsurface surveys, etc. Many factors can influence the electrical resistivity of a medium, and thus influence ERT measurements. Among those factors, plant roots are known to affect bulk electrical resistivity, but this impact is not yet well understood. The goals of this experiment are to quantify the effect of plant roots on the electrical resistivity of the soil subsurface and to map a plant root system in space and time with the ERT technique in a soil column. For this research, it is assumed that the root system affects the electrical properties of the rhizosphere: root activity (transporting ions, releasing exudates, changing the soil structure, etc.) modifies the electrical conductivity of the rhizosphere (Lobet G. et al., 2013). This experiment is part of a larger research project on the influence of root systems on geophysical measurements. Measurements are made on cylinders 45 cm high and 20 cm in diameter, filled with saturated loam on which seeds of Brachypodium distachyon (L.) Beauv. are sown. The columns are equipped with electrodes, TDR probes and temperature sensors. Experiments are conducted at Gembloux Agro-Bio Tech, in a growth chamber under controlled conditions: air temperature is fixed at 20 °C, the photoperiod is 14 hours, photosynthetically active radiation is 200 μmol m-2 s-1, and relative air humidity is fixed at 80%. The columns are fully saturated on the first day of the measurement period, after which no further irrigation is applied until the end of the experiment. The poster will report the first analysis of the electrical resistivity distribution in the soil columns through space and time. These results will be discussed in relation to plant development and other controlled factors. The water content of the soil will also be detailed

  17. Modeling and analysis of conventional and heat-integrated distillation columns

    DEFF Research Database (Denmark)

    Bisgaard, Thomas; Huusom, Jakob Kjøbsted; Abildskov, Jens

    2015-01-01

    A generic model that can cover diabatic and adiabatic distillation column configurations is presented, with the aim of providing a consistent basis for comparison of alternative distillation column technologies. Both a static and a dynamic formulation of the model are presented, together with a model catalogue consisting of the conventional, the heat-integrated and the mechanical vapor recompression distillation columns. The solution procedure of the model is outlined and illustrated in three case studies. One case study is a benchmark demonstrating the size of the model and the static properties of two different heat-integrated distillation column (HIDiC) schemes and the mechanical vapor recompression column. The second case study exemplifies the difference between a HIDiC and a conventional distillation column in the composition profiles within a multicomponent separation, whereas…

  18. Synthesis and characterization of Mn-doped ZnO column arrays

    International Nuclear Information System (INIS)

    Yang Mei; Guo Zhixing; Qiu Kehui; Long Jianping; Yin Guangfu; Guan Denggao; Liu Sutian; Zhou Shijie

    2010-01-01

    Mn-doped ZnO column arrays were successfully synthesized by a conventional sol-gel process. The effects of the Mn/Zn atomic ratio and reaction time were investigated, and the morphology, orientation and optical properties of the Mn-doped ZnO column arrays were characterized by SEM, XRD and photoluminescence (PL) spectroscopy. The results show that a Mn/Zn atomic ratio of 0.1 and a growth time of 12 h are the optimal conditions for the preparation of densely distributed ZnO column arrays. XRD analysis shows that the Mn-doped ZnO column arrays are highly c-axis oriented. For the Mn-doped ZnO column arrays, an obvious increase of photoluminescence intensity is observed at wavelengths of ∼395 nm and ∼413 nm, compared to pure ZnO column arrays.

  19. Sensory experience modifies feature map relationships in visual cortex

    Science.gov (United States)

    Cloherty, Shaun L; Hughes, Nicholas J; Hietanen, Markus A; Bhagavatula, Partha S

    2016-01-01

    The extent to which brain structure is influenced by sensory input during development is a critical but controversial question. A paradigmatic system for studying this is the mammalian visual cortex. Maps of orientation preference (OP) and ocular dominance (OD) in the primary visual cortex of ferrets, cats and monkeys can be individually changed by altered visual input. However, the spatial relationship between OP and OD maps has appeared immutable. Using a computational model we predicted that biasing the visual input to orthogonal orientation in the two eyes should cause a shift of OP pinwheels towards the border of OD columns. We then confirmed this prediction by rearing cats wearing orthogonally oriented cylindrical lenses over each eye. Thus, the spatial relationship between OP and OD maps can be modified by visual experience, revealing a previously unknown degree of brain plasticity in response to sensory input. DOI: http://dx.doi.org/10.7554/eLife.13911.001 PMID:27310531

  20. MAPS application for the ITS upgrade

    CERN Document Server

    Lattucaon, A

    2016-01-01

    The Monolithic Active Pixel Sensor (MAPS) technology is of central interest for the innermost tracking layers of particle physics experiments, since it enhances the detector granularity and thus allows for very high spatial resolution with a low material budget. This contribution will focus on the MAPS implementation for the ALICE ITS Upgrade. Within the ongoing R&D program, the ALPIDE chip is under development, with a wide pixel matrix consisting of 512 rows and 1024 columns. With this high pixel granularity a fast readout is mandatory. For this purpose a high-speed serial link, which works at the target speeds of 1.2 Gbps/400 Mbps, is integrated in the chip in order to send out data at the far end of a differential cable. To overcome the physical limitations imposed by the signal lines and properly reconstruct the signal, a pre-emphasis technique is mandatory at such long distances. This contribution summarizes the ongoing studies on the data transmission quality and presents the first measurement of the ...

  1. Mini-columns for Conducting Breakthrough Experiments. Design and Construction

    Energy Technology Data Exchange (ETDEWEB)

    Dittrich, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Reimus, Paul William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ware, Stuart Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-06-11

    Experiments with moderately and strongly sorbing radionuclides (i.e., U, Cs, Am) have shown that sorption between experimental solutions and traditional column materials must be accounted for to accurately determine stationary phase or porous media sorption properties (i.e., sorption site density, sorption site reaction rate coefficients, and partition coefficients or Kd values). This report details the materials and construction of mini-columns for use in breakthrough columns to allow for accurate measurement and modeling of sorption parameters. Material selection, construction techniques, wet packing of columns, tubing connections, and lessons learned are addressed.

  2. Position Based Visual Servoing control of a Wheelchair Mounted Robotic Arm using Parallel Tracking and Mapping of task objects

    Directory of Open Access Journals (Sweden)

    Alessandro Palla

    2017-05-01

    Full Text Available In the last few years power wheelchairs have become the only devices able to provide autonomy and independence to people with motor impairments. In particular, many power wheelchairs feature robotic arms for gesture emulation, such as interaction with objects. However, complex robotic arms often require a joystick to be controlled, which makes them hard to operate for impaired users; paradoxically, a user able to proficiently control such a device would not need it. For that reason, this paper presents a highly autonomous robotic arm, designed to minimize the effort necessary to control it. The arm features an easy-to-use human-machine interface and is controlled by a computer vision algorithm implementing a Position Based Visual Servoing (PBVS) control. This was realized by extracting features from the camera and fusing them with the distance to the target, obtained from a proximity sensor. The Parallel Tracking and Mapping (PTAM) algorithm was used to find the 3D position of the task object in the camera reference system. The visual servoing algorithm was implemented on an embedded platform, in real time. Each part of the control loop was developed in the Robot Operating System (ROS) environment, which allows the previous algorithms to be implemented as different nodes. Theoretical analysis, simulations and in-system measurements proved the effectiveness of the proposed solution.
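The core of a PBVS loop is simple: command the end-effector with a velocity proportional to the 3D position error of the task object. A minimal sketch (a toy proportional law with an invented gain and function name; the paper's actual controller, frames and PTAM/proximity fusion are not reproduced):

```python
def pbvs_step(p_current, p_goal, gain=0.5):
    """One iteration of a proportional PBVS law: commanded velocity is
    proportional to the 3-D position error (goal minus current). The
    gain value is a placeholder, not the paper's tuning."""
    return [gain * (g - p) for p, g in zip(p_current, p_goal)]

# Discrete-time servo loop: the error shrinks geometrically each step.
pos = [0.0, 0.0, 0.0]
goal = [0.2, 0.1, 0.4]          # e.g. object position from PTAM + proximity sensor
for _ in range(20):
    v = pbvs_step(pos, goal)
    pos = [p + vi for p, vi in zip(pos, v)]
```

In the paper's architecture, the pose estimate feeding such a loop and the loop itself would run as separate ROS nodes.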

  3. Design of a Cryogenic Distillation Column for JET Water Detritiation System for Tritium Recovery

    International Nuclear Information System (INIS)

    Parracho, A.I.; Camp, P.; Dalgliesh, P.; Hollingsworth, A.; Lefebvre, X.; Lesnoj, S.; Sacks, R.; Shaw, R.; Smith, R.; Wakeling, B.

    2015-01-01

    A Water Detritiation System (WDS) is currently being designed and manufactured to be installed in the Active Gas Handling System (AGHS) of JET, currently the largest magnetic fusion experiment in the world. JET has been designed and built to study fusion operating conditions with the plasma fuelling done by means of a deuterium-tritium gas mixture. AGHS is a plant designed and built to safely process gas mixtures and impurities containing tritium recovered from the JET torus exhaust gases. Tritium is removed from these gas mixtures and recycled. Tritium-depleted gases are sent to the Exhaust Detritiation System (EDS) for final tritium removal prior to discharge into the environment. In EDS, tritium and tritiated species are catalytically oxidized into water; this tritiated water is then adsorbed onto molecular sieve beds (MSB). After saturation the MSBs are heated and the water is desorbed and collected for tritium recovery. The WDS facility is designed to recover tritium from water with an average activity of 1.9 GBq/l, and is able to process water with activities of 85 GBq/l and higher. Tritiated water is filtered and supplied to the electrolyser, where the water is converted into gaseous oxygen and tritiated hydrogen. The hydrogen stream is first purified by selective diffusion through membranes of palladium alloy and is then fed to two cryogenic distillation (CD) columns. These operate in parallel or in series depending on the water activity. In the CD columns, hydrogen isotopes containing tritium are recovered as the bottom product, while hydrogen, the top product, is safely discarded to a stack. The CD columns are foreseen to have a throughput between 200 and 300 mol/h of hydrogen isotope vapour and operate at approximately 21.2 K and 105 kPa. The design of the CD columns will be presented in this work. This work has been carried out within the framework of the Contract for the Operation of the JET Facilities and has received funding from the European Union

  4. Rapid micro-scale proteolysis of proteins for MALDI-MS peptide mapping using immobilized trypsin

    Science.gov (United States)

    Gobom, Johan; Nordhoff, Eckhard; Ekman, Rolf; Roepstorff, Peter

    1997-12-01

    In this study we present a rapid method for tryptic digestion of proteins using micro-columns with enzyme immobilized on perfusion chromatography media. The performance of the method is exemplified with acyl-CoA-binding protein and reduced carbamidomethylated bovine serum albumin. The method proved to be significantly faster and yielded a better sequence coverage and an improved signal-to-noise ratio for the MALDI-MS peptide maps, compared to in-solution- and on-target digestion. Only a single sample transfer step is required, and therefore sample loss due to adsorption to surfaces is reduced, which is a critical issue when handling low picomole to femtomole amounts of proteins. An example is shown with on-column proteolytic digestion and subsequent elution of the digest into a reversed-phase micro-column. This is useful if the sample contains large amounts of salt or is too diluted for MALDI-MS analysis. Furthermore, by step-wise elution from the reversed-phase column, a complex digest can be fractionated, which reduces signal suppression and facilitates data interpretation in the subsequent MS analysis. The method also proved useful for consecutive digestions with enzymes of different cleavage specificity. This is exemplified with on-column tryptic digestion, followed by reversed-phase step-wise elution, and subsequent on-target V8 protease digestion.

  5. Design of parallel dual-energy X-ray beam and its performance for security radiography

    International Nuclear Information System (INIS)

    Kim, Kwang Hyun; Myoung, Sung Min; Chung, Yong Hyun

    2011-01-01

    A new concept for dual-energy X-ray beam generation and acquisition of dual-energy security radiography is proposed. Erbium (Er) and rhodium (Rh) with a copper filter were positioned in front of the X-ray tube to generate low- and high-energy X-ray spectra. The low- and high-energy X-rays were guided to enter separately into two parallel detectors. The Monte Carlo code MCNPX was used to derive the optimum thickness of each filter for improved dual X-ray image quality. The aim was to provide the ability to separate organic and inorganic materials under the 140 kVp/0.8 mA condition used in security applications. The acquired dual-energy X-ray beams were evaluated using a dual-energy Z-map, yielding enhanced performance compared with a commercial dual-energy detector. A collimator for the parallel dual-energy X-ray beam was designed to minimize interference between the low- and high-energy parallel beams for a 500 mm source-to-detector distance.
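The idea behind a dual-energy Z-map can be sketched with the standard log-attenuation-ratio feature (a generic textbook formulation, not the paper's exact algorithm; intensities and names below are invented for the example):

```python
import math

def z_map_feature(i_low, i_high, i0_low, i0_high):
    """Generic dual-energy discrimination feature: the ratio of low- to
    high-energy log attenuations rises with effective atomic number,
    which is the basis for organic/inorganic separation."""
    mu_low = -math.log(i_low / i0_low)     # low-energy attenuation line integral
    mu_high = -math.log(i_high / i0_high)  # high-energy attenuation line integral
    return mu_low / mu_high
```

High-Z (inorganic) materials attenuate low-energy photons disproportionately, so their ratio exceeds that of organic materials with a similar total attenuation.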

  6. Extended substitution-diffusion based image cipher using chaotic standard map

    Science.gov (United States)

    Kumar, Anil; Ghose, M. K.

    2011-01-01

    This paper proposes an extended substitution-diffusion based image cipher using the chaotic standard map [1] and a linear feedback shift register to overcome the weakness of the previous technique by adding nonlinearity. The first stage consists of row and column rotation and permutation, controlled by pseudo-random sequences generated by the standard chaotic map and the linear feedback shift register. In the second stage, further diffusion and confusion are obtained in the horizontal and vertical pixels by mixing the properties of horizontally and vertically adjacent pixels, respectively, with the help of the chaotic standard map. The number of rounds in both stages is controlled by a combination of the pseudo-random sequence and the original image. Performance is evaluated through various analyses: entropy analysis, difference analysis, statistical analysis, key sensitivity analysis, key space analysis and speed analysis. The experimental results illustrate that the scheme is highly secure and fast.
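The first-stage row/column rotation driven by a standard-map keystream can be sketched as follows (pure Python; the key values and byte quantisation are invented for the example, and the LFSR mixing and second-stage diffusion are omitted):

```python
import math

def standard_map_keystream(theta, p, k, n):
    """Iterate the Chirikov standard map and quantise the orbit into
    bytes -- a toy keystream for the permutation stage."""
    out = []
    for _ in range(n):
        p = (p + k * math.sin(theta)) % (2.0 * math.pi)
        theta = (theta + p) % (2.0 * math.pi)
        out.append(int(theta / (2.0 * math.pi) * 256) % 256)
    return out

def rotate_rows_cols(img, keystream):
    """First-stage permutation: circularly rotate each row, then each
    column, by keystream-derived offsets."""
    h, w = len(img), len(img[0])
    img = [row[keystream[i] % w:] + row[:keystream[i] % w]
           for i, row in enumerate(img)]
    cols = [list(c) for c in zip(*img)]                  # transpose
    cols = [col[keystream[h + j] % h:] + col[:keystream[h + j] % h]
            for j, col in enumerate(cols)]
    return [list(r) for r in zip(*cols)]                 # transpose back
```

Because the stage only rotates rows and columns, it permutes pixel values without changing them; the diffusion stage is what spreads each pixel's influence across the image.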

  7. Reliability assessment of slender concrete columns at the stability failure

    Science.gov (United States)

    Valašík, Adrián; Benko, Vladimír; Strauss, Alfred; Täubling, Benjamin

    2018-01-01

    The European Standard for designing concrete columns with non-linear methods shows deficiencies in terms of global reliability when concrete columns fail by loss of stability. Buckling is a brittle failure mode which occurs without warning, and the probability of its occurrence depends on the column's slenderness. Experiments with slender concrete columns were carried out in cooperation with STRABAG Bratislava LTD in the Central Laboratory of the Faculty of Civil Engineering SUT in Bratislava. The following article aims to compare the global reliability of slender concrete columns with slenderness of 90 and higher. The columns were designed according to methods offered by EN 1992-1-1 [1]. The experiments served as the basis for deterministic nonlinear modelling of the columns and a subsequent probabilistic evaluation of the variability of the structural response. The final results may be used as thresholds for loading of produced structural elements, and they aim to present probabilistic design as less conservative than classic partial-safety-factor based design and the alternative ECOV method.

  8. Effects of Irregular Bridge Columns and Feasibility of Seismic Regularity

    Science.gov (United States)

    Thomas, Abey E.

    2018-05-01

    Bridges with unequal column heights exhibit one of the main irregularities in bridge design, particularly when negotiating steep valleys, making such bridges vulnerable to seismic action. The desirable behaviour of bridge columns under seismic loading is that they perform in a regular fashion, i.e. the capacity of each column is utilized evenly. This type of behaviour is often missing when the column heights are unequal along the length of the bridge, since the short columns bear the maximum lateral load. In the present study, the effects of unequal column height on the global seismic performance of bridges are studied using pushover analysis. Codes such as CalTrans (Engineering service center, earthquake engineering branch, 2013) and EC-8 (EN 1998-2: design of structures for earthquake resistance. Part 2: bridges, European Committee for Standardization, Brussels, 2005) suggest seismic regularity criteria for achieving a regular seismic performance level at all bridge columns. The feasibility of adopting these seismic regularity criteria, along with those proposed in the literature, is assessed in the present study for bridges designed to the Indian Standards.

  9. Equipment for automatic measurement of gamma activity distribution in a column

    International Nuclear Information System (INIS)

    Kalincak, M.; Machan, V.; Vilcek, S.; Balkovsky, K.

    1978-01-01

    The design of a device for stepwise scanning of gamma activity distributions along chromatographic columns is described. In connection with a single-channel gamma spectrometer and a counting ratemeter with a recorder this device permits the resolution of a number of gamma emitters on the column, the determination of the gamma nuclide content in different chemical forms in the sample by means of column separation methods - Gel Chromatography Columns Scanning Method - and the determination of gamma nuclide distribution along the columns. The device permits the scanning of columns of up to 20 mm in diameter and 700 mm in length and continual scanning over a 450 mm column length with one clamping. With minor adaptations it is possible to scan columns up to 30 mm in diameter. The length of the scanned sections is 5 or 10 mm, the scanning time setting is arbitrary and variable activity levels and radiation energies may be measured. (author)

  10. Prediction of axial limit capacity of stone columns using dimensional analysis

    Science.gov (United States)

    Nazaruddin A., T.; Mohamed, Zainab; Mohd Azizul, L.; Hafez M., A.

    2017-08-01

    Stone columns are the method most favoured by engineers for stabilization of soft ground under road embankments and foundations for liquid-retaining structures. Easy installation and lower cost are among the factors that make stone columns preferable to other methods. Furthermore, a stone column can also act as a vertical drain, increasing the rate of consolidation during the preloading stage before construction starts. According to previous studies, several parameters influence the capacity of a stone column. Among them are the friction angle of the stones, the column arrangement (the two most common patterns being triangular and square), the centre-to-centre spacing between columns, the shear strength of the soil, and the physical size of the column (diameter and length). The dimensional analysis method (Buckingham Pi theorem) was used to derive a new formula for predicting the load capacity of stone columns. Experimental data from two previous studies were used for the analysis.
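A Buckingham-Pi reduction of the variables listed above might produce dimensionless groups like these (the grouping is a plausible illustration only; the paper's actual Pi terms and fitted formula are not reproduced here):

```python
def pi_groups(q_ult, c_u, length, diameter, spacing):
    """Dimensionless groups from a Buckingham-Pi reduction of the
    stone-column variables. q_ult: ultimate capacity (kPa), c_u:
    undrained shear strength (kPa), lengths in m. Illustrative only."""
    return {
        "Pi_load": q_ult / c_u,              # capacity normalised by soil strength
        "Pi_slenderness": length / diameter, # column slenderness ratio
        "Pi_spacing": spacing / diameter,    # centre-to-centre spacing ratio
    }
```

The point of the reduction is that a fitted relation between such groups holds across column sizes, rather than only for the tested geometries.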

  11. Wire-Mesh Tomography Measurements of Void Fraction in Rectangular Bubble Columns

    International Nuclear Information System (INIS)

    Reddy Vanga, B.N.; Lopez de Bertodano, M.A.; Zaruba, A.; Prasser, H.M.; Krepper, E.

    2004-01-01

    Bubble columns are widely used in the process industry, and their scale-up from laboratory-scale units to industrial units has been the subject of extensive study. The void fraction distribution in a bubble column is affected by the column size, the superficial velocity of the dispersed phase, the height of the liquid column, the size of the gas bubbles, the flow regime, the sparger design and the geometry of the column. The void fraction distribution in turn affects the interfacial momentum transfer in the column. The void fraction distribution in a rectangular bubble column 10 cm wide and 2 cm deep has been measured using wire-mesh tomography. Experiments were performed in an air-water system with the column operating in the dispersed bubbly flow regime; the experiments also serve to study the performance of wire-mesh sensors in batch flows. A 'wall peak' has been observed in the measured void fraction profiles for the higher gas flow rates. This 'wall peak' appears to be unique, as such a distribution has not previously been reported in the bubble column literature. Low gas flow rates yielded the conventional 'center peak' void profile. The effects of column height and superficial gas velocity on the void distribution have been investigated. Wire-mesh tomography also facilitates the measurement of bubble size distribution in the column. This paper presents the measurement principle and the experimental results for a wide range of superficial gas velocities. (authors)

  12. Capacity of columns with splice imperfections

    International Nuclear Information System (INIS)

    Popov, E.P.; Stephen, R.M.

    1977-01-01

    To study the behavior of spliced columns subjected to tensile forces simulating situations which may develop in an earthquake, all of the spliced specimens were tested to failure in tension after first having been subjected to large compressive loads. The results of these tests indicate that the lack of perfect contact at compression splices of columns may not be important, provided that the gaps are shimmed and welding is used to maintain the sections in alignment

  13. The central column structure in SPHEX

    International Nuclear Information System (INIS)

    Duck, R.C.; French, P.A.; Browning, P.K.; Cunningham, G.; Gee, S.J.; al-Karkhy, A.; Martin, R.; Rusbridge, M.G.

    1994-01-01

    SPHEX is a gun-injected spheromak in which a magnetised Marshall gun generates and maintains an approximately axisymmetric toroidal plasma within a topologically spherical flux-conserving vessel. The central column has been defined as a region of high mean floating potential, ⟨Vf⟩ up to ∼150 V, aligned with the geometric axis of the device. It has been suggested that this region corresponds to the open magnetic flux which is connected directly to the central electrode of the gun and links the toroidal annulus (in which ⟨Vf⟩ ∼ 0 V). Poynting vector measurements have shown that the power required to drive toroidal current in the annulus is transmitted out of the column by the coherent 20 kHz mode which pervades the plasma. Measurements of the MHD dynamo in the column indicate an 'antidynamo' electric field due to correlated fluctuations in v and B at the 20 kHz mode frequency, which is consistent with the time-averaged Ohm's law. On shorting the gun electrodes, the density in the column region decays rapidly, leaving a 'hole' of radius Rc ∼ 7 cm. This agrees with the estimated dimension of the open flux from mean internal B measurements and axisymmetric force-free equilibrium modelling, but is considerably smaller than the radius of ∼13 cm inferred from the time-averaged potential. In standard operating conditions the gun delivers a current of IG ∼ 60 kA at VG ∼ 500 V for ∼1 ms, driving a toroidal current of It ∼ 60 kA. Ultimately we wish to understand the mechanism which drives toroidal current in the annulus; the central column is of interest because of the crucial role it plays in this process. (author) 8 refs., 6 figs

  14. Two-dimensional chromatographic analysis using three second-dimension columns for continuous comprehensive analysis of intact proteins.

    Science.gov (United States)

    Zhu, Zaifang; Chen, Huang; Ren, Jiangtao; Lu, Juan J; Gu, Congying; Lynch, Kyle B; Wu, Si; Wang, Zhe; Cao, Chengxi; Liu, Shaorong

    2018-03-01

    We develop a new two-dimensional (2D) high performance liquid chromatography (HPLC) approach for intact protein analysis. Development of 2D HPLC faces a bottleneck: limited second-dimension (second-D) separation speed. We solve this problem by incorporating multiple second-D columns so that several second-D separations proceed in parallel. To demonstrate the feasibility of using this approach for comprehensive protein analysis, we select ion-exchange chromatography as the first dimension and reverse-phase chromatography as the second-D. We incorporate three second-D columns in an innovative way so that three reverse-phase separations can be performed simultaneously. We test this system on both standard proteins and E. coli lysates, achieving baseline resolution for eleven standard proteins and obtaining more than 500 peaks for E. coli lysates, an indication that sample complexity is greatly reduced. We see fewer than 10 bands when each fraction of the second-D effluent is analyzed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), compared to hundreds of SDS-PAGE bands when the original sample is analyzed. This approach could potentially be an excellent and general tool for protein analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
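The parallel second-D arrangement can be viewed as a round-robin scheduler: each first-D fraction is dispatched to the next free reverse-phase column so that runs overlap. A toy timing model (all numbers invented; real gradient and re-equilibration times would replace `runtime`):

```python
def schedule_fractions(n_fractions, n_columns, runtime, sampling_interval):
    """Dispatch first-dimension fractions round-robin onto parallel
    second-dimension columns. Returns (fraction, column, start_time)
    tuples; a toy model of the three-column arrangement."""
    free_at = [0.0] * n_columns              # when each column is next free
    schedule = []
    for i in range(n_fractions):
        arrival = i * sampling_interval      # fraction i elutes from the first-D here
        col = i % n_columns                  # round-robin column choice
        start = max(arrival, free_at[col])
        free_at[col] = start + runtime
        schedule.append((i, col, start))
    return schedule
```

With three columns, a second-D run can take up to three sampling intervals without stalling the first dimension, which is the rationale for running three reverse-phase separations simultaneously.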

  15. NOAA JPSS Ozone Mapping and Profiler Suite (OMPS) Version 8 Total Ozone (V8TOz) Environmental Data Record (EDR) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of total column ozone from the Ozone Mapping and Profiling Suite (OMPS) instrument...

  16. High accuracy microwave frequency measurement based on single-drive dual-parallel Mach-Zehnder modulator

    DEFF Research Database (Denmark)

    Zhao, Ying; Pang, Xiaodan; Deng, Lei

    2011-01-01

    A novel approach for broadband microwave frequency measurement by employing a single-drive dual-parallel Mach-Zehnder modulator is proposed and experimentally demonstrated. Based on bias manipulations of the modulator, conventional frequency-to-power mapping technique is developed by performing a...... 10−3 relative error. This high accuracy frequency measurement technique is a promising candidate for high-speed electronic warfare and defense applications....

  17. Design of zeolite ion-exchange columns for wastewater treatment

    International Nuclear Information System (INIS)

    Robinson, S.M.; Arnold, W.D.; Byers, C.H.

    1991-01-01

    Oak Ridge National Laboratory plans to use chabazite zeolites for decontamination of wastewater containing parts-per-billion levels of 90Sr and 137Cs. Treatability studies indicate that such zeolites can remove trace amounts of 90Sr and 137Cs from wastewater containing high concentrations of calcium and magnesium. These studies show that zeolite system efficiency depends on column design and operating conditions. Previous results with bench-scale, pilot-scale, and near-full-scale columns indicate that optimized design of full-scale columns could cut the volume of spent solids generated in half. The data indicate that shortcut scale-up methods cannot be used to design columns that minimize secondary waste generation. Since the secondary waste generation rate is a primary influence on process cost-effectiveness, a predictive mathematical model for column design is being developed. Equilibrium models and mass-transfer mechanisms are being experimentally determined for isothermal multicomponent ion exchange (Ca, Mg, Na, Cs, and Sr). Mathematical modelling of these data to determine the breakthrough curves for different column configurations and operating conditions will be used to optimize the final design of the full-scale treatment plant. 32 refs., 6 figs., 3 tabs
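For context, a common shortcut for sizing IX columns, of the kind the predictive model aims to improve on, is the Thomas breakthrough model (a generic textbook form; all parameter values in the test below are placeholders, not ORNL data):

```python
import math

def thomas_breakthrough(v, k_th, q0, m_bed, c0, flow):
    """Thomas model: effluent-to-feed concentration ratio C/C0 after
    treating volume v. k_th: rate constant, q0: equilibrium capacity,
    m_bed: sorbent mass, c0: feed concentration, flow: volumetric flow."""
    return 1.0 / (1.0 + math.exp(k_th * (q0 * m_bed - c0 * v) / flow))
```

By construction C/C0 = 0.5 at the stoichiometric volume v = q0·m_bed/c0 and rises monotonically toward 1 beyond it; the limitation, as the report notes, is that such lumped models do not capture the multicomponent competition (Ca, Mg, Na, Cs, Sr) that drives secondary waste volume.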

  18. Graph Modelling Approach: Application to a Distillation Column

    DEFF Research Database (Denmark)

    Hovelaque, V.; Commault, C.; Bahar, Mehrdad

    1997-01-01

    Introduction, structured systems and digraphs, distillation column model, generic input-output decoupling problem, generic disturbance rejection problem, concluding remarks.

  19. SPEEDUP™ ion exchange column model

    International Nuclear Information System (INIS)

    Hang, T.

    2000-01-01

    A transient model to describe the process of loading a solute onto the granular fixed bed in an ion exchange (IX) column has been developed using the SpeedUp™ software package. SpeedUp offers the advantage of smooth integration into other existing SpeedUp flowsheet models. The mathematical algorithm of a porous-particle diffusion model was adopted to account for convection, axial dispersion, film mass transfer, and pore diffusion. The method of orthogonal collocation on finite elements was employed to solve the governing transport equations. The model allows the use of a non-linear Langmuir isotherm based on an effective binary ion-exchange process. The SpeedUp column model was tested by comparison with the analytical solutions of three transport problems from the ion exchange literature. In addition, a sample calculation of a train of three crystalline silicotitanate (CST) IX columns in series was made using both the SpeedUp model and Purdue University's VERSE-LC code. All test cases showed excellent agreement between the SpeedUp model results and the test data. The model can be readily used for SuperLig™ ion exchange resins, once the experimental data are complete
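The non-linear Langmuir isotherm mentioned above has the standard form q(c) = q_max·b·c / (1 + b·c). A minimal sketch (constants are placeholders, not fitted resin parameters):

```python
def langmuir_loading(c, q_max, b):
    """Langmuir isotherm: solid-phase loading q in equilibrium with
    liquid concentration c. q_max is the saturation capacity and b the
    affinity constant; both are placeholder values here."""
    return q_max * b * c / (1.0 + b * c)
```

The isotherm is linear at low concentration (q ≈ q_max·b·c) and saturates at q_max, which is what makes the governing transport equations non-linear and motivates the orthogonal-collocation solution.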

  20. Applicability of hydroxylamine nitrate reductant in pulse-column contactors

    International Nuclear Information System (INIS)

    Reif, D.J.

    1983-05-01

Uranium and plutonium separations were made from simulated breeder reactor spent fuel dissolver solution with laboratory-sized pulse column contactors. Hydroxylamine nitrate (HAN) was used for reduction of plutonium (IV). An integrated extraction-partition system, simulating a breeder fuel reprocessing flowsheet, carried out a partial partition of uranium and plutonium in the second contactor. Tests have shown that acceptable coprocessing can be obtained using HAN as a plutonium reductant. Pulse column performance was stable even though gaseous HAN oxidation products were present in the column. Gas evolution rates up to 0.27 cfm/ft² of column cross section were tested and found acceptable.

  1. Partial strengthening of R.C square columns using CFRP

    Directory of Open Access Journals (Sweden)

    Ahmed Shaban Abdel-Hay

    2014-12-01

An experimental program was undertaken testing ten square columns of 200 × 200 × 2000 mm. One was a control specimen and the other nine were strengthened with CFRP. The main parameters studied in this research were the compressive strength of the upper part, the height of the upper poor-concrete part, and the height of the CFRP-wrapped part of the column. The experimental results, including mode of failure, ultimate load, concrete strain, and fiber strains, were analyzed. The main conclusion of this research was that partial strengthening of square columns using CFRP can be permitted and gives good results for the column carrying capacity.

  2. Fire response of composite columns subject to sway

    DEFF Research Database (Denmark)

    Virdi, Kuldeep

    Composite columns, using profiled steel sections encased in concrete or steel tubes filled with concrete, are increasingly used in practice taking advantage of speed of erection as well as offering cost-effective solutions. While the design of braced and unbraced composite columns under ambient...... conditions is adequately covered in the relevant standard, Eurocode 4, simplified design of unbraced composite columns for the fire limit state has not been included. Recognising this, a collaborative research project was undertaken with funding from the Research Fund for Coal and Steel. The paper describes...... the scope of the project which covered control tests under ambient conditions, carried out by the author while at City University London. Other aspects covered in the project included fire tests carried out by CTICM in France, on isolated columns and on two frames designed by Leibniz Universität Hannover...

  3. Cross flow cyclonic flotation column for coal and minerals beneficiation

    Science.gov (United States)

    Lai, Ralph W.; Patton, Robert A.

    2000-01-01

An apparatus and process for the separation of coal from pyritic impurities using a modified froth flotation system. The froth flotation column incorporates a helical track about the inner wall of the column in a region intermediate between the top and base of the column. A standard impeller located about the central axis of the column is used to generate a centrifugal force, thereby increasing the separation efficiency of coal from the pyritic particles and hydrophilic tailings.

  4. Temperature-based on-column solute focusing in capillary liquid chromatography reduces peak broadening from pre-column dispersion and volume overload when used alone or with solvent-based focusing.

    Science.gov (United States)

    Groskreutz, Stephen R; Horner, Anthony R; Weber, Stephen G

    2015-07-31

On-column focusing is essential for satisfactory performance with capillary-scale columns. It results from generating transient conditions at the head of the column that lead to high solute retention. Solvent-based on-column focusing is a well-known approach to achieve this. Temperature-assisted on-column focusing (TASF) can also be effective. TASF improves focusing by cooling a short segment of the column inlet to a temperature lower than the column temperature during the injection and then rapidly heating the focusing segment to match the column temperature. A troublesome feature of an earlier implementation of TASF was the need to leave the capillary column unpacked in the portion of the column inside the fitting connecting it to the injection valve. We have overcome that problem in this work by packing the head of the column with solid silica spheres. In addition, technical improvements to the TASF instrumentation include selection of a more powerful thermoelectric cooler to create faster temperature changes and electronic control for easy incorporation into conventional capillary instruments. Used in conjunction with solvent-based focusing and with isocratic elution, volumes of paraben samples (esters of p-hydroxybenzoic acid) up to 4.5 times the column liquid volume can be injected without significant bandspreading due to volume overload. Interestingly, the shapes of the peaks from the lowest-volume injections that we can make, 30 nL, are improved when using TASF. TASF is very effective at reducing the detrimental effects of pre-column dispersion under isocratic elution. Finally, we show that TASF can be used to focus the neuropeptide galanin in a sample solvent with elution strength stronger than the mobile phase; here, the stronger solvent is necessitated by the need to prevent peptide adsorption prior to and during analysis.

  5. Control structure selection for energy integrated distillation column

    DEFF Research Database (Denmark)

    Hansen, J.E.; Jørgensen, Sten Bay

    1998-01-01

    This paper treats a case study on control structure selection for an almost binary distillation column. The column is energy integrated with a heat pump in order to transfer heat from the condenser to the reboiler. This integrated plant configuration renders the possible control structures somewhat...... different from what is usual for binary distillation columns. Further the heat pump enables disturbances to propagate faster through the system. The plant has six possible actuators of which three must be used to stabilize the system. Hereby three actuators are left for product purity control. An MILP...

  6. QuBiLS-MIDAS: a parallel free-software for molecular descriptors computation based on multilinear algebraic maps.

    Science.gov (United States)

    García-Jacas, César R; Marrero-Ponce, Yovani; Acevedo-Martínez, Liesner; Barigye, Stephen J; Valdés-Martiní, José R; Contreras-Torres, Ernesto

    2014-07-05

The present report introduces the QuBiLS-MIDAS software, belonging to the ToMoCoMD-CARDD suite, for the calculation of three-dimensional molecular descriptors (MDs) based on two-linear (bilinear), three-linear, and four-linear (multilinear or N-linear) algebraic forms; it is thus unique software for computing these tensor-based indices. These descriptors establish relations for two, three, and four atoms by using several (dis-)similarity metrics or multimetrics, matrix transformations, cutoffs, local calculations and aggregation operators. The theoretical background of these N-linear indices is also presented. The QuBiLS-MIDAS software was developed in the Java programming language and employs the Chemistry Development Kit library for the manipulation of chemical structures and the calculation of atomic properties. The software is composed of a user-friendly desktop interface and an Abstract Programming Interface library. The former was created to simplify the configuration of the different options of the MDs, whereas the library was designed to allow its easy integration into other software for chemoinformatics applications. The program provides functionalities for data cleaning tasks and for batch processing of the molecular indices. In addition, it offers parallel calculation of the MDs through the use of all available processors in current computers. Complexity studies of the main algorithms demonstrate that they were implemented efficiently relative to a trivial implementation. Lastly, performance tests reveal that the software scales suitably as the number of processors is increased. The QuBiLS-MIDAS software therefore constitutes a useful application for the computation of molecular indices based on N-linear algebraic maps, and it can be used freely to perform chemoinformatics studies.

  7. Column gamma-ray scanning of the 'Hector Molina' Distillery

    International Nuclear Information System (INIS)

    Derivet Zarzabal, M.; Capote Ferrera, E.; Fernandez Gomez, I.; Carrazana Gonzalez, L.; Borroto Portela, J.

    2015-01-01

Gamma-ray scanning, often referred to as 'column scanning', is a convenient, cost-effective, fast, efficient and non-invasive technique for examining the internal characteristics of process equipment, such as alcohol distillation columns, while it is in operation. Column scanning allows engineers to study tray hydraulics inside a distillation column under on-line conditions. It provides essential data to optimize the performance of columns, extend column run times, evaluate the effects of defective trays and identify maintenance requirements. This knowledge can reduce repair times significantly. In 2014, the Environmental Radiological Surveillance Laboratory of the Center of Radiation Protection and Hygiene introduced this service at the 'Hector Molina' Distillery. The diagnosis carried out allowed the detection of some anomalies in its operation. In this work the results obtained during gamma-ray scanning of the column are shown. (Author)

  8. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  9. Buckling of liquid columns

    NARCIS (Netherlands)

    Habibi, M.; Rahmani, Y.; Bonn, D.; Ribe, N.M.

    2010-01-01

    Under appropriate conditions, a column of viscous liquid falling onto a rigid surface undergoes a buckling instability. Here we show experimentally and theoretically that liquid buckling exhibits a hitherto unsuspected complexity involving three different modes—viscous, gravitational, and

  10. Structural Decoupling and Disturbance Rejection in a Distillation Column

    DEFF Research Database (Denmark)

    Bahar, Mehrdad; Jantzen, Jan; Commault, C.

    1996-01-01

Introduction, distillation column model, input-output decoupling, disturbance rejection, concluding remarks, references.

  11. Cervical column morphology and craniofacial profiles in monozygotic twins.

    Science.gov (United States)

    Sonnesen, Liselotte; Pallisgaard, Carsten; Kjaer, Inger

    2008-02-01

Previous studies have described the relationships between cervical column morphology and craniofacial morphology. The aims of the present study were to describe cervical column morphology in 38 pairs of adult monozygotic (MZ) twins, and to compare craniofacial morphology in twins with fusions with craniofacial morphology in twins without fusion. Visual assessment of cervical column morphology and cephalometric measurements of craniofacial morphology were performed on profile radiographs. In the cervical column, fusion between the corpora of the second and third vertebrae was registered as fusion. In the twin group, 8 twin pairs had fusion of the cervical column in both individuals within the pair (subgroup A), 25 pairs had no fusions (subgroup B), and in 5 pairs cervical column morphology differed within the pair (subgroup C), as one twin had fusion and the other did not. Comparison of craniofacial profiles showed a tendency to increased jaw retrognathia, a larger cranial base angle, and larger mandibular inclination in subgroup A than in subgroup B. The same tendency was observed within subgroup C between the individual twins with fusion compared with those without fusion. These results confirm that cervical fusions and craniofacial morphology may be interrelated in twins when analysed on profile radiographs. The study also documents that differences in cervical column morphology can occur in individuals within a pair of MZ twins. It illustrates that differences in craniofacial morphology between individuals within a pair of MZ twins can be associated with cervical fusion.

  12. Radial heterogeneity of some analytical columns used in high-performance liquid chromatography.

    Science.gov (United States)

    Abia, Jude A; Mriziq, Khaled S; Guiochon, Georges A

    2009-04-10

An on-column electrochemical microdetector was used to determine accurately the radial distribution of the mobile phase velocity and of the column efficiency at the exit of three common analytical columns, namely a 100 mm × 4.6 mm C18-bonded silica-based monolithic column, a 150 mm × 4.6 mm column packed with 2.7 µm porous shell particles of C18-bonded silica (HALO), and a 150 mm × 4.6 mm column packed with 3 µm fully porous C18-bonded silica particles (LUNA). The results obtained demonstrate that none of the three columns is radially homogeneous. In all three cases, the efficiency was found to be lower in the wall region of the column than in its core region (the central core with a radius of 1/3 the column inner radius). The decrease in local efficiency from the core to the wall regions was lower in the case of the monolith (ca. 25%) than in that of the two particle-packed columns (ca. 35-50%). The mobile phase velocity was found to be ca. 1.5% higher in the wall than in the core region of the monolithic column while, in contrast, it was ca. 2.5-4.0% lower in the wall region for the two particle-packed columns.

  13. High resolution and high sensitivity methods for oligosaccharide mapping and characterization by normal phase high performance liquid chromatography following derivatization with highly fluorescent anthranilic acid.

    Science.gov (United States)

    Anumula, K R; Dhume, S T

    1998-07-01

Facile labeling of oligosaccharides (acidic and neutral) in a nonselective manner was achieved with highly fluorescent anthranilic acid (AA, 2-aminobenzoic acid) (more than twice the intensity of 2-aminobenzamide, AB) for specific detection at very high sensitivity. Quantitative labeling in acetate-borate buffered methanol (approximately pH 5.0) at 80 °C for 60 min resulted in negligible or no desialylation of the oligosaccharides. A high-resolution high performance liquid chromatographic method was developed for quantitative oligosaccharide mapping on a polymeric NH2-bonded (Astec) column operating under normal phase and anion exchange (NP-HPAEC) conditions. For isolation of oligosaccharides from the map by simple evaporation, the chromatographic conditions developed use volatile acetic acid-triethylamine buffer (approximately pH 4.0) systems. The mapping and characterization technology was developed using well-characterized standard glycoproteins. The fluorescent oligosaccharide maps were similar to the maps obtained by high pH anion-exchange chromatography with pulsed amperometric detection (HPAEC-PAD), except that the fluorescent maps contained more defined peaks. In the map, the oligosaccharides separated into groups based on charge, size, linkage, and overall structure in a manner similar to HPAEC-PAD, with a contribution of the -COOH function from the label, anthranilic acid. However, the selectivity of the column for sialic acid linkages was different. A second-dimension normal phase HPLC (NP-HPLC) method was developed on an amide column (TSK Gel Amide-80) for separation of the AA-labeled neutral complex type and isomeric structures of high mannose type oligosaccharides. The oligosaccharides labeled with AA are compatible with biochemical and biophysical techniques, and the use of matrix-assisted laser desorption mass spectrometry for rapid determination of the oligosaccharide mass map of glycoproteins is demonstrated.

  14. Close-range geophotogrammetric mapping of trench walls using multi-model stereo restitution software

    Energy Technology Data Exchange (ETDEWEB)

    Coe, J.A.; Taylor, E.M.; Schilling, S.P.

    1991-06-01

Methods for mapping geologic features exposed on trench walls have advanced from conventional gridding and sketch mapping to precise close-range photogrammetric mapping. In our study, two strips of small-format (60 × 60) stereo pairs, each containing 42 photos and covering approximately 60 m of nearly vertical trench wall (2-4 m high), were contact printed onto eight 205 × 255-mm transparent film sheets. Each strip was oriented in a Kern DSR15 analytical plotter using the bundle adjustment module of Multi-Model Stereo Restitution Software (MMSRS). We experimented with several systematic-control-point configurations to evaluate orientation accuracies as a function of the number and position of control points. We recommend establishing control-point columns (each containing 2-3 points) in every 5th photo to achieve the 7-mm Root Mean Square Error (RMSE) accuracy required by our trench-mapping project. 7 refs., 8 figs., 1 tab.

  15. Close-range geophotogrammetric mapping of trench walls using multi-model stereo restitution software

    International Nuclear Information System (INIS)

    Coe, J.A.; Taylor, E.M.; Schilling, S.P.

    1991-01-01

    Methods for mapping geologic features exposed on trench walls have advanced from conventional gridding and sketch mapping to precise close-range photogrammetric mapping. In our study, two strips of small-format (60 x 60) stereo pairs, each containing 42 photos and covering approximately 60 m of nearly vertical trench wall (2-4 m high), were contact printed onto eight 205 x 255-mm transparent film sheets. Each strip was oriented in a Kern DSR15 analytical plotter using the bundle adjustment module of Multi-Model Stereo Restitution Software (MMSRS). We experimented with several systematic-control-point configurations to evaluate orientation accuracies as a function of the number and position of control points. We recommend establishing control-point columns (each containing 2-3 points) in every 5th photo to achieve the 7-mm Root Mean Square Error (RMSE) accuracy required by our trench-mapping project. 7 refs., 8 figs., 1 tab

  16. Characterization of retentivity of reversed phase liquid chromatography columns.

    Science.gov (United States)

    Ying, P T; Dorsey, J G

    1991-03-01

There are dozens of commercially available reversed phase columns, most marketed as C-8 or C-18 materials, but with no useful way of classifying their retentivity. A useful way of ranking these columns in terms of column "strength" or retentivity is presented. The method utilizes a value for ln k′w, the estimated retention of a solute from a mobile phase of 100% water, and the slope of the plot of ln k′ vs. ET(30), the solvent polarity. The method is validated with 26 solutes varying in ln k′w from about 2 to over 20, on 14 different reversed phase columns. In agreement with previous work, it is found that the phase volume ratio of the column is the most important parameter in determining retentivity. It is strongly suggested that manufacturers adopt a uniform method of calculating this value and that it be made available in advertising, rather than the uninterpretable "% carbon".
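The ranking procedure amounts to a straight-line fit of ln k′ against ET(30) followed by extrapolation to the ET(30) value of pure water. A minimal sketch, assuming ET(30) = 63.1 kcal/mol for water (the commonly tabulated value) and hypothetical measured points:

```python
def fit_lnkw(et30_values, ln_k_values, et30_water=63.1):
    """Fit ln k' = a + b*ET(30) by least squares and extrapolate to
    pure water (ET(30) = 63.1 kcal/mol) to estimate ln k'_w.
    Returns (slope b, estimated ln k'_w)."""
    n = len(et30_values)
    mx = sum(et30_values) / n
    my = sum(ln_k_values) / n
    sxx = sum((x - mx) ** 2 for x in et30_values)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(et30_values, ln_k_values))
    b = sxy / sxx
    a = my - b * mx
    return b, a + b * et30_water
```

A column with a larger extrapolated ln k′w is "stronger" (more retentive); the slope characterizes how quickly retention falls as the mobile phase becomes less polar.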

  17. Seismic performance of recycled concrete-filled square steel tube columns

    Science.gov (United States)

    Chen, Zongping; Jing, Chenggui; Xu, Jinjun; Zhang, Xianggang

    2017-01-01

An experimental study on the seismic performance of recycled concrete-filled square steel tube (RCFST) columns is carried out. Six specimens were designed and tested under constant axial compression and cyclic lateral loading. Two parameters, the replacement percentage of recycled coarse aggregate (RCA) and the axial compression level, were considered in the test. Based on the experimental data, the hysteretic loops, skeleton curves, ductility, energy dissipation capacity and stiffness degradation of the RCFST columns were analyzed. The test results indicate that the failure mode of the RCFST columns is local buckling of the steel tube at the bottom of the columns, and that the hysteretic loops are full, with shapes similar to those of normal CFST columns. Furthermore, the ductility coefficients of all specimens are close to 3.0, and the equivalent viscous damping coefficient corresponding to the ultimate lateral load ranges from 0.323 to 0.360, which demonstrates that RCFST columns exhibit remarkable seismic performance.

  18. Tritium isotope separation by water distillation column packed with silica-gel beads

    International Nuclear Information System (INIS)

    Fukada, Satoshi

    2004-01-01

Tritium enrichment or depletion by water distillation was investigated using a glass column 32 cm in height packed with silica-gel beads of 3.4 mm average diameter. The total separation factor of the silica-gel distillation column, αH-T, was compared with those of an open column distillation tower and of a column packed with stainless-steel Dixon rings. Depletion of the tritium activity in the distillate was enhanced by isotopic exchange with water adsorbed on the silica-gel beads, which have a higher affinity for HTO than for H2O. The value of αH-T − 1 for the silica-gel distillation column was about four times larger than that of a column without any packing and about two times larger than that of the Dixon-ring column. The improvement of αH-T by the silica-gel adsorbent indicates that a distillation-adsorption column can be shorter than conventional distillation columns. (author)

  19. Input-Output Decoupling of a Distillation Column LV-Configuration

    DEFF Research Database (Denmark)

    Yazdi, H.; Jørgensen, Sten Bay; Bahar (fratrådt), Mehrdad

    1996-01-01

Introduction, digraph approach, distillation column, digraph analysis, solution analysis, discussion and conclusion, references.

  20. Short steel and concrete columns under high temperatures

    Directory of Open Access Journals (Sweden)

    A. E. P. G. A. Jacintho

The growing demand for knowledge about the effect of high temperatures on structures has stimulated increasing research worldwide. This article presents experimental results for short composite steel and concrete columns subjected to high temperatures in furnaces, with or without an axial compression load, numerically analyzes the temperature distribution in these columns after 30 and 60 minutes, and compares it with the experimental results. The models consist of concrete-filled tubes of three different thicknesses and two different diameters, and the concrete fill has conventional properties that remained constant for all of the models. The stress-strain behavior of the composite columns was altered after exposure to high temperatures relative to the same columns at room temperature, which was most evident in the 60-minute tests due to the higher temperatures reached. The computational analysis adopted temperature rise curves that were obtained experimentally.

  1. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
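The template comparison described above can be illustrated with a toy fixed-offset variant (real rsync-style protocols add rolling checksums and block matching at arbitrary offsets, which the abstract does not detail). The function names and block size below are invented for the sketch:

```python
import hashlib

BLOCK = 4096  # fixed block size; an arbitrary choice for this sketch


def split_blocks(data, block=BLOCK):
    """Cut a byte string into fixed-size blocks."""
    return [data[i:i + block] for i in range(0, len(data), block)]


def delta_against_template(checkpoint, template):
    """Return {block_index: block_bytes} for every block of a node's
    checkpoint whose checksum differs from the template checkpoint's
    block at the same offset; only these blocks need to be stored."""
    template_sums = [hashlib.md5(b).digest() for b in split_blocks(template)]
    delta = {}
    for i, block in enumerate(split_blocks(checkpoint)):
        if i >= len(template_sums) or hashlib.md5(block).digest() != template_sums[i]:
            delta[i] = block
    return delta
```

When node states diverge little from the broadcast template, the delta contains only a few blocks, which is the source of the storage and transmission savings the patent claims.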

  2. Bubble column fermenter modeling: a comparison for pressure effects

    Energy Technology Data Exchange (ETDEWEB)

    Shioya, S; Dang, N D.P.; Dunn, I J

    1978-01-01

    Two models which describe the oxygen transfer, oxygen uptake, and axial mixing in a bubble column fermenter are described. Model I includes no pressure effects and can be solved analytically. Model II incorporates the influence of hydrostatic pressure on oxygen solubility and gas expansion and must be solved numerically. The liquid phase oxygen concentration profiles from both models are compared to ascertain for what parametric conditions and for what maximum column height Model I is valid. Results show that for many situations Model I can approximate the oxygen profiles in a 10 m column within 20%. As the transfer and uptake rates increase, the deviation of Model I can reach 80% for a 10 m column. 7 figures.
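The hydrostatic-pressure effect that distinguishes Model II can be sketched with Henry's law. The Henry constant below is an order-of-magnitude value for O2 in water, used purely for illustration:

```python
RHO_WATER = 1000.0   # liquid density, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2
P_TOP = 1.013e5      # head-space pressure, Pa (1 atm assumed)


def o2_saturation(depth, y_o2=0.21, henry=7.9e4):
    """Oxygen saturation concentration (mol/m^3) at a given liquid
    depth (m), from Henry's law with hydrostatic pressure added.
    henry is an illustrative Henry constant in Pa*m^3/mol."""
    pressure = P_TOP + RHO_WATER * G * depth
    return y_o2 * pressure / henry
```

At a 10 m depth the absolute pressure nearly doubles relative to the surface, so the local saturation concentration, and hence the oxygen-transfer driving force, nearly doubles as well; this is the effect Model I neglects.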

  3. HPLC separation of triacylglycerol positional isomers on a polymeric ODS column.

    Science.gov (United States)

    Kuroda, Ikuma; Nagai, Toshiharu; Mizobe, Hoyo; Yoshimura, Nobuhito; Gotoh, Naohiro; Wada, Shun

    2008-07-01

A polymeric ODS column was applied to the resolution of triacylglycerol positional isomers (TAG-PI), i.e. 1,3-dioleoyl-2-palmitoyl-glycerol (OPO) and 1,2-dioleoyl-3-palmitoyl-rac-glycerol (OOP), with a recycle HPLC system. To investigate the ODS column species and the column temperatures for the resolution of a TAG-PI pair, a mixture of OPO and OOP was subjected to an HPLC system equipped with a non-endcapped polymeric, endcapped monomeric, endcapped intermediate, or non-endcapped monomeric ODS column at three different column temperatures (40, 25, or 10 °C). Only the non-endcapped polymeric ODS column achieved the separation of OPO and OOP, and the lowest column temperature (10 °C) showed the best resolution for them. The other pair of TAG-PI, a mixture of 1,3-dipalmitoyl-2-oleoyl-glycerol (POP) and 1,2-dipalmitoyl-3-oleoyl-rac-glycerol (PPO), was also subjected to the system equipped with a non-endcapped polymeric or monomeric ODS column at five different column temperatures (40, 32, 25, 17, and 10 °C). POP and PPO were likewise separated only on the non-endcapped polymeric ODS column, at 25 °C; however, no clear peak appeared at 10 °C. These results indicate that the polymeric ODS stationary phase has an ability to recognize the structural differences between TAG-PI pairs. The column temperature is also a very important factor for separating a TAG-PI pair, and the optimal temperature is likely related to the solubility of the TAG-PI in the mobile phase. Furthermore, the recycle HPLC system provided the means for the separation and analysis of TAG-PI pairs.

  4. REST-MapReduce: An Integrated Interface but Differentiated Service

    Directory of Open Access Journals (Sweden)

    Jong-Hyuk Park

    2014-01-01

With the fast deployment of cloud computing, MapReduce architectures are becoming the major technologies for mobile cloud computing. The concept of MapReduce was first introduced as a novel programming model and implementation for a large set of computing devices. In this research, we propose a novel concept of REST-MapReduce, enabling users to use only the REST interface without using the MapReduce architecture. This approach provides a higher level of abstraction by integrating the two types of access interface, REST API and MapReduce. The motivation of this research stems from the slower response time for accessing a simple RDBMS on Hadoop compared with direct access to the RDBMS, caused by the overhead of job scheduling, initiating, starting, tracking, and management during MapReduce-based parallel execution. Our framework therefore provides good performance both for REST Open API services and for MapReduce. This is very useful for constructing REST Open API services on Hadoop hosting services, for example, Amazon AWS (Macdonald, 2005) or IBM Smart Cloud. To evaluate the performance of our REST-MapReduce framework, we conducted experiments with the Jersey REST web server and Hadoop. Experimental results show that our approach outperforms conventional approaches.

  5. Real Time Mapping and Dynamic Navigation for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Maki K. Habib

    2008-11-01

This paper discusses the importance, the complexity and the challenges of mapping a mobile robot's unknown and dynamic environment, as well as the role of sensors and the problems inherent in map building. These issues remain largely open research problems in developing dynamic navigation systems for mobile robots. The paper presents the state of the art in map building and localization for mobile robots navigating within unknown environments, and then introduces a solution to the complex problem of autonomous map building and maintenance, with a focus on developing an incremental grid-based mapping technique suitable for real-time obstacle detection and avoidance. In this case, the navigation of mobile robots can be treated as a problem of tracking geometric features that occur naturally in the environment of the robot. The robot maps its environment incrementally, using the concept of occupancy grids and the fusion of multiple ultrasonic sensor readings, while wandering in it and staying away from all obstacles. To ensure real-time operation with limited resources, as well as to promote extensibility, the mapping and obstacle avoidance modules are deployed in a parallel and distributed framework. Simulation-based experiments have been conducted to show the validity of the developed mapping and obstacle avoidance approach.
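The incremental occupancy-grid update mentioned above is commonly implemented with per-cell log-odds states; a minimal sketch, with invented increment constants, might look like:

```python
import math

L_OCC = 0.85    # log-odds increment for a cell observed occupied
L_FREE = -0.40  # log-odds decrement for a cell observed free
# (both are tuning constants invented for this sketch)


def update_cell(l_prev, hit):
    """Incremental Bayesian update of one grid cell's log-odds value
    after an ultrasonic reading ('hit' = beam reflected in this cell)."""
    return l_prev + (L_OCC if hit else L_FREE)


def probability(l):
    """Convert a log-odds value back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```

Storing log-odds rather than probabilities keeps each sensor fusion step a single addition per cell, which is what makes the grid update cheap enough for real-time obstacle avoidance.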

  6. 40 CFR Table 25 to Subpart G of... - Effective Column Diameter (Fc)

    Science.gov (United States)

    2010-07-01

Table 25 to Subpart G of Part 63—Effective Column Diameter (Fc)

        Column type                            Fc (feet)
        9-inch by 7-inch built-up columns      1.1
        8-inch-diameter pipe columns           0.7
        No construction details known          1.0

  7. Comparative study of the performance of columns packed with several new fine silica particles. Would the external roughness of the particles affect column properties?

    Science.gov (United States)

    Gritti, Fabrice; Guiochon, Georges

    2007-09-28

    We measured and compared the characteristics and performance of columns packed with particles of five different C(18)-bonded silicas: 3 and 5 microm Luna, 3 microm Atlantis, 3.5 microm Zorbax, and 2.7 microm Halo. The average particle size of each material was derived from SEM pictures of 200 individual particles. These pictures contrast the irregular morphology of the external surface of the Zorbax and Halo particles with the smooth surface of the Luna and Atlantis particles. Over a wide range of mobile phase velocities (from 0.010 to 3 mL/min) and at ambient temperature, we measured the first and second central moments of the peaks of naphthalene, insulin, and bovine serum albumin (BSA). These moments were corrected for the contributions of the extra-column volumes to calculate the reduced HETPs. The C-terms of naphthalene and insulin are largest for the Halo and Zorbax materials and the A-term smallest for the Halo-packed column. The Halo column performs the best for the low molecular weight compound naphthalene (minimum reduced HETP, 1.4) but is not as good as the Atlantis or Luna columns for the large molecular weight compound insulin. The Zorbax column is the least efficient column because of its large C-term. The low sample diffusivity through these particles, alone, does not account for the results. It is most likely that the roughness of the external surface of the Halo and Zorbax particles limits the performance of these columns at high flow rates, generating an unusually high film mass transfer resistance.
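
    The moment-based efficiency measure used above follows directly from the standard definitions: the plate count is N = mu1^2 / mu2 (first moment squared over the second central moment, both corrected for extra-column contributions), the plate height is H = L / N, and the reduced HETP divides H by the particle diameter. A minimal sketch, with illustrative numbers only:

    ```python
    def reduced_hetp(mu1, mu2_central, column_length, particle_diameter):
        """Reduced plate height h = (L * mu2 / mu1^2) / dp.

        mu1          -- first moment (mean retention time), extra-column corrected
        mu2_central  -- second central moment (peak variance), same correction
        """
        n_plates = mu1 ** 2 / mu2_central   # N = mu1^2 / mu2
        hetp = column_length / n_plates     # H = L / N
        return hetp / particle_diameter     # h = H / dp

    # Illustrative values chosen so that h comes out near the paper's
    # best case (reduced HETP about 1.4 for the 2.7 micrometre particles).
    h = reduced_hetp(mu1=100.0, mu2_central=0.756,
                     column_length=0.05, particle_diameter=2.7e-6)
    ```

    The moment route is preferred over half-height width measurements for peaks as asymmetric as protein bands, since the second central moment captures tailing that a width-at-half-height estimate misses.
    
    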

  8. Lighting columns. Research on the behaviour of lighting columns in sideways- on and head-on impact tests with private cars.

    NARCIS (Netherlands)

    1978-01-01

    Lighting columns can be made safer by providing them with a sliding construction, which allows the column to slide off its ground anchorage in the event of an accident. Aluminium lighting columns can also be constructed in such a way that they break off when a car collides with them.

  9. Distillation columns inspection through gamma scanning

    International Nuclear Information System (INIS)

    Garcia, Marco

    1999-09-01

    The applications of nuclear energy are very wide, and they allow the saving of economic resources since the investigation of a given process can be carried out without stopping the plant. The gamma scanning of oil cracking columns is a practical example: it allows the hydraulic operation of the inspected columns to be determined. A 22 mCi Co-60 source and a detector with a NaI(Tl) crystal were used. This paper shows the results obtained from a scanning profile carried out on a distillation column.

  10. Calculation code PULCO for Purex process in pulsed column

    International Nuclear Information System (INIS)

    Gonda, Kozo; Matsuda, Teruo

    1982-03-01

    The calculation code PULCO, which can simulate the Purex process using a pulsed column as an extractor, has been developed. PULCO is based on the fundamental concept that mass transfer within a pulsed column occurs through the interface between liquid drops and the continuous-phase fluid. Unlike conventional codes, it explicitly reflects and correctly simulates the phenomena actually occurring in a pulsed column, such as the generation of liquid drops, their rising and falling, and their coalescence. PULCO incorporates measured values of the fundamental quantities representing the extraction behaviour of liquid drops in a pulsed column: the mass transfer coefficient of each component, the diameter and velocity of the liquid drops, the holdup of the dispersed phase, and the axial turbulent diffusion coefficient. The calculated results were verified against a pulsed column of 50 mm inside diameter and 2 m length with 40 plate stages, installed in a glove box, for an unirradiated uranium-plutonium mixed system. The calculations and tests were in good agreement, and the validity of PULCO was confirmed. (Kako, I.)

  11. The instability analysis of the cryogenic distillation column condenser

    International Nuclear Information System (INIS)

    David, Claudia; Stefanescu, Ioan; Vasut, Felicia; Preda, Anisoara; Ghitulescu, Alina

    2008-01-01

    Full text: The column is the main part of the process in a distillation plant for hydrogen isotopes. Variable parameters such as vaporizer power or liquid-hydrogen level fluctuations can induce non-steady states that degrade the performance of the process. In this paper the liquid-hydrogen level fluctuation in the cryogenic column condenser is considered; this fluctuation determines the variation of the gas holdup at the top of the column. A column equipped with ordered packing, with NT theoretical plates and height H, fed with a deuterium-tritium mixture, was considered. A mathematical model was developed based on the balance equations in the column, on every plate and in the condenser. The level fluctuation in the condenser was modelled as a sinusoidal function. The program developed was applied to several cases with input variables such as the initial tritium concentration in the mixture and the amplitude and period of the sinusoidal function. The height of the transfer unit was calculated as the ratio of the column height to the Fenske number, and the time to reach steady state was determined. The results were presented in a master table and in diagrams, and were analyzed. (authors)

  12. Demonstration of motionless Knudsen pump based micro-gas chromatography featuring micro-fabricated columns and on-column detectors.

    Science.gov (United States)

    Liu, Jing; Gupta, Naveen K; Wise, Kensall D; Gianchandani, Yogesh B; Fan, Xudong

    2011-10-21

    This paper reports the investigation of a micro-gas chromatography (μGC) system that utilizes an array of miniaturized motionless Knudsen pumps (KPs) as well as microfabricated separation columns and optical detectors. A prototype system was built to achieve a flow rate of 1 mL min(-1) and 0.26 mL min(-1) for helium and dry air, respectively, when they were used as carrier gas. This system was then employed to evaluate GC performance compromises and demonstrate the ability to separate and detect gas mixtures containing analytes of different volatilities and polarities. Furthermore, the use of pressure programming of the KP array was demonstrated to significantly shorten the analysis time while maintaining a high detection resolution. Using this method, we obtained a high resolution detection of 5 alkanes of different volatilities within 5 min. Finally, we successfully detected gas mixtures of various polarities using a tandem-column μGC configuration by installing two on-column optical detectors to obtain complementary chromatograms.

  13. Impact of Holes on the Buckling of RHS Steel Column

    Directory of Open Access Journals (Sweden)

    Najla'a H. AL-Shareef

    2018-03-01

    Full Text Available This study presents an experimental and theoretical investigation of the effect of holes on the behaviour of rectangular hollow steel columns subjected to axial compression load. Specimens were tested to investigate the ultimate capacity and the load-axial displacement behaviour of the columns. Finite element analysis using the general-purpose code ANSYS 12.0 was performed to investigate the behaviour of rectangular hollow steel columns with holes. In the experimental work, rectangular hollow steel columns with rounded corners were used in the construction of the specimens, with cross-section dimensions of (50*80 mm), heights of (250 and 500 mm), thicknesses of (1.25, 4 and 6 mm), and holes of (α*80*80 mm), where α is 0.2, 0.4, 0.6 or 0.8. Twenty-four columns were tested under compression load in order to investigate the effect of the hole on the ultimate load of the rectangular hollow steel column. The experimental results indicated that the typical failure mode for all the tested hollow specimens was local buckling. The test results indicated that increasing the hole dimension reduces the ultimate load of the tested columns by up to 75%. The results also show a load reduction of 94.7% due to decreasing the column thickness while the hole size is held constant at (0.2*80*80). The buckling load decreases by 84.62% when the hole position changes from Lo=0.25L to 0.75L. Holes can be made in the middle of the column with dimensions up to 0.4 of the column's length. AISC (2005) gives the values closest to the experimental results for the nominal yielding compressive strength. Increasing the slenderness ratio and the thickness-to-area ratio (t/A) decreases the critical stresses; the failure of columns with a large hole and a (t/A) ratio below 0.74% was due to local buckling, while global buckling failure was observed for columns with a small hole and a (t/A) ratio above 0.74%. The comparison between the experimental
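
    The slenderness trend reported above rests on the classical elastic relations. As a hedged aside (this is the textbook Euler formula, not the paper's finite-element model or its hole-dependent results), the global flexural buckling load and the slenderness ratio can be sketched as:

    ```python
    import math

    def euler_critical_load(E, I, L, K=1.0):
        """Classical Euler elastic buckling load P_cr = pi^2 * E * I / (K*L)^2.

        E -- elastic modulus, I -- second moment of area,
        L -- column length, K -- effective length factor (1.0 = pinned-pinned).
        """
        return math.pi ** 2 * E * I / (K * L) ** 2

    def slenderness_ratio(K, L, r):
        """Slenderness K*L/r, with r the radius of gyration sqrt(I/A)."""
        return K * L / r
    ```

    The formula captures the qualitative observation that critical stress falls as slenderness grows; local buckling around a hole, the dominant failure mode in the tests, requires the kind of plate-buckling or finite-element treatment the paper actually uses.
    
    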

  14. BitPAl: a bit-parallel, general integer-scoring sequence alignment algorithm.

    Science.gov (United States)

    Loving, Joshua; Hernandez, Yozen; Benson, Gary

    2014-11-15

    Mapping of high-throughput sequencing data and other bulk sequence comparison applications have motivated a search for high-efficiency sequence alignment algorithms. The bit-parallel approach represents individual cells in an alignment scoring matrix as bits in computer words and emulates the calculation of scores by a series of logic operations composed of AND, OR, XOR, complement, shift and addition. Bit-parallelism has been successfully applied to the longest common subsequence (LCS) and edit-distance problems, producing fast algorithms in practice. We have developed BitPAl, a bit-parallel algorithm for general, integer-scoring global alignment. Integer-scoring schemes assign integer weights for match, mismatch and insertion/deletion. The BitPAl method uses structural properties in the relationship between adjacent scores in the scoring matrix to construct classes of efficient algorithms, each designed for a particular set of weights. In timed tests, we show that BitPAl runs 7-25 times faster than a standard iterative algorithm. Source code is freely available for download at http://lobstah.bu.edu/BitPAl/BitPAl.html. BitPAl is implemented in C and runs on all major operating systems. jloving@bu.edu or yhernand@bu.edu or gbenson@bu.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
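
    The bit-parallel principle the abstract describes is easiest to see in the LCS case it mentions. The sketch below is the classic Allison-Dix / Hyyrö bit-vector LCS-length algorithm, not BitPAl's general integer-scoring scheme: one machine word encodes a whole row of the dynamic-programming matrix, and each text character is processed with a handful of word-wide logic and addition operations instead of a loop over cells.

    ```python
    def llcs_bitparallel(a, b):
        """Length of the longest common subsequence of strings a and b."""
        m = len(a)
        if m == 0:
            return 0
        mask = (1 << m) - 1
        # Match masks: bit i of pm[c] is set iff a[i] == c.
        pm = {}
        for i, c in enumerate(a):
            pm[c] = pm.get(c, 0) | (1 << i)
        v = mask                      # all ones: no LCS increments seen yet
        for c in b:
            u = v & pm.get(c, 0)      # positions where a match can extend the LCS
            v = ((v + u) | (v - u)) & mask   # one word-parallel row update
        # Zero bits in v mark columns where the LCS length increased.
        return m - bin(v).count("1")
    ```

    For patterns up to the word size, the whole row update costs O(1) word operations per text character; BitPAl generalizes this style of update from the 0/1 LCS increments to arbitrary integer match/mismatch/indel weights.
    
    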

  15. Ultimate strength and ductility of steel reinforced concrete beam-columns

    International Nuclear Information System (INIS)

    Shohara, Ryoichi

    1991-01-01

    The ultimate strength and ductility of SRC beam-columns are investigated using data gathered by the Architectural Institute of Japan (AIJ). Although the simple superposed strength formula in the AIJ standard underestimates the strength of SRC beam-columns failing in flexure, the generalized superposed strength formula estimates it satisfactorily. The strength formula in the AIJ standard does not agree well with the test data. SRC beam-columns failing in shear have almost equal ductility to those failing in flexure, owing to the encased steel. The author presents formulas that estimate the ultimate deformation angle of SRC beam-columns. (author)

  16. One column method to prepare 11C-labelled methyl iodide

    International Nuclear Information System (INIS)

    Kovacs, Z.; Priboczki, E.

    1999-01-01

    A new method in which [11C]methyl iodide is prepared on a single alumina column is presented. A high-specific-surface alumina column, previously impregnated with lithium aluminium hydride solution, was used to trap the activity directly from the target gas and reduce it to a complex. The complex was then reacted on the same column with HI to form [11C]methyl iodide. The use of a single alumina column, instead of a freezing trap, reaction vessel and separate iodination unit, simplifies the apparatus, shortens the synthesis time and is well suited to automation. (K.A.)

  17. Evaluation of Controller Tuning Methods Applied to Distillation Column Control

    DEFF Research Database (Denmark)

    Nielsen, Kim; W. Andersen, Henrik; Kümmel, Professor Mogens

    A frequency domain approach is used to compare the nominal performance and robustness of dual-composition distillation column control tuned according to Ziegler-Nichols (ZN) and Biggest Log Modulus Tuning (BLT) for three binary distillation columns, WOBE, LUVI and TOFA. The scope of this is to ex...

  18. Tile-based parallel coordinates and its application in financial visualization

    Science.gov (United States)

    Alsakran, Jamal; Zhao, Ye; Zhao, Xinlei

    2010-01-01

    The parallel coordinates technique has been widely used in information visualization applications, and it has achieved great success in visualizing multivariate data and perceiving their trends. Nevertheless, visual clutter usually weakens or even diminishes its usefulness as the data size increases. In this paper, we first propose tile-based parallel coordinates, where the plotting area is divided into rectangular tiles. Each tile stores an intersection density that counts the total number of polylines intersecting with that tile. The intersection density is then mapped to optical attributes, such as color and opacity, by interactive transfer functions. The method visualizes the polylines efficiently and informatively in accordance with the density distribution, and thus reduces visual clutter and promotes knowledge discovery. The interactivity of our method allows the user to instantaneously manipulate the tile distribution and the transfer functions. Notably, classic parallel coordinates rendering is a special case of our method in which each tile is a single pixel. A case study on a real-world data set, U.S. stock mutual fund data from 2006, is presented to show the capability of our method in visually analyzing financial data; the analysis was conducted by an expert in the domain of finance. Our method has gained support from professionals in the finance field, who embrace it as a potential investment analysis tool for mutual fund managers, financial planners, and investors.
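
    The tile-density construction described above can be sketched compactly. This is a hypothetical illustration, not the authors' implementation: the sampling-based rasterization of each polyline segment, the dictionary-based sparse tile map, and the linear transfer function are all assumptions made for the sketch.

    ```python
    # Each record is a polyline across the axes; between every pair of
    # adjacent axes its segment increments the tiles it crosses, and tile
    # counts are later mapped to opacity by a transfer function.

    def tile_densities(records, tiles_x=8, tiles_y=8, samples=32):
        """records: equal-length tuples of values already scaled to [0, 1].

        Returns one {(tx, ty): count} density map per adjacent-axis pair.
        """
        n_axes = len(records[0])
        maps = [{} for _ in range(n_axes - 1)]
        for rec in records:
            for a in range(n_axes - 1):
                y0, y1 = rec[a], rec[a + 1]
                seen = set()
                for s in range(samples + 1):       # sample along the segment
                    t = s / samples
                    tx = min(int(t * tiles_x), tiles_x - 1)
                    ty = min(int((y0 + t * (y1 - y0)) * tiles_y), tiles_y - 1)
                    seen.add((tx, ty))
                for tile in seen:                  # each polyline counted once
                    maps[a][tile] = maps[a].get(tile, 0) + 1
        return maps

    def opacity(count, max_count):
        """A simple linear transfer function; real systems expose this interactively."""
        return count / max_count if max_count else 0.0
    ```

    With one pixel per tile and an identity transfer function, this degenerates to ordinary over-plotted parallel coordinates, which is the special-case relationship noted in the abstract.
    
    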

  19. Characterization of the neutron flux in the Hohlraum of the thermal column of the TRIGA Mark III reactor of the ININ; Caracterizacion del flujo neutronico en el Hohlraum de la columna termica del reactor TRIGA Mark III del ININ

    Energy Technology Data Exchange (ETDEWEB)

    Delfin L, A.; Palacios, J.C.; Alonso, G. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)]. e-mail: adl@nuclear.inin.mx

    2006-07-01

    Knowing the magnitude of the neutron flux in a reactor's irradiation facilities is of great importance both for the operation of the reactor and for the research carried out with it. In particular, knowing the spectrum and the neutron flux in the different irradiation positions of a reactor with some precision is essential for evaluating the results of a given irradiation experiment. The TRIGA Mark III reactor has irradiation facilities designed for experimentation in which the reactor is used as an intense source of neutrons and gamma radiation, allowing samples or equipment to be irradiated in radiation fields of diverse composition and intensity. One of these irradiation facilities is the Thermal Column, which houses the Hohlraum. In this work the neutron flux inside the Hohlraum of the Thermal Column irradiation facility of the TRIGA Mark III reactor of the Nuclear Centre of Mexico was characterized at 1 MW of power. The subcadmium and epicadmium neutron fluxes were determined by means of the neutron activation technique using thin gold foils. Maps of the neutron flux distribution for both energy groups at three different positions inside the Hohlraum are presented. These maps were obtained by irradiating bare and cadmium-covered thin gold activation foils in 10 x 12 arrays, placed parallel to the internal graphite wall of the facility at 11.5 cm, 40.5 cm and 70.5 cm from it, in the direction away from the reactor core. From the flux values obtained it was found that, over the same irradiation surface of the experimental arrangement, the relative differences between neutron flux values can reach 80%, and that the differences between different positions of the irradiation surfaces can vary by up to one order of magnitude. (Author)

  20. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race