WorldWideScience

Sample records for multiple parallel chains

  1. Keldysh formalism for multiple parallel worlds

    International Nuclear Information System (INIS)

    Ansari, M.; Nazarov, Y. V.

    2016-01-01

    We present a compact and self-contained review of the recently developed Keldysh formalism for multiple parallel worlds. The formalism has been applied to consistent quantum evaluation of the flows of informational quantities, in particular, to the evaluation of Renyi and Shannon entropy flows. We start with the formulation of the standard and extended Keldysh techniques in a single world in a form convenient for our presentation. We explain the use of Keldysh contours encompassing multiple parallel worlds. In the end, we briefly summarize the concrete results obtained with the method.

  2. Keldysh formalism for multiple parallel worlds

    Science.gov (United States)

    Ansari, M.; Nazarov, Y. V.

    2016-03-01

    We present a compact and self-contained review of the recently developed Keldysh formalism for multiple parallel worlds. The formalism has been applied to consistent quantum evaluation of the flows of informational quantities, in particular, to the evaluation of Renyi and Shannon entropy flows. We start with the formulation of the standard and extended Keldysh techniques in a single world in a form convenient for our presentation. We explain the use of Keldysh contours encompassing multiple parallel worlds. In the end, we briefly summarize the concrete results obtained with the method.

  3. Keldysh formalism for multiple parallel worlds

    Energy Technology Data Exchange (ETDEWEB)

Ansari, M.; Nazarov, Y. V., E-mail: y.v.nazarov@tudelft.nl [Delft University of Technology, Kavli Institute of Nanoscience (Netherlands)]

    2016-03-15

    We present a compact and self-contained review of the recently developed Keldysh formalism for multiple parallel worlds. The formalism has been applied to consistent quantum evaluation of the flows of informational quantities, in particular, to the evaluation of Renyi and Shannon entropy flows. We start with the formulation of the standard and extended Keldysh techniques in a single world in a form convenient for our presentation. We explain the use of Keldysh contours encompassing multiple parallel worlds. In the end, we briefly summarize the concrete results obtained with the method.

  4. Parallel magnetotransport in multiple quantum well structures

    International Nuclear Information System (INIS)

    Sheregii, E.M.; Ploch, D.; Marchewka, M.; Tomaka, G.; Kolek, A.; Stadler, A.; Mleczko, K.; Strupinski, W.; Jasik, A.; Jakiela, R.

    2004-01-01

The results of investigations of parallel magnetotransport in AlGaAs/GaAs and InGaAs/InAlAs/InP multiple quantum well structures (MQWs) are presented in this paper. The MQWs were obtained by metalorganic vapour phase epitaxy with different QW shapes, numbers of QWs and doping levels. The magnetotransport measurements were performed over a wide temperature range (0.5-300 K) and at high magnetic fields up to 30 T (B perpendicular, and current parallel, to the plane of the QWs). Three types of observed effects are analyzed: the quantum Hall effect and Shubnikov-de Haas oscillations at low temperatures (0.5-6 K), as well as magnetophonon resonance at higher temperatures (77-300 K).

  5. A Parallel Solver for Large-Scale Markov Chains

    Czech Academy of Sciences Publication Activity Database

    Benzi, M.; Tůma, Miroslav

    2002-01-01

    Roč. 41, - (2002), s. 135-153 ISSN 0168-9274 R&D Projects: GA AV ČR IAA2030801; GA ČR GA101/00/1035 Keywords : parallel preconditioning * iterative methods * discrete Markov chains * generalized inverses * singular matrices * graph partitioning * AINV * Bi-CGSTAB Subject RIV: BA - General Mathematics Impact factor: 0.504, year: 2002

  6. Conceptual design of multiple parallel switching controller

    International Nuclear Information System (INIS)

    Ugolini, D.; Yoshikawa, S.; Ozawa, K.

    1996-01-01

This paper discusses the conceptual design and the development of a preliminary model of a multiple parallel switching (MPS) controller. The introduction of several advanced controllers has widened and improved the control capability of nonlinear dynamical systems. However, it is not possible to uniquely define a controller that always outperforms the others, and, in many situations, the controller providing the best control action depends on the operating conditions and on the intrinsic properties and behavior of the controlled dynamical system. The desire to combine the control actions of several controllers so as to continuously attain the best control action has motivated the development of the MPS controller. The MPS controller consists of a number of single controllers acting in parallel and of an artificial intelligence (AI) based selecting mechanism. The AI selecting mechanism analyzes the output of each controller and implements the one providing the best control performance. An inherent property of the MPS controller is the possibility of discarding unreliable controllers while still being able to perform the control action. To demonstrate the feasibility and the capability of the MPS controller, the simulation of the on-line operation control of a fast breeder reactor (FBR) evaporator is presented. (author)

  7. Parallel algorithms for simulating continuous time Markov chains

    Science.gov (United States)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
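
    The uniformization technique the abstract builds on is easy to sketch in serial form: replace the continuous-time chain with a discrete-time chain driven by a Poisson clock of uniform rate. Below is a minimal single-processor illustration; the two-state generator and all names are invented for the example, and the paper's parallel synchronization machinery is entirely omitted.

```python
import math, random

def uniformize(Q, Lam=None):
    """Build the DTMC transition matrix P = I + Q/Lam from generator Q.
    Lam must dominate the fastest exit rate max_i |Q[i][i]|."""
    n = len(Q)
    if Lam is None:
        Lam = max(-Q[i][i] for i in range(n))
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / Lam for j in range(n)]
         for i in range(n)]
    return P, Lam

def simulate(Q, state, t_end, rng=random.Random(1)):
    """Simulate a CTMC path on [0, t_end) by uniformization: potential
    transitions arrive as a Poisson process of rate Lam, and each event
    moves the chain according to P (self-loops allowed)."""
    P, Lam = uniformize(Q)
    t, path = 0.0, [(0.0, state)]
    while True:
        t += rng.expovariate(Lam)          # next Poisson event time
        if t >= t_end:
            return path
        u, acc = rng.random(), 0.0
        for j, p in enumerate(P[state]):   # sample next state from row P[state]
            acc += p
            if u <= acc:
                state = j
                break
        path.append((t, state))

# Toy two-state chain: 0 -> 1 at rate 2, 1 -> 0 at rate 1.
Q = [[-2.0, 2.0], [1.0, -1.0]]
path = simulate(Q, 0, 10.0)
```

    Because the Poisson clock has a fixed rate, event times can be pre-generated independently of the state trajectory, which is what makes uniformization attractive as a synchronization basis for parallel simulation.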

  8. Multiple impacts in dissipative granular chains

    CERN Document Server

    Nguyen, Ngoc Son

    2014-01-01

    The extension of collision models for single impacts between two bodies, to the case of multiple impacts (which take place when several collisions occur at the same time in a multibody system) is a challenge in Solid Mechanics, due to the complexity of such phenomena, even in the frictionless case. This monograph aims at presenting the main multiple collision rules proposed in the literature. Such collisions typically occur in granular materials, the simplest of which are made of chains of aligned balls. These chains are used throughout the book to analyze various multiple impact rules which extend the classical Newton (kinematic restitution), Poisson (kinetic restitution) and Darboux-Keller (energetic or kinetic restitution) approaches for impact modelling. The shock dynamics in various types of chains of aligned balls (monodisperse, tapered, decorated, stepped chains) is carefully studied and shown to depend on several parameters: restitution coefficients, contact stiffness ratios, elasticity coefficients (...

  9. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Parallelization is therefore needed to speed up a calculation that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the GPU (graphics processing unit).

  10. SWAMP+: multiple subsequence alignment using associative massive parallelism

    Energy Technology Data Exchange (ETDEWEB)

Steinfadt, Shannon Irene [Los Alamos National Laboratory]; Baker, Johnnie W [Kent State Univ.]

    2010-10-18

    A new parallel algorithm SWAMP+ incorporates the Smith-Waterman sequence alignment on an associative parallel model known as ASC. It is a highly sensitive parallel approach that expands traditional pairwise sequence alignment. This is the first parallel algorithm to provide multiple non-overlapping, non-intersecting subsequence alignments with the accuracy of Smith-Waterman. The efficient algorithm provides multiple alignments similar to BLAST while creating a better workflow for the end users. The parallel portions of the code run in O(m+n) time using m processors. When m = n, the algorithmic analysis becomes O(n) with a coefficient of two, yielding a linear speedup. Implementation of the algorithm on the SIMD ClearSpeed CSX620 confirms this theoretical linear speedup with real timings.
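
    The Smith-Waterman recurrence that SWAMP+ builds on can be stated compactly in serial form. The sketch below is a plain reference version only: the scoring parameters are illustrative, and the ASC associative parallelism (which evaluates anti-diagonal cells of the score matrix concurrently to reach O(m+n) time) is omitted.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Serial Smith-Waterman: fill the local-alignment score matrix H
    (clamped at 0) and return the best local alignment score."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Cells along an anti-diagonal depend only on earlier
            # anti-diagonals, which is what parallel versions exploit.
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

# Two identical length-4 sequences score 4 matches x 2:
score = smith_waterman("ACGT", "ACGT")  # -> 8
```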

  11. Honest Importance Sampling with Multiple Markov Chains.

    Science.gov (United States)

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π 1 , is used to estimate an expectation with respect to another, π . The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π 1 is replaced by a Harris ergodic Markov chain with invariant density π 1 , then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π 1 , …, π k , are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable
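
    The classical iid estimator that the abstract starts from is just an average of weighted draws. A hedged sketch follows; the target/proposal pair is invented for illustration, and the paper's regenerative, Markov-chain version, with its more delicate standard-error estimation, is not shown.

```python
import math, random

def importance_sampling(h, target_pdf, proposal_pdf, draw, n, rng):
    """Classical iid importance sampling estimate of E_pi[h(X)] using
    draws from pi_1: average of h(x) * pi(x) / pi_1(x)."""
    total = 0.0
    for _ in range(n):
        x = draw(rng)
        total += h(x) * target_pdf(x) / proposal_pdf(x)
    return total / n

# Estimate E[X^2] under N(0,1) using draws from the wider N(0, 2^2).
rng = random.Random(0)
std_normal = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
wide = lambda x: math.exp(-x * x / 8) / (2 * math.sqrt(2 * math.pi))
est = importance_sampling(lambda x: x * x, std_normal, wide,
                          lambda r: r.gauss(0.0, 2.0), 100_000, rng)
# est is close to the true value E[X^2] = 1 under N(0,1)
```

    Here the weight pi(x)/pi_1(x) is bounded, so the two moment conditions the abstract mentions hold and the usual CLT-based standard error is available; in the MCMC setting those conditions are no longer sufficient, which is the problem the paper addresses.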

  12. Parallelized event chain algorithm for dense hard sphere and polymer systems

    International Nuclear Information System (INIS)

    Kampmann, Tobias A.; Boltz, Horst-Holger; Kierfeld, Jan

    2015-01-01

We combine parallelization and cluster Monte Carlo for hard sphere systems and present a parallelized event chain algorithm for the hard disk system in two dimensions. For parallelization we use a spatial partitioning approach into simulation cells. We find that it is crucial for correctness to ensure detailed balance on the level of Monte Carlo sweeps by drawing the starting sphere of event chains within each simulation cell with replacement. We analyze the performance gains for the parallelized event chain and find a criterion for an optimal degree of parallelization. Because of the cluster nature of event chain moves, massive parallelization will not be optimal. Finally, we discuss first applications of the event chain algorithm to dense polymer systems, i.e., bundle-forming solutions of attractive semiflexible polymers.

  13. An efficient parallel algorithm for matrix-vector multiplication

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, B.; Leland, R.; Plimpton, S.

    1993-03-01

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/[radical]p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
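
    The row-wise data decomposition underlying such algorithms can be shown in a few lines. This is a toy sketch with simulated "processors" (contiguous row blocks computed independently and concatenated); the hypercube communication scheme that achieves the stated O(n/√p + log p) cost is deliberately omitted, and all names are invented.

```python
def matvec_rowblocks(A, x, p):
    """Row-block parallel matrix-vector product sketch: split A's rows
    into p contiguous blocks, compute each block's partial product
    independently (one block per 'processor'), then concatenate."""
    n = len(A)
    bounds = [n * k // p for k in range(p + 1)]   # block boundaries

    def block(lo, hi):                            # work of one processor
        return [sum(a_ij * x_j for a_ij, x_j in zip(row, x))
                for row in A[lo:hi]]

    y = []
    for k in range(p):                            # in a real code: in parallel
        y.extend(block(bounds[k], bounds[k + 1]))
    return y

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
x = [1, 1]
y = matvec_rowblocks(A, x, 2)   # -> [3, 7, 11, 15]
```

    A real distributed-memory implementation would also partition x across processors and gather the pieces of y, which is where the communication cost arises.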

  14. Multiple Independent File Parallel I/O with HDF5

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M. C.

    2016-07-13

The HDF5 library has supported the I/O requirements of HPC codes at Lawrence Livermore National Labs (LLNL) since the late 90's. In particular, HDF5 used in the Multiple Independent File (MIF) parallel I/O paradigm has supported LLNL codes' scalable I/O requirements and has recently been gainfully used at scales as large as O(10^6) parallel tasks.

  15. Efficient multitasking: parallel versus serial processing of multiple tasks.

    Science.gov (United States)

    Fischer, Rico; Plessow, Franziska

    2015-01-01

A central debate in multitasking research concerns whether cognitive processes related to different tasks proceed only sequentially (one at a time) or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their use in future research. Parallel and serial processing of multiple tasks are not mutually exclusive; therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.

  16. Dimer coverings on random multiple chains of planar honeycomb lattices

    International Nuclear Information System (INIS)

    Ren, Haizhen; Zhang, Fuji; Qian, Jianguo

    2012-01-01

We study dimer coverings on random multiple chains. A multiple chain is a planar honeycomb lattice constructed by successively fusing copies of a ‘straight’ condensed hexagonal chain at the bottom of the previous one in two possible ways. A random multiple chain is then generated by admitting the Bernoulli distribution on the two types of fusing, which describes a zeroth-order Markov process. We determine the expectation of the number of the pure dimer coverings (perfect matchings) over the ensemble of random multiple chains by the transfer matrix approach. Our result shows that, with only two exceptions, the average of the logarithm of this expectation (i.e., the annealed entropy per dimer) is asymptotically nonzero when the fusing process goes to infinity and the length of the hexagonal chain is fixed, though it is zero when the fusing process and the length of the hexagonal chain go to infinity simultaneously. Some numerical results are provided to support our conclusion, from which we can see that the asymptotic behavior fits the theoretical results well. We also apply the transfer matrix approach to the quenched entropy and reveal that the quenched entropy of random multiple chains has a close connection with the well-known Lyapunov exponent of random matrices. Using the theory of Lyapunov exponents we show that, for some random multiple chains, the quenched entropy per dimer is strictly smaller than the annealed one when the fusing process goes to infinity. Finally, we determine the expectation of the free energy per dimer over the ensemble of the random multiple chains in which the three types of dimers in different orientations are distinguished, and specify a series of non-random multiple chains whose free energy per dimer is asymptotically equal to this expectation. (paper)

  17. A tactical supply chain planning model with multiple flexibility options

    DEFF Research Database (Denmark)

    Esmaeilikia, Masoud; Fahimnia, Behnam; Sarkis, Joeseph

    2016-01-01

Supply chain flexibility is widely recognized as an approach to manage uncertainty. Uncertainty in the supply chain may arise from a number of sources, such as demand and supply interruptions and lead time variability. A tactical supply chain planning model with multiple flexibility options ... incorporated in sourcing, manufacturing and logistics functions can be used for the analysis of flexibility adjustment in an existing supply chain. This paper develops such a tactical supply chain planning model incorporating a realistic range of flexibility options. A novel solution method is designed ...

  18. A scalable parallel algorithm for multiple objective linear programs

    Science.gov (United States)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLPs are also included.

  19. Parallel computer calculation of quantum spin lattices; Calcul de chaines de spins quantiques sur ordinateur parallele

    Energy Technology Data Exchange (ETDEWEB)

Lamarcq, J. [Service de Physique Theorique, CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France)]

    1998-07-10

Numerical simulation allows theorists to convince themselves of the validity of the models they use. In particular, by simulating spin lattices one can judge the validity of a conjecture. Simulating a system defined by a large number of degrees of freedom requires highly sophisticated machines. This study deals with modelling the magnetic interactions between the ions of a crystal. Many exact results have been found for spin-1/2 systems, but not for systems of other spins, for which many simulations have been carried out. Interest in simulations has been renewed by Haldane's conjecture, which stipulates the existence of an energy gap between the ground state and the first excited states of a spin-1 lattice. The existence of this gap has been demonstrated experimentally. This report contains the following four chapters: 1. Spin systems; 2. Calculation of eigenvalues; 3. Programming; 4. Parallel calculation. 14 refs., 6 figs.

  20. Markov chain solution of photon multiple scattering through turbid slabs.

    Science.gov (United States)

    Lin, Ying; Northrop, William F; Li, Xuesong

    2016-11-14

This work introduces a Markov chain solution to model photon multiple scattering through turbid slabs via an anisotropic scattering process, i.e., Mie scattering. Results show that the proposed Markov chain model agrees with commonly used Monte Carlo simulation for various media, such as media with non-uniform phase functions and absorbing media. The proposed Markov chain solution method successfully converts the complex multiple scattering problem with practical phase functions into a matrix form and solves transmitted/reflected photon angular distributions by matrix multiplications. Such characteristics would potentially allow practical inversions by matrix manipulation or stochastic algorithms where widely applied stochastic methods such as Monte Carlo simulations usually fail, and thus enable practical diagnostic reconstructions in fields such as medical diagnosis, spray analysis, and atmospheric science.
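
    The matrix-multiplication viewpoint can be illustrated with a drastically simplified toy model (all names invented, isotropic scattering instead of a Mie phase function): treat depth bins as Markov states with absorbing "exit" states at both faces, and propagate the photon distribution vector by repeated products with the transition matrix.

```python
def slab_transmission(layers, p_forward, steps=2000):
    """Markov-chain sketch of photon transport through a slab: states
    0..layers+1 are depth bins, with state 0 (reflected out) and state
    layers+1 (transmitted out) absorbing. The photon distribution is
    propagated by repeated vector-matrix products."""
    n = layers + 2
    P = [[0.0] * n for _ in range(n)]
    P[0][0] = P[n - 1][n - 1] = 1.0          # absorbing exit states
    for i in range(1, n - 1):
        P[i][i + 1] = p_forward              # scatter deeper into slab
        P[i][i - 1] = 1.0 - p_forward        # scatter back toward entry
    dist = [0.0] * n
    dist[1] = 1.0                            # photon enters first layer
    for _ in range(steps):                   # dist <- dist @ P
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist[0], dist[n - 1]              # (reflected, transmitted)

refl, trans = slab_transmission(3, 0.5)
# For a fair walk this converges to the gambler's-ruin values
# refl -> 3/4, trans -> 1/4.
```

    The paper's method generalizes this idea to angular states with practical (anisotropic) phase functions, so reflected and transmitted angular distributions come out of the same matrix machinery.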

  1. Parallel multiple instance learning for extremely large histopathology image analysis.

    Science.gov (United States)

    Xu, Yan; Li, Yeshu; Shen, Zhengyang; Wu, Ziwei; Gao, Teng; Fan, Yubo; Lai, Maode; Chang, Eric I-Chao

    2017-08-03

Histopathology images are critical for medical diagnosis, e.g., of cancer and its treatment. A standard histopathology slice can easily be scanned at a high resolution of, say, 200,000×200,000 pixels. These high-resolution images make most existing image processing tools infeasible or less effective when operated on a single machine with limited memory, disk space and computing power. In this paper, we propose an algorithm tackling this emerging "big data" problem utilizing parallel computing on High-Performance-Computing (HPC) clusters. Experimental results on a large-scale data set (1318 images at a scale of 10 billion pixels each) demonstrate the efficiency and effectiveness of the proposed algorithm for low-latency real-time applications. The proposed framework provides an effective and efficient system for extremely large histopathology image analysis. It is based on the multiple instance learning formulation for weakly supervised learning for image classification, segmentation and clustering. When a max-margin concept is adopted for different clusters, we obtain further improvement in clustering performance.

  2. Partitioning of electron flux between the respiratory chains of the yeast Candida parapsilosis: parallel working of the two chains.

    Science.gov (United States)

    Guerin, M G; Camougrand, N M

    1994-02-08

Partitioning of the electron flux between the classical and the alternative respiratory chains of the yeast Candida parapsilosis was measured as a function of the oxidation rate and of the Q-pool redox poise. At a low respiration rate, electrons from external NADH travelled preferentially through the alternative pathway, as indicated by the antimycin A-insensitivity of electron flow. Inhibition of the alternative pathway by SHAM restored full antimycin A-sensitivity to the remaining electron flow. The dependence of the respiratory rate on the redox poise of the quinone pool was investigated when the electron flux was mediated either by the main respiratory chain (growth in the absence of antimycin A) or by the second respiratory chain (growth in the presence of antimycin A). In the former case, a linear relationship was found between these two parameters. In contrast, in the latter case, the relationship between Q-pool reduction level and electron flux was non-linear, but it could be resolved into two distinct curves. The second quinone is not reducible in the presence of antimycin A, but only in the presence of high concentrations of myxothiazol or cyanide. Since two quinone species exist in C. parapsilosis, UQ9 and Qx (C33H54O4), we hypothesized that these two curves could correspond to the functioning of the second quinone engaged during alternative pathway activity. Partitioning of electrons between both respiratory chains could occur upstream of complex III, with the second chain functioning in parallel to the main one, and with the additional possibility of merging into the main one at the complex IV level.

  3. Parallel Beam-Beam Simulation Incorporating Multiple Bunches and Multiple Interaction Regions

    CERN Document Server

    Jones, F W; Pieloni, T

    2007-01-01

    The simulation code COMBI has been developed to enable the study of coherent beam-beam effects in the full collision scenario of the LHC, with multiple bunches interacting at multiple crossing points over many turns. The program structure and input are conceived in a general way which allows arbitrary numbers and placements of bunches and interaction points (IP's), together with procedural options for head-on and parasitic collisions (in the strong-strong sense), beam transport, statistics gathering, harmonic analysis, and periodic output of simulation data. The scale of this problem, once we go beyond the simplest case of a pair of bunches interacting once per turn, quickly escalates into the parallel computing arena, and herein we will describe the construction of an MPI-based version of COMBI able to utilize arbitrary numbers of processors to support efficient calculation of multi-bunch multi-IP interactions and transport. Implementing the parallel version did not require extensive disruption of the basic ...

  4. The electronic structure of quasi-one-dimensional disordered systems with parallel multi-chains

    International Nuclear Information System (INIS)

    Liu Xiaoliang; Xu Hui; Deng Chaosheng; Ma Songshan

    2006-01-01

For quasi-one-dimensional disordered systems with parallel multi-chains, taking a special method to code the sites and considering only nearest-neighbor hopping integrals, we write the systems' Hamiltonians as precisely symmetric matrices, which can be transformed into tridiagonal symmetric matrices by the Householder transformation. The densities of states, the localization lengths and the conductance of the systems are calculated numerically using the minus-eigenvalue theory and the transfer matrix method. From the results for quasi-one-dimensional disordered systems with varying numbers of chains, we find that the energy band of the systems extends slightly, energy gaps are observed, and the distribution of the density of states changes markedly as the dimensionality increases. In particular, for systems with four, five or six chains there exist, at the energy band center, extended states whose localization lengths are greater than the size of the systems, and accordingly a large conductance. As the number of chains increases, the correlated ranges expand and the systems exhibit behavior similar to that with off-diagonal long-range correlation

  5. Parallel k-means++ for Multiple Shared-Memory Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Mackey, Patrick S.; Lewis, Robert R.

    2016-09-22

    In recent years k-means++ has become a popular initialization technique for improved k-means clustering. To date, most of the work done to improve its performance has involved parallelizing algorithms that are only approximations of k-means++. In this paper we present a parallelization of the exact k-means++ algorithm, with a proof of its correctness. We develop implementations for three distinct shared-memory architectures: multicore CPU, high performance GPU, and the massively multithreaded Cray XMT platform. We demonstrate the scalability of the algorithm on each platform. In addition we present a visual approach for showing which platform performed k-means++ the fastest for varying data sizes.
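
    The exact seeding rule being parallelized is short when written serially: the first center is uniform, and each later center is drawn with probability proportional to D(x)^2, the squared distance to the nearest center chosen so far. A plain-Python reference sketch follows (the shared-memory parallelization of the distance pass is the paper's contribution and is not shown; the sample points are invented):

```python
import random

def kmeans_pp_init(points, k, rng=random.Random(0)):
    """Serial k-means++ seeding: first center uniform at random, each
    subsequent center drawn with probability proportional to D(x)^2."""
    centers = [rng.choice(points)]
    for _ in range(k - 1):
        # Squared distance from each point to its nearest chosen center.
        d2 = [min(sum((a - b) ** 2 for a, b in zip(x, c))
                  for c in centers) for x in points]
        total = sum(d2)
        r, acc = rng.random() * total, 0.0
        for x, w in zip(points, d2):       # roulette-wheel selection
            acc += w
            if r <= acc:
                centers.append(x)
                break
    return centers

pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers = kmeans_pp_init(pts, 2)
```

    The D^2 pass over all points dominates the cost, which is why it is the natural target for the multicore, GPU, and Cray XMT implementations the abstract describes.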

  6. Carotid chemoreceptors tune breathing via multipath routing: reticular chain and loop operations supported by parallel spike train correlations.

    Science.gov (United States)

    Morris, Kendall F; Nuding, Sarah C; Segers, Lauren S; Iceman, Kimberly E; O'Connor, Russell; Dean, Jay B; Ott, Mackenzie M; Alencar, Pierina A; Shuman, Dale; Horton, Kofi-Kermit; Taylor-Clark, Thomas E; Bolser, Donald C; Lindsey, Bruce G

    2018-02-01

    We tested the hypothesis that carotid chemoreceptors tune breathing through parallel circuit paths that target distinct elements of an inspiratory neuron chain in the ventral respiratory column (VRC). Microelectrode arrays were used to monitor neuronal spike trains simultaneously in the VRC, peri-nucleus tractus solitarius (p-NTS)-medial medulla, the dorsal parafacial region of the lateral tegmental field (FTL-pF), and medullary raphe nuclei together with phrenic nerve activity during selective stimulation of carotid chemoreceptors or transient hypoxia in 19 decerebrate, neuromuscularly blocked, and artificially ventilated cats. Of 994 neurons tested, 56% had a significant change in firing rate. A total of 33,422 cell pairs were evaluated for signs of functional interaction; 63% of chemoresponsive neurons were elements of at least one pair with correlational signatures indicative of paucisynaptic relationships. We detected evidence for postinspiratory neuron inhibition of rostral VRC I-Driver (pre-Bötzinger) neurons, an interaction predicted to modulate breathing frequency, and for reciprocal excitation between chemoresponsive p-NTS neurons and more downstream VRC inspiratory neurons for control of breathing depth. Chemoresponsive pericolumnar tonic expiratory neurons, proposed to amplify inspiratory drive by disinhibition, were correlationally linked to afferent and efferent "chains" of chemoresponsive neurons extending to all monitored regions. The chains included coordinated clusters of chemoresponsive FTL-pF neurons with functional links to widespread medullary sites involved in the control of breathing. The results support long-standing concepts on brain stem network architecture and a circuit model for peripheral chemoreceptor modulation of breathing with multiple circuit loops and chains tuned by tegmental field neurons with quasi-periodic discharge patterns. NEW & NOTEWORTHY We tested the long-standing hypothesis that carotid chemoreceptors tune the

  7. Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation

    Czech Academy of Sciences Publication Activity Database

    Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.

    2009-01-01

    Roč. 18, č. 8 (2009), s. 1830-1843 ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf

  8. Performance Analysis of a Threshold-Based Parallel Multiple Beam Selection Scheme for WDM FSO Systems

    KAUST Repository

    Nam, Sung Sik; Alouini, Mohamed-Slim; Ko, Young-Chai

    2018-01-01

    In this paper, we statistically analyze the performance of a threshold-based parallel multiple beam selection scheme for a free-space optical (FSO) based system with wavelength division multiplexing (WDM) in cases where a pointing error has occurred

  9. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

© 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy and hence more parallelism in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.

  10. Prognostic Value of Serum Free Light Chain in Multiple Myeloma.

    Science.gov (United States)

    El Naggar, Amel A; El-Naggar, Mostafa; Mokhamer, El-Hassan; Avad, Mona W

    2015-01-01

The measurement of serum free light chain (sFLC) has been shown to be valuable in screening for the presence of plasma cell dyscrasia as well as for baseline prognosis in newly diagnosed patients. The aim of the present work was to study the prognostic value of sFLC in multiple myeloma (MM) in relation to other serum biomarkers, response to therapy, and survival. Forty-five newly diagnosed patients with MM were included in the study. Patients were divided into responder and non-responder groups according to response to therapy. sFLC and serum amyloid A (SAA) were measured by immunonephelometry. The non-responder group showed a statistically significantly higher kappa/lambda or lambda/kappa ratio and higher β2-microglobulin level, but a lower albumin level, at presentation compared to the responder group (P < 0.001). However, no statistically significant difference was detected between the two groups regarding SAA or calcium levels. Comparison between sFLC ratios obtained before and after therapy revealed a significant decrease after treatment in the responder group (P = 0.05). Survival was significantly inferior in patients with an FLC ratio ≥ 2.6 or ≤ 0.56 compared with those with an FLC ratio between 0.56 and 2.6 (P = 0.002).
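The survival stratification reported above reduces to a simple threshold rule on the involved/uninvolved light-chain ratio. A minimal sketch, for illustration only (the function name and inputs are assumptions, and this is not a clinical tool; only the 0.56/2.6 cut-offs come from the abstract):

```python
def flc_risk_group(kappa_mg_l: float, lambda_mg_l: float) -> str:
    """Classify by kappa/lambda ratio using the abstract's cut-offs:
    a ratio >= 2.6 or <= 0.56 was associated with inferior survival."""
    ratio = kappa_mg_l / lambda_mg_l
    return "high-risk" if ratio >= 2.6 or ratio <= 0.56 else "standard-risk"

print(flc_risk_group(30.0, 10.0))  # ratio 3.0 -> high-risk
print(flc_risk_group(12.0, 10.0))  # ratio 1.2 -> standard-risk
```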

  11. Synergy between the Multiple Supply Chain and Green Supply Chain Management (GSCM) approaches: an initial analysis aimed at fostering supply chain sustainability

    OpenAIRE

    Ana Lima de Carvalho; Livia Rodrigues Ignácio; Kleber Francisco Esposto; Aldo Roberto Ometto

    2016-01-01

The concept of Green Supply Chain Management (GSCM) was created in the 1990s to reduce the environmental impacts of productive systems. This approach seeks to improve the environmental performance of all the participants in a supply chain, from the extraction of raw materials to the use and final disposal of the product, through relationships of collaboration or conformity between the parties. The multiple supply chains approach established by Gattorna (2009) brought to light different supply c...

  12. Aperture and counting rate of rectangular telescopes for single and multiple parallel particles. [Spark chamber telescopes

    Energy Technology Data Exchange (ETDEWEB)

    D' Ettorre Piazzoli, B; Mannocchi, G [Consiglio Nazionale delle Ricerche, Turin (Italy). Lab. di Cosmo-Geofisica; Melone, S [Istituto di Fisica dell' Universita, Ancona, Italy; Picchi, P; Visentin, R [Comitato Nazionale per l' Energia Nucleare, Frascati (Italy). Laboratori Nazionali di Frascati

    1976-06-01

    Expressions for the counting rate of rectangular telescopes in the case of single as well as multiple particles are given. The aperture for single particles is obtained in the form of a double integral and analytical solutions are given for some cases. The intensity for different multiplicities of parallel particles is related to the geometry of the detectors and to the features of the radiation. This allows an absolute comparison between the data recorded by different devices.

  13. Further exploration of antimicrobial ketodihydronicotinic acid derivatives by multiple parallel syntheses

    DEFF Research Database (Denmark)

    Laursen, Jane B.; Nielsen, Janne; Haack, T.

    2006-01-01

    A synthetic reexamination of a series of ketodihydronicotinic acid class antibacterial agents was undertaken in an attempt to improve their therapeutic potential. A convenient new synthesis was developed involving hetero Diels-Alder chemistry producing 74 new analogs in a multiple parallel synthe...

  14. Modelling and simulation of multiple single - phase induction motor in parallel connection

    Directory of Open Access Journals (Sweden)

    Sujitjorn, S.

    2006-11-01

A mathematical model, in generalized state-space form, for n single-phase induction motors connected in parallel is proposed in this paper. The motor group draws electric power from one inverter. The model is developed using dq-frame theory and was tested against four loading scenarios, in which satisfactory results were obtained.

  15. Innovative supply chain optimization models with multiple uncertainty factors

    DEFF Research Database (Denmark)

    Choi, Tsan Ming; Govindan, Kannan; Li, Xiang

    2017-01-01

Uncertainty is an inherent factor that affects all dimensions of supply chain activities. In today's business environment, initiatives to deal with one specific type of uncertainty might not be effective since other types of uncertainty factors and disruptions may be present. These factors relate to supply chain competition and coordination. Thus, achieving a more efficient and effective supply chain requires the deployment of innovative optimization models and novel methods. This preface provides a concise review of critical research issues regarding innovative supply chain optimization models...

  16. Harmonic resonance assessment of multiple paralleled grid-connected inverters system

    DEFF Research Database (Denmark)

    Wang, Yanbo; Wang, Xiongfei; Blaabjerg, Frede

    2017-01-01

This paper presents an eigenvalue-based impedance stability analysis method for a multiple-paralleled grid-connected inverter system. Different from the conventional impedance-based stability criterion, this work first builds the state-space model of the paralleled grid-connected inverters. On the basis of this, a bridge between state-space-based modelling and the impedance-based stability criterion is presented. The proposed method is able to perform stability assessment locally at the connection points of the component. Meanwhile, eigenvalue-based sensitivity analysis is adopted to identify...

  17. Coherent transport in a system of periodic linear chain of quantum dots situated between two parallel quantum wires

    International Nuclear Information System (INIS)

    Petrosyan, Lyudvig S

    2016-01-01

    We study coherent transport in a system of periodic linear chain of quantum dots situated between two parallel quantum wires. We show that the resonant-tunneling conductance between the wires exhibits a Rabi splitting of the resonance peak as a function of Fermi energy in the wires. This effect is an electron transport analogue of the Rabi splitting in optical spectra of two interacting systems. The conductance peak splitting originates from the anticrossing of Bloch bands in a periodic system that is caused by a strong coupling between the electron states in the quantum dot chain and quantum wires. (paper)
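The Rabi-type splitting of the conductance peak described above follows the textbook anticrossing of two coupled levels. A minimal numerical sketch (the 2×2 Hamiltonian form, names, and numbers are illustrative assumptions, not values from the paper):

```python
import math

def anticrossing(e_dot: float, e_wire: float, g: float):
    """Eigenenergies of the 2x2 Hamiltonian [[e_dot, g], [g, e_wire]]:
    E± = mean ± sqrt(detuning² + g²). At resonance (e_dot == e_wire)
    the minimum splitting is 2g -- the Rabi splitting of the peak."""
    mean = 0.5 * (e_dot + e_wire)
    delta = 0.5 * (e_dot - e_wire)
    gap = math.hypot(delta, g)
    return mean - gap, mean + gap

lo, hi = anticrossing(1.0, 1.0, 0.2)  # on resonance
print(hi - lo)                        # splitting ~ 2g = 0.4
```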

  18. Light chain deposition disease in multiple myeloma: MR imaging features correlated with histopathological findings

    International Nuclear Information System (INIS)

    Baur, A.; Staebler, A.; Reiser, M.; Lamerz, R.; Bartl, R.

    1998-01-01

    The clinical, histopathological, and imaging findings on MRI of a 56-year-old woman with light chain deposition disease occurring in multiple myeloma are presented. Light chain deposition disease is a variant of multiple myeloma with distinct clinical and histological characteristics. MRI of this patient also revealed an infiltration pattern in the bone marrow distinct from that of typical multiple myeloma. Multiple small foci of low signal intensity were present on T1- and T2-weighted spin echo and STIR images, corresponding to conglomerates of light chains in bone marrow biopsy. Contrast-enhanced T1-weighted spin echo images show diffuse enhancement of 51% over all vertebral bodies, with a minor enhancement of the focal conglomerates of light chains. Light chain deposition disease in multiple myeloma should be added to the list of those few entities with normal radiographs and discrete low-signal marrow lesions on T1- and T2-weighted spin echo pulse sequences. (orig.)

  19. Hierarchical Parallel Matrix Multiplication on Large-Scale Distributed Memory Platforms

    KAUST Repository

    Quintin, Jean-Noel

    2013-10-01

Matrix multiplication is a very important computation kernel both in its own right as a building block of many scientific applications and as a popular representative for other scientific applications. Cannon's algorithm, which dates back to 1969, was the first efficient algorithm for parallel matrix multiplication providing theoretically optimal communication cost. However, this algorithm requires a square number of processors. In the mid-1990s, the SUMMA algorithm was introduced. SUMMA overcomes the shortcomings of Cannon's algorithm as it can be used on a nonsquare number of processors as well. Since then the number of processors in HPC platforms has increased by two orders of magnitude, making the contribution of communication in the overall execution time more significant. Therefore, the state-of-the-art parallel matrix multiplication algorithms should be revisited to reduce the communication cost further. This paper introduces a new parallel matrix multiplication algorithm, Hierarchical SUMMA (HSUMMA), which is a redesign of SUMMA. Our algorithm reduces the communication cost of SUMMA by introducing a two-level virtual hierarchy into the two-dimensional arrangement of processors. Experiments on an IBM BlueGene/P demonstrate the reduction of communication cost up to 2.08 times on 2048 cores and up to 5.89 times on 16384 cores. © 2013 IEEE.

  20. Hierarchical Parallel Matrix Multiplication on Large-Scale Distributed Memory Platforms

    KAUST Repository

    Quintin, Jean-Noel; Hasanov, Khalid; Lastovetsky, Alexey

    2013-01-01

    Matrix multiplication is a very important computation kernel both in its own right as a building block of many scientific applications and as a popular representative for other scientific applications. Cannon's algorithm which dates back to 1969 was the first efficient algorithm for parallel matrix multiplication providing theoretically optimal communication cost. However this algorithm requires a square number of processors. In the mid-1990s, the SUMMA algorithm was introduced. SUMMA overcomes the shortcomings of Cannon's algorithm as it can be used on a nonsquare number of processors as well. Since then the number of processors in HPC platforms has increased by two orders of magnitude making the contribution of communication in the overall execution time more significant. Therefore, the state of the art parallel matrix multiplication algorithms should be revisited to reduce the communication cost further. This paper introduces a new parallel matrix multiplication algorithm, Hierarchical SUMMA (HSUMMA), which is a redesign of SUMMA. Our algorithm reduces the communication cost of SUMMA by introducing a two-level virtual hierarchy into the two-dimensional arrangement of processors. Experiments on an IBM BlueGene/P demonstrate the reduction of communication cost up to 2.08 times on 2048 cores and up to 5.89 times on 16384 cores. © 2013 IEEE.
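The core of SUMMA, referenced in both records above, is forming C = AB as a sum of rank-1 panel updates (column of A times row of B) that a 2-D processor grid broadcasts and accumulates. A sequential pure-Python sketch of just that formulation; the actual distribution over a grid, and the extra hierarchy level HSUMMA adds to cheapen the broadcasts, are omitted:

```python
def summa_outer_product(A, B):
    """Compute C = A @ B as a sum of rank-1 updates: for each inner
    index k, accumulate (k-th column of A) x (k-th row of B).
    In SUMMA each k-step becomes a row/column panel broadcast."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for k in range(m):              # one panel step per inner index
        for i in range(n):
            aik = A[i][k]
            for j in range(p):
                C[i][j] += aik * B[k][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(summa_outer_product(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```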

  1. SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation

    Science.gov (United States)

    Steinman, Jeff S.

    1992-01-01

    Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.

  2. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.

  3. Electric field and dielectrophoretic force on a dielectric particle chain in a parallel-plate electrode system

    International Nuclear Information System (INIS)

    Techaumnat, B; Eua-arporn, B; Takuma, T

    2004-01-01

This paper presents calculations of the electric field and dielectrophoretic force on a dielectric particle chain suspended in a host liquid between parallel-plate electrodes. The calculation is based on the method of multipole images using the multipole re-expansion technique. We have investigated the effect of the particle permittivity, the tilt angle (between the chain and the applied field) and the chain arrangement on the electric field and force. The results show that the electric field intensification rises with the ratio of particle-to-liquid permittivity, Γε. The electric field at the contact point between the particles decreases with increasing tilt angle, while the maximal field at the contact point between the particles and the plate electrodes is almost unchanged. The maximal field can be approximated by a simple formula, which is a quadratic function of Γε. The dielectrophoretic force depends significantly on the distance from other particles or an electrode. However, for the tilt angles considered in this paper, the horizontal force on the upper particle of the chain is always directed opposite to the shear direction. The maximal horizontal force on a chain varies in proportion to (Γε − 1)^1.7 if the particles in the chain are still in contact with each other. The approximated force, based on the force on an isolated chain, has been compared with our calculation results; the comparison shows that no approximation model agrees well with our results throughout the range of permittivity ratios.
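The quoted power-law scaling of the maximal horizontal force with the permittivity ratio can be written as a one-line function. The proportionality constant below is a hypothetical placeholder (it depends on geometry and applied field and is not given in the abstract); only the exponent 1.7 comes from the record:

```python
def max_horizontal_force(gamma_eps: float, f_ref: float = 1.0) -> float:
    """F_max ∝ (Γε − 1)**1.7, the scaling reported for particles in
    contact. f_ref is an assumed, geometry-dependent prefactor."""
    if gamma_eps <= 1.0:
        raise ValueError("scaling quoted for particle permittivity above the host's")
    return f_ref * (gamma_eps - 1.0) ** 1.7

print(max_horizontal_force(2.0))  # (1)**1.7 = 1.0 (in units of f_ref)
```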

  4. Design and Analysis of Cooperative Cable Parallel Manipulators for Multiple Mobile Cranes

    Directory of Open Access Journals (Sweden)

    Bin Zi

    2012-11-01

The design, dynamic modelling, and workspace of cooperative cable parallel manipulators for multiple mobile cranes (CPMMCs) are presented in this paper. The CPMMCs can handle complex tasks that are more difficult or even impossible for a single mobile crane. Kinematics and dynamics of the CPMMCs are studied on the basis of geometric methodology and d'Alembert's principle, and a mathematical model of the CPMMCs is developed and presented with dynamic simulation. A constant-orientation workspace analysis of the CPMMCs is additionally carried out. As an example, a cooperative cable parallel manipulator for triple mobile cranes with 6 degrees of freedom is investigated on the basis of the above design objectives.

  5. Prioritizing multiple therapeutic targets in parallel using automated DNA-encoded library screening

    Science.gov (United States)

    Machutta, Carl A.; Kollmann, Christopher S.; Lind, Kenneth E.; Bai, Xiaopeng; Chan, Pan F.; Huang, Jianzhong; Ballell, Lluis; Belyanskaya, Svetlana; Besra, Gurdyal S.; Barros-Aguirre, David; Bates, Robert H.; Centrella, Paolo A.; Chang, Sandy S.; Chai, Jing; Choudhry, Anthony E.; Coffin, Aaron; Davie, Christopher P.; Deng, Hongfeng; Deng, Jianghe; Ding, Yun; Dodson, Jason W.; Fosbenner, David T.; Gao, Enoch N.; Graham, Taylor L.; Graybill, Todd L.; Ingraham, Karen; Johnson, Walter P.; King, Bryan W.; Kwiatkowski, Christopher R.; Lelièvre, Joël; Li, Yue; Liu, Xiaorong; Lu, Quinn; Lehr, Ruth; Mendoza-Losana, Alfonso; Martin, John; McCloskey, Lynn; McCormick, Patti; O'Keefe, Heather P.; O'Keeffe, Thomas; Pao, Christina; Phelps, Christopher B.; Qi, Hongwei; Rafferty, Keith; Scavello, Genaro S.; Steiginga, Matt S.; Sundersingh, Flora S.; Sweitzer, Sharon M.; Szewczuk, Lawrence M.; Taylor, Amy; Toh, May Fern; Wang, Juan; Wang, Minghui; Wilkins, Devan J.; Xia, Bing; Yao, Gang; Zhang, Jean; Zhou, Jingye; Donahue, Christine P.; Messer, Jeffrey A.; Holmes, David; Arico-Muendel, Christopher C.; Pope, Andrew J.; Gross, Jeffrey W.; Evindar, Ghotas

    2017-07-01

    The identification and prioritization of chemically tractable therapeutic targets is a significant challenge in the discovery of new medicines. We have developed a novel method that rapidly screens multiple proteins in parallel using DNA-encoded library technology (ELT). Initial efforts were focused on the efficient discovery of antibacterial leads against 119 targets from Acinetobacter baumannii and Staphylococcus aureus. The success of this effort led to the hypothesis that the relative number of ELT binders alone could be used to assess the ligandability of large sets of proteins. This concept was further explored by screening 42 targets from Mycobacterium tuberculosis. Active chemical series for six targets from our initial effort as well as three chemotypes for DHFR from M. tuberculosis are reported. The findings demonstrate that parallel ELT selections can be used to assess ligandability and highlight opportunities for successful lead and tool discovery.

  6. Multiple comparative studies of Green Supply Chain Management

    DEFF Research Database (Denmark)

    Xu, Lihui; Mathiyazhagan, K.; Govindan, Kannan

    2013-01-01

...friendly operation strategies to lower their overall carbon footprint. Currently, there is increased awareness among customers, even in developing countries, about eco-friendly manufacturing solutions. Multi-national firms have identified the economies of developed nations as a potential market for their products. Such organizations in developing countries like India and China are under pressure to adopt green concepts in supply chain operations to compete in the market and satisfy their customers' increasing needs. This paper offers a comparative study of the pressures that impact the adoption of Green Supply...

  7. INFORMATION SHARING ACROSS MULTIPLE BUYERS IN A SUPPLY CHAIN

    OpenAIRE

    JIANGHUA WU; ANANTH IYER; PAUL V. PRECKEL; XIN ZHAI

    2012-01-01

    We model the impact of information visibility in a two-level supply chain consisting of independent retailers who share upstream supply. The manufacturer supplies similar products to the two retailers and each retailer serves its independent end market. Retailers face one period of demand and satisfy the demand by ordering in the first period or back-ordering some of the demand and satisfying it in the second period. The wholesale price in the second period is decreasing in the total order si...

  8. DIALIGN P: Fast pair-wise and multiple sequence alignment using parallel processors

    Directory of Open Access Journals (Sweden)

    Kaufmann Michael

    2004-09-01

Background: Parallel computing is frequently used to speed up computationally expensive tasks in bioinformatics. Results: Herein, a parallel version of the multi-alignment program DIALIGN is introduced. We propose two ways of dividing the program into independent sub-routines that can be run on different processors: (a) pair-wise sequence alignments, which are used as a first step to multiple alignment, account for most of the CPU time in DIALIGN. Since alignments of different sequence pairs are completely independent of each other, they can be distributed to multiple processors without any effect on the resulting output alignments. (b) For alignments of large genomic sequences, we use a heuristic that splits sequences into sub-sequences based on a previously introduced anchored alignment procedure. For our test sequences, this combined approach reduces the running time of DIALIGN by up to 97%. Conclusions: By distributing sub-routines to multiple processors, the running time of DIALIGN can be crucially improved. With these improvements, it is possible to apply the program in large-scale genomics and proteomics projects that were previously beyond its scope.
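The pairwise step parallelized in (a) works because the n(n−1)/2 pairwise jobs are mutually independent, so they can be farmed out to workers without changing any result. A toy sketch of that pattern; the stand-in scoring function and worker count are assumptions, nothing like DIALIGN's actual alignment:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def pair_score(a: str, b: str) -> int:
    """Toy stand-in for a pairwise alignment score: count of matching
    characters at equal positions."""
    return sum(x == y for x, y in zip(a, b))

def all_pair_scores(seqs, workers=4):
    """Distribute all n(n-1)/2 independent pairwise jobs to a pool;
    the output is identical to a sequential loop over the pairs."""
    pairs = list(combinations(range(len(seqs)), 2))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(lambda ij: pair_score(seqs[ij[0]], seqs[ij[1]]), pairs))
    return dict(zip(pairs, scores))

seqs = ["ACGT", "ACGA", "TCGT"]
print(all_pair_scores(seqs))  # {(0, 1): 3, (0, 2): 3, (1, 2): 2}
```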

  9. Cubic systems with invariant affine straight lines of total parallel multiplicity seven

    Directory of Open Access Journals (Sweden)

    Alexandru Suba

    2013-12-01

In this article, we study planar cubic differential systems with invariant affine straight lines of total parallel multiplicity seven. We classify these systems according to their geometric properties encoded in the configurations of invariant straight lines. We show that there are only 17 different topological phase portraits in the Poincaré disc associated with this family of cubic systems, up to a reversal of the sense of their orbits, and we provide representatives of every class modulo an affine change of variables and a rescaling of the time variable.

  10. Identical parallel machine scheduling with nonlinear deterioration and multiple rate modifying activities

    Directory of Open Access Journals (Sweden)

    Ömer Öztürkoğlu

    2017-07-01

This study focuses on identical parallel machine scheduling of jobs with deteriorating processing times and rate-modifying activities (RMAs). We consider nonlinearly increasing processing times of jobs based on their position assignment. Rate-modifying activities are also considered, to recover the increase in processing times of jobs due to deterioration. We propose heuristic algorithms based on ant colony optimization and simulated annealing to solve the problem with multiple RMAs in a reasonable amount of time. Finally, we show that the ant colony optimization algorithm generates near-optimal solutions and superior results compared to the simulated annealing algorithm.
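A single-machine sketch of the cost structure described above: a job's processing time grows with its position since the last rate-modifying activity, and an RMA resets the deterioration counter at some time cost. The p·r**a deterioration form and all numbers are illustrative assumptions; the paper's exact deterioration function is not reproduced here:

```python
def completion_time(base_times, a=0.1, rma_positions=(), rma_length=0.0):
    """Total completion time when the job in position r (counted since
    the last RMA) takes p * r**a; an RMA before a listed position
    costs rma_length and resets r to 1."""
    t, r = 0.0, 1
    for pos, p in enumerate(base_times, start=1):
        if pos in rma_positions:      # maintenance before this job
            t += rma_length
            r = 1
        t += p * r ** a
        r += 1
    return t

jobs = [4.0, 3.0, 5.0]
print(completion_time(jobs, a=0.0))   # no deterioration: 12.0
print(completion_time(jobs, a=0.3, rma_positions=(3,), rma_length=1.0))
```
Placing RMAs then becomes the optimization: maintenance time traded against the positional deterioration it resets, across the parallel machines.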

  11. The Great Chains of Computing: Informatics at Multiple Scales

    Directory of Open Access Journals (Sweden)

    Kevin Kirby

    2011-10-01

The perspective that information processing is pervasive in the universe has proven to be an increasingly productive one. Phenomena from the quantum level to social networks have commonalities that can be usefully explicated using principles of informatics. We argue that the notion of scale is particularly salient here. An appreciation of what is invariant and what is emergent across scales, and of the variety of different types of scales, establishes a useful foundation for the transdiscipline of informatics. We survey the notion of scale and use it to explore the characteristic features of information statics (data), kinematics (communication), and dynamics (processing). We then explore the analogy to the principles of plenitude and continuity that feature in Western thought under the name of the "great chain of being", from Plato through Leibniz and beyond, and show that the pancomputational turn is a modern counterpart of this ruling idea. We conclude by arguing that this broader perspective can enhance informatics pedagogy.

  12. Price competition and equilibrium analysis in multiple hybrid channel supply chain

    Science.gov (United States)

    Kuang, Guihua; Wang, Aihu; Sha, Jin

    2017-06-01

The rapid growth of the Internet and the logistics industry prompts more and more enterprises to sell commodities through multiple channels. Such market conditions make the participants of a multiple hybrid channel supply chain compete with each other in the traditional and direct channels at the same time. This paper builds a two-echelon supply chain model with a single manufacturer and a single retailer, each of whom can choose a different channel or channel combination for their own sales, and then discusses the price competition and calculates the equilibrium price under different sales channel selection combinations. Our analysis shows that, whether the manufacturer and retailer compete on price in the same channel or in different channels, an equilibrium price does not necessarily exist in the multiple hybrid channel supply chain, and a wholesale price change is not always able to coordinate the supply chain completely. We also present sufficient and necessary conditions for the existence of the equilibrium price and of a coordinating wholesale price.
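The point that a price equilibrium need not exist can be illustrated with a stylized differentiated-duopoly model. This is NOT the paper's model; the linear demand q_i = a − b·p_i + c·p_j, unit cost m, and the numbers below are all assumptions chosen so the symmetric Bertrand equilibrium has a closed form:

```python
def symmetric_equilibrium(a, b, c, m):
    """Symmetric Bertrand equilibrium of a differentiated duopoly with
    demand q_i = a - b*p_i + c*p_j and unit cost m. The first-order
    condition a - 2b*p + c*p + b*m = 0 gives p* = (a + b*m)/(2b - c),
    which only exists as a stable equilibrium when 2b > c."""
    if 2 * b <= c:
        return None  # cross-channel substitution too strong: no equilibrium
    return (a + b * m) / (2 * b - c)

print(symmetric_equilibrium(a=10.0, b=2.0, c=1.0, m=1.0))  # (10+2)/3 = 4.0
print(symmetric_equilibrium(a=10.0, b=1.0, c=3.0, m=1.0))  # None
```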

  13. The existence and characterization of self-sustaining multiplicative fusion and fission reaction chains

    International Nuclear Information System (INIS)

    Harms, A.A.; Heindler, M.

    1980-01-01

    The mathematical-physical similarities and differences between fusion and fission multiplication processes are investigated. It is shown that advanced fusion cycles can sustain excursion tendencies which are essentially analogous to conventional fission cycles. The result that fission excursions are unbounded and that fusion excursions eventually attain an asymptote represents a significant distinction between these fundamental self-sustaining nuclear multiplicative chains. (Auth.)

  14. Innovation in a multiple-stage, multiple-product food marketing chain

    DEFF Research Database (Denmark)

    Baker, Alister Derek; Christensen, Tove

A model of a 3-stage food marketing chain is presented for the case of two products. It extends existing work through its capacity to examine non-competitive input and output markets in two marketing chains at once, with the chains related by demand and cost interactions. The simulated impacts of market power in a single chain generally reproduce those delivered by previous authors. The impacts of market power in related chains are found to depend on linkages between chains in terms of interactions in consumer demand. Interactions between products in costs (economies of scope) generate an interesting result in that a possible market failure is identified that may be offset by the exercise of market power. The generation of farm-level innovation is seen to be largely unaffected by market power, but where market power is exercised the benefits are extracted from farmers and consumers...

  15. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications.

    Directory of Open Access Journals (Sweden)

    Md Selim Hossain

In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for the group operations, to achieve high speed and low hardware requirements for ECPM. It has been implemented over binary fields recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports both Koblitz and random curves for key sizes of 233 and 163 bits. For the group operations, the finite-field arithmetic operations, e.g. multiplication, are designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs in a Xilinx Virtex-7 FPGA, for Koblitz and random curves respectively, and 0.81 μs in ASIC 65-nm technology; these are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, taking around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance and the Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature.
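The operation such hardware accelerates is the double-and-add scalar-multiplication loop: one point doubling per key bit and one point addition per set bit. A toy software sketch over a small prime field in affine coordinates (the paper works over NIST binary fields in Jacobian projective coordinates with a combined PDPA unit; the curve parameters below are toy assumptions for illustration):

```python
# Toy curve y^2 = x^3 + 2x + 3 over GF(97); None is the point at infinity.
P_MOD, A = 97, 2

def point_add(P, Q):
    """Affine short-Weierstrass addition/doubling (chord-and-tangent)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:                                        # tangent slope
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                             # chord slope
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return x3, (lam * (x1 - x3) - y1) % P_MOD

def scalar_mult(k, P):
    """Left-to-right double-and-add: the loop an ECPM core pipelines."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R)                           # always double
        if bit == "1":
            R = point_add(R, P)                       # add on set bits
    return R

G = (3, 6)                  # on the curve: 6^2 = 3^3 + 2*3 + 3 (mod 97)
print(scalar_mult(2, G))    # (80, 10)
```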

  16. Vipie: web pipeline for parallel characterization of viral populations from multiple NGS samples.

    Science.gov (United States)

    Lin, Jake; Kramna, Lenka; Autio, Reija; Hyöty, Heikki; Nykter, Matti; Cinek, Ondrej

    2017-05-15

    Next generation sequencing (NGS) technology allows laboratories to investigate virome composition in clinical and environmental samples in a culture-independent way. There is a need for bioinformatic tools capable of parallel processing of virome sequencing data by exactly identical methods: this is especially important in studies of multifactorial diseases, or in parallel comparison of laboratory protocols. We have developed a web-based application allowing direct upload of sequences from multiple virome samples using custom parameters. The samples are then processed in parallel using an identical protocol, and can be easily reanalyzed. The pipeline performs de-novo assembly, taxonomic classification of viruses as well as sample analyses based on user-defined grouping categories. Tables of virus abundance are produced from cross-validation by remapping the sequencing reads to a union of all observed reference viruses. In addition, read sets and reports are created after processing unmapped reads against known human and bacterial ribosome references. Secured interactive results are dynamically plotted with population and diversity charts, clustered heatmaps and a sortable and searchable abundance table. The Vipie web application is a unique tool for multi-sample metagenomic analysis of viral data, producing searchable hits tables, interactive population maps, alpha diversity measures and clustered heatmaps that are grouped in applicable custom sample categories. Known references such as human genome and bacterial ribosomal genes are optionally removed from unmapped ('dark matter') reads. Secured results are accessible and shareable on modern browsers. Vipie is a freely available web-based tool whose code is open source.

  17. Practical enhancement factor model based on GM for multiple parallel reactions: Piperazine (PZ) CO2 capture

    DEFF Research Database (Denmark)

    Gaspar, Jozsef; Fosbøl, Philip Loldrup

    2017-01-01

    Reactive absorption is a key process for gas separation and purification, and it is the main technology for CO2 capture. Thus, reliable and simple mathematical models for mass transfer rate calculation are essential. Models which apply to parallel interacting and non-interacting reactions, for all ... , desorption and pinch conditions. In this work, we apply the GM model to multiple parallel reactions. We deduce the model for piperazine (PZ) CO2 capture and we validate it against wetted-wall column measurements using 2, 5 and 8 molal PZ for temperatures between 40 °C and 100 °C and CO2 loadings between 0.23 and 0.41 mol CO2/2 mol PZ. We show that overall second-order kinetics describes well the reaction between CO2 and PZ, accounting for the carbamate and bicarbamate reactions. Here we prove the GM model for piperazine and MEA, but we expect that this practical approach is applicable to various amines ...

  18. Tunable multiple plasmon induced transparencies in parallel graphene sheets and its applications

    Science.gov (United States)

    Khazaee, Sara; Granpayeh, Nosrat

    2018-01-01

    Tunable plasmon induced transparency is achieved by using only two parallel graphene sheets above a silicon diffractive grating in the mid-infrared region. Excitation of the guided-wave resonance (GWR) in this structure is seen in the normal-incidence transmission spectra and plays the role of the bright resonance mode. Weak hybridization between the two bright modes creates the plasmon induced transparency (PIT) optical response. The resonance frequency of the transparency window can be tuned by the different geometrical parameters. Alternatively, varying the graphene Fermi energy can tune the resonance frequency of the transparency window without reconstruction and re-fabrication of the structure. We demonstrate the existence of multiple PIT spectral responses resulting from a series of self-assembled GWRs, to be used as a wavelength demultiplexer. This study can be used for the design of ultra-compact optical devices and photonic integrated circuits.

  19. Scattering by multiple parallel radially stratified infinite cylinders buried in a lossy half space.

    Science.gov (United States)

    Lee, Siu-Chun

    2013-07-01

    The theoretical solution for scattering by an arbitrary configuration of closely spaced parallel infinite cylinders buried in a lossy half space is presented in this paper. The refractive index and permeability of the half space and cylinders are complex in general. Each cylinder is radially stratified with a distinct complex refractive index and permeability. The incident radiation is an arbitrarily polarized plane wave propagating in the plane normal to the axes of the cylinders. Analytic solutions are derived for the electric and magnetic fields and the Poynting vector of backscattered radiation emerging from the half space. Numerical examples are presented to illustrate the application of the scattering solution to calculate backscattering from a lossy half space containing multiple homogeneous and radially stratified cylinders at various depths and different angles of incidence.

  20. Markov chain formalism for generalized radiative transfer in a plane-parallel medium, accounting for polarization

    International Nuclear Information System (INIS)

    Xu, Feng; Davis, Anthony B.; Diner, David J.

    2016-01-01

    A Markov chain formalism is developed for computing the transport of polarized radiation according to Generalized Radiative Transfer (GRT) theory, which was developed recently to account for unresolved random fluctuations of scattering particle density and can also be applied to unresolved spectral variability of gaseous absorption as an improvement over the standard correlated-k method. Using a Gamma distribution to describe the probability density function of the extinction or absorption coefficient, a shape parameter a that quantifies the variability is introduced, defined as the square of the mean extinction or absorption coefficient divided by its variance. It controls the decay rate of a power-law transmission that replaces the usual exponential Beer-Lambert-Bouguer law. Exponential transmission, hence classic RT, is recovered when a→∞. The new approach is verified to high accuracy against numerical benchmark results obtained with a custom Monte Carlo method. For a<∞, angular reciprocity is violated to a degree that increases with the spatial variability, as observed for finite portions of real-world cloudy scenes. While the degree of linear polarization in liquid water cloudbows, supernumerary bows, and glories is affected by spatial heterogeneity, the positions in scattering angle of these features are relatively unchanged. As a result, a single-scattering model based on the assumption of subpixel homogeneity can still be used to derive droplet size distributions from polarimetric measurements of extended stratocumulus clouds. - Highlights: • A Markov chain formalism is developed for Generalized Radiative Transfer theory. • Angular reciprocity is violated to a degree that increases with spatial variability. • The positions of cloudbows and glories in scattering angle are relatively unchanged.
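The power-law transmission described in this abstract has a simple closed form under the Gamma assumption. A minimal numeric sketch (the function name is illustrative; it implements the standard Gamma-averaged law T_a(τ) = (1 + τ/a)^(−a), which tends to the Beer-Lambert exponential e^(−τ) as a → ∞):

```python
import math

def transmission(tau, a=None):
    """Direct transmission over optical depth tau.
    a is the shape parameter of the Gamma-distributed extinction;
    a = None gives the classic exponential Beer-Lambert-Bouguer law."""
    if a is None:
        return math.exp(-tau)
    return (1.0 + tau / a) ** (-a)

# The power law approaches the exponential as the variability vanishes (a grows)
for a in (1, 10, 1000):
    print(a, transmission(2.0, a))
print("exp:", transmission(2.0))
```

Small a (high variability) gives a heavier tail than the exponential, which is what drives the reciprocity violations the abstract reports.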

  1. Markov chain formalism for generalized radiative transfer in a plane-parallel medium, accounting for polarization

    Science.gov (United States)

    Xu, Feng; Davis, Anthony B.; Diner, David J.

    2016-11-01

    A Markov chain formalism is developed for computing the transport of polarized radiation according to Generalized Radiative Transfer (GRT) theory, which was developed recently to account for unresolved random fluctuations of scattering particle density and can also be applied to unresolved spectral variability of gaseous absorption as an improvement over the standard correlated-k method. Using a Gamma distribution to describe the probability density function of the extinction or absorption coefficient, a shape parameter a that quantifies the variability is introduced, defined as the square of the mean extinction or absorption coefficient divided by its variance. It controls the decay rate of a power-law transmission that replaces the usual exponential Beer-Lambert-Bouguer law. Exponential transmission, hence classic RT, is recovered when a→∞. The new approach is verified to high accuracy against numerical benchmark results obtained with a custom Monte Carlo method. For a<∞, angular reciprocity is violated to a degree that increases with the spatial variability, as observed for finite portions of real-world cloudy scenes. While the degree of linear polarization in liquid water cloudbows, supernumerary bows, and glories is affected by spatial heterogeneity, the positions in scattering angle of these features are relatively unchanged. As a result, a single-scattering model based on the assumption of subpixel homogeneity can still be used to derive droplet size distributions from polarimetric measurements of extended stratocumulus clouds.

  2. Sequential optimization of matrix chain multiplication relative to different cost functions

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2011-01-01

    In this paper, we present a methodology to optimize matrix chain multiplication sequentially relative to different cost functions, such as the total number of scalar multiplications, the communication overhead in a multiprocessor environment, etc. For n matrices, our optimization procedure requires O(n³) arithmetic operations per cost function. This work is done in the framework of a dynamic programming extension that allows sequential optimization relative to different criteria. © 2011 Springer-Verlag Berlin Heidelberg.
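For the single criterion of total scalar multiplications, the underlying O(n³) dynamic program is the standard textbook one; a minimal sketch (the paper's sequential multi-criteria extension is not reproduced here):

```python
def matrix_chain_order(dims):
    """Minimum number of scalar multiplications to compute A1*...*An,
    where matrix Ai has shape dims[i-1] x dims[i]."""
    n = len(dims) - 1  # number of matrices in the chain
    # cost[i][j]: minimal cost of multiplying matrices i..j (1-indexed)
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # sub-chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k between i and j
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]

# Shapes 10x30, 30x5, 5x60: best order is (A1 A2) A3 = 1500 + 3000 = 4500
print(matrix_chain_order([10, 30, 5, 60]))
```

Swapping the cost term `dims[i-1] * dims[k] * dims[j]` for another criterion (e.g. a communication-cost estimate) is the hook the paper's multi-criteria framework generalizes.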

  3. The rhizosphere microbial community in a multiple parallel mineralization system suppresses the pathogenic fungus Fusarium oxysporum

    Science.gov (United States)

    Fujiwara, Kazuki; Iida, Yuichiro; Iwai, Takashi; Aoyama, Chihiro; Inukai, Ryuya; Ando, Akinori; Ogawa, Jun; Ohnishi, Jun; Terami, Fumihiro; Takano, Masao; Shinohara, Makoto

    2013-01-01

    The rhizosphere microbial community in a hydroponics system with multiple parallel mineralization (MPM) can potentially suppress root-borne diseases. This study focused on revealing the biological nature of the suppression against Fusarium wilt disease, which is caused by the fungus Fusarium oxysporum, and describing the factors that may influence the fungal pathogen in the MPM system. We demonstrated that the rhizosphere microbiota that developed in the MPM system could suppress Fusarium wilt disease under in vitro and greenhouse conditions. The microbiological characteristics of the MPM system were able to control the population dynamics of F. oxysporum, but did not eradicate the fungal pathogen. The roles of the microbiological agents underlying the disease suppression and the magnitude of the disease suppression in the MPM system appear to depend on the microbial density. F. oxysporum that survived in the MPM system formed chlamydospores when exposed to the rhizosphere microbiota. These results suggest that the microbiota suppresses proliferation of F. oxysporum by controlling the pathogen's morphogenesis and by developing an ecosystem that permits coexistence with F. oxysporum. PMID:24311557

  4. Performance Analysis of a Threshold-Based Parallel Multiple Beam Selection Scheme for WDM FSO Systems

    KAUST Repository

    Nam, Sung Sik

    2018-04-09

    In this paper, we statistically analyze the performance of a threshold-based parallel multiple beam selection scheme for a free-space optical (FSO) based system with wavelength division multiplexing (WDM), in the presence of pointing error, under independent identically distributed Gamma-Gamma fading conditions. To simplify the mathematical analysis, we additionally consider Gamma turbulence conditions, which are a good approximation of the Gamma-Gamma distribution. Specifically, we statistically analyze the operating characteristics under conventional detection schemes (i.e., heterodyne detection (HD) and intensity modulation/direct detection (IM/DD) techniques) for both the adaptive modulation (AM) case and the non-AM case (i.e., coherent/non-coherent binary modulation). Then, based on the statistically derived results, we evaluate the outage probability of a selected beam, the average spectral efficiency (ASE), the average number of selected beams (ANSB) and the average bit error rate (BER). Selected results show that we can obtain higher spectral efficiency while avoiding much of the implementation complexity introduced by the selection-based beam selection scheme, without considerable performance loss. Especially in the AM case, the ASE can be increased further compared to the non-AM cases. Our derived results, based on the Gamma distribution as an approximation of the Gamma-Gamma distribution, can be used as approximate performance bounds; in particular, they may serve as lower bounds on the considered performance measures.

  5. The rhizosphere microbial community in a multiple parallel mineralization system suppresses the pathogenic fungus Fusarium oxysporum.

    Science.gov (United States)

    Fujiwara, Kazuki; Iida, Yuichiro; Iwai, Takashi; Aoyama, Chihiro; Inukai, Ryuya; Ando, Akinori; Ogawa, Jun; Ohnishi, Jun; Terami, Fumihiro; Takano, Masao; Shinohara, Makoto

    2013-12-01

    The rhizosphere microbial community in a hydroponics system with multiple parallel mineralization (MPM) can potentially suppress root-borne diseases. This study focused on revealing the biological nature of the suppression against Fusarium wilt disease, which is caused by the fungus Fusarium oxysporum, and describing the factors that may influence the fungal pathogen in the MPM system. We demonstrated that the rhizosphere microbiota that developed in the MPM system could suppress Fusarium wilt disease under in vitro and greenhouse conditions. The microbiological characteristics of the MPM system were able to control the population dynamics of F. oxysporum, but did not eradicate the fungal pathogen. The roles of the microbiological agents underlying the disease suppression and the magnitude of the disease suppression in the MPM system appear to depend on the microbial density. F. oxysporum that survived in the MPM system formed chlamydospores when exposed to the rhizosphere microbiota. These results suggest that the microbiota suppresses proliferation of F. oxysporum by controlling the pathogen's morphogenesis and by developing an ecosystem that permits coexistence with F. oxysporum. © 2013 The Authors. MicrobiologyOpen published by John Wiley & Sons Ltd.

  6. MSAProbs-MPI: parallel multiple sequence aligner for distributed-memory systems.

    Science.gov (United States)

    González-Domínguez, Jorge; Liu, Yongchao; Touriño, Juan; Schmidt, Bertil

    2016-12-15

    MSAProbs is a state-of-the-art protein multiple sequence alignment tool based on hidden Markov models. It can achieve high alignment accuracy at the expense of relatively long runtimes for large-scale input datasets. In this work we present MSAProbs-MPI, a distributed-memory parallel version of the multithreaded MSAProbs tool that is able to reduce runtimes by exploiting the compute capabilities of common multicore CPU clusters. Our performance evaluation on a cluster with 32 nodes (each containing two Intel Haswell processors) shows reductions in execution time of over one order of magnitude for typical input datasets. Furthermore, MSAProbs-MPI using eight nodes is faster than the GPU-accelerated QuickProbs running on a Tesla K20. Another strong point is that MSAProbs-MPI can deal with large datasets for which MSAProbs and QuickProbs might fail due to time and memory constraints, respectively. Source code in C++ and MPI running on Linux systems, as well as a reference manual, are available at http://msaprobs.sourceforge.net. Contact: jgonzalezd@udc.es. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Short term scheduling of multiple grid-parallel PEM fuel cells for microgrid applications

    Energy Technology Data Exchange (ETDEWEB)

    El-Sharkh, M.Y.; Rahman, A.; Alam, M.S. [Dept. of Electrical and Computer Engineering, University of South Alabama, Mobile, AL 36688 (United States)

    2010-10-15

    This paper presents a short term scheduling scheme for multiple grid-parallel PEM fuel cell power plants (FCPPs) connected to supply electrical and thermal energy to a microgrid community. As in the case of regular power plants, short term scheduling of FCPPs is a cost-based optimization problem that includes the cost of operation, thermal power recovery, and the power trade with the local utility grid. Because the microgrid community can trade power with the local grid, the power balance constraint is not applicable; other constraints, such as the real power operating limits of the FCPPs and minimum up and down times, are used instead. To solve the short term scheduling problem of the FCPPs, a hybrid technique based on evolutionary programming (EP) and a hill climbing (HC) technique is used. The EP is used to estimate the optimal schedule and the output power of each FCPP. The HC technique is used to monitor the feasibility of the solution during the search process. The short term scheduling problem is used to estimate the schedule and the electrical and thermal power output of five FCPPs supplying a maximum power of 300 kW. (author)

  8. Category-based attentional guidance can operate in parallel for multiple target objects.

    Science.gov (United States)

    Jenkins, Michael; Grubert, Anna; Eimer, Martin

    2018-04-30

    The question whether the control of attention during visual search is always feature-based or can also be based on the category of objects remains unresolved. Here, we employed the N2pc component as an on-line marker for target selection processes to compare the efficiency of feature-based and category-based attentional guidance. Two successive displays containing pairs of real-world objects (line drawings of kitchen or clothing items) were separated by a 10 ms SOA. In Experiment 1, target objects were defined by their category. In Experiment 2, one specific visual object served as target (exemplar-based search). On different trials, targets appeared either in one or in both displays, and participants had to report the number of targets (one or two). Target N2pc components were larger and emerged earlier during exemplar-based search than during category-based search, demonstrating the superior efficiency of feature-based attentional guidance. On trials where target objects appeared in both displays, both targets elicited N2pc components that overlapped in time, suggesting that attention was allocated in parallel to these target objects. Critically, this was the case not only in the exemplar-based task, but also when targets were defined by their category. These results demonstrate that attention can be guided by object categories, and that this type of category-based attentional control can operate concurrently for multiple target objects. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Column-Parallel Single Slope ADC with Digital Correlated Multiple Sampling for Low Noise CMOS Image Sensors

    NARCIS (Netherlands)

    Chen, Y.; Theuwissen, A.J.P.; Chae, Y.

    2011-01-01

    This paper presents a low noise CMOS image sensor (CIS) using 10/12 bit configurable column-parallel single slope ADCs (SS-ADCs) and digital correlated multiple sampling (CMS). The sensor used is a conventional 4T active pixel with a pinned-photodiode as photon detector. The test sensor was

  10. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noël; Lastovetsky, Alexey

    2014-01-01

    -scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel

  11. A Laboratory Preparation of Aspartame Analogs Using Simultaneous Multiple Parallel Synthesis Methodology

    Science.gov (United States)

    Qvit, Nir; Barda, Yaniv; Gilon, Chaim; Shalev, Deborah E.

    2007-01-01

    This laboratory experiment provides a unique opportunity for students to synthesize three analogues of aspartame, a commonly used artificial sweetener. The students are introduced to the powerful and useful method of parallel synthesis while synthesizing three dipeptides in parallel using solid-phase peptide synthesis (SPPS) and simultaneous…

  12. Influence of temporal context on value in the multiple-chains and successive-encounters procedures.

    Science.gov (United States)

    O'Daly, Matthew; Angulo, Samuel; Gipson, Cassandra; Fantino, Edmund

    2006-05-01

    This set of studies explored the influence of temporal context across multiple-chain and multiple-successive-encounters procedures. Following training with different temporal contexts, the value of stimuli sharing similar reinforcement schedules was assessed by presenting these stimuli in concurrent probes. The results for the multiple-chain schedule indicate that temporal context does impact the value of a conditioned reinforcer consistent with delay-reduction theory, such that a stimulus signaling a greater reduction in delay until reinforcement has greater value. Further, nonreinforced stimuli that are concurrently presented with the preferred terminal link also have greater value, consistent with value transfer. The effects of context on value for conditions with the multiple-successive-encounters procedure, however, appear to depend on whether the search schedule or alternate handling schedule was manipulated, as well as on whether the tested stimuli were the rich or lean schedules in their components. Overall, the results help delineate the conditions under which temporal context affects conditioned-reinforcement value (acting as a learning variable) and the conditions under which it does not (acting as a performance variable), an issue of relevance to theories of choice.

  13. Multiple-Criteria Decision Support for a Sustainable Supply Chain: Applications to the Fashion Industry

    Directory of Open Access Journals (Sweden)

    Kim Leng Poh

    2017-10-01

    With increasing globalization and international cooperation, the importance of sustainability management across supply chains has received much attention from companies across various industries. Companies therefore strive to implement effective and integrated sustainable supply chain management initiatives to improve their operational and economic performance while minimizing unnecessary damage to the environment and maintaining their social reputation and image. This paper presents an easy-to-use decision-support approach based on multiple-criteria decision-making (MCDM) methodologies that aims to help companies develop effective models for timely decision-making involving sustainable supply chain management strategies. The proposed approach can be used by practitioners to ultimately build a comprehensive Analytic Network Process model that adequately captures and reveals all the interrelationships and interdependencies among the elements of the problem, which is often a very difficult task. To facilitate and simplify this complex process, we propose that hierarchical thinking be used first to structure the essence of the problem, capturing only the major issues, and that an Analytic Hierarchy Process (AHP) model be built. Users can learn from the modeling process and gain much insight into the problem. The AHP model can then be extended to an Analytic Network Process (ANP) model so as to capture the relationships and interdependencies among the elements. Our approach can reduce the expertise, effort and information that are often needed to build an ANP model from scratch. We apply our approach to the evaluation of sustainable supply chain management strategies for the fashion industry. Three main dimensions of sustainability (environmental, economic and social) are considered. Based on the literature, we identified four alternative supply chain management strategies. It was found that the Reverse Logistics alternative appears to be the

  14. A note on the nucleation with multiple steps: Parallel and series nucleation

    OpenAIRE

    Iwamatsu, Masao

    2012-01-01

    Parallel and series nucleation are the basic elements of the complex nucleation process when two saddle points exist on the free-energy landscape. It is pointed out that the nucleation rates follow formulas similar to those of parallel and series connection of resistors or conductors in an electric circuit. Necessary formulas to calculate individual nucleation rates at the saddle points and the total nucleation rate are summarized, and the extension to the more complex nucleation process is suggested.

  15. Optimal inventory policy in a closed loop supply chain system with multiple periods

    International Nuclear Information System (INIS)

    Sasi Kumar, A.; Natarajan, K.; Ramasubramaniam, Muthu Rathna Sapabathy.; Deepaknallasamy, K.K.

    2017-01-01

    Purpose: This paper aims to model and optimize the closed loop supply chain for maximizing the profit by considering a fixed order quantity (FOQ) inventory policy at various sites over multiple periods. Design/methodology/approach: In the forward supply chain, a standard inventory policy can be followed as the product moves from manufacturer to distributor, retailer and customer, but the inventory in the reverse supply chain of the product is very difficult to manage with a similar standard policy. This model investigates the standard policy of fixed order quantity by considering the three major types of return-recovery pairs, namely commercial returns, end-of-use returns and end-of-life returns, and their inventory positioning at multiple periods. The model is configured as a mixed integer linear program and solved by IBM ILOG CPLEX OPL Studio. Findings: To assess the performance of the model, a numerical example is considered for a product with three parts (A, of which two units are required, B and C) over 12 periods. The results of the analysis show that, by adopting the FOQ inventory policy at different sites under capacity constraints, the manufacturer can determine how much should be manufactured in each period based on variations of the demand. In addition, it shows how much of each part should be purchased from the supplier in the given 12 periods. Originality/value: A sensitivity analysis is performed to validate the proposed model in two parts. The first part of the analysis focuses on the inventory of the product and parts, and the second part focuses on the profit of the company. The analysis provides some insights into the structure of the model.

  16. Optimal inventory policy in a closed loop supply chain system with multiple periods

    Energy Technology Data Exchange (ETDEWEB)

    Sasi Kumar, A.; Natarajan, K.; Ramasubramaniam, Muthu Rathna Sapabathy.; Deepaknallasamy, K.K.

    2017-07-01

    Purpose: This paper aims to model and optimize the closed loop supply chain for maximizing the profit by considering a fixed order quantity (FOQ) inventory policy at various sites over multiple periods. Design/methodology/approach: In the forward supply chain, a standard inventory policy can be followed as the product moves from manufacturer to distributor, retailer and customer, but the inventory in the reverse supply chain of the product is very difficult to manage with a similar standard policy. This model investigates the standard policy of fixed order quantity by considering the three major types of return-recovery pairs, namely commercial returns, end-of-use returns and end-of-life returns, and their inventory positioning at multiple periods. The model is configured as a mixed integer linear program and solved by IBM ILOG CPLEX OPL Studio. Findings: To assess the performance of the model, a numerical example is considered for a product with three parts (A, of which two units are required, B and C) over 12 periods. The results of the analysis show that, by adopting the FOQ inventory policy at different sites under capacity constraints, the manufacturer can determine how much should be manufactured in each period based on variations of the demand. In addition, it shows how much of each part should be purchased from the supplier in the given 12 periods. Originality/value: A sensitivity analysis is performed to validate the proposed model in two parts. The first part of the analysis focuses on the inventory of the product and parts, and the second part focuses on the profit of the company. The analysis provides some insights into the structure of the model.

  17. Optimal inventory policy in a closed loop supply chain system with multiple periods

    Directory of Open Access Journals (Sweden)

    SasiKumar A.

    2017-05-01

    Purpose: This paper aims to model and optimize the closed loop supply chain for maximizing the profit by considering a fixed order quantity (FOQ) inventory policy at various sites over multiple periods. Design/methodology/approach: In the forward supply chain, a standard inventory policy can be followed as the product moves from manufacturer to distributor, retailer and customer, but the inventory in the reverse supply chain of the product is very difficult to manage with a similar standard policy. This model investigates the standard policy of fixed order quantity by considering the three major types of return-recovery pairs, namely commercial returns, end-of-use returns and end-of-life returns, and their inventory positioning at multiple periods. The model is configured as a mixed integer linear program and solved by IBM ILOG CPLEX OPL Studio. Findings: To assess the performance of the model, a numerical example is considered for a product with three parts (A, of which two units are required, B and C) over 12 periods. The results of the analysis show that, by adopting the FOQ inventory policy at different sites under capacity constraints, the manufacturer can determine how much should be manufactured in each period based on variations of the demand. In addition, it shows how much of each part should be purchased from the supplier in the given 12 periods. Originality/value: A sensitivity analysis is performed to validate the proposed model in two parts. The first part of the analysis focuses on the inventory of the product and parts, and the second part focuses on the profit of the company. The analysis provides some insights into the structure of the model.
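The fixed order quantity (FOQ) policy that this model builds on can be illustrated with a deliberately simplified single-site simulation (instantaneous replenishment and no lead time; the function and its parameters are hypothetical, and the paper's multi-site MILP is not reproduced):

```python
def foq_inventory(demands, Q, reorder_point, start):
    """Trace on-hand inventory under a fixed-order-quantity policy:
    each period demand draws inventory down, and whenever it falls to
    or below the reorder point a fixed lot of size Q arrives."""
    inv, history = start, []
    for d in demands:
        inv -= d
        if inv <= reorder_point:
            inv += Q  # order the fixed lot Q (received immediately here)
        history.append(inv)
    return history

# 4 periods of demand, lot size 10, reorder point 2, 8 units on hand
print(foq_inventory([3, 4, 2, 5], Q=10, reorder_point=2, start=8))
```

In the paper, the same per-period FOQ rule is embedded as constraints of the MILP for each site in both the forward and reverse chains.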

  18. Multiple attractors and crisis route to chaos in a model food-chain

    International Nuclear Information System (INIS)

    Upadhyay, Ranjit Kumar

    2003-01-01

    An attempt has been made to identify the mechanism responsible for the existence of chaos in a narrow parameter range in a realistic ecological model food chain. Analytical and numerical studies of a three-species food-chain model, similar to a situation likely to be seen in terrestrial ecosystems, have been carried out. The study of the model food chain suggests that the existence of chaos in narrow parameter ranges is caused by the crisis-induced sudden death of chaotic attractors. Varying one of the critical parameters in its range while keeping all the others constant, one can monitor the changes in the dynamical behaviour of the system, thereby fixing the regimes in which the system exhibits chaotic dynamics. The computed bifurcation diagrams and basin boundary calculations indicate that crisis is the underlying factor which generates chaotic dynamics in this model food chain. We investigate sudden qualitative changes in chaotic dynamical behaviour, which occur at the parameter value a1 = 1.7804, at which the chaotic attractor is destroyed by a boundary crisis with an unstable periodic orbit created by a saddle-node bifurcation. Multiple attractors with riddled basins and fractal boundaries are also observed. If ecological systems of interacting species do indeed exhibit multiple attractors etc., the long term dynamics of such systems may undergo vast qualitative changes following epidemics or environmental catastrophes, as the system is pushed into the basin of a new attractor by the perturbation. Coupled with stochasticity, such complex behaviours may render these systems practically unpredictable.
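Chaotic dynamics of the kind analyzed here arise already in the standard Hastings-Powell three-species food chain; a minimal Euler-integration sketch (this generic model and its parameter values are illustrative, not the specific system or parameters of the paper):

```python
def food_chain_step(x, y, z, dt=0.01,
                    a1=5.0, b1=3.0, a2=0.1, b2=2.0, d1=0.4, d2=0.01):
    """One explicit Euler step of the Hastings-Powell food chain:
    prey x, intermediate predator y, top predator z, with
    Holling type-II functional responses."""
    f1 = a1 * x / (1.0 + b1 * x)   # predation of y on x
    f2 = a2 * y / (1.0 + b2 * y)   # predation of z on y
    dx = x * (1.0 - x) - f1 * y
    dy = f1 * y - f2 * z - d1 * y
    dz = f2 * z - d2 * z
    return x + dt * dx, y + dt * dy, z + dt * dz

# Integrate a trajectory; it remains positive and bounded on the attractor
x, y, z = 0.8, 0.2, 8.0
for _ in range(20000):  # 200 time units
    x, y, z = food_chain_step(x, y, z)
print(round(x, 3), round(y, 3), round(z, 3))
```

Sweeping one parameter while recording long-run extrema of such a trajectory is exactly how the bifurcation diagrams mentioned in the abstract are computed.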

  19. Multiple scales and phases in discrete chains with application to folded proteins

    Science.gov (United States)

    Sinelnikova, A.; Niemi, A. J.; Nilsson, Johan; Ulybyshev, M.

    2018-05-01

    Chiral heteropolymers such as large globular proteins can simultaneously support multiple length scales. The interplay between the different scales brings about conformational diversity, determines the phase properties of the polymer chain, and governs the structure of the energy landscape. Most importantly, multiple scales produce complex dynamics that enable proteins to sustain living matter. However, at the moment there is incomplete understanding of how to identify and distinguish the various scales that determine the structure and dynamics of a complex protein. Here we address this problem. We develop a methodology with the potential to systematically identify different length scales, in the general case of a linear polymer chain. For this we introduce and analyze the properties of an order parameter that can both reveal the presence of different length scales and probe the phase structure. We first develop our concepts in the case of chiral homopolymers. We introduce a variant of Kadanoff's block-spin transformation to coarse grain piecewise linear chains, such as the Cα backbone of a protein. We derive analytically, and then verify numerically, a number of properties that the order parameter can display in the case of a chiral polymer chain. In particular, we propose that in the case of a chiral heteropolymer the order parameter can reveal traits of several different phases, contingent on the length scale at which it is scrutinized. We confirm that this is the case with crystallographic protein structures in the Protein Data Bank. Thus our results suggest relations between the scales, the phases, and the complexity of folding pathways.
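A Kadanoff-style block transformation on a piecewise linear chain can be illustrated by coarse-graining consecutive vertices into block centroids (a generic sketch under that simple centroid rule; the paper's specific variant of the transformation and its order parameter are not reproduced):

```python
def block_spin(chain, b=2):
    """Coarse grain a piecewise linear chain (e.g. a C-alpha backbone
    trace given as a list of coordinate tuples): replace each block of b
    consecutive vertices by its centroid, halving the resolution per
    level when b = 2."""
    dim = len(chain[0])
    return [
        tuple(sum(p[i] for p in chain[k:k + b]) / b for i in range(dim))
        for k in range(0, len(chain) - b + 1, b)
    ]

# Two levels of coarse graining on a toy 3D backbone of 4 vertices
level1 = block_spin([(0, 0, 0), (2, 0, 0), (2, 2, 0), (2, 2, 2)])
print(level1)              # two block centroids remain
print(block_spin(level1))  # one centroid after the second level
```

Repeating the transformation and watching how a chain observable changes from level to level is the basic way such constructions expose the different length scales.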

  20. A note on the nucleation with multiple steps: parallel and series nucleation.

    Science.gov (United States)

    Iwamatsu, Masao

    2012-01-28

    Parallel and series nucleation are the basic elements of the complex nucleation process when two saddle points exist on the free-energy landscape. It is pointed out that the nucleation rates follow formulas similar to those for the parallel and series connection of resistors or conductors in an electric circuit. Necessary formulas to calculate individual nucleation rates at the saddle points and the total nucleation rate are summarized, and the extension to the more complex nucleation process is suggested. © 2012 American Institute of Physics.
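
    The circuit analogy in the note can be written down directly. A minimal sketch with illustrative rate values (not taken from the paper): for parallel nucleation the rates add like conductances, while for series (sequential) nucleation the reciprocal rates add like series resistances, so the slowest step dominates.

    ```python
    def parallel_rate(rates):
        # Independent parallel channels: total rate is the sum, like conductances.
        return sum(rates)

    def series_rate(rates):
        # Sequential steps: reciprocal rates add, like resistances in series,
        # so the total rate is bounded by the slowest individual step.
        return 1.0 / sum(1.0 / r for r in rates)

    # Two saddle points with illustrative rates (arbitrary units)
    J1, J2 = 1.0, 3.0
    ```

    Here `parallel_rate([1.0, 3.0])` gives 4.0, while `series_rate([1.0, 3.0])` gives 0.75, below the slower channel's rate, mirroring the resistor formulas referenced in the abstract.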

  1. Configuration of supply chains in emerging industries: a multiple-case study in the wave-and-tidal energy industry

    OpenAIRE

    Bjørgum, Øyvind; Netland, Torbjørn H.

    2017-01-01

    Companies in emerging industries face particular challenges in configuring effective supply chains. In this paper, we build on transaction cost economics to explore how supply chains can be configured in emerging industries. We focus on two key aspects of supply chain configuration: the make-or-buy decision and the strength of the ties between a focal firm and its suppliers. We utilise a multiple-case study methodology, including seven start-up companies in the emerging wave-and-tidal energy ...

  2. Early recognition is important when multiple magnets masquerade as a single chain after foreign body ingestion

    Directory of Open Access Journals (Sweden)

    Auriel August

    2016-10-01

    Ingestions of multiple magnets can lead to serious damage to the gastrointestinal tract. Moreover, these foreign bodies can take deceptive shapes such as single chains which may mislead clinicians. We report the case of a ten-year-old boy who swallowed 33 magnets, the most yet reported, which took on the appearance of a single loop in the stomach, while actually being located in the stomach, small bowel, and colon. Early recognition and prompt intervention are necessary to avoid complications of this foreign body misadventure.

  3. Quantum correlation approach to criticality in the XX spin chain with multiple interaction

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, W.W., E-mail: weien.cheng@gmail.com [Institute of Signal Processing and Transmission, Nanjing University of Posts and Telecommunication, Nanjing 210003 (China); Department of Physics, Hubei Normal University, Huangshi 435002 (China); Key Lab of Broadband Wireless Communication and Sensor Network Technology, Ministry of Education (China); Shan, C.J. [Department of Physics, Hubei Normal University, Huangshi 435002 (China); Sheng, Y.B.; Gong, L.Y.; Zhao, S.M. [Institute of Signal Processing and Transmission, Nanjing University of Posts and Telecommunication, Nanjing 210003 (China); Key Lab of Broadband Wireless Communication and Sensor Network Technology, Ministry of Education (China)

    2012-09-01

    We investigate the quantum critical behavior in the XX spin chain with an XZY-YZX type multiple interaction by means of quantum correlations (concurrence C, quantum discord D_Q and geometric discord D_G). Around the critical point, the values of these quantum correlations and the corresponding derivatives are investigated numerically and analytically. The results show that the non-analyticity property of the concurrence cannot signal the quantum phase transition well, but both the quantum discord and the geometric discord can characterize the critical behavior in such a model exactly.

  4. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    Science.gov (United States)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur complement is derived as M^(-1) = C - B*A^(-1)B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^(-1). For closed-chain systems, similar factorizations and O(n) algorithms for computation of the operational space mass matrix Λ and its inverse Λ^(-1) are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
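
    The factorization above rests on the generic Schur-complement identity. A small numpy check of that identity with random blocks (not the paper's block-tridiagonal C, A, B): for a symmetric positive-definite block matrix [[A, B], [B^T, C]], the bottom-right block of the inverse equals the inverse of the Schur complement C - B^T A^(-1) B.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    # Random symmetric positive-definite matrix, partitioned into 2x2 blocks.
    M = rng.standard_normal((2 * n, 2 * n))
    M = M @ M.T + 2 * n * np.eye(2 * n)
    A, B, C = M[:n, :n], M[:n, n:], M[n:, n:]

    S = C - B.T @ np.linalg.inv(A) @ B        # Schur complement of A in M
    lower_right = np.linalg.inv(M)[n:, n:]    # bottom-right block of M^(-1)
    assert np.allclose(np.linalg.inv(S), lower_right)
    ```

    The paper's contribution is that, for multibody dynamics, the blocks are tridiagonal, so this relationship can be evaluated by an O(n) recursion rather than dense inversion.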

  5. A multiple shock model for common cause failures using discrete Markov chain

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Kang, Chang Soon

    1992-01-01

    The most widely used models in common cause analysis are (single) shock models such as the BFR and the MFR. However, a single shock model cannot treat individual common causes separately and rests on some unrealistic assumptions. A multiple shock model for common cause failures is developed using Markov chain theory. This model treats each common cause shock as a separately and sequentially occurring event, so as to capture the change in the failure probability distribution due to each common cause shock. The final failure probability distribution is evaluated and compared with that from the BFR model. The results show that the multiple shock model, which minimizes the assumptions in the BFR model, is more realistic and conservative than the BFR model. Further work for application is the estimation of parameters such as the common cause shock rate and the component failure probability given a shock, p, through data analysis.
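
    The abstract does not give the model's transition probabilities, so here is a minimal sketch under an illustrative assumption: track the number of failed components among m, and let each surviving component fail independently with probability p during one common cause shock. Successive shocks then move the failure probability distribution through a discrete Markov chain, in the spirit of the sequential treatment described above.

    ```python
    from math import comb

    def shock_transition(m, p):
        """Markov transition matrix on the number of failed components (0..m).
        During one common cause shock each surviving component fails
        independently with probability p (an illustrative assumption)."""
        T = [[0.0] * (m + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            for j in range(i, m + 1):   # failures can only accumulate
                T[i][j] = comb(m - i, j - i) * p ** (j - i) * (1 - p) ** (m - j)
        return T

    def after_shocks(m, p, k):
        """Failure probability distribution after k sequential shocks."""
        dist = [1.0] + [0.0] * m        # start with all m components working
        T = shock_transition(m, p)
        for _ in range(k):
            dist = [sum(dist[i] * T[i][j] for i in range(m + 1))
                    for j in range(m + 1)]
        return dist
    ```

    For example, `after_shocks(2, 0.1, 3)[0]` equals 0.9**6, the probability that both components survive all three shocks.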

  6. Early Prognostic Value of Monitoring Serum Free Light Chain in Patients with Multiple Myeloma Undergoing Autologous Stem Cell Transplantation.

    Science.gov (United States)

    Özkurt, Zübeyde Nur; Sucak, Gülsan Türköz; Akı, Şahika Zeynep; Yağcı, Münci; Haznedar, Rauf

    2017-03-16

    We hypothesized that the levels of free light chains obtained before and after autologous stem cell transplantation can be useful in predicting transplantation outcome. We analyzed 70 multiple myeloma patients. Abnormal free light chain ratios before stem cell transplantation were found to be associated with early progression, although without any impact on overall survival. At day +30, normalization of the levels of the involved free light chain was related to early progression. According to these results, a reduction of free light chain levels by almost one-third can predict a favorable prognosis after autologous stem cell transplantation.

  7. Parallelism measurement for base plate of standard artifact with multiple tactile approaches

    Science.gov (United States)

    Ye, Xiuling; Zhao, Yan; Wang, Yiwen; Wang, Zhong; Fu, Luhua; Liu, Changjie

    2018-01-01

    Nowadays, as workpieces become more precise and more specialized, artifacts have more sophisticated structures and require higher accuracy, and higher requirements have accordingly been put forward for measuring accuracy and measuring methods. As an important means of obtaining the size of workpieces, the coordinate measuring machine (CMM) has been widely used in many industries. In the process of developing a self-made high-precision standard artifact for the calibration of a self-developed CMM, it was found that the parallelism of the base plate used for fixing the standard artifact is an important factor affecting measurement accuracy. To measure the parallelism of the base plate, three tactile measurement methods are employed, using an existing high-precision CMM, gauge blocks, a dial gauge and a marble platform, and comparisons are made between the measurement results. The experiments show that the final accuracy of all three methods reaches the micron level and meets the measurement requirements. Moreover, the three approaches suit different measurement conditions, which provides a basis for rapid and high-precision measurement under different equipment conditions.

  8. Stepped-wedge cluster randomised controlled trials: a generic framework including parallel and multiple-level designs.

    Science.gov (United States)

    Hemming, Karla; Lilford, Richard; Girling, Alan J

    2015-01-30

    Stepped-wedge cluster randomised trials (SW-CRTs) are being used with increasing frequency in health service evaluation. Conventionally, these studies are cross-sectional in design with equally spaced steps, with an equal number of clusters randomised at each step and data collected at each and every step. Here we introduce several variations on this design and consider implications for power. One modification we consider is the incomplete cross-sectional SW-CRT, where the number of clusters varies at each step or where at some steps, for example implementation or transition periods, data are not collected. We show that the parallel CRT with staggered but balanced randomisation can be considered a special case of the incomplete SW-CRT, as can the parallel CRT with baseline measures. We also extend these designs to allow for multiple layers of clustering, for example, wards within a hospital. Building on results for complete designs, power and detectable difference are derived using a Wald test and obtaining the variance-covariance matrix of the treatment effect assuming a generalised linear mixed model. These variations are illustrated by several real examples. We recommend that whilst the impact of transition periods on power is likely to be small, where they are a feature of the design they should be incorporated. We also show examples in which the power of a SW-CRT increases as the intra-cluster correlation (ICC) increases and demonstrate that the impact of the ICC is likely to be smaller in a SW-CRT compared with a parallel CRT, especially where there are multiple levels of clustering. Finally, through this unified framework, the efficiency of the SW-CRT and the parallel CRT can be compared. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
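
    Power calculations of the kind described above need the design matrix and the variance of the treatment effect. A sketch for the simplest complete cross-sectional case, assuming the Hussey and Hughes (2007) closed-form variance (here `sigma2` is the variance of a cluster-period mean and `tau2` the between-cluster variance); the design and numbers are illustrative, not from the paper.

    ```python
    from math import erf, sqrt

    def sw_design(I, T):
        """Complete stepped wedge: I clusters cross to intervention one per
        step over T = I + 1 periods, with the first period all-control."""
        return [[1 if t > i else 0 for t in range(T)] for i in range(I)]

    def hussey_hughes_power(X, theta, sigma2, tau2):
        """Two-sided 5% power for treatment effect theta under the
        Hussey-Hughes linear mixed model for cross-sectional SW-CRTs."""
        I, T = len(X), len(X[0])
        U = sum(sum(row) for row in X)                      # total treated cells
        W = sum(sum(X[i][t] for i in range(I)) ** 2 for t in range(T))
        V = sum(sum(row) ** 2 for row in X)
        var = (I * sigma2 * (sigma2 + T * tau2)) / (
            (I * U - W) * sigma2 + (U ** 2 + I * T * U - T * W - I * V) * tau2)
        phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))    # standard normal CDF
        return phi(abs(theta) / sqrt(var) - 1.959964)
    ```

    With three clusters and four periods, `sw_design(3, 4)` yields the familiar staircase of 0/1 exposure indicators; feeding incomplete designs (cells with no data) into the same machinery is the paper's generalization.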

  9. Generalized framework for the parallel semantic segmentation of multiple objects and posterior manipulation

    DEFF Research Database (Denmark)

    Llopart, Adrian; Ravn, Ole; Andersen, Nils Axel

    2017-01-01

    The end-to-end approach presented in this paper deals with the recognition, detection, segmentation and grasping of objects, assuming no prior knowledge of the environment or the objects. The proposed pipeline is as follows: 1) usage of a trained Convolutional Neural Net (CNN) that recognizes up to 80 different classes of objects in real time and generates bounding boxes around them; 2) an algorithm to derive in parallel the pointclouds of said regions of interest (ROI); 3) eight different segmentation methods to remove background data and noise from the pointclouds and obtain a precise result...

  10. Reduced dose uncertainty in MRI-based polymer gel dosimetry using parallel RF transmission with multiple RF sources

    International Nuclear Information System (INIS)

    Sang-Young Kim; Jung-Hoon Lee; Jin-Young Jung; Do-Wan Lee; Seu-Ran Lee; Bo-Young Choe; Hyeon-Man Baek; Korea University of Science and Technology, Daejeon; Dae-Hyun Kim; Jung-Whan Min; Ji-Yeon Park

    2014-01-01

    In this work, we present the feasibility of using a parallel RF transmission method with multiple RF sources (MultiTransmit imaging) in polymer gel dosimetry. Image quality and B1 field homogeneity were statistically better with the MultiTransmit imaging method than with the conventional single-source RF transmission imaging method. In particular, the standard uncertainty of R2 was lower on the MultiTransmit images than on the conventional images. Furthermore, the MultiTransmit measurement showed improved dose resolution. Improved image quality and B1 homogeneity result in reduced dose uncertainty, thereby suggesting the feasibility of MultiTransmit MR imaging in gel dosimetry. (author)

  11. Is orthographic information from multiple parafoveal words processed in parallel: An eye-tracking study.

    Science.gov (United States)

    Cutter, Michael G; Drieghe, Denis; Liversedge, Simon P

    2017-08-01

    In the current study we investigated whether orthographic information available from 1 upcoming parafoveal word influences the processing of another parafoveal word. Across 2 experiments we used the boundary paradigm (Rayner, 1975) to present participants with an identity preview of the 2 words after the boundary (e.g., hot pan), a preview in which 2 letters were transposed between these words (e.g., hop tan), or a preview in which the same 2 letters were substituted (e.g., hob fan). We hypothesized that if these 2 words were processed in parallel in the parafovea then we may observe significant preview benefits for the condition in which the letters were transposed between words relative to the condition in which the letters were substituted. However, no such effect was observed, with participants fixating the words for the same amount of time in both conditions. This was the case both when the transposition was made between the final and first letter of the 2 words (e.g., hop tan as a preview of hot pan; Experiment 1) and when the transposition maintained within-word letter position (e.g., pit hop as a preview of hit pop; Experiment 2). The implications of these findings are considered in relation to serial and parallel lexical processing during reading. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Free light chains of immunoglobulins in the diagnosis and prognosis of multiple myeloma

    Directory of Open Access Journals (Sweden)

    N. V. Lyubimova

    2017-01-01

    Background: Analysis of free light chains of immunoglobulins (FLC) in the serum is an effective method in the diagnosis of multiple myeloma. Plasma cells produce two types of FLC: κ- and λ-FLC. FLC that are not incorporated into monoclonal intact immunoglobulins are released into circulation, and then are filtered and reabsorbed in the kidneys depending on their molecular weight. Circulating FLC commonly form homodimers, known as Bence-Jones protein, which is a biomarker of Bence-Jones multiple myeloma. According to the international guidelines, the κ/λ FLC ratio is an important diagnostic criterion of multiple myeloma. Aim: To evaluate the diagnostic and prognostic value of serum FLC in multiple myeloma patients. Materials and methods: We examined 118 patients with multiple myeloma, admitted to the Department of Hemoblastosis Chemotherapy of the N.N. Blokhin Russian Cancer Research Center from 2010 to 2016, and 68 healthy men and women. Serum concentrations of FLC were measured with an immunoturbidimetric method using the test systems Freelite Human Lambda and Freelite Human Kappa (Binding Site Inc.). Results: The levels of monoclonal κ- or λ-FLC in patients with G-, A-myeloma and Bence-Jones multiple myeloma were significantly higher than those in the control group (p < 0.005). The diagnostic sensitivity of quantification of FLC and their ratio was 87.3% and 89.8%, and in combination with the use of immune electrophoresis it was close to 100%. Analysis of progression-free survival and overall survival showed significant differences (p < 0.04) between the groups of patients according to their κ/λ FLC ratio. A basal κ/λ FLC ratio of less than 0.04 or more than 140 was a predictor of unfavorable outcome. Conclusion: The inclusion of the determination of serum FLC into the assessment plan of patients with suspected monoclonal gammopathy makes it possible to increase the diagnostic sensitivity of the available methods for paraprotein

  13. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    Directory of Open Access Journals (Sweden)

    Min-Kyu Kim

    2015-12-01

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit resolution after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
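
    The 1/√N noise scaling claimed for multiple sampling is easy to sanity-check with a toy Monte Carlo. The Gaussian read-noise model, the choice of N = 10 samplings and the trial counts below are illustrative assumptions; the measured 848.3 → 270.4 μV improvement in the abstract corresponds to a ratio of about 3.1, consistent with this scaling for roughly ten samplings.

    ```python
    import random

    random.seed(42)
    SIGMA = 848.3e-6        # single-sample rms noise from the abstract, in volts
    N = 10                  # illustrative number of samplings per pixel read

    def noisy_read():
        """One noisy sample of a zero-signal pixel (Gaussian noise model)."""
        return random.gauss(0.0, SIGMA)

    def multi_sample():
        """Average of N repeated samples of the same pixel value."""
        return sum(noisy_read() for _ in range(N)) / N

    def std(xs):
        m = sum(xs) / len(xs)
        return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

    single = [noisy_read() for _ in range(20000)]
    multi = [multi_sample() for _ in range(20000)]

    # Averaging N samples shrinks the rms noise by about 1/sqrt(N).
    ratio = std(single) / std(multi)
    ```

    With N = 10 the simulated ratio comes out close to √10 ≈ 3.16.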

  14. The Dynamics of Multiple Pair-Wise Collisions in a Chain for Designing Optimal Shock Amplifiers

    Directory of Open Access Journals (Sweden)

    Bryan Rodgers

    2009-01-01

    The major focus of this work is to examine the dynamics of velocity amplification through pair-wise collisions between multiple masses in a chain, in order to develop useful machines. For instance, low-cost machines based on this principle could be used for detailed, very-high-acceleration shock testing of MEMS devices. A theoretical basis for determining the number and mass of intermediate stages in such a velocity amplifier, based on simple rigid body mechanics, is proposed. The influence of mass ratios and the coefficient of restitution on the optimisation of the system is identified and investigated. In particular, two cases are examined: in the first, the velocity of the final mass in the chain (which would have the object under test mounted on it) is maximised by defining the ratio of adjacent masses according to a power law relationship; in the second, the energy transfer efficiency of the system is maximised by choosing the mass ratios such that all masses except the final mass come to rest following impact. Comparisons are drawn between both cases and the results are used in proposing design guidelines for optimal shock amplifiers. It is shown that for most practical systems, a shock amplifier with mass ratios based on a power law relationship is optimal and can easily yield velocity amplifications of a factor of 5-8 times. A prototype shock testing machine built using the above principles is briefly introduced.
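
    The rigid-body mechanics behind such an amplifier reduce to the standard restitution formula for an impact on a stationary mass. A minimal sketch of the power-law-mass-ratio case (the specific ratios and stage counts are illustrative, not the paper's optimised values):

    ```python
    def post_collision_velocity(m1, v1, m2, e):
        """Velocity of an initially stationary mass m2 after being struck by
        mass m1 moving at v1, with coefficient of restitution e."""
        return (1.0 + e) * m1 * v1 / (m1 + m2)

    def chain_amplification(n_stages, mass_ratio, e, v0=1.0):
        """Final velocity after a chain of pair-wise impacts in which each
        stage's mass is mass_ratio times the previous one (power-law chain)."""
        m, v = 1.0, v0
        for _ in range(n_stages):
            m_next = m * mass_ratio
            v = post_collision_velocity(m, v, m_next, e)
            m = m_next
        return v
    ```

    With e = 1 and each stage half the mass of the previous one, the per-stage gain is 2/(1 + 0.5) = 4/3, so seven stages give roughly a 7.5x amplification, inside the 5-8x range quoted above.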

  15. Supply Chain Contracts with Multiple Retailers in a Fuzzy Demand Environment

    Directory of Open Access Journals (Sweden)

    Shengju Sang

    2013-01-01

    This study investigates supply chain contracts with a supplier and multiple competing retailers in a fuzzy demand environment. The market demand is considered as a positive triangular fuzzy number. The models of centralized decision, return contract, and revenue-sharing contract are built by the method of fuzzy cut sets theory, and their optimal policies are also proposed. Finally, an example is given to illustrate and validate the models and conclusions. It is shown that the optimal total order quantity of the retailers fluctuates at the center of the fuzzy demand. With the rise of the number of retailers, the optimal order quantity and the fuzzy expected profit for each retailer will decrease, and the fuzzy expected profit for supplier will increase.
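
    The cut-set machinery referenced in the abstract starts from the α-cut of a triangular fuzzy number. A minimal sketch with a hypothetical demand figure (the contract models themselves are not reproduced here):

    ```python
    def alpha_cut(tri, alpha):
        """alpha-cut of a triangular fuzzy number (a, b, c): the closed
        interval of values whose membership degree is at least alpha."""
        a, b, c = tri
        return (a + alpha * (b - a), c - alpha * (c - b))

    demand = (80.0, 100.0, 130.0)     # hypothetical fuzzy market demand
    lo, hi = alpha_cut(demand, 0.5)   # interval at membership level 0.5
    ```

    At α = 1 the cut collapses to the peak value 100, and at α = 0 it spans the full support [80, 130]; contract quantities are then optimised over these interval families.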

  16. Performance analysis of a threshold-based parallel multiple beam selection scheme for WDM-based systems for Gamma-Gamma distributions

    KAUST Repository

    Nam, Sung Sik; Yoon, Chang Seok; Alouini, Mohamed-Slim

    2017-01-01

    In this paper, we statistically analyze the performance of a threshold-based parallel multiple beam selection scheme (TPMBS) for Free-space optical (FSO) based system with wavelength division multiplexing (WDM) in cases where a pointing error has

  17. Smoking and increased risk of multiple sclerosis: parallel trends in the sex ratio reinforce the evidence

    DEFF Research Database (Denmark)

    Palacios, Natalia; Alonso, Alvaro; Brønnum-Hansen, Henrik

    2011-01-01

    Smoking behavior in industrialized nations has changed markedly over the second half of the 20th century, with diverging patterns in male and female smoking rates. We examined whether the female/male incidence of multiple sclerosis (MS) changed concomitantly with smoking, as would be expected if ...

  18. Mixed-time parallel evolution in multiple quantum NMR experiments: sensitivity and resolution enhancement in heteronuclear NMR

    International Nuclear Information System (INIS)

    Ying Jinfa; Chill, Jordan H.; Louis, John M.; Bax, Ad

    2007-01-01

    A new strategy is demonstrated that simultaneously enhances sensitivity and resolution in three- or higher-dimensional heteronuclear multiple quantum NMR experiments. The approach, referred to as mixed-time parallel evolution (MT-PARE), utilizes evolution of chemical shifts of the spins participating in the multiple quantum coherence in parallel, thereby reducing signal losses relative to sequential evolution. The signal in a given PARE dimension, t1, is of a non-decaying constant-time nature for a duration that depends on the length of t2, and vice versa, prior to the onset of conventional exponential decay. Line shape simulations for the 1H-15N PARE indicate that this strategy significantly enhances both sensitivity and resolution in the indirect 1H dimension, and that the unusual signal decay profile results in acceptable line shapes. Incorporation of the MT-PARE approach into a 3D HMQC-NOESY experiment for measurement of HN-HN NOEs in KcsA in SDS micelles at 50 °C was found to increase the experimental sensitivity by a factor of 1.7±0.3 with a concomitant resolution increase in the indirectly detected 1H dimension. The method is also demonstrated for a situation in which homonuclear 13C-13C decoupling is required while measuring weak H3'-2'OH NOEs in an RNA oligomer.

  19. Power Factor Correction Capacitors for Multiple Parallel Three-Phase ASD Systems

    DEFF Research Database (Denmark)

    Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    Today’s three-phase Adjustable Speed Drive (ASD) systems still employ Diode Rectifiers (DRs) and Silicon-Controlled Rectifiers (SCRs) as the front-end converters due to structural and control simplicity, small volume, low cost, and high reliability. However, the uncontrollable DRs and phase-controllable SCRs bring side-effects by injecting high harmonics into the grid, which will degrade the system performance by lowering the overall efficiency and overheating the system if they remain uncontrolled or unattenuated. For multiple ASD systems, certain harmonics in the entire system can be mitigated... To improve the power factor, passive capacitors can be installed, which however can trigger system resonance. Hence, this paper analyzes the resonance issues in multiple ASD systems with power factor correction capacitors. Potential damping solutions are summarized. Simulations are carried out, while laboratory tests...

  20. Prognostic value of free light chains lambda and kappa in early multiple sclerosis.

    Science.gov (United States)

    Voortman, Margarete M; Stojakovic, Tatjana; Pirpamer, Lukas; Jehna, Margit; Langkammer, Christian; Scharnagl, Hubert; Reindl, Markus; Ropele, Stefan; Seifert-Held, Thomas; Archelos, Juan-Jose; Fuchs, Siegrid; Enzinger, Christian; Fazekas, Franz; Khalil, Michael

    2017-10-01

    Cerebrospinal fluid (CSF) immunoglobulin free light chains (FLC) have been suggested as a quantitative alternative to oligoclonal bands (OCB) in the diagnosis of multiple sclerosis (MS). However, little is known on their role in predicting clinical and paraclinical disease progression, particularly in early stages. To assess the prognostic value of FLC in OCB-positive patients with clinically isolated syndrome (CIS) suggestive of MS and early MS, we determined FLC kappa (KFLC) and lambda (LFLC) in CSF and serum by nephelometry in 61 patients (CIS (n = 48), relapsing-remitting multiple sclerosis (n = 13)) and 60 non-inflammatory neurological controls. Median clinical follow-up time in CIS was 4.8 years (interquartile range (IQR), 1.5-6.5 years). Patients underwent 3T magnetic resonance imaging (MRI) at baseline and follow-up (median time interval, 2.2 years; IQR, 1.0-3.7 years) to determine T2 lesion load (T2LL) and percent brain volume change (PBVC). CSF FLC were significantly increased in CIS/MS compared to controls. The KFLC/LFLC CSF ratio predicted conversion to clinically definite multiple sclerosis (CDMS) (hazard ratio (HR) = 2.89; 95% confidence interval (CI) = 1.17-7.14; p < 0.05). No correlations were found for FLC variables with T2LL or PBVC. Our study confirms increased intrathecal synthesis of FLC in CIS/MS, which supports their diagnostic contribution. The KFLC/LFLC CSF ratio appears to have prognostic value in CIS beyond OCB.

  1. The evolution of multiple isotypic IgM heavy chain genes in the shark.

    Science.gov (United States)

    Lee, Victor; Huang, Jing Li; Lui, Ming Fai; Malecek, Karolina; Ohta, Yuko; Mooers, Arne; Hsu, Ellen

    2008-06-01

    The IgM H chain gene organization of cartilaginous fishes consists of 15-200 miniloci, each with a few gene segments (VH-D1-D2-JH) and one C gene. This is a gene arrangement ancestral to the complex IgH locus that exists in all other vertebrate classes. To understand the molecular evolution of this system, we studied the nurse shark, which has relatively few loci, and characterized the IgH isotypes for organization, functionality, and the somatic diversification mechanisms that act upon them. Gene numbers differ slightly between individuals (approximately 15), but five active IgM subclasses are always present. Each gene undergoes rearrangement that is strictly confined within the minilocus; in B cells there is no interaction between adjacent loci located ≥120 kb apart. Without combinatorial events, the shark IgM H chain repertoire is based on junctional diversity and, subsequently, somatic hypermutation. We suggest that the significant contribution by junctional diversification reflects the selected novelty introduced by RAG in the early vertebrate ancestor, whereas combinatorial diversity coevolved with the complex translocon organization. Moreover, unlike other cartilaginous fishes, there are no germline-joined VDJ at any nurse shark μ locus, and we suggest that such genes, when functional, are species-specific and may have specialized roles. With an entire complement of IgM genes available for the first time, phylogenetic analyses were performed to examine how the multiple Ig loci evolved. We found that all domains changed at comparable rates, but VH appears to be under strong positive selection for increased amino acid sequence diversity, and surprisingly, so does Cμ2.

  2. Stochastic optimization of a multi-feedstock lignocellulosic-based bioethanol supply chain under multiple uncertainties

    International Nuclear Information System (INIS)

    Osmani, Atif; Zhang, Jun

    2013-01-01

    An integrated multi-feedstock (i.e. switchgrass and crop residue) lignocellulosic-based bioethanol supply chain is studied under jointly occurring uncertainties in switchgrass yield, crop residue purchase price, bioethanol demand and sales price. A two-stage stochastic mathematical model is proposed to maximize expected profit by optimizing the strategic and tactical decisions. A case study based on ND (North Dakota) state in the U.S. demonstrates that in a stochastic environment it is cost effective to meet 100% of ND's annual gasoline demand from bioethanol by using switchgrass as a primary and crop residue as a secondary biomass feedstock. Although results show that the financial performance is degraded as variability of the uncertain parameters increases, the proposed stochastic model increasingly outperforms the deterministic model under uncertainties. The locations of biorefineries (i.e. first-stage integer variables) are insensitive to the uncertainties. Sensitivity analysis shows that “mean” value of stochastic parameters has a significant impact on the expected profit and optimal values of first-stage continuous variables. Increase in level of mean ethanol demand and mean sale price results in higher bioethanol production. When mean switchgrass yield is at low level and mean crop residue price is at high level, all the available marginal land is used for switchgrass cultivation. - Highlights: • Two-stage stochastic MILP model for maximizing profit of a multi-feedstock lignocellulosic-based bioethanol supply chain. • Multiple uncertainties in switchgrass yield, crop residue purchase price, bioethanol demand, and bioethanol sale price. • Proposed stochastic model outperforms the traditional deterministic model under uncertainties. • Stochastic parameters significantly affect marginal land allocation for switchgrass cultivation and bioethanol production. • Location of biorefineries is found to be insensitive to the stochastic environment
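
    The two-stage structure described above (strategic first-stage decisions such as biorefinery capacity, scenario-dependent second-stage recourse) can be illustrated by brute-force scenario enumeration. All numbers below are hypothetical and the model is far simpler than the paper's MILP:

    ```python
    # Each scenario: (probability, switchgrass yield factor, ethanol price $/gal)
    scenarios = [
        (0.3, 0.8, 2.2),
        (0.5, 1.0, 2.5),
        (0.2, 1.2, 2.9),
    ]

    capacities = [20, 40, 60]      # candidate biorefinery capacities (Mgal/yr)
    build_cost_per_unit = 1.5      # annualized cost, $M per Mgal of capacity
    base_supply = 50               # biomass-limited output at nominal yield

    def expected_profit(cap):
        """First-stage cost plus probability-weighted second-stage profit."""
        profit = -build_cost_per_unit * cap
        for prob, yield_factor, price in scenarios:
            produced = min(cap, base_supply * yield_factor)  # recourse decision
            profit += prob * price * produced
        return profit

    best = max(capacities, key=expected_profit)
    ```

    Here the middle capacity wins: over-building pays off only in the high-yield scenario, while under-building forfeits revenue in every scenario, which is the trade-off the stochastic program formalizes.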

  3. Early Parallel Activation of Semantics and Phonology in Picture Naming: Evidence from a Multiple Linear Regression MEG Study.

    Science.gov (United States)

    Miozzo, Michele; Pulvermüller, Friedemann; Hauk, Olaf

    2015-10-01

    The time course of brain activation during word production has become an area of increasingly intense investigation in cognitive neuroscience. The predominant view has been that semantic and phonological processes are activated sequentially, at about 150 and 200-400 ms after picture onset. Although evidence from prior studies has been interpreted as supporting this view, these studies were arguably not ideally suited to detect early brain activation of semantic and phonological processes. We here used a multiple linear regression approach to magnetoencephalography (MEG) analysis of picture naming in order to investigate early effects of variables specifically related to visual, semantic, and phonological processing. This was combined with distributed minimum-norm source estimation and region-of-interest analysis. Brain activation associated with visual image complexity appeared in occipital cortex at about 100 ms after picture presentation onset. At about 150 ms, semantic variables became physiologically manifest in left frontotemporal regions. In the same latency range, we found an effect of phonological variables in the left middle temporal gyrus. Our results demonstrate that multiple linear regression analysis is sensitive to early effects of multiple psycholinguistic variables in picture naming. Crucially, our results suggest that access to phonological information might begin in parallel with semantic processing around 150 ms after picture onset. © The Author 2014. Published by Oxford University Press.
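
    The regression approach described above fits, at each time point, one multiple linear regression of the measured response across items on the psycholinguistic predictors. A toy mass-univariate sketch with synthetic data (the predictor set, effect size and latency are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_times = 200, 50
    # Design matrix: intercept plus three hypothetical item-level predictors
    # (e.g. visual complexity, a semantic variable, a phonological variable).
    X = np.column_stack([np.ones(n_trials), rng.standard_normal((n_trials, 3))])

    # Synthetic "MEG" data: predictor 2 drives amplitude only at time index 20.
    true_beta = np.zeros((4, n_times))
    true_beta[2, 20] = 3.0
    Y = X @ true_beta + rng.standard_normal((n_trials, n_times))

    # One multiple linear regression per time point (mass-univariate analysis).
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    peak_time = int(np.abs(beta[2]).argmax())   # latency of predictor 2's effect
    ```

    Plotting each predictor's coefficient time course is what lets such studies date when visual, semantic, and phonological variables first become physiologically manifest.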

  4. Single and multiple objective biomass-to-biofuel supply chain optimization considering environmental impacts

    Science.gov (United States)

    Valles Sosa, Claudia Evangelina

    respond to these new challenges, the Modified Multiple Objective Evolutionary Algorithm for the design optimization of a biomass-to-biorefinery logistic system that considers the simultaneous maximization of the total profit and the minimization of three environmental impacts is presented. Sustainability balances economic, social and environmental goals and objectives. Several works in the literature have considered economic and environmental objectives for this supply chain problem; however, there is a lack of research on the social aspect of a sustainable logistics system. This work proposes a methodology to integrate social aspect assessment, based on employment creation. Finally, most of the assessment methodologies considered in the literature only contemplate deterministic values, whereas in realistic situations uncertainties in the supply chain are present. In this work, Value-at-Risk, an advanced risk measure commonly used in portfolio optimization, is included to consider the uncertainties in biofuel prices, among others.
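The Value-at-Risk measure mentioned above reduces, in its simplest empirical form, to a lower-tail quantile of simulated profits. The sketch below assumes a plain empirical quantile without interpolation; the profit figures and the `value_at_risk` helper are hypothetical, not the dissertation's model.

```python
# Value-at-Risk (VaR) sketch: given simulated profit scenarios, the 5% VaR
# is the profit level that is undershot with only 5% probability.
# Numbers are illustrative.

def value_at_risk(profits, alpha=0.05):
    """Return the alpha-quantile of the profit distribution (lower tail),
    using a simple empirical quantile with no interpolation."""
    ordered = sorted(profits)
    k = max(0, int(alpha * len(ordered)) - 1)
    return ordered[k]

profits = [120, 80, -40, 60, 200, -10, 90, 150, 30, 70,
           110, 95, -25, 45, 170, 55, 85, 140, 20, 65]
print(value_at_risk(profits, alpha=0.05))
```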

  5. Massive parallelization of a 3D finite difference electromagnetic forward solution using domain decomposition methods on multiple CUDA enabled GPUs

    Science.gov (United States)

    Schultz, A.

    2010-12-01

    3D forward solvers lie at the core of inverse formulations used to image the variation of electrical conductivity within the Earth's interior. This property is associated with variations in temperature, composition, phase, presence of volatiles, and in specific settings, the presence of groundwater, geothermal resources, oil/gas or minerals. The high cost of 3D solutions has been a stumbling block to wider adoption of 3D methods. Parallel algorithms for modeling frequency domain 3D EM problems have not achieved wide scale adoption, with emphasis on fairly coarse grained parallelism using MPI and similar approaches. The communications bandwidth as well as the latency required to send and receive network communication packets is a limiting factor in implementing fine grained parallel strategies, inhibiting wide adoption of these algorithms. Leading Graphics Processor Unit (GPU) companies now produce GPUs with hundreds of GPU processor cores per die. The footprint, in silicon, of the GPU's restricted instruction set is much smaller than the general purpose instruction set required of a CPU. Consequently, the density of processor cores on a GPU can be much greater than on a CPU. GPUs also have local memory, registers and high speed communication with host CPUs, usually through PCIe type interconnects. The extremely low cost and high computational power of GPUs provides the EM geophysics community with an opportunity to achieve fine grained (i.e. massive) parallelization of codes on low cost hardware. The current generation of GPUs (e.g. NVidia Fermi) provides 3 billion transistors per chip die, with nearly 500 processor cores and up to 6 GB of fast (DDR5) GPU memory. This latest generation of GPU supports fast hardware double precision (64 bit) floating point operations of the type required for frequency domain EM forward solutions. Each Fermi GPU board can sustain nearly 1 TFLOP in double precision, and multiple boards can be installed in the host computer system. 
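The domain-decomposition principle behind such multi-GPU solvers — partition the grid, exchange one-cell halos between subdomains, update each subdomain independently — can be illustrated with a serial 1-D stand-in. The Jacobi-style update and grid values below are illustrative, not the EM solver's actual finite-difference stencil.

```python
# Domain-decomposition sketch: one explicit finite-difference (Jacobi)
# step computed (a) on the full 1-D grid and (b) on two subdomains with a
# one-cell halo exchanged between them. For a single step the results
# agree exactly; multi-step runs re-exchange halos between steps.

def jacobi_step(u):
    """One Jacobi smoothing step with fixed boundary values."""
    return ([u[0]]
            + [0.5 * (u[i - 1] + u[i + 1]) for i in range(1, len(u) - 1)]
            + [u[-1]])

u = [float(i * i % 7) for i in range(16)]  # arbitrary grid values

# (a) global update
global_step = jacobi_step(u)

# (b) split into two subdomains, each padded with its neighbour's halo cell
mid = len(u) // 2
left = u[:mid] + [u[mid]]          # halo cell on the right
right = [u[mid - 1]] + u[mid:]     # halo cell on the left
left_new = jacobi_step(left)[:-1]  # drop halo after the update
right_new = jacobi_step(right)[1:]
decomposed = left_new + right_new

print(decomposed == global_step)
```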

  6. Multiple imputation by chained equations for systematically and sporadically missing multilevel data.

    Science.gov (United States)

    Resche-Rigon, Matthieu; White, Ian R

    2018-06-01

    In multilevel settings such as individual participant data meta-analysis, a variable is 'systematically missing' if it is wholly missing in some clusters and 'sporadically missing' if it is partly missing in some clusters. Previously proposed methods to impute incomplete multilevel data handle either systematically or sporadically missing data, but frequently both patterns are observed. We describe a new multiple imputation by chained equations (MICE) algorithm for multilevel data with arbitrary patterns of systematically and sporadically missing variables. The algorithm is described for multilevel normal data but can easily be extended for other variable types. We first propose two methods for imputing a single incomplete variable: an extension of an existing method and a new two-stage method which conveniently allows for heteroscedastic data. We then discuss the difficulties of imputing missing values in several variables in multilevel data using MICE, and show that even the simplest joint multilevel model implies conditional models which involve cluster means and heteroscedasticity. However, a simulation study finds that the proposed methods can be successfully combined in a multilevel MICE procedure, even when cluster means are not included in the imputation models.
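The chained-equations cycle itself can be sketched for the simplest case: two continuous variables, single level, deterministic regression imputation (a real MICE implementation draws from the predictive distribution, and the paper's methods add random effects and heteroscedasticity on top of this loop). All data below are made up.

```python
# Minimal chained-equations (MICE-style) sketch for two continuous
# variables: alternate the conditional models y|x and x|y until the
# imputed values stabilise. Deterministic regression imputation only.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# Complete data would follow y = 2x + 1; None marks a missing value.
x = [1.0, 2.0, 3.0, 4.0, None, 6.0]
y = [3.0, 5.0, None, 9.0, 11.0, 13.0]

# Start from mean imputation, then cycle the conditional models.
x_obs = [v for v in x if v is not None]
y_obs = [v for v in y if v is not None]
x_f = [v if v is not None else sum(x_obs) / len(x_obs) for v in x]
y_f = [v if v is not None else sum(y_obs) / len(y_obs) for v in y]

for _ in range(20):  # chained equations: y|x, then x|y
    a, b = fit_line([x_f[i] for i in range(6) if y[i] is not None],
                    [y_f[i] for i in range(6) if y[i] is not None])
    y_f = [y[i] if y[i] is not None else a + b * x_f[i] for i in range(6)]
    c, d = fit_line([y_f[i] for i in range(6) if x[i] is not None],
                    [x_f[i] for i in range(6) if x[i] is not None])
    x_f = [x[i] if x[i] is not None else c + d * y_f[i] for i in range(6)]

print(round(y_f[2], 3), round(x_f[4], 3))  # converge toward 7 and 5
```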

  7. The use of coded PCR primers enables high-throughput sequencing of multiple homolog amplification products by 454 parallel sequencing.

    Directory of Open Access Journals (Sweden)

    Jonas Binladen

    2007-02-01

    Full Text Available The invention of the Genome Sequence 20 DNA Sequencing System (454 parallel sequencing platform) has enabled the rapid and high-volume production of sequence data. Until now, however, individual emulsion PCR (emPCR) reactions and subsequent sequencing runs have been unable to combine template DNA from multiple individuals, as homologous sequences cannot be subsequently assigned to their original sources. We use conventional PCR with 5'-nucleotide-tagged primers to generate homologous DNA amplification products from multiple specimens, followed by sequencing through the high-throughput Genome Sequence 20 DNA Sequencing System (GS20, Roche/454 Life Sciences). Each DNA sequence is subsequently traced back to its individual source through 5' tag analysis. We demonstrate that this new approach enables the assignment of virtually all the generated DNA sequences to the correct source once sequencing anomalies are accounted for (mis-assignment rate < 0.4%). Therefore, the method enables accurate sequencing and assignment of homologous DNA sequences from multiple sources in a single high-throughput GS20 run. We observe a bias in the distribution of the differently tagged primers that is dependent on the 5' nucleotide of the tag. In particular, primers 5'-labelled with a cytosine are heavily overrepresented among the final sequences, while those 5'-labelled with a thymine are strongly underrepresented. A weaker bias also exists with regard to the distribution of the sequences as sorted by the second nucleotide of the dinucleotide tags. As the results are based on a single GS20 run, the general applicability of the approach requires confirmation. However, our experiments demonstrate that 5' primer tagging is a useful method in which the sequencing power of the GS20 can be applied to PCR-based assays of multiple homologous PCR products. 
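The 5'-tag assignment step amounts to a prefix lookup followed by tag stripping; the tags, specimen names and reads below are invented for illustration.

```python
# Sketch of 5'-tag demultiplexing: each read starts with a short
# nucleotide tag identifying its source specimen; reads whose prefix
# matches no tag are left unassigned. Tags and reads are made up.

TAGS = {"AC": "specimen_1", "GT": "specimen_2", "CA": "specimen_3"}

def demultiplex(reads, tags):
    assigned = {name: [] for name in tags.values()}
    unassigned = []
    for read in reads:
        source = tags.get(read[:2])            # 5' dinucleotide tag
        if source is None:
            unassigned.append(read)
        else:
            assigned[source].append(read[2:])  # strip tag before analysis
    return assigned, unassigned

reads = ["ACGGAT", "GTTTCA", "CACGTA", "TTGGCC", "ACAAAA"]
assigned, unassigned = demultiplex(reads, TAGS)
print({k: len(v) for k, v in assigned.items()}, len(unassigned))
```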
The new approach will be of value to a broad range of research areas, such as those of comparative genomics, complete mitochondrial

  8. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    Science.gov (United States)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. 
The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU
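A serial sketch of multiple-flow-direction accumulation — the computation the paper maps onto CUDA threads — distributes each cell's accumulated flow to all lower neighbours in proportion to the elevation drop. The tiny DEM below is illustrative and assumed depression-free (i.e. already preprocessed).

```python
# Multiple-flow-direction (MFD) accumulation sketch on a 3x3 DEM using
# 4-neighbours. Cells are processed from highest to lowest so every
# upslope contribution is finalised before a cell distributes its flow.

DEM = [
    [9.0, 8.0, 7.0],
    [8.0, 6.0, 4.0],
    [7.0, 5.0, 1.0],
]
ROWS, COLS = 3, 3

def neighbours(r, c):
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= r + dr < ROWS and 0 <= c + dc < COLS:
            yield r + dr, c + dc

acc = [[1.0] * COLS for _ in range(ROWS)]  # each cell contributes itself
order = sorted(((DEM[r][c], r, c) for r in range(ROWS) for c in range(COLS)),
               reverse=True)
for h, r, c in order:
    # slope-weighted split of this cell's accumulation among lower cells
    drops = {(nr, nc): h - DEM[nr][nc]
             for nr, nc in neighbours(r, c) if DEM[nr][nc] < h}
    total = sum(drops.values())
    for (nr, nc), d in drops.items():
        acc[nr][nc] += acc[r][c] * d / total

print(round(acc[2][2], 3))  # outlet collects all 9 cells' contributions
```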

  9. Dynamic modeling and hierarchical compound control of a novel 2-DOF flexible parallel manipulator with multiple actuation modes

    Science.gov (United States)

    Liang, Dong; Song, Yimin; Sun, Tao; Jin, Xueying

    2018-03-01

    This paper addresses the problem of rigid-flexible coupling dynamic modeling and active control of a novel flexible parallel manipulator (PM) with multiple actuation modes. First, based on flexible multi-body dynamics theory, the rigid-flexible coupling dynamic model (RFDM) of the system is developed via the augmented Lagrangian multipliers approach. For completeness, the mathematical models of the permanent magnet synchronous motor (PMSM) and the piezoelectric transducer (PZT) are further established and integrated with the RFDM of the mechanical system to formulate the electromechanical coupling dynamic model (ECDM). To achieve trajectory tracking and vibration suppression, a hierarchical compound control strategy is presented: a proportional-differential (PD) feedback controller realizes trajectory tracking of the end-effector, while a strain and strain-rate feedback (SSRF) controller suppresses the vibration of the flexible links using the PZT. Furthermore, the stability of the control algorithm is demonstrated using Lyapunov stability theory. Finally, two simulation case studies illustrate the effectiveness of the proposed approach. The results indicate that, under the redundant actuation mode, the hierarchical compound control strategy guarantees that the flexible PM achieves singularity-free motion and vibration attenuation within the task workspace simultaneously. The systematic methodology proposed in this study can be conveniently extended to the dynamic modeling and efficient controller design of other flexible PMs, especially the emerging ones with multiple actuation modes.
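The PD layer of such a hierarchical scheme can be illustrated on a unit-mass double integrator, a stand-in for the rigid dynamics; the gains and step setpoint are illustrative assumptions, and the flexible-link/SSRF layer of the paper is omitted.

```python
# PD trajectory-tracking sketch on a double integrator (unit mass):
# u = Kp*(target - q) - Kd*q'. Gains chosen for heavy damping; explicit
# Euler with a small step for the toy simulation.

KP, KD = 40.0, 12.0
DT, STEPS = 0.001, 8000   # 8 seconds of simulated time
target = 1.0

q, qd = 0.0, 0.0          # position, velocity
for _ in range(STEPS):
    u = KP * (target - q) - KD * qd  # PD feedback force
    qd += u * DT                     # unit-mass dynamics: q'' = u
    q += qd * DT

print(round(q, 4), round(qd, 4))    # settles at the setpoint, at rest
```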

  10. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    Science.gov (United States)

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing has resulted in a shortage of efficient ultra-large biological sequence alignment approaches capable of coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g. files larger than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on DNA and protein large-scale data sets, each more than 1 GB, showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences, shows extremely high memory efficiency, and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure, and its open-source code and datasets are available at http://lab.malab.cn/soft/halign.
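Beneath any MSA tool sits pairwise alignment; the generic Needleman-Wunsch scoring recursion below (match +1, mismatch/gap -1) is a building-block sketch, not HAlign-II's actual centre-star/Spark implementation.

```python
# Needleman-Wunsch global alignment score by dynamic programming:
# dp[i][j] is the best score aligning the first i characters of `a`
# with the first j characters of `b`.

def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap          # leading gaps in b
    for j in range(1, cols):
        dp[0][j] = j * gap          # leading gaps in a
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]

print(nw_score("GATTACA", "GATTACA"), nw_score("GATTACA", "GCATGCU"))
```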

  11. Multiple and Periodic Measurement of RBC Aggregation and ESR in Parallel Microfluidic Channels under On-Off Blood Flow Control

    Directory of Open Access Journals (Sweden)

    Yang Jun Kang

    2018-06-01

    Full Text Available Red blood cell (RBC aggregation causes to alter hemodynamic behaviors at low flow-rate regions of post-capillary venules. Additionally, it is significantly elevated in inflammatory or pathophysiological conditions. In this study, multiple and periodic measurements of RBC aggregation and erythrocyte sedimentation rate (ESR are suggested by sucking blood from a pipette tip into parallel microfluidic channels, and quantifying image intensity, especially through single experiment. Here, a microfluidic device was prepared from a master mold using the xurography technique rather than micro-electro-mechanical-system fabrication techniques. In order to consider variations of RBC aggregation in microfluidic channels due to continuous ESR in the conical pipette tip, two indices (aggregation index (AI and erythrocyte-sedimentation-rate aggregation index (EAI are evaluated by using temporal variations of microscopic, image-based intensity. The proposed method is employed to evaluate the effect of hematocrit and dextran solution on RBC aggregation under continuous ESR in the conical pipette tip. As a result, EAI displays a significantly linear relationship with modified conventional ESR measurement obtained by quantifying time constants. In addition, EAI varies linearly within a specific concentration of dextran solution. In conclusion, the proposed method is able to measure RBC aggregation under continuous ESR in the conical pipette tip. Furthermore, the method provides multiple data of RBC aggregation and ESR through a single experiment. A future study will involve employing the proposed method to evaluate biophysical properties of blood samples collected from cardiovascular diseases.

  12. Cloud-Coffee: implementation of a parallel consistency-based multiple alignment algorithm in the T-Coffee package and its benchmarking on the Amazon Elastic-Cloud.

    Science.gov (United States)

    Di Tommaso, Paolo; Orobitg, Miquel; Guirado, Fernando; Cores, Fernado; Espinosa, Toni; Notredame, Cedric

    2010-08-01

    We present the first parallel implementation of the T-Coffee consistency-based multiple aligner. We benchmark it on the Amazon Elastic Cloud (EC2) and show that the parallelization procedure is reasonably effective. We also conclude that for a web server with moderate usage (10K hits/month) the cloud provides a cost-effective alternative to in-house deployment. T-Coffee is a freeware open source package available from http://www.tcoffee.org/homepage.html

  13. Key conditions for successful value chain partnerships : A multiple case study in Ethiopia

    NARCIS (Netherlands)

    S. Drost (Sarah); J.C.A.C. van Wijk (Jeroen); F. Mandefro (Fenta)

    2012-01-01

    This paper explores the black box of value chain partnerships, by showing how these partnerships can facilitate institutional change that is needed to include smallholder producers and small- and medium-sized enterprises into (global) food value chains. It draws on agricultural value

  14. Value of the free light chain analysis in the clinical evaluation of response in multiple myeloma patients receiving anti-myeloma therapy

    DEFF Research Database (Denmark)

    Toftmann Hansen, Charlotte; Pedersen, Per T.; Jensen, Bo Amdi

    Value of the free light chain analysis in the clinical evaluation of response in multiple myeloma patients receiving anti-myeloma therapy.

  15. Photo-orientation of azobenzene side chain polymers parallel or perpendicular to the polarization of red HeNe light

    International Nuclear Information System (INIS)

    Kempe, Christian; Rutloh, Michael; Stumpe, Joachim

    2003-01-01

    The mechanism of the light-induced orientation process of azobenzene-containing polymers caused by irradiation with linearly polarized red light is investigated. This process is surprising because there is almost no absorption at 633 nm. Depending on the photochemical pre-treatment and the exposure time, the azobenzene moieties can undergo two different orientation processes resulting in either a parallel or a perpendicular orientation with respect to the electric field vector of the incident light. The fast orientation of the photochromic groups with their long axis in the direction of the light polarization requires a photochemical pre-treatment in which non-polarized UV light generates Z-isomers. Due to this procedure the film becomes 'photochemically activated' for the subsequent polarized irradiation with red light. But on continued exposure a second, much slower reorientation process occurs which establishes an orientation of the azobenzene groups perpendicular to the electric field vector. The fast mechanism is probably caused by an angle-selective photo-isomerization of the Z-isomers to the E-isomers, while the subsequent slow reorientation process is caused by the well-known conventional photo-orientation taking place via the accumulation of a number of photoselection steps and the rotational diffusion minimizing the absorbance of the E-isomer. This process occurs in the steady state but at this wavelength with a very small concentration of Z-isomers. The competing mechanisms take place in the same polymer film under almost identical irradiation conditions, differing only in the actual concentration of the Z-isomers

  16. The multiplicity of dehydrogenases in the electron transport chain of plant mitochondria

    DEFF Research Database (Denmark)

    Rasmusson, Allan G; Geisler, Daniela A; Møller, Ian Max

    2008-01-01

    The electron transport chain in mitochondria of different organisms contains a mixture of common and specialised components. The specialised enzymes form branches to the universal electron path, especially at the level of ubiquinone, and allow the chain to adjust to different cellular and metabolic requirements. In plants, specialised components have been known for a long time. However, recently, the known number of plant respiratory chain dehydrogenases has increased, including both components specific to plants and those with mammalian counterparts. This review will highlight the novel branches and their consequences for the understanding of electron transport and redundancy of electron paths.

  17. Centrifugo-pneumatic multi-liquid aliquoting - parallel aliquoting and combination of multiple liquids in centrifugal microfluidics.

    Science.gov (United States)

    Schwemmer, F; Hutzenlaub, T; Buselmeier, D; Paust, N; von Stetten, F; Mark, D; Zengerle, R; Kosse, D

    2015-08-07

    The generation of mixtures with precisely metered volumes is essential for reproducible automation of laboratory workflows. Splitting a given liquid into well-defined metered sub-volumes, so-called aliquoting, has been frequently demonstrated on centrifugal microfluidics. However, so far no solution exists for assays that require simultaneous aliquoting of multiple, different liquids and the subsequent pairwise combination of aliquots with full fluidic separation before combination. Here, we introduce centrifugo-pneumatic multi-liquid aliquoting, designed for parallel aliquoting and pairwise combination of multiple liquids. All pumping and aliquoting steps are based on a combination of centrifugal and pneumatic forces. The pneumatic forces are provided intrinsically by centrifugal transport of the assay liquids into dead-end chambers to compress the enclosed air. As an example, we demonstrate simultaneous aliquoting of (1) a common assay reagent into twenty 5 μl aliquots and (2) five different sample liquids, each into four aliquots of 5 μl. Subsequently, the reagent and sample aliquots are simultaneously transported and combined into twenty collection chambers. All coefficients of variation for metered volumes were between 0.4%-1.0% for intra-run variations and 0.5%-1.2% for inter-run variations. The aliquoting structure is compatible with common assay reagents with a wide range of liquid and material properties, demonstrated here for contact angles between 20° and 60°, densities between 789 and 1855 kg m⁻³ and viscosities between 0.89 and 4.1 mPa s. The centrifugo-pneumatic multi-liquid aliquoting is implemented as a passive fluidic structure in a single fluidic layer. Fabrication is compatible with scalable fabrication technologies such as injection molding or thermoforming and does not require any additional fabrication steps such as hydrophilic or hydrophobic coatings or integration of active valves.
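The intra-run figures above are coefficients of variation (CV = sample standard deviation over mean); a quick check with made-up volume readings around the 5 μl target:

```python
# Coefficient-of-variation sketch, matching the style of the paper's
# 0.4%-1.0% intra-run figures. The readings below are invented.
from math import sqrt

def cv_percent(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return 100.0 * sqrt(var) / mean

volumes_ul = [5.02, 4.98, 5.01, 4.99, 5.03, 4.97, 5.00, 5.00]
print(round(cv_percent(volumes_ul), 3))  # CV of these readings, in percent
```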

  18. In vitro aggregation behavior of a non-amyloidogenic λ light chain dimer deriving from U266 multiple myeloma cells.

    Directory of Open Access Journals (Sweden)

    Paolo Arosio

    Full Text Available Excessive production of monoclonal light chains due to multiple myeloma can induce aggregation-related disorders, such as light chain amyloidosis (AL) and light chain deposition diseases (LCDD). In this work, we produce a non-amyloidogenic IgE λ light chain dimer from the human mammalian cell line U266, which originated from a patient suffering from multiple myeloma, and we investigate the effect of several physicochemical parameters on the in vitro stability of this protein. The dimer is stable in physiological conditions and aggregation is observed only when strongly denaturing conditions are applied (acidic pH with salt at large concentration, or heating at the melting temperature Tm at pH 7.4). The produced aggregates are spherical, amorphous oligomers. Despite the larger β-sheet content of such oligomers with respect to the native state, they do not bind Congo Red or ThT. The inability to obtain fibrils from the light chain dimer suggests that the occurrence of amyloidosis in patients requires the presence of the light chain fragment in monomer form, while the dimer can form only amorphous oligomers or amorphous deposits. No aggregation is observed after denaturant addition at pH 7.4 or at pH 2.0 with low salt concentration, indicating that not generic unfolding but specific conformational changes are necessary to trigger aggregation. A specific anion effect in increasing the aggregation rate at pH 2.0 is observed according to the following order: SO₄²⁻ ≫ Cl⁻ > H₂PO₄⁻, confirming the peculiar role of sulfate in promoting protein aggregation. It is found that, at least for the investigated case, the mechanism of the sulfate effect is related to protein secondary structure changes induced by anion binding.

  19. Bandwidth scalable, coherent transmitter based on the parallel synthesis of multiple spectral slices using optical arbitrary waveform generation.

    Science.gov (United States)

    Geisler, David J; Fontaine, Nicolas K; Scott, Ryan P; He, Tingting; Paraschis, Loukas; Gerstel, Ori; Heritage, Jonathan P; Yoo, S J B

    2011-04-25

    We demonstrate an optical transmitter based on dynamic optical arbitrary waveform generation (OAWG) which is capable of creating high-bandwidth (THz) data waveforms in any modulation format using the parallel synthesis of multiple coherent spectral slices. As an initial demonstration, the transmitter uses only 5.5 GHz of electrical bandwidth and two 10-GHz-wide spectral slices to create 100-ns duration, 20-GHz optical waveforms in various modulation formats including differential phase-shift keying (DPSK), quaternary phase-shift keying (QPSK), and eight phase-shift keying (8PSK) with only changes in software. The experimentally generated waveforms showed clear eye openings and separated constellation points when measured using a real-time digital coherent receiver. Bit-error-rate (BER) performance analysis resulted in a BER < 9.8 × 10⁻⁶ for DPSK and QPSK waveforms. Additionally, we experimentally demonstrate three-slice, 4-ns long waveforms that highlight the bandwidth scalable nature of the optical transmitter. The various generated waveforms show that the key transmitter properties (i.e., packet length, modulation format, data rate, and modulation filter shape) are software definable, and that the optical transmitter is capable of acting as a flexible bandwidth transmitter.
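The "parallel synthesis of spectral slices" relies on the linearity of the Fourier transform: synthesising each slice separately and summing the time-domain pieces reproduces the waveform synthesised from the full spectrum. A toy 16-point DFT sketch (made-up spectrum, not the transmitter's signal processing):

```python
# Spectral-slice synthesis sketch: split a spectrum into two bands,
# inverse-DFT each band separately, and sum the time-domain results.
# By linearity this equals the inverse DFT of the full spectrum.
import cmath

N = 16

def idft(spectrum):
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

# A made-up complex spectrum (a few "tones" with arbitrary phases).
spectrum = [0j] * N
spectrum[1] = 4 + 2j
spectrum[3] = -1 + 1j
spectrum[9] = 2 - 3j

# Slice the band in two, zero-filling outside each slice.
low = [spectrum[k] if k < N // 2 else 0j for k in range(N)]
high = [spectrum[k] if k >= N // 2 else 0j for k in range(N)]

full_wave = idft(spectrum)
combined = [a + b for a, b in zip(idft(low), idft(high))]
err = max(abs(a - b) for a, b in zip(full_wave, combined))
print(err < 1e-12)
```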

  20. GENESIS 1.1: A hybrid-parallel molecular dynamics simulator with enhanced sampling algorithms on multiple computational platforms.

    Science.gov (United States)

    Kobayashi, Chigusa; Jung, Jaewoon; Matsunaga, Yasuhiro; Mori, Takaharu; Ando, Tadashi; Tamura, Koichi; Kamiya, Motoshi; Sugita, Yuji

    2017-09-30

    GENeralized-Ensemble SImulation System (GENESIS) is a software package for molecular dynamics (MD) simulation of biological systems. It is designed to extend limitations in system size and accessible time scale by adopting highly parallelized schemes and enhanced conformational sampling algorithms. In this new version, GENESIS 1.1, new functions and advanced algorithms have been added. The all-atom and coarse-grained potential energy functions used in the AMBER and GROMACS packages are now available in addition to the CHARMM energy functions. The performance of MD simulations has been greatly improved by further optimization, multiple time-step integration, and hybrid (CPU + GPU) computing. The string method and replica-exchange umbrella sampling with flexible collective variable choice are used for finding the minimum free-energy pathway and obtaining free-energy profiles for conformational changes of a macromolecule. These new features increase the usefulness and power of GENESIS for modeling and simulation in biological research. © 2017 Wiley Periodicals, Inc.
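The multiple time-step integration mentioned above can be sketched as a RESPA-style scheme: a stiff "fast" force evaluated every inner step and a weak "slow" force applied as half-kicks at the outer step. The 1-D springs and parameters below are toy assumptions, not GENESIS settings.

```python
# RESPA-style multiple-time-step integrator sketch: inner velocity-Verlet
# loop for the stiff force, outer half-kicks for the soft force. The
# symplectic scheme keeps the total energy close to its initial value.

K_FAST, K_SLOW = 100.0, 1.0    # stiff and soft spring constants (toy)
DT_OUTER, N_INNER = 0.01, 10   # slow force every 0.01, fast every 0.001

def fast_force(x):
    return -K_FAST * x

def slow_force(x):
    return -K_SLOW * x

x, v = 1.0, 0.0
dt = DT_OUTER / N_INNER
for _ in range(1000):                    # 10 time units
    v += 0.5 * DT_OUTER * slow_force(x)  # slow half-kick
    for _ in range(N_INNER):             # inner velocity-Verlet loop
        v += 0.5 * dt * fast_force(x)
        x += dt * v
        v += 0.5 * dt * fast_force(x)
    v += 0.5 * DT_OUTER * slow_force(x)  # slow half-kick

# Exact energy would be 0.5*(K_FAST+K_SLOW)*1^2 = 50.5 for this start.
energy = 0.5 * v * v + 0.5 * (K_FAST + K_SLOW) * x * x
print(round(energy, 3))
```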

  1. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    Science.gov (United States)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently: each has advantages and disadvantages compared with the others, and the advantages of different methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, a feature-similarity index vector is proposed to define the degree of complementarity and synergy. This index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the different degrees of various features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF), which avoids the randomness of NMF initialization. Finally, the fused images of the different algorithms are integrated using NMF because of its excellent performance in fusing independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods, while retaining the advantages of the individual fusion algorithms.
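The NMF engine referred to above can be sketched with the standard Lee-Seung multiplicative updates; the tiny matrix and the fixed deterministic initialisation below are illustrative stand-ins for the paper's feature-derived initial weights.

```python
# Non-negative matrix factorisation (NMF) sketch with Lee-Seung
# multiplicative updates minimising the Frobenius error ||V - WH||^2.
# The updates never increase the error; V here is nearly rank-1.

def matmul(A, B):
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def frob_err(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

V = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],
     [3.0, 6.0, 9.1]]          # non-negative, nearly rank-1

W = [[0.5], [0.6], [0.7]]      # fixed init (stand-in for feature-based
H = [[0.8, 0.9, 1.0]]          # initial weights in the paper's method)

err_start = frob_err(V, W, H)
for _ in range(200):
    WT = [list(r) for r in zip(*W)]
    num_H = matmul(WT, V)
    den_H = matmul(matmul(WT, W), H)
    H = [[H[i][j] * num_H[i][j] / (den_H[i][j] + 1e-12)
          for j in range(len(H[0]))] for i in range(len(H))]
    HT = [list(r) for r in zip(*H)]
    num_W = matmul(V, HT)
    den_W = matmul(W, matmul(H, HT))
    W = [[W[i][j] * num_W[i][j] / (den_W[i][j] + 1e-12)
          for j in range(len(W[0]))] for i in range(len(W))]
err_end = frob_err(V, W, H)
print(round(err_start, 2), round(err_end, 4))
```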

  2. T-cell libraries allow simple parallel generation of multiple peptide-specific human T-cell clones.

    Science.gov (United States)

    Theaker, Sarah M; Rius, Cristina; Greenshields-Watson, Alexander; Lloyd, Angharad; Trimby, Andrew; Fuller, Anna; Miles, John J; Cole, David K; Peakman, Mark; Sewell, Andrew K; Dolton, Garry

    2016-03-01

    Isolation of peptide-specific T-cell clones is highly desirable for determining the role of T-cells in human disease, as well as for the development of therapies and diagnostics. However, generation of monoclonal T-cells with the required specificity is challenging and time-consuming. Here we describe a library-based strategy for the simple parallel detection and isolation of multiple peptide-specific human T-cell clones from CD8⁺ or CD4⁺ polyclonal T-cell populations. T-cells were first amplified by CD3/CD28 microbeads in a 96U-well library format, prior to screening for desired peptide recognition. T-cells from peptide-reactive wells were then subjected to cytokine-mediated enrichment followed by single-cell cloning, with the entire process from sample to validated clone taking as little as 6 weeks. Overall, T-cell libraries represent an efficient and relatively rapid tool for the generation of peptide-specific T-cell clones, with applications shown here in infectious disease (Epstein-Barr virus, influenza A, and Ebola virus), autoimmunity (type 1 diabetes) and cancer. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Power Flow Calculation for Weakly Meshed Distribution Networks with Multiple DGs Based on Generalized Chain-table Storage Structure

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Chen, Zhe

    2014-01-01

    Based on generalized chain-table storage structure (GCTSS), a novel power flow method is proposed, which can be used to solve the power flow of weakly meshed distribution networks with multiple distributed generators (DGs). GCTSS is designed based on chain-table technology and its target is to describe the topology of radial distribution networks with a clear logic and a small memory size. The strategies of compensating the equivalent currents of break-point branches and the reactive power outputs of PV-type DGs are presented on the basis of the superposition theorem. Their formulations are simplified to be the final multi-variable linear functions. Furthermore, an accelerating factor is applied to the outer-layer reactive power compensation for improving the convergence procedure. Finally, the proposed power flow method is implemented in the programming language VC++ 6.0, and numerical tests have been
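The backward/forward sweep at the heart of such radial power flow methods can be sketched for a 3-bus feeder. Per-unit impedances and loads below are made up, and the chain-table bookkeeping, break-point and PV-node compensation of the paper are omitted.

```python
# Backward/forward sweep sketch for a radial feeder bus0 -> bus1 -> bus2:
# backward sweep sums load currents into branch currents (leaf to root),
# forward sweep updates bus voltages (root to leaf); iterate to converge.

Z = [0.01 + 0.03j, 0.02 + 0.04j]   # branch impedances: bus0-1, bus1-2 (p.u.)
S_LOAD = [0.5 + 0.2j, 0.3 + 0.1j]  # complex power demand at bus 1, bus 2
V_SOURCE = 1.0 + 0j

v = [V_SOURCE, V_SOURCE, V_SOURCE]  # flat start
for _ in range(30):
    # backward sweep: load currents I = conj(S / V), then branch currents
    i_load = [(S_LOAD[k] / v[k + 1]).conjugate() for k in range(2)]
    i_branch = [i_load[0] + i_load[1], i_load[1]]
    # forward sweep: voltage drops from the source outward
    v[1] = v[0] - Z[0] * i_branch[0]
    v[2] = v[1] - Z[1] * i_branch[1]

print(round(abs(v[1]), 4), round(abs(v[2]), 4))  # voltages sag along feeder
```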

  4. A Sequential Circuit-Based IP Watermarking Algorithm for Multiple Scan Chains in Design-for-Test

    Directory of Open Access Journals (Sweden)

    C. Wu

    2011-06-01

    Full Text Available In Very Large Scale Integration (VLSI) design, existing Design-for-Test (DFT)-based watermarking techniques usually insert the watermark by reordering scan cells, which incurs large resource overhead, low security and a low coverage rate of watermark detection. A novel scheme is proposed to watermark multiple scan chains in DFT to solve these problems. The proposed scheme adopts the DFT scan test model of VLSI design and uses a Linear Feedback Shift Register (LFSR) for pseudo-random test vector generation. All of the test vectors are shifted in scan input for the construction of multiple scan chains with minimum correlation. Specific registers in the multiple scan chains are changed by the watermark circuit to watermark the design. The watermark can be effectively detected without interfering with the normal function of the circuit, even after the chip is packaged. The experimental results on several ISCAS benchmarks show that the proposed scheme has lower resource overhead and probability of coincidence, and a higher coverage rate of watermark detection, compared with existing methods.
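The LFSR-based pseudo-random test vector generation can be sketched with a maximal-length 4-bit register (period 2⁴ − 1 = 15); the tap positions below are one maximal-length choice for illustration, not necessarily those used in the paper.

```python
# Fibonacci LFSR sketch: shift left, feed back the XOR of the two top
# bits. With these taps the 4-bit register cycles through all 15
# non-zero states before repeating, giving pseudo-random test vectors.

def lfsr_vectors(seed, count):
    state, out = seed & 0xF, []
    for _ in range(count):
        out.append(state)
        bit = ((state >> 3) ^ (state >> 2)) & 1  # taps on the two top bits
        state = ((state << 1) | bit) & 0xF
    return out

vectors = lfsr_vectors(seed=0b0001, count=15)
print(vectors)
```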

  5. Identification of human intestinal parasites affecting an asymptomatic peri-urban Argentinian population using multi-parallel quantitative real-time polymerase chain reaction.

    Science.gov (United States)

    Cimino, Rubén O; Jeun, Rebecca; Juarez, Marisa; Cajal, Pamela S; Vargas, Paola; Echazú, Adriana; Bryan, Patricia E; Nasser, Julio; Krolewiecki, Alejandro; Mejia, Rojelio

    2015-07-17

    In resource-limited countries, stool microscopy is the diagnostic test of choice for intestinal parasites (soil-transmitted helminths and/or intestinal protozoa). However, its sensitivity and specificity are low. Improved diagnosis of intestinal parasites is especially important for accurate measurements of prevalence and intensity of infections in endemic areas. The study was carried out in Orán, Argentina. A total of 99 stool samples from a local surveillance campaign were analyzed by concentration microscopy and the McMaster egg counting technique, compared to analysis by multi-parallel quantitative real-time polymerase chain reaction (qPCR). This study compared the performance of the qPCR assay and stool microscopy for 8 common intestinal parasites that infect humans, including the helminths Ascaris lumbricoides, Ancylostoma duodenale, Necator americanus, Strongyloides stercoralis, Trichuris trichiura, and the protozoa Giardia lamblia, Cryptosporidium parvum/hominis, and Entamoeba histolytica, and investigated the prevalence of polyparasitism in an endemic area. qPCR showed higher detection rates than stool microscopy for all parasites except T. trichiura. Species-specific primers and probes were able to distinguish between A. duodenale (19.1%) and N. americanus (36.4%) infections. There were 48.6% of subjects co-infected with both hookworms, and a significant increase in hookworm DNA for A. duodenale versus N. americanus (119.6 fg/μL vs. 0.63 fg/μL, P ...) ... parasites in an endemic area that has improved diagnostic accuracy compared to stool microscopy. This first-time use of multi-parallel qPCR in Argentina has demonstrated the high prevalence of intestinal parasites in a peri-urban area. These results will contribute to more accurate epidemiological surveys, refined treatment strategies on a public scale, and better health outcomes in endemic settings.

  6. Modularity in supply chains: a multiple case study in the construction industry

    NARCIS (Netherlands)

    Voordijk, Johannes T.; Meijboom, Bert; de Haan, Job

    2006-01-01

    Purpose – The objective of this study is to assess the applicability of Fine's three-dimensional modularity concept as a tool to describe and to analyze the alignment of product, process, and supply chain architectures. Fine claims that the degree of modularity in the final output product has a

  7. Choice between Single and Multiple Reinforcers in Concurrent-Chains Schedules

    Science.gov (United States)

    Mazur, James E.

    2006-01-01

    Pigeons responded on concurrent-chains schedules with equal variable-interval schedules as initial links. One terminal link delivered a single reinforcer after a fixed delay, and the other terminal link delivered either three or five reinforcers, each preceded by a fixed delay. Some conditions included a postreinforcer delay after the single…

  8. Multiple attractors and boundary crises in a tri-trophic food chain.

    NARCIS (Netherlands)

    Boer, M.P.; Kooi, B.W.; Kooijman, S.A.L.M.

    2001-01-01

    The asymptotic behaviour of a model of a tri-trophic food chain in the chemostat is analysed in detail. The Monod growth model is used for all trophic levels, yielding a non-linear dynamical system of four ordinary differential equations. Mass conservation makes it possible to reduce the dimension

  9. An Overview of High-performance Parallel Big Data transfers over multiple network channels with Transport Layer Security (TLS) and TLS plus Perfect Forward Secrecy (PFS)

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Chin [SLAC National Accelerator Lab., Menlo Park, CA (United States); Corttrell, R. A. [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2015-05-06

    This Technical Note provides an overview of high-performance parallel Big Data transfers, with and without encryption, for data in transit over multiple network channels. It shows that with the parallel approach, it is feasible to carry out high-performance parallel "encrypted" Big Data transfers without serious impact on throughput. Other impacts, such as energy consumption, should still be investigated. It also explains our rationale for using a statistics-based approach to gain understanding from test results and to improve the system. The presentation is of a high-level nature. Nevertheless, at the end we pose some questions and identify potentially fruitful directions for future work.

  10. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
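
    The spatial decomposition mentioned above assigns each region of the simulation box to one processor; because the interaction cutoff bounds how far particles interact, each processor only needs data from neighboring regions. The underlying cell-list idea can be sketched sequentially in Python (sizes and names are illustrative, and the actual distribution across processors is omitted):

```python
import itertools
import numpy as np

# Cell-list sketch of spatial decomposition in 2D: bin particles into cells no
# smaller than the cutoff, then test each particle only against its own cell
# and the 8 neighboring cells, instead of against all other particles.

def neighbor_pairs_cells(pos, box, cutoff):
    """Return all particle pairs within `cutoff`, found via a cell list."""
    ncell = max(1, int(box // cutoff))      # cells per side; cell >= cutoff
    cell_size = box / ncell
    cells = {}
    for i, p in enumerate(pos):
        key = (int(p[0] // cell_size) % ncell, int(p[1] // cell_size) % ncell)
        cells.setdefault(key, []).append(i)
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx, dy in itertools.product((-1, 0, 1), repeat=2):
            other = cells.get(((cx + dx) % ncell, (cy + dy) % ncell), [])
            for i in members:
                for j in other:
                    if i < j and np.linalg.norm(pos[i] - pos[j]) < cutoff:
                        pairs.add((i, j))
    return pairs

def neighbor_pairs_brute(pos, cutoff):
    """O(N^2) reference used to check the cell-list result."""
    n = len(pos)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if np.linalg.norm(pos[i] - pos[j]) < cutoff}

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(200, 2))
pairs_fast = neighbor_pairs_cells(pos, box=10.0, cutoff=1.5)
pairs_slow = neighbor_pairs_brute(pos, 1.5)
print(len(pairs_fast))
```

In a true parallel code each cell (or block of cells) would live on its own processor and only boundary cells would be communicated.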

  11. Multiple mobility edges in a 1D Aubry chain with Hubbard interaction in presence of electric field: Controlled electron transport

    Science.gov (United States)

    Saha, Srilekha; Maiti, Santanu K.; Karmakar, S. N.

    2016-09-01

    The electronic behavior of a 1D Aubry chain with Hubbard interaction is critically analyzed in the presence of an electric field. Multiple energy bands are generated as a result of the Hubbard correlation and the Aubry potential, and within these bands localized states develop under the applied electric field. Within a tight-binding framework we compute the electronic transmission probability and average density of states using a Green's function approach, where the interaction parameter is treated under the Hartree-Fock mean-field scheme. From our analysis we find that selective transmission can be obtained by tuning the energy of the injected electrons, and thus the present model can be utilized as a controlled switching device.
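
    A stripped-down, non-interacting sketch of the Aubry (Aubry-André-Harper) chain discussed above fits in a few lines of Python; the Hubbard term, the electric field, and the Green's-function transport calculation of the paper are omitted, and the parameters are illustrative. It shows the well-known localization of eigenstates (rising inverse participation ratio, IPR) once the Aubry potential strength exceeds twice the hopping:

```python
import numpy as np

# Tight-binding Aubry-Andre-Harper chain: hopping t between neighbors plus an
# incommensurate on-site potential lam*cos(2*pi*beta*n). For lam < 2t the
# eigenstates are extended (IPR ~ 1/N); for lam > 2t they are localized.

def aah_mean_ipr(n_sites, lam, t=1.0, beta=(np.sqrt(5) - 1) / 2):
    n = np.arange(n_sites)
    h = np.diag(lam * np.cos(2 * np.pi * beta * n))
    h += np.diag(np.full(n_sites - 1, -t), 1)
    h += np.diag(np.full(n_sites - 1, -t), -1)
    _, vecs = np.linalg.eigh(h)              # columns: normalized eigenstates
    ipr = np.sum(np.abs(vecs) ** 4, axis=0)  # ~1/N extended, ~O(1) localized
    return float(ipr.mean())

extended = aah_mean_ipr(144, lam=1.0)   # lam < 2t: extended regime
localized = aah_mean_ipr(144, lam=3.0)  # lam > 2t: localized regime
print(extended, localized)
```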

  12. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or by offloading work to multiple machines to address R's memory barrier.
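
    The snow/multicore pattern the book describes, splitting a large task into chunks and mapping a function over them on several workers, can be sketched in Python (used for all examples in these notes) rather than R; a thread pool is used here for portability, whereas snow or a process pool would place workers on separate cores or machines:

```python
from concurrent.futures import ThreadPoolExecutor

# Split a large dataset into chunks, map a worker function over the chunks in
# parallel, then combine the partial results -- the basic pattern behind
# snow's parLapply / multicore's mclapply.

def chunk_mean(chunk):
    return sum(chunk) / len(chunk)

data = list(range(1_000_000))
chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_means = list(pool.map(chunk_mean, chunks))

# equally sized chunks, so the overall mean is the mean of the chunk means
overall = sum(partial_means) / len(partial_means)
print(overall)  # 499999.5
```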

  13. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration.

    Science.gov (United States)

    Doss, Hani; Tan, Aixin

    2014-09-01

    In the classical biased sampling problem, we have k densities π_1(·), …, π_k(·), each known up to a normalizing constant, i.e. for l = 1, …, k, π_l(·) = ν_l(·)/m_l, where ν_l(·) is a known function and m_l is an unknown constant. For each l, we have an iid sample from π_l(·), and the problem is to estimate the ratios m_l/m_s for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the π_l's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case.
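
    The simplest (iid, k = 2) special case of this problem can be illustrated directly: with samples from π_2, the sample mean of ν_1(X)/ν_2(X) estimates m_1/m_2, since E_{π_2}[ν_1(X)/ν_2(X)] = (1/m_2)∫ν_1 = m_1/m_2. The sketch below uses unnormalized Gaussians so the true ratio is known; it is illustrative only and does not implement the paper's Markov chain estimator or its regenerative standard errors:

```python
import numpy as np

# Two densities pi_l = nu_l/m_l with nu_l an unnormalized Gaussian of scale
# sigma_l, so m_l = sigma_l*sqrt(2*pi) and the true ratio m_1/m_2 = sigma_1/sigma_2.

def nu(x, sigma):
    """Known unnormalized density nu_l."""
    return np.exp(-x**2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
sigma1, sigma2 = 1.0, 2.0

x = rng.normal(0.0, sigma2, size=200_000)   # iid draws from pi_2

# E_{pi_2}[nu_1(X)/nu_2(X)] = m_1/m_2
ratio_hat = np.mean(nu(x, sigma1) / nu(x, sigma2))

true_ratio = sigma1 / sigma2
print(ratio_hat, true_ratio)
```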

  14. Efficient, approximate and parallel Hartree-Fock and hybrid DFT calculations. A 'chain-of-spheres' algorithm for the Hartree-Fock exchange

    International Nuclear Information System (INIS)

    Neese, Frank; Wennmohs, Frank; Hansen, Andreas; Becker, Ute

    2009-01-01

    In this paper, the possibility is explored to speed up Hartree-Fock and hybrid density functional calculations by forming the Coulomb and exchange parts of the Fock matrix by different approximations. For the Coulomb part the previously introduced Split-RI-J variant (F. Neese, J. Comput. Chem. 24 (2003) 1740) of the well-known 'density fitting' approximation is used. The exchange part is formed by semi-numerical integration techniques that are closely related to Friesner's pioneering pseudo-spectral approach. Our potentially linear scaling realization of this algorithm is called the 'chain-of-spheres exchange' (COSX). A combination of semi-numerical integration and density fitting is also proposed. Both Split-RI-J and COSX scale very well with the highest angular momentum in the basis sets. It is shown that for extended basis sets speed-ups of up to two orders of magnitude compared to traditional implementations can be obtained in this way. Total energies are reproduced with an average error of <0.3 kcal/mol as determined from extended test calculations with various basis sets on a set of 26 molecules with 20-200 atoms and up to 2000 basis functions. Reaction energies agree to within 0.2 kcal/mol (Hartree-Fock) or 0.05 kcal/mol (hybrid DFT) with the canonical values. The COSX algorithm parallelizes with a speedup of 8.6 observed for 10 processes. Minimum energy geometries differ by less than 0.3 pm in the bond distances and 0.5 deg. in the bond angles from their canonical values. These developments enable highly efficient and accurate self-consistent field calculations including nonlocal Hartree-Fock exchange for large molecules. In combination with the RI-MP2 method and large basis sets, second-order many body perturbation energies can be obtained for medium sized molecules with unprecedented efficiency. The algorithms are implemented into the ORCA electronic structure system

  15. IgD multiple myeloma: Clinical, biological features and prognostic value of the serum free light chain assay.

    Science.gov (United States)

    Djidjik, R; Lounici, Y; Chergeulaïne, K; Berkouk, Y; Mouhoub, S; Chaib, S; Belhani, M; Ghaffor, M

    2015-09-01

    IgD multiple myeloma (MM) is a rare subtype of myeloma, affecting less than 2% of patients with MM. To evaluate the clinical and prognostic attributes of serum free light chain (sFLC) analysis, we examined 17 cases of IgD MM. From 1998 to 2012, we recorded 1250 monoclonal gammopathies, including 590 multiple myelomas; 17 patients had IgD MM, with a male preponderance and a mean age at diagnosis of 59±12 years. Patients with IgD MM have a short survival (median survival = 9 months). The presenting features included: bone pain (75%), lymphadenopathy (16%), hepatomegaly (25%), splenomegaly (8%), associated AL amyloidosis (6%), impaired renal function (82%), infections (47%), hypercalcemia (37%) and anemia (93%). Serum electrophoresis showed a subtle M-spike (mean = 13.22±10 g/L) in all patients, associated with hypogammaglobulinemia. There was an over-representation of the lambda light chain (65%); serum β2-microglobulin was high in 91% and Bence Jones proteinuria was identified in 71%. The median sFLC κ level was 19.05 mg/L, and 296.75 mg/L for sFLC λ. The sFLC ratio (sFLCR) was abnormal in 93% of patients, and baseline sFLCR was concordant with survival (P=0.034). The contribution of the FLC assay is crucial for the prognosis of patients with IgD MM. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  16. On intra-supply chain system with an improved distribution plan, multiple sales locations and quality assurance.

    Science.gov (United States)

    Chiu, Singa Wang; Huang, Chao-Chih; Chiang, Kuo-Wei; Wu, Mei-Fang

    2015-01-01

    Transnational companies, operating in extremely competitive global markets, always seek to lower different operating costs, such as inventory holding costs in their intra-supply chain system. This paper incorporates a cost-reducing product distribution policy into an intra-supply chain system with multiple sales locations and quality assurance studied by [Chiu et al., Expert Syst Appl, 40:2669-2676, (2013)]. Under the proposed cost-reducing distribution policy, an added initial delivery of end items is distributed to multiple sales locations to meet their demand during the production unit's uptime and rework time. After rework, when the remaining production lot goes through quality assurance, n fixed-quantity installments of finished items are then transported to sales locations at a fixed time interval. Mathematical modeling and optimization techniques are used to derive closed-form optimal operating policies for the proposed system. Furthermore, the study demonstrates significant savings in stock holding costs for both the production unit and the sales locations. The alternative of outsourcing the product delivery task to an external distributor is analyzed to assist managerial decision making on potential outsourcing issues, in order to facilitate further reduction in operating costs.

  17. Acyl chains of phospholipase D transphosphatidylation products in Arabidopsis cells: a study using multiple reaction monitoring mass spectrometry.

    Directory of Open Access Journals (Sweden)

    Dominique Rainteau

    Full Text Available BACKGROUND: Phospholipases D (PLD) are major components of signalling pathways in plant responses to some stresses and hormones. The product of PLD activity is phosphatidic acid (PA). PAs with different acyl chains do not have the same protein targets, so to understand the signalling role of PLD it is essential to analyze the composition of its PA products in the presence and absence of an elicitor. METHODOLOGY/PRINCIPAL FINDINGS: Potential PLD substrates and products were studied in Arabidopsis thaliana suspension cells treated with or without the hormone salicylic acid (SA). As PA can be produced by enzymes other than PLD, we analyzed phosphatidylbutanol (PBut), which is specifically produced by PLD in the presence of n-butanol. The acyl chain compositions of PBut and the major glycerophospholipids were determined by multiple reaction monitoring (MRM) mass spectrometry. PBut profiles of untreated cells or cells treated with SA show an over-representation of 16:0/18:2- and 16:0/18:3-species compared to those of phosphatidylcholine and phosphatidylethanolamine, either from bulk lipid extracts or from purified membrane fractions. When microsomal PLDs were used in in vitro assays, the resulting PBut profile matched exactly that of the substrate provided. Therefore there is a mismatch between the acyl chain compositions of putative substrates and the in vivo products of PLDs that is unlikely to reflect any selectivity of PLDs for the acyl chains of substrates. CONCLUSIONS: MRM mass spectrometry is a reliable technique to analyze PLD products. Our results suggest that PLD action in response to SA is not due to the production of a stress-specific molecular species, but that the level of PLD products per se is important. The over-representation of 16:0/18:2- and 16:0/18:3-species in PLD products when compared to putative substrates might be related to a regulatory role of the heterogeneous distribution of glycerophospholipids in membrane sub-domains.

  18. Multiple-scattering formalism beyond the quasistatic approximation: Analyzing resonances in plasmonic chains

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Kristensen, Philip Trøst; Mørk, Jesper

    2012-01-01

    We present a multiple-scattering formalism for simulating scattering of electromagnetic waves on spherical inhomogeneities in 3D. The formalism is based on the Lippmann-Schwinger equation and the electromagnetic Green's tensor and applies an expansion of the electric field on spherical...

  19. Patterns of gene flow and selection across multiple species of Acrocephalus warblers: footprints of parallel selection on the Z chromosome

    Czech Academy of Sciences Publication Activity Database

    Reifová, R.; Majerová, V.; Reif, J.; Ahola, M.; Lindholm, A.; Procházka, Petr

    2016-01-01

    Vol. 16, No. 130 (2016), p. 130. ISSN 1471-2148. Institutional support: RVO:68081766. Keywords: Adaptive radiation * Speciation * Gene flow * Parallel adaptive evolution * Z chromosome * Acrocephalus warblers. Subject RIV: EG - Zoology. Impact factor: 3.221, year: 2016

  20. Therapeutic activity of multiple common γ-chain cytokine inhibition in acute and chronic GVHD.

    Science.gov (United States)

    Hechinger, Anne-Kathrin; Smith, Benjamin A H; Flynn, Ryan; Hanke, Kathrin; McDonald-Hyman, Cameron; Taylor, Patricia A; Pfeifer, Dietmar; Hackanson, Björn; Leonhardt, Franziska; Prinz, Gabriele; Dierbach, Heide; Schmitt-Graeff, Annette; Kovarik, Jiri; Blazar, Bruce R; Zeiser, Robert

    2015-01-15

    The common γ chain (CD132) is a subunit of the interleukin (IL) receptors for IL-2, IL-4, IL-7, IL-9, IL-15, and IL-21. Because levels of several of these cytokines were shown to be increased in the serum of patients developing acute and chronic graft-versus-host disease (GVHD), we reasoned that inhibition of CD132 could have a profound effect on GVHD. We observed that anti-CD132 monoclonal antibody (mAb) reduced acute GVHD potently with respect to survival, production of tumor necrosis factor, interferon-γ, and IL-6, and GVHD histopathology. Anti-CD132 mAb afforded protection from GVHD partly via inhibition of granzyme B production in CD8 T cells, whereas exposure of CD8 T cells to IL-2, IL-7, IL-15, and IL-21 increased granzyme B production. Also, T cells exposed to anti-CD132 mAb displayed a more naive phenotype in microarray-based analyses and showed reduced Janus kinase 3 (JAK3) phosphorylation upon activation. Consistent with a role of JAK3 in GVHD, Jak3(-/-) T cells caused less severe GVHD. Additionally, anti-CD132 mAb treatment of established chronic GVHD reversed liver and lung fibrosis, and pulmonary dysfunction characteristic of bronchiolitis obliterans. We conclude that acute GVHD and chronic GVHD, caused by T cells activated by common γ-chain cytokines, each represent therapeutic targets for anti-CD132 mAb immunomodulation. © 2015 by The American Society of Hematology.

  1. E-SCM AND INVENTORY MANAGEMENT: A STUDY OF MULTIPLE CASES IN A SEGMENT OF THE DEPARTMENT STORE CHAIN

    Directory of Open Access Journals (Sweden)

    Juliana Chiaretti Novi

    2011-08-01

    Full Text Available Inventory management through supply chains is a theme that has always enticed managers throughout the world. Due to the increase in market competitiveness and complexity, the traditional statistical models of forecasting demand, based on time series, no longer met the needs imposed on businesses to maintain adequate inventory levels and avoid supply interruptions. With the intent to meet these market demands, ERP systems appeared in the 1990s. Nevertheless, even though they allowed for more adequate inventory levels and fewer supply interruptions, achieved mainly through the optimization of internal processes and the reduction in lead time, ERP systems did not reach the inventory levels desired by the more competitive businesses, because ERP limits itself to an internal analysis of the business, whereas inventory management depends on consumption information, which is external to the business. Aiming to further improve the level of service delivered to the end consumer, new solutions have been developed, among them the e-SCM, which, since it makes consumption information available in real time, ends up being more dynamic and efficient than the traditional demand forecasting models. Therefore, the present study aims to analyze how the e-SCM can help maintain adequate inventory levels and handle interruptions in the supply chains. The hypothesis made is that the traditional statistical forecasting models, based on time series and used in isolation, are no longer adequate to adjust demand, as the tools based on these models do not update demand in real time, and this is fundamental in the current business dynamics. The research method used was the study of multiple cases in a segment of a chain involving a large retailer, its Distribution Center, and a supplier of home appliances. For the analysis of the data, the content analysis technique was used. As main results, it was...

  2. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, for machines such as linear arrays, mesh-connected computers, and cube-connected computers. Another setting where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the...
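
    A classic algorithm for the linear-array model mentioned above is odd-even transposition sort: in each phase, every processor compares its item with one neighbor's and exchanges if out of order, and N phases suffice for N items. The Python sketch below simulates the phases sequentially (on a real array machine all compare-exchanges within a phase run in parallel):

```python
# Odd-even transposition sort. Even phases compare pairs (0,1),(2,3),...;
# odd phases compare pairs (1,2),(3,4),.... Each comparison within a phase
# touches disjoint elements, so a linear array can do them simultaneously.

def odd_even_transposition_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:          # independent compare-exchange
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```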

  3. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race
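
    The barriers discussed in the book can be illustrated with a small Python analog of its shared-memory examples (the two-phase computation below is an invented illustration, not taken from the book): no thread proceeds past the barrier until all threads have reached it, so the second phase is guaranteed to see every first-phase result.

```python
import threading

# Two-phase shared-memory computation with a barrier. Phase 1: each thread
# writes its partial sum. Barrier. Phase 2: each thread reads all partials,
# which is safe only because the barrier ordered the phases.

n_threads = 4
partial = [0] * n_threads
totals = [0] * n_threads
barrier = threading.Barrier(n_threads)

def worker(tid):
    partial[tid] = sum(range(tid * 100, (tid + 1) * 100))  # phase 1
    barrier.wait()              # wait until every partial[] is written
    totals[tid] = sum(partial)  # phase 2: all partials are now visible

threads = [threading.Thread(target=worker, args=(t,)) for t in range(n_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(totals)  # every thread computed the same grand total
```

Without the barrier a fast thread could read `partial` before a slow thread had written its entry, which is exactly the kind of race condition the book's later chapters address.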

  4. A modified parallel constitutive model for elevated temperature flow behavior of Ti-6Al-4V alloy based on multiple regression

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Jun; Shi, Jiamin; Wang, Kuaishe; Wang, Wen; Wang, Qingjuan; Liu, Yingying [Xi'an Univ. of Architecture and Technology, Xi'an (China). School of Metallurgical Engineering; Li, Fuguo [Northwestern Polytechnical Univ., Xi'an (China). School of Materials Science and Engineering

    2017-07-15

    Constitutive analysis for hot working of Ti-6Al-4V alloy was carried out by using experimental stress-strain data from isothermal hot compression tests. A new kind of constitutive equation called a modified parallel constitutive model was proposed by considering the independent effects of strain, strain rate and temperature. The predicted flow stress data were compared with the experimental data. Statistical analysis was introduced to verify the validity of the developed constitutive equation. Subsequently, the accuracy of the proposed constitutive equations was evaluated by comparing with other constitutive models. The results showed that the developed modified parallel constitutive model based on multiple regression could predict flow stress of Ti-6Al-4V alloy with good correlation and generalization.
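
    The multiple-regression step behind such a constitutive model can be sketched with an ordinary least-squares fit; the synthetic data, the linear functional form, and the coefficients below are purely illustrative and are not the paper's fitted Ti-6Al-4V model:

```python
import numpy as np

# Fit flow stress as a linear function of log strain rate, strain and
# temperature on synthetic data, then check that least squares recovers the
# generating coefficients.

rng = np.random.default_rng(42)
n = 500
log_rate = rng.uniform(-3, 1, n)      # log10 of strain rate (illustrative)
strain = rng.uniform(0.05, 0.9, n)
temp = rng.uniform(1023, 1323, n)     # temperature, K

true_coef = np.array([120.0, 8.0, -15.0, -0.07])  # intercept + 3 slopes
X = np.column_stack([np.ones(n), log_rate, strain, temp])
y = X @ true_coef + rng.normal(0, 0.5, n)         # noisy "measurements"

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # close to true_coef
```

In practice the paper's "modified parallel" model treats the effects of strain, strain rate, and temperature independently and validates the fit statistically against the hot-compression measurements.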

  5. Vertical Cost-Information Sharing in a Food Supply Chain with Multiple Unreliable Suppliers and Two Manufacturers

    Directory of Open Access Journals (Sweden)

    Junjian Wu

    2017-01-01

    Full Text Available This paper considers a food supply chain where multiple suppliers provide completely substitutable food products to two manufacturers. Meanwhile, the suppliers face yield uncertainty and the manufacturers face uncertain production costs that are private information. While the suppliers compete on price, the manufacturers compete on quantity. We build a stylized multistage game theoretic model to analyze the issue of vertical cost-information sharing (VCIS within the supply chain by considering key parameters, including the level of yield uncertainty, two manufacturers’ cost correlation, the correlated coefficient of suppliers’ yield processes, and the number of suppliers. We study the suppliers’ optimal wholesale price and the manufacturers’ optimal order quantities under different VCIS strategies. Finally, through numerical analyses, we examine how key parameters affect the value of VCIS to each supplier and each manufacturer, respectively. We found that the manufacturers are willing to share cost information with suppliers only when the two manufacturers’ cost correlation is less than a threshold. While a high correlated coefficient of suppliers’ yield processes and a large number of suppliers promote complete information sharing, a high level of yield uncertainty hinders complete information sharing. All these findings have important implications to industry practices.

  6. The use of coded PCR primers enables high-throughput sequencing of multiple homolog amplification products by 454 parallel sequencing

    DEFF Research Database (Denmark)

    Binladen, Jonas; Gilbert, M Thomas P; Bollback, Jonathan P

    2007-01-01

    BACKGROUND: The invention of the Genome Sequence 20 DNA Sequencing System (454 parallel sequencing platform) has enabled the rapid and high-volume production of sequence data. Until now, however, individual emulsion PCR (emPCR) reactions and subsequent sequencing runs have been unable to combine...... primers that is dependent on the 5' nucleotide of the tag. In particular, primers 5' labelled with a cytosine are heavily overrepresented among the final sequences, while those 5' labelled with a thymine are strongly underrepresented. A weaker bias also exists with regards to the distribution...

  7. Detection of Multiple Parallel Transmission Outbreak of Streptococcus suis Human Infection by Use of Genome Epidemiology, China, 2005.

    Science.gov (United States)

    Du, Pengcheng; Zheng, Han; Zhou, Jieping; Lan, Ruiting; Ye, Changyun; Jing, Huaiqi; Jin, Dong; Cui, Zhigang; Bai, Xuemei; Liang, Jianming; Liu, Jiantao; Xu, Lei; Zhang, Wen; Chen, Chen; Xu, Jianguo

    2017-02-01

    Streptococcus suis sequence type 7 emerged and caused 2 of the largest human infection outbreaks in China in 1998 and 2005. To determine the major risk factors and source of the infections, we analyzed whole genomes of 95 outbreak-associated isolates, identified 160 single nucleotide polymorphisms, and classified them into 6 clades. Molecular clock analysis revealed that clade 1 (responsible for the 1998 outbreak) emerged in October 1997. Clades 2-6 (responsible for the 2005 outbreak) emerged separately during February 2002-August 2004. A total of 41 lineages of S. suis emerged by the end of 2004 and rapidly expanded to 68 genome types through single base mutations when the outbreak occurred in June 2005. We identified 32 identical isolates and classified them into 8 groups, which were distributed in a large geographic area with no transmission link. These findings suggest that persons were infected in parallel in respective geographic sites.

  8. The Modeling and Harmonic Coupling Analysis of Multiple-Parallel Connected Inverter Using Harmonic State Space (HSS)

    DEFF Research Database (Denmark)

    Kwon, Jun Bum; Wang, Xiongfei; Bak, Claus Leth

    2015-01-01

    As the number of power electronics based systems is increasing, studies about overall stability and harmonic problems are rising. In order to analyze harmonics and stability, most research uses an analysis method based on the Linear Time Invariant (LTI) approach. However, this can...... be difficult in terms of complex multi-parallel connected systems, especially in the case of renewable energy, where possibilities for intermittent operation due to the weather conditions exist. Hence, it can bring many different operating points to the power converter, and the impedance characteristics can...... demonstrate other phenomena, which cannot be found in the conventional LTI approach. The theoretical modeling and analysis are verified by means of simulations and experiments....

  9. Airborne electromagnetic detection of shallow seafloor topographic features, including resolution of multiple sub-parallel seafloor ridges

    Science.gov (United States)

    Vrbancich, Julian; Boyd, Graham

    2014-05-01

    The HoistEM helicopter time-domain electromagnetic (TEM) system was flown over waters in Backstairs Passage, South Australia, in 2003 to test the bathymetric accuracy and hence the ability to resolve seafloor structure in shallow and deeper waters (extending to ~40 m depth) that contain interesting seafloor topography. The topography that forms a rock peak (South Page) in the form of a mini-seamount that barely rises above the water surface was accurately delineated along its ridge from the start of its base (where the seafloor is relatively flat) in ~30 m water depth to its peak at the water surface, after an empirical correction was applied to the data to account for imperfect system calibration, consistent with earlier studies using the same HoistEM system. A much smaller submerged feature (Threshold Bank) of ~9 m peak height located in waters of 35 to 40 m depth was also accurately delineated. These observations when checked against known water depths in these two regions showed that the airborne TEM system, following empirical data correction, was effectively operating correctly. The third and most important component of the survey was flown over the Yatala Shoals region that includes a series of sub-parallel seafloor ridges (resembling large sandwaves rising up to ~20 m from the seafloor) that branch out and gradually decrease in height as the ridges spread out across the seafloor. These sub-parallel ridges provide an interesting topography because the interpreted water depths obtained from 1D inversion of TEM data highlight the limitations of the EM footprint size in resolving both the separation between the ridges (which vary up to ~300 m) and the height of individual ridges (which vary up to ~20 m), and possibly also the limitations of assuming a 1D model in areas where the topography is quasi-2D/3D.

  10. Multiple Facets of Self-Control in Arab Adolescents: Parallel Pathways to Greater Happiness and Less Physical Aggression

    Science.gov (United States)

    Gavriel-Fried, Belle; Ronen, Tammie; Agbaria, Qutaiba; Orkibi, Hod; Hamama, Liat

    2018-01-01

    Adolescence is a period of dramatic change that necessitates using skills and strengths to reduce physical aggression and increase happiness. This study examined the multiple facets of self-control skills in achieving both goals simultaneously, in a sample of 248 Arab adolescents in Israel. We conceptualized and tested a new multi-mediator model…

  11. Clinical usefulness of serum free light chains measurement in patients with multiple myeloma: comparative analysis of two different tests

    Directory of Open Access Journals (Sweden)

    Tadeusz Kubicki

    2017-01-01

    Full Text Available Introduction: There are two commercially available tests for measurement of serum free light chains (sFLC) in multiple myeloma (MM) patients – Freelite and N Latex FLC. The aim of this study was to perform an assessment and direct comparison of the usefulness of the two methods in routine clinical practice. Methods: 40 refractory/relapsed MM patients underwent routine disease activity assessment studies, along with sFLC analysis using both assays. Correlation and concordance between the tests and the sensitivity of the studied methods of sFLC assessment were established. Special attention was focused on sFLC results in patients finally evaluated after completing the treatment. Results: A weak correlation for the measurement of both κ [Passing–Bablok slope (PB) = 0.7681] and λ chains [PB = 1.542] was found. Using Bland–Altman plots, a bias of 0.0467 (κ) and -0.2133 (λ) between the measurements was documented. The concordance coefficient equaled 0.87 for κ, 0.62 for λ and 0.52 for the κ/λ ratio. Ten patients had an abnormal Freelite assay κ/λ ratio and a normal N Latex FLC κ/λ ratio. Three of these patients had negative serum protein electrophoresis results and fulfilled diagnostic criteria of stringent complete remission (sCR) according to N Latex FLC (but not according to Freelite). When the κ/λ ratio obtained by both methods was compared to patients' serum/urine protein electrophoresis and immunofixation results, the sensitivity of Freelite and N Latex FLC was established to be 62.5% and 41%, respectively. Conclusions: There was no strong correlation or concordance between the two assays, and the sensitivity in terms of sFLC detection was different. This may cause problems when a diagnosis of sCR is considered.
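    The Bland–Altman bias reported above is simply the mean of the paired assay differences, with limits of agreement at bias ± 1.96 SD. A minimal sketch follows; the paired values are hypothetical illustrations, not the study's data:

```python
from statistics import mean, stdev

def bland_altman(x, y):
    """Mean difference (bias) and 95% limits of agreement for two paired methods."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired kappa-sFLC results (mg/L) from two assays
freelite = [12.0, 45.0, 88.0, 20.0, 150.0]
n_latex = [11.5, 47.0, 80.0, 22.0, 140.0]
bias, (loa_low, loa_high) = bland_altman(freelite, n_latex)
```

    A systematic bias shows up as a nonzero mean difference; individual points falling outside the limits of agreement flag discordant pairs of the kind that produced the conflicting κ/λ ratios described above.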

  12. On grouping individual wire segments into equivalent wires or chains, and introduction of multiple domain basis functions

    CSIR Research Space (South Africa)

    Lysko, AA

    2009-06-01

    Full Text Available The paper introduces a method to cover several wire segments with a single basis function, describes related practical algorithms, and gives some results. The process involves three steps: identifying chains of wire segments, splitting the chains...

  13. Photoinduced dynamics of a cyanine dye: parallel pathways of non-radiative deactivation involving multiple excited-state twisted transients.

    Science.gov (United States)

    Upadhyayula, Srigokul; Nuñez, Vicente; Espinoza, Eli M; Larsen, Jillian M; Bao, Duoduo; Shi, Dewen; Mac, Jenny T; Anvari, Bahman; Vullev, Valentine I

    2015-04-01

    Cyanine dyes are broadly used for fluorescence imaging and other photonic applications. 3,3'-Diethylthiacyanine (THIA) is a cyanine dye composed of two identical aromatic heterocyclic moieties linked with a single methine (–CH=). The torsional degrees of freedom around the methine bonds provide routes for non-radiative decay, responsible for the inherently low fluorescence quantum yields. Using transient absorption spectroscopy, we determined that upon photoexcitation, the excited state relaxes along two parallel pathways producing three excited-state transients that undergo internal conversion to the ground state. The media viscosity impedes the molecular modes of ring rotation and preferentially affects one of the pathways of non-radiative decay, exerting a dominant effect on the emission…

  14. Reverse Transcription Polymerase Chain Reaction-based System for Simultaneous Detection of Multiple Lily-infecting Viruses

    Directory of Open Access Journals (Sweden)

    Ji Yeon Kwon

    2013-09-01

    Full Text Available A detection system based on a multiplex reverse transcription (RT) polymerase chain reaction (PCR) was developed to simultaneously identify multiple viruses in the lily plant. The most common viruses infecting lily plants are the cucumber mosaic virus (CMV), lily mottle virus (LMoV) and lily symptomless virus (LSV). Leaf samples were collected at lily-cultivation facilities located in the Kangwon province of Korea and used to evaluate the detection system. Simplex and multiplex RT-PCR were performed using virus-specific primers to detect single or mixed viral infections in lily plants. Our results demonstrate the selective detection of 3 different viruses (CMV, LMoV and LSV) by using specific primers as well as the potential of simultaneously detecting 2 or 3 different viruses in lily plants with mixed infections. Three sets of primers (one for each target virus) and one set of internal control primers were used to evaluate the detection system for efficiency, reliability, and reproducibility.

  15. ASSESSMENT OF THE RESIDUAL TUMOR IN PATIENTS WITH MULTIPLE MYELOMA BASED ON THE ANALYSIS OF THE FREE LIGHT CHAINS OF IMMUNOGLOBULINS IN BLOOD SERUM

    Directory of Open Access Journals (Sweden)

    T. A. Мitina

    2013-01-01

    Full Text Available The efficiency of multiple myeloma treatment with bortezomib-containing chemotherapy was assessed based on determination of the level of immunoglobulin free light chains in blood serum. The method enables estimation of changes in the kinetic parameters of the residual tumor, assessment of the disease prognosis, and selection of the optimal approach to therapy.

  16. Can Multiple Lifestyle Behaviours Be Improved in People with Familial Hypercholesterolemia? Results of a Parallel Randomised Controlled Trial

    Science.gov (United States)

    Broekhuizen, Karen; van Poppel, Mireille N. M.; Koppes, Lando L.; Kindt, Iris; Brug, Johannes; van Mechelen, Willem

    2012-01-01

    Objective To evaluate the efficacy of an individualised tailored lifestyle intervention on physical activity, dietary intake, smoking and compliance to statin therapy in people with Familial Hypercholesterolemia (FH). Methods Adults with FH (n = 340) were randomly assigned to a usual care control group or an intervention group. The intervention consisted of web-based tailored lifestyle advice and face-to-face counselling. Physical activity, fat, fruit and vegetable intake, smoking and compliance to statin therapy were self-reported at baseline and after 12 months. Regression analyses were conducted to examine between-group differences. Intervention reach, dose and fidelity were assessed. Results In both groups, non-significant improvements in all lifestyle behaviours were found. Post-hoc analyses showed a significant decrease in saturated fat intake among women in the intervention group (β = −1.03; CI −1.98/−0.03). In the intervention group, 95% received a log on account, of which 49% logged on and completed one module. Nearly all participants received face-to-face counselling and on average, 4.2 telephone booster calls. Intervention fidelity was low. Conclusions Individually tailored feedback is not superior to no intervention regarding changes in multiple lifestyle behaviours in people with FH. A higher received dose of computer-tailored interventions should be achieved by uplifting the website and reducing the burden of screening questionnaires. Counsellor training should be more extensive. Trial Registration Dutch Trial Register NTR1899 PMID:23251355

  17. Hydraulic Fracture Induced Seismicity During A Multi-Stage Pad Completion in Western Canada: Evidence of Activation of Multiple, Parallel Faults

    Science.gov (United States)

    Maxwell, S.; Garrett, D.; Huang, J.; Usher, P.; Mamer, P.

    2017-12-01

    Following reports of injection induced seismicity in the Western Canadian Sedimentary Basin, regulators have imposed seismic monitoring and traffic light protocols for fracturing operations in specific areas. Here we describe a case study in one of these reservoirs, the Montney Shale in NE British Columbia, where induced seismicity was monitored with a local array during multi-stage hydraulic fracture stimulations on several wells from a single drilling pad. Seismicity primarily occurred during the injection time periods, and correlated with periods of high injection rates and wellhead pressures above fracturing pressures. Sequential hydraulic fracture stages were found to progressively activate several parallel, critically-stressed faults, as illuminated by multiple linear hypocenter patterns in the range between Mw 1 and 3. Moment tensor inversion of larger events indicated a double-couple mechanism consistent with the regional strike-slip stress state and the hypocenter lineations. The critically-stressed faults obliquely cross the well paths, which were purposely drilled parallel to the minimum principal stress direction. Seismicity on specific faults started and stopped when fracture initiation points of individual injection stages were proximal to the intersection of the fault and well. The distance range over which the seismicity occurs is consistent with expected hydraulic fracture dimensions, suggesting that induced fault slip only occurs when a hydraulic fracture grows directly into the fault and the faults are temporarily exposed to significantly elevated fracture pressures during the injection. Some faults crossed multiple wells, and the seismicity was found to restart during injection of proximal stages on adjacent wells, progressively expanding the seismogenic zone of the fault. Progressive fault slip is therefore inferred from the seismicity migrating further along the faults during successive injection stages. An accelerometer was also deployed close…

  18. [The value of serum heavy/light chain immunoassay to assess therapeutic response in patients with multiple myeloma].

    Science.gov (United States)

    Yu, X C; Su, W; Zhuang, J L

    2018-04-14

    Objective: To assess the value of the immunoglobulin heavy/light chain (HLC) immunoassay for evaluating therapeutic response in patients with multiple myeloma (MM). Methods: A total of 45 newly diagnosed MM patients were retrospectively enrolled at Peking Union Medical College Hospital from 2013 to 2016, from whom 115 serum samples were consecutively collected. HLC was tested to evaluate response and compared with other methods for M protein detection. Results: ① There were 30 males and 15 females in total, of whom the monoclonal immunoglobulin was IgG in 27 (IgGκ:IgGλ 12:15) and IgA in 18 (IgAκ:IgAλ 9:9). The average age of the studied population was 59 years (range 43-80). ② In 34 patients with serum samples at diagnosis, 32 (94.1%) had an abnormal HLC ratio (rHLC), while 2 patients with IgG had normal rHLC. The percentage of abnormal rHLC was 81.8% (18/22) at partial response, 50.0% (9/18) at very good partial response and 16.0% (4/25) at complete response. ③ Of the 25 patients reaching CR, 13 had IgG and 12 had IgA. Four patients (equally split between IgG and IgA) had abnormal rHLC at complete response. ④ By monitoring the rHLC of some patients consecutively, we found that normalization of rHLC lagged to some extent behind that of SPE and IEF, or even rFLC. Conclusion: Immunoglobulin HLC detection is a feasible method for minimal residual disease detection.

  19. Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.

    Science.gov (United States)

    Bhandarkar, S M; Chirravuri, S; Arnold, J

    1996-01-01

    Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048 processor MasPar MP-2 system which is an SIMD 2-D toroidal mesh architecture whereas the MIMD algorithms are implemented on an 8 processor Intel iPSC/860 which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs the best. For a coarse-grained MIMD architecture with high synchronization overhead such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.
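    The best-performing MIMD variant above, multiple fully independent simulated-annealing searches, can be sketched on a toy linear-arrangement problem. The cost function and link list below are invented stand-ins for real clone-overlap data; this illustrates the parallelization pattern, not the paper's heuristics:

```python
import math
import random

LINKS = [(0, 1), (1, 2), (2, 3), (0, 3)]  # hypothetical "clone-overlap" pairs

def cost(perm):
    """Toy linear-arrangement cost: total distance between linked items."""
    pos = {v: i for i, v in enumerate(perm)}
    return sum(abs(pos[a] - pos[b]) for a, b in LINKS)

def anneal(seed, n=6, steps=2000, t0=2.0):
    """One independent simulated-annealing search (swap moves, linear cooling)."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    cur = cost(perm)
    best, best_c = perm[:], cur
    for step in range(steps):
        t = max(t0 * (1 - step / steps), 1e-3)
        i, j = rng.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]          # propose a swap
        new = cost(perm)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if cur < best_c:
                best, best_c = perm[:], cur
        else:
            perm[i], perm[j] = perm[j], perm[i]      # reject: undo the swap
    return best, best_c

# MIMD-style parallelism: several fully independent searches, keep the best.
# On a real hypercube each anneal(seed) would run on its own processor.
results = [anneal(seed) for seed in range(8)]
best_perm, best_cost = min(results, key=lambda r: r[1])
```

    Independent searches need no synchronization, which is why they suit a high-synchronization-overhead MIMD machine; the periodically interacting variant favored on SIMD hardware would exchange states between searches at fixed intervals.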

  20. Exact solutions in the dynamics of alternating open chains of spins s = 1/2 with the XY Hamiltonian and their application to problems of multiple-quantum dynamics and quantum information theory

    International Nuclear Information System (INIS)

    Kuznetsova, E. I.; Fel'dman, E. B.

    2006-01-01

    A method for exactly diagonalizing the XY Hamiltonian of an alternating open chain of spins s = 1/2 has been proposed on the basis of the Jordan-Wigner transformation and analysis of the dynamics of spinless fermions. The multiple-quantum spin dynamics of alternating open chains at high temperatures has been analyzed and the intensities of multiple-quantum coherences have been calculated. The problem of the transfer of a quantum state from one end of the alternating chain to the other is studied. It has been shown that the ideal transfer of qubits is possible in alternating chains with a larger number of spins than that in homogeneous chains
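    The diagonalization route summarized above can be made explicit in outline. The following equations are the textbook Jordan–Wigner construction in conventional notation, not reproduced from the paper:

```latex
% Alternating open XY chain: couplings J_n alternate between J_1 and J_2
H = \sum_{n=1}^{N-1} \frac{J_n}{2}\,
    \bigl(\sigma_n^{x}\sigma_{n+1}^{x} + \sigma_n^{y}\sigma_{n+1}^{y}\bigr),
\qquad
J_n = \begin{cases} J_1, & n \text{ odd},\\[2pt] J_2, & n \text{ even}.\end{cases}

% Jordan-Wigner transformation maps spins to spinless fermions
c_n = \Bigl(\prod_{m=1}^{n-1} \sigma_m^{z}\Bigr)\,\sigma_n^{-}
\quad\Longrightarrow\quad
H = \sum_{n=1}^{N-1} J_n \bigl(c_n^{\dagger} c_{n+1} + c_{n+1}^{\dagger} c_n\bigr).

% Diagonalizing the N x N tridiagonal hopping matrix with off-diagonal
% entries J_1, J_2, J_1, ... yields free-fermion modes and exact dynamics:
H = \sum_{k=1}^{N} \varepsilon_k\, a_k^{\dagger} a_k .
```

    For nearest-neighbor pairs the Jordan–Wigner string operators cancel, so the spin problem reduces exactly to non-interacting fermions hopping on the alternating chain.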

  1. Analysis of NMR spectra of sugar chains of glycolipids by multiple relayed COSY and 2D homonuclear Hartman-Hahn spectroscopy

    International Nuclear Information System (INIS)

    Inagaki, F.; Kohda, D.; Kodama, C.; Suzuki, A.

    1987-01-01

    The authors applied multiple relayed COSY and 2D homonuclear Hartman-Hahn spectroscopy to globoside, a glycolipid purified from human red blood cells. The subspectra corresponding to individual sugar components were extracted even from overlapping proton resonances by taking the cross sections of 2D spectra parallel to the F2 axis at anomeric proton resonances, so that unambiguous assignments of sugar proton resonances were accomplished. (Auth.)

  2. Understanding for convergence monitoring for probabilistic risk assessment based on Markov Chain Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Kim, Joo Yeon; Jang, Han Ki; Jang, Sol Ah; Park, Tae Jin

    2014-01-01

    There is a basic question of whether the simulation actually produces draws from its target distribution: can such Markov chains always be constructed, and are all chain values sampled from that distribution? The problem to be solved is determining how many iterations are needed to achieve the target distribution, which is the task of convergence monitoring. In this paper, two widely used MCMC diagnostics, autocorrelation and the potential scale reduction factor (PSRF), are characterized. There is no general agreement on the subject of convergence. Although it is generally agreed that running n parallel chains in practice is computationally inefficient and often unnecessary, running multiple parallel chains is generally applied for convergence monitoring because it is easy to implement. The main debate concerns the number of parallel chains needed: if the convergence properties of the chain are well understood, then clearly a single chain suffices. Therefore, autocorrelation using a single chain and using multiple parallel chains are both tried in this study and their results compared with each other. The two convergence results then answer the following question: have the Markov chain realizations achieved the target distribution?
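    The PSRF mentioned above compares between-chain and within-chain variance across m parallel chains of length n. A minimal sketch of the standard Gelman–Rubin formula follows; the chain values in the example are hypothetical:

```python
from statistics import mean, variance

def psrf(chains):
    """Gelman-Rubin potential scale reduction factor for m parallel chains."""
    m, n = len(chains), len(chains[0])
    chain_means = [mean(c) for c in chains]
    grand = mean(chain_means)
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in chain_means)  # between-chain
    W = mean(variance(c) for c in chains)                           # within-chain
    var_plus = (n - 1) / n * W + B / n                              # pooled variance
    return (var_plus / W) ** 0.5

# Two hypothetical parallel chains that have clearly not mixed yet
diverged = psrf([[1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.1, 0.9],
                 [3.0, 3.1, 2.9, 3.2, 3.0, 2.8, 3.1, 2.9]])
```

    Values near 1 suggest the chains have reached a common target distribution; values well above 1 (as in the example) indicate the chains are still exploring different regions and more iterations are needed.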

  3. A Comprehensive Mathematical Programming Model for Minimizing Costs in A Multiple-Item Reverse Supply Chain with Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Mahmoudi Hoda

    2014-09-01

    Full Text Available A reverse supply chain is configured by a sequence of elements forming a continuous process to treat return-products until they are properly recovered or disposed of. The activities in a reverse supply chain include collection, cleaning, disassembly, test and sorting, storage, transport, and recovery operations. This paper presents a mathematical programming model with the objective of minimizing the total costs of a reverse supply chain, including transportation, fixed opening, operation, maintenance and remanufacturing costs of centers. The proposed model considers the design of a multi-layer, multi-product reverse supply chain that consists of returning, disassembly, processing, recycling, remanufacturing, materials and distribution centers. This integer linear programming model is solved by using Lingo 9 software and the results are reported. Finally, a sensitivity analysis of the proposed model is also presented.
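    The cost structure described (fixed opening costs plus transport costs) can be illustrated on a toy single-layer instance small enough for exhaustive search. All numbers below are invented, and this is far simpler than the paper's multi-layer Lingo model:

```python
from itertools import product

# Hypothetical data: 3 candidate collection centers, 4 return-product sources
fixed_open = [100, 80, 120]  # fixed cost of opening each center
transport = [                # transport[source][center]
    [20, 40, 30],
    [35, 15, 50],
    [25, 45, 10],
    [40, 20, 35],
]

def total_cost(open_mask):
    """Fixed costs of opened centers plus cheapest transport for each source."""
    opened = [j for j in range(3) if open_mask[j]]
    if not opened:
        return float("inf")  # at least one center must be open
    fixed = sum(fixed_open[j] for j in opened)
    trans = sum(min(row[j] for j in opened) for row in transport)
    return fixed + trans

# Exhaustive search over all open/close decisions (2^3 configurations)
best_mask = min(product([0, 1], repeat=3), key=total_cost)
best = total_cost(best_mask)
```

    A real instance replaces the exhaustive search with an ILP solver; the binary open/close variables and the assignment of sources to opened centers are exactly the decisions the paper's model optimizes at larger scale.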

  4. Confocal Cornea Microscopy Detects Involvement of Corneal Nerve Fibers in a Patient with Light-Chain Amyloid Neuropathy Caused by Multiple Myeloma: A Case Report

    Directory of Open Access Journals (Sweden)

    Dietrich Sturm

    2016-06-01

    Full Text Available Changes in the subbasal corneal plexus detected by confocal cornea microscopy (CCM) have been described for various types of neuropathy. An involvement of these nerves in light-chain (AL) amyloid neuropathy (a rare cause of polyneuropathy) has never been shown. Here, we report on the case of a patient suffering from neuropathy caused by AL amyloidosis and underlying multiple myeloma. Small-fiber damage was detected by CCM.

  5. Performance analysis of a threshold-based parallel multiple beam selection scheme for WDM-based systems for Gamma-Gamma distributions

    KAUST Repository

    Nam, Sung Sik

    2017-03-02

    In this paper, we statistically analyze the performance of a threshold-based parallel multiple beam selection scheme (TPMBS) for a free-space optical (FSO) based system with wavelength division multiplexing (WDM), accounting for pointing errors as a practical consideration, over independent identically distributed (i.i.d.) Gamma-Gamma fading conditions. Specifically, we statistically analyze the operating characteristics under the conventional heterodyne detection (HD) scheme for both the adaptive modulation (AM) case and the non-AM case (i.e., coherent/non-coherent binary modulation). Then, based on the statistically derived results, we evaluate the outage probability (CDF) of a selected beam, the average spectral efficiency (ASE), the average number of selected beams (ANSB), and the average bit error rate (BER). Selected results show that we can obtain higher spectral efficiency and simultaneously limit the potential increase in implementation complexity caused by applying the selection-based beam selection scheme, without a considerable performance loss.
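    The threshold-based selection step alone can be sketched with a Monte Carlo estimate of the average number of selected beams over i.i.d. Gamma-Gamma fading (each irradiance modeled as a product of two unit-mean Gamma variates). The alpha, beta, and threshold values are illustrative, and pointing error and modulation details treated in the paper are ignored:

```python
import random

def gamma_gamma(rng, alpha, beta):
    """One Gamma-Gamma irradiance sample: product of two unit-mean Gamma variates."""
    return rng.gammavariate(alpha, 1.0 / alpha) * rng.gammavariate(beta, 1.0 / beta)

def avg_selected_beams(num_beams=8, threshold=0.5, alpha=4.0, beta=2.0,
                       trials=20000, seed=1):
    """Monte Carlo estimate of the average number of beams above the threshold."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        gains = [gamma_gamma(rng, alpha, beta) for _ in range(num_beams)]
        total += sum(g >= threshold for g in gains)  # beams passing the threshold
    return total / trials

ansb = avg_selected_beams()
```

    Raising the threshold reduces the average number of selected beams, which is the complexity/performance trade-off the ANSB metric above quantifies.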

  6. Application of multiple parallel perfused microbioreactors: Synthesis, characterization and cytotoxicity testing of the novel rare earth complexes with indole acid as a ligand.

    Science.gov (United States)

    Guan, Qing-Lin; Xing, Yong-Heng; Liu, Jing; Wei, Wen-Juan; Zhang, Rui; Wang, Xuan; Bai, Feng-Ying

    2013-11-01

    Three novel complexes, [La(phen)2(IAA)2]·NO3 (1), [Sm(phen)2(IAA)2]·NO3 (2) and [Sm(IBA)3(phen)]·phen·HNO3·H2O (3) (phen: 1,10-phenanthroline, IAA: indole-3-acetic acid, IBA: indole-3-butyric acid), were synthesized and characterized with spectroscopy (infrared and UV-visible), X-ray crystal diffraction and elemental analysis. Structural analysis revealed that each lanthanide atom in complexes 1-3 held a distorted tricapped trigonal prism geometry in a nine-coordinate mode. There were two types of coordination modes of the IAA ligand in complexes 1 and 2: a μ2-η(1):η(2) bridging mode linking two lanthanide atoms and a μ2-η(1):η(1) double monodentate bridging mode. There were three types of coordination modes of the IBA ligand: a μ2-η(1):η(1) double monodentate bridging mode, a μ1-η(2) bridging mode and a μ2-η(1):η(2) bridging mode linking two lanthanide atoms. Adjacent Sm atoms were linked via the μ2-bridging carboxylate groups of the IBA ligands to generate a binuclear building unit. The biological activity of the complexes was evaluated in human adipose tissue-derived stem cells (hADSCs) and Chang liver cells using a multiple parallel perfused microbioreactor. The results showed that cytotoxicity increased as the concentrations of complexes 1-3 increased.

  7. Parallel solid-phase isothermal amplification and detection of multiple DNA targets in microliter-sized wells of a digital versatile disc

    International Nuclear Information System (INIS)

    Santiago-Felipe, Sara; Tortajada-Genaro, Luis Antonio; Puchades, Rosa; Maquieira, Ángel

    2016-01-01

    An integrated method for the parallelized detection of multiple DNA target sequences is presented by using microstructures in a digital versatile disc (DVD). Samples and reagents were managed by using both the capillary and centrifugal forces induced by disc rotation. Recombinase polymerase amplification (RPA), in a bridge solid phase format, took place in separate wells, which thereby modified their optical properties. Then the DVD drive reader recorded the modifications of the transmitted laser beam. The strategy allowed tens of genetic determinations to be made simultaneously within <2 h, with small sample volumes (3 μL), low manipulation and at low cost. The method was applied to high-throughput screening of relevant safety threats (allergens, GMOs and pathogenic bacteria) in food samples. Satisfactory results were obtained in terms of sensitivity (48.7 fg of DNA) and reproducibility (below 18 %). This scheme warrants cost-effective multiplex amplification and detection and is perceived to represent a viable tool for screening of nucleic acid targets. (author)

  8. Pharmacodynamic effects of steady-state fingolimod on antibody response in healthy volunteers: a 4-week, randomized, placebo-controlled, parallel-group, multiple-dose study.

    Science.gov (United States)

    Boulton, Craig; Meiser, Karin; David, Olivier J; Schmouder, Robert

    2012-12-01

    Fingolimod, a first-in-class oral sphingosine 1-phosphate receptor (S1PR) modulator, is approved in many countries for relapsing-remitting multiple sclerosis at a once-daily 0.5-mg dose. A reduction in peripheral lymphocyte count is an expected consequence of the fingolimod mechanism of S1PR modulation. The authors investigated whether this pharmacodynamic effect impacts humoral and cellular immunogenicity. In this double-blind, parallel-group, 4-week study, 72 healthy volunteers were randomized to steady-state fingolimod 0.5 mg or 1.25 mg, or to placebo. The authors compared T-cell-dependent and -independent responses to the neoantigens keyhole limpet hemocyanin (KLH) and pneumococcal polysaccharides vaccine (PPV-23), respectively, and additionally recall antigen response (tetanus toxoid [TT]) and delayed-type hypersensitivity (DTH) to KLH, TT, and Candida albicans. Fingolimod caused mild to moderate decreases in anti-KLH and anti-PPV-23 IgG and IgM levels versus placebo. Responder rates were identical between placebo and 0.5-mg groups for anti-KLH IgG (both > 90%) and comparable for anti-PPV-23 IgG (55% and 41%, respectively). Fingolimod did not affect anti-TT immunogenicity, and DTH response did not differ between placebo and fingolimod 0.5-mg groups. Expectedly, lymphocyte counts reduced substantially in the fingolimod groups versus placebo but reversed by study end. Fingolimod was well tolerated, and the observed safety profile was consistent with previous reports.

  9. Estimating side-chain order in methyl-protonated, perdeuterated proteins via multiple-quantum relaxation violated coherence transfer NMR spectroscopy

    International Nuclear Information System (INIS)

    Sun Hechao; Godoy-Ruiz, Raquel; Tugarinov, Vitali

    2012-01-01

    Relaxation violated coherence transfer NMR spectroscopy (Tugarinov et al. in J Am Chem Soc 129:1743–1750, 2007) is an established experimental tool for quantitative estimation of the amplitudes of side-chain motions in methyl-protonated, highly deuterated proteins. Relaxation violated coherence transfer experiments monitor the build-up of methyl proton multiple-quantum coherences that can be created in magnetically equivalent spin-systems as long as their transverse magnetization components relax with substantially different rates. The rate of this build-up is a reporter of the methyl-bearing side-chain mobility. Although the build-up of multiple-quantum 1H coherences is monitored in these experiments, the decay of the methyl signal during relaxation delays occurs when methyl proton magnetization is in a single-quantum state. We describe a relaxation violated coherence transfer approach where the relaxation of multiple-quantum 1H–13C methyl coherences during the relaxation delay period is quantified. The NMR experiment, and the associated fitting procedure that models the time-dependence of the signal build-up, are applicable to the characterization of side-chain order in [13CH3]-methyl-labeled, highly deuterated protein systems up to ∼100 kDa in molecular weight. The feasibility of extracting reliable measures of side-chain order is experimentally verified on methyl-protonated, perdeuterated samples of an 8.5-kDa ubiquitin at 10°C and an 82-kDa Malate Synthase G at 37°C.

  10. The roles of information technology in global chain supply: a multiple case study of multinational companies of China

    Science.gov (United States)

    He, Mao; Duan, Wanchun

    2007-12-01

    Nowadays many Chinese companies are becoming more and more international. Therefore, these companies have to manage global supply chains rather than the former domestic ones. The use of information technology (IT) is considered a prerequisite for the effective control of today's complex global supply chains. Based on empirical data from 10 multinational companies of China, this paper presents a classification of the ways in which companies use IT in SCM, and examines the drivers for these different utilization types. According to the findings of this research, the purposes of using IT in SCM can be divided into 1) transaction processing, 2) supply chain planning and collaboration, and 3) order tracking and delivery coordination. The findings further suggest that the drivers behind these three uses of IT in SCM differ.

  11. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
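    Image-space data decomposition, one of the parallelism types surveyed above, can be sketched by splitting the framebuffer into tiles rendered by a worker pool, with image assembly as the final gather step. The per-pixel "shader" below is a trivial stand-in for a real renderer:

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 64, 64, 16

def shade(x, y):
    """Stand-in for a real per-pixel shader: deterministic value from coordinates."""
    return (x * 31 + y * 17) % 256

def render_tile(tile):
    """Render one TILE x TILE block of the framebuffer."""
    x0, y0 = tile
    return tile, [[shade(x, y) for x in range(x0, x0 + TILE)]
                  for y in range(y0, y0 + TILE)]

# Data decomposition: partition the image into independent tiles
tiles = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]
image = [[0] * WIDTH for _ in range(HEIGHT)]

# Each worker renders whole tiles; assembly below is the image-gather step
with ThreadPoolExecutor(max_workers=4) as pool:
    for (x0, y0), block in pool.map(render_tile, tiles):
        for dy, row in enumerate(block):
            image[y0 + dy][x0:x0 + TILE] = row
```

    Tile size is the task-granularity knob discussed in the article: larger tiles reduce scheduling overhead but worsen load balance when scene complexity is uneven across the image.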

  12. Phenotypic changes in the brain of SIV-infected macaques exposed to methamphetamine parallel macrophage activation patterns induced by the common gamma-chain cytokine system

    Directory of Open Access Journals (Sweden)

    Nikki eBortell

    2015-09-01

    Full Text Available One factor in the development of neuroAIDS is the increased migration of pro-inflammatory CD8 T cells across the blood-brain barrier. Typically these cells are involved in keeping the viral load down. However, the persistence of above-average numbers of CD8 T cells in the brain, not necessarily specific to viral peptides, is facilitated by the upregulation of IL15 from astrocytes, in the absence of IL2, in the brain environment. Both IL15 and IL2 are common gamma chain (γc) cytokines. Here, using the non-human primate model of neuroAIDS, we have demonstrated that exposure to methamphetamine, a powerful illicit drug that has been associated with HIV exposure and neuroAIDS severity, can cause an increase in molecules of the γc system. Among these molecules, IL15, which is upregulated in astrocytes by methamphetamine, and which induces the proliferation of T cells, may also be involved in driving an inflammatory phenotype in innate immune cells of the brain. Therefore, methamphetamine and IL15 may be critical in the development and aggravation of central nervous system immune-mediated inflammatory pathology in HIV-infected drug abusers.

  13. Key characteristics and success factors of supply chain initiatives tackling consumer-related food waste – A multiple case study

    NARCIS (Netherlands)

    Aschemann-Witzel, Jessica; Hooge, De Ilona E.; Rohm, Harald; Normann, Anne; Bossle, Marilia Bonzanini; Grønhøj, Alice; Oostindjer, Marije

    2017-01-01

    Food waste accounts for a considerable share of the environmental impact of the food sector. Therefore, strategies that aim to reduce food waste have great potential to improve the sustainability of the agricultural and food supply chains. Consumer-related food waste is a complex issue that needs…

  14. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  15. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  16. Mouse hippocampal GABAB1 but not GABAB2 subunit-containing receptor complex levels are paralleling retrieval in the multiple-T-maze

    Directory of Open Access Journals (Sweden)

    Soheil eKeihan Falsafi

    2015-10-01

    Full Text Available GABAB receptors are heterodimeric G-protein coupled receptors known to be involved in learning and memory. Although a role for GABAB receptors in cognitive processes is evident, there is no information on hippocampal GABAB receptor complexes in a multiple-T-maze (MTM) task, a robust paradigm for the evaluation of spatial learning. Trained or untrained (yoked control) C57BL/6J male mice (n=10/group) were subjected to the MTM task and sacrificed 6 hours following their performance. Hippocampi were taken, membrane proteins extracted and run on blue native PAGE followed by immunoblotting with specific antibodies against GABAB1, GABAB1a and GABAB2. Immunoprecipitation with subsequent mass spectrometric identification of co-precipitates was carried out to show whether GABAB1 and GABAB2, as well as other interacting proteins, co-precipitate. An antibody shift assay (ASA) and a proximity ligation assay (PLA) were also used to test whether the two GABAB subunits are present in the receptor complex. Single bands were observed on Western blots, each representing GABAB1, GABAB1a or GABAB2 at an apparent molecular weight of approximately 100 kDa. Subsequently, densitometric analysis revealed that levels of GABAB1- and GABAB1a- but not GABAB2-containing receptor complexes were significantly higher in trained than in untrained groups. Immunoprecipitation followed by mass spectrometric studies confirmed the presence of GABAB1, GABAB2, calcium calmodulin kinases I and II, GluA1 and GluA2 as constituents of the complex. ASA and PLA also showed the presence of the two subunits of the GABAB receptor within the complex. It is shown that increased levels of GABAB1 subunit-containing complexes parallel performance in a land maze.

  17. Evaluation of pulsing magnetic field effects on paresthesia in multiple sclerosis patients, a randomized, double-blind, parallel-group clinical trial.

    Science.gov (United States)

    Afshari, Daryoush; Moradian, Nasrin; Khalili, Majid; Razazian, Nazanin; Bostani, Arash; Hoseini, Jamal; Moradian, Mohamad; Ghiasian, Masoud

    2016-10-01

    Evidence is mounting that magnet therapy could alleviate the symptoms of multiple sclerosis (MS). This study was performed to test the effects of pulsing magnetic fields on paresthesia in MS patients. It was conducted as a randomized, double-blind, parallel-group clinical trial from April 2012 to October 2013. The subjects were selected among patients referred to the MS clinic of Imam Reza Hospital, affiliated to Kermanshah University of Medical Sciences, Iran. Sixty-three patients with MS were included in the study and randomly divided into two groups: 35 patients were exposed to a pulsing magnetic field of 4 mT intensity and 15 Hz frequency (sinusoidal wave) for 20 min per session, 2 times per week, over a period of 2 months (16 sessions), and 28 patients were exposed to a magnetically inactive field (placebo) on the same schedule. The severity of paresthesia was measured by the numerical rating scale (NRS) at 30 and 60 days. The primary end point was the NRS change between baseline and 60 days; the secondary outcome was the NRS change between baseline and 30 days. Patients exposed to the magnetic field showed significant paresthesia improvement compared with patients exposed to placebo. According to our results, pulsed magnetic therapy could alleviate paresthesia in MS patients, but trials with more patients and longer duration are needed to describe long-term effects. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
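The parallel weight-update idea summarized above can be sketched in a few lines: independent walkers sample with a shared set of multicanonical weights, their histograms are merged, and the merged histogram drives the next weight update. The sketch below uses a toy model (the number of up-spins among independent two-state units, whose density of states is binomial) rather than the paper's 2D Ising model; all parameter names and values are illustrative, not taken from the paper.

```python
import numpy as np

def parallel_multicanonical(n_walkers=8, n_spins=16, steps_per_iter=300,
                            iters=30, seed=0):
    """Toy parallel multicanonical run: flatten the histogram over
    E = number of 'up' units among n_spins. Walkers share the current
    log-weights; their histograms are merged at every weight update."""
    rng = np.random.default_rng(seed)
    n_states = n_spins + 1              # E takes values 0..n_spins
    logw = np.zeros(n_states)           # multicanonical log-weights
    states = rng.integers(0, 2, size=(n_walkers, n_spins))
    for _ in range(iters):
        hist = np.zeros(n_states)       # merged histogram of all walkers
        for _ in range(steps_per_iter):
            for w in range(n_walkers):
                i = rng.integers(n_spins)
                e_old = states[w].sum()
                e_new = e_old + (1 - 2 * states[w, i])  # a flip changes E by +/- 1
                # Metropolis acceptance with multicanonical weights
                if np.log(rng.random()) < logw[e_new] - logw[e_old]:
                    states[w, i] ^= 1
                hist[states[w].sum()] += 1
        visited = hist > 0
        logw[visited] -= np.log(hist[visited])  # penalize over-visited energies
        logw -= logw.max()                      # fix the normalization
    return logw
```

At convergence the log-weights approximate minus the log density of states, so the rare extreme energies carry the largest weights; that is exactly what lets multicanonical sampling cross free-energy barriers.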

  19. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  20. Probable transmission chains of Middle East respiratory syndrome coronavirus and the multiple generations of secondary infection in South Korea

    Directory of Open Access Journals (Sweden)

    Shui Shan Lee

    2015-09-01

    Conclusions: Publicly available data from multiple sources, including the media, are useful to describe the epidemic history of an outbreak. The effective control of MERS-CoV hinges on the upholding of infection control standards and an understanding of health-seeking behaviours in the community.

  1. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  2. Automated processing chains for surface temperature monitoring on Earth's most active volcanoes by optical data from multiple satellites

    Science.gov (United States)

    Silvestri, Malvina; Musacchio, Massimo; Fabrizia Buongiorno, Maria

    2017-04-01

    The Geohazards Exploitation Platform (GEP) is one of six Thematic Exploitation Platforms developed by ESA to serve data user communities. As a new element of the ground segment delivering satellite results to users, these cloud-based platforms provide an online environment to access information, processing tools and computing resources for community collaboration. The aim is to enable the easy extraction of valuable knowledge from the vast quantities of satellite-sensed data now being produced by Europe's Copernicus programme and other Earth observation satellites. In this context, the estimation of surface temperature on active volcanoes around the world is considered. End-to-end (E2E) processing chains have been developed for different satellite data (ASTER, Landsat 8 and Sentinel-3 missions) using thermal infrared (TIR) channels and applying specific algorithms. These chains have been implemented on the GEP platform, enabling non-expert users to exploit EO missions and to generate added-value products such as surface temperature maps. This solution will enhance the use of satellite data and improve the dissemination of results, saving valuable time (no manual browsing, downloading or processing is needed) and producing time-series data that can be speedily extracted from a single co-registered pixel to highlight gradual trends within a narrow area. Moreover, thanks to the high-resolution optical imagery of Sentinel-2 (MSI), lava maps can be obtained automatically during an eruption. The proposed lava detection method is based on a contextual algorithm applied to Sentinel-2 NIR (band 8, 0.8 micron) and SWIR (band 12, 2.25 micron) data. Examples derived from recent eruptions on active volcanoes are shown.

  3. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  4. Analysis of the multiple forms of the 20,000-dalton myosin light, chain from uterine arterial smooth muscles

    International Nuclear Information System (INIS)

    Barany, K.; Csabina, S.; Mougios, V.; Barany, M.

    1986-01-01

    2D gel electrophoresis resolved the light chain (LC) into four spots, three of which were phosphorylated. Phosphorylation was determined by densitometry (S) and [32P]phosphate incorporation (P). Molar [32P]phosphate incorporation was quantitated for each LC spot. Phosphorylation was low in resting or drug-relaxed muscles and high in contracting or stretched muscles. At low phosphorylation S > P; this discrepancy can be explained by the presence of unphosphorylated isoforms in the phosphorylated spots. Indeed, unphosphorylated uterine LC exhibited three spots on electrophoretograms (distribution: 0, 13, 7, 80%), whereas arterial LC showed two spots (0, 15, 0, 85%). At high phosphorylation P > S; this aberration is caused by diphosphorylation. In the intact tissue of the uterus, the ratio of Thr-P to Ser-P and the level of diphosphorylation are higher than in the artery. The molecular weight and isoelectric points of uterine and arterial LC are the same, but the percentage distribution of the unphosphorylated isoforms, the molar phosphate incorporation in the phosphorylated spots, and the extent of diphosphorylation are different

  5. Evaluation of the serum free light chain (sFLC) analysis in prediction of response in symptomatic multiple myeloma patients

    DEFF Research Database (Denmark)

    Toftmann Hansen, Charlotte; Pedersen, Per T; Nielsen, Lars C

    2014-01-01

    BACKGROUND: Observational data from clinical studies indicate that the goal of first-line therapy in newly diagnosed patients with symptomatic multiple myeloma (MM) should be very good partial response (VGPR) or better, preferably before high-dose treatment. We evaluated the value of early...... patients with no response to treatment. The mean per cent reduction in iFLC 3 d after start of treatment was 52.3% and 23.6% (P = 0.021) in patients achieving ≥VGPR and PR, respectively. The mean per cent reduction in M-protein in patients achieving ≥VGPR and PR was not significantly different in the 6-wk...

  6. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition-time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem that parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. We show how to recognize potential failure modes and their associated artefacts. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed, and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
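The core of the SENSE reconstruction mentioned above reduces, per aliased pixel, to a small linear system: each coil measures a sensitivity-weighted sum of the R true pixels folded onto one another. A minimal conceptual sketch of that unfolding step, with invented sensitivities (a real implementation also handles noise covariance and regularization):

```python
import numpy as np

def sense_unfold(folded, sens):
    """Unfold one aliased pixel location for acceleration factor R.
    folded: (n_coils,) values measured at the folded location.
    sens:   (n_coils, R) coil sensitivities at the R true locations.
    Returns the R unfolded pixel values via least squares."""
    x, *_ = np.linalg.lstsq(sens, folded, rcond=None)
    return x

# Illustrative 2-coil, R = 2 example with made-up sensitivities
sens = np.array([[1.0, 0.5],
                 [0.3, 1.0]])
true_pixels = np.array([2.0, 3.0])
folded = sens @ true_pixels        # what the coils would measure
recovered = sense_unfold(folded, sens)
```

The g-factor mentioned in the abstract quantifies how ill-conditioned this little system becomes when coil sensitivities are too similar.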

  7. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example of the construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs
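Functionally, what such an encoder computes can be stated in a few lines of software; the paper's contribution is the fast hardware construction, not this specification, and the frame layout below is an assumption for illustration:

```python
import numpy as np

def encode_fired_pixels(frame):
    """Return the multiplicity (count) and coordinates of fired pixels
    in a 2-D binary frame, i.e. the quantities the hardware encoder
    produces combinationally."""
    coords = [tuple(c) for c in np.argwhere(frame)]
    return len(coords), coords
```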

  8. Multiple-Fault Detection Methodology Based on Vibration and Current Analysis Applied to Bearings in Induction Motors and Gearboxes on the Kinematic Chain

    Directory of Open Access Journals (Sweden)

    Juan Jose Saucedo-Dorantes

    2016-01-01

    Full Text Available Gearboxes and induction motors are important components in industrial applications, and their condition monitoring is critical in the industrial sector so as to reduce costs and maintenance downtimes. There are several techniques associated with fault diagnosis in rotating machinery; however, vibration and stator current analysis are commonly used due to their proven reliability. Indeed, vibration and current analysis provide fault condition information by means of fault-related spectral component identification. This work presents a methodology based on vibration and current analysis for the diagnosis of wear in a gearbox and the detection of a bearing defect in an induction motor, both linked to the same kinematic chain; in addition, the location of the fault-related components for analysis is supported by the corresponding theoretical models. The theoretical models are based on calculation of characteristic gearbox and bearing fault frequencies, in order to locate the spectral components of the faults. In this work, the influence of vibrations over the system is observed by performing motor current signal analysis to detect the presence of faults. The obtained results show the feasibility of detecting multiple faults in a kinematic chain, making the proposed methodology suitable to be used in the application of industrial machinery diagnosis.
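The theoretical models mentioned above locate fault components at the classical characteristic bearing defect frequencies. A minimal sketch of those textbook formulas (the geometry values in the example are placeholders, not the paper's test rig):

```python
import math

def bearing_fault_freqs(shaft_hz, n_balls, ball_d, pitch_d, contact_angle_deg=0.0):
    """Characteristic defect frequencies of a rolling-element bearing:
    BPFO (ball pass frequency, outer race) and BPFI (inner race)."""
    ratio = (ball_d / pitch_d) * math.cos(math.radians(contact_angle_deg))
    bpfo = 0.5 * n_balls * shaft_hz * (1.0 - ratio)   # outer-race defect line
    bpfi = 0.5 * n_balls * shaft_hz * (1.0 + ratio)   # inner-race defect line
    return bpfo, bpfi
```

These are the spectral lines one searches for in the vibration or stator-current spectrum when diagnosing a bearing defect.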

  9. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    Work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to current basic and applied problems of nuclear and particle physics. For applications utilizing the GEANT code, development and improvement work was done on parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program of neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other works in this field refer to simulations for nuclear medicine applications such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. In particular, these calculations are suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other works mentioned in the same field refer to simulation of electron channelling in crystals and simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment

  10. MRI study of the cuprizone-induced mouse model of multiple sclerosis: demyelination is not found after co-treatment with polyprenols (long-chain isoprenoid alcohols)

    Science.gov (United States)

    Khodanovich, M.; Glazacheva, V.; Pan, E.; Akulov, A.; Krutenkova, E.; Trusov, V.; Yarnykh, V.

    2016-02-01

    Multiple sclerosis is a neurological disorder with poorly understood pathogenic mechanisms and a lack of effective therapies. Therefore, the search for new MS treatments remains very important. This study was performed on a commonly used cuprizone animal model of multiple sclerosis. It evaluated the effect of a plant-derived substance called Ropren® (containing approximately 95% polyprenols, or long-chain isoprenoid alcohols) on cuprizone-induced demyelination. The study was performed on 27 eight-week-old male CD-1 mice. To induce demyelination, mice were fed 0.5% cuprizone in the standard diet for 10 weeks. Ropren® was administered in one daily intraperitoneal injection (12 mg/kg), beginning on the 6th week of the experiment. On the 11th week, the corpus callosum was evaluated in all animals using magnetic resonance imaging on an 11.7 T animal scanner with a T2-weighted sequence. Cuprizone treatment successfully induced the model of demyelination, with a significant decrease in the size of the corpus callosum compared with the control group (p<0.01). Mice treated with both cuprizone and Ropren® did not exhibit demyelination in the corpus callosum (p<0.01). This shows the positive effect of polyprenols on cuprizone-induced demyelination in mice.

  11. Combined use of Kappa Free Light Chain Index and Isoelectrofocusing of Cerebro-Spinal Fluid in Diagnosing Multiple Sclerosis: Performances and Costs.

    Science.gov (United States)

    Crespi, Ilaria; Sulas, Maria Giovanna; Mora, Riccardo; Naldi, Paola; Vecchio, Domizia; Comi, Cristoforo; Cantello, Roberto; Bellomo, Giorgio

    2017-03-01

    Isoelectrofocusing (IEF) to detect oligoclonal bands (OCBs) in cerebrospinal fluid (CSF) is the gold standard approach for evaluating intrathecal immunoglobulin synthesis in multiple sclerosis (MS), but the kappa free light chain index (KFLCi) is emerging as an alternative marker, and the combined/sequential use of IEF and KFLCi has never been challenged. CSF and serum albumin, IgG, kFLC and lFLC were measured by nephelometry; albumin, IgG and kFLC quotients as well as the Link and kFLC indexes were calculated; OCBs were evaluated by immunofixation. A total of 150 consecutive patients were investigated: 48 with MS, 32 with other neurological inflammatory diseases (NID), 62 with neurological non-inflammatory diseases (NNID), and 8 without any detectable neurological disease (NND). Both IEF and KFLCi showed similar accuracy as diagnostic tests for multiple sclerosis. The high sensitivity and specificity, together with the lower cost of KFLCi, suggested using this test first, followed by IEF as a confirmatory procedure. The sequential use of KFLCi and IEF showed high diagnostic efficiency, with cost reductions of 43% and 21% compared with the simultaneous use of both tests and with the use of IEF alone in all patients, respectively. The "sequential testing" using KFLCi followed by IEF in MS represents an optimal procedure with accurate performance and lower costs.
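The kappa free light chain index used above is a simple ratio of ratios: the CSF/serum kFLC quotient normalized by the CSF/serum albumin quotient, which corrects for blood-brain barrier permeability. A sketch of the arithmetic (the example numbers below are invented, and units must match within each quotient):

```python
def kflc_index(csf_kflc, serum_kflc, csf_albumin, serum_albumin):
    """KFLC index = (CSF kFLC / serum kFLC) / (CSF albumin / serum albumin)."""
    q_kflc = csf_kflc / serum_kflc     # kFLC quotient
    q_alb = csf_albumin / serum_albumin  # albumin quotient (barrier function)
    return q_kflc / q_alb
```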

  12. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  13. CELLS v1.0: updated and parallelized version of an electrical scheme to simulate multiple electrified clouds and flashes over large domains

    Directory of Open Access Journals (Sweden)

    C. Barthe

    2012-01-01

    Full Text Available The paper describes the fully parallelized electrical scheme CELLS, which is suitable for explicitly simulating electrified storm systems on parallel computers. Our motivation here is to show that a cloud electricity scheme can be developed for use on large grids with complex terrain. Large computational domains are needed to perform real-case meteorological simulations with many independent convective cells.

    The scheme computes the bulk electric charge attached to each cloud particle and hydrometeor. Positive and negative ions are also taken into account. Several parametrizations of the dominant non-inductive charging process are included, as is an inductive charging process. The electric field is obtained by inverting the Gauss equation with an extension to terrain-following coordinates. The new feature concerns the lightning flash scheme, which is a simplified version of an older detailed sequential scheme. Flashes are composed of a bidirectional leader phase (vertical extension from the triggering point) and a phase obeying a fractal law (with horizontal extension on electrically charged zones). The originality of the scheme lies in the way the branching phase is treated to get a parallel code.

    The complete electrification scheme is tested for the 10 July 1996 STERAO case and for the 21 July 1998 EULINOX case. Flash characteristics are analysed in detail and additional sensitivity experiments are performed for the STERAO case. Although the simulations were run for flat terrain conditions, they show that the model behaves well on multiprocessor computers. This opens a wide area of application for this electrical scheme, with the next objective of running real meteorological cases on large domains.
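The statement that the electric field is obtained by "inverting the Gauss equation" has a simple one-dimensional analogue: integrating dE/dx = ρ/ε₀ from a boundary value. The sketch below shows only that underlying relation; the scheme itself solves the problem in three dimensions with terrain-following coordinates.

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def efield_from_charge_1d(rho, dx, e_left=0.0):
    """Integrate dE/dx = rho / eps0 across uniform cells of width dx.
    rho: charge density per cell (C/m^3). Returns E (V/m) at the right
    edge of each cell, given the field e_left at the left boundary."""
    return e_left + np.cumsum(rho) * dx / EPS0
```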

  14. The Complementary Perspective of System of Systems in Collaboration, Integration, and Logistics: A Value-Chain Based Paradigm of Supply Chain Management

    Directory of Open Access Journals (Sweden)

    Raed Jaradat

    2017-10-01

    Full Text Available The importance and complexity of the problems associated with coordinating multiple organizations to configure value propositions for customers have drawn the attention of multiple disciplines. In an effort to clarify and consolidate terms, this conceptual research examines both the supply chain management (SCM) and system of systems (SoS) literature to postulate, from a value-chain perspective, what roles integration and collaboration play in helping supply chains satisfy customer requirements. A literature review analysis was used to identify the commonalities and differences between supply chain management and system of systems approaches to examining interfirm coordination of value creation efforts. Although a framework of integration and collaboration roles in value creation is proposed, further empirical testing of the concept is required to substantiate initial conclusions. The concepts proposed may help clarify where strategic and operational managers need to focus their efforts in coordinating supply chain member firms. The incorporation of SoS engineering into the supply chain field will draw the linkage between the constituent principles and concepts of Systems Theory, as appropriate for the supply chain management field. This is the first effort to reconcile two separate but parallel scholarship streams examining the coordination of multiple organizations in value creation. This research shows that there are methodologies, principles, and methods from the SoS field that can supplement supply chain management research. Mainly due to a unit-of-analysis issue, systems-based approaches have not been in the mainstream of supply chain management field development.

  15. Frequent expression loss of Inter-alpha-trypsin inhibitor heavy chain (ITIH) genes in multiple human solid tumors: A systematic expression analysis

    International Nuclear Information System (INIS)

    Hamm, Alexander; Knuechel, Ruth; Dahl, Edgar; Veeck, Juergen; Bektas, Nuran; Wild, Peter J; Hartmann, Arndt; Heindrichs, Uwe; Kristiansen, Glen; Werbowetski-Ogilvie, Tamra; Del Maestro, Rolando

    2008-01-01

    The inter-alpha-trypsin inhibitors (ITI) are a family of plasma protease inhibitors, assembled from a light chain – bikunin, encoded by AMBP – and five homologous heavy chains (encoded by ITIH1, ITIH2, ITIH3, ITIH4, and ITIH5), contributing to extracellular matrix stability by covalent linkage to hyaluronan. So far, ITIH molecules have been shown to play a particularly important role in inflammation and carcinogenesis. We systematically investigated differential gene expression of the ITIH gene family, as well as AMBP and the interacting partner TNFAIP6 in 13 different human tumor entities (of breast, endometrium, ovary, cervix, stomach, small intestine, colon, rectum, lung, thyroid, prostate, kidney, and pancreas) using cDNA dot blot analysis (Cancer Profiling Array, CPA), semiquantitative RT-PCR and immunohistochemistry. We found that ITIH genes are clearly downregulated in multiple human solid tumors, including breast, colon and lung cancer. Thus, ITIH genes may represent a family of putative tumor suppressor genes that should be analyzed in greater detail in the future. For an initial detailed analysis we chose ITIH2 expression in human breast cancer. Loss of ITIH2 expression in 70% of cases (n = 50, CPA) could be confirmed by real-time PCR in an additional set of breast cancers (n = 36). Next we studied ITIH2 expression on the protein level by analyzing a comprehensive tissue micro array including 185 invasive breast cancer specimens. We found a strong correlation (p < 0.001) between ITIH2 expression and estrogen receptor (ER) expression indicating that ER may be involved in the regulation of this ECM molecule. Altogether, this is the first systematic analysis on the differential expression of ITIH genes in human cancer, showing frequent downregulation that may be associated with initiation and/or progression of these malignancies

  16. Frequent expression loss of Inter-alpha-trypsin inhibitor heavy chain (ITIH genes in multiple human solid tumors: A systematic expression analysis

    Directory of Open Access Journals (Sweden)

    Werbowetski-Ogilvie Tamra

    2008-01-01

    Full Text Available Abstract Background The inter-alpha-trypsin inhibitors (ITI) are a family of plasma protease inhibitors, assembled from a light chain – bikunin, encoded by AMBP – and five homologous heavy chains (encoded by ITIH1, ITIH2, ITIH3, ITIH4, and ITIH5), contributing to extracellular matrix stability by covalent linkage to hyaluronan. So far, ITIH molecules have been shown to play a particularly important role in inflammation and carcinogenesis. Methods We systematically investigated differential gene expression of the ITIH gene family, as well as AMBP and the interacting partner TNFAIP6, in 13 different human tumor entities (of breast, endometrium, ovary, cervix, stomach, small intestine, colon, rectum, lung, thyroid, prostate, kidney, and pancreas) using cDNA dot blot analysis (Cancer Profiling Array, CPA), semiquantitative RT-PCR and immunohistochemistry. Results We found that ITIH genes are clearly downregulated in multiple human solid tumors, including breast, colon and lung cancer. Thus, ITIH genes may represent a family of putative tumor suppressor genes that should be analyzed in greater detail in the future. For an initial detailed analysis we chose ITIH2 expression in human breast cancer. Loss of ITIH2 expression in 70% of cases (n = 50, CPA) could be confirmed by real-time PCR in an additional set of breast cancers (n = 36). Next we studied ITIH2 expression on the protein level by analyzing a comprehensive tissue micro array including 185 invasive breast cancer specimens. We found a strong correlation (p < 0.001) between ITIH2 expression and estrogen receptor (ER) expression, indicating that ER may be involved in the regulation of this ECM molecule. Conclusion Altogether, this is the first systematic analysis on the differential expression of ITIH genes in human cancer, showing frequent downregulation that may be associated with initiation and/or progression of these malignancies.

  17. Parallel Boltzmann machines : a mathematical model

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.

    1991-01-01

    A mathematical model is presented for the description of parallel Boltzmann machines. The framework is based on the theory of Markov chains and combines a number of previously known results into one generic model. It is argued that parallel Boltzmann machines maximize a function consisting of a

  18. Analysis of Heat Transfer and Pressure Drop for a Gas Flowing Through a set of Multiple Parallel Flat Plates at High Temperatures

    Science.gov (United States)

    Einstein, Thomas H.

    1961-01-01

    Equations were derived representing heat transfer and pressure drop for a gas flowing in the passages of a heater composed of a series of parallel flat plates. The plates generated heat, which was transferred to the flowing gas by convection. The relatively high temperature level of this system necessitated the consideration of heat transfer between the plates by radiation. The equations were solved on an IBM 704 computer, and results were obtained for hydrogen as the working fluid for a series of cases with a gas inlet temperature of 200 R, an exit temperature of 5000 R, and exit Mach numbers ranging from 0.2 to 0.8. The length of the heater composed of the plates ranged from 2 to 4 feet, and the spacing between the plates was varied from 0.003 to 0.01 foot. Most of the results were for a five-plate heater, but results are also given for nine plates to show the effect of increasing the number of plates. The heat generation was assumed to be identical for each plate but was varied along the length of the plates. The axial variation of power used to obtain the results presented is the so-called "2/3-cosine variation." The boundaries surrounding the set of plates, and parallel to it, were assumed adiabatic, so that all the power generated in the plates went into heating the gas. The results are presented in plots of maximum plate and maximum adiabatic wall temperatures as functions of parameters proportional to f(L/D), for the cases of both laminar and turbulent flow. Here f is the Fanning friction factor and L/D is the length-to-equivalent-diameter ratio of the passages in the heater. The pressure drop through the heater is presented as a function of these same parameters, the exit Mach number, and the pressure at the exit of the heater.
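The parameter f(L/D) used in the plots combines the Fanning friction factor with the passage geometry; for incompressible flow the corresponding pressure drop is given by the standard Fanning equation. A sketch with illustrative numbers only (the report's actual analysis treats compressible, heated hydrogen flow, which is more involved):

```python
def fanning_pressure_drop(f, length, d_eq, rho, velocity):
    """Incompressible-flow pressure drop from the Fanning friction factor:
    dp = 4 * f * (L / D) * (rho * v^2 / 2), with L/D the length-to-
    equivalent-diameter ratio of the passage."""
    return 4.0 * f * (length / d_eq) * 0.5 * rho * velocity ** 2
```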

  19. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  20. On the Organization of Parallel Operation of Some Algorithms for Finding the Shortest Path on a Graph on a Computer System with Multiple Instruction Stream and Single Data Stream

    Directory of Open Access Journals (Sweden)

    V. E. Podol'skii

    2015-01-01

    Full Text Available The paper considers the implementation of the Bellman-Ford and Lee algorithms to find the shortest graph path on a computer system with multiple instruction stream and single data stream (MISD). The MISD computer is a computer that executes commands of arithmetic-logical processing (on the CPU) and commands of structures processing (on the structures processor) in parallel on a single data stream. Transformation of sequential programs into MISD programs is a labor-intensive process because it requires the stream of arithmetic-logical processing to be manually separated from that of structures processing. Algorithms based on the processing of data structures (e.g., algorithms on graphs) show high performance on a MISD computer. The Bellman-Ford and Lee algorithms for finding the shortest path on a graph are representatives of these algorithms. They are applied in robotics for automatic planning of robot movement in situ. Modifications of the Bellman-Ford and Lee algorithms for finding the shortest graph path in coprocessor MISD mode, as well as parallel MISD modifications of these algorithms, were first obtained in this article. Thus, this article continues a series of studies on the transformation of sequential algorithms into MISD ones (Dijkstra's and Ford-Fulkerson's algorithms) and is of a distinctly applied nature. The article also presents the analysis results of the Bellman-Ford and Lee algorithms in MISD mode. The paper formulates the basic trends of a technique for parallelization of algorithms into an arithmetic-logical processing stream and a structures processing stream. Among the key areas for future research, development of a mathematical approach to provide a subsequently formalized and automated process of parallelizing sequential algorithms between the CPU and structures processor is highlighted. Among the mathematical models that can be used in future studies there are graph models of algorithms (e.g., the dependency graph of a program). Due to the high
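
    The abstract does not reproduce the algorithms themselves; for reference, the sequential Bellman-Ford algorithm it builds on can be sketched as follows (a minimal Python sketch on an illustrative graph, not the MISD variant described in the paper):

    ```python
    def bellman_ford(num_vertices, edges, source):
        """Single-source shortest paths; edges is a list of (u, v, weight)."""
        INF = float("inf")
        dist = [INF] * num_vertices
        dist[source] = 0
        # Relax every edge up to |V| - 1 times.
        for _ in range(num_vertices - 1):
            updated = False
            for u, v, w in edges:
                if dist[u] != INF and dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    updated = True
            if not updated:
                break  # early exit: no distance changed in this pass
        return dist

    # Illustrative graph: 4 vertices, distances from vertex 0.
    edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
    print(bellman_ford(4, edges, 0))  # [0, 3, 1, 4]
    ```

    The inner edge-relaxation loop is the structure-processing work that the paper's MISD modification would offload to the structures processor.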

  1. Blood Mononuclear Cell Mitochondrial Respiratory Chain Complex IV Activity is Decreased in Multiple Sclerosis Patients: Effects of β-Interferon Treatment

    Directory of Open Access Journals (Sweden)

    Iain Hargreaves

    2018-02-01

    Full Text Available Objectives: Evidence of mitochondrial respiratory chain (MRC) dysfunction and oxidative stress has been implicated in the pathophysiology of multiple sclerosis (MS). However, at present, there is no reliable low-invasive surrogate available to evaluate mitochondrial function in these patients. In view of the particular sensitivity of MRC complex IV to oxidative stress, the aim of this study was to assess blood mononuclear cell (BMNC) MRC complex IV activity in MS patients and compare these results to age-matched controls and MS patients on β-interferon treatment. Methods: A spectrophotometric enzyme assay was employed to measure MRC complex IV activity in blood mononuclear cells obtained from multiple sclerosis patients and age-matched controls. Results: MRC complex IV activity was found to be significantly decreased (p < 0.05) in MS patients (2.1 ± 0.8 k/nmol × 10−3; mean ± SD) when compared to the controls (7.2 ± 2.3 k/nmol × 10−3). Complex IV activity in MS patients on β-interferon (4.9 ± 1.5 k/nmol × 10−3) was not found to be significantly different from that of the controls. Conclusions: This study has indicated evidence of peripheral MRC complex IV deficiency in MS patients and has highlighted the potential utility of BMNCs as a means to evaluate mitochondrial function in this disorder. Furthermore, the reported improvement of complex IV activity may provide novel insights into the mode(s) of action of β-interferon.

  2. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e. g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  3. Markov Tail Chains

    OpenAIRE

    janssen, Anja; Segers, Johan

    2013-01-01

    The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in Rd. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In ...

  4. System performances of optical space code-division multiple-access-based fiber-optic two-dimensional parallel data link.

    Science.gov (United States)

    Nakamura, M; Kitayama, K

    1998-05-10

    Optical space code-division multiple access is a scheme to multiplex and link data between two-dimensional processors such as smart pixels and spatial light modulators or arrays of optical sources like vertical-cavity surface-emitting lasers. We examine the multiplexing characteristics of optical space code-division multiple access by using optical orthogonal signature patterns. The probability density function of interference noise in interfering optical orthogonal signature patterns is calculated. The bit-error rate is derived from the result and plotted as a function of receiver threshold, code length, code weight, and number of users. Furthermore, we propose a prethresholding method to suppress the interference noise, and we experimentally verify that the method works effectively in improving system performance.

  5. An open 8-channel parallel transmission coil for static and dynamic 7T MRI of the knee and ankle joints at multiple postures.

    Science.gov (United States)

    Jin, Jin; Weber, Ewald; Destruel, Aurelien; O'Brien, Kieran; Henin, Bassem; Engstrom, Craig; Crozier, Stuart

    2018-03-01

    We present the initial in vivo imaging results of an open-architecture eight-channel parallel transmission (pTx) transceive radiofrequency (RF) coil array that was designed and constructed for static and dynamic 7T MRI of the knee and ankle joints. The pTx coil has a U-shaped dual-row configuration (200 mm overall length longitudinally) that allows static and dynamic imaging of the knee and ankle joints at various postures and during active movements. This coil structure, in combination with B1 shimming, allows flexible configuration of B1 transmit profiles, with good homogeneity over 120-mm regions of interest. This coil enabled high-resolution gradient echo (e.g., 3D dual-echo steady state [DESS] and 3D multiecho data image combination [MEDIC]) and turbo spin echo (TSE) imaging (e.g., with proton density weighting [PDw], PDw with fat saturation, and T1 and T2 weightings) with local RF energy absorption rates well below regulatory limits. High-resolution 2D and 3D image series (e.g., 0.3 mm in-plane resolution for TSE, 0.47 mm isotropic for DESS and MEDIC) were obtained from the knee and ankle joints with excellent tissue contrast. Dynamic imaging during continuous knee and ankle flexion-extension cycles was successfully acquired. The new open pTx coil array provides versatility for high-quality static and dynamic MRI of the knee and ankle joints at 7T. Magn Reson Med 79:1804-1816, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  6. Parallel pathways of ethoxylated alcohol biodegradation under aerobic conditions

    International Nuclear Information System (INIS)

    Zembrzuska, Joanna; Budnik, Irena; Lukaszewski, Zenon

    2016-01-01

    Non-ionic surfactants (NS) are a major component of the surfactant flux discharged into surface water, and alcohol ethoxylates (AE) are the major component of this flux. Therefore, biodegradation pathways of AE deserve more thorough investigation. The aim of this work was to investigate the stages of biodegradation of homogeneous oxyethylated dodecanol C12E9, having 9 oxyethylene subunits, under aerobic conditions. Enterobacter strain Z3 bacteria were chosen as biodegrading organisms under conditions with C12E9 as the sole source of organic carbon. Bacterial consortia of river water were used in a parallel test as an inoculum for comparison. The LC-MS technique was used to identify the products of biodegradation. Liquid-liquid extraction with ethyl acetate was selected for the isolation of C12E9 and metabolites from the biodegradation broth. The LC-MS/MS technique operating in the multiple reaction monitoring (MRM) mode was used for quantitative determination of C12E9, C12E8, C12E7 and C12E6. Apart from the substrate, the homologues C12E8, C12E7 and C12E6, being metabolites of C12E9 biodegradation by shortening of the oxyethylene chain, as well as intermediate metabolites having a carboxyl end group in the oxyethylene chain (C12E8COOH, C12E7COOH, C12E6COOH and C12E5COOH), were identified. Poly(ethylene glycols) (E) having 9, 8 and 7 oxyethylene subunits were also identified, indicating parallel central fission of C12E9 and its metabolites. Similar results were obtained with river water as inoculum. It is concluded that AE, under aerobic conditions, are biodegraded via two parallel pathways: by central fission with the formation of PEG, and by ω-oxidation of the oxyethylene chain with the formation of carboxylated AE and subsequent shortening of the oxyethylene chain by a single unit. - Highlights: • Two parallel biodegradation pathways of alcohol ethoxylates have been discovered. • Apart from central fission

  7. Parallel Solid-Phase Synthesis Using a New Diethylsilylacetylenic Linker and Leading to Mestranol Derivatives with Potent Antiproliferative Activities on Multiple Cancer Cell Lines.

    Science.gov (United States)

    Dutour, Raphael; Maltais, Rene; Perreault, Martin; Roy, Jenny; Poirier, Donald

    2018-03-07

    RM-133 belongs to a new family of aminosteroid derivatives demonstrating interesting anticancer properties, as confirmed in vivo in four mouse cancer xenograft models. However, the metabolic stability of RM-133 needs to be improved. After investigation, the replacement of its androstane scaffold by a more stable estrane scaffold led to the development of the mestranol derivative RM-581. Using solid-phase strategy involving five steps, we quickly synthesized a series of RM-581 analogs using the recently-developed diethylsilyl acetylenic linker. To establish structure-activity relationships, we then investigated their antiproliferative potency on a panel of cancer cell lines from various cancers (breast, prostate, ovarian and pancreatic). Some of the mestranol derivatives have shown in vitro anticancer activities that are close to, or better than those observed for RM-581. Compound 23, a mestranol derivative having a ((3,5-dimethylbenzoyl)-L-prolyl)piperazine side chain at position C2, was found to be active as an antiproliferative agent (IC50 = 0.38 ± 0.34 to 3.17 ± 0.10 µM) and to be twice as active as RM-581 on LNCaP, PC-3, MCF-7, PANC-1 and OVCAR-3 cancer cells (IC50 = 0.56 ± 0.30, 0.89 ± 0.63, 1.36 ± 0.31, 2.47 ± 0.91 and 3.17 ± 0.10 µM, respectively). Easily synthesized in good yields by both solid-phase organic synthesis and classic solution-phase chemistry, this promising candidate could be used as an antiproliferative agent on a variety of cancers, notably pancreatic and ovarian cancers, both having very bad prognoses. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  8. Impact of initial FDG-PET/CT and serum-free light chain on transformation of conventionally defined solitary plasmacytoma to multiple myeloma.

    Science.gov (United States)

    Fouquet, Guillemette; Guidez, Stéphanie; Herbaux, Charles; Van de Wyngaert, Zoé; Bonnet, Sarah; Beauvais, David; Demarquette, Hélène; Adib, Salim; Hivert, Bénédicte; Wemeau, Mathieu; Berthon, Céline; Terriou, Louis; Coiteux, Valérie; Macro, Margaret; Decaux, Olivier; Facon, Thierry; Huglo, Damien; Leleu, Xavier

    2014-06-15

    Solitary plasmacytoma (SP) is a localized proliferation of monoclonal plasma cells in either bone or soft tissue, without evidence of multiple myeloma (MM), and whose prognosis is marked by a high risk of transformation to MM. We studied the impact of FDG-PET/CT (2-[18F]fluoro-2-deoxy-D-glucose positron emission tomography-computed tomography) on the risk of transformation of SP to overt MM among other markers in a series of 43 patients diagnosed with SP. Median age was 57.5 years; 48% of patients had an abnormal involved serum-free light chain (sFLC) value, and 64% had an abnormal sFLC ratio at diagnosis. Thirty-three percent had two or more hypermetabolic lesions on initial PET/CT, and 20% had two or more focal lesions on initial MRI. With a median follow-up of 50 months, 14 patients transformed to MM, with a median time to transformation (TTMM) of 71 months. The risk factors that significantly shortened TTMM at diagnosis were two or more hypermetabolic lesions on PET/CT, an abnormal sFLC ratio and involved sFLC, and, to a lesser extent at completion of treatment, absence of normalization of the involved sFLC and of PET/CT or MRI. In a multivariate analysis, abnormal initial involved sFLC [OR = 10; 95% confidence interval (CI), 1-87; P = 0.008] and PET/CT (OR = 5; 95% CI, 0-9; P = 0.032) independently shortened TTMM. An abnormal involved sFLC value and the presence of at least two hypermetabolic lesions on PET/CT at diagnosis of SP were the two predictors of early evolution to myeloma in our series. This data analysis will need confirmation in a larger study, and the study of these two risk factors may lead to a different management of patients with SP in the future. ©2014 American Association for Cancer Research.

  9. [Establishment of a novel HLA genotyping method for preimplantation genetic diagnosis using multiple displacement amplification-polymerase chain reaction-sequencing based technique].

    Science.gov (United States)

    Zhang, Yinfeng; Luo, Haining; Zhang, Yunshan

    2015-12-01

    To establish a novel HLA genotyping method for preimplantation genetic diagnosis (PGD) using a multiple displacement amplification-polymerase chain reaction-sequencing-based technique (MDA-PCR-SBT). Peripheral blood samples and 76 1PN, 2PN and 3PN discarded embryos from 9 couples were collected. The alleles of the HLA-A, B and DR loci were detected from the MDA product with the PCR-SBT method. The HLA genotypes of the parental peripheral blood samples were analyzed with the same protocol. The genotypes of the specific HLA region were evaluated for distinguishing the segregation of haplotypes among the family members, and primary HLA matching was performed between the embryos. The 76 embryos were subjected to MDA and 74 (97.4%) were successfully amplified. For the 34 embryos from the single-blastomere group, the amplification rate was 94.1%, and for the 40 embryos in the two-blastomere group, the rate was 100%. The dropout rates for the DQ allele and DR allele were 1.3% and 0, respectively. The positive rate for MDA in the single-blastomere group was 100%, with dropout rates for the DQ allele and DR allele of 1.5% and 0, respectively. The positive rate of MDA for the two-blastomere group was 100%, with dropout rates for both DQ and DR alleles of 0. The recombination rate of fetal HLA was 20.2% (30/148). Due to improper classification and abnormally fertilized embryos, the proportion of HLA-matched embryos was 20.3% (15/74), which was lower than the theoretical value of 25%. PGD with HLA matching can facilitate creation of an HLA-identical donor (saviour child) of umbilical cord blood or bone marrow stem cells for an affected sibling with a genetic disease. Therefore, preimplantation HLA matching may provide a tool for couples desiring to conceive a potential donor progeny for transplantation for a sibling with a life-threatening disorder.

  10. Investigating the Effects of Information Technology on the Capabilities and Performance of the Supply Chain of Dairy Companies in Fars Province: A Multiple Case Study

    Directory of Open Access Journals (Sweden)

    Ali Mohammadi

    2011-09-01

    Full Text Available Nowadays all organizations are somehow involved with information technology revolutions, and the applicable aspects of information technology are evident in every part of the supply chain, from the relationship with suppliers and producers to the relationship with customers. In other words, the application of information technology is influential in the improvement of the supply chain. In this study, the effect of information technology tools on the capabilities and performance of the supply chain in the dairy companies of Fars province is investigated. In this research, the information technology tools examined are supply chain communication systems (SCCS), electronic data interchange (EDI), electronic mail (email), bar-coding, and radio frequency identification (RFID); supply chain capability is examined along four dimensions, namely information exchange, coordination, interfirm activity integration, and supply chain responsiveness; and supply chain performance is measured by two variables, marketing performance and financial performance. The results indicate that using information technology tools has an effect on the capabilities, and hence the performance, of the supply chain.

  11. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...
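
    The space problem the thesis describes can be illustrated outside NESL (an illustrative Python sketch, not the thesis's implementation): materializing all n³ scalar products of an n×n matrix multiplication before summing uses O(n³) space, whereas a streamed evaluation consumes each product as it is produced and needs only O(n²):

    ```python
    def matmul_materializing(a, b):
        """Builds the full list of n^3 scalar products first (O(n^3) space),
        mimicking the fully materializing execution strategy."""
        n = len(a)
        products = [(i, j, a[i][k] * b[k][j])
                    for i in range(n) for j in range(n) for k in range(n)]
        c = [[0] * n for _ in range(n)]
        for i, j, p in products:
            c[i][j] += p
        return c

    def matmul_streaming(a, b):
        """Consumes each scalar product as it is generated (O(n^2) space)."""
        n = len(a)
        c = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                # Generator expression: no intermediate list is built.
                c[i][j] = sum(a[i][k] * b[k][j] for k in range(n))
        return c

    a = [[1, 2], [3, 4]]
    b = [[5, 6], [7, 8]]
    assert matmul_materializing(a, b) == matmul_streaming(a, b) == [[19, 22], [43, 50]]
    ```

    The thesis's contribution is to get the second behavior automatically for bulk data-parallel operations, while still allowing any degree of parallelism over the stream.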

  12. A new system for parallel drug screening against multiple-resistant HIV mutants based on lentiviral self-inactivating (SIN vectors and multi-colour analyses

    Directory of Open Access Journals (Sweden)

    Prokofjeva Maria M

    2013-01-01

    Full Text Available Abstract Background Despite progress in the development of combined antiretroviral therapies (cART, HIV infection remains a significant challenge for human health. Current problems of cART include multi-drug-resistant virus variants, long-term toxicity and enormous treatment costs. Therefore, the identification of novel effective drugs is urgently needed. Methods We developed a straightforward screening approach for simultaneously evaluating the sensitivity of multiple HIV gag-pol mutants to antiviral drugs in one assay. Our technique is based on multi-colour lentiviral self-inactivating (SIN LeGO vector technology. Results We demonstrated the successful use of this approach for screening compounds against up to four HIV gag-pol variants (wild-type and three mutants simultaneously. Importantly, the technique was adapted to Biosafety Level 1 conditions by utilising ecotropic pseudotypes. This allowed upscaling to a large-scale screening protocol exploited by pharmaceutical companies in a successful proof-of-concept experiment. Conclusions The technology developed here facilitates fast screening for anti-HIV activity of individual agents from large compound libraries. Although drugs targeting gag-pol variants were used here, our approach permits screening compounds that target several different, key cellular and viral functions of the HIV life-cycle. The modular principle of the method also allows the easy exchange of various mutations in HIV sequences. In conclusion, the methodology presented here provides a valuable new approach for the identification of novel anti-HIV drugs.

  13. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message-passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
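
    The paper's implementations are not reproduced in the abstract; the core idea of barrel-sort can be sketched serially as follows (a minimal Python sketch under assumed details, with uniform value ranges as the partitioning rule; on the message-passing machine each barrel would live on one processor):

    ```python
    def barrel_sort(keys, num_barrels):
        """Serial sketch of barrel-sort: partition keys into contiguous
        value ranges ('barrels'), then sort each barrel independently.
        Concatenating the barrels in range order yields the sorted list."""
        lo, hi = min(keys), max(keys)
        width = (hi - lo) / num_barrels or 1  # avoid zero width if all keys equal
        barrels = [[] for _ in range(num_barrels)]
        for k in keys:
            idx = min(int((k - lo) / width), num_barrels - 1)
            barrels[idx].append(k)  # the 'message' sent to barrel idx
        result = []
        for barrel in barrels:  # barrels are already ordered by value range
            result.extend(sorted(barrel))  # each barrel sorts locally
        return result

    print(barrel_sort([42, 7, 19, 3, 88, 55, 21], 4))  # [3, 7, 19, 21, 42, 55, 88]
    ```

    The per-barrel sorts are independent, which is what makes the scheme attractive when message-passing overhead is high: each processor communicates once during partitioning and then works locally.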

  14. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code ran faster than previous serial programs and discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques and were used in the investigation of various physical phenomena.

  15. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
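
    The cycle-end rendezvous described above can be illustrated with a toy parallel Monte Carlo (pi estimation rather than criticality, and Python's multiprocessing rather than MPI; the pool's `map` call plays the role of the rendezvous point where all workers' results must be gathered before the next cycle starts):

    ```python
    import random
    from multiprocessing import Pool

    def batch_hits(args):
        """One worker's batch: count samples inside the unit quarter-circle."""
        seed, n = args
        rng = random.Random(seed)  # per-worker stream, seeded deterministically
        return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

    def parallel_pi(num_workers=4, cycles=5, samples_per_batch=20000):
        hits = total = 0
        with Pool(num_workers) as pool:
            for cycle in range(cycles):
                work = [(cycle * num_workers + w, samples_per_batch)
                        for w in range(num_workers)]
                # Rendezvous point: every worker's batch result must arrive
                # before the tally is updated and the next cycle can begin --
                # the synchronization cost the paper identifies as a bottleneck.
                hits += sum(pool.map(batch_hits, work))
                total += num_workers * samples_per_batch
        return 4.0 * hits / total

    if __name__ == "__main__":
        print(round(parallel_pi(), 2))  # close to 3.14
    ```

    Here the gather is cheap because only one integer per worker is exchanged; in a criticality calculation the full fission source distribution must be collected and redistributed each cycle, which is why the rendezvous dominates as the processor count grows.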

  16. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  17. An Optimization Algorithm for Multipath Parallel Allocation for Service Resource in the Simulation Task Workflow

    Directory of Open Access Journals (Sweden)

    Zhiteng Wang

    2014-01-01

    Full Text Available Service-oriented modeling and simulation are hot issues in the field of modeling and simulation, and service resources must be called when a simulation task workflow is running. How to optimize the service resource allocation to ensure that the task is completed effectively is an important issue in this area. In the military modeling and simulation field, it is important to improve the probability of success and the timeliness of the simulation task workflow. Therefore, this paper proposes an optimization algorithm for multipath parallel allocation of service resources, in which a multipath parallel service resource allocation model is built and a multiple-chains-coding quantum optimization algorithm is used for optimization and solution. The multiple-chains coding scheme extends the parallel search space of the quantum optimization algorithm to improve search efficiency. Through simulation experiments, this paper investigates how different optimization algorithms, service allocation strategies, and path numbers affect the probability of success of the simulation task workflow, and the simulation results show that the optimization algorithm for multipath parallel service resource allocation is an effective method to improve the probability of success and timeliness of the simulation task workflow.

  18. Safety and pharmacokinetics of single and multiple intravenous bolus doses of diclofenac sodium compared with oral diclofenac potassium 50 mg: A randomized, parallel-group, single-center study in healthy subjects.

    Science.gov (United States)

    Munjal, Sagar; Gautam, Anirudh; Okumu, Franklin; McDowell, James; Allenby, Kent

    2016-01-01

    In a randomized, parallel-group, single-center study in 42 healthy adults, the safety and pharmacokinetic parameters of an intravenous formulation of 18.75 and 37.5 mg diclofenac sodium (DFP-08) following single- and multiple-dose bolus administration were compared with diclofenac potassium 50 mg oral tablets. Mean AUC0-inf values for a 50-mg oral tablet and an 18.75-mg intravenous formulation were similar (1308.9 [393.0] vs 1232.4 [147.6]). As measured by the AUC, DFP-08 18.75 mg and 37.5 mg demonstrated dose proportionality for extent of exposure. One subject in each of the placebo and DFP-08 18.75-mg groups and 2 subjects in the DFP-08 37.5-mg group reported adverse events that were considered by the investigator to be related to the study drug. All were mild in intensity and did not require treatment. Two subjects in the placebo group and 1 subject in the DFP-08 18.75-mg group reported grade 1 thrombophlebitis; no subjects reported higher than grade 1 thrombophlebitis after receiving a single intravenous dose. The 18.75- and 37.5-mg doses of intravenous diclofenac (single and multiple) were well tolerated for 7 days. Additional efficacy and safety studies are required to fully characterize the product. © 2015, The American College of Clinical Pharmacology.

  19. Self-lacing atom chains

    International Nuclear Information System (INIS)

    Zandvliet, Harold J W; Van Houselt, Arie; Poelsema, Bene

    2009-01-01

    The structural and electronic properties of self-lacing atomic chains on Pt modified Ge(001) surfaces have been studied using low-temperature scanning tunnelling microscopy and spectroscopy. The self-lacing chains have a cross section of only one atom, are perfectly straight, thousands of atoms long and virtually defect free. The atomic chains are composed of dimers that have their bonds aligned in a direction parallel to the chain direction. At low temperatures the atomic chains undergo a Peierls transition: the periodicity of the chains doubles from a 2 x to a 4 x periodicity and an energy gap opens up. Furthermore, at low temperatures (T<80 K) novel quasi-one-dimensional electronic states are found. These quasi-one-dimensional electronic states originate from an electronic state of the underlying terrace that is confined between the atomic chains.

  20. The cause multiplicity and the multiple cause style of adverse events in Japanese nuclear power plants

    International Nuclear Information System (INIS)

    Miyazaki, Takamasa

    2008-01-01

    An adverse event in a nuclear power plant occurs due to either one cause or multiple causes. To consider ways of preventing adverse events, it is useful to clarify whether events are caused by single or multiple causes. In this study, multiple causation is expressed using the cause multiplicity and the multiple cause style. Classified causes of adverse events in Japanese nuclear power plants were analyzed, with the following results: the cause multiplicity of serious adverse events is higher than that of minor adverse events, and the multiple cause style can be expressed by combining two styles, a series type and a parallel type. Also, for a multiple-cause event, a new method of displaying the event is presented as a cause-chain chart, in which the cause items are arranged sequentially and connected according to the mutual relations among the causes. This new display method shows the whole flow of issues concerning the event more simply than the conventional display of the chain of phenomena, and would be useful for identifying the terminating point of the chain of causes. (author)

  1. A faster, high resolution, mtPA-GFP-based mitochondrial fusion assay acquiring kinetic data of multiple cells in parallel using confocal microscopy.

    Science.gov (United States)

    Lovy, Alenka; Molina, Anthony J A; Cerqueira, Fernanda M; Trudeau, Kyle; Shirihai, Orian S

    2012-07-20

    exposing loaded cells (3-15 nM TMRE) to the imaging parameters that will be used in the assay (perhaps 7 stacks of 6 optical sections in a row), and assessing cell health after 2 hours. If the mitochondria appear too fragmented and cells are dying, other mitochondrial markers, such as dsRED or Mitotracker red, could be used instead of TMRE. The mtPAGFP method has revealed details about mitochondrial network behavior that could not be visualized using other methods. For example, we now know that mitochondrial fusion can be full or transient, where matrix content can mix without changing the overall network morphology. Additionally, we know that the probability of fusion is independent of contact duration and organelle dimension, and is influenced by organelle motility, membrane potential and the history of previous fusion activity. In this manuscript, we describe a methodology for scaling up the previously published protocol using mtPAGFP and 15 nM TMRE in order to examine multiple cells at a time and improve the time efficiency of data collection without sacrificing subcellular resolution. This has been made possible by the use of an automated microscope stage and programmable image acquisition software. Zen software from Zeiss allows the user to mark and track several designated cells expressing mtPAGFP. Each of these cells can be photoactivated in a particular region of interest, and stacks of confocal slices can be monitored for the mtPAGFP signal as well as TMRE at specified intervals. Other confocal systems could be used to perform this protocol provided there is an automated stage that is programmable, an incubator with CO2, and a means by which to photoactivate the PAGFP: either a multiphoton laser or a 405 nm diode laser.

  2. Distributed parallel messaging for multiprocessor systems

    Science.gov (United States)

    Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka

    2013-06-04

    A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple packet reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling the writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.

  3. Synchronization Techniques in Parallel Discrete Event Simulation

    OpenAIRE

    Lindén, Jonatan

    2018-01-01

    Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...

  4. Falling chains

    OpenAIRE

    Wong, Chun Wa; Yasui, Kosuke

    2005-01-01

    The one-dimensional fall of a folded chain with one end suspended from a rigid support and a chain falling from a resting heap on a table is studied. Because their Lagrangians contain no explicit time dependence, the falling chains are conservative systems. Their equations of motion are shown to contain a term that enforces energy conservation when masses are transferred between subchains. We show that Cayley's 1857 energy nonconserving solution for a chain falling from a resting heap is inco...

  5. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  6. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in case of nucleobases Y and Z a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one...... of the phosphate groups in Watson-Crick strand. Also, it was shown that the nucleobase Y made a good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with LNA moiety was forced to twist out of plane of Watson-Crick base pair which......The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When, the anti...

  7. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  8. Integration of paper-based microarray and time-of-flight secondary ion mass spectrometry (ToF-SIMS) for parallel detection and quantification of molecules in multiple samples automatically.

    Science.gov (United States)

    Chu, Kuo-Jui; Chen, Po-Chun; You, Yun-Wen; Chang, Hsun-Yun; Kao, Wei-Lun; Chu, Yi-Hsuan; Wu, Chen-Yi; Shyue, Jing-Jong

    2018-04-16

    With its low-cost fabrication and ease of modification, paper-based analytical devices have developed rapidly in recent years. Microarrays allow automatic analysis of multiple samples or multiple reactions with minimal sample consumption. While cellulose paper is generally used, its high background in spectrometry outside the visible range has largely limited its applications to colorimetric analysis. In this work, glass-microfiber paper is used as the substrate for a microarray. The glass microfiber is essentially chemically inert SiOx, and the lower background from this inorganic microfiber avoids interference with organic analytes in various spectrometers. However, the generally used wax printing fails to wet glass microfibers to form hydrophobic barriers. Therefore, to prepare the hydrophobic-hydrophilic pattern, the glass-microfiber paper was first modified with an octadecyltrichlorosilane (OTS) self-assembled monolayer (SAM) to make the paper hydrophobic. A hydrophilic microarray was then prepared using a CO2 laser scriber that selectively removed the OTS layer in a designed pattern. One-microliter aqueous drops of peptides at various concentrations were then dispensed inside the round patterns where the OTS SAM had been removed, while the surrounding area with the OTS layer served as a barrier to separate the drops. The resulting specimen of multiple spots was automatically analyzed with a time-of-flight secondary ion mass spectrometer (ToF-SIMS), and all of the secondary ions were collected. Among the various cluster ion sources developed over the past decade, pulsed C60+ was selected as the primary ion because of its high secondary ion intensity in the high mass region, its minimal alteration of the surface when operating within the static limit, and its spatial resolution at the ∼μm level. In the resulting spectra, parent ions of various peptides (in the forms [M+H]+ and [M+Na]+) were readily identified for parallel detection of molecules in a mixture

  9. Incidence and outcome of patients starting renal replacement therapy for end-stage renal disease due to multiple myeloma or light-chain deposit disease: an ERA-EDTA Registry study

    DEFF Research Database (Denmark)

    Tsakiris, D.J.; Stel, V.S.; Finne, P.

    2010-01-01

    Background. Information on demographics and survival of patients starting renal replacement therapy (RRT) for end-stage renal disease (ESRD) due to multiple myeloma (MM) or light-chain deposit disease (LCDD) is scarce. The aim of this study was to describe the incidence, characteristics, causes...... causes (non-MM) was observed overtime. Patient survival on RRT was examined, unadjusted and adjusted for age and gender. Results. Of the 159 637 patients on RRT, 2453 (1.54%) had MM or LCDD. The incidence of RRT for ESRD due to MM or LCDD, adjusted for age and gender, increased from 0.70 pmp in 1986...

  10. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  11. Medical Image Retrieval Based On the Parallelization of the Cluster Sampling Algorithm

    OpenAIRE

    Ali, Hesham Arafat; Attiya, Salah; El-henawy, Ibrahim

    2017-01-01

    In this paper we develop parallel cluster sampling algorithms and show that a multi-chain version is embarrassingly parallel and can be used efficiently for medical image retrieval among other applications.
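
    Because independent chains share no state, a multi-chain sampler of this kind parallelizes embarrassingly: each chain runs on its own worker and results are simply pooled. A minimal sketch, assuming a 1-D standard-normal stand-in for the retrieval posterior (the target, step size, and chain count are illustrative, not from the paper):

```python
import math
import random
from multiprocessing.dummy import Pool   # thread pool; same API as multiprocessing.Pool

def run_chain(args):
    # One independent Metropolis chain on a standard-normal target
    # (a hypothetical stand-in for an image-retrieval posterior).
    seed, n_steps = args
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, 1.0)
        # Accept with the ratio of unnormalized N(0, 1) densities.
        if rng.random() < min(1.0, math.exp((x * x - prop * prop) / 2.0)):
            x = prop
        samples.append(x)
    return samples

# The chains share nothing, so a pool can run them all at once.
with Pool(4) as pool:
    chains = pool.map(run_chain, [(seed, 5000) for seed in range(4)])
pooled = [x for chain in chains for x in chain]
mean = sum(pooled) / len(pooled)
```

    Swapping in `multiprocessing.Pool` (or separate machines) changes nothing in the chain logic, which is exactly the "embarrassingly parallel" property the abstract refers to.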

  12. On Production and Green Transportation Coordination in a Sustainable Global Supply Chain

    Directory of Open Access Journals (Sweden)

    Feng Guo

    2017-11-01

    Full Text Available This paper addresses a coordination problem of production and green transportation and the effects of production and transportation coordination on supply chain sustainability in a global supply chain environment with the consideration of important realistic characteristics, including parallel machines, different order processing complexities, fixed delivery departure times, green transportation and multiple transportation modes. We formulate the measurements for carbon emissions of different transportation modes, including air, sea and land transportation. A hybrid genetic algorithm-based optimization approach is developed to handle this problem, in which a hybrid genetic algorithm and heuristic procedures are combined. The effectiveness of the proposed approach is validated by means of various problem instances. We observe that the coordination of production and green transportation has a large effect on the overall supply chain sustainability, which can reduce the total supply chain cost by 9.60% to 21.90%.

  13. Massively Parallel Dimension Independent Adaptive Metropolis

    KAUST Repository

    Chen, Yuxin

    2015-05-14

    This work considers black-box Bayesian inference over high-dimensional parameter spaces. The well-known and widely respected adaptive Metropolis (AM) algorithm is extended herein to asymptotically scale uniformly with respect to the underlying parameter dimension, by respecting the variance, for Gaussian targets. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this massively parallel dimension-independent adaptive Metropolis (MPDIAM) GPU implementation exhibits a factor of four improvement versus the CPU-based Intel MKL version alone, which is itself already a factor of three improvement versus the serial version. The scaling to multiple CPUs and GPUs exhibits a form of strong scaling in terms of the time necessary to reach a certain convergence criterion, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. This is illustrated by efficiently sampling from several Gaussian and non-Gaussian targets for dimension d ≥ 1000.
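
    The AM idea underlying DIAM can be sketched in one dimension: the proposal scale is adapted from the running sample variance, scaled by 2.4² (Haario et al.'s rule). A minimal single-chain sketch with an illustrative Gaussian target (no GPU acceleration and no synchronized concurrent chains, which are the paper's contributions):

```python
import math
import random

def adaptive_metropolis(logpi, x0, n_steps, seed=0):
    # 1-D adaptive Metropolis: the proposal scale tracks the empirical
    # variance of past samples, scaled by 2.4**2 (Haario et al.'s rule).
    rng = random.Random(seed)
    x, mean, m2 = x0, x0, 1.0        # m2 seeded at 1 to avoid a zero scale
    samples = []
    for n in range(1, n_steps + 1):
        scale = math.sqrt(2.4 ** 2 * (m2 / n + 1e-6))
        prop = x + rng.gauss(0.0, scale)
        if math.log(rng.random() + 1e-300) < logpi(prop) - logpi(x):
            x = prop
        samples.append(x)
        d = x - mean                 # Welford's online mean/variance update
        mean += d / n
        m2 += d * (x - mean)
    return samples

# Illustrative target: a Gaussian with standard deviation 3.
samples = adaptive_metropolis(lambda t: -t * t / 18.0, 0.0, 20000)
est_sd = (sum(s * s for s in samples) / len(samples)) ** 0.5
```

    In d dimensions the scalar variance becomes an adapted covariance matrix (scaled by 2.4²/d), which is where the GPU linear algebra in the paper pays off.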

  14. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  15. 3D printed soft parallel actuator

    Science.gov (United States)

    Zolfagharian, Ali; Kouzani, Abbas Z.; Khoo, Sui Yang; Noshadi, Amin; Kaynak, Akif

    2018-04-01

    This paper presents a 3-dimensional (3D) printed soft parallel contactless actuator for the first time. The actuator involves an electro-responsive parallel mechanism made of two segments, namely an active chain and a passive chain, both 3D printed. The active chain is attached to the ground at one end and comprises two actuator links made of responsive hydrogel. The passive chain, on the other hand, is attached to the active chain at one end and consists of two rigid links made of polymer. The actuator links are printed using an extrusion-based 3D-Bioplotter with polyelectrolyte hydrogel as printer ink. The rigid links are printed by a 3D fused deposition modelling (FDM) printer with acrylonitrile butadiene styrene (ABS) as print material. The kinematics model of the soft parallel actuator is derived via transformation matrix notation to simulate and determine the workspace of the actuator. The printed soft parallel actuator is then immersed into NaOH solution with a specific voltage applied to it via two contactless electrodes. The experimental data are then collected and used to develop a parametric model that estimates the end-effector position and regulates the kinematics model in response to a specific input voltage over time. It is observed that the electroactive actuator demonstrates the expected behaviour according to the simulation of its kinematics model. The use of 3D printing for the fabrication of parallel soft actuators opens a new chapter in manufacturing sophisticated soft actuators with high dexterity and mechanical robustness for biomedical applications such as cell manipulation and drug release.
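
    The kinematics model above chains per-link transformation matrices into an end-effector pose. A minimal planar sketch of that technique (2-D homogeneous transforms; the two-link geometry and unit link lengths are illustrative, not the actuator's actual parameters):

```python
import math

def link_transform(theta, length):
    # 2-D homogeneous transform: rotate by theta, then advance along x.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, length * c],
            [s,  c, length * s],
            [0.0, 0.0, 1.0]]

def matmul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def end_effector(thetas, lengths):
    # Chain the per-link transforms to obtain the end-effector position.
    t = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    for th, L in zip(thetas, lengths):
        t = matmul3(t, link_transform(th, L))
    return t[0][2], t[1][2]          # (x, y) of the end effector

# Two unit links: +90 degrees then -90 degrees lands the tip at (1, 1).
x, y = end_effector([math.pi / 2, -math.pi / 2], [1.0, 1.0])
```

    Sweeping the joint angles over their admissible ranges and recording the resulting (x, y) points is how a workspace is traced from such a model.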

  16. A discrete particle swarm optimization algorithm with local search for a production-based two-echelon single-vendor multiple-buyer supply chain

    Science.gov (United States)

    Seifbarghy, Mehdi; Kalani, Masoud Mirzaei; Hemmati, Mojtaba

    2016-03-01

    This paper formulates a two-echelon single-producer multi-buyer supply chain model in which a single product is produced and transported to the buyers by the producer. The producer and the buyers apply a vendor-managed inventory mode of operation. It is assumed that the producer applies an economic production quantity policy, which implies a constant production rate at the producer. The operational parameters of each buyer are sales quantity, sales price and production rate. The channel profit of the supply chain and the contract price between the producer and each buyer are determined based on the values of the operational parameters. Since the model is a nonlinear integer program, we use a discrete particle swarm optimization algorithm (DPSO) to solve the addressed problem, and the performance of the DPSO is compared with that of two well-known heuristics, namely a genetic algorithm and simulated annealing. A number of examples are provided to verify the model and assess the performance of the proposed heuristics. Experimental results indicate that DPSO outperforms the rival heuristics with respect to several comparison metrics.
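
    A binary-coded PSO of the kind referenced above can be sketched with the classic Kennedy-Eberhart sigmoid rule, where velocities become bit-set probabilities. This is a generic sketch on a toy one-max objective, not the paper's DPSO or its supply chain model:

```python
import math
import random

def discrete_pso(fitness, n_bits, n_particles=20, iters=100, seed=0):
    # Binary PSO: velocities are squashed by a sigmoid into bit-set
    # probabilities (Kennedy-Eberhart discrete variant).
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = max(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-vel[i][d])) else 0
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy stand-in objective: maximize the number of selected options (one-max).
best, best_f = discrete_pso(sum, 12)
```

    In the paper's setting the bit string would encode discrete operational decisions and `fitness` would evaluate channel profit; the update rule itself is unchanged.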

  17. Screening for single-chain variable fragment antibodies against multiple Cry1 toxins from an immunized mouse phage display antibody library.

    Science.gov (United States)

    Dong, Sa; Bo, Zongyi; Zhang, Cunzheng; Feng, Jianguo; Liu, Xianjin

    2018-04-01

    Single-chain variable fragment (scFv) is a kind of antibody that possesses only one chain of the complete antibody while maintaining its antigen-specific binding ability, and it can be expressed in a prokaryotic system. In this study, scFvs against Cry1 toxins were screened from an immunized mouse phage-displayed antibody library, which was successfully constructed with a capacity of 6.25 × 10⁷ CFU/mL. Using a mixed and alternating antigen coating strategy and after four rounds of affinity screening, seven positive phage-scFvs against Cry1 toxins were selected and characterized. Among them, clone scFv-3H9 (MG214869), showing relatively stable and high binding abilities to six Cry1 toxins, was selected for expression and purification. SDS-PAGE indicated that the scFv-3H9 fragments of approximately 27 kDa were successfully expressed in the Escherichia coli HB2151 strain. The purified scFv-3H9 was used to establish a double antibody sandwich enzyme-linked immunosorbent assay (DAS-ELISA) for detecting six Cry1 toxins, for which the limits of detection (LOD) and limits of quantification (LOQ) were 3.14-11.07 and 8.22-39.44 ng mL⁻¹, respectively, with correlation coefficients higher than 0.997. The average recoveries of Cry1 toxins from spiked rice leaf samples ranged from 84% to 95%, with coefficients of variation (CV) less than 8.2%, showing good accuracy for the multi-residue determination of six Cry1 toxins in agricultural samples. This research suggests that a phage display antibody library constructed from an animal immunized with a mixture of several antigens of the same category can be used for quick and effective screening of generic antibodies.
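
    LOD and LOQ figures of the kind reported here are commonly derived from a calibration line using the ICH-style formulas LOD = 3.3σ/slope and LOQ = 10σ/slope. A sketch under that assumption (the calibration points and blank standard deviation below are hypothetical, not the paper's data):

```python
def lod_loq(concentrations, responses, blank_sd):
    # Least-squares slope of the calibration line, then the common
    # definitions LOD = 3.3*sigma/slope and LOQ = 10*sigma/slope.
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(responses) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(concentrations, responses))
             / sum((x - mx) ** 2 for x in concentrations))
    return 3.3 * blank_sd / slope, 10.0 * blank_sd / slope

# Hypothetical ELISA calibration: absorbance vs toxin concentration (ng/mL).
lod, loq = lod_loq([0, 10, 20, 40, 80], [0.02, 0.12, 0.22, 0.42, 0.82], 0.01)
```

    With this perfectly linear toy calibration (slope 0.01 absorbance per ng/mL) the sketch yields LOD = 3.3 and LOQ = 10 ng/mL, the same order as the ranges the abstract reports.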

  18. Parallel Application Development Using Architecture View Driven Model Transformations

    NARCIS (Netherlands)

    Arkin, E.; Tekinerdogan, B.

    2015-01-01

    To meet the increased need for computing performance, the current trend is towards applying parallel computing, in which tasks are run in parallel on multiple nodes. In turn, we can observe a rapid increase in the scale of parallel computing platforms. This situation has led to a complexity

  19. Solitons in Granular Chains

    International Nuclear Information System (INIS)

    Manciu, M.; Sen, S.; Hurd, A.J.

    1999-01-01

    The authors consider a chain of elastic (Hertzian) grains that repel upon contact according to the potential V = aδ^n, n > 2, where δ is the overlap between the grains. They present numerical and analytical results to show that an impulse initiated at an end of a chain of Hertzian grains in contact eventually propagates as a soliton for all n > 2 and that no solitons are possible for n ≤ 2. Unlike in continuous media, they find that colliding solitons in discrete media initiate multiple weak solitons at the point of crossing
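
    The impulse propagation described above can be reproduced numerically by integrating Newton's equations for a chain with Hertzian contacts (n = 5/2 for spheres). A minimal velocity-Verlet sketch with illustrative, unfitted parameters:

```python
def simulate_chain(n=50, steps=4000, dt=1e-3, a=5000.0, p=2.5):
    # Unit-mass grains at unit spacing with Hertzian contacts V = a*delta**p
    # (p = 5/2 for spheres); an impulse given to grain 0 travels down the
    # chain as a compact pulse. All parameters here are illustrative.
    x = [float(i) for i in range(n)]        # rest positions
    v = [0.0] * n
    v[0] = 1.0                              # initial impulse
    def forces():
        f = [0.0] * n
        for i in range(n - 1):
            delta = 1.0 - (x[i + 1] - x[i])   # overlap; no force if separated
            if delta > 0:
                fc = a * p * delta ** (p - 1)  # repulsive |dV/d(delta)|
                f[i] -= fc
                f[i + 1] += fc
        return f
    f = forces()
    for _ in range(steps):                  # velocity Verlet integration
        for i in range(n):
            x[i] += v[i] * dt + 0.5 * f[i] * dt * dt
        fn = forces()
        for i in range(n):
            v[i] += 0.5 * (f[i] + fn[i]) * dt
        f = fn
    return v

v = simulate_chain()
peak = max(range(len(v)), key=lambda i: abs(v[i]))
```

    After the run, the velocity peak has moved well away from grain 0 while total momentum is conserved, consistent with a localized pulse travelling down the chain.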

  20. Applications of the parallel computing system using network

    International Nuclear Information System (INIS)

    Ido, Shunji; Hasebe, Hiroki

    1994-01-01

    Parallel programming is applied to multiple processors connected by Ethernet. Data exchanges between tasks located on each processing element are realized in two ways. One is the socket interface, a standard library on recent UNIX operating systems. The other is network-connecting software named Parallel Virtual Machine (PVM), free software developed by ORNL that uses many workstations connected to a network as a parallel computer. This paper discusses the availability of parallel computing using networked UNIX workstations, with a comparison against specialized parallel systems (Transputer and iPSC/860), in a Monte Carlo simulation, which generally shows a high parallelization ratio. (author)
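
    The message-passing pattern described, independent workers computing partial results that a master gathers, can be sketched with a Monte Carlo estimate of π. Threads and a queue stand in here for the workstations and socket/PVM messages of the paper:

```python
import random
import threading
from queue import Queue

def worker(out_q, seed, n):
    # Each processing element runs an independent Monte Carlo batch and
    # sends its hit count back as a message on the queue.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    out_q.put(hits)

n_tasks, n_per_task = 4, 100_000
results = Queue()
threads = [threading.Thread(target=worker, args=(results, s, n_per_task))
           for s in range(n_tasks)]
for t in threads:
    t.start()
for t in threads:
    t.join()
total = sum(results.get() for _ in range(n_tasks))   # gather partial results
pi_est = 4.0 * total / (n_tasks * n_per_task)
```

    Because each batch is independent, communication is limited to one message per worker, which is why Monte Carlo simulations show the high parallelization ratio the abstract mentions.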

  1. Phase transitions and thermal entanglement of the distorted Ising-Heisenberg spin chain: topology of multiple-spin exchange interactions in spin ladders

    Science.gov (United States)

    Arian Zad, Hamid; Ananikian, Nerses

    2017-11-01

    We consider a symmetric spin-1/2 Ising-XXZ double sawtooth spin ladder obtained by distorting a spin chain, with XXZ interactions between the interstitial Heisenberg dimers (which are connected to the spins on the legs via an Ising-type interaction), Ising couplings between nearest-neighbor spins of the legs and of the rungs, respectively, and an additional cyclic four-spin exchange (ring exchange) in the square plaquette of each block. The presented analysis, supplemented by the exact solution of the model with infinite periodic boundary conditions, implies a rich ground state phase diagram. Besides the quantum phase transitions, the characteristics of thermodynamic parameters such as the heat capacity, magnetization and magnetic susceptibility are investigated. We show that among the considered thermodynamic and thermal parameters, only the heat capacity is sensitive to changes in the cyclic four-spin exchange interaction. Using the heat capacity function, we obtain a singularity relation between the cyclic four-spin exchange interaction and the exchange coupling between the pair of spins on each rung of the spin ladder. All thermal and thermodynamic quantities under consideration should be investigated with regard to those points which satisfy the singularity relation. The thermal entanglement within the Heisenberg spin dimers is investigated by using the concurrence, which is calculated from the relevant reduced density operator in the thermodynamic limit.
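
    Thermodynamic quantities like the heat capacity discussed above follow from the partition function, and for small systems they can be computed by exact enumeration. A toy sketch on an 8-site Ising ring (a stand-in, not the Ising-XXZ ladder itself; J and the temperatures are illustrative, with k_B = 1):

```python
import math
from itertools import product

def heat_capacity(T, J=1.0, n=8):
    # Exact enumeration of an n-site Ising ring (k_B = 1):
    # C = (<E^2> - <E>^2) / T^2 from the canonical averages.
    beta = 1.0 / T
    z = e_avg = e2_avg = 0.0
    for spins in product((-1, 1), repeat=n):
        e = -J * sum(spins[i] * spins[(i + 1) % n] for i in range(n))
        w = math.exp(-beta * e)
        z += w
        e_avg += e * w
        e2_avg += e * e * w
    e_avg /= z
    e2_avg /= z
    return (e2_avg - e_avg ** 2) / (T * T)

# Near T ~ J the capacity is large; in the high-temperature tail it decays.
c_low, c_high = heat_capacity(0.5), heat_capacity(10.0)
```

    The ladder model in the record is treated analytically in the thermodynamic limit, but the same fluctuation formula C = (⟨E²⟩ − ⟨E⟩²)/T² underlies its heat capacity.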

  2. Parallel pathways of ethoxylated alcohol biodegradation under aerobic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Zembrzuska, Joanna, E-mail: Joanna.Zembrzuska@put.poznan.pl; Budnik, Irena, E-mail: Irena.Budnik@gmail.com; Lukaszewski, Zenon, E-mail: zenon.lukaszewski@put.poznan.pl

    2016-07-01

    Non-ionic surfactants (NS) are a major component of the surfactant flux discharged into surface water, and alcohol ethoxylates (AE) are the major component of this flux. Therefore, the biodegradation pathways of AE deserve more thorough investigation. The aim of this work was to investigate the stages of biodegradation of homogeneous oxyethylated dodecanol C₁₂E₉, having 9 oxyethylene subunits, under aerobic conditions. Enterobacter strain Z3 bacteria were chosen as the biodegrading organisms, with C₁₂E₉ as the sole source of organic carbon. Bacterial consortia of river water were used as an inoculum in a parallel test for comparison. The LC-MS technique was used to identify the products of biodegradation. Liquid-liquid extraction with ethyl acetate was selected for the isolation of C₁₂E₉ and metabolites from the biodegradation broth. The LC-MS/MS technique operating in multiple reaction monitoring (MRM) mode was used for quantitative determination of C₁₂E₉, C₁₂E₈, C₁₂E₇ and C₁₂E₆. Apart from the substrate, the homologues C₁₂E₈, C₁₂E₇ and C₁₂E₆, metabolites formed from C₁₂E₉ by shortening of the oxyethylene chain, as well as intermediate metabolites having a carboxyl end group on the oxyethylene chain (C₁₂E₈COOH, C₁₂E₇COOH, C₁₂E₆COOH and C₁₂E₅COOH), were identified. Poly(ethylene glycols) (E) having 9, 8 and 7 oxyethylene subunits were also identified, indicating parallel central fission of C₁₂E₉ and its metabolites. Similar results were obtained with river water as inoculum. It is concluded that AE, under aerobic conditions, are biodegraded via two parallel pathways: by central fission with the formation of PEG, and by ω-oxidation of the oxyethylene chain with the formation of carboxylated AE and subsequent shortening of the oxyethylene chain by a

  3. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  4. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa

  5. Alternative derivation of the parallel ion viscosity

    International Nuclear Information System (INIS)

    Bravenec, R.V.; Berk, H.L.; Hammer, J.H.

    1982-01-01

    A set of double-adiabatic fluid equations with additional collisional relaxation between the ion temperatures parallel and perpendicular to a magnetic field are shown to reduce to a set involving a single temperature and a parallel viscosity. This result is applied to a recently published paper [R. V. Bravenec, A. J. Lichtenberg, M. A. Leiberman, and H. L. Berk, Phys. Fluids 24, 1320 (1981)] on viscous flow in a multiple-mirror configuration

  6. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of...... in the optimal O(psort(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and psort(N) is the parallel I/O complexity of sorting N elements using P processors.

  7. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
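
    The undersampling-aliasing relationship at the heart of parallel imaging has a simple 1-D analogue: sampling below the Nyquist rate makes two different frequencies produce identical samples. A minimal sketch (the 7 Hz tone and 10 Hz rate are illustrative numbers):

```python
import math

def sample(freq_hz, rate_hz, n):
    # n samples of a unit-amplitude sine taken at rate_hz.
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz) for k in range(n)]

# A 7 Hz tone sampled at 10 Hz (below its 14 Hz Nyquist rate) produces
# exactly the samples of a -3 Hz tone: the two are aliases. Undersampled
# k-space behaves the same way, folding distant image regions together.
fast = sample(7.0, 10.0, 20)
alias = sample(-3.0, 10.0, 20)
max_diff = max(abs(a - b) for a, b in zip(fast, alias))
```

    SENSE and GRAPPA work because the distinct coil sensitivities break this ambiguity, letting the reconstruction separate the folded contributions that plain sampling cannot.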

  8. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
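One of the parallel patterns named above, the prefix scan, can be sketched briefly. The Hillis-Steele formulation takes O(log n) passes; within each pass every update reads only the previous pass's values, so all updates could run concurrently (here they are simulated sequentially in plain Python):

```python
# Inclusive prefix scan, Hillis-Steele style: log2(n) passes of
# independent (hence parallelizable) updates.
def inclusive_scan(values):
    a = list(values)
    step = 1
    while step < len(a):
        prev = a[:]                      # snapshot: all updates read old values
        for i in range(step, len(a)):
            a[i] = prev[i - step] + prev[i]
        step *= 2
    return a

print(inclusive_scan([1, 2, 3, 4, 5]))   # [1, 3, 6, 10, 15]
```

A reduction is the same idea collapsed to a single value; ghost cell updates are a communication pattern rather than an arithmetic one.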

  9. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" here also includes heterogeneous collection of networked computers.) Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  10. Development of a multiple-marker polymerase chain reaction assay for detection of metastatic melanoma in lymph node aspirates of dogs.

    Science.gov (United States)

    Catchpole, Brian; Gould, Sara M; Kellett-Gregory, Lindsay M; Dobson, Jane M

    2003-05-01

    To develop a reverse transcriptase-polymerase chain reaction (RT-PCR) assay to detect canine melanoma-associated antigens (MAAs) and to use this technique to screen aspirates of lymph nodes (LNs) for evidence of metastatic spread of oral malignant melanoma. 7 dogs with oral malignant melanoma and 4 dogs with multicentric lymphosarcoma. We prepared cDNA from melanoma tumor biopsies and fine-needle aspirates obtained from submandibular LNs of dogs with oral malignant melanoma or multicentric lymphosarcoma. The RT-PCR assay was performed by use of tyrosinase, Melan-A, gp100, tyrosinase-related protein 2 (TRP-2), or melanoma antigen-encoding gene B (MAGE-B)-specific primers. We detected MAGE-B mRNA in canine testicular tissue but not in melanoma biopsy specimens. Tyrosinase, Melan-A, gp100, and TRP-2 mRNAs were detected in tumor biopsy specimens and in 2 of 5 LN aspirates from dogs with melanoma, suggesting metastatic spread in those 2 dogs. We did not detect MAAs in LN aspirates obtained from dogs with multicentric lymphosarcoma. Sequencing of canine Melan-A and gp100 PCR products confirmed the specificity of the assay for these genes. Clinical staging of dogs with oral malignant melanoma is useful to assist in designing appropriate treatments. However, results of histologic examination of LN biopsy specimens can be inconclusive and, in humans, can underestimate the number of patients with metastatic disease. Molecular staging of melanomas in dogs can be achieved by screening LN aspirates for MAA mRNA, and this can be performed in combination with cytologic examination to aid in detection of metastatic disease.

  11. Multiple Actions of Rotenone, an Inhibitor of Mitochondrial Respiratory Chain, on Ionic Currents and Miniature End-Plate Potential in Mouse Hippocampal (mHippoE-14) Neurons

    Directory of Open Access Journals (Sweden)

    Chin-Wei Huang

    2018-05-01

    Background/Aims: Rotenone (Rot) is known to suppress the activity of complex I in the mitochondrial chain reaction; however, whether this compound has effects on ion currents in neurons remains largely unexplored. Methods: With the aid of patch-clamp technology and simulation modeling, the effects of Rot on membrane ion currents present in mHippoE-14 cells were investigated. Results: Addition of Rot produced an inhibitory action on the peak amplitude of INa with an IC50 value of 39.3 µM; however, neither activation nor inactivation kinetics of INa was changed during cell exposure to this compound. Addition of Rot produced little or no modification in the steady-state inactivation curve of INa. Rot increased the amplitude of Ca2+-activated Cl- current in response to membrane depolarization with an EC50 value of 35.4 µM; further addition of niflumic acid reversed Rot-mediated stimulation of this current. Moreover, when these cells were exposed to 10 µM Rot, a specific population of ATP-sensitive K+ channels with a single-channel conductance of 18.1 pS was measured, despite its inability to alter single-channel conductance. Under current-clamp conditions, the frequency of miniature end-plate potentials in mHippoE-14 cells was significantly raised in the presence of Rot (10 µM) with no changes in their amplitude and time course of rise and decay. In a simulated model of hippocampal neurons incorporating a chemical autaptic connection, increasing autaptic strength to mimic the action of Rot was noted to change the bursting pattern with the emergence of subthreshold potentials. Conclusions: The Rot effects presented herein might exert a significant action on the functional activities of hippocampal neurons in vivo.
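The IC50 and EC50 values reported above relate concentration to effect through a dose-response curve. A minimal sketch, assuming a standard Hill equation with a Hill coefficient of 1 (the coefficient is an assumption; the abstract does not report one):

```python
# Fraction of maximal inhibition at a given concentration, standard Hill form.
# hill=1.0 is an assumed coefficient, not a value from the study.
def fraction_inhibited(conc_uM, ic50_uM, hill=1.0):
    return conc_uM ** hill / (conc_uM ** hill + ic50_uM ** hill)

# At the reported IC50 of 39.3 µM, inhibition of INa is half-maximal by definition.
print(round(fraction_inhibited(39.3, 39.3), 2))   # 0.5
print(round(fraction_inhibited(10.0, 39.3), 2))   # 0.2 at the 10 µM used above
```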

  12. Performance of the Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  13. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly ease parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its strong performance suggests that SVOPP may be a breakthrough in parallel programming technique.
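The SPMD model described above can be sketched in miniature: every worker runs the same program on its own slice of the data, and partial results are combined afterwards. This is an illustration of the model only (using Python's multiprocessing as a stand-in), not SVOPP's generated communication code:

```python
from multiprocessing import Pool

# SPMD sketch: one function, run identically by every "processor",
# each on its own data slice.
def worker(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(16))
    chunks = [data[i::4] for i in range(4)]   # scatter: one slice per worker
    with Pool(4) as pool:
        partials = pool.map(worker, chunks)   # all workers run the same program
    print(sum(partials))                      # gather/reduce: 1240
```

The precompiler's job, by contrast, is to generate the scatter/gather communication automatically from shared-variable references.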

  14. CHAIN 2

    International Nuclear Information System (INIS)

    Bailey, D.

    1998-04-01

    The Second Processing Chain (CHAIN2) consists of a suite of ten programs which together provide a full local analysis of the bulk plasma physics within the JET Tokamak. In discussing these ten computational models this report is intended to fulfil two broad purposes. Firstly it is meant to be used as a reference source for any user of CHAIN2 data, and secondly it provides a basic User Manual sufficient to instruct anyone in running the CHAIN2 suite of codes. In the main report text each module is described in terms of its underlying physics and any associated assumptions or limitations, whilst deliberate emphasis is put on highlighting the physics and mathematics of the calculations required in deriving each individual datatype in the standard module PPF output. In fact each datatype of the CHAIN2 PPF output listed in Appendix D is cross referenced to the point in the main text where its evaluation is discussed. An effort is made not only to give the equation used to derive a particular data profile but also to explicitly define which external data sources are involved in the computational calculation

  15. Development of a multiplex polymerase chain reaction-sequence-specific primer method for NKG2D and NKG2F single-nucleotide polymorphism typing using isothermal multiple displacement amplification products.

    Science.gov (United States)

    Kaewmanee, M; Phoksawat, W; Romphruk, A; Romphruk, A V; Jumnainsong, A; Leelayuwat, C

    2013-06-01

    Natural killer group 2 member D (NKG2D) on immune effector cells recognizes multiple stress-inducible ligands. NKG2D single-nucleotide polymorphism (SNP) haplotypes were related to the levels of cytotoxic activity of peripheral blood mononuclear cells. Indeed, these polymorphisms were also located in NKG2F. Isothermal multiple displacement amplification (IMDA) is used for whole genome amplification (WGA) and can amplify very small genomic DNA templates into microgram quantities with whole-genome coverage. This is particularly useful in cases of limited amounts of valuable DNA samples requiring multi-locus genotyping. In this study, we evaluated the quality and applicability of IMDA to genetic studies in terms of sensitivity, efficiency of IMDA re-amplification and stability of IMDA products. The smallest amount of DNA to be effectively amplified by IMDA was 200 pg, yielding approximately 16 µg of final DNA within 1.5 h. IMDA products could be re-amplified only once (second round of amplification), and could be kept for 5 months at 4°C and more than a year at -20°C without losing genome coverage. The amplified products were used successfully to set up a multiplex polymerase chain reaction-sequence-specific primer assay for SNP typing of the NKG2D/F genes. The NKG2D/F multiplex polymerase chain reaction (PCR) contained six PCR mixtures for detecting 10 selected SNPs, including 8 NKG2D/F SNP haplotypes and 2 additional NKG2D coding SNPs. This typing procedure will be applicable in both clinical and research laboratories. Thus, our data provide useful information and limitations for utilization of genome-wide amplification using IMDA and its application for multiplex NKG2D/F typing. © 2013 John Wiley & Sons Ltd.

  16. Comparative analysis of human cytomegalovirus a-sequence in multiple clinical isolates by using polymerase chain reaction and restriction fragment length polymorphism assays.

    Science.gov (United States)

    Zaia, J A; Gallez-Hawkins, G; Churchill, M A; Morton-Blackshere, A; Pande, H; Adler, S P; Schmidt, G M; Forman, S J

    1990-01-01

    The human cytomegalovirus (HCMV) a-sequence (a-seq) is located in the joining region between the long (L) and short (S) unique sequences of the virus (L-S junction), and this hypervariable junction has been used to differentiate HCMV strains. The purpose of this study was to investigate whether there are differences among strains of human cytomegalovirus which could be characterized by polymerase chain reaction (PCR) amplification of the a-seq of HCMV DNA and to compare a PCR method of strain differentiation with conventional restriction fragment length polymorphism (RFLP) methodology by using HCMV junction probes. Laboratory strains of HCMV and viral isolates from individuals with HCMV infection were characterized by using both RFLPs and PCR. The PCR assay amplified regions in the major immediate-early gene (IE-1), the 64/65-kDa matrix phosphoprotein (pp65), and the a-seq of the L-S junction region. HCMV laboratory strains Towne, AD169, and Davis were distinguishable, in terms of size of the amplified product, when analyzed by PCR with primers specific for the a-seq but were indistinguishable by using PCR targeted to IE-1 and pp65 sequences. When this technique was applied to a characterization of isolates from individuals with HCMV infection, selected isolates could be readily distinguished. In addition, when the a-seq PCR product was analyzed with restriction enzyme digestion for the presence of specific sequences, these DNA differences were confirmed. PCR analysis across the variable a-seq of HCMV demonstrated differences among strains which were confirmed by RFLP in 38 of 40 isolates analyzed. The most informative restriction enzyme sites in the a-seq for distinguishing HCMV isolates were those of MnlI and BssHII. This indicates that the a-seq of HCMV is heterogeneous among wild strains, and PCR of the a-seq of HCMV is a practical way to characterize differences in strains of HCMV. PMID: 1980680

  17. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  18. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and relative to scalar calculations it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively on the CRAY C-90

  19. Scaling up antiretroviral therapy in Uganda: using supply chain management to appraise health systems strengthening.

    Science.gov (United States)

    Windisch, Ricarda; Waiswa, Peter; Neuhann, Florian; Scheibe, Florian; de Savigny, Don

    2011-08-01

    Strengthened national health systems are necessary for effective and sustained expansion of antiretroviral therapy (ART). ART and its supply chain management in Uganda are largely based on parallel and externally supported efforts. The question arises whether systems are being strengthened to sustain access to ART. This study applies systems thinking to assess supply chain management, the role of external support and whether investments create the needed synergies to strengthen health systems. This study uses the WHO health systems framework and examines the issues of governance, financing, information, human resources and service delivery in relation to supply chain management of medicines and the technologies. It looks at links and causal chains between supply chain management for ART and the national supply system for essential drugs. It combines data from the literature and key informant interviews with observations at health service delivery level in a study district. Current drug supply chain management in Uganda is characterized by parallel processes and information systems that result in poor quality and inefficiencies. Lower-than-expected health system performance, stock outs and other shortages affect ART and primary care in general. Poor performance of supply chain management is amplified by weak conditions at all levels of the health system, including the areas of financing, governance, human resources and information. Governance issues include the failure to follow up on initial policy intentions and a focus on narrow, short-term approaches. The opportunity and need to use ART investments for an essential supply chain management and strengthened health system has not been exploited. By applying a systems perspective this work indicates the seriousness of missing system prerequisites. The findings suggest that root causes and capacities across the system have to be addressed synergistically to enable systems that can match and accommodate investments in

  20. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which are referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  1. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  2. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
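The decomposition behind such a parallel sieve can be sketched compactly: base primes up to sqrt(n) are found serially, and then each block of the range can be marked independently. In the sketch below the per-block loop is what a hypercube node would execute concurrently; here the blocks are processed sequentially, and the blocked (rather than scattered) decomposition is a simplification of the paper's schemes:

```python
import math

# Serial seed phase: primes up to the square root of the range.
def small_primes(limit):
    flags = [True] * (limit + 1)
    flags[0:2] = [False, False]
    for p in range(2, math.isqrt(limit) + 1):
        if flags[p]:
            flags[p * p::p] = [False] * len(flags[p * p::p])
    return [i for i, f in enumerate(flags) if f]

# Per-block phase: each block depends only on the base primes, so all
# blocks could be sieved in parallel on separate processors.
def sieve_block(lo, hi, base_primes):
    flags = [True] * (hi - lo)
    for p in base_primes:
        start = max(p * p, ((lo + p - 1) // p) * p)
        for m in range(start, hi, p):
            flags[m - lo] = False
    return [lo + i for i, f in enumerate(flags) if f and lo + i > 1]

n = 100
base = small_primes(math.isqrt(n))
blocks = [(lo, min(lo + 25, n + 1)) for lo in range(2, n + 1, 25)]
primes = [p for lo, hi in blocks for p in sieve_block(lo, hi, base)]
print(len(primes))  # 25 primes below 100
```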

  3. Tracking human multiple myeloma xenografts in NOD-Rag-1/IL-2 receptor gamma chain-null mice with the novel biomarker AKAP-4

    International Nuclear Information System (INIS)

    Mirandola, Leonardo; Yu, Yuefei; Jenkins, Marjorie R; Chiaramonte, Raffaella; Cobos, Everardo; John, Constance M; Chiriva-Internati, Maurizio

    2011-01-01

    Multiple myeloma (MM) is a fatal malignancy ranking second in prevalence among hematological tumors. Continuous efforts are being made to develop innovative and more effective treatments. The preclinical evaluation of new therapies relies on the use of murine models of the disease. Here we describe a new MM animal model in NOD-Rag1null IL2rgnull (NRG) mice that supports the engraftment of cell lines and primary MM cells that can be tracked with the tumor antigen, AKAP-4. Human MM cell lines, U266 and H929, and primary MM cells were successfully engrafted in NRG mice after intravenous administration, and were found in the bone marrow, blood and spleen of tumor-challenged animals. The AKAP-4 expression pattern was similar to that of known MM markers, such as paraproteins, CD38 and CD45. We developed for the first time a murine model allowing for the growth of both MM cell lines and primary cells in multifocal sites, thus mimicking the disease seen in patients. Additionally, we validated the use of AKAP-4 antigen to track tumor growth in vivo and to specifically identify MM cells in mouse tissues. We expect that our model will significantly improve the pre-clinical evaluation of new anti-myeloma therapies

  4. Icotinib versus whole-brain irradiation in patients with EGFR-mutant non-small-cell lung cancer and multiple brain metastases (BRAIN): a multicentre, phase 3, open-label, parallel, randomised controlled trial.

    Science.gov (United States)

    Yang, Jin-Ji; Zhou, Caicun; Huang, Yisheng; Feng, Jifeng; Lu, Sun; Song, Yong; Huang, Cheng; Wu, Gang; Zhang, Li; Cheng, Ying; Hu, Chengping; Chen, Gongyan; Zhang, Li; Liu, Xiaoqing; Yan, Hong Hong; Tan, Fen Lai; Zhong, Wenzhao; Wu, Yi-Long

    2017-09-01

    For patients with non-small-cell lung cancer (NSCLC) and multiple brain metastases, whole-brain irradiation (WBI) is a standard-of-care treatment, but its effects on neurocognition are complex and concerning. We compared the efficacy of an epidermal growth factor receptor (EGFR)-tyrosine kinase inhibitor (TKI), icotinib, versus WBI with or without chemotherapy in a phase 3 trial of patients with EGFR-mutant NSCLC and multiple brain metastases. We did a multicentre, open-label, parallel randomised controlled trial (BRAIN) at 17 hospitals in China. Eligible participants were patients with NSCLC with EGFR mutations, who were naive to treatment with EGFR-TKIs or radiotherapy, and had at least three metastatic brain lesions. We randomly assigned participants (1:1) to either icotinib 125 mg orally (three times per day) or WBI (30 Gy in ten fractions of 3 Gy) plus concurrent or sequential chemotherapy for 4-6 cycles, until unacceptable adverse events or intracranial disease progression occurred. The randomisation was done by the Chinese Thoracic Oncology Group with a web-based allocation system applying the Pocock and Simon minimisation method; groups were stratified by EGFR gene mutation status, treatment line (first line or second line), brain metastases only versus both intracranial and extracranial metastases, and presence or absence of symptoms of intracranial hypertension. Clinicians and patients were not masked to treatment assignment, but individuals involved in the data analysis did not participate in the treatments and were thus masked to allocation. Patients receiving icotinib who had intracranial progression only were switched to WBI plus either icotinib or chemotherapy until further progression; those receiving icotinib who had extracranial progression only were switched to icotinib plus chemotherapy. Patients receiving WBI who progressed were switched to icotinib until further progression. 
Icotinib could be continued beyond progression if a clinical benefit

  5. Parallel computing and networking; Heiretsu keisanki to network

    Energy Technology Data Exchange (ETDEWEB)

    Asakawa, E; Tsuru, T [Japan National Oil Corp., Tokyo (Japan); Matsuoka, T [Japan Petroleum Exploration Co. Ltd., Tokyo (Japan)

    1996-05-01

    This paper describes the trend of parallel computers used in geophysical exploration. Parallel computers began to be used for geophysical exploration around 1993. At the time, these computers were classified mainly as MIMD (multiple instruction stream, multiple data stream), SIMD (single instruction stream, multiple data stream) and the like. Parallel computers were publicized in the 1994 meeting of the Geophysical Exploration Society as a 'high precision imaging technology'. Concerning libraries for parallel computers, there was a shift to PVM (parallel virtual machine) in 1993 and to MPI (message passing interface) in 1995. In addition, the FORTRAN90 compiler was released with support implemented for data parallel and vector computers. In 1993, the networks used were Ethernet, FDDI, CDDI and HIPPI. In 1995, OC-3 products under ATM began to propagate. However, ATM remains an interoffice high-speed network because ATM service has not yet spread to the public network. 1 ref.

  6. Heavy Chain Diseases

    Science.gov (United States)

    There are three types of heavy chain disease, classified by the type of heavy chain produced: alpha, gamma, or mu. Alpha heavy chain disease (IgA heavy chain disease) … Gamma heavy chain disease (IgG heavy chain disease) …

  7. Multi-chain Markov chain Monte Carlo methods for computationally expensive models

    Science.gov (United States)

    Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.

    2017-12-01

    Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data, and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better, and conceivably accelerate the convergence to the final distribution. We present results from tests conducted with the multi-chain method to show when the acceleration occurs; for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems to be able to accelerate the convergence of a few chains that start from suboptimal points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
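The multi-chain idea can be sketched with a toy example: several independent Metropolis chains targeting a standard normal, started from deliberately dispersed (including suboptimal) points, all settle onto the same distribution. This is an illustration of the principle only, not the communicating-chain scheme or the expensive forward models discussed above:

```python
import math
import random

# One Metropolis chain with a random-walk proposal, targeting N(0, 1).
def run_chain(start, n_steps, rng):
    x, samples = start, []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, 1.0)
        # Acceptance ratio of target densities: exp(-(prop^2 - x^2) / 2).
        if rng.random() < min(1.0, math.exp(-(prop * prop - x * x) / 2.0)):
            x = prop
        samples.append(x)
    return samples

rng = random.Random(0)
chains = [run_chain(s, 5000, rng) for s in (-10.0, 0.0, 10.0)]
means = [sum(c[1000:]) / len(c[1000:]) for c in chains]  # discard burn-in
print(all(abs(m) < 0.5 for m in means))  # every chain's mean is near 0
```

In practice the chains run on separate processors, which is what distributes the sampling burden when each density evaluation requires an expensive model run.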

  8. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  9. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  10. Supply chain dynamics in healthcare services.

    Science.gov (United States)

    Samuel, Cherian; Gonapa, Kasiviswanadh; Chaudhary, P K; Mishra, Ananya

    2010-01-01

    The purpose of this paper is to analyse health service supply chain systems. A great deal of literature is available on supply chain management in finished goods inventory situations; however, little research exists on managing service capacity when finished goods inventories are absent. System dynamics models for a typical service-oriented supply chain such as healthcare processes are developed, wherein three service stages are presented sequentially. Just like supply chains with finished goods inventory, healthcare service supply chains also show dynamic behaviour. Comparing options, service reduction, and capacity adjustment delays showed that reducing capacity adjustment and service delays gives better results. The study is confined to health service-oriented supply chains. Further work includes extending the study to service-oriented supply chains with parallel processing, i.e. having more than one stage to perform a similar operation and also to study the behaviour in service-oriented supply chains that have re-entrant orders and applications. Specific case studies can also be developed to reveal factors relevant to particular service-oriented supply chains. The paper explains the bullwhip effect in healthcare service-oriented supply chains. Reducing stages and capacity adjustment are strategic options for service-oriented supply chains. The paper throws light on policy options for managing healthcare service-oriented supply chain dynamics.
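The bullwhip effect mentioned above can be sketched with a toy discrete-time model (not the paper's system dynamics formulation): each stage forecasts demand with a short moving average and corrects its inventory toward a target, so order variability grows as orders move upstream.

```python
# One supply chain stage: forecast demand, correct inventory, place orders.
def stage_orders(demand, target_inventory=20.0):
    inventory, orders, recent = target_inventory, [], []
    for d in demand:
        recent = (recent + [d])[-4:]                  # 4-period moving average
        forecast = sum(recent) / len(recent)
        order = max(0.0, forecast + (target_inventory - inventory))
        inventory += order - d                        # receive order, ship demand
        orders.append(order)
    return orders

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

customer = [10.0] * 10 + [14.0] * 10 + [10.0] * 10    # one small step in demand
retailer = stage_orders(customer)                     # orders placed upstream
wholesaler = stage_orders(retailer)
print(variance(customer) < variance(retailer) < variance(wholesaler))  # True
```

Each stage amplifies the variability of the signal it receives, which is the dynamic the paper's capacity-adjustment and service-delay policies aim to damp.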

  11. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network based MIMD processors. The parallelism is implemented using standard UNIX (tm) tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a ''nearly realistic'' lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs

  12. Parallel optoelectronic trinary signed-digit division

    Science.gov (United States)

    Alam, Mohammad S.

    1999-03-01

    The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of any arbitrary length operands in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator based architecture is suggested for implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of space bandwidth product of the spatial light modulators used in the optoelectronic implementation.
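The trinary signed-digit representation underlying the scheme above can be sketched briefly: digits are drawn from {-1, 0, 1} with radix 3 (the redundancy is what makes carry-free, constant-time addition possible). The conversion below handles nonnegative integers; the constant-time TSD add/multiply modules themselves are not reproduced here.

```python
# Convert a nonnegative integer to trinary signed digits {-1, 0, 1},
# least-significant digit first (balanced ternary).
def to_tsd(n):
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:            # encode 2 as -1 with a carry into the next digit
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits or [0]

def from_tsd(digits):
    return sum(d * 3 ** i for i, d in enumerate(digits))

print(to_tsd(5))   # [-1, -1, 1], i.e. 9 - 3 - 1 = 5
print(all(from_tsd(to_tsd(n)) == n for n in range(200)))  # True
```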

  13. Stranger than fiction parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  14. Stranger than fiction parallel universes beguile science

    CERN Document Server

    2007-01-01

    Is the universe -- correction: 'our' universe -- no more than a speck of cosmic dust amid an infinite number of parallel worlds? A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too.

  15. Stranger than fiction: parallel universes beguile science

    CERN Document Server

    Hautefeuille, Annie

    2007-01-01

    Is the universe-correction: 'our' universe-no more than a speck of cosmic dust amid an infinite number of parallel worlds? A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too.

  16. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  17. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin

  18. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop Parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most of this book.
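
    Not taken from the book, but as a minimal taste of the style of parallelism it covers: Python's concurrent.futures gives a parallel map in a few lines (threads here; for CPU-bound work, ProcessPoolExecutor is the usual choice because of the GIL).

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(task_id):
    """Stand-in for an independent unit of work."""
    return task_id * task_id

# map() distributes the iterable across workers and preserves input order
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(simulate, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```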

  19. High Performance Parallel Multigrid Algorithms for Unstructured Grids

    Science.gov (United States)

    Frederickson, Paul O.

    1996-01-01

    We describe a high performance parallel multigrid algorithm for a rather general class of unstructured grid problems in two and three dimensions. The algorithm PUMG, for parallel unstructured multigrid, is related in structure to the parallel multigrid algorithm PSMG introduced by McBryan and Frederickson, for they both obtain a higher convergence rate through the use of multiple coarse grids. Another reason for the high convergence rate of PUMG is its smoother, an approximate inverse developed by Baumgardner and Frederickson.

  20. Parallel transposition of sparse data structures

    DEFF Research Database (Denmark)

    Wang, Hao; Liu, Weifeng; Hou, Kaixi

    2016-01-01

    Many applications in computational sciences and social sciences exploit sparsity and connectivity of acquired data. Even though many parallel sparse primitives such as sparse matrix-vector (SpMV) multiplication have been extensively studied, some other important building blocks, e.g., parallel tr...... transposition in the latest vendor-supplied library on an Intel multicore CPU platform, and the MergeTrans approach achieves on average of 3.4-fold (up to 11.7-fold) speedup on an Intel Xeon Phi many-core processor....
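
    The record above is truncated, but the parallel CSR transposition approaches it benchmarks (MergeTrans, and a scan-based variant) build on a count / prefix-sum / scatter decomposition. A sequential sketch of that decomposition, with each phase a natural parallelization target; the function name is illustrative:

```python
def csr_transpose(num_rows, num_cols, row_ptr, col_idx, vals):
    """Transpose a CSR matrix; count, scan and scatter are each parallelizable."""
    # Phase 1: count entries per output row (i.e., per input column)
    counts = [0] * num_cols
    for c in col_idx:
        counts[c] += 1
    # Phase 2: exclusive prefix sum gives the transposed row pointer
    t_row_ptr = [0] * (num_cols + 1)
    for i in range(num_cols):
        t_row_ptr[i + 1] = t_row_ptr[i] + counts[i]
    # Phase 3: scatter each entry to its transposed position
    t_col_idx = [0] * len(col_idx)
    t_vals = [0] * len(vals)
    offset = t_row_ptr[:-1].copy()   # next free slot per output row
    for r in range(num_rows):
        for k in range(row_ptr[r], row_ptr[r + 1]):
            c = col_idx[k]
            dst = offset[c]
            offset[c] += 1
            t_col_idx[dst] = r
            t_vals[dst] = vals[k]
    return t_row_ptr, t_col_idx, t_vals
```

    In the parallel versions, phase 3 is where the cited approaches differ: atomics, per-thread histograms with a scan, or a merge over sorted partials.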

  1. Temporal fringe pattern analysis with parallel computing

    International Nuclear Information System (INIS)

    Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

    2005-01-01

    Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution periods were reduced by 1.6 times when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated for data transfer, data read, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reduce execution times in temporal fringe pattern analysis

  2. Analysis of a parallel multigrid algorithm

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1989-01-01

    The parallel multigrid algorithm of Frederickson and McBryan (1987) is considered. This algorithm uses multiple coarse-grid problems (instead of one problem) in the hope of accelerating convergence and is found to have a close relationship to traditional multigrid methods. Specifically, the parallel coarse-grid correction operator is identical to a traditional multigrid coarse-grid correction operator, except that the mixing of high and low frequencies caused by aliasing error is removed. Appropriate relaxation operators can be chosen to take advantage of this property. Comparisons between the standard multigrid and the new method are made.

  3. Use of parallel counters for triggering

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    Results of an investigation into the use of parallel counters, majority coincidence schemes, and parallel compressors for triggering in multichannel high-energy spectrometers are described. Concrete examples are given of methods for constructing fast and economical new devices used to determine the multiplicity of hits (t > 900) registered in a hodoscopic plane and a pixel detector. For this purpose the author uses the syndrome coding method and cellular arrays. In addition, an effective coding matrix has been created which can be used for light-signal coding; for example, such signals are supplied from scintillators to photomultipliers. 23 refs.; 21 figs
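
    As a software model of the majority-coincidence idea above: a trigger fires when the multiplicity of hit channels reaches a threshold. The hardware forms the multiplicity in parallel (e.g. with adder trees or syndrome coding); the sketch below models only the decision, with hypothetical names.

```python
def majority_trigger(hits, threshold):
    """Fire when at least `threshold` channels registered a hit.

    `hits` is one 0/1 flag per hodoscope channel; in hardware the
    multiplicity would be formed in parallel rather than by a loop.
    """
    multiplicity = sum(hits)
    return multiplicity >= threshold

# Hypothetical 8-channel plane with three hits and a 2-fold coincidence
assert majority_trigger([1, 0, 1, 0, 0, 1, 0, 0], threshold=2)
```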

  4. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  5. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  6. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were

  7. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  8. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  9. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
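
    For reference, a plain-Python sketch of the k-means++ seed selection (D-squared weighting, after Arthur and Vassilvitskii) on 1-D data; the per-point distance update is the loop the report parallelizes on the GPU, OpenMP, and the Cray XMT. Function names are illustrative, not taken from the released code.

```python
import random

def kmeanspp_seeds(points, k, rng=None):
    """Select k initial centers by D^2 weighting (k-means++ seeding)."""
    rng = rng or random.Random(0)
    centers = [rng.choice(points)]
    # squared distance from each point to its nearest chosen center;
    # recomputing this over all points is the parallelizable step
    d2 = [(p - centers[0]) ** 2 for p in points]
    while len(centers) < k:
        r = rng.uniform(0.0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:            # sample proportionally to d2
                centers.append(p)
                break
        else:                       # guard against float round-off
            centers.append(points[-1])
        d2 = [min(w, (p - centers[-1]) ** 2) for p, w in zip(points, d2)]
    return centers
```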

  10. Parallel plate detectors

    International Nuclear Information System (INIS)

    Gardes, D.; Volkov, P.

    1981-01-01

    Parallel plate avalanche counters (PPAC) of 5x3 cm² (timing only) and 15x5 cm² (timing and position) are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters [fr]

  11. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  12. One weird trick for parallelizing convolutional neural networks

    OpenAIRE

    Krizhevsky, Alex

    2014-01-01

    I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.

  13. Supply Chain Management: Implementation Issues and Research Opportunities

    DEFF Research Database (Denmark)

    Pagh, Janus Dóre; Cooper, Martha

    1998-01-01

    This paper concentrates on operationalizing the supply chain management framework suggested in a 1997 article. Case studies conducted at several companies and involving multiple members of supply chains are used to illustrate the concepts described...

  14. A Soft Parallel Kinematic Mechanism.

    Science.gov (United States)

    White, Edward L; Case, Jennifer C; Kramer-Bottiglio, Rebecca

    2018-02-01

    In this article, we describe a novel holonomic soft robotic structure based on a parallel kinematic mechanism. The design is based on the Stewart platform, which uses six sensors and actuators to achieve full six-degree-of-freedom motion. Our design is much less complex than a traditional platform, since it replaces the 12 spherical and universal joints found in a traditional Stewart platform with a single highly deformable elastomer body and flexible actuators. This reduces the total number of parts in the system and simplifies the assembly process. Actuation is achieved through coiled-shape memory alloy actuators. State observation and feedback is accomplished through the use of capacitive elastomer strain gauges. The main structural element is an elastomer joint that provides antagonistic force. We report the response of the actuators and sensors individually, then report the response of the complete assembly. We show that the completed robotic system is able to achieve full position control, and we discuss the limitations associated with using responsive material actuators. We believe that control demonstrated on a single body in this work could be extended to chains of such bodies to create complex soft robots.
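
    The article does not give its control equations; for orientation, the standard Stewart-platform inverse kinematics computes each required leg length as the distance between a base anchor and the transformed platform anchor (T + R p). A hedged sketch, where the names and frames are assumptions rather than the authors' code:

```python
import math

def leg_lengths(base_pts, plat_pts, translation, rotation):
    """Stewart-platform inverse kinematics: required length of each leg.

    base_pts / plat_pts are the leg anchor points in the base / platform
    frames; translation is the platform origin in the base frame and
    rotation is a 3x3 orientation matrix.
    """
    lengths = []
    for b, p in zip(base_pts, plat_pts):
        # platform anchor expressed in the base frame: T + R p
        world = [translation[i] + sum(rotation[i][j] * p[j] for j in range(3))
                 for i in range(3)]
        lengths.append(math.dist(world, b))
    return lengths
```

    For the soft platform described above, such lengths would set the shape-memory-alloy actuator commands, with the capacitive strain gauges closing the feedback loop.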

  15. Parallel algorithms on the ASTRA SIMD machine

    International Nuclear Information System (INIS)

    Odor, G.; Rohrbach, F.; Vesztergombi, G.; Varga, G.; Tatrai, F.

    1996-01-01

    In view of the tremendous computing-power jump of modern RISC processors, the interest in parallel computing seems to be thinning out. Why use a complicated system of parallel processors if the problem can be solved by a single powerful micro-chip? It is a general law, however, that exponential growth will always end in some kind of saturation, and then parallelism will again become a hot topic. We try to prepare ourselves for this eventuality. The MPPC project started in 1990, in the heyday of parallelism, and produced four ASTRA machines (presented at CHEP '92) with 4k processors (expandable to 16k) based on yesterday's chip technology (chip presented at CHEP '91). These machines now provide excellent test-beds for algorithmic developments in a complete, real environment. We are developing, for example, fast pattern-recognition algorithms which could be used in high-energy physics experiments at the LHC (planned to be operational after 2004 at CERN) for triggering and data reduction. The basic feature of our ASP (Associative String Processor) approach is to use extremely simple (thus very cheap) processor elements, but in huge quantities (up to millions of processors) connected together by a very simple string-like communication chain. In this paper we present powerful algorithms based on this architecture, indicating the performance perspectives if the hardware quality reaches present or even future technology levels. (author)

  16. Chain reaction

    International Nuclear Information System (INIS)

    Balogh, Brian.

    1991-01-01

    Chain Reaction is a work of recent American political history. It seeks to explain how and why America came to depend so heavily on its experts after World War II, how those experts translated that authority into political clout, and why that authority and political discretion declined in the 1970s. The author's research into the internal memoranda of the Atomic Energy Commission substantiates his argument in historical detail. It was not the ravages of American anti-intellectualism, as so many scholars have argued, that brought the experts back down to earth. Rather, their decline can be traced to the very roots of their success after World War II. The need to over-state anticipated results in order to garner public support, incessant professional and bureaucratic specialization, and the sheer proliferation of expertise pushed arcane and insulated debates between experts into public forums at the same time that a broad cross section of political participants found it easier to gain access to their own expertise. These tendencies ultimately undermined the political influence of all experts. (author)

  17. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
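
    The two-phase method described above can be sketched as follows; the outer loops stand in for the n processors, and each iteration is independent work. The 1-D interval objects are a simplification of the general objects in the record.

```python
def populate_grid(objects, n, grid_min, grid_max):
    """Two-phase parallel grid population (phases shown as loops; each
    iteration of the outer loops is one processor's independent work).

    `objects` are (lo, hi) intervals on a 1-D grid split into n portions.
    """
    width = (grid_max - grid_min) / n
    # Phase 1: objects are divided among processors; each processor
    # determines which grid portion(s) bound each of its objects.
    bounded_by = []                           # (portion index, object) pairs
    for proc in range(n):
        for obj in objects[proc::n]:          # this processor's object set
            lo, hi = obj
            first = max(0, int((lo - grid_min) // width))
            last = min(n - 1, int((hi - grid_min) // width))
            for portion in range(first, last + 1):
                bounded_by.append((portion, obj))
    # Phase 2: each processor owns one portion and populates it with the
    # objects previously determined to be at least partially bounded by it.
    grid = [[] for _ in range(n)]
    for proc in range(n):
        for portion, obj in bounded_by:
            if portion == proc:
                grid[proc].append(obj)
    return grid
```

    Note that an object straddling a portion boundary is inserted into every portion it touches, as the method requires.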

  18. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give

  19. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, Stewart platforms stand out as the oldest: fast, solid, and precise. The work outlines a few main elements of Stewart platforms. It begins with the geometry of the platform and its kinematic elements, and then presents a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform is then described by a rotation-matrix method. If a structural motor element consists of two moving elements that translate relative to each other, it is more convenient, for the drive train and especially for the dynamics, to represent it as a single moving component. We thus have seven moving parts (the six motor elements, or feet, plus the mobile platform) and one fixed part.

  20. Compilation Tool Chains and Intermediate Representations

    DEFF Research Database (Denmark)

    Mottin, Julien; Pacull, François; Keryell, Ronan

    2014-01-01

    In SMECY, we believe that an efficient tool chain could only be defined when the type of parallelism required by an application domain and the hardware architecture is fixed. Furthermore, we believe that once a set of tools is available, it is possible with reasonable effort to change hardware ar...

  1. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  2. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  3. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  4. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  5. Boltzmann machines as a model for parallel annealing

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.

    1991-01-01

    The potential of Boltzmann machines to cope with difficult combinatorial optimization problems is investigated. A discussion of various (parallel) models of Boltzmann machines is given based on the theory of Markov chains. A general strategy is presented for solving (approximately) combinatorial

  6. Parallelization of Subchannel Analysis Code MATRA

    International Nuclear Information System (INIS)

    Kim, Seongjin; Hwang, Daehyun; Kwon, Hyouk

    2014-01-01

    A stand-alone MATRA calculation takes an acceptable amount of computing time for thermal-margin calculations, while a relatively considerable time is needed to solve whole-core pin-by-pin problems. In addition, it is strongly required to improve the computation speed of the MATRA code to satisfy the overall performance of multi-physics coupling calculations. Therefore, a parallel approach to improve and optimize the computing performance of the MATRA code is proposed and verified in this study. The parallel algorithm is embodied in the MATRA code using the MPI communication method, and modification of the previous code structure was minimized. The improvement is confirmed by comparing the results between the single- and multiple-processor algorithms. The speedup and efficiency are also evaluated when increasing the number of processors. The parallel algorithm was implemented in the subchannel code MATRA using MPI. The performance of the parallel algorithm was verified by comparing the results with those from MATRA with a single processor. It is also noticed that the performance of the MATRA code was greatly improved by implementing the parallel algorithm for the 1/8-core and whole-core problems

  7. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Huang, Maoyi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hou, Zhangshuan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bao, Jie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ren, Huiying [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-08-01

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted to Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces per-chain sampling burden, enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
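
    SAChES itself combines Differential Evolution Monte Carlo with Adaptive Metropolis; as a minimal illustration of the loosely coupled chain-ensemble structure it builds on, the sketch below runs an ensemble of independent Metropolis chains, where each chain's update could run on its own worker. The Gaussian target and step size are hypothetical.

```python
import math
import random

def metropolis_ensemble(log_post, n_chains, n_steps, step=0.5, seed=0):
    """Run an ensemble of independent random-walk Metropolis chains.

    Each chain reads only its own state, so the inner loop over chains
    is trivially parallel -- the loose coupling SAChES exploits.
    """
    rng = random.Random(seed)
    chains = [[rng.uniform(-1.0, 1.0)] for _ in range(n_chains)]
    for _ in range(n_steps):
        for chain in chains:              # one chain per worker, in parallel
            x = chain[-1]
            proposal = x + rng.gauss(0.0, step)
            log_alpha = min(0.0, log_post(proposal) - log_post(x))
            if rng.random() < math.exp(log_alpha):
                chain.append(proposal)    # accept
            else:
                chain.append(x)           # reject: repeat current state
    return chains

# Hypothetical target: a standard normal log-density (up to a constant)
chains = metropolis_ensemble(lambda x: -0.5 * x * x, n_chains=8, n_steps=200)
```

    Differential evolution would replace the Gaussian proposal with a scaled difference of two other chains' states; adaptive Metropolis would tune the proposal covariance from the ensemble history.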

  8. Compositionality for Markov reward chains with fast and silent transitions

    NARCIS (Netherlands)

    Markovski, J.; Sokolova, A.; Trcka, N.; Vink, de E.P.

    2009-01-01

    A parallel composition is defined for Markov reward chains with stochastic discontinuity, and with fast and silent transitions. In this setting, compositionality with respect to the relevant aggregation preorders is established. For Markov reward chains with fast transitions the preorders are

  9. The BLAZE language - A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, Piyush; Van Rosendale, John

    1987-01-01

    A Pascal-like scientific programming language, BLAZE, is described. BLAZE contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus BLAZE should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of BLAZE is portability across a broad range of parallel architectures. The multiple levels of parallelism present in BLAZE code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of BLAZE are described and it is shown how this language would be used in typical scientific programming.
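    BLAZE's forall loops express order-independent, fine-grained parallelism. A rough analogue in Python (illustrative only; `forall` is our hypothetical helper, not BLAZE syntax) makes the idea concrete: because the body is side-effect-free, a compiler or runtime is free to run the iterations in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

def forall(f, xs, workers=4):
    """forall-style loop: apply a side-effect-free f to every element.
    Iterations are order-independent, so the runtime may parallelize them;
    results are returned in the original element order."""
    with ThreadPoolExecutor(workers) as ex:
        return list(ex.map(f, xs))

squares = forall(lambda x: x * x, range(8))  # → [0, 1, 4, 9, 16, 25, 36, 49]
```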

  10. The BLAZE language: A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, P.; Vanrosendale, J.

    1985-01-01

    A Pascal-like scientific programming language, Blaze, is described. Blaze contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus Blaze should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of Blaze is portability across a broad range of parallel architectures. The multiple levels of parallelism present in Blaze code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of Blaze are described, and it is shown how this language would be used in typical scientific programming.

  11. A randomized, double-blind, placebo-controlled, multiple-dose, parallel-group clinical trial to assess the effects of teduglutide on gastric emptying of liquids in healthy subjects.

    Science.gov (United States)

    Berg, Jolene Kay; Kim, Eric H; Li, Benjamin; Joelsson, Bo; Youssef, Nader N

    2014-02-12

    Teduglutide, a recombinant analog of human glucagon-like peptide (GLP)-2, is a novel therapy recently approved for the treatment of adult patients with short bowel syndrome who are dependent on parenteral support. Previous studies assessing the effect of GLP-2 on gastric emptying in humans have yielded inconsistent results, with some studies showing no effect and others documenting a GLP-2-dependent delay in gastric emptying. The primary objective of this study was to assess the effect of teduglutide on gastric emptying of liquids in healthy subjects, as measured by the pharmacokinetics of acetaminophen. This double-blind, parallel-group, single-center study enrolled and randomized 36 healthy subjects (22 men, 14 women) to receive subcutaneous doses of teduglutide 4 mg or placebo (2:1 ratio; 23:13) once daily on Days 1 through 10 in the morning. Gastric emptying of a mixed nutrient liquid meal was assessed by measuring acetaminophen levels predose and at 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 3, 3.5, 4, 5, 6, 8, 10, 12, and 14 hours after administration of 1000 mg acetaminophen on Days 0 and 10. The primary study endpoint was a pharmacokinetic analysis of acetaminophen absorption in subjects receiving teduglutide or placebo. No significant differences in gastric emptying of liquids (acetaminophen area under the concentration [AUC] vs time curve from time 0 to the last measurable concentration, AUC extrapolated to infinity, maximum concentration [Cmax], and time to Cmax) were observed on Day 10 in subjects receiving teduglutide 4 mg versus subjects receiving placebo. There were no serious adverse events (AEs), deaths, or discontinuations due to an AE reported during the study. Teduglutide 4 mg/day for 10 days does not affect gastric emptying of liquids in healthy subjects as measured by acetaminophen pharmacokinetics. No unexpected safety signals were observed. This study was registered at ClinicalTrials.gov, identifier NCT01209351.

  12. A double-blind, randomized, multiple-dose, parallel-group study to characterize the occurrence of diarrhea following two different dosing regimens of neratinib, an irreversible pan-ErbB receptor tyrosine kinase inhibitor.

    Science.gov (United States)

    Abbas, Richat; Hug, Bruce A; Leister, Cathie; Sonnichsen, Daryl

    2012-07-01

    Neratinib, a potent, low-molecular-weight, orally administered, irreversible, pan-ErbB receptor tyrosine kinase inhibitor has antitumor activity in ErbB2 + breast cancer. The objective of this study was to characterize the onset, severity, and duration of diarrhea after administration of neratinib 240 mg once daily (QD) and 120 mg twice daily (BID) for ≤14 days in healthy subjects. A randomized, double-blind, parallel-group, inpatient study was conducted in 50 subjects given oral neratinib either 240 mg QD or 120 mg BID with food for ≤14 days. The primary endpoint was the proportion of subjects with diarrhea of at least moderate severity (grade 2; 5-7 loose stools/day). In subjects with grade 2 diarrhea, fecal analytes were determined. Pharmacokinetic profiles were characterized for neratinib on Days 1 and 7. No severe (grade 3) diarrhea was reported. By Day 4, all subjects had grade 1 diarrhea. Grade 2 diarrhea occurred in 11/22 evaluable subjects (50 % [90 % confidence interval (CI): 28-72 %]) in the QD group and 17/23 evaluable subjects (74 % [90 % CI: 52-90 %]) in the BID group (P = 0.130). In fecal analyses, 18 % tested positive for hemoglobin and 46 % revealed fecal lactoferrin. Specimen pH was neutral to slightly alkaline. In pharmacokinetic analyses, Day 1 peak plasma concentration and Day 7 steady-state exposure were higher with the QD regimen than the BID regimen. In an exploratory analysis, ABCG2 genotype showed no correlation with severity or onset of diarrhea. Incidences and onsets of at least grade 1 and at least grade 2 diarrhea were not improved on BID dosing compared with QD dosing.

  13. Green Nanotechnology in Nordic Construction - Eco-innovation strategies and Dynamics in nordic Window Chains

    DEFF Research Database (Denmark)

    Andersen, Maj Munch; Sandén, Björn A.; Palmberg, Christopher

    This project analyzes Nordic trends in the development and industrial uptake of green nanotechnology in construction. The project applies an evolutionary economic perspective in analyzing the innovation dynamics and firm strategies in the window value chains in three Nordic countries, Denmark......, Finland and Sweden. Hence the project investigates two pervasive parallel market trends: the emergence of the green market and the emergence of nanotechnology. The analysis investigates how a traditional economic sector such as the construction sector reacts to such major trends. Conclusions are multiple...... of nanotechnology in the construction sector in the Nordic countries we do find quite a high number of nanotech applications in the Nordic window chains. Eco-innovation is strongly influencing nanotech development. We see several examples of nano-enabled smart, multifunctional green solutions in the Nordic...

  14. Green nanotechnology in Nordic Construction: Eco-innovation strategies and Dynamics in Nordic Window Value Chains

    DEFF Research Database (Denmark)

    Andersen, Maj Munch

    2010-01-01

    This project analyzes Nordic trends in the development and industrial uptake of green nanotechnology in construction. The project applies an evolutionary economic perspective in analyzing the innovation dynamics and firm strategies in the window value chains in three Nordic countries, Denmark......, Finland and Sweden. Hence the project investigates two pervasive parallel market trends: the emergence of the green market and the emergence of nanotechnology. The analysis investigates how a traditional economic sector such as the construction sector reacts to such major trends. Conclusions are multiple...... of nanotechnology in the construction sector in the Nordic countries we do find quite a high number of nanotech applications in the Nordic window chains. Eco-innovation is strongly influencing nanotech development. We see several examples of nano-enabled smart, multifunctional green solutions in the Nordic...

  15. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
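    The whole-number tables described in this record can be generated directly with exact rational arithmetic; a small Python sketch (ours, for illustration):

```python
from fractions import Fraction

def parallel_resistance(*rs):
    """Total resistance of resistors in parallel: 1/R_t = sum(1/R_i)."""
    return 1 / sum(Fraction(1, r) for r in rs)

# Pairs of resistors up to 30 ohms whose parallel combination is a whole number
whole = [(a, b) for a in range(1, 31) for b in range(a, 31)
         if parallel_resistance(a, b).denominator == 1]
# e.g. 3 ohms in parallel with 6 ohms gives exactly 2 ohms
```

Using `Fraction` avoids the floating-point rounding that would otherwise make "is this total a whole number?" unreliable.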

  16. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of research is the tools to support the development of parallel programs in C/C++. Methods and software that automate the process of designing parallel applications are proposed.

  17. Linear parallel processing machines I

    Energy Technology Data Exchange (ETDEWEB)

    Von Kunze, M

    1984-01-01

    As is well-known, non-context-free grammars for generating formal languages happen to be of a certain intrinsic computational power that presents serious difficulties to efficient parsing algorithms as well as for the development of an algebraic theory of context-sensitive languages. In this paper a framework is given for the investigation of the computational power of formal grammars, in order to start a thorough analysis of grammars consisting of derivation rules of the form aB --> A_1 ... A_n b_1 ... b_m. These grammars may be thought of as automata by means of parallel processing, if one considers the variables as operators acting on the terminals while reading them right-to-left. This kind of automata and their 2-dimensional programming language prove to be useful by allowing a concise linear-time algorithm for integer multiplication. Linear parallel processing machines (LP-machines) which are, in their general form, equivalent to Turing machines, include finite automata and pushdown automata (with states encoded) as special cases. Bounded LP-machines yield deterministic accepting automata for nondeterministic context-free languages, and they define an interesting class of context-sensitive languages. A characterization of this class in terms of generating grammars is established by using derivation trees with crossings as a helpful tool. From the algebraic point of view, deterministic LP-machines are effectively represented semigroups with distinguished subsets. Concerning the dualism between generating and accepting devices of formal languages within the algebraic setting, the concept of accepting automata turns out to reduce essentially to embeddability in an effectively represented extension monoid, even in the classical cases.

  18. [Analysis of Relationship between Serum Total Light Chain κ/λ Ratio and Proportion of Bone Marrow Plasma Cells in Patients with IgG type and IgA type Multiple Myeloma].

    Science.gov (United States)

    Zhu, An-You; Zhu, Fang-Bing; Wang, Feng-Chao; Zhang, Lun-Jun; Ma, Yue; Hu, Jian-Guo

    2017-10-01

    To explore the relationship between serum total light chain κ/λ ratio (sTLC-κ/λ) and proportion of bone marrow plasma cells (BMPC) in patients with IgG type and IgA type multiple myeloma (MM) and its clinical significance. The levels of serum IgG, IgA, κ type and λ type total light chain were detected in 79 newly diagnosed patients with IgG type (n=52) and IgA type (n=27) MM by immuno-nephelometric assay and the sTLC-κ/λ ratio was calculated. The proportion of BMPC was determined by bone marrow smears in the corresponding period, and the changes in sTLC-κ/λ ratio and the proportion of BMPC were observed in 19 patients with IgG type (n=16) and IgA type (n=3) MM undergoing treatment; 26 cases of non-plasmocytic proliferative diseases were enrolled in the control group. In MM patients with IgGκ type and IgAκ type, the sTLC-κ/λ ratio was significantly higher than that in the control group (Pratio was significantly lower than that in the control group (Pratio was significantly higher than that in MM patients with IgAκ(Pratio in MM patients with IgGλ was significantly lower than that in MM patients with IgAλ. The sTLC-κ/λ ratios in MM patients with IgGκ and IgAκ were positively correlated with the concentrations of IgG (r=0.778,P=0.000) and IgA (r=0.601,P=0.039), while the sTLC-κ/λ ratios of patients with IgGλ and IgAλ were negatively correlated with the IgG (r=-0.586,P=0.01) and IgA (r=-0.718,P=0.003) levels. In addition, a correlation was not found for any type of MM except IgGκ type MM, which showed a positive correlation between the sTLC-κ/λ ratio and proportion of BMPC (r=0.579,P=0.002). Nonetheless, 18 of 19 patients with IgG type and IgA type MM undergoing treatment showed concordance between the sTLC-κ/λ ratio and proportion of BMPC change. There is a lower correlation between the sTLC-κ/λ ratio and the proportion of BMPC in MM patients with IgG type and IgA type, but there is a high concordance between the sTLC-κ/λ ratio and the

  19. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts....

  20. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Parallel channel interactions are examined. Results of experimental research on nonstationary flow regimes in three parallel vertical channels are presented, together with an analysis of the phenomena and mechanisms of parallel channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow. (author)

  1. MASCOT user's guide--Version 2.0: Analytical solutions for multidimensional transport of a four-member radionuclide decay chain in ground water

    International Nuclear Information System (INIS)

    Gureghian, A.B.

    1988-07-01

    The MASCOT code computes the two- and three-dimensional space-time dependent convective-dispersive transport of a four-member radionuclide decay chain in unbounded homogeneous porous media, for constant and radionuclide-dependent release, and assuming steady-state isothermal ground-water flow and parallel streamlines. The model can handle a single or multiple finite line source or a Gaussian distributed source in the two-dimensional case, and a single or multiple patch source or bivariate-normal distributed source in the three-dimensional case. The differential equations are solved by Laplace and Fourier transforms and a Gauss-Legendre integration scheme. 33 figs., 3 tabs
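    The Gauss-Legendre integration scheme mentioned in this abstract is a standard quadrature rule; a minimal NumPy sketch of that step (our illustration, not MASCOT code):

```python
import numpy as np

def gauss_legendre(f, a, b, n=16):
    """Integrate f over [a, b] with n-point Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)       # affine map of nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))
```

For smooth integrands such as the transformed transport kernels, an n-point rule is exact for polynomials up to degree 2n-1, which is why it pairs well with analytic Laplace/Fourier inversions.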

  2. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  3. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  4. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  5. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer.  Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...

  6. Neoclassical parallel flow calculation in the presence of external parallel momentum sources in Heliotron J

    Energy Technology Data Exchange (ETDEWEB)

    Nishioka, K.; Nakamura, Y. [Graduate School of Energy Science, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Nishimura, S. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Lee, H. Y. [Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); Kobayashi, S.; Mizuuchi, T.; Nagasaki, K.; Okada, H.; Minami, T.; Kado, S.; Yamamoto, S.; Ohshima, S.; Konoshima, S.; Sano, F. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan)

    2016-03-15

    A moment approach to calculate neoclassical transport in non-axisymmetric torus plasmas composed of multiple ion species is extended to include the external parallel momentum sources due to unbalanced tangential neutral beam injections (NBIs). The momentum sources that are included in the parallel momentum balance are calculated from the collision operators of background particles with fast ions. This method is applied for the clarification of the physical mechanism of the neoclassical parallel ion flows and the multi-ion species effect on them in Heliotron J NBI plasmas. It is found that parallel ion flow can be determined by the balance between the parallel viscosity and the external momentum source in the region where the external source is much larger than the thermodynamic force driven source in the collisional plasmas. This is because the friction between C{sup 6+} and D{sup +} prevents a large difference between C{sup 6+} and D{sup +} flow velocities in such plasmas. The C{sup 6+} flow velocities, which are measured by the charge exchange recombination spectroscopy system, are numerically evaluated with this method. It is shown that the experimentally measured C{sup 6+} impurity flow velocities do not clearly contradict the neoclassical estimations, and the dependence of parallel flow velocities on the magnetic field ripples is consistent in both results.

  7. Hyper-systolic matrix multiplication

    NARCIS (Netherlands)

    Lippert, Th.; Petkov, N.; Palazzari, P.; Schilling, K.

    A novel parallel algorithm for matrix multiplication is presented. It is based on a 1-D hyper-systolic processor abstraction. The procedure can be implemented on all types of parallel systems. (C) 2001 Elsevier Science B.V. All rights reserved.
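    The 1-D systolic idea, data blocks shifting through a ring of processors, can be imitated serially. This Python sketch (our simplification for illustration, not the paper's hyper-systolic algorithm) circulates column blocks of B around a ring of p virtual processors, each holding one row block of A:

```python
import numpy as np

def ring_matmul(A, B, p=4):
    """Matrix multiply on a simulated 1-D ring of p processors: processor i
    owns row block i of A; column blocks of B shift one position per step,
    so after p steps every (row block, column block) pair has met once."""
    n = A.shape[0]
    assert n % p == 0, "n must be divisible by the number of processors"
    blk = n // p
    Arows = [A[i * blk:(i + 1) * blk, :] for i in range(p)]
    Bcols = [B[:, j * blk:(j + 1) * blk] for j in range(p)]
    C = np.zeros((n, B.shape[1]))
    for step in range(p):
        for i in range(p):
            j = (i + step) % p  # which B block processor i holds at this step
            C[i * blk:(i + 1) * blk, j * blk:(j + 1) * blk] = Arows[i] @ Bcols[j]
    return C
```

On real hardware the inner loop over `i` runs concurrently on the p processors, with only nearest-neighbour communication per step.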

  8. CHAINS-PC, Decay Chain Atomic Densities

    International Nuclear Information System (INIS)

    1994-01-01

    1 - Description of program or function: CHAINS computes the atom density of members of a single radioactive decay chain. The linearity of the Bateman equations allows tracing of interconnecting chains by manually accumulating results from separate calculations of single chains. Re-entrant loops can be treated as extensions of a single chain. Losses from the chain are also tallied. 2 - Method of solution: The Bateman equations are solved analytically using double-precision arithmetic. Poles are avoided by small alterations of the loss terms. Multigroup fluxes, cross sections, and self-shielding factors entered as input are used to compute the effective specific reaction rates. The atom densities are computed at any specified times. 3 - Restrictions on the complexity of the problem: Maxima of 100 energy groups, 100 time values, 50 members in a chain
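    The Bateman solution that CHAINS evaluates has a compact closed form when the decay constants are distinct; a sketch in Python (our illustration covering pure decay only, without the multigroup flux and cross-section terms the code also handles):

```python
import numpy as np

def bateman(N0, lambdas, t):
    """Atom density of the last member of a decay chain at time t, starting
    from N0 atoms of the first nuclide (Bateman solution, distinct lambdas)."""
    lam = np.asarray(lambdas, dtype=float)
    n = len(lam)
    total = 0.0
    for i in range(n):
        others = np.delete(lam, i)
        total += np.exp(-lam[i] * t) / np.prod(others - lam[i])
    return N0 * np.prod(lam[:-1]) * total
```

The poles the abstract mentions occur when two decay constants coincide (a zero denominator above); CHAINS sidesteps them by slightly perturbing the loss terms.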

  9. Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python

    Science.gov (United States)

    Laura, Jason R.; Rey, Sergio J.

    2017-01-01

    Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyber infrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Monte Carlo Markov chain simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.
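    One pattern from this literature, Monte Carlo simulation of a spatial statistic spread across worker processes, can be sketched as follows (our example, not the paper's code): each worker computes the mean nearest-neighbour distance of one random point pattern, building a reference distribution in parallel.

```python
import numpy as np
from multiprocessing import Pool

def mean_nn_distance(pts):
    """Mean nearest-neighbour distance of a 2-D point set."""
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)          # exclude each point's zero self-distance
    return d.min(axis=1).mean()

def simulate(args):
    """One realization under complete spatial randomness (CSR)."""
    seed, n = args
    rng = np.random.default_rng(seed)
    return mean_nn_distance(rng.random((n, 2)))

if __name__ == "__main__":
    # Independent realizations are embarrassingly parallel across processes
    with Pool(2) as pool:
        sims = pool.map(simulate, [(seed, 64) for seed in range(8)])
```

Because the realizations are independent, this is embarrassingly parallel, which is exactly the kind of Monte Carlo workload the review identifies as a natural fit for parallelization.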

  10. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  11. Linkage mechanisms in the vertebrate skull: Structure and function of three-dimensional, parallel transmission systems.

    Science.gov (United States)

    Olsen, Aaron M; Westneat, Mark W

    2016-12-01

    Many musculoskeletal systems, including the skulls of birds, fishes, and some lizards consist of interconnected chains of mobile skeletal elements, analogous to linkage mechanisms used in engineering. Biomechanical studies have applied linkage models to a diversity of musculoskeletal systems, with previous applications primarily focusing on two-dimensional linkage geometries, bilaterally symmetrical pairs of planar linkages, or single four-bar linkages. Here, we present new, three-dimensional (3D), parallel linkage models of the skulls of birds and fishes and use these models (available as free kinematic simulation software), to investigate structure-function relationships in these systems. This new computational framework provides an accessible and integrated workflow for exploring the evolution of structure and function in complex musculoskeletal systems. Linkage simulations show that kinematic transmission, although a suitable functional metric for linkages with single rotating input and output links, can give misleading results when applied to linkages with substantial translational components or multiple output links. To take into account both linear and rotational displacement we define force mechanical advantage for a linkage (analogous to lever mechanical advantage) and apply this metric to measure transmission efficiency in the bird cranial mechanism. For linkages with multiple, expanding output points we propose a new functional metric, expansion advantage, to measure expansion amplification and apply this metric to the buccal expansion mechanism in fishes. Using the bird cranial linkage model, we quantify the inaccuracies that result from simplifying a 3D geometry into two dimensions. We also show that by combining single-chain linkages into parallel linkages, more links can be simulated while decreasing or maintaining the same number of input parameters. This generalized framework for linkage simulation and analysis can accommodate linkages of differing

  12. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
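    The serial-versus-parallel distinction in this abstract can be made concrete with Jones matrices (a toy NumPy illustration of the algebra, not the authors' apparatus): crossed polarizers in series multiply to the zero matrix, while the parallel architecture sums weighted component transforms acting on split beams.

```python
import numpy as np

H = np.array([[1.0, 0.0], [0.0, 0.0]])  # Jones matrix: horizontal linear polarizer
V = np.array([[0.0, 0.0], [0.0, 1.0]])  # Jones matrix: vertical linear polarizer

# Serial architecture: a product of matrices; crossed polarizers block all light
serial = V @ H

# Parallel architecture: a weighted sum of matrices, modelling a beam that is
# split, modulated component-wise, and recombined
parallel = 0.5 * H + 0.5 * V
```

The product annihilates every input state, whereas the sum yields a new, non-trivial transform (here a uniform attenuator), illustrating why the sum-of-matrices architecture expands the accessible parameter space.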

  13. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Parallel generation of architecture on the GPU

    KAUST Repository

    Steinberger, Markus

    2014-05-01

    In this paper, we present a novel approach for the parallel evaluation of procedural shape grammars on the graphics processing unit (GPU). Unlike previous approaches that are either limited in the kind of shapes they allow, the amount of parallelism they can take advantage of, or both, our method supports state of the art procedural modeling including stochasticity and context-sensitivity. To increase parallelism, we explicitly express independence in the grammar, reduce inter-rule dependencies required for context-sensitive evaluation, and introduce intra-rule parallelism. Our rule scheduling scheme avoids unnecessary back and forth between CPU and GPU and reduces round trips to slow global memory by dynamically grouping rules in on-chip shared memory. Our GPU shape grammar implementation is multiple orders of magnitude faster than the standard in CPU-based rule evaluation, while offering equal expressive power. In comparison to the state of the art in GPU shape grammar derivation, our approach is nearly 50 times faster, while adding support for geometric context-sensitivity. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
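
The level-synchronous, stochastic rule evaluation that such grammars require can be sketched in miniature; the toy grammar, rule names and probabilities below are invented for illustration and bear no relation to the paper's implementation.

```python
import random

# hypothetical toy grammar: a building splits into floors, floors into tiles
RULES = {
    "Building": lambda rng: ["Floor"] * rng.randint(2, 5),      # stochastic count
    "Floor":    lambda rng: (["Window", "Window", "Door"]       # stochastic choice
                             if rng.random() < 0.3
                             else ["Window"] * 3),
}

def derive(axiom="Building", seed=0):
    """Level-synchronous derivation: all symbols of one generation are
    independent of each other, so each level could be expanded in
    parallel (a GPU scheduler groups such rules to do exactly that)."""
    rng = random.Random(seed)
    level, terminals = [axiom], []
    while level:
        nxt = []
        for sym in level:              # embarrassingly parallel per level
            if sym in RULES:
                nxt.extend(RULES[sym](rng))
            else:
                terminals.append(sym)
        level = nxt
    return terminals

facade = derive(seed=0)
```

Context-sensitivity, as in the paper, would add dependencies between symbols of a level, which is exactly what the authors' scheduling scheme works to minimize.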

  15. Logistic chain modelling

    NARCIS (Netherlands)

    Slats, P.A.; Bhola, B.; Evers, J.J.M.; Dijkhuizen, G.

    1995-01-01

    Logistic chain modelling is very important in improving the overall performance of the total logistic chain. Logistic models provide support for a large range of applications, such as analysing bottlenecks, improving customer service, configuring new logistic chains and adapting existing chains to

  16. Sustainable and responsible supply chain governance: challenges and opportunities

    NARCIS (Netherlands)

    Boström, M.; Jönsson, A.M.; Lockie, S.; Mol, A.P.J.; Oosterveer, P.J.M.

    2015-01-01

    This paper introduces the Special Volume on sustainable and responsible supply chain governance. As globalized supply chains cross multiple regulatory borders, the firms involved in these chains come under increasing pressure from consumers, NGOs and governments to accept responsibility for social

  17. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with these efforts. This paper aims to make a small contribution to them. We propose an overview of parallel programming, parallel execution and collaborative systems.

  18. Scalable parallel prefix solvers for discrete ordinates transport

    International Nuclear Information System (INIS)

    Pautz, S.; Pandya, T.; Adams, M.

    2009-01-01

    The well-known 'sweep' algorithm for inverting the streaming-plus-collision term in first-order deterministic radiation transport calculations has some desirable numerical properties. However, it suffers from parallel scaling issues caused by a lack of concurrency. The maximum degree of concurrency, and thus the maximum parallelism, grows more slowly than the problem size for sweeps-based solvers. We investigate a new class of parallel algorithms that involves recasting the streaming-plus-collision problem in prefix form and solving via cyclic reduction. This method, although computationally more expensive at low levels of parallelism than the sweep algorithm, offers better theoretical scalability properties. Previous work has demonstrated this approach for one-dimensional calculations; we show how to extend it to multidimensional calculations. Notably, for multiple dimensions it appears that this approach is limited to long-characteristics discretizations; other discretizations cannot be cast in prefix form. We implement two variants of the algorithm within the radlib/SCEPTRE transport code library at Sandia National Laboratories and show results on two different massively parallel systems. Both the 'forward' and 'symmetric' solvers behave similarly, scaling well to larger degrees of parallelism than sweeps-based solvers. We do observe some issues at the highest levels of parallelism (relative to the system size) and discuss possible causes. We conclude that this approach shows good potential for future parallel systems, but the parallel scalability will depend heavily on the architecture of the communication networks of these systems. (authors)
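
The key idea, recasting the sweep recurrence in prefix form over an associative operator, can be sketched in one dimension. The scalar recurrence and function names below are illustrative assumptions; a real solver would evaluate the scan tree concurrently (e.g. via cyclic reduction) rather than serially.

```python
import numpy as np

def sweep_serial(a, b, psi0):
    """Serial 'sweep': psi[i+1] = a[i]*psi[i] + b[i], an inherently
    sequential recurrence (a stand-in for streaming-plus-collision)."""
    psi = [psi0]
    for ai, bi in zip(a, b):
        psi.append(ai * psi[-1] + bi)
    return psi

def compose(f, g):
    """Compose affine maps x -> a*x + b: apply f first, then g."""
    (af, bf), (ag, bg) = f, g
    return (ag * af, ag * bf + bg)

def sweep_prefix(a, b, psi0):
    """Same solution via a prefix scan over the associative composition
    of affine maps. Associativity is what permits parallel evaluation
    (tree reduction / cyclic reduction); the scan here is kept serial
    for clarity."""
    prefixes, acc = [], (1.0, 0.0)          # identity map
    for step in zip(a, b):
        acc = compose(acc, step)
        prefixes.append(acc)
    return [psi0] + [am * psi0 + bm for am, bm in prefixes]

rng = np.random.default_rng(3)
a, b = rng.random(64), rng.random(64)
```

Because `compose` is associative, the prefix maps can be built in O(log n) parallel steps, which is the scalability advantage the abstract describes.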

  19. Pharmacokinetic comparison of controlled-release and immediate-release oral formulations of simvastatin in healthy Korean subjects: a randomized, open-label, parallel-group, single- and multiple-dose study.

    Science.gov (United States)

    Jang, Seong Bok; Lee, Yoon Jung; Lim, Lay Ahyoung; Park, Kyung-Mi; Kwon, Bong-Ju; Woo, Jong Soo; Kim, Yong-Il; Park, Min Soo; Kim, Kyung Hwan; Park, Kyungsoo

    2010-01-01

    A controlled-release (CR) formulation of simvastatin was recently developed in Korea. The formulation is expected to yield a lower C(max) and similar AUC values compared with the immediate-release (IR) formulation. The goal of this study was to compare the pharmacokinetics of the new CR formulation and an IR formulation of simvastatin after single- and multiple-dose administration in healthy Korean subjects. This study was developed as part of a product development project at the request of the Korean regulatory agency. This was a randomized, open-label, parallel-group, 2-part study. Eligible subjects were healthy male or female volunteers between the ages of 19 and 55 years and within 20% of their ideal weight. In part I, each subject received a single dose of the CR or IR formulation of simvastatin 40 mg orally (20 mg x 2 tablets) after fasting. In part II, each subject received the same dose of the CR or IR formulation for 8 consecutive days. Blood samples were obtained for 48 hours after the dose in part I and after the first and the last dose in part II. Pharmacokinetic parameters were determined for both simvastatin (the inactive prodrug) and simvastatin acid (the active moiety). An adverse event (AE) was defined as any unfavorable sign (including an abnormal laboratory finding) or symptom, regardless of whether it had a causal relationship with the study medication. Serious AEs were defined as any events that are considered life threatening, require hospitalization or prolongation of existing hospitalization, cause persistent or significant disability or incapacity, or result in congenital abnormality, birth defect, or death. AEs were determined based on patient interviews and physical examinations. Twenty-four healthy subjects (17 men, 7 women; mean [SD] age, 29 [7] years; age range, 22-50 years) were enrolled in part I, and 29 subjects (17 men, 12 women; mean age, 33 [9] years; age range, 19-55 years) were enrolled in part II. For simvastatin acid, C

  20. Scaling up antiretroviral therapy in Uganda: using supply chain management to appraise health systems strengthening

    Directory of Open Access Journals (Sweden)

    Neuhann Florian

    2011-08-01

    Full Text Available Abstract Background Strengthened national health systems are necessary for effective and sustained expansion of antiretroviral therapy (ART). ART and its supply chain management in Uganda are largely based on parallel and externally supported efforts. The question arises whether systems are being strengthened to sustain access to ART. This study applies systems thinking to assess supply chain management, the role of external support and whether investments create the needed synergies to strengthen health systems. Methods This study uses the WHO health systems framework and examines the issues of governance, financing, information, human resources and service delivery in relation to supply chain management of medicines and technologies. It looks at links and causal chains between supply chain management for ART and the national supply system for essential drugs. It combines data from the literature and key informant interviews with observations at the health service delivery level in a study district. Results Current drug supply chain management in Uganda is characterized by parallel processes and information systems that result in poor quality and inefficiencies. Lower-than-expected health system performance, stock-outs and other shortages affect ART and primary care in general. Poor performance of supply chain management is amplified by weak conditions at all levels of the health system, including the areas of financing, governance, human resources and information. Governance issues include the failure to follow up on initial policy intentions and a focus on narrow, short-term approaches. Conclusion The opportunity and need to use ART investments for an essential supply chain management and a strengthened health system have not been exploited. By applying a systems perspective this work indicates the seriousness of missing system prerequisites. The findings suggest that root causes and capacities across the system have to be addressed synergistically to

  1. Climate models on massively parallel computers

    International Nuclear Information System (INIS)

    Vitart, F.; Rouvillois, P.

    1993-01-01

    First results obtained on massively parallel computers (Multiple Instruction Multiple Data and Single Instruction Multiple Data) make it possible to consider building coupled models with high resolutions. This would enable simulation of the thermohaline circulation and other interaction phenomena between atmosphere and ocean. The increase in computing power, and thus the improvement in resolution, will lead us to revise our approximations. The hydrostatic approximation (in ocean circulation) will no longer be valid when the grid mesh is smaller than a few kilometers: we shall have to find other models. The expertise gained in numerical analysis at the Center of Limeil-Valenton (CEL-V) will be used again to devise global models taking into account atmosphere, ocean, ice floe and biosphere, allowing climate simulation down to a regional scale.

  2. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  3. Parallelization and automatic data distribution for nuclear reactor simulations

    International Nuclear Information System (INIS)

    Liebrock, L.M.

    1997-01-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed

  4. Bayesian tomography by interacting Markov chains

    Science.gov (United States)

    Romary, T.

    2017-12-01

    In seismic tomography, we seek to determine the velocity of the underground from noisy first arrival travel time observations. In most situations, this is an ill posed inverse problem that admits several imperfect solutions. Given an a priori distribution over the parameters of the velocity model, the Bayesian formulation allows us to state this problem as a probabilistic one, with a solution under the form of a posterior distribution. The posterior distribution is generally high dimensional and may exhibit multimodality. Moreover, as it is known only up to a constant, the only sensible way to address this problem is to try to generate simulations from the posterior. The natural tools to perform these simulations are Monte Carlo Markov chains (MCMC). Classical implementations of MCMC algorithms generally suffer from slow mixing: the generated states are slow to enter the stationary regime, that is, to fit the observations, and when one mode of the posterior is eventually identified, it may become difficult to visit others. Using a varying temperature parameter relaxing the constraint on the data may help to enter the stationary regime. Besides, the sequential nature of MCMC makes it ill suited to parallel implementation. Running a large number of chains in parallel may be suboptimal as the information gathered by each chain is not mutualized. Parallel tempering (PT) can be seen as a first attempt to make parallel chains at different temperatures communicate, but they only exchange information between current states. In this talk, I will show that PT actually belongs to a general class of interacting Markov chain algorithms. I will also show that this class makes it possible to design interacting schemes that can take advantage of the whole history of the chain, by allowing exchanges toward already visited states. The algorithms will be illustrated with toy examples and an application to first arrival travel time tomography.
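
A minimal sketch of parallel tempering on a toy bimodal target, assuming a simple Gaussian-mixture "posterior", random-walk Metropolis updates and a hand-picked temperature ladder (all names and tuning values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    """Toy bimodal 'posterior': mixture of two well-separated Gaussians."""
    return np.logaddexp(-0.5 * (x - 4) ** 2, -0.5 * (x + 4) ** 2)

def parallel_tempering(n_iter=5000, temps=(1.0, 2.0, 4.0, 8.0)):
    """Chains at higher temperatures sample a flattened posterior and
    exchange states with their neighbours, helping the cold (T=1)
    chain move between modes."""
    x = np.zeros(len(temps))
    cold = []
    for _ in range(n_iter):
        # within-chain Metropolis update at each temperature
        for k, t in enumerate(temps):
            prop = x[k] + rng.normal(scale=2.0)
            if np.log(rng.random()) < (log_post(prop) - log_post(x[k])) / t:
                x[k] = prop
        # neighbour swap move between a random adjacent pair
        k = rng.integers(len(temps) - 1)
        log_r = (log_post(x[k + 1]) - log_post(x[k])) * (1 / temps[k] - 1 / temps[k + 1])
        if np.log(rng.random()) < log_r:
            x[k], x[k + 1] = x[k + 1], x[k]
        cold.append(x[0])
    return np.array(cold)

samples = parallel_tempering()
```

A single Metropolis chain at T=1 would rarely cross the barrier between the modes at ±4; the swap moves are what let the cold chain visit both. The interacting schemes discussed in the talk generalize these swaps to the chains' whole histories.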

  5. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes the work of an object oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and it should be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
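
The master-slave, cycle-based pattern described above can be sketched as follows. This uses Python threads and queues in place of the framework's message-passing layer, and every name and the toy workload are hypothetical:

```python
import queue
import threading

def worker(task_q, result_q):
    """'Slave' processing unit: receive work messages, return results."""
    while True:
        task = task_q.get()
        if task is None:               # sentinel: no more work this cycle
            break
        chunk_id, data = task
        result_q.put((chunk_id, sum(x * x for x in data)))

def run_cycle(data, n_workers=4, chunk=100):
    """'Master': split one cycle's work between units, gather results."""
    task_q, result_q = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker, args=(task_q, result_q))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    for cid, c in enumerate(chunks):
        task_q.put((cid, c))
    for _ in threads:
        task_q.put(None)
    for t in threads:
        t.join()
    parts = dict(result_q.get() for _ in chunks)
    return sum(parts[cid] for cid in range(len(chunks)))
```

In an ACO or image-filter application, each cycle's results would feed the next cycle's tasks; the framework's job is to make that loop and the work splitting reusable.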

  6. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    International Nuclear Information System (INIS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-01-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines

  7. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    Science.gov (United States)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
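
A Chebyshev smoother needs only matrix-vector products, which is what makes it attractive in parallel. The sketch below is the standard three-term Chebyshev iteration applied to a 1D Poisson matrix; the eigenvalue bounds `lo` and `hi` are assumed given (in practice they are estimated, e.g. with a few Lanczos steps), and all names are illustrative.

```python
import numpy as np

def chebyshev_smooth(A, b, x, lo, hi, degree=3):
    """Chebyshev polynomial smoother for a symmetric positive definite A:
    damps error components with eigenvalues in [lo, hi] using only
    matrix-vector products (no sequential Gauss-Seidel dependencies)."""
    theta = 0.5 * (hi + lo)
    delta = 0.5 * (hi - lo)
    sigma = theta / delta
    rho = 1.0 / sigma
    r = b - A @ x
    d = r / theta
    for _ in range(degree):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix
b = np.ones(n)
x_sm = chebyshev_smooth(A, b, np.zeros(n), lo=0.4, hi=4.0)
```

Targeting roughly the upper part of the spectrum (here `[hi/10, hi]`) is the usual multigrid choice: the coarse grid handles the smooth modes the polynomial leaves alone.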

  8. Rubus: A compiler for seamless and extensible parallelism.

    Directory of Open Access Journals (Sweden)

    Muhammad Adnan

    Full Text Available Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of code in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores.
Whereas, for a matrix multiplication benchmark the average execution speedup of 84

  9. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
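
One of the issues listed, reproducibility under parallelism, is commonly addressed by giving each batch of histories its own deterministically derived random stream; the tally is then independent of how batches are assigned to workers. The sketch below (toy tally, hypothetical names) illustrates this:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def batch_tally(seed, n_histories):
    """Tally one batch of 'histories' (toy example: count random points
    in the unit cube that fall inside the unit sphere octant)."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n_histories, 3))
    return np.count_nonzero((pts ** 2).sum(axis=1) < 1.0)

def parallel_mc(n_batches=32, n_histories=10_000, n_workers=4, base_seed=42):
    """Each batch owns an independent, deterministically spawned RNG
    stream, so the combined tally is reproducible for any number of
    workers, and batches can be handed out freely for load balancing."""
    seeds = np.random.SeedSequence(base_seed).spawn(n_batches)
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        tallies = list(ex.map(batch_tally, seeds, [n_histories] * n_batches))
    return sum(tallies) / (n_batches * n_histories)

estimate = parallel_mc()     # estimates the octant volume pi/6
```

The same per-batch-stream idea applies across SIMD, MIMD and workstation-network targets; what changes is how batches are scheduled, not the statistics.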

  10. Parallel scalability of Hartree-Fock calculations

    Science.gov (United States)

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-01

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
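
Density matrix purification replaces an eigendecomposition with repeated matrix products, which map well onto distributed nodes. Below is a minimal McWeeny purification sketch; the chemical potential and spectral bounds are taken from a dense eigensolve purely for illustration (a real linear-scaling code estimates them without one), and all names are hypothetical.

```python
import numpy as np

def density_from_fock(F, n_occ, iters=60):
    """McWeeny purification: build the density matrix (the projector onto
    the n_occ lowest eigenvectors of F) from matrix products alone via
    D -> 3 D^2 - 2 D^3, whose stable fixed points are eigenvalues 0 and 1."""
    e = np.linalg.eigvalsh(F)                    # illustration only; real
    mu = 0.5 * (e[n_occ - 1] + e[n_occ])         # codes estimate mu and the
    c = 0.5 / max(e[-1] - mu, mu - e[0])         # bounds without eigensolves
    # initial guess: affine map of F with eigenvalues in [0, 1],
    # occupied levels above 1/2 and unoccupied levels below
    D = 0.5 * np.eye(len(F)) + c * (mu * np.eye(len(F)) - F)
    for _ in range(iters):
        D2 = D @ D
        D = 3.0 * D2 - 2.0 * D @ D2
    return D

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
F = 0.5 * (A + A.T)              # symmetric stand-in for a Fock matrix
D = density_from_fock(F, n_occ=3)
```

Each iteration is just two matrix multiplications, which is why, as the abstract notes, the method becomes network-bandwidth bound rather than compute bound at scale.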

  11. A parallel robot to assist vitreoretinal surgery

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, Taiga; Sugita, Naohiko; Mitsuishi, Mamoru [University of Tokyo, School of Engineering, Tokyo (Japan); Ueta, Takashi; Tamaki, Yasuhiro [University of Tokyo, Graduate School of Medicine, Tokyo (Japan)

    2009-11-15

    This paper describes the development and evaluation of a parallel prototype robot for vitreoretinal surgery, where physiological hand tremor limits performance. The manipulator was specifically designed to meet requirements such as size, precision, and sterilization; it has a six-degree-of-freedom parallel architecture and provides positioning accuracy with micrometer resolution within the eye. The manipulator is controlled by an operator with a "master manipulator" consisting of multiple joints. Results of the in vitro experiments revealed that, compared to the manual procedure, higher stability and accuracy of tool positioning could be achieved using the prototype robot. This microsurgical system that we have developed has superior operability compared to the traditional manual procedure and has sufficient potential to be used clinically for vitreoretinal surgery. (orig.)

  12. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.

  13. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D

    2003-01-01

    … Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...

  14. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  15. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    …adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral…

  16. Sustainable Supply Chain Design

    DEFF Research Database (Denmark)

    Bals, Lydia; Tate, Wendy

    A significant conceptual and practical challenge is how to integrate triple bottom line (TBL; including economic, social and environmental) sustainability into global supply chains. Although this integration is necessary to slow down global resource depletion, understanding is limited of how to implement TBL goals across the supply chain. In supply chain design, the classic economic perspective still dominates, although the idea of the TBL is more widely disseminated. The purpose of this research is to add to the sustainable supply chain management (SSCM) literature research agenda by incorporating the physical chain, and the (information and financial) support chains, into supply chain design. This manuscript tackles issues of what the chains are designed for and how they are designed structurally. Four sustainable businesses are used as illustrative case examples of innovative supply chain…

  17. Connecting the Production Multiple

    DEFF Research Database (Denmark)

    Lichen, Alex Yu; Mouritsen, Jan

    This paper is about objects. It follows post-ANT trajectories and finds that objects are multiple and fluid. Extant classic ANT-inspired accounting research largely sees accounting inscriptions as immutable mobiles. Although the multiplicity of objects upon which accounting acts has been explored… the &OP process itself is a fluid object, but there is still the possibility to organise the messy Production. There are connections between the Production multiple and the managerial technology fluid. The fluid enacted the multiplicity of Production, thus making it more difficult to organise because there were… in opposite directions. They are all part of the fluid object. There is no single chain of circulating references that makes the object a matter of fact. Accounting fluidity means that references drift back and forth and enact new realities also connected to the chain. In this setting, future research may…

  18. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  19. Topology of polymer chains under nanoscale confinement.

    Science.gov (United States)

    Satarifard, Vahid; Heidari, Maziar; Mashaghi, Samaneh; Tans, Sander J; Ejtehadi, Mohammad Reza; Mashaghi, Alireza

    2017-08-24

    Spatial confinement limits the conformational space accessible to biomolecules, but the implications for biomolecular topology are not yet known. Folded linear biopolymers can be seen as molecular circuits formed by intramolecular contacts. The pairwise arrangement of intra-chain contacts can be categorized as parallel, series or cross, and has been identified as a topological property. Using molecular dynamics simulations, we determine the contact order distributions and topological circuits of short semi-flexible linear and ring polymer chains with a persistence length of l_p under a spherical confinement of radius R_c. At low values of l_p/R_c, the entropy of the linear chain leads to the formation of independent contacts along the chain and, accordingly, increases the fraction of series topology with respect to other topologies. However, at high l_p/R_c, the fractions of cross and parallel topologies are enhanced in the chain topological circuits, with cross becoming predominant. At an intermediate confining regime, we identify a critical value of l_p/R_c at which all topological states have equal probability. Confinement thus equalizes the probability of the more complex cross and parallel topologies to the level of the simpler, non-cooperative series topology. Moreover, our topology analysis reveals distinct behaviours for ring and linear polymers under weak confinement; however, we find no difference between ring and linear polymers under strong confinement. Under weak confinement, ring polymers adopt parallel and series topologies with equal likelihood, while linear polymers show a higher tendency for series arrangement. The radial distribution analysis of the topology reveals a non-uniform effect of confinement on the topology of polymer chains, imposing more pronounced effects on the core region than on the confinement surface. Additionally, our results reveal that over a wide range of confining radii, loops arranged in parallel and cross
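The series/parallel/cross categorization of contact pairs used above can be sketched concretely. In the circuit-topology convention, each contact is an interval (i, j) along the chain; two contacts are in series when the intervals are disjoint, parallel when one nests inside the other, and cross when they interleave. The helper below is an illustrative sketch of that convention, not the authors' analysis code.

```python
def classify_pair(c1, c2):
    """Classify two intra-chain contacts (i, j), i < j, as 'series'
    (disjoint intervals), 'parallel' (nested) or 'cross' (interleaved),
    following the circuit-topology convention.  Shared endpoints are
    treated as series here; conventions for that edge case vary."""
    # Sort so that contact (a, b) starts no later than (c, d).
    (a, b), (c, d) = sorted([tuple(sorted(c1)), tuple(sorted(c2))])
    if b <= c:          # [a, b] lies entirely before [c, d]
        return "series"
    if d <= b:          # [c, d] is nested inside [a, b]
        return "parallel"
    return "cross"      # a < c < b < d: the intervals interleave
```

For example, contacts (1, 4) and (2, 3) are nested and therefore parallel, while (1, 3) and (2, 4) interleave and are cross.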

  20. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is currently an urgent question. The legalization of parallel import in Russia is expedient; this statement is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimize them.

  1. A Novel Algorithm for the Generation of Distinct Kinematic Chain

    Science.gov (United States)

    Medapati, Sreenivasa Reddy; Kuchibhotla, Mallikarjuna Rao; Annambhotla, Balaji Srinivasa Rao

    2016-07-01

    Generation of distinct kinematic chains is an important topic in the design of mechanisms for various industrial applications, i.e., robotic manipulators, tractors, cranes, etc. Many researchers have intently focused on this area and explained various processes of generating distinct kinematic chains, which are laborious and complex. It is desirable to enumerate kinematic chains systematically, to know the inherent characteristics of a chain related to its structure, so that all the distinct chains can be analyzed in depth prior to the selection of a chain for a purpose. This paper proposes a novel and simple method, with a set of rules defined to eliminate isomorphic kinematic chains, for generating distinct kinematic chains. The method also simplifies the process of generating distinct kinematic chains even at higher levels, i.e., 10-link and 11-link chains with single and multiple degrees of freedom.

  2. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  3. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  4. A Massively Parallel Code for Polarization Calculations

    Science.gov (United States)

    Akiyama, Shizuka; Höflich, Peter

    2001-03-01

    We present an implementation of our Monte-Carlo radiation transport method for rapidly expanding, NLTE atmospheres for massively parallel computers which utilizes both the distributed and shared memory models. This allows us to take full advantage of the fast communication and low latency inherent to nodes with multiple CPUs, and to stretch the limits of scalability with the number of nodes compared to a version which is based on the shared memory model. Test calculations on a local 20-node Beowulf cluster with dual CPUs showed an improved scalability by about 40%.

  5. A parallel input composite transimpedance amplifier

    Science.gov (United States)

    Kim, D. J.; Kim, C.

    2018-01-01

    A new approach to high-performance current-to-voltage preamplifier design is presented. The design, using multiple operational amplifiers (op-amps), has a parasitic-capacitance compensation network and a composite amplifier topology for fast, precise, low-noise performance. The input stage, consisting of parallel-linked JFET op-amps, together with a high-speed bipolar junction transistor (BJT) gain stage driving the output in the composite amplifier topology and cooperating with the capacitance-compensation feedback network, ensures wide-bandwidth stability in the presence of input capacitance above 40 nF. The design is ideal for any two-probe measurement, including high-impedance transport and scanning tunneling microscopy measurements.

  6. Practical parallel processing

    International Nuclear Information System (INIS)

    Arendt, M.L.

    1986-01-01

    ELXSI, a San Jose based computer company, was founded in January of 1979 for the purpose of developing and marketing a tightly-coupled multiple processor system. After five years ELXSI succeeded in making the first commercial installations at Digicon Geophysical, NASA-Dryden, and Sandia National Laboratories. Since that time over fifty-one systems and ninety-three processors have been installed. The commercial success of the ELXSI system 6400(TM) is due to several significant breakthroughs in computer technology including a system bus operating at 320 million bytes per second, a new Message-Based Operating System, EMBOS (TM), and a new system organization which allows for easy expansion in any dimension without changes to the operating system, the user environment, or the application programs. (Auth.)

  7. An Efficient Parallel Multi-Scale Segmentation Method for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Haiyan Gu

    2018-04-01

    Full Text Available Remote sensing (RS) image segmentation is an essential step in geographic object-based image analysis (GEOBIA) to ultimately derive “meaningful objects”. While many segmentation methods exist, most of them are not efficient for large data sets. Thus, the goal of this research is to develop an efficient parallel multi-scale segmentation method for RS imagery by combining graph theory and the fractal net evolution approach (FNEA). Specifically, a minimum spanning tree (MST) algorithm from graph theory is combined with the minimum heterogeneity rule (MHR) algorithm used in FNEA. The MST algorithm is used for the initial segmentation, while the MHR algorithm is used for object merging. An efficient implementation of the segmentation strategy is presented using data partitioning and the “reverse searching-forward processing” chain based on message passing interface (MPI) parallel technology. Segmentation results of the proposed method using images from multiple sensors (airborne, SPECIM AISA EAGLE II, WorldView-2, RADARSAT-2) and different selected landscapes (residential/industrial, residential/agriculture) covering four test sites demonstrated its efficiency in both accuracy and speed. We conclude that the proposed method is applicable and efficient for the segmentation of a variety of RS imagery (airborne optical, satellite optical, SAR, hyperspectral), while the accuracy is comparable with that of the FNEA method.
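The role of the MST order in the initial segmentation can be illustrated with a minimal single-process sketch (the MPI data partitioning and the FNEA merging criterion are omitted, and all names are hypothetical): pixel-adjacency edges are visited in Kruskal order and two regions merge while the dissimilarity stays below a heterogeneity threshold.

```python
class DSU:
    """Union-find over pixel indices, used to track merged regions."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def mst_segment(n_pixels, edges, threshold):
    """Illustrative initial segmentation: process (weight, pixel, pixel)
    edges in increasing-dissimilarity (Kruskal/MST) order and merge the
    two regions whenever the weight is below the heterogeneity threshold."""
    dsu = DSU(n_pixels)
    for w, a, b in sorted(edges):
        if w < threshold:
            dsu.union(a, b)
    labels = [dsu.find(i) for i in range(n_pixels)]
    return len(set(labels)), labels
```

On a one-row image with values [1, 1, 9, 9] and weights equal to neighbour differences, a threshold of 1 yields the two intuitive segments.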

  8. Explorations of the implementation of a parallel IDW interpolation algorithm in a Linux cluster-based parallel GIS

    Science.gov (United States)

    Huang, Fang; Liu, Dingsheng; Tan, Xicheng; Wang, Jian; Chen, Yunping; He, Binbin

    2011-04-01

    To design and implement an open-source parallel GIS (OP-GIS) based on a Linux cluster, the parallel inverse distance weighting (IDW) interpolation algorithm has been chosen as an example to explore the working model and the principle of the algorithm parallel pattern (APP), one of the parallelization patterns for OP-GIS. Based on an analysis of the serial IDW interpolation algorithm of GRASS GIS, this paper has proposed and designed a specific parallel IDW interpolation algorithm, incorporating both single program, multiple data (SPMD) and master/slave (M/S) programming modes. The main steps of the parallel IDW interpolation algorithm are: (1) the master node packages the related information, and then broadcasts it to the slave nodes; (2) each node calculates its assigned data extent along one row using the serial algorithm; (3) the master node gathers the data from all nodes; and (4) iterations continue until all rows have been processed, after which the results are output. According to the experiments performed in the course of this work, the parallel IDW interpolation algorithm can attain an efficiency greater than 0.93 compared with similar algorithms, which indicates that the parallel algorithm can greatly reduce processing time and maximize speed and performance.
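A minimal sketch of the row-wise decomposition in steps (1)-(4) follows, with Python threads standing in for the MPI master/slave machinery described above; all function names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def idw(x, y, samples, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from (sx, sy, value) samples."""
    num = den = 0.0
    for sx, sy, v in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v                      # exact hit on a sample point
        w = d2 ** (-power / 2.0)          # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

def interpolate_row(args):
    """Worker: interpolate one full row of the output grid (step 2)."""
    row, ncols, samples = args
    return [idw(col, row, samples) for col in range(ncols)]

def parallel_idw(nrows, ncols, samples, workers=4):
    """Rows are mutually independent, so they can be farmed out to workers
    and gathered back in order, mirroring the row-wise M/S decomposition."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(interpolate_row,
                             [(r, ncols, samples) for r in range(nrows)]))
```

Because rows are independent, the only communication is the initial broadcast of the sample points and the final gather, which is what allows the reported efficiency above 0.93.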

  9. The Global Value Chain

    DEFF Research Database (Denmark)

    Sørensen, Olav Jull

    The conference paper aims to develop the global value chain concept by including corporate internal value adding activities and competition to the basic framework in order to turn the global value chain into a strategic management tool.

  10. Power Consumption Optimization for Multiple Parallel Centrifugal Pumps

    DEFF Research Database (Denmark)

    Jepsen, Kasper Lund; Hansen, Leif; Mai, Christian

    2017-01-01

    Large amounts of energy are being used in a wide range of applications to transport liquid. This paper proposes a generic solution for minimizing the power consumption of a generic pumping station equipped with identical variable-speed pumps. The proposed solution consists of two sequential steps; fir...

  11. Parallel Detection of Multiple Biomarkers During Spaceflight, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Maintaining the health of astronauts during extended spaceflight is critical to the success of the mission. Radiation Monitoring Devices, Inc. (RMD) proposes an...

  12. Communication-Avoiding Parallel Recursive Algorithms for Matrix Multiplication

    Science.gov (United States)

    2013-05-17


  13. Parallel Low-Loss Measurement of Multiple Atomic Qubits.

    Science.gov (United States)

    Kwon, Minho; Ebert, Matthew F; Walker, Thad G; Saffman, M

    2017-11-03

    We demonstrate low-loss measurement of the hyperfine ground state of rubidium atoms by state-dependent fluorescence detection in a dipole trap array of five sites. The presence of the atoms and their internal states are minimally altered by utilizing circularly polarized probe light and a strictly controlled quantization axis. We achieve a mean state-detection fidelity of 97% without correcting for imperfect state preparation or background losses, and 98.7% when corrected. After state detection and correction for background losses, the probability of atom loss due to the state measurement is small, and the measured state is preserved with >98% probability.

  14. Parallel computing solution of Boltzmann neutron transport equation

    International Nuclear Information System (INIS)

    Ansah-Narh, T.

    2010-01-01

    The focus of the research was on developing a parallel computing algorithm for solving eigenvalues of the Boltzmann Neutron Transport Equation (BNTE) in a slab geometry using a multigrid approach. In response to the slow execution of serial computing when solving large problems such as the BNTE, the study focused on the design of parallel computing systems, an evolution of serial computing that uses multiple processing elements simultaneously to solve complex physical and mathematical problems. The finite element method (FEM) was used for the spatial discretization scheme, while angular discretization was accomplished by expanding the angular dependence in terms of Legendre polynomials. The eigenvalues representing the multiplication factors in the BNTE were determined by the power method. MATLAB Compiler Version 4.1 (R2009a) was used to compile the MATLAB codes of the BNTE. The implemented parallel algorithms were enabled with matlabpool, a Parallel Computing Toolbox function. The option UseParallel was set to 'always' (its default value is 'never'); under these conditions, the solvers computed estimated gradients in parallel. The parallel computing system was used to handle all the bottlenecks in the matrix generated from the finite element scheme and in each domain generated by the power method. The parallel algorithm was implemented on a Symmetric Multi-Processor (SMP) cluster machine with Intel 32-bit quad-core x86 processors. Convergence rates and timings for the algorithm on the SMP cluster machine were obtained. Numerical experiments indicated that the designed parallel algorithm could reach perfect speedup and had good stability and scalability. (au)
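The power method used above for the multiplication factor can be sketched as plain power iteration; the toy 2x2 matrix below merely stands in for the FEM-discretized transport operator and is purely illustrative.

```python
def power_method(matvec, n, iters=200):
    """Dominant eigenvalue and eigenvector of an n x n operator, given only
    as a matrix-vector product, via power iteration.  The infinity-norm of
    the iterate converges to the eigenvalue when it is positive with a
    positive eigenvector, as for a multiplication factor."""
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = matvec(v)
        lam = max(abs(x) for x in w)      # infinity-norm eigenvalue estimate
        v = [x / lam for x in w]          # renormalize to avoid overflow
    return lam, v

def toy_matvec(v):
    """Illustrative stand-in operator: dominant eigenvalue 3, eigenvector (1, 1)."""
    A = [[2.0, 1.0], [1.0, 2.0]]
    return [sum(a * x for a, x in zip(row, v)) for row in A]
```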

  15. Experimental discovery of nodal chains

    Science.gov (United States)

    Yan, Qinghui; Liu, Rongjuan; Yan, Zhongbo; Liu, Boyuan; Chen, Hongsheng; Wang, Zhong; Lu, Ling

    2018-05-01

    Three-dimensional Weyl and Dirac nodal points [1] have attracted widespread interest across multiple disciplines and in many platforms but allow for few structural variations. In contrast, nodal lines [2-4] can have numerous topological configurations in momentum space, forming nodal rings [5-9], nodal chains [10-15], nodal links [16-20] and nodal knots [21,22]. However, nodal lines are much less explored because of the lack of an ideal experimental realization [23-25]. For example, in condensed-matter systems, nodal lines are often fragile to spin-orbit coupling, located away from the Fermi level, coexist with energy-degenerate trivial bands or have a degeneracy line that disperses strongly in energy. Here, overcoming all these difficulties, we theoretically predict and experimentally observe nodal chains in a metallic-mesh photonic crystal having frequency-isolated linear band-touching rings chained across the entire Brillouin zone. These nodal chains are protected by mirror symmetry and have a frequency variation of less than 1%. We use angle-resolved transmission measurements to probe the projected bulk dispersion and perform Fourier-transformed field scans to map out the dispersion of the drumhead surface state. Our results establish an ideal nodal-line material for further study of topological line degeneracies with non-trivial connectivity and consequent wave dynamics that are richer than those in Weyl and Dirac materials.

  16. Multibus-based parallel processor for simulation

    Science.gov (United States)

    Ogrady, E. P.; Wang, C.-H.

    1983-01-01

    A Multibus-based parallel processor simulation system is described. The system is intended to serve as a vehicle for gaining hands-on experience, testing system and application software, and evaluating parallel processor performance during development of a larger system based on the horizontal/vertical-bus interprocessor communication mechanism. The prototype system consists of up to seven Intel iSBC 86/12A single-board computers which serve as processing elements, a multiple transmission controller (MTC) designed to support system operation, and an Intel Model 225 Microcomputer Development System which serves as the user interface and input/output processor. All components are interconnected by a Multibus/IEEE 796 bus. An important characteristic of the system is that it provides a mechanism for a processing element to broadcast data to other selected processing elements. This parallel transfer capability is provided through the design of the MTC and a minor modification to the iSBC 86/12A board. The operation of the MTC, the basic hardware-level operation of the system, and pertinent details about the iSBC 86/12A and the Multibus are described.

  17. A multitransputer parallel processing system (MTPPS)

    International Nuclear Information System (INIS)

    Jethra, A.K.; Pande, S.S.; Borkar, S.P.; Khare, A.N.; Ghodgaonkar, M.D.; Bairi, B.R.

    1993-01-01

    This report describes the design and implementation of a 16-node Multi-Transputer Parallel Processing System (MTPPS), which is a platform for parallel program development. It is a MIMD machine based on the message-passing paradigm. The basic compute engine is an Inmos IMS T800-20 transputer. A transputer with local memory constitutes the processing element (NODE) of this MIMD architecture. Multiple NODES can be connected to each other in an identifiable network topology through the high-speed serial links of the transputer. A Network Configuration Unit (NCU) incorporates the necessary hardware to provide software-controlled network configuration. The system is modularly expandable, and more NODES can be added to achieve the required processing power. The system is a backend to an IBM PC, which has been integrated into the system to provide the user I/O interface; PC resources are available to the programmer. The interface hardware between the PC and the network of transputers is INMOS-compatible; therefore, all commercially available development software compatible with INMOS products can run on this system. While giving the details of design and implementation, this report briefly summarises MIMD architectures, transputer architecture and parallel processing software development issues. LINPACK performance evaluation of the system and solutions of neutron physics and plasma physics problems are discussed along with results. (author). 12 refs., 22 figs., 3 tabs., 3 appendixes

  18. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
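The rsync-style idea, transmitting only the blocks whose checksums differ from a previously stored template, can be sketched as follows. This is a simplified single-node illustration with hypothetical names; the broadcast and compression steps described above are omitted.

```python
import hashlib

BLOCK = 4  # tiny block size for illustration; real checkpoints use far larger blocks

def checksums(data):
    """Per-block checksums of a byte string."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def delta_checkpoint(state, template):
    """Return only the blocks whose checksum differs from the template's;
    only these need to be stored or transmitted, as in the scheme above."""
    tsums = checksums(template)
    delta = {}
    for i, s in enumerate(checksums(state)):
        if i >= len(tsums) or s != tsums[i]:
            delta[i] = state[i * BLOCK:(i + 1) * BLOCK]
    return delta

def restore(template, delta, length):
    """Rebuild the checkpointed state by patching the template with the delta."""
    buf = bytearray(template[:length].ljust(length, b'\0'))
    for i, block in delta.items():
        buf[i * BLOCK:i * BLOCK + len(block)] = block
    return bytes(buf)
```

When most blocks match the template, the delta is a small fraction of the full state, which is the source of the claimed reduction in transmitted and stored data.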

  19. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  20. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost-effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short-circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  1. Chain transitivity in hyperspaces

    International Nuclear Information System (INIS)

    Fernández, Leobardo; Good, Chris; Puljiz, Mate; Ramírez, Ártico

    2015-01-01

    Given a non-empty compact metric space X and a continuous function f: X → X, we study the dynamics of the induced maps on the hyperspace of non-empty compact subsets of X and on various other invariant subspaces thereof, in particular symmetric products. We show how some important dynamical properties transfer across induced systems. These amongst others include, chain transitivity, chain (weakly) mixing, chain recurrence, exactness by chains. From our main theorem we derive an ε-chain version of Furstenberg’s celebrated 2 implies n Theorem. We also show the implications our results have for dynamics on continua.

  2. Decisive Markov Chains

    OpenAIRE

    Abdulla, Parosh Aziz; Henda, Noomene Ben; Mayr, Richard

    2007-01-01

    We consider qualitative and quantitative verification problems for infinite-state Markov chains. We call a Markov chain decisive w.r.t. a given set of target states F if it almost certainly eventually reaches either F or a state from which F can no longer be reached. While all finite Markov chains are trivially decisive (for every set F), this also holds for many classes of infinite Markov chains. Infinite Markov chains which contain a finite attractor are decisive w.r.t. every set F. In part...
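Concretely, decisiveness w.r.t. F requires the chain to almost surely reach either F or the set of states from which F is no longer reachable. For a finite transition graph, that second set is computable by backward reachability, as in this illustrative sketch (not the authors' formalism, which targets infinite chains).

```python
from collections import defaultdict

def cannot_reach(edges, F):
    """States from which no path reaches F (the complement of the backward
    reachability set of F).  A chain is decisive w.r.t. F iff it almost
    surely enters F or this set; for finite chains that always holds."""
    rev = defaultdict(set)
    states = set(F)
    for u, v in edges:
        rev[v].add(u)
        states.update((u, v))
    seen = set(F)
    stack = list(F)
    while stack:                      # backward BFS/DFS from F
        for u in rev[stack.pop()]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return states - seen
```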

  3. Markov processes and controlled Markov chains

    CERN Document Server

    Filar, Jerzy; Chen, Anyue

    2002-01-01

    The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have been, for a long time, aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters that appear in this book reflect both the maturity and the vitality of modern day Markov processes and controlled Markov chains. They also will provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by the European, US, Central and South Ameri...

  4. A solution for automatic parallelization of sequential assembly code

    Directory of Open Access Journals (Sweden)

    Kovačević Đorđe

    2013-01-01

    Full Text Available Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for the parallelization of sequential assembly code. The main goal of this paper is to develop a parallelizer which reads sequential assembler code and at the output provides parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembler input file into program objects suitable for further processing. After that, static single assignment is performed. Based on the data-flow graph, the parallelization algorithm separates instructions onto the different cores. Once the sequential code has been parallelized by the parallelization algorithm, registers are allocated with the algorithm for linear allocation, and the end result is distributed assembler code on each of the cores. In the paper we evaluate the speedup of a matrix multiplication example processed by the parallelizer of assembly code. The result is an almost linear speedup of code execution, which increases with the number of cores. The speedup on two cores is 1.99, while on 16 cores it is 13.88.
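The step in which instructions are separated onto cores using the data-flow graph can be sketched as greedy scheduling of unit-cost instructions whose dependencies are satisfied. This simplified model (hypothetical names, no register allocation) also shows why the speedup is near linear until the critical path dominates.

```python
from collections import defaultdict, deque

def schedule(deps, n_cores):
    """Return the number of unit-time cycles needed to execute all
    instructions on n_cores, given deps: instruction -> list of
    instructions it depends on (the data-flow graph).  Each cycle,
    up to n_cores ready instructions execute in parallel."""
    indeg = defaultdict(int)
    succ = defaultdict(list)
    nodes = set(deps)
    for v, preds in deps.items():
        for u in preds:
            succ[u].append(v)
            indeg[v] += 1
            nodes.add(u)
    ready = deque(n for n in nodes if indeg[n] == 0)
    cycles = 0
    while ready:
        batch = [ready.popleft() for _ in range(min(len(ready), n_cores))]
        cycles += 1                   # one unit-time step per batch
        for u in batch:
            for v in succ[u]:
                indeg[v] -= 1
                if indeg[v] == 0:     # all inputs computed: now schedulable
                    ready.append(v)
    return cycles
```

For `c = a op b; d = f(c)`, two cores finish in 3 cycles (a and b in parallel, then c, then d) versus 4 serially, and extra cores cannot beat the critical-path length of 3.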

  5. Gushing metal chain

    Science.gov (United States)

    Belyaev, Alexander; Sukhanov, Alexander; Tsvetkov, Alexander

    2016-03-01

    This article addresses the problem in which a chain falls from a glass from some height. This phenomenon demonstrates a paradoxical rise of the chain over the glass. To explain this effect, an initial hypothesis and an appropriate theory are proposed for calculating the steady fall parameters of the chain. For this purpose, the modified Cayley's problem of falling chain given its rise due to the centrifugal force of upward inertia is solved. Results show that the lift caused by an increase in linear density at the part of chain where it is being bent (the upper part) is due to the convergence of the chain balls to one another. The experiments confirm the obtained estimates of the lifting chain.

  6. Comparison of the pharmacokinetics of a new 30 mg modified-release tablet formulation of metoclopramide for once-a-day administration versus 10 mg immediate-release tablets: a single and multiple-dose, randomized, open-label, parallel study in healthy male subjects.

    Science.gov (United States)

    Bernardo-Escudero, Roberto; Alonso-Campero, Rosalba; Francisco-Doce, María Teresa de Jesús; Cortés-Fuentes, Myriam; Villa-Vargas, Miriam; Angeles-Uribe, Juan

    2012-12-01

    The study aimed to assess the pharmacokinetics of a new, modified-release metoclopramide tablet and compare it to an immediate-release tablet. A single- and multiple-dose, randomized, open-label, parallel pharmacokinetic study was conducted. The investigational products were administered to 26 healthy Hispanic Mexican male volunteers for two consecutive days: either one 30 mg modified-release tablet every 24 h, or one 10 mg immediate-release tablet every 8 h. Blood samples were collected after the first and last doses of metoclopramide. Plasma metoclopramide concentrations were determined by high-performance liquid chromatography. Safety and tolerability were assessed through vital-sign measurements, clinical evaluations, and spontaneous reports from study subjects. All 26 subjects were included in the analyses [mean (SD) age: 27 (8) years, range 18-50; BMI: 23.65 (2.22) kg/m², range 18.01-27.47]. Peak plasma concentrations were not statistically different between the two formulations, but occurred significantly later (p < 0.05). One adverse event was reported in the test group (diarrhea) and one in the reference group (headache). This study suggests that the 30 mg modified-release metoclopramide tablet shows features compatible with slow-release formulations when compared to immediate-release tablets and is suitable for once-a-day administration.

  7. Collectively loading programs in a multiple program multiple data environment

    Science.gov (United States)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.

    2016-11-08

    Techniques are disclosed for loading programs efficiently in a parallel computing system. In one embodiment, nodes of the parallel computing system receive a load description file which indicates, for each program of a multiple program multiple data (MPMD) job, nodes which are to load the program. The nodes determine, using collective operations, a total number of programs to load and a number of programs to load in parallel. The nodes further generate a class route for each program to be loaded in parallel, where the class route generated for a particular program includes only those nodes on which the program needs to be loaded. For each class route, a node is selected using a collective operation to be a load leader which accesses a file system to load the program associated with a class route and broadcasts the program via the class route to other nodes which require the program.
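The loading scheme can be simulated in a few lines: nodes are grouped per program into class routes, the lowest rank in each route is elected load leader, only leaders touch the file system, and each image is broadcast along its route. This is a plain-Python illustration of the idea, not the patented implementation; all names are hypothetical.

```python
def collective_load(node_programs, read_file):
    """Simulate the MPMD collective load.  node_programs maps node rank ->
    program name; read_file(program) returns the program image.  Returns
    the per-node images, the elected leader per program, and how many
    file-system reads were needed (one per class route)."""
    routes = {}
    for rank, prog in node_programs.items():          # build class routes
        routes.setdefault(prog, []).append(rank)
    images, leaders, reads = {}, {}, 0
    for prog, ranks in routes.items():
        leaders[prog] = min(ranks)                    # leader election by lowest rank
        image = read_file(prog)                       # only the leader reads the file
        reads += 1
        for r in ranks:                               # broadcast along the route
            images[r] = image
    return images, leaders, reads
```

With four nodes running two programs, only two reads hit the file system instead of four, which is the point of electing a leader per class route.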

  8. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, small self-weight/load ratio, good dynamic behavior and easy control; hence its range of application keeps extending. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematics solution and the limits of link lengths has been introduced. This paper analyses the position workspace and orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the lengths of the branches of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but changes its position.
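The boundary-searching idea, classifying a pose as reachable when the inverse solution keeps every leg length within its limits, can be sketched for a planar analogue. The two-anchor mechanism below is hypothetical; the six-DOF platform in the paper is treated the same way with six legs.

```python
from math import hypot

def in_workspace(p, anchors, lmin, lmax):
    """A point is reachable iff the inverse solution (here simply the
    distance to each fixed anchor) gives every leg a length within its
    stroke limits [lmin, lmax]."""
    return all(lmin <= hypot(p[0] - a[0], p[1] - a[1]) <= lmax
               for a in anchors)

def workspace_area(anchors, lmin, lmax, xr, yr, step=0.1):
    """Brute-force grid search over the region xr x yr; the count of
    reachable cells approximates the position-workspace area."""
    count = 0
    x = xr[0]
    while x <= xr[1]:
        y = yr[0]
        while y <= yr[1]:
            count += in_workspace((x, y), anchors, lmin, lmax)
            y += step
        x += step
    return count * step * step
```

Increasing lmax enlarges the counted area, consistent with the observation above that branch lengths are the main means of changing the workspace size.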

  9. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
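The combination rules behind the activity are simple enough to state in code; a minimal Python sketch of series and parallel equivalent resistances:

```python
# Resistances add in series; conductances (1/R) add in parallel.

def series(*rs):
    return sum(rs)

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

# Two 100-ohm resistors: 200 ohm in series, 50 ohm in parallel.
r_series = series(100.0, 100.0)
r_parallel = parallel(100.0, 100.0)
# Mixed network: (100 + 100) in parallel with 200.
r_mixed = parallel(series(100.0, 100.0), 200.0)
```

The straw analogy in the article maps directly onto these formulas: narrow stirring straws are high resistances, and bundling straws side by side adds their conductances.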

  10. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  11. Event monitoring of parallel computations

    Directory of Open Access Journals (Sweden)

    Gruzlikov Alexander M.

    2015-06-01

    Full Text Available The paper considers the monitoring of parallel computations for detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences

  12. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  13. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  14. SPRINT: A new parallel framework for R

    Directory of Open Access Journals (Sweden)

    Scharinger Florian

    2008-12-01

    Full Text Available Abstract Background Microarray analysis allows the simultaneous measurement of thousands to millions of genes or sequences across tens to thousands of different samples. The analysis of the resulting data tests the limits of existing bioinformatics computing infrastructure. A solution to this issue is to use High Performance Computing (HPC systems, which contain many processors and more memory than desktop computer systems. Many biostatisticians use R to process the data gleaned from microarray analysis and there is even a dedicated group of packages, Bioconductor, for this purpose. However, to exploit HPC systems, R must be able to utilise the multiple processors available on these systems. There are existing modules that enable R to use multiple processors, but these are either difficult to use for the HPC novice or cannot be used to solve certain classes of problems. A method of exploiting HPC systems, using R, but without recourse to mastering parallel programming paradigms is therefore necessary to analyse genomic data to its fullest. Results We have designed and built a prototype framework that allows the addition of parallelised functions to R to enable the easy exploitation of HPC systems. The Simple Parallel R INTerface (SPRINT is a wrapper around such parallelised functions. Their use requires very little modification to existing sequential R scripts and no expertise in parallel computing. As an example we created a function that carries out the computation of a pairwise calculated correlation matrix. This performs well with SPRINT. When executed using SPRINT on an HPC resource of eight processors this computation reduces by more than three times the time R takes to complete it on one processor. Conclusion SPRINT allows the biostatistician to concentrate on the research problems rather than the computation, while still allowing exploitation of HPC systems. 
It is easy to use and with further development will become more useful as more
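The row-parallel correlation computation that SPRINT wraps can be sketched in plain Python. This is an illustrative stand-in only (threads instead of SPRINT's MPI backend, and a hand-rolled Pearson coefficient), not SPRINT's implementation: each worker computes one row of the pairwise correlation matrix.

```python
# Sketch of the parallelization pattern: split the rows of a pairwise
# correlation matrix across workers so the calling script barely changes.
from concurrent.futures import ThreadPoolExecutor
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def corr_row(i, data):
    return [pearson(data[i], data[j]) for j in range(len(data))]

def corr_matrix(data, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda i: corr_row(i, data), range(len(data))))

data = [[1.0, 2.0, 3.0, 4.0],
        [2.0, 4.0, 6.0, 8.0],
        [4.0, 3.0, 2.0, 1.0]]
C = corr_matrix(data)
```

On genomic data the matrix has thousands of rows, which is where distributing rows over HPC processors pays off.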

  15. Massively parallel sparse matrix function calculations with NTPoly

    Science.gov (United States)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
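The polynomial-expansion idea the library builds on can be illustrated with a toy Taylor expansion of the matrix exponential. This NumPy sketch is a deliberately simplified stand-in (dense matrices, naive truncation), not NTPoly's algorithm: the point is that f(A) reduces to matrix multiplications, which parallelize well and preserve sparsity for well-localized A.

```python
# Evaluate f(A) as a polynomial in A using only matrix products.
import math
import numpy as np

def matfunc_poly(A, coeffs):
    """Evaluate sum_k coeffs[k] * A^k by Horner's rule."""
    n = A.shape[0]
    F = np.zeros_like(A)
    for c in reversed(coeffs):
        F = A @ F + c * np.eye(n)
    return F

def expm_taylor(A, order=20):
    """Truncated Taylor series for exp(A); fine for small ||A||."""
    coeffs = [1.0 / math.factorial(k) for k in range(order + 1)]
    return matfunc_poly(A, coeffs)

# For A = [[0,1],[1,0]], exp(A) = cosh(1)*I + sinh(1)*A exactly.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
E = expm_taylor(A)
```

Production codes replace the naive Taylor series with better-conditioned expansions, but the computational kernel, repeated (sparse) matrix multiplication, is the same.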

  16. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  17. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  18. A dynamic bead-based microarray for parallel DNA detection

    International Nuclear Information System (INIS)

    Sochol, R D; Lin, L; Casavant, B P; Dueck, M E; Lee, L P

    2011-01-01

    A microfluidic system has been designed and constructed by means of micromachining processes to integrate both microfluidic mixing of mobile microbeads and hydrodynamic microbead arraying capabilities on a single chip to simultaneously detect multiple bio-molecules. The prototype system has four parallel reaction chambers, which include microchannels of 18 × 50 µm² cross-sectional area and a microfluidic mixing section of 22 cm length. Parallel detection of multiple DNA oligonucleotide sequences was achieved via molecular beacon probes immobilized on polystyrene microbeads of 16 µm diameter. Experimental results show quantitative detection of three distinct DNA oligonucleotide sequences from the Hepatitis C viral (HCV) genome with single base-pair mismatch specificity. Our dynamic bead-based microarray offers an effective microfluidic platform to increase parallelization of reactions and improve microbead handling for various biological applications, including bio-molecule detection, medical diagnostics and drug screening.

  19. Parallel universes may be more than sci-fi daydreams

    CERN Document Server

    2007-01-01

    Is the universe -- correction: "our" universe -- no more than a speck of cosmic dust amid an infinite number of parallel worlds? A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians, cosmologists, and other scientists.

  20. Induction heating using induction coils in series-parallel circuits

    Science.gov (United States)

    Matsen, Marc Rollo; Geren, William Preston; Miller, Robert James; Negley, Mark Alan; Dykstra, William Chet

    2017-11-14

    A part is inductively heated by multiple, self-regulating induction coil circuits having susceptors, coupled together in parallel and in series with an AC power supply. Each of the circuits includes a tuning capacitor that tunes the circuit to resonate at the frequency of the AC power supply.
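The tuning described in the abstract follows from the standard LC resonance relation f = 1/(2π√(LC)), so the capacitor for a coil of inductance L and supply frequency f is C = 1/((2πf)²L). A small illustrative Python sketch (component values invented):

```python
import math

def tuning_capacitance(L_henry, f_hertz):
    """Capacitance that makes an L-C circuit resonate at f_hertz."""
    return 1.0 / ((2 * math.pi * f_hertz) ** 2 * L_henry)

def resonant_frequency(L_henry, C_farad):
    """f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L_henry * C_farad))

# Example: a 10 uH coil tuned to a 100 kHz supply.
C = tuning_capacitance(10e-6, 100e3)
```

At resonance the circuit presents maximum current to the coil; off resonance (e.g. when the susceptor heats and its properties shift) the coupling self-limits, which is the self-regulating behavior the patent relies on.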

  1. Parallel graded attention in reading: A pupillometric study

    NARCIS (Netherlands)

    Snell, Joshua; Mathot, Sebastiaan; Mirault, Jonathan; Grainger, Jonathan

    2018-01-01

    There are roughly two lines of theory to account for recent evidence that word processing is influenced by adjacent orthographic information. One line assumes that multiple words can be processed simultaneously through a parallel graded distribution of visuo-spatial attention. The other line assumes

  2. A Parallel Algebraic Multigrid Solver on Graphics Processing Units

    KAUST Repository

    Haase, Gundolf; Liebmann, Manfred; Douglas, Craig C.; Plank, Gernot

    2010-01-01

    -vector multiplication scheme underlying the PCG-AMG algorithm is presented for the many-core GPU architecture. A performance comparison of the parallel solver shows that a single Nvidia Tesla C1060 GPU board delivers the performance of a sixteen-node Infiniband cluster

  3. Parallel Task Processing on a Multicore Platform in a PC-based Control System for Parallel Kinematics

    Directory of Open Access Journals (Sweden)

    Harald Michalik

    2009-02-01

    Full Text Available Multicore platforms have one physical processor chip with multiple cores interconnected via a chip-level bus. Because they deliver greater computing power through concurrency and offer greater system density, multicore platforms are well qualified to address the performance bottleneck encountered in PC-based control systems for parallel kinematic robots with heavy CPU load. Heavy-load control tasks are generated by new control approaches that include features such as singularity prediction, structure control algorithms, vision data integration and similar tasks. In this paper we introduce the parallel task scheduling extension of a communication architecture specially tailored for the development of PC-based control of parallel kinematics. The scheduling is specially designed for processing on a multicore platform. It breaks down the serial task processing of the robot control cycle and extends it with parallel task processing paths in order to enhance the overall control performance.

  4. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  5. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSC library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSC during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSC framework.
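The Newton-Krylov layering described above can be sketched on a toy problem. This Python/NumPy sketch uses plain conjugate gradients as the inner Krylov solver, touching the Jacobian only through matrix-vector products, and omits the Schwarz preconditioning entirely; the problem and sizes are invented for illustration.

```python
import numpy as np

def cg(matvec, rhs, tol=1e-12, iters=200):
    """Conjugate gradients: accesses the operator only via matvec products."""
    x = np.zeros_like(rhs)
    r = rhs - matvec(x)
    p = r.copy()
    rs = r @ r
    if np.sqrt(rs) < tol:            # rhs already (numerically) zero
        return x
    for _ in range(iters):
        Ap = matvec(p)
        a = rs / (p @ Ap)
        x = x + a * p
        r = r - a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy nonlinear system F(x) = A x + 0.1 x^3 - b with an SPD Jacobian.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
F = lambda x: A @ x + 0.1 * x**3 - b         # true nonlinear residual
J = lambda x: A + np.diag(0.3 * x**2)        # Jacobian of F

x = np.zeros(2)
for _ in range(20):                          # outer Newton iteration
    r = -F(x)
    if np.linalg.norm(r) < 1e-12:
        break
    x = x + cg(lambda v: J(x) @ v, r)        # inner Krylov solve
```

In the real solvers the matvec is a distributed sparse product and the inner iteration is preconditioned by overlapping Schwarz subdomain solves, but the outer/inner structure is the one shown.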

  6. Editorial: Supply Chain Management

    Directory of Open Access Journals (Sweden)

    Dimitrios Aidonis

    2017-05-01

    Full Text Available This special issue follows the 3rd Olympus International Conference on Supply Chains, held at the Athens Metropolitan Expo, November 7 & 8, 2015, Greece. The Conference was organized by the Department of Logistics, Technological Educational Institute of Central Macedonia, in collaboration with: (a) the Laboratory of Quantitative Analysis, Logistics and Supply Chain Management of the Department of Mechanical Engineering, Aristotle University of Thessaloniki (AUTH); (b) the Greek Association of Supply Chain Management (EEL) of Northern Greece; and (c) the Supply Chain & Logistics Journal. During the two-day Conference more than 60 research papers were presented, covering the following thematic areas: (i) Transportation, (ii) Best Practices in Logistics, (iii) Information and Communication Technologies in Supply Chain Management, (iv) Food Logistics, (v) New Trends in Business Logistics, and (vi) Green Supply Chain Management. Three invited keynote speakers addressed issues concerning operational research and the opportunities and prospects of Greek ports, and chaired round tables with other Greek and foreign scientists and specialists.

  7. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... Log in or Register to get access to full text downloads. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  8. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both coding and decoding time and the effectiveness of parallelization.

  9. Supply Chain Management og Supply Chain costing

    DEFF Research Database (Denmark)

    Nielsen, Steen; Mortensen, Ole

    2002-01-01

    The purpose of this article is to shed light on the opportunities that lie in integrating a company's management accounting with the concept of Supply Chain Management (SCM). This is pursued first by describing the theoretical framework in which SCM belongs. The concept of Supply Chain Costing (SCC) is then analysed as...... Århus. One result is that the concept of Supply Chain Costing makes it possible to measure the activities of the logistics chain in monetary terms. The use of this information is also of strategic importance for choosing customers and suppliers. The integration also creates entirely new...

  10. Supply chain components

    OpenAIRE

    Vieraşu, T.; Bălăşescu, M.

    2011-01-01

    In this article I will go through three main logistics components, which are represented by transportation, inventory and facilities, and the three secondary logistical components: information, production location and price, and how they determine the performance of any supply chain. I will then discuss how these components are used in the design, planning and operation of a supply chain. I will also talk about some obstacles a supply chain manager may encounter.

  11. Supply chain components

    Directory of Open Access Journals (Sweden)

    Vieraşu, T.

    2011-01-01

    Full Text Available In this article I will go through three main logistics components, which are represented by transportation, inventory and facilities, and the three secondary logistical components: information, production location and price, and how they determine the performance of any supply chain. I will then discuss how these components are used in the design, planning and operation of a supply chain. I will also talk about some obstacles a supply chain manager may encounter.

  12. Economy, market and chain

    OpenAIRE

    Sukkel, W.; Hommes, M.

    2009-01-01

    In their pursuit of growth and professionalisation, the Dutch organic sector focuses primarily on market development. But how do you stimulate the market for organic foods? This is the subject of many research projects concerning the market, consumer preferences and the supply chain. These projects focus specifically on consumer purchasing behaviour, product development, supply chain formation and minimising cost price. As a rule, this research takes place in close cooperation with chain actors.

  13. Crossing of identical solitary waves in a chain of elastic beads

    International Nuclear Information System (INIS)

    Manciu, Marian; Sen, Surajit; Hurd, Alan J.

    2001-01-01

    We consider a chain of elastic beads subjected to vanishingly weak loading conditions, i.e., the beads are barely in contact. The grains repel upon contact via the Hertz-type potential, V ∝ δⁿ, n > 2, where δ ≥ 0 is the grain-grain overlap. Our dynamical simulations build on several earlier studies by Nesterenko, Coste, and Sen and co-workers that have shown that an impulse propagates as a solitary wave of fixed spatial extent (dependent only upon n) through a chain of Hertzian beads and demonstrate, to our knowledge for the first time, that colliding solitary waves in the chain spawn a well-defined hierarchy of multiple secondary solitary waves, which carry ∼0.5% of the energy of the original solitary waves. Our findings have interesting parallels with earlier observations by Rosenau and colleagues [P. Rosenau and J. M. Hyman, Phys. Rev. Lett. 70, 564 (1993); P. Rosenau, ibid. 73, 1737 (1994); Phys. Lett. A 211, 265 (1996)] regarding colliding compactons. To the best of our knowledge, there is no formal theory that describes the dynamics associated with the formation of secondary solitary waves. Calculations suggest that the formation of secondary solitary waves may be a fundamental property of certain discrete systems.
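The kind of simulation the abstract describes can be sketched minimally. This Python sketch integrates a 1D Hertzian chain (n = 5/2, so the contact force is F = k·δ^(3/2)) with velocity Verlet, using invented units and parameters rather than the authors' values; it only illustrates the setup, not their collision analysis.

```python
# Minimal 1D chain of identical unit-mass beads, barely in contact,
# with Hertz repulsion acting only when neighbors overlap.

def hertz_forces(x, D=1.0, k=1.0):
    f = [0.0] * len(x)
    for i in range(len(x) - 1):
        overlap = D - (x[i + 1] - x[i])
        if overlap > 0.0:              # grains repel only on contact
            fmag = k * overlap ** 1.5  # F = k * delta^(3/2) for n = 5/2
            f[i] -= fmag               # left bead pushed left
            f[i + 1] += fmag           # right bead pushed right
    return f

def run(n=20, dt=1e-3, steps=5000, v0=0.1):
    x = [float(i) for i in range(n)]   # beads barely touching, diameter 1
    v = [0.0] * n
    v[0] = v0                          # impulse delivered to the first bead
    f = hertz_forces(x)
    for _ in range(steps):             # velocity Verlet
        x = [xi + vi * dt + 0.5 * fi * dt * dt
             for xi, vi, fi in zip(x, v, f)]
        fnew = hertz_forces(x)
        v = [vi + 0.5 * (fi + fni) * dt
             for vi, fi, fni in zip(v, f, fnew)]
        f = fnew
    return x, v

x, v = run()
```

Because the contact forces are equal and opposite, total momentum is conserved to rounding error, which makes a convenient sanity check on any such integrator.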

  14. Supply chain planning classification

    Science.gov (United States)

    Hvolby, Hans-Henrik; Trienekens, Jacques; Bonde, Hans

    2001-10-01

    Industry experiences a need to shift focus from internal production planning towards planning in the supply network. In this respect customer-oriented thinking becomes almost a common good amongst companies in the supply network. An increase in the use of information technology is needed to enable companies to better tune their production planning with customers and suppliers. Information technology opportunities and supply chain planning systems facilitate companies to monitor and control their supplier network. In spite of these developments, most links in today's supply chains make individual plans, because the real demand information is not available throughout the chain. The current systems and processes of the supply chains are not designed to meet the requirements now placed upon them. For long-term relationships with suppliers and customers, an integrated decision-making process is needed in order to obtain a satisfactory result for all parties, especially when customized production and short lead times are in focus. An effective value chain makes inventory available and visible among the value chain members, minimizes response time and optimizes total inventory value held throughout the chain. In this paper a supply chain planning classification grid is presented based on current manufacturing classifications and supply chain planning initiatives.

  15. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  16. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  17. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
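The Davidon-Fletcher-Powell method under discussion can be stated compactly. Below is a serial NumPy sketch with a simple backtracking line search; the components a transputer implementation would distribute are precisely the gradient evaluations and the rank-two inverse-Hessian update in the loop.

```python
import numpy as np

def dfp(f, grad, x0, iters=50, tol=1e-8):
    """Davidon-Fletcher-Powell quasi-Newton minimization (serial sketch)."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                 # inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                     # quasi-Newton search direction
        t = 1.0                        # backtracking (Armijo) line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        Hy = H @ y                     # DFP rank-two update of H
        H = H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
        x, g = x_new, g_new
    return x

# Minimize the convex quadratic f(x) = 0.5 x^T A x - b^T x (minimum A^{-1} b).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
xstar = dfp(lambda x: 0.5 * x @ A @ x - b @ x,
            lambda x: A @ x - b,
            [0.0, 0.0])
```

For a convex quadratic the curvature condition s·y > 0 holds automatically, so the update keeps H positive definite; general problems need a line search that enforces it.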

  18. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  19. Dark Solitons in FPU Lattice Chain

    Science.gov (United States)

    Wang, Deng-Long; Yang, Ru-Shu; Yang, You-Tian

    2007-11-01

    Based on multiple scales method, we study the nonlinear properties of a new Fermi-Pasta-Ulam lattice model analytically. It is found that the lattice chain exhibits a novel nonlinear elementary excitation, i.e. a dark soliton. Moreover, the modulation depth of dark soliton is increasing as the anharmonic parameter increases.

  20. Dark Solitons in FPU Lattice Chain

    International Nuclear Information System (INIS)

    Wang Denglong; Yang Youtian; Yang Rushu

    2007-01-01

    Based on multiple scales method, we study the nonlinear properties of a new Fermi-Pasta-Ulam lattice model analytically. It is found that the lattice chain exhibits a novel nonlinear elementary excitation, i.e. a dark soliton. Moreover, the modulation depth of dark soliton is increasing as the anharmonic parameter increases.

  1. Age- and Activity-Related Differences in the Abundance of Myosin Essential and Regulatory Light Chains in Human Muscle

    Directory of Open Access Journals (Sweden)

    James N. Cobley

    2016-04-01

    Full Text Available Traditional methods for phenotyping skeletal muscle (e.g., immunohistochemistry) are labor-intensive and ill-suited to multiplex analysis, i.e., assays must be performed in series. Addressing these concerns represents a largely unmet research need but more comprehensive parallel analysis of myofibrillar proteins could advance knowledge regarding age- and activity-dependent changes in human muscle. We report a label-free, semi-automated and time-efficient LC-MS proteomic workflow for phenotyping the myofibrillar proteome. Application of this workflow in old and young as well as trained and untrained human skeletal muscle yielded several novel observations that were subsequently verified by multiple reaction monitoring (MRM). We report novel data demonstrating that human ageing is associated with lesser myosin light chain 1 content and greater myosin light chain 3 content, consistent with an age-related reduction in type II muscle fibers. We also disambiguate conflicting data regarding myosin regulatory light chain, revealing that age-related changes in this protein more closely reflect physical activity status than ageing per se. This finding reinforces the need to control for physical activity levels when investigating the natural process of ageing. Taken together, our data confirm and extend knowledge regarding age- and activity-related phenotypes. In addition, the MRM transitions described here provide a methodological platform that can be fine-tuned to suit multiple research needs and thus advance myofibrillar phenotyping.

  2. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction, termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  3. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3)-time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirement is reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
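    To make the mapping task concrete, the underlying contiguous chain-partitioning problem (assign m ordered modules to n processors so the bottleneck load is minimized) can be sketched with a generic dynamic program. This is a textbook formulation, not Nicol's improved algorithm:

    ```python
    def map_modules(loads, n_procs):
        """Assign m ordered modules to n_procs processors as contiguous blocks,
        minimizing the maximum per-processor load (the pipeline bottleneck)."""
        m = len(loads)
        prefix = [0]
        for w in loads:
            prefix.append(prefix[-1] + w)
        INF = float("inf")
        # best[j][k] = minimal bottleneck for the first j modules on k processors
        best = [[INF] * (n_procs + 1) for _ in range(m + 1)]
        best[0][0] = 0
        for j in range(1, m + 1):
            for k in range(1, n_procs + 1):
                for i in range(k - 1, j):
                    seg = prefix[j] - prefix[i]  # load of modules i..j-1
                    best[j][k] = min(best[j][k], max(best[i][k - 1], seg))
        return best[m][n_procs]
    ```

    For example, `map_modules([2, 3, 1, 4, 2], 2)` returns 6 (split `[2, 3, 1] | [4, 2]`); the triple loop costs O(nm^2) time, exactly the kind of complexity the paper's improvements attack.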

  4. MEDUSA - An overset grid flow solver for network-based parallel computer systems

    Science.gov (United States)

    Smith, Merritt H.; Pallis, Jani M.

    1993-01-01

    Continuing improvement in processing speed has made it feasible to solve the Reynolds-Averaged Navier-Stokes equations for simple three-dimensional flows on advanced workstations. Combining multiple workstations into a network-based heterogeneous parallel computer allows the application of programming principles learned on MIMD (Multiple Instruction Multiple Data) distributed memory parallel computers to the solution of larger problems. An overset-grid flow solution code has been developed which uses a cluster of workstations as a network-based parallel computer. Inter-process communication is provided by the Parallel Virtual Machine (PVM) software. Solution speed equivalent to one-third of a Cray-YMP processor has been achieved from a cluster of nine commonly used engineering workstation processors. Load imbalance and communication overhead are the principal impediments to parallel efficiency in this application.

  5. Modelling and parallel calculation of a kinetic boundary layer

    International Nuclear Information System (INIS)

    Perlat, Jean Philippe

    1998-01-01

    This research thesis aims at addressing reliability and cost issues in the calculation by numerical simulation of flows in the transition regime. The first step has been to reduce the calculation cost and memory space of the Monte Carlo method, which is known to provide performance and reliability for rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instructions, multiple data) machine has been used which implements parallel calculation at different levels of parallelization. Parallelization procedures have been adapted, and results showed that parallelization by calculation domain decomposition was far more efficient. Due to reliability issues related to the statistical nature of Monte Carlo methods, a new deterministic model was necessary to simulate gas molecules in the transition regime. New models and hyperbolic systems have therefore been studied. One is chosen which allows the thermodynamic values (density, average velocity, temperature, deformation tensor, heat flow) present in the Navier-Stokes equations to be determined, and the equations of evolution of these thermodynamic values are described for the mono-atomic case. The numerical resolution of this system is reported. A kinetic scheme is developed which complies with the structure of all such systems, and which naturally expresses boundary conditions. The validation of the obtained 14-moment model is performed on shock problems and on Couette flows

  6. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  7. Properties of Confined Star-Branched and Linear Chains. A Monte Carlo Simulation Study

    International Nuclear Information System (INIS)

    Romiszowski, P.; Sikorski, A.

    2004-01-01

    A model of linear and star-branched polymer chains confined between two parallel and impenetrable surfaces was built. The polymer chains were restricted to a simple cubic lattice. Two macromolecular architectures of the chain were studied: linear and star-branched (consisting of f = 3 branches of equal length). The excluded volume was the only potential introduced into the model (the athermal system). Monte Carlo simulations were carried out using a sampling algorithm based on local changes of the chain's conformation. The simulations were carried out at different confinement conditions: from light to high compression of the chains. The scaling of the chain size with the chain length was studied and discussed. The influence of the confinement and the macromolecular architecture on the shape of a chain was studied. The differences in the shape of linear and star-branched chains were pointed out. (author)
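    As an illustration (a minimal sketch under stated assumptions, not the authors' sampling algorithm), a single local Monte Carlo move for an athermal chain on a simple cubic lattice confined between impenetrable walls at z = 0 and z = D might look like:

    ```python
    import random

    # The six nearest-neighbour displacements on a simple cubic lattice.
    MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    def try_end_move(chain, D, rng=random):
        """Attempt to relocate the last bead to a random lattice neighbour of
        the next-to-last bead. Accept only if the target site is empty
        (excluded volume) and lies between the walls (0 <= z <= D)."""
        pivot = chain[-2]
        dx, dy, dz = rng.choice(MOVES)
        new = (pivot[0] + dx, pivot[1] + dy, pivot[2] + dz)
        occupied = set(chain[:-1])
        if new in occupied or not (0 <= new[2] <= D):
            return False  # move rejected, chain unchanged
        chain[-1] = new
        return True
    ```

    Repeated over all beads (with analogous crankshaft or kink moves for interior beads), such accepted/rejected local moves sample chain conformations under confinement.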

  8. Supply Chain Management

    DEFF Research Database (Denmark)

    Wieland, Andreas; Handfield, Robert B.

    Supply chain management has made great strides in becoming a discipline with a standalone body of theories. As part of this evolution, researchers have sought to embed and integrate observed supply chain management phenomena into theoretical statements. In our review, we explore where we have been...

  9. Critical Chain Exercises

    Science.gov (United States)

    Doyle, John Kevin

    2010-01-01

    Critical Chains project management focuses on holding buffers at the project level vs. task level, and managing buffers as a project resource. A number of studies have shown that Critical Chain project management can significantly improve organizational schedule fidelity (i.e., improve the proportion of projects delivered on time) and reduce…

  10. Value Chain Engineering

    DEFF Research Database (Denmark)

    Wæhrens, Brian Vejrum; Slepniov, Dmitrij

    2015-01-01

    This workbook is recommended for the attention of students of and managers in Danish small and medium sized enterprises (SMEs). Danish SMEs are currently facing a number of key challenges related to their position in global value chains. This book provides an insight into value chain management t...

  11. Fields From Markov Chains

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2005-01-01

    A simple construction of two-dimensional (2-D) fields is presented. Rows and columns are outcomes of the same Markov chain. The entropy can be calculated explicitly.
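    For a concrete sense of how an entropy comes directly out of a transition matrix, the entropy rate of a first-order Markov chain is H = -sum_i pi_i sum_j P_ij log2 P_ij, with pi the stationary distribution. This is the standard formula, not the paper's 2-D field construction:

    ```python
    import math

    def entropy_rate(P, iters=1000):
        """Entropy rate (bits/symbol) of a Markov chain with transition
        matrix P; the stationary distribution pi is found by power iteration."""
        n = len(P)
        pi = [1.0 / n] * n
        for _ in range(iters):
            pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        return -sum(pi[i] * P[i][j] * math.log2(P[i][j])
                    for i in range(n) for j in range(n) if P[i][j] > 0)
    ```

    For the symmetric chain `[[0.9, 0.1], [0.1, 0.9]]` this gives about 0.469 bits per symbol.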

  12. Supply chain management in health services : an overview

    NARCIS (Netherlands)

    de Vries, J.; Huijsman, R.

    2011-01-01

    Purpose - This paper seeks to concentrate on the question whether any parallels can be found between the industrial sector and health care services with respect to the developments that have taken place in the area of Supply Chain Management. Starting from an analysis of existing literature, it is

  13. The How and Why of Interactive Markov Chains

    NARCIS (Netherlands)

    Hermanns, H.; Katoen, Joost P.; de Boer, F.S; Bonsangue, S.H.; Leuschel, M

    2010-01-01

    This paper reviews the model of interactive Markov chains (IMCs, for short), an extension of labelled transition systems with exponentially delayed transitions. We show that IMCs are closed under parallel composition and hiding, and show how IMCs can be compositionally aggregated prior to analysis

  14. A birth-death process suggested by a chain sequence

    NARCIS (Netherlands)

    Lenin, R.B.; Parthasarathy, P.R.

    2000-01-01

    We consider a birth-death process whose birth and death rates are suggested by a chain sequence. We use an elegant transformation to find the transition probabilities in a simple closed form. We also find an explicit expression for time-dependent mean. We find parallel results in discrete time.

  15. Parallelizing AT with MatlabMPI

    International Nuclear Information System (INIS)

    2011-01-01

    The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems dealing with computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations, rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, which were developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, which set up the necessary pre-requisites for multithread processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we were able to demonstrate highly efficient speed increases per processor in AT's beam-tracking functions. Extrapolating from these predictions, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the downfalls of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while
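    The efficiency and speed-increase figures above are connected by the usual definitions (speedup S = T_serial/T_parallel, efficiency E = S/p). A tiny sketch with hypothetical timings, not measurements from the report:

    ```python
    def speedup_and_efficiency(t_serial, t_parallel, n_procs):
        """Return (speedup, efficiency): e.g. a ~3.8x speedup on 4 cores
        corresponds to the ~95% per-processor efficiency quoted above."""
        speedup = t_serial / t_parallel
        return speedup, speedup / n_procs

    # Hypothetical timings: 400 s serial run vs 105 s on a quad-core CPU.
    s, e = speedup_and_efficiency(400.0, 105.0, 4)
    ```

    Here `s` is about 3.81 (a 381% speed increase) and `e` about 0.95, matching the relationship between the two numbers reported.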

  16. MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program

    Science.gov (United States)

    Danehkar, Ashkbiz; Nowak, Michael A.; Lee, Julia C.; Smith, Randall K.

    2018-02-01

    We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code that is used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in the C++ language by using the Message Passing Interface (MPI) protocol, a conventional standard of parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR, in terms of the parallelization speedup and the computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution; however, the efficiency in terms of computing resource usage decreases as the number of processors used in the parallel computing increases.

  17. Design, analysis and control of cable-suspended parallel robots and its applications

    CERN Document Server

    Zi, Bin

    2017-01-01

    This book provides an essential overview of the authors’ work in the field of cable-suspended parallel robots, focusing on innovative design, mechanics, control, development and applications. It presents and analyzes several typical mechanical architectures of cable-suspended parallel robots in practical applications, including the feed cable-suspended structure for super antennae, hybrid-driven-based cable-suspended parallel robots, and cooperative cable parallel manipulators for multiple mobile cranes. It also addresses the fundamental mechanics of cable-suspended parallel robots on the basis of their typical applications, including the kinematics, dynamics and trajectory tracking control of the feed cable-suspended structure for super antennae. In addition it proposes a novel hybrid-driven-based cable-suspended parallel robot that uses integrated mechanism design methods to improve the performance of traditional cable-suspended parallel robots. A comparative study on error and performance indices of hybr...

  18. Integrated supply chain risk management

    OpenAIRE

    Riaan Bredell; Jackie Walters

    2007-01-01

    Integrated supply chain risk management (ISCRM) has become indispensable to the theory and practice of supply chain management. The economic and political realities of the modern world require not only a different approach to supply chain management, but also bold steps to secure supply chain performance and sustainable wealth creation. Integrated supply chain risk management provides supply chain organisations with a level of insight into their supply chains yet to be achieved. If correctly ...

  19. Supply Chain Connectivity: Enhancing Participation in the Global Supply Chain

    OpenAIRE

    Patalinghug, Epictetus E.

    2015-01-01

    Supply chain connectivity is vital for the efficient flow of trade among APEC economies. This paper reviews the literature and supply chain management, describes the barriers to enhancing participation in global supply chain, analyzes the various measures of supply chain performance, and suggests steps for the Philippines to fully reap the benefits of the global value chain.

  20. Rubus: A compiler for seamless and extensible parallelism

    Science.gov (United States)

    Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. Whereas, for a matrix multiplication benchmark the average execution speedup of 84 times has been

  1. History Matching in Parallel Computational Environments

    Energy Technology Data Exchange (ETDEWEB)

    Steven Bryant; Sanjay Srinivasan; Alvaro Barrera; Sharad Yadav

    2004-08-31

    In the probabilistic approach to history matching, the information from the dynamic data is merged with the prior geologic information in order to generate permeability models consistent with the observed dynamic data as well as the prior geology. The relationship between dynamic response data and reservoir attributes may vary in different regions of the reservoir due to spatial variations in reservoir attributes, fluid properties, well configuration, flow constraints on wells, etc. This implies the probabilistic approach should then update different regions of the reservoir in different ways. This necessitates delineation of multiple reservoir domains in order to increase the accuracy of the approach. The research focuses on a probabilistic approach to integrate dynamic data that ensures consistency between reservoir models developed from one stage to the next. The algorithm relies on efficient parameterization of the dynamic data integration problem and permits rapid assessment of the updated reservoir model at each stage. The report also outlines various domain decomposition schemes from the perspective of increasing the accuracy of the probabilistic approach to history matching. Research progress in three important areas of the project is discussed: (1) validation and testing of the probabilistic approach to incorporating production data in reservoir models; (2) development of a robust scheme for identifying reservoir regions that will result in a more robust parameterization of the history matching process; (3) testing commercial simulators for parallel capability and development of a parallel algorithm for history matching.

  2. A Parallel Algebraic Multigrid Solver on Graphics Processing Units

    KAUST Repository

    Haase, Gundolf

    2010-01-01

    The paper presents a multi-GPU implementation of the preconditioned conjugate gradient algorithm with an algebraic multigrid preconditioner (PCG-AMG) for an elliptic model problem on a 3D unstructured grid. An efficient parallel sparse matrix-vector multiplication scheme underlying the PCG-AMG algorithm is presented for the many-core GPU architecture. A performance comparison of the parallel solver shows that a single Nvidia Tesla C1060 GPU board delivers the performance of a sixteen node Infiniband cluster and a multi-GPU configuration with eight GPUs is about 100 times faster than a typical server CPU core. © 2010 Springer-Verlag.
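    The performance of such solvers hinges on the sparse matrix-vector product, whose rows are mutually independent. A minimal serial CSR sketch (illustrative, not the paper's GPU kernel) makes the parallel structure visible:

    ```python
    def csr_matvec(data, indices, indptr, x):
        """y = A @ x for a matrix stored in compressed sparse row (CSR) form.
        Each iteration of the outer loop touches only its own row, so rows
        can be mapped one-per-thread on a GPU or partitioned across ranks
        without any synchronization."""
        y = []
        for row in range(len(indptr) - 1):
            acc = 0.0
            for k in range(indptr[row], indptr[row + 1]):
                acc += data[k] * x[indices[k]]
            y.append(acc)
        return y
    ```

    For the 2x2 matrix [[2, 0], [1, 3]] stored as `data=[2, 1, 3]`, `indices=[0, 0, 1]`, `indptr=[0, 1, 3]`, multiplying by `x=[1, 1]` yields `[2.0, 4.0]`.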

  3. Supply Chain Simulation using Business Process Modeling in Service Oriented Architecture

    OpenAIRE

    Taejong Yoo

    2015-01-01

    For supply chain optimization, as a key determinant of strategic resources mobility along the value-added chain, simulation is widely used to test the impact on supply chain performance for the strategic level decisions, such as the number of plants, the modes of transport, or the relocation of warehouses. Traditionally, a single centralized model that encompasses multiple participants in the supply chain is built when optimization of the supply chain through simulation is required. However, ...

  4. Memory Retrieval Given Two Independent Cues: Cue Selection or Parallel Access?

    Science.gov (United States)

    Rickard, Timothy C.; Bajic, Daniel

    2004-01-01

    A basic but unresolved issue in the study of memory retrieval is whether multiple independent cues can be used concurrently (i.e., in parallel) to recall a single, common response. A number of empirical results, as well as potentially applicable theories, suggest that retrieval can proceed in parallel, though Rickard (1997) set forth a model that…

  5. Parallel manipulators with two end-effectors : Getting a grip on Jacobian-based stiffness analysis

    NARCIS (Netherlands)

    Hoevenaars, A.G.L.

    2016-01-01

    Robots that are developed for applications which require a high stiffness-over-inertia ratio, such as pick-and-place robots, machining robots, or haptic devices, are often based on parallel manipulators. Parallel manipulators connect an end-effector to an inertial base using multiple serial

  6. Multi-criteria decision making approaches for green supply chains

    NARCIS (Netherlands)

    Banasik, Aleksander; Bloemhof-Ruwaard, Jacqueline M.; Kanellopoulos, Argyris; Claassen, G.D.H.; Vorst, van der Jack G.A.J.

    2016-01-01

    Designing Green Supply Chains (GSCs) requires complex decision-support models that can deal with multiple dimensions of sustainability while taking into account specific characteristics of products and their supply chain. Multi-Criteria Decision Making (MCDM) approaches can be used to quantify

  7. Changing governance arrangements: NTFP value chains in the Congo Basin

    NARCIS (Netherlands)

    Ingram, V.J.

    2017-01-01

    As forest products from Cameroon and DR Congo are commercialised, a value chain is created from harvesters, processors, and retailers to consumers worldwide. In contrast to dominant narratives focusing on regulations and customs, these chains are actually governed by dynamic, multiple

  8. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these classes can be used...

  9. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne

    2015-01-01

    for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...

  10. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  11. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x^2 - y^2 shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.

  13. Parallel plate transmission line transformer

    NARCIS (Netherlands)

    Voeten, S.J.; Brussaard, G.J.H.; Pemen, A.J.M.

    2011-01-01

    A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the

  14. Matpar: Parallel Extensions for MATLAB

    Science.gov (United States)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  15. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

  16. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  17. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  18. CS-Studio Scan System Parallelization

    Energy Technology Data Exchange (ETDEWEB)

    Kasemir, Kay [ORNL; Pearson, Matthew R [ORNL

    2015-01-01

    For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.
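    A sketch of the idea of adjusting several PVs concurrently; the `write_pv` callable and PV names used below are hypothetical placeholders, not the CS-Studio Scan System API:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def adjust_parallel(targets, write_pv):
        """Start all PV writes concurrently and wait for every one to finish
        before the scan proceeds to its next step. `targets` is a list of
        (pv_name, value) pairs; `write_pv` is a blocking write callable
        (a hypothetical stand-in for the control-system client API)."""
        with ThreadPoolExecutor(max_workers=len(targets)) as pool:
            futures = [pool.submit(write_pv, pv, value) for pv, value in targets]
            return [f.result() for f in futures]  # re-raises any write error
    ```

    Collecting every `result()` gives the scan the same "all setpoints reached" barrier semantics as a sequential adjustment, while the writes themselves overlap in time.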

  19. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  20. Multiple constant multiplication optimizations for field programmable gate arrays

    CERN Document Server

    Kumm, Martin

    2016-01-01

    This work covers field programmable gate array (FPGA)-specific optimizations of circuits computing the multiplication of a variable by several constants, commonly denoted as multiple constant multiplication (MCM). These optimizations focus on low resource usage but high performance. They comprise the use of fast carry-chains in adder-based constant multiplications including ternary (3-input) adders as well as the integration of look-up table-based constant multipliers and embedded multipliers to get the optimal mapping to modern FPGAs. The proposed methods can be used for the efficient implementation of digital filters, discrete transforms and many other circuits in the domain of digital signal processing, communication and image processing. Contents Heuristic and ILP-Based Optimal Solutions for the Pipelined Multiple Constant Multiplication Problem Methods to Integrate Embedded Multipliers, LUT-Based Constant Multipliers and Ternary (3-Input) Adders An Optimized Multiple Constant Multiplication Architecture ...
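    The core idea behind MCM can be sketched in a few lines: each constant multiplication is decomposed into shifts and additions, and intermediate results are shared between several constants, exactly what the fast carry-chains of an FPGA implement cheaply. The constants 23 and 81 and the decomposition below are illustrative choices, not taken from the book:

    ```python
    def mcm_23_81(x):
        """Multiply x by the constants 23 and 81 using only shifts and adds.

        The factor t = 3x is a shared subexpression (one adder); then
        23x = 8*(3x) - x and 81x = 27*(3x) reuse it.
        """
        t = (x << 1) + x               # t = 3x (shared subexpression)
        y23 = (t << 3) - x             # 24x - x  = 23x
        y81 = (t << 5) - (t << 2) - t  # 96x - 12x - 3x = 81x
        return y23, y81
    ```

    In hardware each line above corresponds to one adder/subtractor fed by hard-wired shifts, so sharing `t` saves an adder compared with decomposing 23 and 81 independently.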

  1. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated, and the parallel algorithm and the parallelization of programs on parallel computers, both with shared memory and with distributed memory, are discussed. By analyzing the inherent mathematical and physical structure of the photon transport model in light of the architecture of parallel computers, using a 'divide and conquer' strategy, adjusting the algorithm structure of the program, dissolving the data dependences, finding parallelizable ingredients and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various high-performance parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup was obtained
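    The 'divide and conquer' strategy can be illustrated with a toy photon-transmission tally: the sequential loop over photons is split into large-grain, independent subtasks, each with its own random stream, whose partial tallies are summed at the end. This is a hypothetical sketch (a pure-absorption slab, none of the paper's code), and the Python thread pool only illustrates the decomposition; real speedup would come from a process pool or message passing:

    ```python
    import math
    import random
    from concurrent.futures import ThreadPoolExecutor

    def transmit_tally(n_photons, mu, thickness, seed):
        """Count photons whose sampled free path exceeds the slab thickness
        (pure absorption, no scattering). Each subtask owns its own RNG,
        so subtasks are independent and could run on separate processors."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_photons):
            if rng.expovariate(mu) > thickness:  # free path ~ Exp(mu)
                hits += 1
        return hits

    def transmission(n_total, mu, thickness, n_tasks=4):
        """Split the tally into n_tasks large-grain subtasks and combine."""
        per_task = n_total // n_tasks
        with ThreadPoolExecutor(max_workers=n_tasks) as pool:
            tallies = pool.map(transmit_tally, [per_task] * n_tasks,
                               [mu] * n_tasks, [thickness] * n_tasks,
                               range(n_tasks))
            return sum(tallies) / (per_task * n_tasks)

    # For mu = 1 and thickness = 1 the exact answer is exp(-1), about 0.368.
    est = transmission(40000, 1.0, 1.0)
    ```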

  2. Understanding the supply chain

    Directory of Open Access Journals (Sweden)

    Aćimović Slobodan

    2006-01-01

    Full Text Available Supply chain management represents a new business philosophy with a strategically positioned and much wider scope of activity than its "older brother", logistics management. The supply chain concept is directed at greater coordination of the key business functions of every link in the distribution chain when organizing the flows of goods and information, while logistics management instruments focus on the internal optimization of the flows of goods and information within a single company. Applying the concept of an integrated supply chain across several companies makes operative logistics activity even more important at the level of an individual company, advancing optimization and coordination processes within and between companies and confirming the importance of logistics performance for company profitability. Besides erasing the borders between companies, the supply chain concept in a distribution channel reduces the importance of functional, i.e. traditional, management approaches and instead points out the importance of process-oriented management approaches. Although the author is aware that "there is nothing harder, more dangerous and with more uncertain success, than to find a way of introducing some novelties" (Machiavelli), this is additional stimulation to bring closer the concept and goals of supply chain implementation as identified in key, relevant, modern theoretical and consulting approaches, in order to achieve better understanding of the subject and faster implementation of the concept of supply chain management by domestic companies.

  3. Plastic value chains

    DEFF Research Database (Denmark)

    Baxter, John; Wahlstrom, Margareta; Zu Castell-Rüdenhausen, Malin

    2014-01-01

    Optimizing plastic value chains is regarded as an important measure for increasing the recycling of plastics in an efficient way. This can also lead to improved awareness of the hazardous substances contained in plastic waste, and of how to avoid recycling these substances. As an example, plastics from WEEE are chosen as a Nordic case study. The project aims to propose a number of improvements for this value chain together with representatives from Nordic stakeholders. Based on the experiences made, a guide for other plastic value chains shall be developed.

  4. Project Decision Chain

    DEFF Research Database (Denmark)

    Rolstadås, Asbjørn; Pinto, Jeffrey K.; Falster, Peter

    2015-01-01

    To add value to project performance and help obtain project success, a new framework for decision making in projects is defined. It introduces the project decision chain, inspired by supply chain thinking in the manufacturing sector, and uses three types of decisions: authorization, selection, and plan decisions. A primitive decision element is defined in which all three decision types can be accommodated. Each task in the primitive element can itself contain subtasks that in turn comprise new primitive elements. The primitive elements are nested together in a project decision chain.

  5. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors has come to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks along with graphical processing units have empowered parallelism broadly. Compilers are being updated to face the resulting synchronization and threading challenges. Appropriate program and algorithm classification is of great advantage to software engineers seeking opportunities for effective parallelization. In the present work we investigated current species for the classification of algorithms; related work on classification is discussed along with a comparison of the issues that challenge classification. A set of algorithms was chosen that matches the structure of different issues and performs the given tasks. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented these new theories into the tool, enabling automatic characterization of program code.

  6. Multi-lane detection based on multiple vanishing points detection

    Science.gov (United States)

    Li, Chuanxiang; Nie, Yiming; Dai, Bin; Wu, Tao

    2015-03-01

    Lane detection plays a significant role in Advanced Driver Assistance Systems (ADAS) for intelligent vehicles. In this paper we present a multi-lane detection method based on the detection of multiple vanishing points. A new multi-lane model assumes that a single lane, which has two approximately parallel boundaries, may not be parallel to the other lanes on the road plane. Non-parallel lanes are associated with different vanishing points. A biologically plausible model is used to detect multiple vanishing points and fit the lane model. Experimental results show that the proposed method can detect both parallel and non-parallel lanes.

  7. Natural Hazards and Supply Chain Disruptions

    Science.gov (United States)

    Haraguchi, M.

    2016-12-01

    Natural hazards distress the global economy through disruptions in supply chain networks. Moreover, despite increasing investment in infrastructure for disaster risk management, the economic damages and losses caused by natural hazards are increasing. Manufacturing companies today have reduced inventories and streamlined logistics in order to maximize economic competitiveness. As a result, today's supply chains are profoundly susceptible to systemic risks, i.e. the risk of collapse of an entire network caused by the failure of a few nodes of the network. For instance, the prolonged floods in Thailand in 2011 caused supply chain disruptions in the country's primary industries, i.e. the electronics and automotive industries, harming not only the Thai economy but also the global economy. Similar problems occurred after the Great East Japan Earthquake and Tsunami in 2011, the Mississippi River floods and droughts during 2011-2013, and the earthquake in Kumamoto, Japan in 2016. This study attempts to discover what effective measures are available to private companies for managing supply chain disruptions caused by floods. It also proposes a method to estimate potential risks using a Bayesian network. The study uses a Bayesian network to create synthetic networks that include variables associated with the magnitude and duration of floods and with major components of supply chains, such as logistics, multiple layers of suppliers, warehouses, and consumer markets. Considering situations across different times, our study shows the desirable data requirements for the analysis and effective measures to improve Value at Risk (VaR) for private enterprises and supply chains.
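    As a toy illustration of the Bayesian-network idea (the variables and probabilities below are invented and far simpler than the study's synthetic networks), flood risk can be propagated to a delivery delay by enumerating over a hidden supplier state:

    ```python
    from itertools import product

    # Hypothetical three-node chain: Flood -> SupplierDown -> DeliveryDelay.
    p_flood = {1: 0.1, 0: 0.9}                                  # P(F)
    p_supplier = {1: {1: 0.7, 0: 0.3}, 0: {1: 0.05, 0: 0.95}}   # P(S | F)
    p_delay = {1: {1: 0.8, 0: 0.2}, 0: {1: 0.1, 0: 0.9}}        # P(D | S)

    def joint(f, s, d):
        """Chain-rule factorization of the joint probability."""
        return p_flood[f] * p_supplier[f][s] * p_delay[s][d]

    def p_delay_given_flood(f):
        """P(D=1 | F=f) by enumeration over the hidden supplier state."""
        num = sum(joint(f, s, 1) for s in (0, 1))
        den = sum(joint(f, s, d) for s, d in product((0, 1), repeat=2))
        return num / den
    ```

    With these numbers, a flood raises the delay probability from 0.135 to 0.59; a real model would add flood magnitude and duration, multiple supplier tiers, warehouses, and markets as further nodes.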

  8. Flexibility evaluation of multiechelon supply chains.

    Directory of Open Access Journals (Sweden)

    João Flávio de Freitas Almeida

    Full Text Available Multiechelon supply chains are complex logistics systems that require flexibility and coordination at a tactical level to cope with environmental uncertainties in an efficient and effective manner. To cope with these challenges, mathematical programming models are developed to evaluate supply chain flexibility. However, under uncertainty, supply chain models become complex and the scope of flexibility analysis is generally reduced. This paper presents a unified approach that can evaluate the flexibility of a four-echelon supply chain via a robust stochastic programming model. The model simultaneously considers the plans of multiple business divisions such as marketing, logistics, manufacturing, and procurement, whose goals are often conflicting. A numerical example with deterministic parameters is presented to introduce the analysis, and then, the model stochastic parameters are considered to evaluate flexibility. The results of the analysis on supply, manufacturing, and distribution flexibility are presented. Tradeoff analysis of demand variability and service levels is also carried out. The proposed approach facilitates the adoption of different management styles, thus improving supply chain resilience. The model can be extended to contexts pertaining to supply chain disruptions; for example, the model can be used to explore operation strategies when subtle events disrupt supply, manufacturing, or distribution.

  9. Flexibility evaluation of multiechelon supply chains.

    Science.gov (United States)

    Almeida, João Flávio de Freitas; Conceição, Samuel Vieira; Pinto, Luiz Ricardo; de Camargo, Ricardo Saraiva; Júnior, Gilberto de Miranda

    2018-01-01

    Multiechelon supply chains are complex logistics systems that require flexibility and coordination at a tactical level to cope with environmental uncertainties in an efficient and effective manner. To cope with these challenges, mathematical programming models are developed to evaluate supply chain flexibility. However, under uncertainty, supply chain models become complex and the scope of flexibility analysis is generally reduced. This paper presents a unified approach that can evaluate the flexibility of a four-echelon supply chain via a robust stochastic programming model. The model simultaneously considers the plans of multiple business divisions such as marketing, logistics, manufacturing, and procurement, whose goals are often conflicting. A numerical example with deterministic parameters is presented to introduce the analysis, and then, the model stochastic parameters are considered to evaluate flexibility. The results of the analysis on supply, manufacturing, and distribution flexibility are presented. Tradeoff analysis of demand variability and service levels is also carried out. The proposed approach facilitates the adoption of different management styles, thus improving supply chain resilience. The model can be extended to contexts pertaining to supply chain disruptions; for example, the model can be used to explore operation strategies when subtle events disrupt supply, manufacturing, or distribution.
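    A minimal scenario-based sketch of the kind of tradeoff such models evaluate (all numbers hypothetical): a first-stage capacity decision is scored by its expected cost over demand scenarios, so that slack capacity buys flexibility against shortfall penalties:

    ```python
    # Demand scenarios as (demand, probability) pairs, with invented unit costs.
    scenarios = [(80, 0.5), (120, 0.3), (150, 0.2)]
    capex, shortfall_penalty = 1.0, 4.0

    def expected_cost(capacity):
        """First-stage capacity cost plus expected second-stage shortfall cost."""
        recourse = sum(p * shortfall_penalty * max(0, d - capacity)
                       for d, p in scenarios)
        return capex * capacity + recourse

    # Evaluate a few candidate capacities; here 120 balances the tradeoff best.
    best = min((80, 100, 120, 150), key=expected_cost)
    ```

    A robust stochastic program generalizes this: the capacity choice spans several echelons, the recourse stage is itself an optimization, and the objective hedges against the worst scenarios rather than only the expectation.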

  10. Editorial: Supply Chain Management

    Directory of Open Access Journals (Sweden)

    Aidonis, D.

    2012-01-01

    Full Text Available This special issue has followed up the 2nd Olympus International Conference on Supply Chains held on October 5-6, 2012, in Katerini, Greece. The Conference was organized by the Department of Logistics of the Alexander Technological Educational Institution (ATEI) of Thessaloniki, in collaboration with the Laboratory of Quantitative Analysis, Logistics and Supply Chain Management of the Department of Mechanical Engineering, Aristotle University of Thessaloniki (AUTH). During the two-day Conference more than 50 research papers were presented, covering the following thematic areas: (i) Business Logistics, (ii) Transportation, Telematics and Distribution Networks, (iii) Green Logistics, (iv) Information and Communication Technologies in Supply Chain Management, and (v) Services and Quality. Three keynote invited speakers addressed issues of humanitarian logistics, green supply chains in the agrifood sector, and the opportunities and prospects of Greek ports, and chaired round tables with other Greek and foreign scientists and specialists.

  11. Characterizing Oregon's supply chains.

    Science.gov (United States)

    2013-03-01

    In many regions throughout the world, freight models are used to aid infrastructure investment and policy decisions. Since freight is such an integral part of efficient supply chains, more realistic transportation models can be of greater assista...

  12. Moldova - Value Chain Training

    Data.gov (United States)

    Millennium Challenge Corporation — The evaluation of the GHS value chain training subactivity was designed to measure the extent, if any, to which the training activities improved the productivity...

  13. Fluid dynamics parallel computer development at NASA Langley Research Center

    Science.gov (United States)

    Townsend, James C.; Zang, Thomas A.; Dwoyer, Douglas L.

    1987-01-01

    To accomplish more detailed simulations of highly complex flows, such as the transition to turbulence, fluid dynamics research requires computers much more powerful than any available today. Only parallel processing on multiple-processor computers offers hope for achieving the required effective speeds. Looking ahead to the use of these machines, the fluid dynamicist faces three issues: algorithm development for near-term parallel computers, architecture development for future computer power increases, and assessment of possible advantages of special purpose designs. Two projects at NASA Langley address these issues. Software development and algorithm exploration is being done on the FLEX/32 Parallel Processing Research Computer. New architecture features are being explored in the special purpose hardware design of the Navier-Stokes Computer. These projects are complementary and are producing promising results.

  14. Parallel and distributed processing in power system simulation and control

    Energy Technology Data Exchange (ETDEWEB)

    Falcao, Djalma M [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia

    1994-12-31

    Recent advances in computer technology will certainly have a great impact on the methodologies used in power system expansion and operational planning as well as in real-time control. Parallel and distributed processing are among the new technologies that present great potential for application in these areas. Parallel computers use multiple functional or processing units to speed up computation, while distributed processing computer systems are collections of computers joined together by high-speed communication networks, with many objectives and advantages. The paper presents some ideas for the use of parallel and distributed processing in power system simulation and control. It also comments on some of the current research work in these topics and presents a summary of the work presently being developed at COPPE. (author) 53 refs., 2 figs.

  15. Innovation Across the Supply Chain

    DEFF Research Database (Denmark)

    Druehl, Cheryl; Carrillo, Janice; Hsuan, Juliana

    Innovation is an integral part of every firm’s ongoing operations. Beyond product innovation, supply chain innovations offer a unique source of competitive advantage. We synthesize recent research on innovation in the supply chain, specifically, innovative supply chain processes...

  16. Supply chain risk management

    OpenAIRE

    Christian Hollstein; Frank Himpel

    2013-01-01

    Background: Supply chain risk management increasingly gains prominence in many international industries. In order to strengthen supply chain structures, processes, and networks, adequate potentials for risk management need to be built (focus on effective logistics) and utilized (focus on efficient logistics). Natural disasters, such as the case of Fukushima, illustrate how crucial risk management is. Method: By aligning a theoretical-conceptual framework with empirical-induct...

  17. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  18. GPU Parallel Bundle Block Adjustment

    Directory of Open Access Journals (Sweden)

    ZHENG Maoteng

    2017-09-01

    Full Text Available To deal with massive data in photogrammetry, we introduce the GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton method are also applied to decrease the iteration times while solving the normal equation. A brand new workflow of bundle adjustment is developed to utilize GPU parallel computing technology. Our method can avoid the storage and inversion of the big normal matrix, and compute the normal matrix in real time. The proposed method can not only largely decrease the memory requirement of normal matrix, but also largely improve the efficiency of bundle adjustment. It also achieves the same accuracy as the conventional method. Preliminary experiment results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 1.5 minutes while achieving sub-pixel accuracy.
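    The preconditioned conjugate gradient solver at the heart of such a pipeline can be sketched on a small dense system (a Jacobi preconditioner on a toy SPD matrix; the paper's GPU implementation of the normal equations is of course far more elaborate):

    ```python
    def pcg(A, b, tol=1e-10, max_iter=100):
        """Jacobi-preconditioned conjugate gradient for a dense SPD matrix
        given as lists of lists. The preconditioner is M = diag(A)."""
        n = len(b)
        matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
        x = [0.0] * n
        r = b[:]                                   # r = b - A*x with x = 0
        z = [r[i] / A[i][i] for i in range(n)]     # apply M^-1
        p = z[:]
        rz = dot(r, z)
        for _ in range(max_iter):
            Ap = matvec(p)
            alpha = rz / dot(p, Ap)
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            r = [ri - alpha * api for ri, api in zip(r, Ap)]
            if dot(r, r) ** 0.5 < tol:             # converged
                break
            z = [r[i] / A[i][i] for i in range(n)]
            rz_new = dot(r, z)
            p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
            rz = rz_new
        return x

    A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
    b = [1.0, 2.0, 3.0]
    x = pcg(A, b)
    ```

    The appeal for bundle adjustment is that the loop needs only matrix-vector products, which can be formed block-wise on the fly, so the full normal matrix never has to be stored or inverted.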

  19. A tandem parallel plate analyzer

    International Nuclear Information System (INIS)

    Hamada, Y.; Fujisawa, A.; Iguchi, H.; Nishizawa, A.; Kawasumi, Y.

    1996-11-01

    By a new modification of a parallel plate analyzer the second-order focus is obtained at an arbitrary injection angle. This kind of analyzer with a small injection angle will have the advantage of a small operational voltage, compared to the Proca and Green analyzer where the injection angle is 30 degrees. Thus, the newly proposed analyzer will be very useful for the precise energy measurement of high-energy particles in the MeV range. (author)

  20. High-speed parallel counter

    International Nuclear Information System (INIS)

    Gus'kov, B.N.; Kalinnikov, V.A.; Krastev, V.R.; Maksimov, A.N.; Nikityuk, N.M.

    1985-01-01

    This paper describes a high-speed parallel counter that contains 31 inputs and 15 outputs and is implemented by integrated circuits of series 500. The counter is designed for fast sampling of events according to the number of particles that pass simultaneously through the hodoscopic plane of the detector. The minimum delay of the output signals relative to the input is 43 nsec. The duration of the output signals can be varied from 75 to 120 nsec

  1. An anthropologist in parallel structure

    Directory of Open Access Journals (Sweden)

    Noelle Molé Liston

    2016-08-01

    Full Text Available The essay examines the parallels between Molé Liston's studies on labor and precarity in Italy and the United States' anthropology job market. Probing the way economic shifts reshaped the field of the anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value of studying the hardships and daily lives of non-western populations in Europe.

  2. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  3. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we use Landweber-Kaczmarz iteration, together with some additional sparsity constraints to improve the overall results.
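    The underlying Landweber step can be sketched on a toy linear least-squares problem (plain Landweber only; the Kaczmarz sweeps over coil equations and the sparsity constraints of the paper are omitted, and the data are invented):

    ```python
    def landweber(A, y, omega, n_iter=500):
        """Landweber iteration x_{k+1} = x_k + omega * A^T (y - A x_k).
        Converges to a least-squares solution for 0 < omega < 2 / ||A^T A||."""
        m, n = len(A), len(A[0])
        x = [0.0] * n
        for _ in range(n_iter):
            # residual r = y - A x
            r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
            # gradient step along A^T r
            x = [x[j] + omega * sum(A[i][j] * r[i] for i in range(m))
                 for j in range(n)]
        return x

    A = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
    y = [1.0, 4.0, 3.0]          # consistent with x = (1, 2)
    x = landweber(A, y, omega=0.2)
    ```

    Note that only products with A and A^T are needed, never derivatives of a nonlinear forward map, which is the cost advantage the abstract alludes to.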

  4. Wakefield calculations on parallel computers

    International Nuclear Information System (INIS)

    Schoessow, P.

    1990-01-01

    The use of parallelism in the solution of wakefield problems is illustrated for two different computer architectures (SIMD and MIMD). Results are given for finite difference codes which have been implemented on a Connection Machine and an Alliant FX/8 and which are used to compute wakefields in dielectric loaded structures. Benchmarks on code performance are presented for both cases. 4 refs., 3 figs., 2 tabs

  5. MASTERING SUPPLY CHAIN RISKS

    Directory of Open Access Journals (Sweden)

    Borut Jereb

    2012-11-01

    Full Text Available Risks in supply chains represent one of the major business issues today. Since every organization strives for success and uninterrupted operations, efficient supply chain risk management is crucial. During supply chain risk research at the Faculty of Logistics in Maribor (Slovenia) some key issues in the field were identified, the major one being the lack of instruments which can make risk management in an organization easier and more efficient. Consequently, a model which captures and describes risks in an organization and its supply chain was developed. It is in accordance with the general risk management and supply chain security standards, the ISO 31000 and ISO 28000 families. It also incorporates recent findings from the risk management field, especially from the viewpoint of segmenting the public. The model described in this paper focuses on the risks themselves by defining them along different key dimensions, so that risk management is simplified and can be undertaken in every supply chain and the organizations within it. Based on our model and consequent practical research in actual organizations, a freely accessible risk catalog has been assembled and published online from the risks that have been identified so far. This catalog can serve as a checklist and a starting point in supply chain risk management in organizations. It also draws experts from the field into a community, in order to assemble an ever-growing list of possible risks and to provide insight into the model and its value in practice.

  6. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  7. Parallel processing of genomics data

    Science.gov (United States)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, and the analysis of this enormous flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face these issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze the data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data, able to cope with the high dimensionality of the data while achieving good response times. The proposed system is able to find statistically significant biological markers that discriminate between classes of patients that respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
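    The chunk-and-reduce pattern behind such parallel preprocessing can be sketched on a toy statistic (GC content of a sequence; the function names and chunking scheme are illustrative, not the paper's pipeline):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def gc_counts(chunk):
        """Per-chunk preprocessing step: count G/C bases and total bases."""
        gc = sum(chunk.count(base) for base in "GC")
        return gc, len(chunk)

    def gc_content(sequence, n_chunks=4):
        """Split a long sequence into chunks, preprocess them in parallel,
        then reduce the partial tallies into a single statistic."""
        size = -(-len(sequence) // n_chunks)  # ceiling division
        chunks = [sequence[i:i + size] for i in range(0, len(sequence), size)]
        with ThreadPoolExecutor() as pool:
            partials = list(pool.map(gc_counts, chunks))
        gc = sum(g for g, _ in partials)
        total = sum(n for _, n in partials)
        return gc / total

    frac = gc_content("ATGCGC" * 1000)
    ```

    The same map-then-reduce shape scales to heavier per-chunk work (alignment, variant calling, statistical tests), which is where parallelism actually pays off.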

  8. Multiple Perspectives / Multiple Readings

    Directory of Open Access Journals (Sweden)

    Simon Biggs

    2005-01-01

    Full Text Available People experience things from their own physical point of view. What they see is usually a function of where they are and what physical attitude they adopt relative to the subject. With augmented vision (periscopes, mirrors, remote cameras, etc.) we are able to see things from places where we are not present. With time-shifting technologies, such as the video recorder, we can also see things from the past; a time and a place we may never have visited. In recent artistic work I have been exploring the implications of digital technology, interactivity and internet connectivity that allow people not so much to space/time-shift their visual experience of things, but rather to see what happens when everybody is simultaneously able to see what everybody else can see. This is extrapolated through the remote networking of sites that are actual installation spaces, where the physical movements of viewers in the space generate multiple perspectives, linked to other similar sites at remote locations or to other viewers entering the shared data-space through a web-based version of the work. This text explores the processes involved in such a practice and reflects on related questions regarding the non-singularity of being and the sense of self as linked to time and place.

  9. Charge distribution in a two-chain dual model

    International Nuclear Information System (INIS)

    Fialkowski, K.; Kotanski, A.

    1983-01-01

    Charge distributions in multiple production processes are analysed using the dual chain model. A parametrisation of charge distributions for single dual chains based on the νp and anti-νp data is proposed. The rapidity charge distributions are then calculated for pp and anti-pp collisions and compared with previous calculations based on the recursive cascade model of single chains. The results differ at SPS collider energies and in the energy dependence of the net forward charge, supplying useful tests of the dual chain model. (orig.)

  10. Spike propagation in driven chain networks with dominant global inhibition

    International Nuclear Information System (INIS)

    Chang Wonil; Jin, Dezhe Z.

    2009-01-01

    Spike propagation in chain networks is usually studied in the synfire regime, in which successive groups of neurons are synaptically activated sequentially through the unidirectional excitatory connections. Here we study the dynamics of chain networks with dominant global feedback inhibition that prevents the synfire activity. Neural activity is driven by suprathreshold external inputs. We analytically and numerically demonstrate that spike propagation along the chain is a unique dynamical attractor in a wide parameter regime. The strong inhibition permits a robust winner-take-all propagation in the case of multiple chains competing via the inhibition.
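    A toy discrete-time version of this competition (all parameters invented, far cruder than the paper's spiking model) shows the winner-take-all effect: with global inhibition allowing only the most strongly driven group to fire each step, a pulse propagates down the more strongly driven chain only:

    ```python
    def simulate(drives, length=5, w=2.0, threshold=0.5, steps=12):
        """Chains of neuron groups compete under global winner-take-all
        inhibition: per time step, only the group with the largest
        suprathreshold input fires. Feedforward weight w; an external
        drive is applied to the first group of each chain."""
        fired = {(c, i): False for c in drives for i in range(length)}
        history = []
        for _ in range(steps):
            inputs = {}
            for c, drive in drives.items():
                for i in range(length):
                    inp = drive if i == 0 else 0.0
                    if i > 0 and fired[(c, i - 1)]:
                        inp += w          # excitation from predecessor
                    inputs[(c, i)] = inp
            winner = max(inputs, key=inputs.get)
            fired = {g: False for g in fired}   # global inhibition resets all
            if inputs[winner] > threshold:
                fired[winner] = True
            history.append(winner if fired[winner] else None)
        return history

    # Chain "A" is driven slightly harder than chain "B".
    history = simulate({"A": 1.0, "B": 0.9})
    ```

    In this run the pulse travels A0, A1, ..., A4 and restarts, while no group of chain "B" ever fires, the robust winner-take-all propagation the abstract describes.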

  11. Construction of a digital elevation model: methods and parallelization

    International Nuclear Information System (INIS)

    Mazzoni, Christophe

    1995-01-01

    The aim of this work is to reduce the computation time needed to produce Digital Elevation Models (DEM) by using a parallel machine. It was carried out in collaboration between the French 'Institut Geographique National' (IGN) and the Laboratoire d'Electronique de Technologie et d'Instrumentation (LETI) of the French Atomic Energy Commission (CEA). The IGN has developed a system which produces the DEMs used to make topographic maps. The kernel of this system is the correlator, a software component which automatically matches pairs of homologous points in a stereo pair of photographs. The correlator is, however, expensive in computing time. In order to reduce computation time while producing DEMs with the same accuracy as the current system, we parallelized the IGN correlator on the OPENVISION system. This hardware solution uses SYMPATI-2, a SIMD (Single Instruction, Multiple Data) parallel machine developed by the LETI, which is involved in parallel architectures and image processing. Our analysis of the implementation demonstrated the difficulty of efficiently coupling scalar and parallel structures, so we propose solutions to reinforce this coupling. To accelerate processing further, we evaluate SYMPHONIE, a SIMD computer that succeeds SYMPATI-2. We also developed a multi-agent approach for which a MIMD (Multiple Instruction, Multiple Data) architecture is suitable. Finally, we describe a multi-SIMD architecture that reconciles our two approaches. This architecture can efficiently handle multi-level image processing; it is flexible through its modularity, and its communication network supplies the reliability required by sensitive systems. (author) [fr
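    A minimal one-dimensional stand-in for the correlator's matching step (using a sum-of-squared-differences score rather than IGN's actual correlation criterion; the data and window size are invented): each point in the left image is matched to the best window position in the right image, and the offset between them is the disparity that elevation is derived from.

    ```python
    def best_match(left, right, center, half=2):
        """Find the position in `right` whose window best matches (minimum
        sum of squared differences) the window of `left` centred at `center`."""
        template = left[center - half:center + half + 1]

        def ssd(pos):
            window = right[pos - half:pos + half + 1]
            return sum((a - b) ** 2 for a, b in zip(template, window))

        candidates = range(half, len(right) - half)
        return min(candidates, key=ssd)

    left = [0, 1, 4, 9, 3, 7, 2, 8, 5, 6, 1, 0, 2]
    right = [5, 5] + left[:-2]       # right scanline shifted by a disparity of 2
    match = best_match(left, right, center=5)
    disparity = match - 5
    ```

    Because every point along a scanline is matched independently, this inner loop is exactly the kind of data-parallel work a SIMD machine like SYMPATI-2 is built for.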

  12. Evidence for parallel consolidation of motion direction and orientation into visual short-term memory.

    Science.gov (United States)

    Rideaux, Reuben; Apthorp, Deborah; Edwards, Mark

    2015-02-12

    Recent findings have indicated that the capacity to consolidate multiple items into visual short-term memory in parallel varies with the type of information. That is, while color can be consolidated in parallel, evidence suggests that orientation cannot. Here we investigated the capacity to consolidate multiple motion directions in parallel and reexamined this capacity for orientation. This was achieved by determining the shortest exposure duration necessary to consolidate a single item, then examining whether two items, presented simultaneously, could be consolidated in that time. The results show that parallel consolidation of direction and orientation information is possible, and that parallel consolidation of direction appears to be limited to two items. Additionally, we demonstrate the importance of adequate separation between the feature intervals used to define items when attempting to consolidate them in parallel, suggesting that when multiple items are consolidated in parallel, as opposed to serially, the resolution of their representations suffers. Finally, we used facilitation of spatial attention to show that this deterioration of item resolution occurs during parallel consolidation, as opposed to storage. © 2015 ARVO.

  13. Hydraulic Profiling of a Parallel Channel Type Reactor Core

    International Nuclear Information System (INIS)

    Seo, Kyong-Won; Hwang, Dae-Hyun; Lee, Chung-Chan

    2006-01-01

    An advanced reactor core consisting of closed multiple parallel channels was optimized to maximize the thermal margin of the core. Closed multiple parallel channel configurations behave differently from the open channels of conventional PWRs: the channels, usually assemblies, are hydraulically isolated from each other, and there is no cross flow between them. The distribution of inlet flow rate among the channels is a very important design parameter, because it directly determines the margin for a given thermal-hydraulic parameter, such as the boiling margin, maximum fuel temperature, or critical heat flux. The inlet flow distribution of the core was optimized for the boiling margins by grouping the inlet orifices into several hydraulic regions. This procedure is called hydraulic profiling.

  14. Magnetic ordering in arrays of one-dimensional nanoparticle chains

    International Nuclear Information System (INIS)

    Serantes, D; Baldomir, D; Pereiro, M; Hernando, B; Prida, V M; Sanchez Llamazares, J L; Zhukov, A; Ilyn, M; Gonzalez, J

    2009-01-01

    The magnetic order in parallel-aligned one-dimensional (1D) chains of magnetic nanoparticles is studied using a Monte Carlo technique. If the easy anisotropy axes are collinear along the chains, a macroscopic mean-field approach indicates antiferromagnetic (AFM) order even when no interparticle interactions are taken into account, which shows that a mean-field treatment is inadequate for the study of the magnetic order in these highly anisotropic systems. From the direct microscopic analysis of the evolution of the magnetic moments, we observe spontaneous intra-chain ferromagnetic (FM)-type and inter-chain AFM-type ordering at low temperatures (although not completely regular) for the collinear easy-axes case, whereas a random distribution of the anisotropy axes leads to a sort of intra-chain AFM arrangement with no regular inter-chain order. When the magnetic anisotropy is neglected, a perfectly regular intra-chain FM-like order is attained. It is therefore shown that the magnetic anisotropy, and particularly the spatial distribution of the easy axes, is a key parameter governing the type of magnetic ordering in 1D-nanoparticle chains.
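
    The intra-chain FM / inter-chain AFM result can be checked with a back-of-the-envelope calculation, under the simplifying (hypothetical) assumption of Ising moments pinned along collinear easy axes parallel to the chains: the point-dipole coupling is ferromagnetic for head-to-tail neighbours within a chain and antiferromagnetic for side-by-side neighbours across chains, which a direct energy comparison confirms. A plain Metropolis anneal over these couplings is sketched as well.

```python
import numpy as np

def couplings(pos):
    """Point-dipole couplings J[i, j] for Ising moments s = +/-1 pinned
    along x, so that E = 0.5 * s @ J @ s (units: mu^2 / a^3 = 1)."""
    n = len(pos)
    J = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[j] - pos[i]
                r = np.hypot(d[0], d[1])
                ux = d[0] / r                       # moments point along x
                J[i, j] = (1 - 3 * ux * ux) / r ** 3
    return J

def energy(s, J):
    return 0.5 * s @ J @ s

def anneal(J, steps=3000, seed=0):
    """Plain Metropolis annealing over single-moment flips."""
    rng = np.random.default_rng(seed)
    n = len(J)
    s = rng.choice([-1.0, 1.0], size=n)
    for step in range(steps):
        T = max(0.05, 2.0 * (1 - step / steps))     # simple linear schedule
        k = rng.integers(n)
        dE = -2.0 * s[k] * (J[k] @ s)               # J[k, k] is zero
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[k] *= -1
    return s
```

    For two chains of eight moments separated by one lattice spacing, the configuration that is FM within each chain and AFM between chains has lower dipolar energy than either the fully FM or the intra-chain AFM arrangement, matching the ordering reported above.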

  15. Duplex quantum communication through a spin chain

    Science.gov (United States)

    Wang, Zhao-Ming; Bishop, C. Allen; Gu, Yong-Jian; Shao, Bin

    2011-08-01

    Data multiplexing within a quantum computer can allow for the simultaneous transfer of multiple streams of information over a shared medium, thereby minimizing the number of channels needed for the requisite data transmission. Here, we investigate a two-way quantum communication protocol using a spin chain placed in an external magnetic field. In our scheme, Alice and Bob each play the role of both sender and receiver as two states, cos(θ1/2)|0⟩ + e^{iφ1} sin(θ1/2)|1⟩ and cos(θ2/2)|0⟩ + e^{iφ2} sin(θ2/2)|1⟩, are transferred through one channel simultaneously. We find that the transmission fidelity at each end of a spin chain can usually be enhanced by the presence of a second party. This is an important result for establishing the viability of duplex quantum communication through spin chain networks.
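
    The one-way ingredient of such protocols can be sketched by evolving a uniform XX chain in its single-excitation subspace and reading off the end-to-end transfer amplitude; the duplex (two-way) protocol above is more involved, so this is only the baseline calculation, using Bose's standard expression for the Bloch-sphere-averaged fidelity.

```python
import numpy as np

def transfer_amplitude(n, t, J=1.0):
    """|<n| exp(-iHt) |1>| for a uniform XX chain restricted to the
    single-excitation subspace (a standard simplification; a uniform
    magnetic field only contributes a global phase here, so it is omitted)."""
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = J        # nearest-neighbour hopping
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return abs(U[n - 1, 0])

def average_fidelity(f):
    """Bloch-sphere-averaged transfer fidelity for transfer amplitude f,
    F = 1/2 + f/3 + f^2/6 (Bose's formula)."""
    return 0.5 + f / 3.0 + f * f / 6.0
```

    For uniform couplings, perfect end-to-end transfer (amplitude 1, average fidelity 1) occurs for chains of length 2 at t = π/2J and length 3 at t = π/√2·J; longer uniform chains transfer only imperfectly.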

  16. Rough multiple objective decision making

    CERN Document Server

    Xu, Jiuping

    2011-01-01

    Rough Set Theory: Basic Concepts and Properties of Rough Sets; Rough Membership; Rough Intervals; Rough Functions; Applications of Rough Sets. Multiple Objective Rough Decision Making: Reverse Logistics Problem with Rough Interval Parameters; MODM-Based Rough Approximation for the Feasible Region; EVRM; CCRM; DCRM; Reverse Logistics Network Design Problem of the Suji Renewable Resource Market. Bilevel Multiple Objective Rough Decision Making: Hierarchical Supply Chain Planning Problem with Rough Interval Parameters; Bilevel Decision Making Model; BL-EVRM; BL-CCRM; BL-DCRM; Application to Supply Chain Planning of Mianyang Co., Ltd. Stochastic Multiple Objective Rough Decision Making: Multi-Objective Resource-Constrained Project Scheduling under a Rough Random Environment; Random Variables; Stochastic EVRM; Stochastic CCRM; Stochastic DCRM; Multi-Objective rc-PSP/mM/Ro-Ra for the Longtan Hydropower Station. Fuzzy Multiple Objective Rough Decision Making: Allocation Problem under a Fuzzy Environment; Fuzzy Variables; Fu-EVRM; Fu-CCRM; Fu-DCRM; Earth-Rock Work Allocation Problem.

  17. Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB

    Science.gov (United States)

    Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.

    2017-01-01

    Demonstrating speedup for parallel code on a multicore shared-memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit the potential for improving serial code even for so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed, such that commercial finite element (FE) and multi-body dynamics (MBD) codes attempt to minimize these computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches to implementing parallel code were investigated, but only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated, but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
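
    The column-wise decomposition that makes the numerical Jacobian embarrassingly parallel is easy to sketch outside MATLAB; the following Python thread-pool version (illustrative, not the article's spmd code) assigns one forward-difference column per task.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def jacobian_parallel(f, x, h=1e-6, workers=4):
    """Forward-difference Jacobian of f at x, one task per column.
    Each column perturbs one component of x, so the tasks are independent."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))               # baseline evaluation, reused by all

    def column(j):
        xp = x.copy()
        xp[j] += h                      # perturb the j-th variable only
        return (np.asarray(f(xp)) - fx) / h

    with ThreadPoolExecutor(max_workers=workers) as ex:
        cols = list(ex.map(column, range(len(x))))
    return np.column_stack(cols)
```

    The total cost is one baseline evaluation plus n perturbed evaluations of f, and since the n column tasks share no state, they map naturally onto spmd workers, threads, or processes.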

  18. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    OpenAIRE

    Orts-Escolano, Sergio; Garcia-Rodriguez, Jose; Morell, Vicente; Cazorla, Miguel; Azorin-Lopez, Jorge; García-Chamizo, Juan Manuel

    2014-01-01

    In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mob...

  19. Phasic Triplet Markov Chains.

    Science.gov (United States)

    El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar

    2014-11-01

    Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistically modeling phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched for within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site and carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The phasic triplet Markov chain proposed in this paper overcomes this difficulty by means of an auxiliary underlying process, in accordance with triplet Markov chain theory. Related Bayesian restoration techniques and parameter estimation procedures for the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data.
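
    The baseline that the phasic triplet model generalizes is Bayesian restoration in a plain hidden Markov chain. A minimal forward-backward sketch (with illustrative parameters) computes the posterior marginals used for maximum-posterior-mode restoration; the triplet construction would augment the hidden state with the auxiliary phase process.

```python
import numpy as np

def forward_backward(obs, A, pi, B):
    """Posterior marginals p(x_t | y_1..T) for a hidden Markov chain.
    A: transition matrix, pi: initial distribution, B: emission matrix
    (B[state, symbol]).  Normalized at each step for numerical stability."""
    T, n = len(obs), len(pi)
    alpha = np.zeros((T, n))
    beta = np.ones((T, n))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                           # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):                  # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)
```

    Taking the argmax of each row of the returned posterior gives the maximum-posterior-mode restoration of the hidden sequence; it is exactly this machinery that becomes inadequate when the data mix "word" and "arbitrary symbol" regimes, motivating the auxiliary process above.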

  20. Nodal-chain metals.

    Science.gov (United States)

    Bzdušek, Tomáš; Wu, QuanSheng; Rüegg, Andreas; Sigrist, Manfred; Soluyanov, Alexey A

    2016-10-06

    The band theory of solids is arguably the most successful theory of condensed-matter physics, providing a description of the electronic energy levels in various materials. Electronic wavefunctions obtained from band theory enable a topological characterization of metals for which the electronic spectrum may host robust, topologically protected, fermionic quasiparticles. Many of these quasiparticles are analogues of the elementary particles of the Standard Model, but others do not have a counterpart in relativistic high-energy theories. A complete list of possible quasiparticles in solids is lacking, even in the non-interacting case. Here we describe the possible existence of a hitherto unrecognized type of fermionic excitation in metals. This excitation forms a nodal chain, a chain of connected loops in momentum space, along which conduction and valence bands touch. We prove that the nodal chain is topologically distinct from previously reported excitations. We discuss the symmetry requirements for the appearance of this excitation and predict that it is realized in an existing material, iridium tetrafluoride (IrF4), as well as in other compounds of this class of materials. Using IrF4 as an example, we provide a discussion of the topological surface states associated with the nodal chain. We argue that the presence of the nodal-chain fermions will result in anomalous magnetotransport properties, distinct from those of materials exhibiting previously known excitations.