WorldWideScience

Sample records for two-level parallel direct

  1. A two-level parallel direct search implementation for arbitrarily sized objective functions

    Energy Technology Data Exchange (ETDEWEB)

Hutchinson, S.A.; Shadid, N.; Moffat, H.K. [Sandia National Labs., Albuquerque, NM (United States)] [and others]

    1994-12-31

In the past, many optimization schemes for massively parallel computers have attempted to achieve parallel efficiency using one of two methods. In the case of large and expensive objective function calculations, the optimization itself may be run in serial and the objective function calculations parallelized. In contrast, if the objective function calculations are relatively inexpensive and can be performed on a single processor, then the optimization routine itself may be parallelized. In this paper, a scheme based upon the Parallel Direct Search (PDS) technique is presented which allows the objective function calculations to be done on an arbitrarily large number (p₂) of processors. If p, the number of processors available, is greater than or equal to 2p₂, then the optimization may be parallelized as well. This allows for efficient use of computational resources, since the objective function calculations can be performed on the number of processors that allows peak parallel efficiency, and further speedup may then be achieved by parallelizing the optimization. Results are presented for an optimization problem in which the objective function calculation involves the solution of a PDE by a finite-element algorithm. The optimum number of processors for the finite-element calculations is less than p/2; thus, the PDS method is also parallelized. Performance comparisons are given for an nCUBE 2 implementation.
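    The two-level processor split described above can be sketched in a few lines (an illustrative toy, not the Sandia implementation; the function name and interface are hypothetical):

```python
def split_processors(p, p2):
    """Two-level split: p2 processors per objective evaluation.

    Returns the number of objective function evaluations that can run
    concurrently, and whether the PDS optimization layer itself can
    also be parallelized (which requires p >= 2*p2).
    """
    if p2 <= 0 or p < p2:
        raise ValueError("need at least p2 processors")
    concurrent_evals = p // p2          # groups of p2 processors each
    parallel_pds = p >= 2 * p2          # at least two simultaneous evaluations
    return concurrent_evals, parallel_pds

# With 16 processors and an objective that runs best on 4 processors,
# four evaluations proceed at once and PDS itself is parallelized.
print(split_processors(16, 4))
```

    With fewer than 2p₂ processors available, only one evaluation group fits and the optimization layer stays serial, matching the condition stated in the abstract.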

  2. A novel two-level dynamic parallel data scheme for large 3-D SN calculations

    International Nuclear Information System (INIS)

    Sjoden, G.E.; Shedlock, D.; Haghighat, A.; Yi, C.

    2005-01-01

    We introduce a new dynamic parallel memory optimization scheme for executing large scale 3-D discrete ordinates (Sn) simulations on distributed memory parallel computers. In order for parallel transport codes to be truly scalable, they must use parallel data storage, where only the variables that are locally computed are locally stored. Even with parallel data storage for the angular variables, cumulative storage requirements for large discrete ordinates calculations can be prohibitive. To address this problem, Memory Tuning has been implemented into the PENTRAN 3-D parallel discrete ordinates code as an optimized, two-level ('large' array, 'small' array) parallel data storage scheme. Memory Tuning can be described as the process of parallel data memory optimization. Memory Tuning dynamically minimizes the amount of required parallel data in allocated memory on each processor using a statistical sampling algorithm. This algorithm is based on the integral average and standard deviation of the number of fine meshes contained in each coarse mesh in the global problem. Because PENTRAN only stores the locally computed problem phase space, optimal two-level memory assignments can be unique on each node, depending upon the parallel decomposition used (hybrid combinations of angular, energy, or spatial). As demonstrated in the two large discrete ordinates models presented (a storage cask and an OECD MOX Benchmark), Memory Tuning can save a substantial amount of memory per parallel processor, allowing one to accomplish very large scale Sn computations. (authors)
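    The mean-plus-deviation sizing rule behind Memory Tuning might be sketched as follows (a hypothetical illustration of the statistical sampling idea, not PENTRAN's actual scheme or code):

```python
import statistics

def size_two_level(fine_counts, k=1.0):
    """Pick a 'large' array size from the distribution of fine meshes
    per coarse mesh: integral average plus k standard deviations.
    Coarse meshes that fit are stored in the preallocated 'large'
    arrays; the rest fall back to individually sized 'small' arrays."""
    mean = statistics.fmean(fine_counts)
    sd = statistics.pstdev(fine_counts)
    large_size = int(mean + k * sd)
    overflow = [c for c in fine_counts if c > large_size]
    return large_size, overflow

# One unusually refined coarse mesh should not inflate every allocation.
counts = [8, 8, 10, 12, 8, 9, 40]
print(size_two_level(counts))
```

    The point of the two-level split is that the single outlier mesh does not force every processor to allocate worst-case storage.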

  3. Cross-Circulating Current Suppression Method for Parallel Three-Phase Two-Level Inverters

    DEFF Research Database (Denmark)

    Wei, Baoze; Guerrero, Josep M.; Guo, Xiaoqiang

    2015-01-01

The parallel architecture is a popular way to increase the power level of inverters. This paper presents a method for the parallel operation of inverters in an ac-distributed system, to suppress the cross-circulating current based on virtual impedance without a current-sharing bus...

  4. Direct Power Control for Three-Phase Two-Level Voltage-Source Rectifiers Based on Extended-State Observation

    DEFF Research Database (Denmark)

    Song, Zhanfeng; Tian, Yanjun; Yan, Zhuo

    2016-01-01

This paper proposes a direct power control strategy for three-phase two-level voltage-source rectifiers based on extended-state observation. Active and reactive powers are directly regulated in the stationary reference frame. Similar to the family of predictive controllers whose inherent characte...

  5. Coherent effects on two-photon correlation and directional emission of two two-level atoms

    International Nuclear Information System (INIS)

    Ooi, C. H. Raymond; Kim, Byung-Gyu; Lee, Hai-Woong

    2007-01-01

Sub- and superradiant dynamics of spontaneously decaying atoms are manifestations of collective many-body systems. We study the internal dynamics and the radiation properties of two atoms in free space. Interesting results are obtained when the atoms are separated by less than half a wavelength of the atomic transition, where the dipole-dipole interaction gives rise to new coherent effects, such as (a) coherence between two intermediate collective states, (b) oscillations in the two-photon correlation G(2), (c) emission of two photons by one atom, and (d) the loss of directional correlation. We compare the population dynamics during the two-photon emission process with the dynamics of single-photon emission in the cases of a Λ and a V scheme. We compute the temporal correlation and angular correlation of two successively emitted photons using the G(2) for different values of atomic separation. We find antibunching when the atomic separation is a quarter wavelength λ/4. Oscillations in the temporal correlation provide a useful feature for measuring subwavelength atomic separation. Strong directional correlation between two emitted photons is found for atomic separation larger than a wavelength. We also compare the directionality of a photon spontaneously emitted by the two atoms prepared in the phased-symmetric and phased-antisymmetric entangled states |±⟩_{k₀} = e^{ik₀·r₁}|a₁,b₂⟩ ± e^{ik₀·r₂}|b₁,a₂⟩ by a laser pulse with wave vector k₀. Photon emission is directionally suppressed along k₀ for the phased-antisymmetric state. The directionality ceases for interatomic distances less than λ/2.

  6. Parallel sparse direct solver for integrated circuit simulation

    CERN Document Server

    Chen, Xiaoming; Yang, Huazhong

    2017-01-01

This book describes algorithmic methods and parallelization techniques to design a parallel sparse direct solver which is specifically targeted at integrated circuit simulation problems. The authors describe a complete flow and detailed parallel algorithms of the sparse direct solver. They also show how to improve the performance by simple but effective numerical techniques. The sparse direct solver techniques described can be applied to any SPICE-like integrated circuit simulator and have been proven to be high-performance in actual circuit simulation. Readers will benefit from the state-of-the-art parallel integrated circuit simulation techniques described in this book, especially the latest parallel sparse matrix solution techniques.
    · Introduces complicated algorithms of sparse linear solvers, using concise principles and simple examples, without complex theory or lengthy derivations;
    · Describes a parallel sparse direct solver that can be adopted to accelerate any SPICE-like integrated circuit simulato...

  7. Parallel alternating direction preconditioner for isogeometric simulations of explicit dynamics

    KAUST Repository

    Łoś, Marcin

    2015-04-27

In this paper we present a parallel implementation of the alternating direction preconditioner for isogeometric simulations of explicit dynamics. The Alternating Direction Implicit (ADI) algorithm, which belongs to the category of matrix-splitting iterative methods, was proposed almost six decades ago for solving parabolic and elliptic partial differential equations, see [1-4]. A new version of this algorithm has recently been developed for isogeometric simulations of two-dimensional explicit dynamics [5] and steady-state diffusion equations with orthotropic heterogeneous coefficients [6]. In this paper we present a parallel version of the alternating direction implicit algorithm for three-dimensional simulations. The algorithm has been incorporated as a part of PETIGA, an isogeometric framework [7] built on top of PETSc [8]. We show the scalability of the parallel algorithm on the STAMPEDE Linux cluster up to 10,000 processors, as well as the convergence rate of the PCG solver with the ADI algorithm as a preconditioner.
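    The alternating-direction structure can be illustrated with a minimal serial sketch of one ADI preconditioner application using 1D Laplacian line operators (an assumption-laden toy, far simpler than the parallel PETIGA/PETSc implementation):

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by the Thomas algorithm
    (forward elimination, then back substitution)."""
    n = len(diag)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = sup[0] / diag[0] if n > 1 else 0.0
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i - 1] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_apply(r, nx, ny, tau):
    """One ADI application: solve (I + tau*Ax) along every x-line,
    then (I + tau*Ay) along every y-line, with Ax, Ay 1D Laplacian
    stencils. r is a flat row-major grid of nx*ny values."""
    z = list(r)
    for j in range(ny):                       # x-direction line solves
        row = z[j * nx:(j + 1) * nx]
        z[j * nx:(j + 1) * nx] = thomas([-tau] * (nx - 1),
                                        [1 + 2 * tau] * nx,
                                        [-tau] * (nx - 1), row)
    u = list(z)
    for i in range(nx):                       # y-direction line solves
        col = [z[j * nx + i] for j in range(ny)]
        sol = thomas([-tau] * (ny - 1), [1 + 2 * tau] * ny,
                     [-tau] * (ny - 1), col)
        for j in range(ny):
            u[j * nx + i] = sol[j]
    return u
```

    Each sweep reduces to independent tridiagonal solves along one direction, which is precisely the structure that makes the method amenable to line-by-line parallelization.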

  8. Series-parallel method of direct solar array regulation

    Science.gov (United States)

    Gooder, S. T.

    1976-01-01

A 40 watt experimental solar array was directly regulated by shorting out appropriate combinations of series and parallel segments of a solar array. Regulation switches were employed to control the array at various set-point voltages between 25 and 40 volts. Regulation to within ±0.5 volt was obtained over a range of solar array temperatures and illumination levels as an active load was varied from open circuit to maximum available power. A fourfold reduction in regulation switch power dissipation was achieved with series-parallel regulation as compared to the usual series-only switching for direct solar array regulation.
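    The set-point selection logic can be sketched as a toy model that considers only the series dimension (the segment count, segment voltage, and set point below are hypothetical, not the experimental values):

```python
def choose_shorted(n_series, v_seg, setpoint):
    """Pick how many series segments to short so that the remaining
    array voltage lands as close as possible to the set point."""
    best = min(range(n_series + 1),
               key=lambda k: abs((n_series - k) * v_seg - setpoint))
    return best, (n_series - best) * v_seg

# 20 idealized segments of 2 V each; regulate near a 33 V set point.
print(choose_shorted(20, 2.0, 33.0))
```

    With equal-voltage segments the achievable voltages form a ladder of steps of size v_seg, which is why the experiment could only regulate to within a fraction of a volt.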

  9. Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows

    Science.gov (United States)

    Moitra, Stuti; Gatski, Thomas B.

    1997-01-01

    A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.

  10. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Blocksome, Michael A.; Mamidala, Amith R.

    2013-09-03

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
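    The FENCE semantics described, completion only after every previously initiated transfer completes, can be modeled with a toy ordered queue (a conceptual sketch, not the PAMI API):

```python
class DeterministicDMA:
    """Toy model: transfers complete strictly in initiation order,
    and a fence completes only once the queue ahead of it drains."""
    def __init__(self):
        self.queue = []       # DMA instructions initiated, not yet done
        self.completed = []   # transfers delivered, in order

    def put(self, transfer):
        self.queue.append(transfer)

    def fence(self):
        # Drain everything initiated before the fence, in order;
        # only then does the fence itself complete.
        while self.queue:
            self.completed.append(self.queue.pop(0))
        return "fence-complete"

dma = DeterministicDMA()
dma.put("A->B: payload 1")
dma.put("A->B: payload 2")
print(dma.fence(), dma.completed)
```

    The key property, mirrored from the abstract, is that no per-transfer FENCE accounting is needed: deterministic ordering alone guarantees that draining the queue implies all prior transfers are done.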

  11. Direct drive digital servo press with high parallel control

    Science.gov (United States)

    Murata, Chikara; Yabe, Jun; Endou, Junichi; Hasegawa, Kiyoshi

    2013-12-01

The direct drive digital servo press has been developed through university-industry joint research and development since 1998. On the basis of this result, a 4-axis direct drive digital servo press was developed and brought to market in April 2002. This servo press is composed of one slide supported by four ball screws, with a linear scale on each axis measuring its position to sub-micrometer accuracy. Each axis is controlled independently by a servo motor and feedback system. This system maintains a high level of parallelism and high accuracy even under a highly eccentric load. Furthermore, 'full stroke full power' is obtained by using ball screws. Using these features, various new types of press forming and stamping have been developed and put into production. The new stamping and forming methods are introduced, along with strategies for high-value-added press forming to meet manufacturing needs and the future direction of press forming.

  12. Direct numerical simulation of bubbles with parallelized adaptive mesh refinement

    International Nuclear Information System (INIS)

    Talpaert, A.

    2015-01-01

The study of two-phase thermal-hydraulics is a major topic in nuclear engineering, for both the safety and the efficiency of nuclear facilities. In addition to experiments, numerical modeling helps to determine precisely where bubbles appear and how they behave, in the core as well as in the steam generators. This work presents the finest scale of representation of two-phase flows, Direct Numerical Simulation of bubbles. We use the 'Di-phasic Low Mach Number' equation model. It is particularly suited to low-Mach-number flows, that is to say, flows whose velocity is much slower than the speed of sound; this is very typical of nuclear thermal-hydraulics conditions. Because we study bubbles, we capture the front between vapor and liquid phases thanks to a downward flux limiting numerical scheme. The specific discrete analysis technique this work introduces is well-balanced parallel Adaptive Mesh Refinement (AMR). With AMR, we refine the coarse grid on a batch of patches in order to locally increase precision in areas that matter more, and capture fine changes in the front location and its topology. We show that patch-based AMR is well suited to parallel computing. We use a variety of physical examples: forced advection, heat transfer, phase changes represented by a Stefan model, as well as the combination of all of these. We present the results of these numerical simulations, as well as the speedup compared to equivalent non-AMR simulations and to serial computation of the same problems. This document is made up of an abstract and the slides of the presentation. (author)
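    The patch-based refinement idea, flag cells where the solution changes sharply and group them into patches, can be sketched in 1D (a schematic illustration only; the work presented is 3D and parallel):

```python
def flag_cells(values, threshold):
    """Flag cells whose gradient magnitude exceeds the threshold,
    e.g. near a liquid-vapor front."""
    return [i for i in range(len(values) - 1)
            if abs(values[i + 1] - values[i]) > threshold]

def cluster_patches(flags):
    """Group contiguous flagged cells into patches for refinement."""
    patches = []
    for i in flags:
        if patches and i == patches[-1][1]:
            patches[-1] = (patches[-1][0], i + 1)   # extend current patch
        else:
            patches.append((i, i + 1))              # start a new patch
    return patches

# A sharp front in the middle of the field is captured by one patch.
field = [0.0, 0.0, 0.0, 0.1, 0.9, 1.0, 1.0]
print(cluster_patches(flag_cells(field, 0.05)))
```

    Patches like these are natural units of work to distribute across processors, which is one reason patch-based AMR maps well onto parallel computing.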

  13. Directions in parallel processor architecture, and GPUs too

    CERN Multimedia

    CERN. Geneva

    2014-01-01

Modern computing is power-limited in every domain of computing. Performance increments extracted from instruction-level parallelism (ILP) are no longer power-efficient; they haven't been for some time. Thread-level parallelism (TLP) is a more easily exploited form of parallelism, at the expense of programmer effort to expose it in the program. In this talk, I will introduce you to disparate topics in parallel processor architecture that will impact programming models (and you) in both the near and far future.

    About the speaker: Olivier is a senior GPU (SM) architect at NVIDIA and an active participant in the concurrency working group of the ISO C++ committee. He has also worked on very large diesel engines as a mechanical engineer, and taught at McGill University (Canada) as a faculty instructor.

  14. Parallel sparse direct solvers for Poisson's equation in streamer discharges

    NARCIS (Netherlands)

    M. Nool (Margreet); M. Genseberger (Menno); U. M. Ebert (Ute)

    2017-01-01

The aim of this paper is to examine whether a hybrid approach of parallel computing, a combination of the message passing model (MPI) with the threads model (OpenMP), can deliver good performance in streamer discharge simulations. Since one of the bottlenecks of almost all streamer...

  15. Parallel alternating direction preconditioner for isogeometric simulations of explicit dynamics

    KAUST Repository

    Łoś, Marcin; Woźniak, Maciej; Paszyński, Maciej; Dalcin, Lisandro; Calo, Victor M.

    2015-01-01

incorporated as a part of PETIGA, an isogeometric framework [7] built on top of PETSc [8]. We show the scalability of the parallel algorithm on the STAMPEDE Linux cluster up to 10,000 processors, as well as the convergence rate of the PCG solver...

  16. Data-parallel tomographic reconstruction : A comparison of filtered backprojection and direct Fourier reconstruction

    NARCIS (Netherlands)

Roerdink, J.B.T.M.; Westenberg, M.A.

    1998-01-01

    We consider the parallelization of two standard 2D reconstruction algorithms, filtered backprojection and direct Fourier reconstruction, using the data-parallel programming style. The algorithms are implemented on a Connection Machine CM-5 with 16 processors and a peak performance of 2 Gflop/s.

  17. Evidence for parallel consolidation of motion direction and orientation into visual short-term memory.

    Science.gov (United States)

    Rideaux, Reuben; Apthorp, Deborah; Edwards, Mark

    2015-02-12

Recent findings have indicated that the capacity to consolidate multiple items into visual short-term memory in parallel varies as a function of the type of information. That is, while color can be consolidated in parallel, evidence suggests that orientation cannot. Here we investigated the capacity to consolidate multiple motion directions in parallel and reexamined this capacity for orientation. This was achieved by determining the shortest exposure duration necessary to consolidate a single item, then examining whether two items, presented simultaneously, could be consolidated in that time. The results show that parallel consolidation of direction and orientation information is possible, and that parallel consolidation of direction appears to be limited to two items. Additionally, we demonstrate the importance of adequate separation between the feature intervals used to define items when attempting to consolidate in parallel, suggesting that when multiple items are consolidated in parallel, as opposed to serially, the resolution of the representations suffers. Finally, we used facilitation of spatial attention to show that the deterioration of item resolution occurs during parallel consolidation, as opposed to storage. © 2015 ARVO.

  18. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-07

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  19. Comparison of the deflated preconditioned conjugate gradient method and parallel direct solver for composite materials

    NARCIS (Netherlands)

    Jönsthövel, T.B.; Van Gijzen, M.B.; MacLachlan, S.; Vuik, C.; Scarpas, A.

    2011-01-01

    The demand for large FE meshes increases as parallel computing becomes the standard in FE simulations. Direct and iterative solution methods are used to solve the resulting linear systems. Many applications concern composite materials, which are characterized by large discontinuities in the material

  20. Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs

    Directory of Open Access Journals (Sweden)

    Vaughn Matthew

    2010-11-01

Abstract. Background: Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories based on the data structures they employ: the first class uses an overlap/string graph and the second uses a de Bruijn graph. However, with the recent advances in short-read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm was given for this problem, where n is the size of the input and p is the number of processors. That algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). Results: In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model, and in this case it has an optimal I/O complexity of Θ(n log(n/B)/(B log(M/B))) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster, both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. Conclusions: The bi-directed...

  1. Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs.

    Science.gov (United States)

    Kundeti, Vamsi K; Rajasekaran, Sanguthevar; Dinh, Hieu; Vaughn, Matthew; Thapar, Vishal

    2010-11-15

Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories based on the data structures they employ: the first class uses an overlap/string graph and the second uses a de Bruijn graph. However, with the recent advances in short-read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm was given for this problem. Here n is the size of the input and p is the number of processors. That algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model, and in this case it has an optimal I/O complexity of Θ(n log(n/B)/(B log(M/B))) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster, both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. The bi-directed de Bruijn graph is a fundamental data structure for...
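    The bi-directed construction, in which a sequence and its reverse complement map to the same node, can be sketched serially (a minimal toy; the paper's contribution is the parallel and out-of-core construction, not this naive loop):

```python
def revcomp(s):
    """Reverse complement of a DNA string."""
    comp = {"A": "T", "C": "G", "G": "C", "T": "A"}
    return "".join(comp[c] for c in reversed(s))

def canonical(s):
    # A sequence and its reverse complement share one node.
    return min(s, revcomp(s))

def build_bidirected(reads, k):
    """Edges connect the canonical (k-1)-mer prefix and suffix of
    every k-mer occurring in the reads."""
    edges = set()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges.add((canonical(kmer[:-1]), canonical(kmer[1:])))
    nodes = {n for e in edges for n in e}
    return nodes, edges

nodes, edges = build_bidirected(["ACGTAC"], 3)
print(sorted(nodes))
```

    Collapsing each k-mer with its reverse complement halves the node set and is what distinguishes the bi-directed graph from a plain de Bruijn graph.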

  2. Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm

    Science.gov (United States)

    Povitsky, A.

    1998-01-01

In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward-step computations immediately after the completion of the forward-step computations for the first portion of lines. This reformulation makes data available for other computational tasks while processors would otherwise be idle in the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of the global domain with subdomains. Computational experiments and the theoretical model show that the proposed algorithm reduces the parallelization penalty by about a factor of two compared with the basic algorithm, over the range of processor counts (subdomains) and grid nodes per subdomain considered.

  3. Direct kinematics solution architectures for industrial robot manipulators: Bit-serial versus parallel

    Science.gov (United States)

    Lee, J.; Kim, K.

    1991-01-01

A Very Large Scale Integration (VLSI) architecture for robot direct kinematic computation suitable for industrial robot manipulators was investigated. The Denavit-Hartenberg transformations are reviewed to exploit a proper processing element, namely an augmented CORDIC. Specifically, two distinct implementations are elaborated on: bit-serial and parallel. Performance of each scheme is analyzed with respect to the time to compute one location of the end-effector of a 6-link manipulator and the number of transistors required.
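    The augmented-CORDIC processing element evaluates the sines and cosines inside each Denavit-Hartenberg transform using shift-and-add rotations; a scalar sketch of the basic CORDIC iteration follows (illustrative only, not the VLSI design):

```python
import math

def cordic_sin_cos(theta, iters=32):
    """Rotate (1, 0) toward angle theta using only shifts and adds;
    valid for |theta| up to roughly 1.74 rad."""
    angle_table = [math.atan(2.0 ** -i) for i in range(iters)]
    gain = 1.0
    for i in range(iters):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iters):
        d = 1.0 if z >= 0.0 else -1.0       # rotate toward zero residual
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angle_table[i]
    return y * gain, x * gain               # (sin, cos), gain-corrected

s, c = cordic_sin_cos(0.5)
print(round(s, 6), round(c, 6))
```

    In hardware, the multiplications by 2^-i are wire shifts, which is why CORDIC suits both the bit-serial and the parallel implementations compared in the record.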

  5. Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube

    Science.gov (United States)

    Joslin, Ronald D.; Zubair, Mohammad

    1993-01-01

The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube are documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can be effectively parallelized on a distributed-memory parallel machine. By increasing the number of processors, nearly ideal linear speedups are achieved with nonoptimized routines; slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup results because the fast Fourier transform (FFT) routine dominates the computational cost and exhibits less than ideal speedups. However, with the machine-dependent routines the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of Cray supercomputer single-processor time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the computation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.

  6. Bi-directional series-parallel elastic actuator and overlap of the actuation layers.

    Science.gov (United States)

    Furnémont, Raphaël; Mathijssen, Glenn; Verstraten, Tom; Lefeber, Dirk; Vanderborght, Bram

    2016-01-27

Several robotics applications require actuators with a high torque-to-weight ratio and high energy efficiency. Progress in that direction was made by introducing compliant elements into the actuation. A large variety of actuators were developed, such as series elastic actuators (SEAs), variable stiffness actuators and parallel elastic actuators (PEAs). SEAs can reduce the peak power, while PEAs can reduce the torque requirement on the motor. Nonetheless, these actuators still cannot match human-like performance. To combine both advantages, the series-parallel elastic actuator (SPEA) was developed. The principle is inspired by biological muscles. Muscles are composed of motor units, placed in parallel, which are variably recruited as the required effort increases. This biological principle is exploited in the SPEA, where springs (layers), placed in parallel, can be recruited one by one. This recruitment is performed by an intermittent mechanism. This paper presents the development of an SPEA using the MACCEPA principle with a self-closing mechanism. This actuator can deliver a bi-directional output torque, variable stiffness and reduced friction. The load on the motor can also be reduced, leading to lower power consumption. The variable recruitment of the parallel springs can also be tuned in order to further decrease the consumption of the actuator for a given task. First, the concept is explained and prior work is briefly described. Next, the design and the model of one of the layers are presented, followed by the working principle of the full actuator. Finally, experiments measuring the electric consumption of the actuator demonstrate the advantage of the SPEA over an equivalent stiff actuator.
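    The variable-recruitment principle, engaging parallel spring layers one at a time as the demanded torque grows, might be modeled as follows (a toy with hypothetical layer torques, not the MACCEPA design values):

```python
def recruited_layers(torque_demand, layer_torque, n_layers):
    """Engage just enough parallel layers to carry the demand;
    the motor supplies only the remainder."""
    n = min(n_layers, int(torque_demand // layer_torque))
    motor_torque = torque_demand - n * layer_torque
    return n, motor_torque

# 4 layers of 2 Nm each; a 7 Nm demand recruits 3 layers,
# leaving only 1 Nm on the motor.
print(recruited_layers(7.0, 2.0, 4))
```

    Offloading most of the demand onto passive layers is what lowers the motor load and, with it, the power consumption noted in the abstract.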

  7. Directional Transport of a Liquid Drop between Parallel-Nonparallel Combinative Plates.

    Science.gov (United States)

    Huang, Yao; Hu, Liang; Chen, Wenyu; Fu, Xin; Ruan, Xiaodong; Xie, Haibo

    2018-04-17

Liquids confined between two parallel plates can perform the function of transmission, support, or lubrication in many practical applications, so keeping the liquid stable within its working area is very important. However, instabilities may lead to the formation of leaking drops outside the bulk liquid, and it is necessary to transport the detached drops back without overstepping the working area and causing destructive leakage to the system. In this study, we report a novel and facile method to solve this problem by introducing a wedgelike geometry into the parallel gap to form a parallel-nonparallel combinative construction. The transport performance of this structure was investigated. The criterion for self-propelled motion was established, which proved more difficult to meet than that in a purely nonparallel gap. We then performed a more detailed investigation into the drop dynamics under squeezing and relaxing modes, because drops reliably return in hydrophilic combinative gaps, whereas uncertainties arise in gaps with a weakly hydrophobic character. Through exploration of the transition mechanism of the drop motion state, a crucial factor named the turning point was discovered and found to be directly related to the final state of the drops. On the basis of a theoretical model of the turning point, criteria were derived to identify whether a liquid drop returns to the parallel part under squeezing and relaxing modes. These criteria can provide guidance on parameter selection and structural optimization for the combinative gap, so that destructive leakage in practical production can be avoided.

  8. Co-ordination of directional overcurrent protection with load current for parallel feeders

    Energy Technology Data Exchange (ETDEWEB)

    Wright, J.W.; Lloyd, G.; Hindle, P.J. [Alstom, Inc., Stafford (United Kingdom). T and D Protection and Control

    1999-11-01

Directional phase overcurrent relays are commonly applied at the receiving ends of parallel feeders or transformer feeders. Their purpose is to ensure full discrimination of main or back-up power system overcurrent protection for a fault near the receiving end of one feeder. This paper reviews this type of relay application and highlights load current setting constraints for directional protection. Such constraints have not previously been publicized in well-known textbooks. A directional relay current setting constraint that is suggested in some textbooks is based purely on thermal rating considerations for older technology relays. This constraint may not exist with modern numerical relays. In the absence of any apparent constraint, there is a temptation to adopt lower current settings with modern directional relays in relation to reverse load current at the receiving ends of parallel feeders. This paper identifies the danger of adopting very low current settings without any special relay feature to ensure protection security with load current during power system faults. A system incident recorded by numerical relays is also offered to highlight this danger. For cases where the identified constraints must be infringed, an implemented and tested relaying technique is proposed.

  9. GRAVIDY, a GPU modular, parallel direct-summation N-body integrator: dynamics with softening

    Science.gov (United States)

    Maureira-Fredes, Cristián; Amaro-Seoane, Pau

    2018-01-01

A wide variety of outstanding problems in astrophysics involve the motion of a large number of particles under the force of gravity. These include the global evolution of globular clusters, tidal disruptions of stars by a massive black hole, the formation of protoplanets and sources of gravitational radiation. The direct summation of N gravitational forces is a complex problem with no analytical solution and can only be tackled with approximations and numerical methods. To this end, the Hermite scheme is a widely used integration method. With different numerical techniques and special-purpose hardware, it can be used to speed up the calculations, but such approaches tend to be computationally expensive or cumbersome to work with. We present a new graphics processing unit (GPU), direct-summation N-body integrator written from scratch and based on this scheme, which includes relativistic corrections for sources of gravitational radiation. GRAVIDY is highly modular, allowing users to readily introduce new physics; it exploits available computational resources and will be maintained through regular updates. GRAVIDY can be used in parallel on multiple CPUs and GPUs, with a considerable speed-up benefit. The single-GPU version is between one and two orders of magnitude faster than the single-CPU version. A test run using four GPUs in parallel shows a speed-up factor of about 3 as compared to the single-GPU version. The conception and design of this first release is aimed at users with access to traditional parallel CPU clusters or computational nodes with one or a few GPU cards.
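The core O(N²) direct-summation step with Plummer softening that such integrators accelerate on GPUs can be sketched as follows (a toy numpy version in natural units with G = 1; the softening value is illustrative, and this is not GRAVIDY's CUDA kernel):

```python
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """O(N^2) direct-summation gravitational accelerations with
    Plummer softening eps (G = 1)."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                      # vectors r_j - r_i
        r2 = (d ** 2).sum(axis=1) + eps ** 2  # softened squared distances
        r2[i] = 1.0                           # placeholder to avoid divide-by-zero
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                       # exclude self-interaction
        acc[i] = (mass[:, None] * d * inv_r3[:, None]).sum(axis=0)
    return acc

# two equal masses on the x-axis attract each other symmetrically
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
a = accelerations(pos, mass)
```

The Hermite scheme additionally requires the time derivative of the acceleration (the "jerk"), evaluated in the same pairwise loop.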

  10. Fabrication of Si-nozzles for parallel mechano-electrospinning direct writing

    International Nuclear Information System (INIS)

    Pan, Yanqiao; Huang, YongAn; Bu, Ningbin; Yin, Zhouping

    2013-01-01

Nozzles with micro-scale orifices drive high-resolution printing techniques for generating micro- to nano-scale droplets/lines. This paper presents the fabrication and application of Si-nozzles in mechano-electrospinning (MES). The fabrication process mainly consists of photolithography, Au deposition, inductively coupled plasma etching, and polydimethylsiloxane encapsulation. A 6 wt% polyethylene oxide solution is adopted to study the electrospinning behaviour and the relations between fibre diameter and process parameters in MES. A fibre grid with 250 µm spacing can be direct-written, with fibre diameters of less than 3 µm. To improve the printing efficiency, positioning accuracy and flexibility, a rotatable multi-nozzle is adopted. The distance between parallel lines reduces sharply from 4.927 to 0.308 mm as the rotating angle increases from 0° to 87°, and fibre grids with tunable spacing are achieved. This method paves the way for the fabrication of addressable Si-nozzle arrays in parallel MES direct writing. (paper)

  11. High Performance Computation of a Jet in Crossflow by Lattice Boltzmann Based Parallel Direct Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Jiang Lei

    2015-01-01

Direct numerical simulation (DNS) of a round jet in crossflow based on the lattice Boltzmann method (LBM) is carried out on a multi-GPU cluster. The data-parallel SIMT (single instruction multiple thread) character of the GPU matches the parallelism of the LBM well, which leads to high efficiency of the GPU-based LBM solver. With the present GPU settings (6 Nvidia Tesla K20M), the DNS can be completed in several hours. A grid system of 1.5 × 10^8 points is adopted and the largest jet Reynolds number reaches 3000. The jet-to-free-stream velocity ratio is set to 3.3, and the jet is orthogonal to the mainstream flow direction. The validated code shows good agreement with experiments. Vortical structures, including the counter-rotating vortex pair (CRVP), shear-layer vortices and horseshoe vortices, are presented and analyzed based on velocity fields and vorticity distributions. Turbulent statistical quantities of Reynolds stress are also displayed. Coherent structures are revealed at very fine resolution based on the second invariant of the velocity gradients.

  12. Algorithm for Solution of Direct Kinematic Problem of Multi-sectional Manipulator with Parallel Structure

    Directory of Open Access Journals (Sweden)

    A. L. Lapikov

    2014-01-01

The article is aimed at creating techniques to study multi-sectional manipulators with parallel structure. To this end, the field was analysed to reveal both the advantages and drawbacks of such executive mechanisms and the main problems encountered in the course of research. The work shows that it is inefficient to create complete mathematical models of multi-sectional manipulators, which, in the context of solving a direct kinematic problem, would derive a functional dependence of the location and orientation of the end effector on all the generalized coordinates of the mechanism. The structure of multi-sectional manipulators is considered, where the sections are platform manipulators of parallel kinematics with six degrees of freedom. The paper offers an algorithm to define the location and orientation of the end effector of the manipulator by means of iterative solution of the analytical equation of the moving platform plane for each section. The equation for the unknown plane is derived using three points, which are the attachment points of the moving platform joints. To define the values of the joint coordinates, a system of nine non-linear equations is assembled; it should be noted that the system is completed using equations with the same type of non-linearity. The physical sense of all nine equations of the system is the Euclidean distance between points of the manipulator. The result of the algorithm execution is a matrix of homogeneous transformation for each section. The correlations describing transformations between adjoining sections of the manipulator are given. An example of a mechanism consisting of three sections is examined. A comparison of theoretical calculations with results obtained on a 3D prototype is made. The next step of the work is to conduct research activities both in the field of dynamics of platform parallel kinematics manipulators with six degrees of freedom and in the
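The plane-equation step described above, deriving the moving-platform plane from the three joint attachment points, can be illustrated with a short numpy sketch (the point coordinates below are invented for illustration):

```python
import numpy as np

def platform_plane(p1, p2, p3):
    """Plane a*x + b*y + c*z + d = 0 through three joint attachment
    points of a parallel-kinematics moving platform (points as 3-vectors)."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)       # plane normal from two edge vectors
    normal = normal / np.linalg.norm(normal)  # unit normal
    d = -normal.dot(p1)                       # offset so p1 lies on the plane
    return np.append(normal, d)               # coefficients [a, b, c, d]

# a platform lying flat at height z = 1
plane = platform_plane([0, 0, 1], [1, 0, 1], [0, 1, 1])
```

The resulting plane coefficients, together with the known joint geometry, feed the distance equations that make up the nine-equation system mentioned in the abstract.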

  13. Acceleration of cardiovascular MRI using parallel imaging: basic principles, practical considerations, clinical applications and future directions

    International Nuclear Information System (INIS)

    Niendorf, T.; Sodickson, D.

    2006-01-01

Cardiovascular Magnetic Resonance (CVMR) imaging has proven to be of clinical value for non-invasive diagnostic imaging of cardiovascular diseases. CVMR requires rapid imaging; however, the speed of conventional MRI is fundamentally limited by its sequential approach to image acquisition, in which data points are collected one after the other in the presence of sequentially applied magnetic field gradients. Parallel imaging methods instead use arrays of radiofrequency coils to acquire multiple data points simultaneously, and thereby increase imaging speed and efficiency beyond the limits of purely gradient-based approaches. The resulting improvements in imaging speed can be used in various ways, including shortening long examinations, improving spatial resolution and anatomic coverage, improving temporal resolution, enhancing image quality, overcoming physiological constraints, detecting and correcting for physiologic motion, and streamlining work flow. Examples of these strategies are provided in this review, after some of the fundamentals of parallel imaging methods now in use for cardiovascular MRI are outlined. The emphasis rests upon basic principles and state-of-the-art clinical cardiovascular MRI applications. In addition, practical aspects such as signal-to-noise ratio considerations, tailored parallel imaging protocols and potential artifacts are discussed, and current trends and future directions are explored. (orig.)
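The signal-to-noise ratio considerations mentioned above are commonly summarized by the standard parallel-imaging relation (a textbook result, not specific to this review), where R is the acceleration factor and g ≥ 1 is the coil-geometry factor:

```latex
\mathrm{SNR}_{\mathrm{accel}} \;=\; \frac{\mathrm{SNR}_{\mathrm{full}}}{g\,\sqrt{R}}
```

The √R penalty reflects the reduced number of acquired samples, while g quantifies the additional noise amplification from unfolding with a given coil array.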

  14. Parallel Factor-Based Model for Two-Dimensional Direction Estimation

    Directory of Open Access Journals (Sweden)

    Nizar Tayem

    2017-01-01

Two-dimensional (2D) Direction-of-Arrival (DOA) estimation of elevation and azimuth angles for noncoherent, mixed coherent and noncoherent, and coherent sources using extended three parallel uniform linear arrays (ULAs) is proposed. Most existing schemes have drawbacks in estimating 2D DOA for multiple narrowband incident sources: the use of a large number of snapshots, an estimation failure problem for elevation and azimuth angles in the range typical of mobile communication, and difficulty estimating coherent sources. Moreover, DOA estimation for multiple sources requires complex pair-matching methods. The algorithm proposed in this paper is based on a first-order data matrix to overcome these problems. The main contributions of the proposed method are as follows: (1) it avoids the estimation failure problem using a new antenna configuration and estimates elevation and azimuth angles for coherent sources; (2) it reduces the estimation complexity by constructing Toeplitz data matrices, which are based on a single or a few snapshots; (3) it derives a parallel factor (PARAFAC) model to avoid pair-matching problems between multiple sources. Simulation results demonstrate the effectiveness of the proposed algorithm.

  15. A new class of massively parallel direction splitting for the incompressible Navier–Stokes equations

    KAUST Repository

    Guermond, J.L.

    2011-06-01

We introduce in this paper a new direction splitting algorithm for solving the incompressible Navier-Stokes equations. The main originality of the method consists of using the operator (I-∂xx)(I-∂yy)(I-∂zz) for approximating the pressure correction instead of the Poisson operator as done in all the contemporary projection methods. The complexity of the proposed algorithm is significantly lower than that of projection methods, and it is shown to have the same stability properties as the Poisson-based pressure-correction techniques, either in standard or rotational form. The first-order (in time) version of the method is proved to have the same convergence properties as the classical first-order projection techniques. Numerical tests reveal that the second-order version of the method has the same convergence rate as its second-order projection counterpart as well. The method is suitable for parallel implementation and preliminary tests show excellent parallel performance on a distributed memory cluster of up to 1024 processors. The method has been validated on the three-dimensional lid-driven cavity flow using grids composed of up to 2 × 10^9 points. © 2011 Elsevier B.V.
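A minimal 2D sketch of the direction-splitting idea, assuming homogeneous Dirichlet boundaries and a uniform grid: the factored operator (I-∂xx)(I-∂yy) is inverted by two sweeps of independent one-dimensional tridiagonal solves, which is what makes the method so amenable to parallelization (this uses scipy's banded solver and is not the authors' implementation):

```python
import numpy as np
from scipy.linalg import solve_banded

def solve_1d(u, h):
    """Solve (I - d^2/dx^2) v = u along axis 0 with homogeneous
    Dirichlet boundaries, using a tridiagonal (banded) solver."""
    n = u.shape[0]
    ab = np.zeros((3, n))
    ab[0, 1:] = -1.0 / h**2            # superdiagonal
    ab[1, :] = 1.0 + 2.0 / h**2        # main diagonal
    ab[2, :-1] = -1.0 / h**2           # subdiagonal
    return solve_banded((1, 1), ab, u)

def direction_split_solve(f, h):
    """Invert the factored operator (I - dxx)(I - dyy) by two sweeps of
    independent 1D tridiagonal solves (each sweep parallelizes over lines)."""
    v = solve_1d(f, h)                 # sweep in x (columns are independent)
    return solve_1d(v.T, h).T          # sweep in y (rows are independent)

f = np.random.rand(32, 32)
p = direction_split_solve(f, h=1.0 / 33)
```

In 3D a third sweep in z is appended; because each sweep consists of many decoupled tridiagonal systems, the work distributes naturally over a large processor grid.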

  16. A DIRECT METHOD TO DETERMINE THE PARALLEL MEAN FREE PATH OF SOLAR ENERGETIC PARTICLES WITH ADIABATIC FOCUSING

    International Nuclear Information System (INIS)

    He, H.-Q.; Wan, W.

    2012-01-01

    The parallel mean free path of solar energetic particles (SEPs), which is determined by physical properties of SEPs as well as those of solar wind, is a very important parameter in space physics to study the transport of charged energetic particles in the heliosphere, especially for space weather forecasting. In space weather practice, it is necessary to find a quick approach to obtain the parallel mean free path of SEPs for a solar event. In addition, the adiabatic focusing effect caused by a spatially varying mean magnetic field in the solar system is important to the transport processes of SEPs. Recently, Shalchi presented an analytical description of the parallel diffusion coefficient with adiabatic focusing. Based on Shalchi's results, in this paper we provide a direct analytical formula as a function of parameters concerning the physical properties of SEPs and solar wind to directly and quickly determine the parallel mean free path of SEPs with adiabatic focusing. Since all of the quantities in the analytical formula can be directly observed by spacecraft, this direct method would be a very useful tool in space weather research. As applications of the direct method, we investigate the inherent relations between the parallel mean free path and various parameters concerning physical properties of SEPs and solar wind. Comparisons of parallel mean free paths with and without adiabatic focusing are also presented.
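For reference, in the focusing-free limit the parallel mean free path discussed above is related to the parallel spatial diffusion coefficient κ∥ and the particle speed v by the standard definition (the paper's formula generalizes this picture to include adiabatic focusing):

```latex
\lambda_{\parallel} \;=\; \frac{3\,\kappa_{\parallel}}{v}
```

This is why an analytical expression for κ∥ with focusing, such as Shalchi's, translates directly into a formula for λ∥ in terms of observable SEP and solar wind parameters.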

  17. Investigating the dynamics of a direct parallel combination of supercapacitors and polymer electrolyte fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Papra, M.; Buechi, F.N.; Koetz, R. [Electrochemistry Laboratory, Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland)

    2010-10-15

    Hydrogen fuelled vehicles with a fuel cell based powertrain are considered to contribute to sustainable mobility by reducing CO{sub 2} emissions from road transport. In such vehicles the fuel cell system is typically hybridised with an energy storage device such as a battery or a supercapacitor (SC) to allow for recovering braking energy and assist the fuel cell system for peak power. The direct parallel combination of a polymer electrolyte fuel cell (PEFC) and a SC without any control electronics is investigated in the present study. It is demonstrated that the combination enhances the dynamics of the PEFC significantly during load changes. However, due to the lack of a power electronic interface the SC cannot be utilised to its optimum capacity. (Abstract Copyright [2010], Wiley Periodicals, Inc.)

  18. Short-term gas dispersion in idealised urban canopy in street parallel with flow direction

    Science.gov (United States)

    Chaloupecká, Hana; Jaňour, Zbyněk; Nosek, Štěpán

    2016-03-01

Chemical attacks (e.g. Syria 2014-15, chlorine; 2013, sarin; Iraq 2006-7, chlorine) as well as chemical plant disasters (e.g. Spain 2015, nitric oxide and ferric chloride; Texas 2014, methyl mercaptan) threaten mankind. In these crisis situations, gas clouds are released. The dispersion of such gas clouds is the issue investigated in this paper. The paper describes wind tunnel experiments on dispersion from a ground-level point gas source situated in a model of an idealised urban canopy. Short-duration releases of the passive contaminant ethane are created by an electromagnetic valve. The gas cloud concentrations are measured at individual positions at the height of the human breathing zone, within a street parallel with the flow direction, by a fast-response ionisation detector. The simulations of the gas release for each measurement position are repeated many times under the same experimental set-up to obtain representative datasets. These datasets are analysed to compute puff characteristics (arrival time, leaving time and duration). The results indicate that the mean value of the dimensionless arrival time can be described as a growing linear function of the dimensionless coordinate along the street parallel with the flow direction in which the gas source is situated. The same might be stated about the dimensionless leaving time as well as the dimensionless duration, although these fits are worse. Utilising a linear function, we might also estimate statistical characteristics of the datasets other than their means (e.g. medians, trimeans). The datasets of the dimensionless arrival time, the dimensionless leaving time and the dimensionless duration can be fitted by the generalized extreme value (GEV) distribution in all sampling positions except one.
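Fitting a GEV distribution to such repeated-release datasets can be done with scipy (shown here on synthetic data with invented parameters, not the wind-tunnel measurements):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# stand-in for repeated-release arrival-time samples at one position
samples = genextreme.rvs(c=-0.1, loc=10.0, scale=2.0, size=2000,
                         random_state=rng)

# maximum-likelihood fit of the three GEV parameters
# (scipy's shape convention: c = -xi of the usual GEV parameterisation)
c, loc, scale = genextreme.fit(samples)
```

The fitted location and scale recover the generating values closely for a sample of this size; goodness of fit for real puff data would then be checked per sampling position.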

  19. A parallel direct solver for the self-adaptive hp Finite Element Method

    KAUST Repository

    Paszyński, Maciej R.

    2010-03-01

In this paper we present a new parallel multi-frontal direct solver, dedicated to the hp Finite Element Method (hp-FEM). The self-adaptive hp-FEM generates, in a fully automatic mode, a sequence of hp-meshes delivering exponential convergence of the error with respect to the number of degrees of freedom (d.o.f.) as well as the CPU time, by performing a sequence of hp refinements starting from an arbitrary initial mesh. The solver constructs an initial elimination tree for an arbitrary initial mesh, and expands the elimination tree each time the mesh is refined. This allows us to keep track of the order of elimination for the solver. The solver also minimizes memory usage by de-allocating partial LU factorizations computed during the elimination stage and recomputing them for the backward substitution stage, using only about 10% of the computational time necessary for the original computations. The solver has been tested on 3D Direct Current (DC) borehole resistivity measurement simulation problems. We measure the execution time and memory usage of the solver over a large regular mesh with 1.5 million degrees of freedom as well as on a highly non-regular mesh, generated by the self-adaptive hp-FEM, with finite elements of various sizes and polynomial orders of approximation varying from p = 1 to p = 9. From the presented experiments it follows that the parallel solver scales well up to the maximum number of utilized processors. The limit for the solver scalability is the maximum sequential part of the algorithm: the computation of the partial LU factorizations over the longest path from the root of the elimination tree down to the deepest leaf. © 2009 Elsevier Inc. All rights reserved.

  20. The new Exponential Directional Iterative (EDI) 3-D Sn scheme for parallel adaptive differencing

    International Nuclear Information System (INIS)

    Sjoden, G.E.

    2005-01-01

    The new Exponential Directional Iterative (EDI) discrete ordinates (Sn) scheme for 3-D Cartesian Coordinates is presented. The EDI scheme is a logical extension of the positive, efficient Exponential Directional Weighted (EDW) Sn scheme currently used as the third level of the adaptive spatial differencing algorithm in the PENTRAN parallel discrete ordinates solver. Here, the derivation and advantages of the EDI scheme are presented; EDI uses EDW-rendered exponential coefficients as initial starting values to begin a fixed point iteration of the exponential coefficients. One issue that required evaluation was an iterative cutoff criterion to prevent the application of an unstable fixed point iteration; although this was needed in some cases, it was readily treated with a default to EDW. Iterative refinement of the exponential coefficients in EDI typically converged in fewer than four fixed point iterations. Moreover, EDI yielded more accurate angular fluxes compared to the other schemes tested, particularly in streaming conditions. Overall, it was found that the EDI scheme was up to an order of magnitude more accurate than the EDW scheme on a given mesh interval in streaming cases, and is potentially a good candidate as a fourth-level differencing scheme in the PENTRAN adaptive differencing sequence. The 3-D Cartesian computational cost of EDI was only about 20% more than the EDW scheme, and about 40% more than Diamond Zero (DZ). More evaluation and testing are required to determine suitable upgrade metrics for EDI to be fully integrated into the current adaptive spatial differencing sequence in PENTRAN. (author)

  1. Experimental Hamiltonian identification for controlled two-level systems

    International Nuclear Information System (INIS)

    Schirmer, S.G.; Kolli, A.; Oi, D.K.L.

    2004-01-01

We present a strategy to empirically determine the internal and control Hamiltonians for an unknown two-level system (black box) subject to various (piecewise constant) control fields when direct readout by measurement is limited to a single, fixed observable.

  2. A parallel direct-forcing fictitious domain method for simulating microswimmers

    Science.gov (United States)

    Gao, Tong; Lin, Zhaowu

    2017-11-01

We present a 3D parallel direct-forcing fictitious domain method for simulating swimming micro-organisms at small Reynolds numbers. We treat the motile micro-swimmers as spherical rigid particles using the "squirmer" model. The particle dynamics are solved on moving Lagrangian meshes that overlay a fixed Eulerian mesh for solving the fluid motion, and the momentum exchange between the two phases is resolved by distributing pseudo body-forces over the particle interior regions, which constrain the background fictitious fluid to follow the particle movement. While the solid and fluid subproblems are solved separately, no inner iterations are required to enforce numerical convergence. We demonstrate the accuracy and robustness of the method by comparing our results with existing analytical and numerical studies for various cases of single-particle dynamics and particle-particle interactions. We also perform a series of numerical explorations to obtain statistical and rheological measurements characterizing the dynamics and structures of squirmer suspensions. NSF DMS 1619960.
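For reference, the standard steady squirmer model mentioned above prescribes a tangential slip velocity on the particle surface; keeping only the first two modes gives the swimming speed and the pusher/puller parameter (textbook relations, not specific to this implementation):

```latex
u_{\theta}(\theta) \;=\; B_{1}\sin\theta \;+\; \frac{B_{2}}{2}\,\sin 2\theta,
\qquad
U \;=\; \frac{2}{3}\,B_{1},
\qquad
\beta \;=\; \frac{B_{2}}{B_{1}}
```

Here β < 0 corresponds to pushers, β > 0 to pullers, and β = 0 to neutral squirmers.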

  3. Graph Grammar-Based Multi-Frontal Parallel Direct Solver for Two-Dimensional Isogeometric Analysis

    KAUST Repository

    Kuźnik, Krzysztof

    2012-06-02

This paper introduces a graph grammar based model for developing a multi-thread multi-frontal parallel direct solver for the two-dimensional isogeometric finite element method. Execution of the solver algorithm is expressed as a sequence of graph grammar productions. At the beginning, productions construct the elimination tree with leaves corresponding to finite elements. A following sequence of graph grammar productions generates element frontal matrices at leaf nodes, merges matrices at parent nodes and eliminates rows corresponding to fully assembled degrees of freedom. Finally, there are graph grammar productions responsible for the root problem solution and recursive backward substitutions. Expressing the solver algorithm by graph grammar productions allows us to explore the concurrency of the algorithm. The graph grammar productions are grouped into sets of independent tasks that can be executed concurrently. The resulting concurrent multi-frontal solver algorithm is implemented and tested on an NVIDIA GPU, providing O(N log N) execution time complexity, where N is the number of degrees of freedom. We have confirmed this complexity by solving up to 1 million degrees of freedom on a GPU with 448 cores.
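The per-node elimination of fully assembled degrees of freedom in a multi-frontal solver amounts to forming a Schur complement that is passed up the elimination tree; a minimal dense sketch (a toy 3×3 frontal matrix, not the paper's GPU implementation):

```python
import numpy as np

def eliminate(front, k):
    """Partially factor a frontal matrix: eliminate the first k rows/cols
    (fully assembled DOFs) and return the Schur complement that is merged
    into the parent node of the elimination tree."""
    A11 = front[:k, :k]   # fully assembled block (eliminated here)
    A12 = front[:k, k:]
    A21 = front[k:, :k]
    A22 = front[k:, k:]   # not yet fully assembled
    return A22 - A21 @ np.linalg.solve(A11, A12)

front = np.array([[4.0, 1.0, 2.0],
                  [1.0, 3.0, 0.0],
                  [2.0, 0.0, 5.0]])
schur = eliminate(front, 2)   # contribution block sent to the parent
```

At the parent node, the Schur complements from sibling children are summed into a new frontal matrix and the process repeats up to the root.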

  4. Study Behaviors and USMLE Step 1 Performance: Implications of a Student Self-Directed Parallel Curriculum.

    Science.gov (United States)

    Burk-Rafel, Jesse; Santen, Sally A; Purkiss, Joel

    2017-11-01

To determine medical students' study behaviors when preparing for the United States Medical Licensing Examination (USMLE) Step 1, and how these behaviors are associated with Step 1 scores when controlling for likely covariates, the authors distributed a study-behaviors survey in 2014 and 2015 at their institution to two cohorts of medical students who had recently taken Step 1. Demographic and academic data were linked to responses. Descriptive statistics, bivariate correlations, and multiple linear regression analyses were performed. Of 332 medical students, 274 (82.5%) participated. Most students (n = 211; 77.0%) began studying for Step 1 during their preclinical curriculum, increasing their intensity during a protected study period during which they averaged 11.0 hours of studying per day (standard deviation [SD] 2.1) over a period of 35.3 days (SD 6.2). Students used numerous third-party resources, including reading an exam-specific 700-page review book on average 2.1 times (SD 0.8) and completing an average of 3,597 practice multiple-choice questions (SD 1,611). Initiating study prior to the designated study period, increased review book usage, and attempting more practice questions were all associated with higher Step 1 scores, even when controlling for Medical College Admission Test scores, preclinical exam performance, and self-identified score goal (adjusted R² = 0.56, P < .001). Medical students at one public institution engaged in a self-directed, "parallel" Step 1 curriculum using third-party study resources. Several study behaviors were associated with improved USMLE Step 1 performance, informing both institution- and student-directed preparation for this high-stakes exam.
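The kind of multiple linear regression used in the study, relating an outcome to study behaviors while controlling for covariates, can be sketched with numpy on synthetic data (every coefficient and noise level below is invented for illustration; this is not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 274  # the study's sample size, used here only to size synthetic data

# synthetic stand-ins for covariates and a behavior (standardized units)
mcat = rng.normal(0, 1, n)       # admission test score
preclin = rng.normal(0, 1, n)    # preclinical exam performance
questions = rng.normal(0, 1, n)  # practice questions completed

# invented generating model for the outcome
step1 = 230 + 4 * mcat + 5 * preclin + 3 * questions + rng.normal(0, 5, n)

# multiple linear regression via least squares: the coefficient on
# `questions` estimates its association while holding covariates fixed
X = np.column_stack([np.ones(n), mcat, preclin, questions])
beta, *_ = np.linalg.lstsq(X, step1, rcond=None)
```

With this sample size the fitted coefficients recover the generating values closely; in the actual study the same structure yields the reported adjusted R² and per-behavior associations.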

  5. Direct and iterative algorithms for the parallel solution of the one-dimensional macroscopic Navier-Stokes equations

    International Nuclear Information System (INIS)

    Doster, J.M.; Sills, E.D.

    1986-01-01

Current efforts are under way to develop and evaluate numerical algorithms for the parallel solution of the large sparse matrix equations associated with the finite difference representation of the macroscopic Navier-Stokes equations. Previous work has shown that these equations can be cast into smaller coupled matrix equations suitable for solution utilizing multiple computer processors operating in parallel. The individual processors themselves may exhibit parallelism through the use of vector pipelines. This work has concentrated on the one-dimensional drift flux form of the Navier-Stokes equations. Direct and iterative algorithms that may be suitable for implementation on parallel computer architectures are evaluated in terms of accuracy and overall execution speed. This work has application to engineering and training simulations, on-line process control systems, and engineering workstations where increased computational speeds are required.
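Finite-difference discretisations of such one-dimensional equations typically produce tridiagonal systems, for which the classic direct method is the Thomas algorithm; a serial sketch is shown below (the paper's parallel algorithms instead partition such systems across processors):

```python
import numpy as np

def thomas(a, b, c, d):
    """Direct solution of a tridiagonal system by the Thomas algorithm.
    a = subdiagonal (a[0] unused), b = main diagonal,
    c = superdiagonal (c[-1] unused), d = right-hand side."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# model system from a -u'' = f discretisation: diagonal 2, off-diagonals -1
n = 50
a = -np.ones(n)
b = 2.0 * np.ones(n)
c = -np.ones(n)
d = np.ones(n)
x = thomas(a, b, c, d)
```

The algorithm is O(n) but inherently sequential, which is precisely why parallel alternatives (cyclic reduction, partitioned direct solves, iterative schemes) are of interest in this context.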

  6. Darboux transformation for two-level system

    Energy Technology Data Exchange (ETDEWEB)

    Bagrov, V.; Baldiotti, M.; Gitman, D.; Shamshutdinova, V. [Instituto de Fisica, Universidade de Sao Paulo, Caixa Postal 66318-CEP, 05315-970 Sao Paulo, S.P. (Brazil)

    2005-06-01

    We develop the Darboux procedure for the case of the two-level system. In particular, it is demonstrated that one can construct the Darboux intertwining operator that does not violate the specific structure of the equations of the two-level system, transforming only one real potential into another real potential. We apply the obtained Darboux transformation to known exact solutions of the two-level system. Thus, we find three classes of new solutions for the two-level system and the corresponding new potentials that allow such solutions. (Abstract Copyright [2005], Wiley Periodicals, Inc.)

  7. Convergence analysis of a class of massively parallel direction splitting algorithms for the Navier-Stokes equations in simple domains

    KAUST Repository

    Guermond, Jean-Luc; Minev, Peter D.; Salgado, Abner J.

    2012-01-01

    We provide a convergence analysis for a new fractional timestepping technique for the incompressible Navier-Stokes equations based on direction splitting. This new technique is of linear complexity, unconditionally stable and convergent, and suitable for massive parallelization. © 2012 American Mathematical Society.

  8. Influence of equilibrium shear flow in the parallel magnetic direction on edge localized mode crash

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y.; Xiong, Y. Y. [College of Physical Science and Technology, Sichuan University, 610064 Chengdu (China); Chen, S. Y., E-mail: sychen531@163.com [College of Physical Science and Technology, Sichuan University, 610064 Chengdu (China); Key Laboratory of High Energy Density Physics and Technology of Ministry of Education, Sichuan University, Chengdu 610064 (China); Southwestern Institute of Physics, Chengdu 610041 (China); Huang, J.; Tang, C. J. [College of Physical Science and Technology, Sichuan University, 610064 Chengdu (China); Key Laboratory of High Energy Density Physics and Technology of Ministry of Education, Sichuan University, Chengdu 610064 (China)

    2016-04-15

The influence of parallel shear flow on the evolution of peeling-ballooning (P-B) modes is studied with the BOUT++ four-field code in this paper. The parallel shear flow has different effects in linear and nonlinear simulations. In the linear simulations, the growth rate of the edge localized mode (ELM) can be increased by the Kelvin-Helmholtz term, which can be caused by the parallel shear flow. In the nonlinear simulations, the results accord with the linear simulations in the linear phase. However, the ELM size is reduced by the parallel shear flow at the beginning of the turbulence phase, which is recognized as the P-B filament structure. Then, during the turbulence phase, the ELM size is decreased by the shear flow.

  9. Two-Level Semantics and Code Generation

    DEFF Research Database (Denmark)

    Nielson, Flemming; Nielson, Hanne Riis

    1988-01-01

A two-level denotational metalanguage that is suitable for defining the semantics of Pascal-like languages is presented. The two levels allow for an explicit distinction between computations taking place at compile-time and computations taking place at run-time. While this distinction is perhaps ... not absolutely necessary for describing the input-output semantics of programming languages, it is necessary when issues such as data flow analysis and code generation are considered. For an example stack-machine, the authors show how to generate code for the run-time computations and still perform the compile...

  10. Highly efficient parallel direct solver for solving dense complex matrix equations from method of moments

    Directory of Open Access Journals (Sweden)

    Yan Chen

    2017-03-01

Full Text Available Based on a vectorised and cache-optimised kernel, a parallel lower-upper (LU) decomposition with a novel communication-avoiding pivoting scheme is developed to solve the dense complex matrix equations generated by the method of moments. Fine-grain data rearrangement and assembler instructions are adopted to reduce memory accesses and improve CPU cache utilisation, which also facilitates vectorisation of the code. By grouping processes in a binary tree, a parallel pivoting scheme is designed to optimise the communication pattern and thus reduce the solving time of the proposed solver. Two large electromagnetic radiation problems are solved on two supercomputers, respectively, and the numerical results demonstrate that the proposed method outperforms those in open-source and commercial libraries.
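For readers unfamiliar with the kernel being parallelised, a minimal serial sketch of LU decomposition with partial pivoting follows — the textbook unblocked algorithm, not the paper's vectorised, communication-avoiding solver. Production codes block the loop for cache reuse and distribute the pivot search, which is where communication-avoiding schemes come in.

```python
import numpy as np

def lu_partial_pivot(A):
    """Right-looking LU with partial pivoting, PA = LU, for complex matrices.
    Returns the pivot order and the L, U factors."""
    A = A.astype(complex).copy()
    n = A.shape[0]
    piv = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # pivot: largest magnitude in column k
        if p != k:
            A[[k, p]] = A[[p, k]]             # swap rows (the communication-heavy step)
            piv[[k, p]] = piv[[p, k]]
        A[k+1:, k] /= A[k, k]                 # multipliers stored in the lower triangle
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])  # rank-1 Schur update
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return piv, L, U
```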

  11. Scalability of Parallel Spatial Direct Numerical Simulations on Intel Hypercube and IBM SP1 and SP2

    Science.gov (United States)

    Joslin, Ronald D.; Hanebutte, Ulf R.; Zubair, Mohammad

    1995-01-01

The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube and IBM SP1 and SP2 parallel computers are documented. Spatially evolving disturbances associated with laminar-to-turbulent transition in boundary-layer flows are computed with the PSDNS code. The feasibility of using the PSDNS to perform transition studies on these computers is examined. The results indicate that the PSDNS approach can be effectively parallelized on a distributed-memory parallel machine by remapping the distributed data structure during the course of the calculation. Scalability information is provided to estimate computational costs relative to changes in the number of grid points. As the number of processors is increased, speedups slower than linear are achieved even with optimized (machine-dependent library) routines, because the computational cost is dominated by the FFT routine, which yields less than ideal speedups. By using appropriate compile options and optimized library routines on the SP1, the serial code achieves 52-56 Mflops on a single node of the SP1 (45 percent of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a "real world" simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP supercomputer. For the same simulation, 32 nodes of the SP1 and SP2 are required to reach the performance of a Cray C-90. A 32-node SP1 (SP2) configuration is 2.9 (4.6) times faster than a Cray Y/MP for this simulation, while the hypercube is roughly 2 times slower than the Y/MP for this application. KEY WORDS: Spatial direct numerical simulations; incompressible viscous flows; spectral methods; finite differences; parallel computing.

  12. Current Trends in Numerical Simulation for Parallel Engineering Environments New Directions and Work-in-Progress

    International Nuclear Information System (INIS)

    Trinitis, C; Schulz, M

    2006-01-01

In today's world, the use of parallel programming and architectures is essential for simulating practical problems in engineering and related disciplines. Remarkable progress in CPU architecture, system scalability, and interconnect technology continues to provide new opportunities, as well as new challenges for both system architects and software developers. These trends are paralleled by progress in parallel algorithms, simulation techniques, and software integration from multiple disciplines. ParSim brings together researchers from both application disciplines and computer science and aims at fostering closer cooperation between these fields. Since its successful introduction in 2002, ParSim has established itself as an integral part of the EuroPVM/MPI conference series. In contrast to traditional conferences, emphasis is put on the presentation of up-to-date results with a short turn-around time. This offers a unique opportunity to present new aspects in this dynamic field and discuss them with a wide, interdisciplinary audience. The EuroPVM/MPI conference series, as one of the prime events in parallel computation, serves as an ideal setting for ParSim. This combination enables the participants to present and discuss their work within the scope of both the session and the host conference. This year, eleven papers from authors in nine countries were submitted to ParSim, and we selected five of them. They cover a wide range of different application fields including gas flow simulations, thermo-mechanical processes in nuclear waste storage, and cosmological simulations. At the same time, the selected contributions also address the computer science side of their codes and discuss different parallelization strategies, programming models and languages, as well as the use of nonblocking collective operations in MPI. We are confident that this provides an attractive program and that ParSim will be an informal setting for lively discussions and for fostering new

  13. High-throughput fabrication of micrometer-sized compound parabolic mirror arrays by using parallel laser direct-write processing

    International Nuclear Information System (INIS)

    Yan, Wensheng; Gu, Min; Cumming, Benjamin P

    2015-01-01

Micrometer-sized parabolic mirror arrays have significant applications in both light-emitting diodes and solar cells. However, low fabrication throughput has been identified as a major obstacle to large-scale application of the mirror arrays, owing to the serial nature of the conventional method. Here, the mirror arrays are fabricated by parallel laser direct-write processing, which addresses this barrier. In addition, it is demonstrated that the parallel writing is able to fabricate complex arrays as well as simple arrays, and thus offers wider applications. Optical measurements show that each single mirror confines the full-width at half-maximum value to as small as 17.8 μm at a height of 150 μm whilst providing a transmittance of up to 68.3% at a wavelength of 633 nm, in good agreement with the calculated values. (paper)

  14. Characterization of growth sectors in synthetic quartz grown from cylindrical seeds parallel to [0001] direction

    Directory of Open Access Journals (Sweden)

    Pedro Luiz Guzzo

    2004-06-01

Full Text Available In the present study, the morphology and impurity distribution were investigated in growth sectors formed around the [0001] axis of synthetic quartz crystals. Plates containing cylindrical holes and cylindrical bars parallel to [0001] were prepared by ultrasonic machining and used as seed crystals. Hydrothermal growth of synthetic quartz was carried out in a commercial autoclave in NaOH solution for 50 days. The morphologies of crystals grown from cylindrical seeds were characterized by X-ray diffraction topography. For both types of crystals, +X- and -X-growth sectors were distinctly observed. Infrared spectroscopy and ionizing radiation were used to reveal the distribution of point defects related to Si-Al substitution and OH species. A different distribution of Al-related centers was found in comparison with crystals grown from conventional Y-bar and Z-plate seeds.

  15. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    Science.gov (United States)

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.

  16. Effective Five Directional Partial Derivatives-Based Image Smoothing and a Parallel Structure Design.

    Science.gov (United States)

    Choongsang Cho; Sangkeun Lee

    2016-04-01

Image smoothing has been used for image segmentation, image reconstruction, object classification, and 3D content generation. Several smoothing approaches have been used at the pre-processing step to retain critical edges while removing noise and small details. However, they have limited performance, especially in removing small details and smoothing discrete regions. Therefore, to provide fast and accurate smoothing, we propose an effective scheme that uses a weighted combination of the gradient, Laplacian, and diagonal derivatives of a smoothed image. In addition, to reduce computational complexity, we designed and implemented a parallel processing structure for the proposed scheme on a graphics processing unit (GPU). For an objective evaluation of the smoothing performance, the images were linearly quantized into several layers to generate experimental images, and the quantized images were smoothed using several methods for reconstructing the smoothly changed shape and intensity of the original image. Experimental results showed that the proposed scheme has higher objective scores and better smoothing performance than similar schemes, preserving critical details while removing trivial ones. In terms of computational complexity, the proposed smoothing scheme running on a GPU provided 18 and 16 times lower complexity than the proposed scheme running on a CPU and the L0-based smoothing scheme, respectively. In addition, a simple noise reduction test was conducted to show the characteristics of the proposed approach; the presented algorithm outperformed the state-of-the-art algorithms by more than 5.4 dB. Therefore, we believe that the proposed scheme can be a useful tool for efficient image smoothing.
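The core idea — updating each pixel with a weighted combination of derivative terms — can be sketched with a simple per-pixel rule. The stencil and weights below are illustrative assumptions, not the paper's tuned scheme: the update mixes axial (gradient/Laplacian-like) and diagonal second differences, which is trivially data-parallel and hence a natural fit for a GPU.

```python
import numpy as np

def smooth_step(img, w_axial=0.125, w_diag=0.0625):
    """One iteration of derivative-driven smoothing: add weighted axial and
    diagonal second differences to each pixel.  With the default weights the
    update is a convex (averaging) stencil, so repeated application smooths.
    Weights are hypothetical, chosen only so the kernel stays nonnegative."""
    p = np.pad(img, 1, mode='edge')
    c = p[1:-1, 1:-1]
    axial = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * c)
    diag = (p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:] - 4 * c)
    return img + w_axial * axial + w_diag * diag
```

Each output pixel depends only on a fixed neighbourhood of the input, so a GPU implementation assigns one thread per pixel with no synchronisation inside an iteration.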

  17. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    Science.gov (United States)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in the hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction (SFD) algorithm. However, a parallel GPU implementation of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different GPU parallelization strategies are explored. The first strategy, which has been used in the existing parallel SFD algorithm on a GPU, suffers from redundant computation; we therefore designed a second parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculating flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU
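As a point of reference for the serial baseline being accelerated, a minimal single-flow-direction (D8) accumulation for a depressionless DEM can be sketched as below. This is an illustrative assumption, not the paper's code: the paper targets the harder multiple-flow-direction variant and a recursive formulation, while this sketch avoids recursion by visiting cells from highest to lowest elevation.

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Sequential D8 accumulation: each cell contributes one unit of area and
    passes its accumulated total to its steepest downslope neighbour.
    Assumes the DEM has already been preprocessed (no depressions/flats)."""
    rows, cols = dem.shape
    acc = np.ones((rows, cols), dtype=float)   # each cell contributes itself
    order = np.argsort(dem, axis=None)[::-1]   # visit highest cells first
    for idx in order:
        r, c = divmod(int(idx), cols)
        best, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    dist = (dr * dr + dc * dc) ** 0.5
                    drop = (dem[r, c] - dem[nr, nc]) / dist
                    if drop > best:
                        best, target = drop, (nr, nc)
        if target is not None:
            acc[target] += acc[r, c]           # route flow downslope
    return acc
```

On a 1-D slope the accumulation simply grows downhill, which makes the routing easy to check by hand.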

  18. Solar power satellite rectenna design study: Directional receiving elements and parallel-series combining analysis

    Science.gov (United States)

    Gutmann, R. J.; Borrego, J. M.

    1978-01-01

Rectenna conversion efficiencies (RF to dc) of approximately 85 percent were demonstrated on a small scale, clearly indicating the feasibility and potential efficiency of converting microwave power to dc. Overall cost estimates for the solar power satellite indicate that the baseline rectenna subsystem will account for 25 to 40 percent of the system cost. Directional receiving elements and element extensions were studied, along with a power-combining evaluation and evaluation extensions.

  19. Primal Domain Decomposition Method with Direct and Iterative Solver for Circuit-Field-Torque Coupled Parallel Finite Element Method to Electric Machine Modelling

    Directory of Open Access Journals (Sweden)

    Daniel Marcsa

    2015-01-01

Full Text Available The analysis and design of electromechanical devices involve the solution of large sparse linear systems and therefore require high-performance algorithms. In this paper, the primal Domain Decomposition Method (DDM) with a parallel forward-backward solver and with a parallel Preconditioned Conjugate Gradient (PCG) solver is introduced into a two-dimensional parallel time-stepping finite element formulation to analyze a rotating machine, considering the electromagnetic field, external circuit and rotor movement. The proposed parallel direct and iterative solvers with two preconditioners are analyzed with respect to their computational efficiency and the number of solver iterations required with the different preconditioners. Simulation results for a rotating machine are also presented.

  20. Start-up flow in a three-dimensional lid-driven cavity by means of a massively parallel direction splitting algorithm

    KAUST Repository

    Guermond, J. L.; Minev, P. D.

    2011-01-01

The purpose of this paper is to validate a new highly parallelizable direction splitting algorithm. The parallelization capabilities of this algorithm are illustrated by providing a highly accurate solution for the start-up flow in a three-dimensional lid-driven cavity.

  1. Parallel electric fields accelerating ions and electrons in the same direction

    International Nuclear Information System (INIS)

    Hultqvist, B; Lundin, R.

    1988-01-01

In this contribution the authors present Viking observations of electrons and positive ions which move upward along the magnetic field lines with energies of the same order of magnitude. The authors propose that both ions and electrons are accelerated by an electric field which has low-frequency temporal variations, such that the ions experience an average electrostatic potential drop along the magnetic field lines, whereas the upward streaming electrons are accelerated during periods of downward-pointing electric field, which is quasi-static for the electrons and forces them to beam out of the field region before the field changes direction.

  2. Solidification microstructures and solid-state parallels: Recent developments, future directions

    Energy Technology Data Exchange (ETDEWEB)

    Asta, M. [Department of Chemical Engineering and Materials Science, University of California at Davis, Davis, CA 95616 (United States); Beckermann, C. [Department of Mechanical and Industrial Engineering, University of Iowa, Iowa City, IA 52242 (United States); Karma, A. [Department of Physics and Center for Interdisciplinary Research on Complex Systems, Northeastern University, Boston, MA 02115 (United States); Kurz, W. [Institute of Materials, Ecole Polytechnique Federale de Lausanne (EPFL), 1015 Lausanne (Switzerland)], E-mail: wilfried.kurz@epfl.ch; Napolitano, R. [Department of Materials Science and Engineering, Iowa State University, and Ames Laboratory USDOE, Ames, IA 50011 (United States); Plapp, M. [Physique de la Matiere Condensee, Ecole Polytechnique, CNRS, 91128 Palaiseau (France); Purdy, G. [Department of Materials Science and Engineering, McMaster University, Hamilton, Ont., L8S 4L7 (Canada); Rappaz, M. [Institute of Materials, Ecole Polytechnique Federale de Lausanne (EPFL), 1015 Lausanne (Switzerland); Trivedi, R. [Department of Materials Science and Engineering, Iowa State University, and Ames Laboratory USDOE, Ames, IA 50011 (United States)

    2009-02-15

    Rapid advances in atomistic and phase-field modeling techniques as well as new experiments have led to major progress in solidification science during the first years of this century. Here we review the most important findings in this technologically important area that impact our quantitative understanding of: (i) key anisotropic properties of the solid-liquid interface that govern solidification pattern evolution, including the solid-liquid interface free energy and the kinetic coefficient; (ii) dendritic solidification at small and large growth rates, with particular emphasis on orientation selection; (iii) regular and irregular eutectic and peritectic microstructures; (iv) effects of convection on microstructure formation; (v) solidification at a high volume fraction of solid and the related formation of pores and hot cracks; and (vi) solid-state transformations as far as they relate to solidification models and techniques. In light of this progress, critical issues that point to directions for future research in both solidification and solid-state transformations are identified.

  3. Pros and cons of rotating ground motion records to fault-normal/parallel directions for response history analysis of buildings

    Science.gov (United States)

    Kalkan, Erol; Kwong, Neal S.

    2014-01-01

    According to the regulatory building codes in the United States (e.g., 2010 California Building Code), at least two horizontal ground motion components are required for three-dimensional (3D) response history analysis (RHA) of building structures. For sites within 5 km of an active fault, these records should be rotated to fault-normal/fault-parallel (FN/FP) directions, and two RHAs should be performed separately (when FN and then FP are aligned with the transverse direction of the structural axes). It is assumed that this approach will lead to two sets of responses that envelope the range of possible responses over all nonredundant rotation angles. This assumption is examined here, for the first time, using a 3D computer model of a six-story reinforced-concrete instrumented building subjected to an ensemble of bidirectional near-fault ground motions. Peak values of engineering demand parameters (EDPs) were computed for rotation angles ranging from 0 through 180° to quantify the difference between peak values of EDPs over all rotation angles and those due to FN/FP direction rotated motions. It is demonstrated that rotating ground motions to FN/FP directions (1) does not always lead to the maximum responses over all angles, (2) does not always envelope the range of possible responses, and (3) does not provide maximum responses for all EDPs simultaneously even if it provides a maximum response for a specific EDP.
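The underlying rotation scan is straightforward to sketch. Assuming two horizontal acceleration histories and using peak amplitude as a stand-in EDP (a real RHA would run the structural model once per angle), the envelope over nonredundant angles can be computed as follows; the point of the study is precisely that the two fixed FN/FP orientations need not reach this envelope.

```python
import numpy as np

def rotate_components(ax, ay, theta_deg):
    """Rotate a pair of horizontal ground-motion histories by theta (degrees)."""
    t = np.radians(theta_deg)
    return (np.cos(t) * ax + np.sin(t) * ay,
            -np.sin(t) * ax + np.cos(t) * ay)

def peak_over_angles(ax, ay, edp=lambda a: np.max(np.abs(a)), step=1.0):
    """Scan nonredundant rotation angles (0-180 deg) and return the angle and
    value of the maximum EDP of the first rotated component.  The default EDP
    (peak amplitude) is a hypothetical placeholder for a structural response."""
    angles = np.arange(0.0, 180.0, step)
    peaks = [edp(rotate_components(ax, ay, th)[0]) for th in angles]
    return angles[int(np.argmax(peaks))], max(peaks)
```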

  4. Advancing predictive models for particulate formation in turbulent flames via massively parallel direct numerical simulations

    KAUST Repository

    Bisetti, Fabrizio

    2014-07-14

    Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. © 2014 The Author(s) Published by the Royal Society.

  5. Two-level method with coarse space size independent convergence

    Energy Technology Data Exchange (ETDEWEB)

    Vanek, P.; Brezina, M. [Univ. of Colorado, Denver, CO (United States); Tezaur, R.; Krizkova, J. [UWB, Plzen (Czech Republic)

    1996-12-31

The basic disadvantage of the standard two-level method is the strong dependence of its convergence rate on the size of the coarse-level problem. In order to obtain the optimal convergence result, one is limited to using a coarse space which is only a few times smaller than the fine-level one. Consequently, the asymptotic cost of the resulting method is the same as in the case of using a coarse-level solver for the original problem. Today's two-level domain decomposition methods typically offer an improvement by yielding a rate of convergence which depends on the ratio of fine and coarse levels only polylogarithmically. However, these methods require the use of local subdomain solvers for which straightforward application of iterative methods is problematic, while the usual application of direct solvers is expensive. We suggest a method that significantly diminishes these difficulties.

  6. Structural Directed Growth of Ultrathin Parallel Birnessite on β-MnO2 for High-Performance Asymmetric Supercapacitors.

    Science.gov (United States)

    Zhu, Shijin; Li, Li; Liu, Jiabin; Wang, Hongtao; Wang, Tian; Zhang, Yuxin; Zhang, Lili; Ruoff, Rodney S; Dong, Fan

    2018-02-27

Two-dimensional birnessite has attracted attention for electrochemical energy storage because of its redox-active Mn4+/Mn3+ ions and spacious interlayer channels available for ion diffusion. However, current strategies are largely limited to enhancing the electrical conductivity of birnessite. One key limitation affecting the electrochemical properties of birnessite is the poor utilization of the MnO6 unit. Here, we assemble a β-MnO2/birnessite core-shell structure that exploits the exposed crystal face of β-MnO2 as the core and ultrathin birnessite sheets whose structural advantages enhance the utilization efficiency of the Mn from the bulk. Our birnessite, with sheets parallel to one another, is found to have an unusual crystal structure whose interlayer spacing, Mn(III)/Mn(IV) ratio and balancing-cation content differ from those of common birnessite. The substrate-directed growth mechanism is carefully investigated. The as-prepared core-shell nanostructures enlarge the exposed surface area of birnessite and achieve high electrochemical performance (for example, 657 F g-1 in 1 M Na2SO4 electrolyte based on the weight of parallel birnessite) and excellent rate capability over a potential window of up to 1.2 V. This strategy opens avenues for fundamental studies of birnessite and its properties and suggests the possibility of its use in energy storage and other applications. The potential window of an asymmetric supercapacitor assembled with this material can be enlarged to 2.2 V (in aqueous electrolyte) with good cycling stability.

  7. The effect of the flow direction inside the header on two-phase flow distribution in parallel vertical channels

    International Nuclear Information System (INIS)

    Marchitto, A.; Fossa, M.; Guglielmini, G.

    2012-01-01

Uniform fluid distribution is essential for the efficient operation of chemical-processing equipment such as contactors, reactors, mixers and burners, and of most refrigeration equipment, where two phases act together. To obtain optimum distribution, proper consideration must be given to flow behaviour in the distributor, flow conditions upstream and downstream of the distributor, and the distribution requirements (fluid or phase) of the equipment. Even though the principles of single-phase distribution have been well developed for more than three decades, they are frequently not taken into proper account by equipment designers when a mixture is present, and a significant fraction of process equipment consequently suffers from maldistribution. The experimental investigation presented in this paper is aimed at understanding the main mechanisms that drive the flow distribution inside a two-phase horizontal header, in order to design improved distributors and to optimise the flow distribution inside compact heat exchangers. The experiments were devoted to establishing the influence of the inlet conditions and of the channel/distributor geometry on the phase/mass distribution into parallel vertical channels. The study is carried out with air-water mixtures and is based on the measurement of component flow rates in individual channels and of pressure drops across the distributor. The effects of the operating conditions, the header geometry and the inlet port nozzle were investigated in the ranges of liquid and gas superficial velocities of 0.2-1.2 and 1.5-16.5 m/s, respectively. In order to control the main flow direction inside the header, different fitting devices were tested; the insertion of a co-axial, multi-hole distributor inside the header confirmed the possibility of greatly improving the liquid and gas flow distribution by proper selection of the position, diameter and number of the flow openings between the supplying distributor and the system of

  8. 3-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on symmetric multiprocessor computers - Part II: direct data-space inverse solution

    Science.gov (United States)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ~1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.

  9. Efficient two-level preconditioned conjugate gradient method on the GPU

    NARCIS (Netherlands)

    Gupta, R.; Van Gijzen, M.B.; Vuik, K.

    2011-01-01

    We present an implementation of Two-Level Preconditioned Conjugate Gradient Method for the GPU. We investigate a Truncated Neumann Series based preconditioner in combination with deflation and compare it with Block Incomplete Cholesky schemes. This combination exhibits fine-grain parallelism and
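One ingredient mentioned above, the truncated-Neumann-series preconditioner inside conjugate gradients, can be sketched serially as below. This is an illustrative assumption, not the authors' GPU implementation: deflation and Block Incomplete Cholesky are omitted, and only the Neumann part is shown. Its appeal on a GPU is that applying the preconditioner needs nothing but matrix-vector products and vector updates.

```python
import numpy as np

def pcg(A, b, Minv_apply, tol=1e-8, maxit=500):
    """Textbook preconditioned conjugate gradients for SPD A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv_apply(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it + 1
        z = Minv_apply(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

def neumann_preconditioner(A, terms=2):
    """Truncated Neumann series approximation of A^-1: with D = diag(A) and
    N = I - D^-1 A, one has A^-1 = (I + N + N^2 + ...) D^-1.  Each application
    uses only mat-vecs, hence the fine-grain parallelism noted in the record."""
    d = np.diag(A)
    def apply(r):
        z = r / d
        v = z.copy()
        for _ in range(terms):        # Horner form: v <- z + N v
            v = z + (v - (A @ v) / d)
        return v
    return apply
```

With an even number of Neumann terms the preconditioner stays symmetric positive definite, which PCG requires.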

  10. Testing a Quantum Heat Pump with a Two-Level Spin

    Directory of Open Access Journals (Sweden)

    Luis A. Correa

    2016-04-01

    Full Text Available Once in its non-equilibrium steady state, a nanoscale system coupled to several heat baths may be thought of as a “quantum heat pump”. Depending on the direction of its stationary heat flows, it may function as, e.g., a refrigerator or a heat transformer. These continuous heat devices can be arbitrarily complex multipartite systems, and yet, their working principle is always the same: they are made up of several elementary three-level stages operating in parallel. As a result, it is possible to devise external “black-box” testing strategies to learn about their functionality and performance regardless of any internal details. In particular, one such heat pump can be tested by coupling a two-level spin to one of its “contact transitions”. The steady state of this external probe contains information about the presence of heat leaks and internal dissipation in the device and, also, about the direction of its steady-state heat currents. Provided that the irreversibility of the heat pump is low, one can further estimate its coefficient of performance. These techniques may find applications in the emerging field of quantum thermal engineering, as they facilitate the diagnosis and design optimization of complex thermodynamic cycles.

  11. A Direct Kinematics Problem Solution for the Three-degree-of-freedom Parallel Structure Manipulator Based on Crank Mechanism

    Directory of Open Access Journals (Sweden)

    V. N. Paschenko

    2015-01-01

    The paper describes a mechanism of parallel kinematics with three degrees of freedom based on the crank mechanism. The mechanism consists of two platforms: a lower fixed platform and an upper movable one. The upper platform is connected to the lower one by six movable links, three of which are rods attached to the bases by spherical joints, while the other three have a crank structure. The paper presents an approach to solving the direct kinematics problem based on mathematical modeling. The problem is formulated as follows: given the rotation angles of the drives (the values of the generalized coordinates), determine the position of the upper movable platform. A mathematical model describing the proposed system was constructed, and on its basis the calculations were made that determine the position of the platform in space from the crank angles of the links connected to the engines. The method of virtual points was used to reduce the number of equations and unknowns determining the position of the upper platform from eighteen to nine, thus simplifying the solution. To check the correctness of the solution, a numerical experiment was carried out: each generalized coordinate took values in the range from -30° to 30°, the direct positional problem was solved for these values, and its result was used as initial data for the previously solved and verified inverse problem of the platform position. The paper presents a comparison of the measured and calculated values of the generalized coordinates and concludes that the model is in good agreement with the results observed in practice. One of the distinctive features of the proposed approach is that the rotation angles of the engines are used as the generalized coordinates. This allowed us…

  12. Start-up flow in a three-dimensional lid-driven cavity by means of a massively parallel direction splitting algorithm

    KAUST Repository

    Guermond, J. L.

    2011-05-04

    The purpose of this paper is to validate a new highly parallelizable direction splitting algorithm. The parallelization capabilities of this algorithm are illustrated by providing a highly accurate solution for the start-up flow in a three-dimensional impulsively started lid-driven cavity of aspect ratio 1×1×2 at Reynolds numbers 1000 and 5000. The computations are done in parallel (up to 1024 processors) on adapted grids of up to 2 billion nodes in three space dimensions. Velocity profiles are given at dimensionless times t=4, 8, and 12; at least four digits are expected to be correct at Re=1000. © 2011 John Wiley & Sons, Ltd.

  13. GPU-based, parallel-line, omni-directional integration of measured acceleration field to obtain the 3D pressure distribution

    Science.gov (United States)

    Wang, Jin; Zhang, Cao; Katz, Joseph

    2016-11-01

    A PIV based method to reconstruct the volumetric pressure field by direct integration of the 3D material acceleration directions has been developed. Extending the 2D virtual-boundary omni-directional method (Omni2D, Liu & Katz, 2013), the new 3D parallel-line omni-directional method (Omni3D) integrates the material acceleration along parallel lines aligned in multiple directions. Their angles are set by a spherical virtual grid. The integration is parallelized on a Tesla K40c GPU, which reduced the computing time from three hours to one minute for a single realization. To validate its performance, this method is utilized to calculate the 3D pressure fields in isotropic turbulence and channel flow using the JHU DNS Databases (http://turbulence.pha.jhu.edu). Both direct integration of the DNS acceleration and integration of acceleration derived from synthetic 3D particles are tested. Results are compared to other methods, e.g., the solution of the pressure Poisson equation (PPE; Ghaemi et al., 2012) with Bernoulli-based Dirichlet boundary conditions, and the Omni2D method. The error in Omni3D prediction is uniformly low, and its sensitivity to acceleration errors is local. It agrees with the PPE/Bernoulli prediction away from the Dirichlet boundary. The Omni3D method is also applied to experimental data obtained using tomographic PIV, and results are correlated with deformation of a compliant wall. ONR.
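
    The core idea of recovering pressure by integrating its gradient along lines and combining several directions can be illustrated in a drastically simplified form. The sketch below (ours, not the Omni3D algorithm) uses a 2-D analytic field, integrates along horizontal and vertical lines only, and assumes boundary pressures are known, whereas the actual method uses many line orientations on a spherical virtual grid and virtual boundaries.

```python
import math

# Simplified 2-D analogue of multi-direction pressure-gradient integration.
# The analytic field p = sin(x) + cos(y) and the known-boundary assumption
# are ours, purely for illustration.
n, L = 51, 1.0
h = L / (n - 1)
x = [i * h for i in range(n)]
p_exact = [[math.sin(xi) + math.cos(yj) for xi in x] for yj in x]
dpdx = [[math.cos(xi) for xi in x] for yj in x]
dpdy = [[-math.sin(yj) for xi in x] for yj in x]

# Trapezoidal integration of dp/dx along each row (anchored at the left
# boundary) and of dp/dy along each column (anchored at the bottom boundary).
p_row = [[0.0] * n for _ in range(n)]
p_col = [[0.0] * n for _ in range(n)]
for j in range(n):
    p_row[j][0] = p_exact[j][0]
    for i in range(1, n):
        p_row[j][i] = p_row[j][i - 1] + 0.5 * h * (dpdx[j][i - 1] + dpdx[j][i])
for i in range(n):
    p_col[0][i] = p_exact[0][i]
    for j in range(1, n):
        p_col[j][i] = p_col[j - 1][i] + 0.5 * h * (dpdy[j - 1][i] + dpdy[j][i])

# Average the directional estimates, as the omni-directional idea does over
# many line orientations to localize the effect of acceleration errors.
p_avg = [[0.5 * (p_row[j][i] + p_col[j][i]) for i in range(n)] for j in range(n)]
err = max(abs(p_avg[j][i] - p_exact[j][i]) for j in range(n) for i in range(n))
print(err)  # trapezoid-rule error, O(h^2)
```

Averaging many integration paths is what makes the error response to a local acceleration error local, in contrast to a Poisson solve where boundary errors propagate globally.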

  14. Static and kinetic friction coefficients of Scots pine (Pinus sylvestris L.), parallel and perpendicular to grain direction

    Directory of Open Access Journals (Sweden)

    Aira, J. R.

    2014-09-01

    In this study, the static (µe) and kinetic (µd) coefficients of friction were obtained for Pinus sylvestris L. sawn timber of Spanish origin. Friction between transverse surfaces sliding perpendicular to the grain (in the tangential direction) and between radial surfaces sliding parallel to the grain was analyzed. A specifically designed device was used for the tests, which makes it possible to apply contact pressure and to measure displacements and applied loads simultaneously. The coefficients of friction between transverse surfaces (µe = 0.24; µd = 0.17) were about twice the coefficients of friction between radial surfaces (µe = 0.12; µd = 0.08). Furthermore, these values lie within the range commonly reported for softwoods. The results are considered preliminary due to the small number of specimens.

  15. A two-level real-time vision machine combining coarse and fine grained parallelism

    DEFF Research Database (Denmark)

    Jensen, Lars Baunegaard With; Kjær-Nielsen, Anders; Pauwels, Karl

    2010-01-01

    In this paper, we describe a real-time vision machine having a stereo camera as input, generating visual information on two different levels of abstraction. The system provides visual low-level and mid-level information in terms of dense stereo and optical flow, egomotion, indicating areas … a factor of 90 and a reduction of latency by a factor of 26 compared to processing on a single CPU core. Since the vision machine provides generic visual information it can be used in many contexts. Currently it is used in a driver assistance context as well as in two robotic applications.

  16. Moderation analysis using a two-level regression model.

    Science.gov (United States)

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
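
    The two-level idea above (regression coefficients that are themselves regressed on a moderator) reduces, at the point estimate, to the familiar MMR product term. The simulation below (ours; all variable names and values are illustrative, and it uses plain least squares rather than the paper's NML estimator) shows the product-term coefficient recovering the level-2 moderation effect.

```python
import numpy as np

# Level-2 model: slope of y on x varies with moderator z, b1 = g0 + g1*z.
# Estimating the MMR model with a product term x*z recovers g1.
rng = np.random.default_rng(1)
n = 5000
x = rng.standard_normal(n)
z = rng.standard_normal(n)

g0, g1 = 0.5, 0.8                  # true level-2 coefficients
slope = g0 + g1 * z                # regression slope depends on the moderator
y = 1.0 + slope * x + 0.1 * rng.standard_normal(n)

# MMR design matrix: intercept, x, z, and the product term x*z
X = np.column_stack([np.ones(n), x, z, x * z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[3])  # estimate of the moderation effect g1, close to 0.8
```

The paper's point is that when the level-1 errors are heteroscedastic (which the two-level formulation induces naturally), NML estimation of the two-level model is more efficient than this plain LS fit; the sketch only illustrates the shared point estimate.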

  17. Novel encoding and updating of positional, or directional, spatial cues are processed by distinct hippocampal subfields: Evidence for parallel information processing and the "what" stream.

    Science.gov (United States)

    Hoang, Thu-Huong; Aliane, Verena; Manahan-Vaughan, Denise

    2018-05-01

    The specific roles of hippocampal subfields in spatial information processing and encoding are, as yet, unclear. The parallel map theory postulates that whereas the CA1 processes discrete environmental features (positional cues used to generate a "sketch map"), the dentate gyrus (DG) processes large navigation-relevant landmarks (directional cues used to generate a "bearing map"). Additionally, the two-streams hypothesis suggests that hippocampal subfields engage in differentiated processing of information from the "where" and the "what" streams. We investigated these hypotheses by analyzing the effect of exploration of discrete "positional" features and large "directional" spatial landmarks on hippocampal neuronal activity in rats. As an indicator of neuronal activity we measured the mRNA induction of the immediate early genes (IEGs), Arc and Homer1a. We observed an increase of this IEG mRNA in CA1 neurons of the distal neuronal compartment and in proximal CA3, after novel spatial exploration of discrete positional cues, whereas novel exploration of directional cues led to increases in IEG mRNA in the lower blade of the DG and in proximal CA3. Strikingly, the CA1 did not respond to directional cues and the DG did not respond to positional cues. Our data provide evidence for both the parallel map theory and the two-streams hypothesis and suggest a precise compartmentalization of the encoding and processing of "what" and "where" information occurs within the hippocampal subfields. © 2018 The Authors. Hippocampus Published by Wiley Periodicals, Inc.

  18. Renal magnetic resonance angiography at 3.0 Tesla using a 32-element phased-array coil system and parallel imaging in 2 directions.

    Science.gov (United States)

    Fenchel, Michael; Nael, Kambiz; Deshpande, Vibhas S; Finn, J Paul; Kramer, Ulrich; Miller, Stephan; Ruehm, Stefan; Laub, Gerhard

    2006-09-01

    The aim of the present study was to assess the feasibility of renal magnetic resonance angiography (MRA) at 3.0 T using a phased-array coil system with 32 coil elements. Specifically, high parallel imaging factors were used for increased spatial resolution and anatomic coverage of the whole abdomen. Signal-to-noise values and the g-factor distribution of the 32-element coil were examined in phantom studies for the MRA sequence. Eleven volunteers (6 men, median age of 30.0 years) were examined on a 3.0-T MR scanner (Magnetom Trio, Siemens Medical Solutions, Malvern, PA) using a 32-element phased-array coil (prototype from In vivo Corp.). Contrast-enhanced 3D-MRA (TR 2.95 milliseconds, TE 1.12 milliseconds, flip angle 25-30 degrees, bandwidth 650 Hz/pixel) was acquired with integrated generalized autocalibrating partially parallel acquisition (GRAPPA) in both the phase- and slice-encoding directions. Images were assessed by 2 independent observers with regard to image quality, noise, and presence of artifacts. Signal-to-noise levels of 22.2 +/- 22.0 and 57.9 +/- 49.0 were measured with (GRAPPA x6) and without parallel imaging, respectively. The mean g-factor of the 32-element coil for GRAPPA with an acceleration of 3 and 2 in the phase-encoding and slice-encoding directions, respectively, was 1.61. High image quality was found in 9 of 11 volunteers (2.6 +/- 0.8) with good overall interobserver agreement (κ = 0.87). Relatively low image quality with higher noise levels was encountered in 2 volunteers. MRA at 3.0 T using a 32-element phased-array coil is feasible in healthy volunteers. High diagnostic image quality and extended anatomic coverage can be achieved with the application of high parallel imaging factors.
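
    The SNR penalty of parallel imaging quoted above follows the textbook relation SNR_acc = SNR_full / (g · √R), where R is the total acceleration and g the coil geometry factor. The sketch below (ours, not from the paper) plugs in the paper's quoted full SNR, mean g-factor, and total acceleration of 3 × 2 = 6.

```python
import math

def accelerated_snr(snr_full, g, R):
    """Ideal parallel-imaging SNR: full SNR reduced by g * sqrt(R)."""
    return snr_full / (g * math.sqrt(R))

# Paper's numbers: SNR 57.9 without acceleration, mean g = 1.61, R = 6
print(round(accelerated_snr(57.9, 1.61, 6), 1))  # ideal prediction ~14.7
```

Note that the measured accelerated SNR (22.2) differs from this first-order estimate; the g-factor varies spatially and a single mean value need not predict the SNR measured in a particular region of interest.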

  19. Parallel direct numerical simulation of turbulent flows in rotor-stator cavities. Comparison with k-ε modeling

    Energy Technology Data Exchange (ETDEWEB)

    Jacques, R.; Le Quere, P.; Daube, O. [Centre National de la Recherche Scientifique (CNRS), 91 - Orsay (France)

    1997-12-31

    Turbulent flows between a fixed disc and a rotating disc are encountered in various applications like turbomachinery or the torque converters of automatic gear boxes. These flows are characterised by particular physical phenomena, mainly due to the effects of rotation (Coriolis and inertia forces), and thus classical k-ε-type modeling gives only approximate results. The aim of this work is to study these flows using direct numerical simulation in order to provide precise information about the statistical turbulent quantities and to improve the k-ε modeling in the industrial MATHILDA code, developed at ONERA and used by the SNECMA company (aerospace industry). The results presented are restricted to the comparison between results obtained with direct simulation and results obtained with the MATHILDA code in the same configuration.

  20. Parallel and convergent processing in grid cell, head-direction cell, boundary cell, and place cell networks.

    Science.gov (United States)

    Brandon, Mark P; Koenig, Julie; Leutgeb, Stefan

    2014-03-01

    The brain is able to construct internal representations that correspond to external spatial coordinates. Such brain maps of the external spatial topography may support a number of cognitive functions, including navigation and memory. The neuronal building blocks of brain maps are place cells, which are found throughout the hippocampus of rodents and, in a lower proportion, primates. Place cells typically fire in one or few restricted areas of space, and each area where a cell fires can range, along the dorsoventral axis of the hippocampus, from 30 cm to at least several meters. The sensory processing streams that give rise to hippocampal place cells are not fully understood, but substantial progress has been made in characterizing the entorhinal cortex, which is the gateway between neocortical areas and the hippocampus. Entorhinal neurons have diverse spatial firing characteristics, and the different entorhinal cell types converge in the hippocampus to give rise to a single, spatially modulated cell type, the place cell. We therefore suggest that parallel information processing in different classes of cells, as is typically observed at lower levels of sensory processing, continues up into higher level association cortices, including those that provide the inputs to hippocampus. WIREs Cogn Sci 2014, 5:207-219. doi: 10.1002/wcs.1272. © 2013 John Wiley & Sons, Ltd.

  1. Strong nonlinearity-induced correlations for counterpropagating photons scattering on a two-level emitter

    DEFF Research Database (Denmark)

    Nysteen, Anders; McCutcheon, Dara; Mørk, Jesper

    2015-01-01

    We analytically treat the scattering of two counterpropagating photons on a two-level emitter embedded in an optical waveguide. We find that the nonlinearity of the emitter can give rise to significant pulse-dependent directional correlations in the scattered photonic state, which could be quantified…

  2. The geometric phase in two-level atomic systems

    International Nuclear Information System (INIS)

    Tian Mingzhen; Barber, Zeb W.; Fischer, Joe A.; Randall Babbitt, Wm.

    2004-01-01

    We report the observation of the geometric phase in a closed two-level atomic system using stimulated photon echoes. The two-level system studied consists of two electronic energy levels (³H₄ and ³H₆) of Tm³⁺ doped in a YAG crystal. When a two-level atom in an arbitrary superposition state is excited by a pair of specially designed laser pulses, the excited-state component gains a relative phase with respect to the ground-state component. We identified this phase shift to be of purely geometric nature; the dynamic phase associated with the driving Hamiltonian is unchanged. The experimental results for the phase change agree with theory to within the measurement limit.

  3. Two-level convolution formula for nuclear structure function

    Science.gov (United States)

    Ma, Boqiang

    1990-05-01

    A two-level convolution formula for the nuclear structure function is derived by treating the nucleus as a composite system of baryons and mesons, which are in turn composite systems of quarks and gluons. The results show that the European Muon Collaboration (EMC) effect cannot be explained by nuclear effects such as nucleon Fermi motion and nuclear binding contributions.

  4. Two-level convolution formula for nuclear structure function

    International Nuclear Information System (INIS)

    Ma Boqiang

    1990-01-01

    A two-level convolution formula for the nuclear structure function is derived by treating the nucleus as a composite system of baryons and mesons, which are in turn composite systems of quarks and gluons. The results show that the European Muon Collaboration (EMC) effect cannot be explained by nuclear effects such as nucleon Fermi motion and nuclear binding contributions.

  5. Stationary states of two-level open quantum systems

    International Nuclear Information System (INIS)

    Gardas, Bartlomiej; Puchala, Zbigniew

    2011-01-01

    The problem of finding stationary states of open quantum systems is addressed. We focus our attention on a generic type of open system: a qubit coupled to its environment. We apply the theory of block operator matrices and find stationary states of two-level open quantum systems under certain conditions imposed on both the qubit and its surroundings.
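
    For a concrete qubit-plus-environment model, a stationary state can also be found numerically as the null vector of the Lindblad generator. The sketch below (ours, not the paper's block-operator-matrix method; the resonantly driven, damped qubit and all parameter values are illustrative) solves L(ρ) = 0 with the trace-one constraint.

```python
import numpy as np

# Stationary state of a driven, damped qubit from its Lindblad generator.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # decay |e> -> |g>

delta, omega, gamma = 0.0, 0.4, 1.0              # detuning, drive, decay rate
H = 0.5 * delta * sz + 0.5 * omega * sx

I2 = np.eye(2)
# Row-major vectorization: vec(A X B) = kron(A, B.T) vec(X)
def left(A):  return np.kron(A, I2)              # vec(A rho)
def right(B): return np.kron(I2, B.T)            # vec(rho B)

L = -1j * (left(H) - right(H))
L += gamma * (np.kron(sm, sm.conj())
              - 0.5 * (left(sm.conj().T @ sm) + right(sm.conj().T @ sm)))

# Solve L vec(rho) = 0 with the trace-one constraint appended as a last row
trace_row = np.eye(4)[[0, 3]].sum(axis=0)        # picks rho00 + rho11
A = np.vstack([L, trace_row])
b = np.zeros(5, dtype=complex); b[4] = 1.0
rho_vec, *_ = np.linalg.lstsq(A, b, rcond=None)
rho = rho_vec.reshape(2, 2)

print(rho[0, 0].real)  # stationary excited-state population
```

On resonance this reproduces the optical-Bloch result P_e = Ω²/(γ² + 2Ω²), a useful sanity check for any steady-state solver.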

  6. Evaluation of fault-normal/fault-parallel directions rotated ground motions for response history analysis of an instrumented six-story building

    Science.gov (United States)

    Kalkan, Erol; Kwong, Neal S.

    2012-01-01

    According to regulatory building codes in the United States (for example, the 2010 California Building Code), at least two horizontal ground-motion components are required for three-dimensional (3D) response history analysis (RHA) of buildings. For sites within 5 km of an active fault, these records should be rotated to the fault-normal/fault-parallel (FN/FP) directions, and two RHAs should be performed separately (first with the FN component and then with the FP component aligned with the transverse structural axis). It is assumed that this approach will lead to two sets of responses that envelope the range of possible responses over all nonredundant rotation angles. This assumption is examined here using a 3D computer model of a six-story reinforced-concrete instrumented building subjected to an ensemble of bidirectional near-fault ground motions. Peak responses of engineering demand parameters (EDPs) were obtained for rotation angles ranging from 0° through 180° for evaluating the FN/FP directions. It is demonstrated that rotating ground motions to the FN/FP directions (1) does not always lead to the maximum responses over all angles, (2) does not always envelope the range of possible responses, and (3) does not provide maximum responses for all EDPs simultaneously even if it provides a maximum response for a specific EDP.

  7. Memory Effects in the Two-Level Model for Glasses

    Science.gov (United States)

    Aquino, Gerardo; Allahverdyan, Armen; Nieuwenhuizen, Theo M.

    2008-07-01

    We study an ensemble of two-level systems interacting with a thermal bath. This is a well-known model for glasses. The origin of memory effects in this model is a quasistationary but nonequilibrium state of a single two-level system, which is realized due to a finite-rate cooling and slow thermally activated relaxation. We show that single-particle memory effects, such as negativity of the specific heat under reheating, vanish for a sufficiently disordered ensemble. In contrast, a disordered ensemble displays a collective memory effect (similar to the Kovacs effect), where nonequilibrium features of the ensemble are monitored via a macroscopic observable. An experimental realization of the effect can be used to further assess the consistency of the model.

  8. Franson Interference Generated by a Two-Level System

    Science.gov (United States)

    Peiris, M.; Konthasinghe, K.; Muller, A.

    2017-01-01

    We report a Franson interferometry experiment based on correlated photon pairs generated via frequency-filtered scattered light from a near-resonantly driven two-level semiconductor quantum dot. In contrast to spontaneous parametric down-conversion and four-wave mixing, this approach can produce single pairs of correlated photons. We have measured a Franson visibility as high as 66%, which goes beyond the classical limit of 50% and approaches the limit of violation of Bell's inequalities (70.7%).

  9. Mixing phases of unstable two-level systems

    International Nuclear Information System (INIS)

    Sokolov, V.V.; Brentano, P. von.

    1993-01-01

    An unstable two-level system decaying into an arbitrary number of channels is considered. It is shown that the mixing phases of the two overlapping resonances can be expressed in terms of their partial widths and one additional universal mixing parameter. Some applications to a doublet of 2⁺ resonances in ⁸Be and to the ρ–ω system are considered.

  10. Two-level systems driven by large-amplitude fields

    Science.gov (United States)

    Nori, F.; Ashhab, S.; Johansson, J. R.; Zagoskin, A. M.

    2009-03-01

    We analyze the dynamics of a two-level system subject to driving by large-amplitude external fields, focusing on the resonance properties in the case of driving around the region of avoided level crossing. In particular, we consider three main questions that characterize resonance dynamics: (1) the resonance condition, (2) the frequency of the resulting oscillations on resonance, and (3) the width of the resonance. We identify the regions of validity of different approximations. In a large region of the parameter space, we use a geometric picture in order to obtain both a simple understanding of the dynamics and quantitative results. The geometric approach is obtained by dividing the evolution into discrete time steps, with each time step described by either a phase shift on the basis states or a coherent mixing process corresponding to a Landau-Zener crossing. We compare the results of the geometric picture with those of a rotating wave approximation. We also comment briefly on the prospects of employing strong driving as a useful tool to manipulate two-level systems. S. Ashhab, J.R. Johansson, A.M. Zagoskin, F. Nori, Two-level systems driven by large-amplitude fields, Phys. Rev. A 75, 063414 (2007). S. Ashhab et al, unpublished.
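
    The resonance picture above can be checked numerically. The sketch below (ours, not from the paper; all parameter values are illustrative) integrates the Schrödinger equation for a sinusoidally driven two-level system without invoking the rotating wave approximation, and confirms the weak-driving prediction of a full Rabi flip at t = π/A. The strong-driving regime discussed in the abstract is precisely where this simple picture breaks down.

```python
import math

# Weak resonant driving of a two-level system, integrated without the RWA.
delta, A = 1.0, 0.05        # level splitting and (weak) drive amplitude

def deriv(t, psi):
    """i d(psi)/dt = H(t) psi with H = (delta/2) sz + A cos(delta*t) sx."""
    c, e = psi
    d = A * math.cos(delta * t)
    return (-1j * (0.5 * delta * c + d * e),
            -1j * (d * c - 0.5 * delta * e))

psi = (1.0 + 0j, 0.0 + 0j)   # start in the upper state
t, dt, t_end = 0.0, 0.005, math.pi / A   # RWA predicts full flip at pi/A
while t < t_end - 1e-12:
    # Classic fourth-order Runge-Kutta step
    k1 = deriv(t, psi)
    k2 = deriv(t + dt/2, tuple(p + dt/2*k for p, k in zip(psi, k1)))
    k3 = deriv(t + dt/2, tuple(p + dt/2*k for p, k in zip(psi, k2)))
    k4 = deriv(t + dt, tuple(p + dt*k for p, k in zip(psi, k3)))
    psi = tuple(p + dt/6*(a + 2*b + 2*c_ + d_)
                for p, a, b, c_, d_ in zip(psi, k1, k2, k3, k4))
    t += dt

p_flip = abs(psi[1])**2      # population transferred to the lower state
print(p_flip)                # close to 1: a full Rabi flip
```

For A comparable to delta, the counter-rotating terms retained here produce the Bloch-Siegert shift and the deviations from the RWA that the geometric, Landau-Zener-step picture in the abstract is designed to capture.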

  11. Performance of a Two-Level Call Admission Control Scheme for DS-CDMA Wireless Networks

    Directory of Open Access Journals (Sweden)

    Fapojuwo Abraham O

    2007-01-01

    We propose a two-level call admission control (CAC) scheme for direct sequence code division multiple access (DS-CDMA) wireless networks supporting multimedia traffic and evaluate its performance. The first-level admission control assigns higher priority to real-time calls (also referred to as class 0 calls) in gaining access to the system resources. The second level admits non-real-time calls (or class 1 calls) based on the resources remaining after meeting the resource needs of real-time calls. However, to ensure some minimum level of performance for non-real-time calls, the scheme reserves some resources for such calls. The proposed two-level CAC scheme utilizes the delay-tolerant characteristic of non-real-time calls by incorporating a queue to temporarily store those that cannot be assigned resources at the time of initial access. We analyze and evaluate the call blocking, outage probability, throughput, and average queuing delay performance of the proposed two-level CAC scheme using Markov chain theory. The analytic results are validated by simulation results. The numerical results show that the proposed two-level CAC scheme provides better performance than the single-level CAC scheme. Based on these results, it is concluded that the proposed two-level CAC scheme serves as a good solution for supporting multimedia applications in DS-CDMA wireless communication systems.

  12. Performance of a Two-Level Call Admission Control Scheme for DS-CDMA Wireless Networks

    Directory of Open Access Journals (Sweden)

    Abraham O. Fapojuwo

    2007-11-01

    We propose a two-level call admission control (CAC) scheme for direct sequence code division multiple access (DS-CDMA) wireless networks supporting multimedia traffic and evaluate its performance. The first-level admission control assigns higher priority to real-time calls (also referred to as class 0 calls) in gaining access to the system resources. The second level admits non-real-time calls (or class 1 calls) based on the resources remaining after meeting the resource needs of real-time calls. However, to ensure some minimum level of performance for non-real-time calls, the scheme reserves some resources for such calls. The proposed two-level CAC scheme utilizes the delay-tolerant characteristic of non-real-time calls by incorporating a queue to temporarily store those that cannot be assigned resources at the time of initial access. We analyze and evaluate the call blocking, outage probability, throughput, and average queuing delay performance of the proposed two-level CAC scheme using Markov chain theory. The analytic results are validated by simulation results. The numerical results show that the proposed two-level CAC scheme provides better performance than the single-level CAC scheme. Based on these results, it is concluded that the proposed two-level CAC scheme serves as a good solution for supporting multimedia applications in DS-CDMA wireless communication systems.

  13. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  14. Perturbation Theory for Open Two-Level Nonlinear Quantum Systems

    International Nuclear Information System (INIS)

    Zhang Zhijie; Jiang Dongguang; Wang Wei

    2011-01-01

    Perturbation theory is an important tool in quantum mechanics. In this paper, we extend the traditional perturbation theory to open nonlinear two-level systems, treating the decoherence parameter γ as a perturbation. By this virtue, we give a perturbative solution to the master equation which describes a nonlinear open quantum system. The results show that for a small decoherence rate γ, the ratio of the nonlinear rate C to the tunneling coefficient V (i.e., r = C/V) determines the validity of the perturbation theory. For a small ratio r, the perturbation theory is valid; otherwise it yields wrong results.

  15. Modal intersection types, two-level languages, and staged synthesis

    DEFF Research Database (Denmark)

    Henglein, Fritz; Rehof, Jakob

    2016-01-01

    A typed λ-calculus, λ∩ ⎕, is introduced, combining intersection types and modal types. We develop the metatheory of λ∩ ⎕, with particular emphasis on the theory of subtyping and distributivity of the modal and intersection type operators. We describe how a stratification of λ∩ ⎕ leads to a multi-linguistic framework for staged program synthesis, where metaprograms are automatically synthesized which, when executed, generate code in a target language. We survey the basic theory of staged synthesis and illustrate by example how a two-level language theory specialized from λ∩ ⎕ can be used to understand the process of staged synthesis.

  16. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  17. Operation and Control of a Direct-Driven PMSG-Based Wind Turbine System with an Auxiliary Parallel Grid-Side Converter

    Directory of Open Access Journals (Sweden)

    Jiawei Chu

    2013-07-01

    In this paper, based on the similarity, in structure and principle, between a grid-connected converter for a direct-driven permanent magnet synchronous generator (D-PMSG) and an active power filter (APF), a new D-PMSG-based wind turbine (WT) system configuration that includes not only an auxiliary converter in parallel with the grid-side converter, but also a coordinated control strategy, is proposed to enhance the low voltage ride through (LVRT) capability and improve power quality. During normal operation, the main grid-side converter maintains the DC-link voltage constant, whereas the auxiliary grid-side converter functions as an APF with harmonic suppression and reactive power compensation to improve the power quality. During grid faults, a hierarchical coordinated control scheme for the generator-side converter, main grid-side converter and auxiliary grid-side converter, depending on the grid voltage sags, is presented to enhance the LVRT capability of the direct-driven PMSG WT. The feasibility and the effectiveness of the proposed system’s topology and hierarchical coordinated control strategy were verified using MATLAB/Simulink simulations.

  18. Two-level schemes for the advection equation

    Science.gov (United States)

    Vabishchevich, Petr N.

    2018-06-01

    The advection equation is the basis for mathematical models of continuum mechanics. In the approximate solution of nonstationary problems it is necessary to inherit main properties of the conservatism and monotonicity of the solution. In this paper, the advection equation is written in the symmetric form, where the advection operator is the half-sum of advection operators in conservative (divergent) and non-conservative (characteristic) forms. The advection operator is skew-symmetric. Standard finite element approximations in space are used. The standard explicit two-level scheme for the advection equation is absolutely unstable. New conditionally stable regularized schemes are constructed, on the basis of the general theory of stability (well-posedness) of operator-difference schemes, the stability conditions of the explicit Lax-Wendroff scheme are established. Unconditionally stable and conservative schemes are implicit schemes of the second (Crank-Nicolson scheme) and fourth order. The conditionally stable implicit Lax-Wendroff scheme is constructed. The accuracy of the investigated explicit and implicit two-level schemes for an approximate solution of the advection equation is illustrated by the numerical results of a model two-dimensional problem.
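
To make the stability discussion concrete, here is a minimal sketch (my own Python/NumPy illustration, not the paper's finite-element code) of the classical explicit Lax-Wendroff scheme for the 1D advection equation u_t + a*u_x = 0 on a periodic grid; it is second-order accurate and conditionally stable under the CFL restriction |a|*dt/dx <= 1.

```python
import numpy as np

def lax_wendroff_step(u, a, dt, dx):
    """One explicit Lax-Wendroff step on a periodic grid."""
    c = a * dt / dx                      # Courant number
    up = np.roll(u, -1)                  # u_{j+1} (periodic wrap)
    um = np.roll(u, 1)                   # u_{j-1}
    return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

# advect a smooth profile once around the periodic domain [0, 1)
n, a = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx / a                        # Courant number 0.5 < 1: stable
u = np.sin(2.0 * np.pi * x)
for _ in range(int(round(1.0 / (a * dt)))):
    u = lax_wendroff_step(u, a, dt, dx)
# after one full period the profile should nearly coincide with the initial one
err = np.max(np.abs(u - np.sin(2.0 * np.pi * x)))
```

Raising the Courant number above 1 makes this scheme blow up, illustrating the conditional stability analyzed in the abstract.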

  19. Two-level systems driven by large-amplitude fields

    International Nuclear Information System (INIS)

    Ashhab, S.; Johansson, J. R.; Zagoskin, A. M.; Nori, Franco

    2007-01-01

    We analyze the dynamics of a two-level system subject to driving by large-amplitude external fields, focusing on the resonance properties in the case of driving around the region of avoided level crossing. In particular, we consider three main questions that characterize resonance dynamics: (1) the resonance condition, (2) the frequency of the resulting oscillations on resonance, and (3) the width of the resonance. We identify the regions of validity of different approximations. In a large region of the parameter space, we use a geometric picture in order to obtain both a simple understanding of the dynamics and quantitative results. The geometric approach is obtained by dividing the evolution into discrete time steps, with each time step described by either a phase shift on the basis states or a coherent mixing process corresponding to a Landau-Zener crossing. We compare the results of the geometric picture with those of a rotating wave approximation. We also comment briefly on the prospects of employing strong driving as a useful tool to manipulate two-level systems.
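
The coherent mixing step of the geometric picture described above is a Landau-Zener crossing. A hedged sketch follows (the sweep form H(t) = 0.5*[[v*t, D], [D, -v*t]] with hbar = 1 and the parameter values are my own toy choices, not the paper's): the standard Landau-Zener formula P = exp(-pi*D**2/(2*v)) for the probability of remaining in the initial diabatic state, checked against brute-force time evolution.

```python
import numpy as np

def lz_numeric(D, v, T=50.0, dt=0.005):
    """Evolve |psi> through an avoided crossing and return the
    probability of staying in the initial diabatic state |0>."""
    psi = np.array([1.0 + 0j, 0.0])
    for t in np.arange(-T, T, dt):
        tm = t + 0.5 * dt  # midpoint rule for the time-dependent H
        H = 0.5 * np.array([[v * tm, D], [D, -v * tm]], dtype=complex)
        # exact exponential step exp(-i H dt) via eigen-decomposition
        w, U = np.linalg.eigh(H)
        psi = U @ (np.exp(-1j * w * dt) * (U.conj().T @ psi))
    return abs(psi[0]) ** 2

D, v = 0.5, 1.0
p_formula = np.exp(-np.pi * D**2 / (2.0 * v))   # Landau-Zener prediction
p_num = lz_numeric(D, v)
```

The agreement between `p_num` and `p_formula` is what justifies treating each crossing as a single discrete mixing step in the geometric picture.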

  20. Atomistic study of two-level systems in amorphous silica

    Science.gov (United States)

    Damart, T.; Rodney, D.

    2018-01-01

    Internal friction is analyzed in an atomic-scale model of amorphous silica. The potential energy landscape of more than 100 glasses is explored to identify a sample of about 700 two-level systems (TLSs). We discuss the properties of TLSs, particularly their energy asymmetry and barrier as well as their deformation potential, computed as longitudinal and transverse averages of the full deformation potential tensors. The discrete sampling is used to predict dissipation in the classical regime. Comparison with experimental data shows a better agreement with poorly relaxed thin films than well relaxed vitreous silica, as expected from the large quench rates used to produce numerical glasses. The TLSs are categorized in three types that are shown to affect dissipation in different temperature ranges. The sampling is also used to discuss critically the usual approximations employed in the literature to represent the statistical properties of TLSs.

  1. Two-level modelling of real estate taxation

    DEFF Research Database (Denmark)

    Gall, Jaroslav; Stubkjær, Erik

    2006-01-01

    Real estate taxes recurrently attract attention, because they are a source of potentially increased revenue for local and national government. Most experts agree that it is necessary to switch from using normative values for taxation to a market-value-based taxation of real property with computer-assisted mass valuation, which benefits from the use of value maps. In the Czech Republic, efforts have been made to adopt current tax policy goals, but improvements are still needed. The paper aims at supporting the current improvement process towards a market-based system. It presents models which describe aspects of the present Czech property tax system. A proposal for the future system focuses on the value map component. The described change depends on political involvement, and this political activity is modelled as well. The hypothesis is that the two-level modelling effort enhances the change process by providing...

  2. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
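
One of the four methods named above, the FFT-based spectral algorithm, can be sketched for a 1D periodic Poisson problem (a minimal illustration of mine, not code from the paper): in Fourier space the periodic Laplacian is diagonal, so the solve reduces to a pointwise division, which is why the method is naturally data-parallel.

```python
import numpy as np

# solve u'' = f on a periodic grid by diagonalizing the Laplacian with the FFT
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(3.0 * x)                               # right-hand side, zero mean
k = np.fft.fftfreq(n, d=2.0 * np.pi / n) * 2.0 * np.pi  # integer wavenumbers
fh = np.fft.fft(f)
denom = -(k**2)                                   # symbol of d^2/dx^2
denom[0] = 1.0                                    # avoid division by zero
uh = fh / denom                                   # the "solve": pointwise division
uh[0] = 0.0                                       # fix the free additive constant
u = np.real(np.fft.ifft(uh))
# exact solution of u'' = sin(3x) with zero mean is -sin(3x)/9
```

Every frequency bin is handled independently, so on a parallel machine the division step needs no communication at all; only the FFT butterflies do.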

  3. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  4. Poiseuille, thermal transpiration and Couette flows of a rarefied gas between plane parallel walls with nonuniform surface properties in the transverse direction and their reciprocity relations

    Science.gov (United States)

    Doi, Toshiyuki

    2018-04-01

    Slow flows of a rarefied gas between two plane parallel walls with nonuniform surface properties are studied based on kinetic theory. It is assumed that one wall is a diffuse reflection boundary and the other wall is a Maxwell-type boundary whose accommodation coefficient varies periodically in the direction perpendicular to the flow. The time-independent Poiseuille, thermal transpiration and Couette flows are considered. The flow behavior is numerically studied based on the linearized Bhatnagar-Gross-Krook-Welander model of the Boltzmann equation. The flow field, the mass and heat flow rates in the gas, and the tangential force acting on the wall surface are studied over a wide range of the gas rarefaction degree and the parameters characterizing the distribution of the accommodation coefficient. The locally convex velocity distribution is observed in Couette flow of a highly rarefied gas, similarly to Poiseuille flow and thermal transpiration. The reciprocity relations are numerically confirmed over a wide range of the flow parameters.

  5. Processor farming in two-level analysis of historical bridge

    Science.gov (United States)

    Krejčí, T.; Kruis, J.; Koudelka, T.; Šejnoha, M.

    2017-11-01

    This contribution presents a processor farming method in connection with a multi-scale analysis. In this method, each macro-scopic integration point or each finite element is connected with a certain meso-scopic problem represented by an appropriate representative volume element (RVE). The solution of a meso-scale problem then provides the effective parameters needed on the macro-scale. Such an analysis is suitable for parallel computing because the meso-scale problems can be distributed among many processors. The application of the processor farming method to a real-world masonry structure is illustrated by an analysis of Charles Bridge in Prague. The three-dimensional numerical model simulates the coupled heat and moisture transfer of one half of arch No. 3, and it is part of a complex hygro-thermo-mechanical analysis which has been developed to determine the influence of climatic loading on the current state of the bridge.

  6. A Game Analysis of Optimal Advertising Efforts and Direct Price Discount Strategy for the Two-level Supply Chain

    Institute of Scientific and Technical Information of China (English)

    何丽红; 廖茜; 刘蒙蒙; 苑春

    2017-01-01

    In a two-level supply chain in which demand is jointly affected by the advertising effort level and a direct price discount offered by the manufacturer to consumers, models of the chain members' cooperative advertising effort and price discount strategies are established. By comparing a Nash equilibrium model, a manufacturer-led Stackelberg game, a retailer-led Stackelberg game and a cooperative game, the optimal advertising effort levels of the manufacturer and the retailer and the optimal price discount offered by the manufacturer to consumers are obtained for each case. The results show that the manufacturer offers consumers a price discount only when the price elasticity of the product reaches a certain level; the larger the price elasticity, the larger the direct price discount the manufacturer can offer, and consumers obtain the most favourable price under the cooperative game. When the manufacturer offers the optimal direct price discount, the advertising incentives of both the manufacturer and the retailer are positively correlated with price elasticity. In addition, the advertising costs of the manufacturer and the retailer are proportionally related, so either party can estimate the other's advertising cost from its own investment. Finally, a Pareto improvement is used to fully coordinate the maximum profit of the supply chain system under the cooperative game, achieving a "three-win" outcome for the two supply chain members and consumers. These conclusions provide guidance for choosing the cooperation mode of supply chain participants and for setting optimal advertising effort levels and direct price discount strategies.

  7. Two-level tunneling systems in amorphous alumina

    Science.gov (United States)

    Lebedeva, Irina V.; Paz, Alejandro P.; Tokatly, Ilya V.; Rubio, Angel

    2014-03-01

    The decades of research on thermal properties of amorphous solids at temperatures below 1 K suggest that their anomalous behaviour can be related to quantum mechanical tunneling of atoms between two nearly equivalent states that can be described as a two-level system (TLS). This theory is also supported by recent studies on microwave spectroscopy of superconducting qubits. However, the microscopic nature of the TLS remains unknown. To identify structural motifs for TLSs in amorphous alumina we have performed extensive classical molecular dynamics simulations. Several bistable motifs, with only one or two atoms jumping by a considerable distance of ~0.5 Å, were found at T = 25 K. Accounting for relaxation of the surrounding environment was shown to be important up to distances of ~7 Å. The energy asymmetry and barrier for the detected motifs lay in the ranges 0.5-2 meV and 4-15 meV, respectively, while their density was about 1 motif per 10,000 atoms. Tuning of the motif asymmetry by strain was demonstrated, with a coupling coefficient below 1 eV. The tunnel splitting for the symmetrized motifs was estimated to be on the order of 0.1 meV. The discovered motifs are in good agreement with the available experimental data. The financial support from the Marie Curie Fellowship PIIF-GA-2012-326435 (RespSpatDisp) is gratefully acknowledged.
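
The quoted asymmetry and tunnel splitting map onto the standard TLS Hamiltonian. A small numerical check (the numbers are taken from the ranges quoted in the abstract; the Hamiltonian form is the textbook TLS model, not the authors' simulation): for asymmetry eps and tunnel matrix element delta0, the level splitting is E = sqrt(eps**2 + delta0**2).

```python
import numpy as np

eps, delta0 = 1.0, 0.1            # meV; inside the quoted 0.5-2 meV and ~0.1 meV ranges
# standard two-level Hamiltonian in the local-well basis
H = 0.5 * np.array([[eps, delta0],
                    [delta0, -eps]])
w = np.linalg.eigvalsh(H)         # eigenvalues in ascending order
E = w[1] - w[0]                   # splitting between the two levels
expected = np.hypot(eps, delta0)  # analytic result sqrt(eps^2 + delta0^2)
```

For eps much larger than delta0, as in these motifs, the splitting is dominated by the asymmetry, which is why strain tuning of eps directly shifts the TLS frequency.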

  8. On Two-Level State-Dependent Routing Polling Systems with Mixed Service

    Directory of Open Access Journals (Sweden)

    Guan Zheng

    2015-01-01

    Full Text Available Based on priority differentiation and efficiency of the system, we consider a single-server two-level polling system of N+1 queues, consisting of one key queue and N normal queues. The novel contribution of the present paper is that the server polls only active queues, i.e. queues with customers waiting. Furthermore, the key queue is served with exhaustive service and the normal queues are served with 1-limited service in a parallel scheduling. For this model, we derive an expression for the probability generating function of the joint queue length distribution at polling epochs. Based on these results, we derive explicit closed-form expressions for the mean waiting time. Numerical examples demonstrate that theoretical and simulation results are identical and that the new system is efficient both at the key queue and at the normal queues.
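
The service discipline can be sketched as a single polling cycle (my own minimal model, following the rules stated in the abstract: poll only active queues, exhaustive service at the key queue, 1-limited service at the normal queues).

```python
from collections import deque

def polling_cycle(key, normals):
    """One server cycle: exhaustive at the key queue, 1-limited at normals,
    skipping queues that are empty (only "active" queues are polled)."""
    served = []
    while key:                               # exhaustive: empty the key queue
        served.append(("key", key.popleft()))
    for i, q in enumerate(normals):
        if q:                                # poll only active queues
            served.append((f"normal{i}", q.popleft()))  # 1-limited: one customer
    return served

key = deque(["k1", "k2"])
normals = [deque(["a1", "a2"]), deque([]), deque(["c1"])]
order = polling_cycle(key, normals)
```

After the cycle the key queue is empty while a normal queue may still hold customers, which is exactly the priority differentiation the model is designed to capture.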

  9. Feed-forward volume rendering algorithm for moderately parallel MIMD machines

    Science.gov (United States)

    Yagel, Roni

    1993-01-01

    Algorithms for direct volume rendering on parallel and vector processors are investigated. Volumes are transformed efficiently on parallel processors by dividing the data into slices and beams of voxels. Equal sized sets of slices along one axis are distributed to processors. Parallelism is achieved at two levels. Because each slice can be transformed independently of others, processors transform their assigned slices with no communication, thus providing maximum possible parallelism at the first level. Within each slice, consecutive beams are incrementally transformed using coherency in the transformation computation. Also, coherency across slices can be exploited to further enhance performance. This coherency yields the second level of parallelism through the use of the vector processing or pipelining. Other ongoing efforts include investigations into image reconstruction techniques, load balancing strategies, and improving performance.

  10. Pixel detector readout electronics with two-level discriminator scheme

    International Nuclear Information System (INIS)

    Pengg, F.

    1998-01-01

    In preparation for a silicon pixel detector with more than 3,000 readout channels per chip for operation at the future Large Hadron Collider (LHC) at CERN, the analog front end of the readout electronics has been designed and measured on several test arrays with 16 by 4 cells. They are implemented in the HP 0.8 μm process but compatible with the design rules of the radiation-hard Honeywell 0.8 μm bulk process. Each cell contains a bump bonding pad, preamplifier, discriminator and control logic for masking and testing within a layout area of only 50 μm by 140 μm. A new two-level discriminator scheme has been implemented to cope with the problems of time-walk and interpixel cross-coupling. The measured gain of the preamplifier is 900 mV for a minimum ionizing particle (MIP, about 24,000 e⁻ for a 300 μm thick Si detector) with a return to baseline within 750 ns for a 1 MIP input signal. The full readout chain (without detector) shows an equivalent noise charge of 60 e⁻ r.m.s. The time-walk, a function of the separation between the two threshold levels, is measured to be 22 ns at a separation of 1,500 e⁻, which is adequate for the 40 MHz beam-crossing frequency at the LHC. The interpixel cross-coupling, measured with a 40 fF coupling capacitance, is less than 3%. A single cell consumes 35 μW at 3.5 V supply voltage.
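
Why a two-level scheme helps with time-walk can be seen with a toy model (my own illustration, not the chip's actual circuit): approximate the preamplifier output as a linear ramp, so a comparator fires later for smaller pulses; timestamping on a lower threshold shrinks the timing spread between small and large hits, while the higher threshold still validates the hit.

```python
def crossing_time(amplitude, threshold, t_peak=25.0):
    """Time (ns) at which a linear ramp v(t) = amplitude * t / t_peak
    crosses the comparator threshold (amplitudes in electrons)."""
    return t_peak * threshold / amplitude

amplitudes = [5000.0, 24000.0]           # a small hit vs roughly 1 MIP
high_thr, low_thr = 3000.0, 1500.0       # validation and timing thresholds

# single-threshold timing: spread between small and large hits
walk_single = crossing_time(amplitudes[0], high_thr) - crossing_time(amplitudes[1], high_thr)
# two-level scheme: timestamp on the lower threshold instead
walk_two_level = crossing_time(amplitudes[0], low_thr) - crossing_time(amplitudes[1], low_thr)
```

In this toy the time-walk scales linearly with the timing threshold, so halving the threshold halves the walk; the real front end behaves less ideally, but the trend is the same.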

  11. Ingestive Behaviour of Grazing Ewes Given Two Levels of Concentrate

    African Journals Online (AJOL)

    It was expected that concentrate supplementation would reflect directly on forage intake owing to the substitution effect, which causes sheep where the supplement supplied a small proportion of net energy requirement, to have a greater grazing intensity. The two breeds differed in the time spent ruminating or lying, with the ...

  12. CONFOUNDING STRUCTURE OF TWO-LEVEL NONREGULAR FACTORIAL DESIGNS

    Institute of Scientific and Technical Information of China (English)

    Ren Junbai

    2012-01-01

    In design theory, the alias structure of regular fractional factorial designs is elegantly described with group theory. However, this approach cannot be applied to nonregular designs directly. For an arbitrary nonregular design, a natural question is how to describe the confounding relations between its effects: is there any inner structure similar to that of regular designs? The aim of this article is to answer this basic question. Using coefficients of the indicator function, the confounding structure of nonregular fractional factorial designs is obtained as linear constraints on the values of effects. A method to estimate the sparse significant effects in an arbitrary nonregular design is given through an example.

  13. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    Full Text Available The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that have been ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples to illustrate some parallel programming approaches using the Force are also presented.

  14. Aspects of two-level systems under external time-dependent fields

    Energy Technology Data Exchange (ETDEWEB)

    Bagrov, V.G.; Wreszinski, W.F. [Tomsk State University and Tomsk Institute of High Current Electronics (Russian Federation); Barata, J.C.A.; Gitman D.M. [Universidade de Sao Paulo, Instituto de Fisica (Brazil)]. E-mails: jbarata@fma.if.usp.br; gitman@fma.if.usp.br

    2001-12-14

    The dynamics of two-level systems in time-dependent backgrounds is under consideration. We present some new exact solutions in special backgrounds decaying in time. On the other hand, following ideas of Feynman et al, we discuss in detail the possibility of reducing the quantum dynamics to a classical Hamiltonian system. This, in particular, opens the possibility of directly applying powerful methods of classical mechanics (e.g. KAM methods) to study the quantum system. Following such an approach, we draw conclusions of relevance for 'quantum chaos' when the external background is periodic or quasi-periodic in time. (author)

  15. Three-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements, direct solvers and data space Gauss-Newton, parallelized on SMP computers

    Science.gov (United States)

    Kordy, M. A.; Wannamaker, P. E.; Maris, V.; Cherkaev, E.; Hill, G. J.

    2014-12-01

    We have developed an algorithm for 3D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permits incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used for the forward solution, parameter jacobians, and model update. The forward simulator, jacobian calculations, as well as synthetic and real data inversion are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequency or small material admittivity, the E-field requires divergence correction. Using Hodge decomposition, the correction may be applied after the forward solution is calculated, allowing accurate E-field solutions in dielectric air. The system matrix factorization is computed using the MUMPS library, which shows moderately good scalability through 12 processor cores but limited gains beyond that. The factored matrix is used to calculate the forward response as well as the jacobians of field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates the accuracy of our forward calculations. We consider a popular conductive/resistive double-brick structure and several topographic models. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of electromagnetic waves normal to the slopes at high frequencies. Run-time tests indicate that for meshes as large as 150x150x60 elements, the MT forward response and jacobians can be calculated in ~2.5 hours per frequency. For inversion, we implemented a data-space Gauss-Newton method, which offers a reduction in memory requirement and a significant speedup of the parameter step versus the model-space approach. For dense matrix operations we use the tiling approach of the PLASMA library, which shows very good scalability. In synthetic

  16. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    Work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to current basic and applied problems of nuclear and particle physics. For the applications utilizing the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking in the range of low energies down to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other works in this field refer to simulations for nuclear medicine applications such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness) and methods for dosimetric calculations. These calculations are particularly suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other works mentioned in the same field refer to simulation of electron channelling in crystals and simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment

  17. Protecting quantum coherence of two-level atoms from vacuum fluctuations of electromagnetic field

    International Nuclear Information System (INIS)

    Liu, Xiaobao; Tian, Zehua; Wang, Jieci; Jing, Jiliang

    2016-01-01

    In the framework of open quantum systems, we study the dynamics of a static polarizable two-level atom interacting with a bath of fluctuating vacuum electromagnetic field and explore under which conditions the coherence of the open quantum system is unaffected by the environment. For both single-qubit and two-qubit systems, we find that the quantum coherence cannot be protected from noise when the atom interacts with the electromagnetic field in unbounded space. However, in the presence of a boundary, the dynamical conditions for the insusceptibility of the quantum coherence are fulfilled only when the atom is close to the boundary and is transversely polarizable. Otherwise, the quantum coherence can only be protected to some degree in the other polarizable directions. -- Highlights: •We study the dynamics of a two-level atom interacting with a bath of fluctuating vacuum electromagnetic field. •For both single- and two-qubit systems, the quantum coherence cannot be protected from noise without a boundary. •The conditions for insusceptibility of the quantum coherence are fulfilled only when the atom is close to the boundary and is transversely polarizable. •Otherwise, the quantum coherence can only be protected to some degree in the other polarizable directions.

  18. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product-unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  19. Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner

    International Nuclear Information System (INIS)

    Subber, Waad; Sarkar, Abhijit

    2012-01-01

    For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates the information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustrations, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.
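
The role of the coarse problem can be illustrated on a deterministic toy (my own sketch for the 1D Poisson matrix, not the paper's stochastic solver): a two-level additive preconditioner combining block "subdomain" solves with a coarse-grid correction, used inside plain preconditioned CG. The coarse solve propagates global information across the subdomains, which is what keeps the iteration count from growing with the number of subdomains.

```python
import numpy as np

def poisson(n):
    # 1D Dirichlet Poisson matrix, tridiag(-1, 2, -1)
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def two_level_prec(A, n_sub, n_coarse):
    n = A.shape[0]
    blocks = np.array_split(np.arange(n), n_sub)
    inv_blocks = [np.linalg.inv(A[np.ix_(b, b)]) for b in blocks]
    # coarse space: piecewise-constant aggregates (playing the role of the
    # corner-node coarse problem in the abstract)
    P = np.zeros((n, n_coarse))
    for j, agg in enumerate(np.array_split(np.arange(n), n_coarse)):
        P[agg, j] = 1.0
    Ac_inv = np.linalg.inv(P.T @ A @ P)

    def apply(r):
        z = np.zeros_like(r)
        for b, Binv in zip(blocks, inv_blocks):   # additive subdomain solves
            z[b] += Binv @ r[b]
        z += P @ (Ac_inv @ (P.T @ r))             # coarse correction
        return z
    return apply

def pcg(A, b, M, tol=1e-8, maxit=400):
    x = np.zeros_like(b)
    r = b.copy()
    z = M(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

n = 256
A = poisson(n)
b = np.ones(n)
x, iters = pcg(A, b, two_level_prec(A, n_sub=16, n_coarse=16))
```

Dropping the coarse-correction line makes the iteration count climb sharply as `n_sub` grows, mirroring why the coarse problem is essential for scalability.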

  20. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  1. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  2. A Weighted Two-Level Bregman Method with Dictionary Updating for Nonconvex MR Image Reconstruction

    Directory of Open Access Journals (Sweden)

    Qiegen Liu

    2014-01-01

    Full Text Available Nonconvex optimization has shown that it needs substantially fewer measurements than l1 minimization for exact recovery under a fixed transform/overcomplete dictionary. In this work, two efficient numerical algorithms, unified under the name weighted two-level Bregman method with dictionary updating (WTBMDU), are proposed for solving lp optimization under the dictionary learning model while subjecting the fidelity to the partial measurements. By incorporating the iteratively reweighted norm into the two-level Bregman iteration method with dictionary updating scheme (TBMDU), the modified alternating direction method (ADM) solves the model of pursuing the approximated lp-norm penalty efficiently. Specifically, the algorithms converge after a relatively small number of iterations, under the formulation of iteratively reweighted l1 and l2 minimization. Experimental results on MR image simulations and real MR data, under a variety of sampling trajectories and acceleration factors, consistently demonstrate that the proposed method can efficiently reconstruct MR images from highly undersampled k-space data and presents advantages over current state-of-the-art reconstruction approaches, in terms of higher PSNR and lower HFEN values.
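
The reweighting idea can be sketched in isolation (a toy iteratively-reweighted least-squares loop of my own for the lp penalty with p < 1, not the WTBMDU algorithm itself): each pass solves a weighted minimum-norm problem whose weights shrink small coefficients, recovering a sparse signal from far fewer measurements than unknowns.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 40, 100, 0.5                    # measurements, unknowns, lp exponent
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)  # 5-sparse
b = A @ x_true

x = np.linalg.lstsq(A, b, rcond=None)[0]  # start from the min-l2 solution
# gradually tighten the smoothing parameter (standard IRLS continuation)
for eps in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8]:
    for _ in range(10):
        # inverse IRLS weights: q_i = (x_i^2 + eps)^(1 - p/2)
        q = (x**2 + eps) ** (1.0 - p / 2.0)
        AQ = A * q[None, :]               # A W^{-1}
        # weighted minimum-norm solution x = W^{-1} A^T (A W^{-1} A^T)^{-1} b
        x = q * (A.T @ np.linalg.solve(AQ @ A.T, b))
err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Every iterate satisfies the measurements exactly; the reweighting is what drives the off-support entries toward zero, which is the same mechanism the paper folds into its Bregman/ADM framework.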

  3. Information Entropy Squeezing of a Two-Level Atom Interacting with Two-Mode Coherent Fields

    Institute of Scientific and Technical Information of China (English)

    LIU Xiao-Juan; FANG Mao-Fa

    2004-01-01

    From a quantum information point of view we investigate the entropy squeezing properties for a two-level atom interacting with the two-mode coherent fields via the two-photon transition. We discuss the influences of the initial state of the system on the atomic information entropy squeezing. Our results show that the squeezed component number, squeezed direction, and time of the information entropy squeezing can be controlled by choosing the atomic distribution angle, the relative phase between the atom and the two-mode field, and the difference of the average photon number of the two field modes, respectively. Quantum information entropy is a remarkable precision measure for the atomic squeezing.

  4. Thermal analysis of multi-MW two-level wind power converter

    DEFF Research Database (Denmark)

    Zhou, Dao; Blaabjerg, Frede; Lau, Mogens

    2012-01-01

    In this paper, multi-MW wind turbines using a partial-scale two-level power converter with DFIG and a full-scale two-level power converter with direct-drive PMSG are designed and compared in terms of their thermal performance. Simulations of different configurations regarding loss distribution and junction temperature...... in the power device in the whole range of wind speed are presented and analyzed. It is concluded that in both the partial-scale and the full-scale power converter the most thermally stressed power device in the generator-side converter will have a higher mean junction temperature and a larger junction temperature...... fluctuation compared to the grid-side converter at the rated wind speed. Moreover, the thermal performance of the generator-side converter in the partial-scale power converter becomes crucial around the synchronous operating point and should be considered carefully....

  5. Two-level modulation scheme to reduce latency for optical mobile fronthaul networks.

    Science.gov (United States)

    Sung, Jiun-Yu; Chow, Chi-Wai; Yeh, Chien-Hung; Chang, Gee-Kung

    2016-10-31

    A system using optical two-level orthogonal-frequency-division-multiplexing (OFDM) - amplitude-shift-keying (ASK) modulation is proposed and demonstrated to reduce the processing latency for optical mobile fronthaul networks. At the proposed remote-radio-head (RRH), the high data rate OFDM signal does not need to be processed, but is directly launched into a high speed photodiode (HSPD) and subsequently emitted by an antenna. Only a low bandwidth PD is needed to recover the low data rate ASK control signal. Hence, it is simple and provides low latency. Furthermore, transporting the proposed system over the already deployed optical-distribution-networks (ODNs) of passive-optical-networks (PONs) is also demonstrated with 256 ODN split-ratios.

  6. SUBJECT «NUMBER SYSTEMS» IN TWO-LEVELED FORMAT PREPARATION TEACHERS OF MATHEMATICS

    Directory of Open Access Journals (Sweden)

    V. I. Igoshin

    2017-01-01

    Full Text Available The aim of this article is to analyze the format of a two-leveled training – bachelor and master – future teachers of mathematics from the point of view of the content of mathematical material, which is to develop prospective teachers of mathematics at those two levels, shaping their professional competence. Methods. The study involves the theoretical methods: the analysis of pedagogical and methodical literature, normative documents; historical, comparative and logical analysis of the content of pedagogical mathematical education; forecasting, planning and designing of two-leveled methodical system of training of future teachers of mathematics. Results and scientific novelty. The level differentiation of the higher education system requires developing the appropriate curricula for undergraduate and graduate programs. The fundamental principle must be the principle of continuity – the magister must continue to deepen and broaden knowledge and skills, along with competences acquired, developed and formed on the undergraduate level. From these positions, this paper examines the course «Number Systems» – the most important in terms of methodology course for future mathematics teachers, and shows what content should be filled with this course at the undergraduate level and the graduate level. At the undergraduate level it is proposed to study classical number systems – natural, integer, rational, real and complex. Further extensions of the number systems are studied at the graduate level. The theory of numeric systems is presented as a theory of algebraic systems, arising at the intersection of algebra and mathematical logic. Here we study algebras over a field, division algebra over a field, an alternative algebra with division over the field, Jordan algebra, Lie algebra. Comprehension of bases of the theory of algebras by the master of the «mathematical education» profile will promote more conscious

  7. Analysis and Implementation of Parallel Connected Two-Induction Motor Single-Inverter Drive by Direct Vector Control for Industrial Application

    DEFF Research Database (Denmark)

    Gunabalan, Ramachandiran; Padmanaban, Sanjeevikumar; Blaabjerg, Frede

    2015-01-01

    Sensorless direct vector control techniques are widely used for three-phase induction motor drives, whereas in the case of multiple-motor control they become intensively complicated, and very few research articles in support of industrial applications were found. A straight-forward direct vector...... to estimate the rotor speed, rotor flux, and load torque of both motors. Simulation results along with the theoretical background provided in this paper confirm the feasibility of operation of the ac motors and prove reliability for industrial applications....

  8. Adiabatic interpretation of a two-level atom diode, a laser device for unidirectional transmission of ground-state atoms

    International Nuclear Information System (INIS)

    Ruschhaupt, A.; Muga, J. G.

    2006-01-01

    We present a generalized two-level scheme for an 'atom diode', namely, a laser device that lets a two-level ground-state atom pass in one direction, say from left to right, but not in the opposite direction. The laser field is composed of two lateral state-selective mirror regions and a central pumping region. We demonstrate the robustness of the scheme and propose a physical realization. It is shown that the inclusion of a counterintuitive laser field blocking the excited atoms on the left side of the device is essential for a perfect diode effect. The reason for this, the diodic behavior, and the robustness may be understood with an adiabatic approximation. The conditions to break down the approximation, which imply also the diode failure, are analyzed

  9. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  10. Entropy squeezing for a two-level atom in the Jaynes-Cummings model with an intensity-depend coupling

    Institute of Scientific and Technical Information of China (English)

    李春先; 方卯发

    2003-01-01

    We study the squeezing for a two-level atom in the Jaynes-Cummings model with intensity-dependent coupling using quantum information entropy, and examine the influences of the initial state of the system on the squeezed component number and direction of the information entropy squeezing. Our results show that the squeezed component number depends on the atomic initial distribution angle, while the squeezed direction is determined by both the phases of the atom and the field for the information entropy squeezing. Quantum information entropy is shown to be a remarkable precision measure for atomic squeezing.

  11. Two-step values for games with two-level communication structure

    NARCIS (Netherlands)

    Béal, Silvain; Khmelnitskaya, Anna Borisovna; Solal, Philippe

    TU games with two-level communication structure, in which a two-level communication structure relates fundamentally to the given coalition structure and consists of a communication graph on the collection of the a priori unions in the coalition structure, as well as a collection of communication

  12. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
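The rsync-style template comparison described above can be sketched as follows; the block size, checksum choice (MD5) and compression (zlib) are assumptions for illustration, not details of the patented system.

```python
import hashlib
import zlib

BLOCK = 4096  # assumed block size; the actual system's block size is not specified

def block_checksums(data: bytes):
    """Checksum every fixed-size block of the template checkpoint."""
    return [hashlib.md5(data[i:i + BLOCK]).digest() for i in range(0, len(data), BLOCK)]

def delta_checkpoint(template: bytes, current: bytes):
    """Keep only the blocks whose checksum differs from the template's,
    compressing them with a non-lossy algorithm as the abstract suggests."""
    tmpl_sums = block_checksums(template)
    changed = {}
    for idx in range(0, len(current), BLOCK):
        i = idx // BLOCK
        blk = current[idx:idx + BLOCK]
        if i >= len(tmpl_sums) or hashlib.md5(blk).digest() != tmpl_sums[i]:
            changed[i] = zlib.compress(blk)
    return changed

def restore(template: bytes, changed: dict, total_len: int) -> bytes:
    """Rebuild the node's checkpoint from the template plus the changed blocks."""
    out = bytearray(template[:total_len].ljust(total_len, b"\0"))
    for i, comp in changed.items():
        blk = zlib.decompress(comp)
        out[i * BLOCK:i * BLOCK + len(blk)] = blk
    return bytes(out)

# Demo: corrupt a few bytes inside the second block of a four-block checkpoint.
template = bytes(4 * BLOCK)
current = template[:5000] + b"x" * 10 + template[5010:]
changed = delta_checkpoint(template, current)
```

Only one of the four blocks differs, so only that block (compressed) would need to be transmitted and stored, which is the saving the patent claims.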

  13. Vorticity, backscatter and counter-gradient transport predictions using two-level simulation of turbulent flows

    Science.gov (United States)

    Ranjan, R.; Menon, S.

    2018-04-01

    The two-level simulation (TLS) method evolves both the large-and the small-scale fields in a two-scale approach and has shown good predictive capabilities in both isotropic and wall-bounded high Reynolds number (Re) turbulent flows in the past. Sensitivity and ability of this modelling approach to predict fundamental features (such as backscatter, counter-gradient turbulent transport, small-scale vorticity, etc.) seen in high Re turbulent flows is assessed here by using two direct numerical simulation (DNS) datasets corresponding to a forced isotropic turbulence at Taylor's microscale-based Reynolds number Reλ ≈ 433 and a fully developed turbulent flow in a periodic channel at friction Reynolds number Reτ ≈ 1000. It is shown that TLS captures the dynamics of local co-/counter-gradient transport and backscatter at the requisite scales of interest. These observations are further confirmed through a posteriori investigation of the flow in a periodic channel at Reτ = 2000. The results reveal that the TLS method can capture both the large- and the small-scale flow physics in a consistent manner, and at a reduced overall cost when compared to the estimated DNS or wall-resolved LES cost.

  14. Indoor Semantic Modelling for Routing: The Two-Level Routing Approach for Indoor Navigation

    Directory of Open Access Journals (Sweden)

    Liu Liu

    2017-11-01

    detailed levels.
    • On the conceptual level, it supports routing on a logical network and assists the derivation of a conceptual path (i.e., a logical path for a user in terms of a space sequence). Routing criteria are designed based on the INSM semantics of spaces, which can generate logical paths similar to human wayfinding results, such as minimizing VerticalUnit or HorizontalConnector.
    • On the detailed level, it considers the size of users and results in obstacle-avoiding paths. By using this approach, geometric networks can be generated to avoid obstacles for the given users, and accessible paths are flexibly provided for user demands. This approach can process changes of user size more efficiently, in contrast to routing on a complete geometric network.
    • It supports routing on both the logical and the geometric networks, which can generate geometric paths based on user-specific logical paths, or re-compute logical paths when geometric paths are inaccessible. This computation method is very useful for complex buildings.
    The two-level routing approach can flexibly provide logical and geometric paths according to user preferences and sizes, and can adjust the generated paths in limited time. Based on the two-level routing approach, this thesis also provides a vision on possible cooperation with other methods. A potential direction is to design more routing options according to other indoor scenarios and user preferences. Extensions of the two-level routing approach, such as other types of semantics, multi-level networks and dynamic obstacles, will make it possible to deal with other routing cases. Last but not least, it is also promising to explore its relationships with indoor guidance, different building subdivisions and outdoor navigation.
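A minimal sketch of the two-level idea, using a hypothetical four-space building: the conceptual level routes over a logical space-adjacency graph, while the detailed level's user-size constraint is modelled here simply by filtering out doors narrower than the user. All space names and door widths are invented for illustration; the thesis's INSM semantics and geometric networks are far richer.

```python
from collections import deque

# Hypothetical mini-building: pairs of spaces connected by doors of given width (m).
doors = {
    ("lobby", "corridor"): 1.2,
    ("corridor", "office"): 0.8,
    ("corridor", "lab"): 0.6,
    ("lobby", "lab"): 0.9,
}

def neighbors(space, min_width):
    """Detailed-level constraint folded into the logical graph:
    a door is traversable only if it is at least as wide as the user."""
    for (a, b), w in doors.items():
        if w >= min_width:
            if a == space:
                yield b
            if b == space:
                yield a

def logical_route(start, goal, min_width=0.0):
    """Conceptual-level routing: BFS over the space-sequence graph."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1], min_width):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable for a user of this size
```

For example, `logical_route("lobby", "lab", min_width=1.0)` returns None because no door into the lab is wide enough for that user, whereas a smaller user gets the direct lobby-lab path.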

  15. Two-level MOC calculation scheme in APOLLO2 for cross-section library generation for LWR hexagonal assemblies

    International Nuclear Information System (INIS)

    Petrov, Nikolay; Todorova, Galina; Kolev, Nikola; Damian, Frederic

    2011-01-01

    The accurate and efficient MOC calculation scheme in APOLLO2, developed by CEA for generating multi-parameterized cross-section libraries for PWR assemblies, has been adapted to hexagonal assemblies. The neutronic part of this scheme is based on a two-level calculation methodology. At the first level, a multi-cell method is used in 281 energy groups for cross-section definition and self-shielding. At the second level, precise MOC calculations are performed in a collapsed energy mesh (30-40 groups). In this paper, the application and validation of the two-level scheme for hexagonal assemblies is described. Solutions for a VVER assembly are compared with TRIPOLI4® calculations and direct 281g MOC solutions. The results show that the accuracy is close to that of the 281g MOC calculation while the CPU time is substantially reduced. Compared to the multi-cell method, the accuracy is markedly improved. (author)

  16. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements High Performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.

  17. Two level undercut-profile substrate for filamentary YBa2Cu3O7 coated conductors

    DEFF Research Database (Denmark)

    Wulff, Anders Christian; Solovyov, M.; Gömöry, Fedor

    2015-01-01

    A novel substrate design is presented for scalable industrial production of filamentary coated conductors (CCs). The new substrate, called ‘two level undercut-profile substrate (2LUPS)’, has two levels of plateaus connected by walls with an undercut profile. The undercuts are made to produce...... a shading effect during subsequent deposition of layers, thereby creating gaps in the superconducting layer deposited on the curved walls between the two levels. It is demonstrated that such 2LUPS-based CCs can be produced in a large-scale production system using standard deposition processes...

  18. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  19. Dynamical properties of a two-level system with arbitrary nonlinearities

    Indian Academy of Sciences (India)

    communication, information processing and quantum computing, such as in the investigation of quantum teleportation ... They considered a two-level atom interacting with an undamped cavity initially in a coherent state. ... Because concurrence pro-.

  20. Two levels ARIMAX and regression models for forecasting time series data with calendar variation effects

    Science.gov (United States)

    Suhartono, Lee, Muhammad Hisyam; Prastyo, Dedy Dwi

    2015-12-01

    The aim of this research is to develop a calendar variation model for forecasting retail sales data with the Eid ul-Fitr effect. The proposed model is based on two methods, namely two-level ARIMAX and regression methods. The two-level ARIMAX and regression model is built by using ARIMAX for the first level and regression for the second level. Monthly men's jeans and women's trousers sales in a retail company for the period January 2002 to September 2009 are used as a case study. In general, the two-level calendar variation model yields two models: the first model reconstructs the sales pattern that has already occurred, and the second model forecasts the effect of increased sales due to Eid ul-Fitr, which affects sales in the same and the previous months. The results show that the proposed two-level calendar variation model based on ARIMAX and regression methods yields better forecasts than the seasonal ARIMA model and neural networks.
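A hedged numeric sketch of the two-level scheme on synthetic data: ordinary least squares with calendar dummies stands in for the first-level ARIMAX fit, and an AR(1) fit on its residuals stands in for the second level. The series, the Eid month positions and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 96                                   # eight years of monthly data (illustrative)
t = np.arange(n)
eid = np.zeros(n)
eid[10::12] = 1                          # hypothetical Eid ul-Fitr months
# Synthetic sales: trend plus Eid jumps in the same month and the month before it.
sales = 100 + 0.5 * t + 40 * eid + 15 * np.roll(eid, -1) + rng.normal(0, 3, n)

# Level 1: regression reconstructing the pattern that already occurred
# (trend plus calendar-variation dummies for the Eid month and the prior month).
X1 = np.column_stack([np.ones(n), t, eid, np.roll(eid, -1)])
beta1, *_ = np.linalg.lstsq(X1, sales, rcond=None)
resid = sales - X1 @ beta1

# Level 2: AR(1) fit on the level-1 residuals, standing in for the ARIMAX stage.
phi = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])
forecast = X1 @ beta1 + np.concatenate([[0.0], phi * resid[:-1]])
```

The split mirrors the paper's structure: the first level captures trend and calendar effects, the second mops up the remaining serial dependence.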

  1. Efficiency analysis on a two-level three-phase quasi-soft-switching inverter

    DEFF Research Database (Denmark)

    Geng, Pan; Wu, Weimin; Huang, Min

    2013-01-01

    When designing an inverter, an engineer often needs to select and predict the efficiency beforehand. For standard inverters, many studies have analyzed the power losses, and many software tools are used for efficiency calculation. In this paper, the efficiency calculation...... for non-conventional inverters with a special shoot-through state is introduced and illustrated through the analysis of a special two-level three-phase quasi-soft-switching inverter. An efficiency comparison between the classical two-stage two-level three-phase inverter and the two-level three-phase quasi......-soft-switching inverter is carried out. A 10 kW/380 V prototype is constructed to verify the analysis. The experimental results show that the efficiency of the new inverter is higher than that of the traditional two-stage two-level three-phase inverter.

  2. Controlling the optical bistability and multistability in a two-level pumped-probe system

    International Nuclear Information System (INIS)

    Mahmoudi, Mohammad; Sahrai, Mostafa; Masoumeh Mousavi, Seyede

    2010-01-01

    We study the behavior of the optical bistability (OB) and multistability (OM) in a two-level pumped-probe atomic system by means of a unidirectional ring cavity. We show that the optical bistability in a two-level atomic system can be controlled by adjusting the intensity of the pump field and the detuning between two fields. We find that applying the pumping field decreases the threshold of the optical bistability.
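For reference, the textbook mean-field state equation for purely absorptive optical bistability of two-level atoms in a unidirectional ring cavity (the Bonifacio-Lugiato form; a standard reference relation, not necessarily the exact pumped-probe model of this paper) relates the normalized input amplitude y to the transmitted amplitude x through the cooperativity C:

```latex
y = x + \frac{2Cx}{1 + x^{2}}, \qquad \text{bistable for } C > 4 .
```

A control (pump) field effectively modifies the saturation term in the denominator, which is the sense in which adjusting the pump intensity and detuning can lower the bistability threshold.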

  3. Excitation transfer in two two-level systems coupled to an oscillator

    International Nuclear Information System (INIS)

    Hagelstein, P L; Chaudhary, I U

    2008-01-01

    We consider a generalization of the spin-boson model in which two different two-level systems are coupled to an oscillator, under conditions where the oscillator energy is much less than the two-level system energies, and where the oscillator is highly excited. We find that the two-level system transition energy is shifted, producing a Bloch-Siegert shift in each two-level system similar to what would be obtained if the other were absent. At resonances associated with energy exchange between a two-level system and the oscillator, the level splitting is about the same as would be obtained in the spin-boson model at a Bloch-Siegert resonance. However, there occur resonances associated with the transfer of excitation between one two-level system and the other, an effect not present in the spin-boson model. We use a unitary transformation leading to a rotated system in which terms responsible for the shift and splittings can be identified. The level splittings at the anticrossings associated with both energy exchange and excitation transfer resonances are accounted for with simple two-state models and degenerate perturbation theory using operators that appear in the rotated Hamiltonian

  4. Two-Level Control for Fast Electrical Vehicle Charging Stations with Multi Flywheel Energy Storage System

    DEFF Research Database (Denmark)

    SUN, BO; Dragicevic, Tomislav; Vasquez, Juan Carlos

    2015-01-01

    This paper applies a hierarchical control to a fast charging station (FCS) composed of a paralleled PWM rectifier and dedicated paralleled multiple flywheel energy storage systems (FESSs), in order to mitigate the peak power shock on the grid caused by sudden connection of electrical vehicle (EV) chargers...

  5. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  6. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  7. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  8. Solutions of the two-level problem in terms of biconfluent Heun functions

    Energy Technology Data Exchange (ETDEWEB)

    Ishkhanyan, Artur [Engineering Center of Armenian National Academy of Sciences, Ashtarak (Armenia)]. E-mail: artur@ec.sci.am; Suominen, Kalle-Antti [Helsinki Institute of Physics, Helsinki (Finland); Department of Applied Physics, University of Turku, Turku (Finland)

    2001-08-17

    Five four-parametric classes of quantum mechanical two-level models permitting solutions in terms of the biconfluent Heun function are derived. Three of these classes are generalizations of the well known classes of Landau-Zener, Nikitin and Crothers. It is shown that two other classes describe super- and sublinear and essentially nonlinear level crossings, as well as processes with three crossing points. In particular, these classes include two-level models where the field amplitude is constant and the detuning varies as δ_0 t + δ_2 t^3 or ≈ t^(1/3). For the essentially nonlinear cubic-crossing model, δ_t ≈ δ_2 t^3, the general solution of the two-level problem is shown to be expressed as series of confluent hypergeometric functions. (author)
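In standard semiclassical rotating-wave notation (a hedged reconstruction with the usual symbols, not taken verbatim from the paper), the constant-amplitude two-level problem with the linear-plus-cubic detuning mentioned above reads:

```latex
i\,\dot{a}_1 = U_0\, e^{-i\varphi(t)}\, a_2, \qquad
i\,\dot{a}_2 = U_0\, e^{+i\varphi(t)}\, a_1, \qquad
\delta(t) \equiv \dot{\varphi}(t) = \delta_0 t + \delta_2 t^3 ,
```

where a_1, a_2 are the state amplitudes, U_0 the constant field amplitude, and δ(t) the detuning; the crossing points are the real roots of δ(t) = 0, giving up to three crossings for the linear-plus-cubic case.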

  9. Two-Level Solutions to Exponentially Complex Problems in Glass Science

    DEFF Research Database (Denmark)

    Mauro, John C.; Smedskjær, Morten Mattrup

    Glass poses an especially challenging problem for physicists. The key to making progress in theoretical glass science is to extract the key physics governing properties of practical interest. In this spirit, we discuss several two-level solutions to exponentially complex problems in glass science....... Topological constraint theory, originally developed by J.C. Phillips, is based on a two-level description of rigid and floppy modes in a glass network and can be used to derive quantitatively accurate and analytically solvable models for a variety of macroscopic properties. The temperature dependence...... that captures both primary and secondary relaxation modes. Such a model also offers the ability to calculate the distinguishability of particles during glass transition and relaxation processes. Two-level models can also be used to capture the distribution of various network-forming species in mixed...

  10. Crossing rule for a PT-symmetric two-level time-periodic system

    International Nuclear Information System (INIS)

    Moiseyev, Nimrod

    2011-01-01

    For a two-level system in a time-periodic field we show that in the non-Hermitian PT case the level crossing is of two quasistationary states that have the same dynamical symmetry property. At the field's parameters where the two levels which have the same dynamical symmetry cross, the corresponding quasienergy states coalesce and a self-orthogonal state is obtained. This situation is very different from the Hermitian case where a crossing of two quasienergy levels happens only when the corresponding two quasistationary states have different dynamical symmetry properties and, unlike the situation in the non-Hermitian case, the spectrum remains complete also when the two levels cross.

  11. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two dimensional fluid flow are developed on vector parallel computer Fujitsu VPP500 and scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code to be vectorized along with the axis perpendicular to the direction of the decomposition. High parallel efficiency of 95.1% by the vector parallel calculation on 16 processors with 1152x1152 grid and 88.6% by the scalar parallel calculation on 100 processors with 800x800 grid are obtained. The performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors up to 100 processors. We also analyze the scalability in keeping the available memory size of one processor element at maximum. Our performance model predicts that the execution time of the vector parallel code increases about 3% on 500 processors. Although the 1-D domain decomposition method has in general a drawback in the interprocessor communication, the vector parallel LB code is still suitable for the large scale and/or high resolution simulations. (author)
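The trade-off behind the two decomposition choices can be made concrete with a back-of-the-envelope halo-size count (a sketch; actual message sizes depend on the stencil and implementation): a 1-D slab exchanges two full grid rows regardless of processor count, while a 2-D tile exchanges four shorter edges.

```python
import math

def halo_points_1d(N, p):
    """1-D slab decomposition: each processor exchanges two full rows of N
    points (p only sets how many rows a slab holds, not the halo size)."""
    return 2 * N

def halo_points_2d(N, p):
    """2-D tile decomposition: four edges of length N / sqrt(p) each."""
    return 4 * N / math.sqrt(p)

# Per-processor halo traffic for the 800x800 grid on 100 processors
# used in the abstract's scalar-parallel experiment.
N, p = 800, 100
traffic_1d = halo_points_1d(N, p)
traffic_2d = halo_points_2d(N, p)
```

The 2-D decomposition communicates less per processor, while the 1-D layout preserves long contiguous lines along the perpendicular axis for vectorization, which is why the vector-parallel code in the abstract accepts the 1-D scheme's communication drawback.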

  13. Kir2.1 channels set two levels of resting membrane potential with inward rectification.

    Science.gov (United States)

    Chen, Kuihao; Zuo, Dongchuan; Liu, Zheng; Chen, Haijun

    2018-04-01

    Strong inward rectifier K+ channels (Kir2.1) mediate background K+ currents primarily responsible for maintenance of resting membrane potential. Multiple types of cells exhibit two levels of resting membrane potential. Kir2.1 and K2P1 currents counterbalance, partially accounting for the phenomenon of human cardiomyocytes in subphysiological extracellular K+ concentrations or pathological hypokalemic conditions. The mechanism of how Kir2.1 channels contribute to the two levels of resting membrane potential in different types of cells is not well understood. Here we test the hypothesis that Kir2.1 channels set two levels of resting membrane potential with inward rectification. Under hypokalemic conditions, Kir2.1 currents counterbalance HCN2 or HCN4 cation currents in CHO cells that heterologously express both channels, generating N-shaped current-voltage relationships that cross the voltage axis three times and reconstituting two levels of resting membrane potential. Blockade of HCN channels eliminated the phenomenon in K2P1-deficient Kir2.1-expressing human cardiomyocytes derived from induced pluripotent stem cells or CHO cells expressing both Kir2.1 and HCN2 channels. Weakly inward rectifier Kir4.1 or inward rectification-deficient Kir2.1•E224G mutant channels do not set such two levels of resting membrane potential when co-expressed with HCN2 channels in CHO cells or when overexpressed in human cardiomyocytes derived from induced pluripotent stem cells. These findings demonstrate a common mechanism that Kir2.1 channels set two levels of resting membrane potential with inward rectification by balancing inward currents through different cation channels such as hyperpolarization-activated HCN channels or hypokalemia-induced K2P1 leak channels.

  14. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  15. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  16. A Two-Level Cache for Distributed Information Retrieval in Search Engines

    Directory of Open Access Journals (Sweden)

    Weizhe Zhang

    2013-01-01

    Full Text Available To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries of the users’ logs. We extract the highest rank queries of users from the static cache, in which the queries are the most popular. We adopt the dynamic cache as an auxiliary to optimize the distribution of the cache data. We propose a distribution strategy of the cache data. The experiments prove that the hit rate, the efficiency, and the time consumption of the two-level cache have advantages compared with other structures of cache.

  17. A two-level cache for distributed information retrieval in search engines.

    Science.gov (United States)

    Zhang, Weizhe; He, Hui; Ye, Jianwei

    2013-01-01

    To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries of the users' logs. We extract the highest rank queries of users from the static cache, in which the queries are the most popular. We adopt the dynamic cache as an auxiliary to optimize the distribution of the cache data. We propose a distribution strategy of the cache data. The experiments prove that the hit rate, the efficiency, and the time consumption of the two-level cache have advantages compared with other structures of cache.
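
    A minimal sketch of such a two-level structure (the class name, eviction policy, and toy backend are my own assumptions, not details from the paper): the static level is filled once with the most popular queries from a log and is never evicted, while a dynamic LRU level serves as the auxiliary:

```python
# Illustrative two-level query-result cache: a fixed static level for the
# most popular logged queries, backed by a dynamic LRU level.
from collections import Counter, OrderedDict

class TwoLevelCache:
    def __init__(self, query_log, static_size, dynamic_size, backend):
        top = Counter(query_log).most_common(static_size)
        self.static = {q: backend(q) for q, _ in top}   # filled once, never evicted
        self.dynamic = OrderedDict()                    # LRU auxiliary level
        self.dynamic_size = dynamic_size
        self.backend = backend
        self.hits = self.misses = 0

    def get(self, query):
        if query in self.static:
            self.hits += 1
            return self.static[query]
        if query in self.dynamic:
            self.hits += 1
            self.dynamic.move_to_end(query)             # refresh LRU position
            return self.dynamic[query]
        self.misses += 1
        result = self.backend(query)                    # fall through to the engine
        self.dynamic[query] = result
        if len(self.dynamic) > self.dynamic_size:
            self.dynamic.popitem(last=False)            # evict least recently used
        return result

log = ["news", "news", "weather", "news", "maps"]
cache = TwoLevelCache(log, static_size=1, dynamic_size=2, backend=str.upper)
cache.get("news"); cache.get("maps"); cache.get("news")
print(cache.hits, cache.misses)  # 2 1
```

    The static level captures the stable head of the query distribution, while the LRU level adapts to short-term shifts, which is the division of labor the abstract describes.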

  18. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
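
    The matrix-free Newton-Krylov idea described above can be sketched with SciPy's `newton_krylov` (a small serial stand-in, not the project's MPI/PETSc implementation; the grid size, model nonlinearity, and tolerances are illustrative assumptions):

```python
# Sketch of the Newton-Krylov idea: solve the nonlinear Poisson problem
# u_xx + u_yy = 10*u^2 - 1 on a grid with zero boundary values. The Jacobian
# is never formed explicitly: the Krylov solver (LGMRES) only needs
# matrix-vector products, mirroring the matrix-free NKS approach.
import numpy as np
from scipy.optimize import newton_krylov

n = 16
h = 1.0 / (n + 1)

def residual(u):
    # 5-point Laplacian with homogeneous Dirichlet boundaries
    padded = np.pad(u, 1)
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * u) / h**2
    return lap - 10.0 * u**2 + 1.0

u = newton_krylov(residual, np.zeros((n, n)), method="lgmres", f_tol=1e-8)
print(float(np.abs(residual(u)).max()) < 1e-7)  # True
```

    In the full NKS scheme the inner Krylov iterations would additionally be preconditioned with a Schwarz domain decomposition, so each processor works mostly on its own subdomain.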

  19. Experimental Research into the Two-Level Cylindrical Cyclone with a Different Number of Channels

    Directory of Open Access Journals (Sweden)

    Egidijus Baliukas

    2014-10-01

    Full Text Available The multichannel two-level cyclone has been designed for separating solid particles from airflow and built at the Laboratory of Environmental Protection Technologies of Vilnius Gediminas Technical University. The conducted research is aimed at determining air flow distribution across the two levels and channels of the multichannel cyclone. The multifunctional meter Testo-400 and the dynamic Pitot tube have been used for measuring air flow rates in the channels. The obtained results show that equal volumes of air enter the two levels installed inside the cyclone, and flow rates are distributed equally among the channels of these levels. The maximum air flow rate is recorded in the first channel and occurs when the half-rings are set in such positions that 75% of the air flow returns to the previous channel. The biggest aerodynamic resistance, 1660 Pa, has been recorded in the cyclone having eight channels under an air flow distribution ratio of 75/25. The highest air purification efficiency has been observed in the two-level six-channel cyclone under an air flow distribution ratio of 75/25. The efficiency of separating granite particles is 92.1% and that of wood particles 91.1% when the particles are up to 20 μm in diameter.

  20. Polynomial pseudosupersymmetry underlying a two-level atom in an external electromagnetic field

    International Nuclear Information System (INIS)

    Samsonov, B.F.; Shamshutdinova, V.V.; Gitman, D.M.

    2005-01-01

    Chains of transformations introduced previously were studied in order to obtain electric fields with a time-dependent frequency for which the equation of motion of a two-level atom in the presence of these fields can be solved exactly. It is shown that a polynomial pseudosupersymmetry may be associated with such chains

  1. Ultimate temperature for laser cooling of two-level neutral atoms

    International Nuclear Information System (INIS)

    Bagnato, V.S.; Zilio, S.C.

    1989-01-01

    We present a simple pedagogical method to evaluate the minimum attainable temperature for laser cooling of two-level neutral atoms. Results are given as a function of the laser detuning and intensity. We also discuss the use of this approach to predict the minimum temperature of neutral atoms confined in magnetic traps. (author) [pt
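
    The standard two-level Doppler-limit formula behind such estimates can be evaluated directly as a function of detuning and intensity; the sketch below is my own illustration with rubidium-87 numbers, not the paper's derivation:

```python
# Doppler cooling limit for a two-level atom:
#   T = (hbar*Gamma / 4*k_B) * (1 + s0 + (2*delta/Gamma)**2) / (2*|delta|/Gamma)
# which reduces to the familiar T_D = hbar*Gamma / (2*k_B) at
# delta = -Gamma/2 and vanishing saturation s0.
import math
from scipy.constants import hbar, k as k_B  # Boltzmann constant

def doppler_temperature(gamma, delta, s0=0.0):
    """Temperature limit vs. detuning delta and saturation parameter s0."""
    x = 2 * abs(delta) / gamma
    return (hbar * gamma / (4 * k_B)) * (1 + s0 + x**2) / x

gamma_rb = 2 * math.pi * 6.07e6  # natural linewidth of the Rb-87 D2 line, rad/s
t_min = doppler_temperature(gamma_rb, -gamma_rb / 2)
print(round(t_min * 1e6))  # ~146 microkelvin
```

    The minimum over detuning occurs at delta = -Gamma/2, which is why the low-intensity limit is quoted as hbar*Gamma/(2*k_B).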

  2. Resonant retuning of Rabi oscillations in a two-level system

    International Nuclear Information System (INIS)

    Leonov, A.V.; Feranchuk, I.D.

    2009-01-01

    The evolution of a two-level system in a single-mode quantum field is considered beyond the rotating wave approximation. The existence of quasi-degenerate energy levels is shown to influence the essential characteristics of temporal and amplitude Rabi oscillations of the system in a resonant manner. (authors)
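
    For contrast with the beyond-RWA analysis above, the textbook rotating-wave-approximation Rabi result can be written down in closed form; the following sketch is my own illustration, not taken from the paper:

```python
# Excited-state population of a driven two-level system under the RWA:
#   P_e(t) = (Omega^2 / (Omega^2 + Delta^2)) * sin^2(sqrt(Omega^2 + Delta^2) * t / 2)
# where Omega is the Rabi frequency and Delta the detuning.
import math

def excited_population(omega_rabi, delta, t):
    omega_eff = math.hypot(omega_rabi, delta)  # generalized Rabi frequency
    return (omega_rabi / omega_eff) ** 2 * math.sin(omega_eff * t / 2) ** 2

# On resonance (delta = 0) a pulse of area pi transfers all population.
print(round(excited_population(1.0, 0.0, math.pi), 6))  # 1.0
```

    Detuning both speeds up the oscillation (via the generalized Rabi frequency) and reduces its amplitude; the beyond-RWA effects discussed in the abstract modify this picture near quasi-degenerate levels.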

  3. An Owen-type value for games with two-level communication structures

    NARCIS (Netherlands)

    van den Brink, René; Khmelnitskaya, Anna Borisovna; van der Laan, Gerard

    We introduce an Owen-type value for games with two-level communication structure, which is a structure where the players are partitioned into a coalition structure such that there exists restricted communication between as well as within the a priori unions of the coalition structure. Both types of

  4. Reactive Power Impact on Lifetime Prediction of Two-level Wind Power Converter

    DEFF Research Database (Denmark)

    Zhou, Dao; Blaabjerg, Frede; Lau, M.

    2013-01-01

    The influence of reactive power injection on the dominating two-level wind power converter is investigated and compared in terms of power loss and thermal behavior. Then the lifetime of both the partial-scale and full-scale power converter is estimated based on the widely used Coffin-Manson model...

  5. A two-level strategy to realize life-cycle production optimization in an operational setting

    NARCIS (Netherlands)

    Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.

    2012-01-01

    We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles

  6. Random model of two-level atoms interacting with electromagnetic field

    International Nuclear Information System (INIS)

    Kireev, A.N.; Meleshko, A.N.

    1983-12-01

    A phase transition has been studied in a random system of two-level atoms interacting with an electromagnetic field. It is shown that superradiation can arise when there is short-range order in a spin-subsystem. The existence of long-range order is irrelevant for this phase transition

  7. Excitation of graphene plasmons as an analogy with the two-level system

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Jiahui [Microwave and Electromagnetic Laboratory, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin City, Heilongjiang Province (China); Lv, Bo, E-mail: lb19840313@126.com [Microwave and Electromagnetic Laboratory, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin City, Heilongjiang Province (China); Li, Rujiang [College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027 (China); Ma, Ruyu; Chen, Wan; Meng, Fanyi [Microwave and Electromagnetic Laboratory, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin City, Heilongjiang Province (China)

    2016-02-15

    The excitation of graphene plasmons (GPs) is presented as an interaction between the GPs and the incident electromagnetic field. In this Letter, the excitation of GPs in a plasmonic system is interpreted as an analogy with the two-level system by taking the two-coupled graphene-covered gratings as an example. Based on the equivalent circuit theory, the excitation of GPs in the graphene-covered grating is equivalent to the resonance of an oscillator. Thus, according to the governing equation, the electric currents at the resonant frequencies for two-coupled graphene-covered gratings correspond to the energy states in a two-level system. In addition, the excitation of GPs in different two-coupled graphene-covered gratings is numerically studied to validate our theoretical model. Our work provides an intuitive understanding of the excitation of GPs using an analogy with the two-level system. - Highlights: • The excitation of graphene plasmons (GPs) in graphene-covered grating is equivalent to the resonance of an oscillator. • We establish the equivalent circuit of two-level system to analyze the resonant character. • The excitation of GPs in different two-coupled graphene-covered gratings are numerically studied to validate our theoretical model.

  8. Analysis of Two-Level Support Systems with Time-Dependent Overflow - A Banking Application

    DEFF Research Database (Denmark)

    Barth, Wolfgang; Manitz, Michael; Stolletz, Raik

    2010-01-01

    In this paper, we analyze the performance of call centers of financial service providers with two levels of support and a time-dependent overflow mechanism. Waiting calls from the front-office queue flow over to the back office if a waiting-time limit is reached and at least one back-office agent...

  9. Excitation of graphene plasmons as an analogy with the two-level system

    International Nuclear Information System (INIS)

    Fu, Jiahui; Lv, Bo; Li, Rujiang; Ma, Ruyu; Chen, Wan; Meng, Fanyi

    2016-01-01

    The excitation of graphene plasmons (GPs) is presented as an interaction between the GPs and the incident electromagnetic field. In this Letter, the excitation of GPs in a plasmonic system is interpreted as an analogy with the two-level system by taking the two-coupled graphene-covered gratings as an example. Based on the equivalent circuit theory, the excitation of GPs in the graphene-covered grating is equivalent to the resonance of an oscillator. Thus, according to the governing equation, the electric currents at the resonant frequencies for two-coupled graphene-covered gratings correspond to the energy states in a two-level system. In addition, the excitation of GPs in different two-coupled graphene-covered gratings is numerically studied to validate our theoretical model. Our work provides an intuitive understanding of the excitation of GPs using an analogy with the two-level system. - Highlights: • The excitation of graphene plasmons (GPs) in graphene-covered grating is equivalent to the resonance of an oscillator. • We establish the equivalent circuit of two-level system to analyze the resonant character. • The excitation of GPs in different two-coupled graphene-covered gratings are numerically studied to validate our theoretical model.

  10. A two-level strategy to realize life-cycle production optimization in an operational setting

    NARCIS (Netherlands)

    Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.

    2013-01-01

    We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles

  11. Localization of a two-level atom via the absorption spectrum

    International Nuclear Information System (INIS)

    Xu, Jun; Hu, Xiang-Ming

    2007-01-01

    We show that it is possible to localize a two-level atom as it passes through a standing-wave field by measuring the probe-field absorption. There is a 50% probability of detecting the atom at the nodes of the standing-wave field in the subwavelength domain when the probe field is tuned to resonance with the atomic transition

  12. A spatial scan statistic for nonisotropic two-level risk cluster.

    Science.gov (United States)

    Li, Xiao-Zhou; Wang, Jin-Feng; Yang, Wei-Zhong; Li, Zhong-Jie; Lai, Sheng-Jie

    2012-01-30

    Spatial scan statistic methods are commonly used for geographical disease surveillance and cluster detection. The standard spatial scan statistic does not model any variability in the underlying risks of subregions belonging to a detected cluster. For a multilevel risk cluster, the isotonic spatial scan statistic could model a centralized high-risk kernel in the cluster. Because variations in disease risks are anisotropic owing to different social, economical, or transport factors, the real high-risk kernel will not necessarily take the central place in a whole cluster area. We propose a spatial scan statistic for a nonisotropic two-level risk cluster, which could be used to detect a whole cluster and a noncentralized high-risk kernel within the cluster simultaneously. The performance of the three methods was evaluated through an intensive simulation study. Our proposed nonisotropic two-level method showed better power and geographical precision with two-level risk cluster scenarios, especially for a noncentralized high-risk kernel. Our proposed method is illustrated using the hand-foot-mouth disease data in Pingdu City, Shandong, China in May 2009, compared with two other methods. In this practical study, the nonisotropic two-level method is the only way to precisely detect a high-risk area in a detected whole cluster. Copyright © 2011 John Wiley & Sons, Ltd.

  13. Two-Level Designs to Estimate All Main Effects and Two-Factor Interactions

    NARCIS (Netherlands)

    Eendebak, P.T.; Schoen, E.D.

    2017-01-01

    We study the design of two-level experiments with N runs and n factors large enough to estimate the interaction model, which contains all the main effects and all the two-factor interactions. Yet, an effect hierarchy assumption suggests that main effect estimation should be given more prominence

  14. Exact Solution of the Two-Level System and the Einstein Solid in the Microcanonical Formalism

    Science.gov (United States)

    Bertoldi, Dalia S.; Bringa, Eduardo M.; Miranda, E. N.

    2011-01-01

    The two-level system and the Einstein model of a crystalline solid are taught in every course of statistical mechanics and they are solved in the microcanonical formalism because the number of accessible microstates can be easily evaluated. However, their solutions are usually presented using the Stirling approximation to deal with factorials. In…

  15. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
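
    A toy sketch of the first two decomposition types named above (my own illustration, not from the review), assigning atoms to processors for a 1-D box of length L split among P processors:

```python
# Two ways of mapping atoms to processors in parallel MD.
def replicated_data(n_atoms, p):
    """Replicated data: every processor holds all coordinates; atom i's
    forces are computed by processor i % p (round-robin work split)."""
    return [i % p for i in range(n_atoms)]

def spatial_decomposition(xs, box_length, p):
    """Spatial decomposition: a processor owns the atoms currently inside
    its slab of the box, so most interactions stay local."""
    return [min(int(x / box_length * p), p - 1) for x in xs]

xs = [0.1, 2.6, 4.9, 7.4, 9.9]        # atom positions in a box of length 10
print(replicated_data(5, 2))           # [0, 1, 0, 1, 0]
print(spatial_decomposition(xs, 10.0, 4))  # [0, 1, 1, 2, 3]
```

    Spatial decomposition scales better for short-ranged forces because communication is limited to neighboring slabs, whereas replicated data must broadcast all coordinates every step; force decomposition (not sketched) instead partitions the pairwise force matrix itself.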

  16. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Rapid Screening of Acetylcholinesterase Inhibitors by Effect-Directed Analysis Using LC × LC Fractionation, a High Throughput in Vitro Assay, and Parallel Identification by Time of Flight Mass Spectrometry.

    Science.gov (United States)

    Ouyang, Xiyu; Leonards, Pim E G; Tousova, Zuzana; Slobodnik, Jaroslav; de Boer, Jacob; Lamoree, Marja H

    2016-02-16

    Effect-directed analysis (EDA) is a useful tool to identify bioactive compounds in complex samples. However, identification in EDA is usually challenging, mainly due to the limited separation power of liquid chromatography based fractionation. In this study, comprehensive two-dimensional liquid chromatography (LC × LC) based microfractionation combined with parallel high resolution time of flight (HR-ToF) mass spectrometric detection and a high throughput acetylcholinesterase (AChE) assay was developed. The LC × LC fractionation method was validated using analytical standards, and a C18 and pentafluorophenyl (PFP) stationary phase combination was selected for the two-dimensional separation and fractionation in four 96-well plates. The method was successfully applied to identify AChE inhibitors in a wastewater treatment plant (WWTP) effluent. Good orthogonality (>0.9) of the two-dimensional separation was achieved, and three AChE inhibitors (tiapride, amisulpride, and lamotrigine), used as antipsychotic medicines, were identified and confirmed by two-dimensional retention alignment as well as by their AChE inhibition activity.

  18. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  19. A Two-Level Sensorless MPPT Strategy Using SRF-PLL on a PMSG Wind Energy Conversion System

    Directory of Open Access Journals (Sweden)

    Amina Echchaachouai

    2017-01-01

    Full Text Available In this paper, a two-level sensorless Maximum Power Point Tracking (MPPT) strategy is presented for a variable speed Wind Energy Conversion System (WECS). The proposed system is composed of a wind turbine, a direct-drive Permanent Magnet Synchronous Generator (PMSG) and a three phase controlled rectifier connected to a DC load. The generator output power maximization analysis justifies the use of Field Oriented Control (FOC), which supplies the six Pulse Width Modulation (PWM) signals to the active rectifier. The generator rotor speed and position required by the FOC and the sensorless MPPT are estimated using a Synchronous Reference Frame Phase Locked Loop (SRF-PLL). The MPPT strategy consists of two levels: the first level is a power regulation loop, and the second level is an extremum seeking block generating the coefficient that gathers the turbine characteristics. Experimental results validated on a hardware test setup using a DSP digital board (dSPACE 1104) are presented. Figures illustrating the estimated speed and angle confirm that the SRF-PLL gives estimates which closely follow the real values. Also, the power at the DC load and the power at the generator output indicate that the MPPT extracts the optimum power. Finally, other results show the effectiveness of the adopted approach in real time applications.

  20. Optimal control of quantum gates and suppression of decoherence in a system of interacting two-level particles

    International Nuclear Information System (INIS)

    Grace, Matthew; Brif, Constantin; Rabitz, Herschel; Walmsley, Ian A; Kosut, Robert L; Lidar, Daniel A

    2007-01-01

    Methods of optimal control are applied to a model system of interacting two-level particles (e.g., spin-half atomic nuclei or electrons or two-level atoms) to produce high-fidelity quantum gates while simultaneously negating the detrimental effect of decoherence. One set of particles functions as the quantum information processor, whose evolution is controlled by a time-dependent external field. The other particles are not directly controlled and serve as an effective environment, coupling to which is the source of decoherence. The control objective is to generate target one- and two-qubit unitary gates in the presence of strong environmentally-induced decoherence and under physically motivated restrictions on the control field. The quantum-gate fidelity, expressed in terms of a novel state-independent distance measure, is maximized with respect to the control field using combined genetic and gradient algorithms. The resulting high-fidelity gates demonstrate the feasibility of precisely guiding the quantum evolution via optimal control, even when the system complexity is exacerbated by environmental coupling. It is found that the gate duration has an important effect on the control mechanism and resulting fidelity. An analysis of the sensitivity of the gate performance to random variations in the system parameters reveals a significant degree of robustness attained by the optimal control solutions

  1. Analysis of stationary availability factor of two-level backbone computer networks with arbitrary topology

    Science.gov (United States)

    Rahman, P. A.

    2018-05-01

    This scientific paper deals with two-level backbone computer networks with arbitrary topology. A specialized method offered by the author for calculating the stationary availability factor of two-level backbone computer networks is discussed; it is based on Markov reliability models for a set of independent repairable elements with given failure and repair rates, combined with methods of discrete mathematics. A specialized algorithm offered by the author for analyzing network connectivity, taking into account different kinds of network equipment failures, is also presented. Finally, the paper gives an example of calculating the stationary availability factor for a backbone computer network with a given topology.
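
    The building block of such Markov-based calculations is the steady-state availability of a single repairable element, A = mu / (lambda + mu); the sketch below (my own illustration with made-up rates, not the author's algorithm) combines independent elements in series and in redundant parallel:

```python
# Steady-state availability of repairable elements and their combinations.
def availability(failure_rate, repair_rate):
    """Single repairable element: A = mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

def series(*avails):
    """All elements must be up: availabilities multiply."""
    p = 1.0
    for a in avails:
        p *= a
    return p

def parallel(*avails):
    """Redundant elements: unavailable only if every element is down."""
    q = 1.0
    for a in avails:
        q *= (1.0 - a)
    return 1.0 - q

# Two redundant backbone links (MTBF 1000 h, MTTR 10 h) feeding one
# access switch (MTBF 5000 h, MTTR 2 h) -- hypothetical numbers.
link = availability(1 / 1000, 1 / 10)
switch = availability(1 / 5000, 1 / 2)
print(round(series(parallel(link, link), switch), 6))  # 0.999502
```

    For an arbitrary topology, the connectivity analysis mentioned in the abstract decides which combinations of element failures disconnect the network; the element availabilities then enter the overall calculation in the same multiplicative way.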

  2. Two-Level Iteration Penalty Methods for the Navier-Stokes Equations with Friction Boundary Conditions

    Directory of Open Access Journals (Sweden)

    Yuan Li

    2013-01-01

    Full Text Available This paper presents two-level iteration penalty finite element methods to approximate the solution of the Navier-Stokes equations with friction boundary conditions. The basic idea is to solve the Navier-Stokes type variational inequality problem on a coarse mesh with mesh size H, in combination with solving a Stokes, Oseen, or linearized Navier-Stokes type variational inequality problem, corresponding to the Stokes, Oseen, or Newton iteration, on a fine mesh with mesh size h. The error estimate obtained in this paper shows that if H, h, and ε are chosen appropriately, then these two-level iteration penalty methods are of the same convergence orders as the usual one-level iteration penalty method.

  3. Dynamics of quantum Fisher information in a two-level system coupled to multiple bosonic reservoirs

    Science.gov (United States)

    Wang, Guo-You; Guo, You-Neng; Zeng, Ke

    2015-11-01

    We consider the optimal parameter estimation for a two-level system coupled to multiple bosonic reservoirs. By using quantum Fisher information (QFI), we investigate the effect of the number N of Markovian reservoirs on the QFI, in both weak and strong coupling regimes, for a two-level system surrounded by N zero-temperature reservoirs of field modes initially in the vacuum state. The results show that the QFI decays non-monotonically to zero, with revival oscillations at certain times in the weak coupling regime, depending on the reservoirs' parameters. Furthermore, we also present the relations between the QFI flow, the flows of energy and information, and the sign of the decay rate to gain insight into the physical processes characterizing the dynamics. Project supported by the Hunan Provincial Innovation Foundation for Postgraduate, China (Grant No. CX2014B194) and the Scientific Research Foundation of Hunan Provincial Education Department, China (Grant No. 13C039).

  4. Minimax terminal approach problem in two-level hierarchical nonlinear discrete-time dynamical system

    Energy Technology Data Exchange (ETDEWEB)

    Shorikov, A. F., E-mail: afshorikov@mail.ru [Ural Federal University, 19 S. Mira, Ekaterinburg, 620002, Russia Institute of Mathematics and Mechanics, Ural Branch of Russian Academy of Sciences, 16 S. Kovalevskaya, Ekaterinburg, 620990 (Russian Federation)

    2015-11-30

    We consider a discrete-time dynamical system consisting of three controllable objects. The motions of all objects are given by corresponding nonlinear or linear discrete-time recurrent vector relations, and the control system has two levels: a basic (first, or I) level that is dominant and a subordinate (second, or II) level. The two levels have different criteria of functioning and are united a priori by predetermined informational and control connections. For the dynamical system in question, we propose a mathematical formalization in the form of solving a multistep problem of two-level hierarchical minimax program control over the terminal approach process with incomplete information, and give a general scheme for its solution.

  5. A modified two-level three-phase quasi-soft-switching inverter

    DEFF Research Database (Denmark)

    Liu, Yusheng; Wu, Weimin; Blaabjerg, Frede

    2014-01-01

    A traditional Voltage Source Inverter (VSI) has higher efficiency than a Current Source Inverter (CSI) due to lower conduction power loss. However, the reverse recovery of the free-wheeling diode limits the efficiency improvement for hard-switching VSIs based on silicon devices. The traditional quasi-soft-switching inverter can alternate between VSI and CSI operation by using a proper control scheme and thereby reduce the power losses caused by the reverse recovery of the free-wheeling diode. Nevertheless, a slight extra conduction power loss in the auxiliary switch is also introduced. In order to reduce the extra conduction power loss and the voltage stress across the DC-link capacitor, a modified two-level three-phase quasi-soft-switching inverter is proposed by using a SiC MOSFET instead of an IGBT. The principle of the modified two-level three-phase quasi-soft-switching inverter is analyzed...

  6. Revisional Surgery for Hallux Valgus with Serial Osteotomies at Two Levels

    Directory of Open Access Journals (Sweden)

    Jason B. T. Lim

    2011-01-01

    Full Text Available The aetiology and form of hallux valgus (HV) is varied, with many corrective procedures described. We report a 39-year-old woman, previously treated with a Chevron osteotomy, who presented with recurrent right HV, metatarsus primus varus, and an associated bunion. Osteotomies were performed at two levels as a revisional procedure. This report highlights (1) limitations of the Chevron osteotomy and (2) the revisional procedure of the two-level osteotomies: (i) proximal opening-wedge basal osteotomy and (ii) distal short Scarf with medial closing wedges. If a Chevron osteotomy is used inappropriately, for example, in an attempt to correct too large a deformity, it may angulate laterally causing a malunion with an increased distal metatarsal articular angle. Secondly, it is feasible to correct this combined deformity using a combination of proximal opening-wedge and distal short Scarf osteotomies.

  7. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  8. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  9. Non-zero temperature two-mode squeezing for time-dependent two-level systems

    International Nuclear Information System (INIS)

    Aliaga, J.; Gruver, J.L.; Proto, A.N.; Cerdeira, H.A.

    1994-01-01

    A Maximum Entropy Principle density matrix method, valid for systems at nonzero temperature, is presented, making it possible to obtain two-mode squeezed states in two-level systems whose relevant operators and Hamiltonian are connected with O(3,2). A method is given which allows one to relate the appearance of squeezing to the relevant operators included in order to define the density matrix of the system. (author). 14 refs, 1 fig

  10. Urea metabolism in buffalo calves fed on rations containing two levels of crude protein

    International Nuclear Information System (INIS)

    Verma, D.N.; Singh, U.B.; Lal, M.; Varma, A.; Ranjhan, S.K.

    1974-01-01

    Urea entry rates into the body pools of Murrah buffalo calves were estimated by a single-injection isotope dilution technique using 14C-urea. The animals were fed two levels of crude protein, namely 13 percent lower and 19 percent higher than the N.R.C. recommendations. Results show that the recycling of urea is significantly greater in animals given the low crude protein ration. (M.G.B.)

  11. Experiences of building a medical data acquisition system based on two-level modeling.

    Science.gov (United States)

    Li, Bei; Li, Jianbin; Lan, Xiaoyun; An, Ying; Gao, Wuqiang; Jiang, Yuqiao

    2018-04-01

    Compared to traditional software development strategies, the two-level modeling approach is more flexible and applicable to build an information system in the medical domain. However, the standards of two-level modeling such as openEHR appear complex to medical professionals. This study aims to investigate, implement, and improve the two-level modeling approach, and discusses the experience of building a unified data acquisition system for four affiliated university hospitals based on this approach. After the investigation, we simplified the approach of archetype modeling and developed a medical data acquisition system where medical experts can define the metadata for their own specialties by using a visual easy-to-use tool. The medical data acquisition system for multiple centers, clinical specialties, and diseases has been developed, and integrates the functions of metadata modeling, form design, and data acquisition. To date, 93,353 data items and 6,017 categories for 285 specific diseases have been created by medical experts, and over 25,000 patients' information has been collected. OpenEHR is an advanced two-level modeling method for medical data, but its idea of separating domain knowledge from technical concerns is not easy to realize. Moreover, it is difficult to reach an agreement on archetype definition. Therefore, we adopted simpler metadata modeling, and employed What-You-See-Is-What-You-Get (WYSIWYG) tools to further improve the usability of the system. Compared with the archetype definition, our approach lowers the difficulty. Nevertheless, to build such a system, every participant should have some knowledge in both the medicine and information technology domains, as these interdisciplinary talents are necessary. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Feedback controlled dephasing and population relaxation in a two-level system

    International Nuclear Information System (INIS)

    Wang Jin

    2009-01-01

    This Letter presents the maximum achievable stability and purity that can be obtained in a two-level system with both dephasing and population relaxation processes by using homodyne-mediated feedback control. An analytic formula giving the optimal amplitudes of the driving and feedback for the steady-state is also presented. Experimental examples are used to show the importance of controlling the dephasing process.

  13. FAST COMMUNICATION: A PDE Based Two Level Model of the Masking Property of the Human Ear

    OpenAIRE

    Xin, Jack; Qi, Yingyong

    2003-01-01

    The human ear has the masking property that certain audible sounds become inaudible in the presence of another sound. Masking is quantified by the raised threshold from the absolute hearing threshold in quiet. It is of scientific and practical importance to compute masking thresholds. Empirical models of masking have applications in low bit rate digital music compression. A first-principle-based two-level model is developed with a partial differential equation (PDE) at the periphe...

  14. Effective Hamiltonians, two level systems, and generalized Maxwell-Bloch equations

    International Nuclear Information System (INIS)

    Sczaniecki, L.

    1981-02-01

    A new method is proposed involving a canonical transformation leading to the non-secular part of time-independent perturbation calculus. The method is used to derive expressions for effective Shen-Walls Hamiltonians which, taken in the two-level approximation and on the inclusion of non-Hamiltonian terms into the dynamics of the system, lead to generalized Maxwell-Bloch equations. The rotating wave approximation is written anew within the framework of our formalism. (author)

  15. Understanding of phase modulation in two-level systems through inverse scattering

    International Nuclear Information System (INIS)

    Hasenfeld, A.; Hammes, S.L.; Warren, W.S.

    1988-01-01

    Analytical and numerical calculations describe the effects of shaped radiation pulses on two-level systems in terms of quantum-mechanical scattering. Previous results obtained in the reduced case of amplitude modulation are extended to the general case of simultaneous amplitude and phase modulation. We show that an infinite family of phase- and amplitude-modulated pulses all generate rectangular inversion profiles. Experimental measurements also verify the theoretical analysis
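The rectangular inversion profile that these pulses generate can be illustrated with a minimal numerical sketch (this is a generic illustration, not the paper's inverse-scattering treatment): a resonant rectangular pulse of area π fully inverts a two-level system, while an area of 2π returns it to the ground state. All parameters below are illustrative.

```python
import numpy as np

# Illustrative two-level dynamics under a resonant rectangular pulse.
# A pulse area of pi gives full inversion; the shaped pulses in the
# abstract generalize this rectangular-profile baseline.
def final_inversion(area, n_steps=2000):
    omega = 1.0                                  # Rabi frequency (arbitrary units)
    dt = area / omega / n_steps
    H = 0.5 * omega * np.array([[0, 1], [1, 0]], dtype=complex)
    psi = np.array([1, 0], dtype=complex)        # start in the ground state
    U = np.eye(2) - 1j * H * dt                  # crude Euler propagator
    for _ in range(n_steps):
        psi = U @ psi
        psi /= np.linalg.norm(psi)               # keep the state normalized
    return abs(psi[1]) ** 2 - abs(psi[0]) ** 2   # population inversion

print(round(final_inversion(np.pi), 2))   # → 1.0 (full inversion)
```

For a π pulse the excited-state population reaches one, reproducing the "rectangular inversion" the abstract refers to.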

  16. The simulation of the non-Markovian behaviour of a two-level system

    Science.gov (United States)

    Semina, I.; Petruccione, F.

    2016-05-01

    Non-Markovian relaxation dynamics of a two-level system is studied with the help of the non-linear stochastic Schrödinger equation with coloured Ornstein-Uhlenbeck noise. This stochastic Schrödinger equation is investigated numerically with an adapted Platen scheme. It is shown, that the memory effects have a significant impact to the dynamics of the system.
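As a toy illustration of the coloured noise involved (not the paper's adapted Platen scheme, which is higher order), an Ornstein-Uhlenbeck trajectory can be generated with a plain Euler-Maruyama step; the parameters below are illustrative.

```python
import numpy as np

# Minimal sketch: coloured Ornstein-Uhlenbeck noise via Euler-Maruyama.
# gamma is the inverse correlation time, sigma the noise strength.
rng = np.random.default_rng(0)
gamma, sigma = 1.0, 0.5
dt, n_steps = 0.01, 5000

z = np.empty(n_steps)
z[0] = 0.0
for k in range(n_steps - 1):
    dW = rng.normal(0.0, np.sqrt(dt))
    z[k + 1] = z[k] - gamma * z[k] * dt + sigma * dW

# Analytic stationary variance of the OU process: sigma^2 / (2 * gamma)
print(round(sigma**2 / (2 * gamma), 3))   # → 0.125
```

The finite correlation time 1/γ is what makes the noise "coloured" and the induced relaxation dynamics non-Markovian.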

  17. MMC with parallel-connected MOSFETs as an alternative to wide bandgap converters for LVDC distribution networks

    Directory of Open Access Journals (Sweden)

    Yanni Zhong

    2017-03-01

    Full Text Available Low-voltage direct-current (LVDC) networks offer improved conductor utilisation on existing infrastructure and reduced conversion stages, which can lead to a simpler and more efficient distribution network. However, LVDC networks must continue to support AC loads, requiring efficient, low-distortion DC–AC converters. Additionally, increasing numbers of DC loads on the LVAC network require controlled, low-distortion, unity-power-factor AC–DC converters with large capacity and bi-directional capability. An AC–DC/DC–AC converter design is therefore proposed in this study to minimise conversion loss and maximise power quality. Comparative analysis is performed for a conventional IGBT two-level converter, a SiC MOSFET two-level converter, a Si MOSFET modular multi-level converter (MMC) and a GaN HEMT MMC, in terms of power loss, reliability, fault tolerance, converter cost and heatsink size. The analysis indicates that the five-level MMC with parallel-connected Si MOSFETs is an efficient, cost-effective converter for low-voltage converter applications. MMC converters suffer negligible switching loss, which enables reduced device switching without a loss penalty from increased harmonics and filtering. The optimal extent of parallel connection for MOSFETs in an MMC is investigated. Experimental results are presented to show the reduction in device stress and electromagnetic-interference-generating transients through the use of reduced switching and device parallel-connection.

  18. Minimum time control of a pair of two-level quantum systems with opposite drifts

    International Nuclear Information System (INIS)

    Romano, Raffaele; D’Alessandro, Domenico

    2016-01-01

    In this paper we solve two equivalent time optimal control problems. On one hand, we design the control field to implement in minimum time the SWAP (or equivalent) operator on a two-level system, assuming that it interacts with an additional, uncontrollable, two-level system. On the other hand, we synthesize the SWAP operator simultaneously, in minimum time, on a pair of two-level systems subject to opposite drifts. We assume that it is possible to perform three independent control actions, and that the total control strength is bounded. These controls either affect the dynamics of the target system, under the first perspective, or, simultaneously, the dynamics of both systems, in the second view. We obtain our results by using techniques of geometric control theory on Lie groups. In particular, we apply the Pontryagin maximum principle, and provide a complete characterization of singular and nonsingular extremals. Our analysis shows that the problem can be formulated as the motion of a material point in a central force, a well known system in classical mechanics. Although we focus on obtaining the SWAP operator, many of the ideas and techniques developed in this work apply to the time optimal implementation of an arbitrary unitary operator. (paper)

  19. Two-Level Micro-to-Nanoscale Hierarchical TiO2 Nanolayers on Titanium Surface

    Directory of Open Access Journals (Sweden)

    Elena G. Zemtsova

    2016-12-01

    Full Text Available Joint replacement is being actively developed within modern orthopedics. Bioactive coatings are one novel material providing fast implantation. The synthesis of targeted nanocoatings on a metallic nanotitanium surface is reported in this paper. TiO2-based micro- and nanocoatings were produced by sol-gel synthesis using dip-coating technology with subsequent fast (shock) drying in hot-plate mode at 400 °C. As a result of shock drying, a two-level hierarchical TiO2 nanolayer on the nanotitanium was obtained. This two-level hierarchy includes the nanorelief of the porous xerogel and the microrelief of the micron-sized “defect” network (a crack network). The thickness of the TiO2 nanolayers was controlled by repeating the dip-coating process the necessary number of times after the first layer deposition. The state of the MC3T3-E1 osteoblast cell line (young cells that form bone tissue) on the two-level hierarchical surface has been studied. In particular, adhesion character, adhesion time and morphology have been studied. The reported results may serve as the starting point for the development of novel bioactive coatings for bone and teeth implants.

  20. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of...... in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.

  1. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
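The SENSE-type unfolding described above can be sketched in a toy one-dimensional setting: with acceleration factor R = 2 and two coils, each aliased pixel is a sensitivity-weighted sum of two true pixels, which a small linear solve per pixel recovers. The sensitivity profiles below are invented for illustration; clinical reconstructions work on 2-D k-space data with measured sensitivity maps.

```python
import numpy as np

# Toy 1-D SENSE unfolding sketch (illustrative, not a clinical method).
rng = np.random.default_rng(1)
N = 8                               # image rows; R = 2 halves the FOV
truth = rng.random(N)               # the "image" we want to recover
s1 = np.linspace(1.0, 0.2, N)       # assumed smooth sensitivity, coil 1
s2 = np.linspace(0.2, 1.0, N)       # assumed sensitivity, coil 2

# Aliased (folded) signals seen by each coil after 2x undersampling
half = N // 2
a1 = s1[:half] * truth[:half] + s1[half:] * truth[half:]
a2 = s2[:half] * truth[:half] + s2[half:] * truth[half:]

# Unfold: one 2x2 linear system per aliased pixel
recon = np.empty(N)
for i in range(half):
    S = np.array([[s1[i], s1[i + half]],
                  [s2[i], s2[i + half]]])
    x = np.linalg.solve(S, [a1[i], a2[i]])
    recon[i], recon[i + half] = x

print(np.allclose(recon, truth))  # → True
```

The conditioning of each small system depends on how distinct the coil sensitivities are at the folded locations, which is why real scans report a geometry (g) factor.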

  2. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
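One of the patterns named above, the prefix scan, can be sketched with the classic Hillis-Steele algorithm; each pass of the loop is what all processors would do simultaneously on a parallel machine, with NumPy's vectorized shift standing in for that concurrency. This is a generic illustration, not code from the presentation.

```python
import numpy as np

# Hillis-Steele inclusive prefix scan: log2(n) passes, each pass adds a
# copy of the array shifted by a doubling stride. On a parallel machine
# every element is updated concurrently within a pass.
def inclusive_scan(a):
    a = np.array(a, copy=True)
    step = 1
    while step < len(a):
        shifted = np.concatenate([np.zeros(step, a.dtype), a[:-step]])
        a = a + shifted          # all positions updated "in parallel"
        step *= 2
    return a

print(inclusive_scan([1, 2, 3, 4]).tolist())  # → [1, 3, 6, 10]
```

This variant is work-inefficient (O(n log n) additions) but has the shallow O(log n) depth that makes scans a staple parallel pattern.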

  3. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  4. The FORCE: A highly portable parallel programming language

    Science.gov (United States)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    Here, it is explained why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low level machine dependencies and to build machine-independent high level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared memory multiprocessor executing them.

  5. The FORCE - A highly portable parallel programming language

    Science.gov (United States)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    This paper explains why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.

  6. Rapid characterization of microscopic two-level systems using Landau-Zener transitions in a superconducting qubit

    International Nuclear Information System (INIS)

    Tan, Xinsheng; Yu, Haifeng; Yu, Yang; Han, Siyuan

    2015-01-01

    We demonstrate a fast method to detect microscopic two-level systems in a superconducting phase qubit. By monitoring the population leak after sweeping the qubit bias flux, we are able to measure the two-level systems that are coupled with the qubit. Compared with the traditional method that detects two-level systems by energy spectroscopy, our method is faster and more sensitive. This method supplies a useful tool to investigate two-level systems in solid-state qubits

  7. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  8. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90
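The reported figures can be sanity-checked with Amdahl's law: with a parallel/vector fraction p = 0.99, overall speedup is 1/((1 − p) + p/s) for a per-section speedup s, capped at 1/(1 − p) = 100× no matter how fast the vectorized part runs, so 65× and 81× are consistent. The value s = 180 below is an invented figure chosen only to land near the reported 65×.

```python
# Amdahl's law: serial fraction (1 - p) bounds the achievable speedup.
def amdahl(p, s):
    return 1.0 / ((1.0 - p) + p / s)

p = 0.99
print(round(amdahl(p, 1e9)))   # → 100 (asymptotic cap for p = 0.99)
print(round(amdahl(p, 180)))   # → 65 (illustrative s near the reported speedup)
```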

  9. Simulation of neutron transport equation using parallel Monte Carlo for deep penetration problems

    International Nuclear Information System (INIS)

    Bekar, K. K.; Tombakoglu, M.; Soekmen, C. N.

    2001-01-01

    Neutron transport equation is simulated using parallel Monte Carlo method for deep penetration neutron transport problem. Monte Carlo simulation is parallelized by using three different techniques; direct parallelization, domain decomposition and domain decomposition with load balancing, which are used with PVM (Parallel Virtual Machine) software on LAN (Local Area Network). The results of parallel simulation are given for various model problems. The performances of the parallelization techniques are compared with each other. Moreover, the effects of variance reduction techniques on parallelization are discussed
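The "direct parallelization" technique mentioned above — independent random histories tallied by separate workers and combined at the end — can be sketched with a toy Monte Carlo estimate of π. In a real PVM/MPI run each seeded stream below would be its own task on a separate processor rather than a loop iteration; the particulars (seeds, history counts) are illustrative.

```python
import numpy as np

# Each "worker" gets an independent seeded stream and a share of the
# histories; partial tallies are summed at the end, as in directly
# parallelized Monte Carlo transport codes.
def worker(seed, n):
    rng = np.random.default_rng(seed)
    x, y = rng.random(n), rng.random(n)
    return np.count_nonzero(x * x + y * y < 1.0)   # hits in quarter circle

n_workers, n_per_worker = 4, 100_000
hits = sum(worker(seed, n_per_worker) for seed in range(n_workers))
pi_est = 4.0 * hits / (n_workers * n_per_worker)
print(round(pi_est, 3))
```

Because the streams are independent, this scheme is embarrassingly parallel; load balancing and domain decomposition, as the abstract notes, matter once geometry or memory must be split across processors.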

  10. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  11. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
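A blocked, scattered-decomposition sieve of the kind described can be sketched as follows: a small serial sieve finds the primes up to √N, then each block of the remaining range is marked independently, so the blocks could be assigned to separate processors. This is a generic illustration, not the paper's hypercube implementation.

```python
import math

# Serial base sieve: primes up to `limit` by plain Eratosthenes.
def small_primes(limit):
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, int(math.isqrt(limit)) + 1):
        if flags[p]:
            flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
    return [i for i, f in enumerate(flags) if f]

# Blocked sieve: each while-iteration marks one block independently and
# could run on its own processor, since blocks share only the base primes.
def parallel_sieve(n, n_blocks=4):
    base = small_primes(math.isqrt(n))
    primes = [p for p in base if p <= n]
    block = (n - math.isqrt(n)) // n_blocks + 1
    lo = math.isqrt(n) + 1
    while lo <= n:                       # one "processor's" block per pass
        hi = min(lo + block - 1, n)
        flags = bytearray([1]) * (hi - lo + 1)
        for p in base:
            start = max(p * p, (lo + p - 1) // p * p)
            for m in range(start, hi + 1, p):
                flags[m - lo] = 0
        primes += [lo + i for i, f in enumerate(flags) if f]
        lo = hi + 1
    return primes

print(len(parallel_sieve(100)))  # → 25 primes up to 100
```

Only the O(√N) base primes need to be replicated on every processor, which is what makes the block decomposition efficient on ensemble machines.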

  12. Renormalization of correlations in a quasiperiodically forced two-level system: quadratic irrationals

    International Nuclear Information System (INIS)

    Mestel, B D; Osbaldestin, A H

    2004-01-01

    Generalizing from the case of golden mean frequency to a wider class of quadratic irrationals, we extend our renormalization analysis of the self-similarity of correlation functions in a quasiperiodically forced two-level system. We give a description of all piecewise-constant periodic orbits of an additive functional recurrence generalizing that present in the golden mean case. We establish a criterion for periodic orbits to be globally bounded, and also calculate the asymptotic height of the main peaks in the correlation function

  13. Quantum averaging and resonances: two-level atom in a one-mode classical laser field

    Directory of Open Access Journals (Sweden)

    M. Amniat-Talab

    2007-06-01

    Full Text Available We use a nonperturbative method based on quantum averaging and an adapted form of resonant transformations to treat the resonances of the Hamiltonian of a two-level atom interacting with a one-mode classical field in the Floquet formalism. We illustrate this method by extracting effective Hamiltonians of the system in the two regimes of weak and strong coupling. The results obtained in the strong-coupling regime are valid in the whole range of the coupling constant for the one-photon zero-field resonance.

  14. Characteristics of the 2011 Tohoku Tsunami and introduction of two level tsunamis for tsunami disaster mitigation.

    Science.gov (United States)

    Sato, Shinji

    2015-01-01

    Characteristics of the 2011 Tohoku Tsunami have been revealed by collaborative tsunami surveys extensively performed under the coordination of the Joint Tsunami Survey Group. The complex behaviors of the mega-tsunami were characterized by the unprecedented scale and the low occurrence frequency. The limitation and the performance of tsunami countermeasures were described on the basis of tsunami surveys, laboratory experiments and numerical analyses. These findings contributed to the introduction of two-level tsunami hazards to establish a new strategy for tsunami disaster mitigation, combining structure-based flood protection designed by the Level-1 tsunami and non-structure-based damage reduction planned by the Level-2 tsunami.

  15. Teleporting the one-qubit state via two-level atoms with spontaneous emission

    Energy Technology Data Exchange (ETDEWEB)

    Hu Mingliang, E-mail: mingliang0301@xupt.edu.cn, E-mail: mingliang0301@163.com [School of Science, Xi' an University of Posts and Telecommunications, Xi' an 710061 (China)

    2011-05-14

    We study quantum teleportation via two two-level atoms coupled collectively to a multimode vacuum field and prepared initially in different atomic states. We concentrated on the influence of the spontaneous emission, collective damping and dipole-dipole interaction of the atoms on fidelity dynamics of quantum teleportation and obtained the region of spatial distance between the two atoms over which the state can be teleported nonclassically. Moreover, we showed through concrete examples that entanglement of the channel state is the prerequisite but not the only essential quantity for predicting the teleportation fidelity.

  16. A January angular momentum balance in the OSU two-level atmospheric general circulation model

    Science.gov (United States)

    Kim, J.-W.; Grady, W.

    1982-01-01

    The present investigation is concerned with an analysis of the atmospheric angular momentum balance, based on the simulation data of the Oregon State University two-level atmospheric general circulation model (AGCM). An attempt is also made to gain an understanding of the involved processes. Preliminary results on the angular momentum and mass balance in the AGCM are shown. The basic equations are examined, and questions of turbulent momentum transfer are investigated. The methods of analysis are discussed, taking into account time-averaged balance equations, time and longitude-averaged balance equations, mean meridional circulation, the mean meridional balance of relative angular momentum, and standing and transient components of motion.

  17. Thermal analysis of two-level wind power converter under symmetrical grid fault

    DEFF Research Database (Denmark)

    Zhou, Dao; Blaabjerg, Frede

    2013-01-01

    In this paper, multi-MW wind turbines using partial-scale and full-scale two-level power converters are designed and investigated under a symmetrical grid fault. Firstly, the different operation behaviors of the relevant power converters under the voltage dip will be described...... ) condition as well as the junction temperature. For the full-scale wind turbine system, the most thermally stressed power device in the grid-side converter will appear at grid voltages below 0.5 pu, and for the partial-scale wind turbine system, the most thermally stressed power device in the rotor...

  18. Elimination of two level fluctuators in superconducting quantum bits by an epitaxial tunnel barrier

    International Nuclear Information System (INIS)

    Oh, Seongshik; Cicak, Katarina; Kline, Jeffrey S.; Sillanpaeae, Mika A.; Osborn, Kevin D.; Whittaker, Jed D.; Simmonds, Raymond W.; Pappas, David P.

    2006-01-01

    Quantum computing based on Josephson junction technology is considered promising due to its scalable architecture. However, decoherence is a major obstacle. Here, we report evidence for improved Josephson quantum bits (qubits) using a single-crystal Al2O3 tunnel barrier. We have found an ∼80% reduction in the density of the spectral splittings that indicate the existence of two-level fluctuators (TLFs) in amorphous tunnel barriers. The residual ∼20% of TLFs can be attributed to interfacial effects that may be further reduced by different electrode materials. These results show that decoherence sources in the tunnel barrier of Josephson qubits can be identified and eliminated

  19. Geometric manipulation of the quantum states of two-level atoms

    International Nuclear Information System (INIS)

    Tian, Mingzhen; Barber, Zeb W.; Fischer, Joe A.; Babbitt, Wm. Randall

    2004-01-01

    Manipulation of the quantum states of two-level atoms has been investigated using laser-controlled geometric phase change, which has the potential to build robust quantum logic gates for quantum computing. For a qubit based on two electronic transition levels of an atom, two basic quantum operations that can make any universal single qubit gate have been designed employing resonant laser pulses. An operation equivalent to a phase gate has been demonstrated using Tm3+ doped in a yttrium aluminum garnet crystal

  20. Doppler-Rabi oscillations of a two-level atom moving in a resonator

    International Nuclear Information System (INIS)

    Kozlovskij, A.V.

    2001-01-01

    The interaction of a two-level atom with the quantum mode of a high-quality resonator, while the atom moves uniformly along a classical trajectory, is considered. A recurrent formula for the probability of the atomic transition with photon emission is derived through the dressed-state method. It is shown that the ratio between the Doppler shift of the atomic transition and the Rabi frequency of the atom-field system qualitatively affects the dependence of the moving atom's transition probability on its position in the resonator, as well as on its value [ru

  1. Limitations of two-level emitters as nonlinearities in two-photon controlled-PHASE gates

    DEFF Research Database (Denmark)

    Nysteen, Anders; McCutcheon, Dara P. S.; Heuck, Mikkel

    2017-01-01

    We investigate the origin of imperfections in the fidelity of a two-photon controlled-PHASE gate based on two-level-emitter nonlinearities. We focus on a passive system that operates without external modulations to enhance its performance. We demonstrate that the fidelity of the gate is limited...... by opposing requirements on the input pulse width for one- and two-photon-scattering events. For one-photon scattering, the spectral pulse width must be narrow compared with the emitter linewidth, while two-photon-scattering processes require the pulse width and emitter linewidth to be comparable. We find...

  2. Probe transparency in a two-level medium embedded by a squeezed vacuum

    International Nuclear Information System (INIS)

    Swain, S.; Zhou, P.

    1994-08-01

    The effect of detuning on the probe absorption spectra of a two-level system, with and without a classically driven field, in a squeezed vacuum is investigated. For strong squeezing there is a threshold in the squeeze parameter M which determines the positions and widths of the absorption peaks. For large detuning, the spectra exhibit some resemblance to the Fano spectrum. Squeezing-induced transparency occurs at the frequency 2ω_L − ω_A in the minimum-uncertainty squeezed vacuum. This effect is not phase-sensitive. (author). 15 refs, 8 figs

  3. Automatic Management of Parallel and Distributed System Resources

    Science.gov (United States)

    Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.

    1990-01-01

    Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gaseous cellular automata; and sparse matrix Cholesky factorization.

  4. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  5. Identification of immiscible NAPL contaminant sources in aquifers by a modified two-level saturation based imperialist competitive algorithm

    Science.gov (United States)

    Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.

    2017-07-01

    A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Similar to countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the 'knock the base' method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as grid size, rock heterogeneity, and the designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations when compared with a model employing the classical one-level ICA. A model is proposed to identify characteristics of immiscible NAPL contaminant sources. The contaminant is immiscible in water and multi-phase flow is simulated. The model is a multi-level saturation-based optimization algorithm based on ICA. Each answer string in the second level is divided into a set of provinces. Each ICA is modified by incorporating a new knock-the-base model.

  6. SCREENING OF MEDIUM COMPOUNDS USING A TWO-LEVEL FACTORIAL DESIGN FOR SACCHAROMYCES BOULARDII

    Directory of Open Access Journals (Sweden)

    GUOWEI SHU

    2016-04-01

    Full Text Available Although the probiotic effect of Saccharomyces boulardii has been reported, this yeast is rarely studied with respect to medium composition. Based on a single-factor experiment, a two-level factorial design was employed to evaluate the effect of carbon sources (sucrose, glucose), nitrogen sources (soy peptone, beef extract, yeast extract, calf serum, malt extract) and salts (K2HPO4, KH2PO4, MgSO4, Na2HPO4, NaH2PO4, CaCl2, sodium citrate, sodium glutamate) on the growth of S. boulardii. At the same time, the optical density (OD) of the medium was measured at 560 nm after 36 h of incubation. The results of the two-level factorial design experiment showed that calf serum (p = 0.0214) and sodium citrate (p = 0.0045) are significant growth factors for S. boulardii, and that sucrose (p = 0.0861) and malt extract (p = 0.0763) are important factors. In addition, sucrose and sodium citrate showed a positive effect on the growth of S. boulardii, whereas calf serum and malt extract showed a negative effect. We determined the optimum medium composition for S. boulardii to be as follows: 37.5 g·L-1 sucrose, 6 g·L-1 calf serum, 6 g·L-1 malt extract, 5 g·L-1 sodium citrate.
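
    A two-level (2^k) factorial screening design like the one described can be sketched in a few lines of Python. The factor names below are taken from the abstract, but the design generator and main-effect calculation are a generic illustration, not the study's actual analysis:

```python
from itertools import product

def two_level_design(factors):
    """Full two-level factorial design: one run per row, factors coded -1/+1."""
    return [dict(zip(factors, levels))
            for levels in product((-1, 1), repeat=len(factors))]

def main_effect(runs, responses, factor):
    """Average response at the +1 level minus average response at the -1 level."""
    hi = [y for r, y in zip(runs, responses) if r[factor] == 1]
    lo = [y for r, y in zip(runs, responses) if r[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

factors = ["sucrose", "calf_serum", "sodium_citrate"]
design = two_level_design(factors)
print(len(design))  # 8 runs for 3 factors
```

    With k factors the design has 2^k runs, and each main effect is the difference between the average responses at the two levels of that factor; significance testing (the p-values quoted above) would then be done against the design's error estimate.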

  7. Improved Genetic Algorithm with Two-Level Approximation for Truss Optimization by Using Discrete Shape Variables

    Directory of Open Access Journals (Sweden)

    Shen-yan Chen

    2015-01-01

    Full Text Available This paper presents an Improved Genetic Algorithm with Two-Level Approximation (IGATA) to minimize truss weight by simultaneously optimizing size, shape, and topology variables. On the basis of a previously presented truss sizing/topology optimization method based on two-level approximation and a genetic algorithm (GA), a new method for adding shape variables is presented, in which the nodal positions correspond to a set of coordinate lists. A uniform optimization model including size/shape/topology variables is established. First, a first-level approximate problem is constructed to transform the original implicit problem into an explicit problem. To solve this explicit problem, which involves size/shape/topology variables, GA is used to optimize individuals comprising discrete topology variables and shape variables. When calculating the fitness value of each member of the current generation, a second-level approximation method is used to optimize the continuous size variables. With the introduction of shape variables, the original optimization algorithm is improved in its individual coding strategy as well as in its GA execution techniques. Meanwhile, the update strategy for the first-level approximate problem is also improved. The results of numerical examples show that the proposed method is effective in dealing with the three kinds of design variables simultaneously, and the required computational cost for structural analysis is quite small.
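
    As a rough illustration of the outer GA layer only (IGATA's two-level approximations and the structural analysis are omitted), the following toy sketch evolves mixed discrete topology/size variables against a made-up "weight" objective; every name, number, and the fitness function are illustrative assumptions:

```python
import random

random.seed(1)

SIZES = [1.0, 2.0, 3.5, 5.0]   # discrete cross-section choices (illustrative)
N_MEMBERS = 6                  # members in the toy "truss"

def weight(ind):
    # stand-in objective: total size of active members, penalised if too few remain
    active = [s for s, on in zip(ind["sizes"], ind["topo"]) if on]
    penalty = 100.0 if len(active) < 3 else 0.0
    return sum(active) + penalty

def random_individual():
    return {"topo": [random.randint(0, 1) for _ in range(N_MEMBERS)],
            "sizes": [random.choice(SIZES) for _ in range(N_MEMBERS)]}

def crossover(a, b):
    cut = random.randrange(1, N_MEMBERS)   # one-point crossover on both chromosomes
    return {"topo": a["topo"][:cut] + b["topo"][cut:],
            "sizes": a["sizes"][:cut] + b["sizes"][cut:]}

def mutate(ind, rate=0.1):
    for i in range(N_MEMBERS):
        if random.random() < rate:
            ind["topo"][i] ^= 1
        if random.random() < rate:
            ind["sizes"][i] = random.choice(SIZES)

pop = [random_individual() for _ in range(30)]
for _ in range(40):
    pop.sort(key=weight)
    parents = pop[:10]                     # truncation selection, parents kept (elitism)
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(20)]
    for c in children:
        mutate(c)
    pop = parents + children

best = min(pop, key=weight)
print(weight(best))
```

    In IGATA the fitness evaluation would itself invoke the second-level approximation to optimize the continuous size variables, rather than reading them directly from the chromosome as done here.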

  8. Two-level method for unsteady Navier-Stokes equations based on a new projection

    International Nuclear Information System (INIS)

    Hou Yanren; Li Kaitai

    2004-12-01

    A two-level algorithm for the two-dimensional unsteady Navier-Stokes equations based on a new projection is proposed and investigated. The approximate solution is solved as a sum of a large-eddy component and a small-eddy component, which are defined in the sense of the new projection constructed in this paper. These two terms advance in time explicitly. Actually, the new algorithm proposed here can be regarded as a sort of postprocessing algorithm for the standard Galerkin method (SGM). The large-eddy part is solved by SGM in the usual L²-based large-eddy subspace, while the small-eddy part (the correction part) is obtained in its complement subspace in the sense of the new projection. The stability analysis indicates improved stability compared with SGM of the same scale, and the L²-error estimate shows that the scheme can improve the accuracy of the SGM approximation by half an order. We also propose a numerical implementation based on Lagrange multipliers for this two-level algorithm. (author)

  9. Development and evaluation of a two-level functional structure for the thin film encapsulation

    International Nuclear Information System (INIS)

    Lee, Jae-Wung; Sharma, Jaibir; Singh, Navab; Kwong, Dim-Lee

    2013-01-01

    This paper reports a two-level capping structure for encapsulating micro-electro-mechanical system (MEMS) devices. The two-level capping addresses the main issues of thin film encapsulation (TFE): long release time and safe sealing. In this technique, the first cap layer has many etch holes uniformly distributed on it to enhance removal of the sacrificial layer. The second cap layer forms a cap over every etch hole in the first cap layer to prevent mass loading on the MEMS devices. This technique was found to be very effective in reducing the release time of the TFE. For a 1200 µm × 1200 µm cavity encapsulation, it decreases the release time of the TFE by a factor of 24 in comparison to the sidewall-located channel scheme. The presented technique also helps in reducing the size of the TFE, as the etch holes are uniformly distributed on the TFE itself; wide seal rings are not required to accommodate sidewall channels. (paper)

  10. Two-level Robust Measurement Fusion Kalman Filter for Clustering Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Peng; QI Wen-Juan; DENG Zi-Li

    2014-01-01

    This paper investigates distributed fusion Kalman filtering over clustering sensor networks. The sensor network is partitioned into clusters by the nearest-neighbor rule, and each cluster consists of sensing nodes and a cluster-head. Using the minimax robust estimation principle, based on the worst-case conservative system with conservative upper bounds on the noise variances, a two-level robust measurement fusion Kalman filter is presented for clustering sensor network systems with uncertain noise variances. It can significantly reduce the communication load and save energy when the number of sensors is very large. A Lyapunov equation approach for the robustness analysis is presented, by which the robustness of the local and fused Kalman filters is proved. The concept of robust accuracy is presented, and the robust accuracy relations among the local and fused robust Kalman filters are proved. It is proved that the robust accuracy of the two-level weighted measurement fuser is equal to that of the global centralized robust fuser and is higher than those of each local robust filter and each local weighted measurement fuser. A simulation example shows the correctness and effectiveness of the proposed results.
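
    The claimed equality between the two-level weighted measurement fuser and the centralized fuser can be illustrated in a heavily simplified static, scalar setting (inverse-variance fusion of independent measurements of one quantity; the numbers are made up, and none of the robust Kalman machinery is modeled):

```python
def fuse(estimates, variances):
    """Inverse-variance (weighted least-squares) fusion of scalar estimates."""
    w = [1.0 / v for v in variances]
    var = 1.0 / sum(w)
    est = var * sum(wi * zi for wi, zi in zip(w, estimates))
    return est, var

# measurements of the same scalar, grouped into two clusters
clusters = [([1.1, 0.9, 1.05], [0.4, 0.5, 0.3]),
            ([1.2, 0.95],      [0.6, 0.2])]

# level 1: fuse inside each cluster (done at the cluster-heads)
heads = [fuse(z, r) for z, r in clusters]

# level 2: fuse the cluster-head estimates
two_level, two_level_var = fuse([e for e, _ in heads], [v for _, v in heads])

# centralized fusion of all raw measurements, for comparison
all_z = [z for zs, _ in clusters for z in zs]
all_r = [r for _, rs in clusters for r in rs]
central, central_var = fuse(all_z, all_r)

print(abs(two_level - central) < 1e-12, abs(two_level_var - central_var) < 1e-12)
```

    Because inverse-variance fusion is associative, fusing within clusters and then across cluster-heads reproduces the centralized result exactly, which is the static analogue of the accuracy equivalence stated above.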

  11. Ramsey interferometry with a two-level generalized Tonks-Girardeau gas

    International Nuclear Information System (INIS)

    Mousavi, S. V.; Campo, A. del; Lizuain, I.; Muga, J. G.

    2007-01-01

    We propose a solvable generalization of the Tonks-Girardeau model that describes a coherent one-dimensional (1D) gas of cold two-level bosons which interact with two external fields in a Ramsey interferometer. They also interact among themselves by idealized, infinitely strong contact potentials, with interchange of momentum and internal state. We study the corresponding Ramsey fringes and the quantum projection noise which, essentially unaffected by the interactions, remains that for ideal bosons. The dual system of this gas, an ideal gas of two-level fermions coupled by the interaction with the separated fields, produces the same fringes and noise fluctuations. The cases of time-separated and spatially separated fields are studied. For spatially separated fields the fringes may be broadened slightly by increasing the number of particles, but only for particle numbers far larger than in present experiments with Tonks-Girardeau gases. The uncertainty in the determination of the atomic transition frequency diminishes essentially with the inverse root of the particle number. The difficulties of implementing the model experimentally and possible shortcomings of strongly interacting 1D gases for frequency standards and atomic clocks are discussed

  12. Detuning-induced stimulated Raman adiabatic passage in dense two-level systems

    Science.gov (United States)

    Deng, Li; Lin, Gongwei; Niu, Yueping; Gong, Shangqing

    2018-05-01

    We investigate coherence generation in dense two-level systems under detuning-induced stimulated Raman adiabatic passage (D-STIRAP). In a dense two-level system, the near dipole-dipole (NDD) interaction must be taken into consideration. With increasing strength of the NDD interaction, a switchlike transition of the generated coherence from its maximum value to zero is found to appear. Meanwhile, the adiabatic condition of the D-STIRAP is destroyed in the presence of the NDD interaction. In order to avoid the sudden decrease in the generated coherence and maintain the maximum value, a stronger detuning pulse or pump pulse can be used, of which increasing the intensity of the detuning pulse is the more efficient. Besides taking advantage of such maximum coherence in the high-density case in areas like enhancing the four-wave-mixing process, we also point out that the phenomenon of the coherence transition can be applied as an optical switch.

  13. Dynamical model of coherent circularly polarized optical pulse interactions with two-level quantum systems

    International Nuclear Information System (INIS)

    Slavcheva, G.; Hess, O.

    2005-01-01

    We propose and develop a method for theoretical description of circularly (elliptically) polarized optical pulse resonant coherent interactions with two-level atoms. The method is based on the time-evolution equations of a two-level quantum system in the presence of a time-dependent dipole perturbation for electric dipole transitions between states with total angular-momentum projection difference (ΔJ_z = ±1) excited by a circularly polarized electromagnetic field [Feynman et al., J. Appl. Phys. 28, 49 (1957)]. The adopted real-vector representation approach allows for coupling with the vectorial Maxwell's equations for the optical wave propagation and thus the resulting Maxwell pseudospin equations can be numerically solved in the time domain without any approximations. The model permits a more exact study of the ultrafast coherent pulse propagation effects taking into account the vector nature of the electromagnetic field and hence the polarization state of the optical excitation. We demonstrate self-induced transparency effects and formation of polarized solitons. The model represents a qualitative extension of the well-known optical Maxwell-Bloch equations valid for linearly polarized light and a tool for studying coherent quantum control mechanisms

  14. Digital parallel-to-series pulse-train converter

    Science.gov (United States)

    Hussey, J.

    1971-01-01

    Circuit converts number represented as two level signal on n-bit lines to series of pulses on one of two lines, depending on sign of number. Converter accepts parallel binary input data and produces number of output pulses equal to number represented by input data.
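
    A behavioral sketch of such a converter in Python (assuming a two's-complement input word, which the record does not specify):

```python
def parallel_to_series(bits):
    """n-bit two's-complement parallel word -> (output line, pulse count).

    Emits |value| pulses on the 'plus' line for non-negative input,
    and on the 'minus' line for negative input.
    """
    n = len(bits)
    value = int("".join(str(b) for b in bits), 2)
    if bits[0] == 1:            # sign bit set: interpret as two's complement
        value -= 1 << n
    line = "minus" if value < 0 else "plus"
    return line, abs(value)

print(parallel_to_series([0, 1, 0, 1]))  # ('plus', 5)
print(parallel_to_series([1, 0, 1, 1]))  # ('minus', 5)
```

    The hardware circuit would emit the pulses serially over time; here the pulse train is summarized as a count on the selected line.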

  15. Parallel S/sub n/ iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (S/sub n/) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional S/sub n/ transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial S/sub n/ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial

  16. Computer simulation of two-level pedicle subtraction osteotomy for severe thoracolumbar kyphosis in ankylosing spondylitis

    Directory of Open Access Journals (Sweden)

    Ning Zhang

    2017-01-01

    Full Text Available Background: Advanced ankylosing spondylitis (AS) is often associated with thoracolumbar kyphosis, resulting in an abnormal spinopelvic balance and pelvic morphology. Different osteotomy techniques have been used to correct AS deformities; unfortunately, not all AS patients gain spinal sagittal balance and good horizontal vision after osteotomy. Materials and Methods: Fourteen consecutive AS patients with severe thoracolumbar kyphosis who were treated with two-level pedicle subtraction osteotomy (PSO) were studied retrospectively. All were male, with a mean age of 34.9 ± 9.6 years. The followup ranged from 1 to 5 years. Preoperative computer simulations using the Surgimap Spinal software were performed for all patients, and the osteotomy level and angle determined from the computer simulation were used surgically. Spinal sagittal parameters were measured preoperatively, after the computer simulation, and postoperatively, and included thoracic kyphosis (TK), lumbar lordosis (LL), sagittal vertical axis (SVA), pelvic incidence, pelvic tilt (PT), and sacral slope (SS). The level of correlation between the computer simulation and postoperative parameters was evaluated, and the differences between preoperative and postoperative parameters were compared. The visual analog scale (VAS) for back pain and clinical outcome was also assessed. Results: Six cases underwent PSO at L1 and L3, five cases at L2 and T12, and three cases at L3 and T12. TK was corrected from 57.8 ± 15.2° preoperatively to 45.3 ± 7.7° postoperatively (P < 0.05), LL from 9.3 ± 17.5° to −52.3 ± 3.9° (P < 0.001), SVA from 154.5 ± 36.7 to 37.8 ± 8.4 mm (P < 0.001), PT from 43.3 ± 6.1° to 18.0 ± 0.9° (P < 0.001), and SS from 0.8 ± 7.0° to 26.5 ± 10.6° (P < 0.001). The LL, VAS, and PT of the simulated two-level PSO were highly consistent with, or almost the same as, the postoperative parameters. The correlations between the computer simulations and postoperative parameters were significant. The VAS decreased

  17. Comprehensive solutions to the Bloch equations and dynamical models for open two-level systems

    Science.gov (United States)

    Skinner, Thomas E.

    2018-01-01

    The Bloch equation and its variants constitute the fundamental dynamical model for arbitrary two-level systems. Many important processes, including those in more complicated systems, can be modeled and understood through the two-level approximation. It is therefore of widespread relevance, especially as it relates to understanding dissipative processes in current cutting-edge applications of quantum mechanics. Although the Bloch equation has been the subject of considerable analysis in the 70 years since its inception, there is still, perhaps surprisingly, significant work that can be done. This paper extends the scope of previous analyses. It provides a framework for more fully understanding the dynamics of dissipative two-level systems. A solution is derived that is compact, tractable, and completely general, in contrast to previous results. Any solution of the Bloch equation depends on three roots of a cubic polynomial that are crucial to the time dependence of the system. The roots are typically only sketched out qualitatively, with no indication of their dependence on the physical parameters of the problem. Degenerate roots, which modify the solutions, have been ignored altogether. Here the roots are obtained explicitly in terms of a single real-valued root that is expressed as a simple function of the system parameters. For the conventional Bloch equation, a simple graphical representation of this root is presented that makes evident the explicit time dependence of the system for each point in the parameter space. Several intuitive, visual models of system dynamics are developed. A Euclidean coordinate system is identified in which any generalized Bloch equation is separable, i.e., the sum of commuting rotation and relaxation operators. The time evolution in this frame is simply a rotation followed by relaxation at modified rates that play a role similar to the standard longitudinal and transverse rates. These rates are functions of the applied field, which
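
    The cubic-root structure mentioned above can be illustrated numerically: for the homogeneous rotating-frame Bloch matrix (illustrative parameters; the drive toward thermal equilibrium is omitted), the transient rates are its eigenvalues, the roots of a real cubic, so at least one root is always real and all lie in the left half-plane:

```python
import numpy as np

# rotating-frame Bloch relaxation/rotation matrix (illustrative parameters)
T1, T2 = 1.0, 0.5           # longitudinal / transverse relaxation times
delta, omega1 = 2.0, 3.0    # detuning and Rabi frequency (rad/s)

A = np.array([[-1 / T2,  delta,    0.0],
              [-delta,  -1 / T2,   omega1],
              [0.0,     -omega1,  -1 / T1]])

# the transient time dependence is governed by the roots of det(A - s I) = 0,
# a cubic with real coefficients, so at least one root is always real
roots = np.linalg.eigvals(A)
real_roots = [s for s in roots if abs(s.imag) < 1e-9]
print(len(real_roots) >= 1, all(s.real < 0 for s in roots))
```

    The negative real parts reflect stable relaxation; the paper's contribution is expressing these roots explicitly in terms of a single real-valued root as a function of the system parameters, rather than computing them numerically as done here.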

  18. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)

  19. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  20. Canyon of current suppression in an interacting two-level quantum dot

    DEFF Research Database (Denmark)

    Karlström, O; Pedersen, Jonas Nyvold; Samuelsson, P

    2011-01-01

    Motivated by the recent discovery of a canyon of conductance suppression in a two-level equal-spin quantum dot system [Phys. Rev. Lett. 104, 186804 (2010)], the transport through this system is studied in detail. At low bias and low temperature a strong current suppression is found around...... the electron-hole symmetry point independent of the couplings, in agreement with previous results. By means of a Schrieffer–Wolff transformation we are able to give an intuitive explanation to this suppression in the low-energy regime. In the general situation, numerical simulations are carried out using...... for the current suppression. It is also shown how broadening, interference, and a finite interaction energy cause a shift of the current minimum away from degeneracy. Finally we see how an increased population of the upper level leads to current peaks on each side of the suppression line. At sufficiently high...

  1. Propagation of an attosecond pulse in a dense two-level medium

    International Nuclear Information System (INIS)

    Song Xiaohong; Gong Shangqing; Yang Weifeng; Xu Zhizhan

    2004-01-01

    We investigate the propagation of an attosecond pulse in a dense two-level medium by using an iterative predictor-corrector finite-difference time-domain method. We find that, when an attosecond pulse is considered, the standard area theorem breaks down even for small-area pulses: ideal self-induced transparency cannot occur even for a 2π pulse, while pulses whose areas are not integer multiples of 2π, such as 1.8π and 2.2π pulses, cannot evolve into 2π pulses as predicted by the standard area theorem. Significantly higher spectral components can occur on all these small-area propagating pulses due to strong carrier reshaping. Furthermore, these higher spectral components depend sensitively on the pulse area: the larger the pulse area, the more evident these higher spectral components become

  2. Transmission-line resonators for the study of individual two-level tunneling systems

    Science.gov (United States)

    Brehm, Jan David; Bilmes, Alexander; Weiss, Georg; Ustinov, Alexey V.; Lisenfeld, Jürgen

    2017-09-01

    Parasitic two-level tunneling systems (TLS) emerge in amorphous dielectrics and constitute a serious nuisance for various microfabricated devices, where they act as a source of noise and decoherence. Here, we demonstrate a new test bed for the study of TLS in various materials which provides access to properties of individual TLS as well as their ensemble response. We terminate a superconducting transmission-line resonator with a capacitor that hosts TLS in its dielectric. By tuning TLS via applied mechanical strain, we observe the signatures of individual TLS strongly coupled to the resonator in its transmission characteristics and extract the coupling components of their dipole moments and energy relaxation rates. The strong and well-defined coupling to the TLS bath results in pronounced resonator frequency fluctuations and excess phase noise, through which we can study TLS ensemble effects such as spectral diffusion, and probe theoretical models of TLS interactions.

  3. Entanglement for a Bimodal Cavity Field Interacting with a Two-Level Atom

    International Nuclear Information System (INIS)

    Liu Jia; Chen Ziyu; Bu Shenping; Zhang Guofeng

    2009-01-01

    Negativity has been adopted to investigate the entanglement in a system composed of a two-level atom and a two-mode cavity field. Effects of a Kerr-like medium and of the number of photons inside the cavity on the entanglement are studied. Our results show that the atomic initial state must be superposed so that the two cavity field modes can be entangled. Moreover, we also conclude that the number of photons in the two cavity modes should be equal. The interaction between modes, namely the Kerr effect, has a significant negative contribution. Note that the atom frequency and the cavity frequency have an indistinguishable effect, so a corresponding approximation has been made in this article. These results may be useful for quantum information in optical systems.

  4. Interacting two-level defects as sources of fluctuating high-frequency noise in superconducting circuits

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Clemens [ARC Centre of Excellence for Engineered Quantum Systems, The University of Queensland, Brisbane (Australia); Lisenfeld, Juergen [Physikalisches Institut, Karlsruhe Institute of Technology, Karlsruhe (Germany); Shnirman, Alexander [Institut fuer Theory der Kondensierten Materie, Karlsruhe Institute of Technology, Karlsruhe (Germany); LD Landau Institute for Theoretical Physics, Moscow (Russian Federation); Poletto, Stefano [IBM TJ Watson Research Centre, Yorktown Heights (United States)

    2016-07-01

    Since the very first experiments, superconducting circuits have suffered from strong coupling to environmental noise, destroying quantum coherence and degrading performance. In state-of-the-art experiments, it is found that the relaxation time of superconducting qubits fluctuates as a function of time. We present measurements of such fluctuations in a 3D-transmon circuit and develop a qualitative model based on interactions within a bath of background two-level systems (TLS) which emerge from defects in the device material. In our model, the time-dependent noise density acting on the qubit emerges from its near-resonant coupling to high-frequency TLS which experience energy fluctuations due to their interaction with thermally fluctuating TLS at low frequencies. We support the model by providing experimental evidence of such energy fluctuations observed in a single TLS in a phase qubit circuit.

  5. Two-Level Hierarchical FEM Method for Modeling Passive Microwave Devices

    Science.gov (United States)

    Polstyanko, Sergey V.; Lee, Jin-Fa

    1998-03-01

    In recent years multigrid methods have been proven to be very efficient for solving large systems of linear equations resulting from the discretization of positive definite differential equations by either the finite difference method or the h-version of the finite element method. In this paper an iterative method of the multiple level type is proposed for solving systems of algebraic equations which arise from the p-version of the finite element analysis applied to indefinite problems. A two-level V-cycle algorithm has been implemented and studied with a Gauss-Seidel iterative scheme used as a smoother. The convergence of the method has been investigated, and numerical results for a number of numerical examples are presented.
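
    The paper's solver targets indefinite systems from the p-version FEM; as a minimal sketch of the two-level V-cycle idea with a Gauss-Seidel smoother, here is the textbook two-grid cycle for the 1D Poisson problem (grid sizes, sweep counts, and the injection/linear-interpolation transfer operators are illustrative choices, not the paper's):

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    """Gauss-Seidel smoothing for -u'' = f on a uniform grid, u[0] = u[-1] = 0."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def two_grid(u, f, h, sweeps=3):
    """One two-level V-cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = gauss_seidel(u, f, h, sweeps)
    r = np.zeros_like(u)                       # residual r = f + u'' (discrete)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    rc = r[::2].copy()                         # restrict residual by injection
    ec = np.zeros_like(rc)
    ec = gauss_seidel(ec, rc, 2 * h, 200)      # approximately solve coarse problem
    e = np.interp(np.arange(len(u)),           # prolongate correction linearly
                  np.arange(0, len(u), 2), ec)
    u += e
    return gauss_seidel(u, f, h, sweeps)

n = 65                                         # fine-grid points (2^k + 1)
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)               # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))) < 1e-2)
```

    The smoother damps high-frequency error on the fine grid, while the coarse-grid correction removes the smooth error components; the paper's method applies the same structure to p-version hierarchies and indefinite operators.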

  6. Estimation of Missing Observations in Two-Level Split-Plot Designs

    DEFF Research Database (Denmark)

    Almimi, Ashraf A.; Kulahci, Murat; Montgomery, Douglas C.

    2008-01-01

    Inserting estimates for the missing observations from split-plot designs restores their balanced or orthogonal structure and alleviates the difficulties in the statistical analysis. In this article, we extend a method due to Draper and Stoneman to estimate the missing observations from unreplicated...... two-level factorial and fractional factorial split-plot (FSP and FFSP) designs. The missing observations, which can either be from the same whole plot, from different whole plots, or comprise entire whole plots, are estimated by equating to zero a number of specific contrast columns equal...... to the number of the missing observations. These estimates are inserted into the design table and the estimates for the remaining effects (or alias chains of effects as the case with FFSP designs) are plotted on two half-normal plots: one for the whole-plot effects and the other for the subplot effects...
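
    The core of the Draper-Stoneman idea, estimating a missing observation by equating an assumed-negligible contrast to zero, fits in a few lines. The response data below are hypothetical, and the three-factor interaction ABC is chosen as the sacrificed contrast (split-plot error structure is not modeled here):

```python
from itertools import product

# 2^3 full factorial in standard order; factors A, B, C coded -1/+1
runs = list(product((-1, 1), repeat=3))
abc = [a * b * c for a, b, c in runs]   # three-factor interaction contrast column

y = [45.0, 71.0, 48.0, 65.0, 68.0, 60.0, 80.0, None]   # last run missing

# estimate the missing response so that the ABC contrast equals zero
m = y.index(None)
estimate = -sum(c * yi for c, yi in zip(abc, y) if yi is not None) / abc[m]
print(estimate)
```

    With several missing observations, one such equation is written per sacrificed contrast column and the resulting linear system is solved for all missing values simultaneously, which is the extension the abstract describes.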

  7. Two-Level Verification of Data Integrity for Data Storage in Cloud Computing

    Science.gov (United States)

    Xu, Guangwei; Chen, Chunlin; Wang, Hongya; Zang, Zhuping; Pang, Mugen; Jiang, Ping

    Data storage in cloud computing can save capital expenditure and relieve the burden of storage management for users. As loss or corruption of stored files may happen, many researchers focus on the verification of data integrity. However, massive numbers of users often bring large numbers of verification tasks for the auditor. Moreover, users also need to pay an extra fee for these verification tasks beyond the storage fee. Therefore, we propose a two-level verification of data integrity to alleviate these problems. The key idea is for users to routinely verify the data integrity themselves and for the auditor to arbitrate any challenge between the user and the cloud provider according to the MACs and ϕ values. Extensive performance simulations show that the proposed scheme obviously decreases the auditor's verification tasks and the ratio of wrong arbitrations.
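
    A heavily simplified sketch of the two verification levels using MACs (in the actual scheme the auditor arbitrates from the MACs and ϕ values without holding the user's key; here, for brevity, the key is shared and all names are hypothetical):

```python
import hmac
import hashlib

def mac(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

# the user stores a file block in the cloud and keeps only the key and the MAC
key = b"user-secret-key"
block = b"important file block"
stored_mac = mac(key, block)

# level 1: the user routinely re-fetches the block and verifies it directly
def user_verify(fetched_block):
    return hmac.compare_digest(mac(key, fetched_block), stored_mac)

# level 2: on a dispute, the auditor re-checks the evidence and arbitrates
def auditor_arbitrate(fetched_block, claimed_mac):
    ok = hmac.compare_digest(mac(key, fetched_block), claimed_mac)
    return "provider holds intact data" if ok else "data corrupted"

print(user_verify(block))                                # True
print(auditor_arbitrate(b"tampered block", stored_mac))  # data corrupted
```

    Routine checks stay with the users (level 1), so the auditor is only invoked for disputed blocks (level 2), which is what reduces the auditor's workload in the proposed scheme.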

  8. Nonlinear Jaynes–Cummings model for two interacting two-level atoms

    International Nuclear Information System (INIS)

    Santos-Sánchez, O de los; González-Gutiérrez, C; Récamier, J

    2016-01-01

    In this work we examine a nonlinear version of the Jaynes–Cummings model for two identical two-level atoms allowing for Ising-like and dipole–dipole interactions between them. The model is said to be nonlinear in the sense that it can incorporate both a general intensity-dependent interaction between the atomic system and the cavity field and/or the presence of a nonlinear medium inside the cavity. As an example, we consider a particular type of atom-field coupling based upon the so-called Buck–Sukumar model and a lossless Kerr-like cavity. We describe the possible effects of such features on the evolution of some quantities of current interest, such as the atomic excitation, purity, concurrence, the entropy of the field and the evolution of the latter in phase space. (paper)

  9. Two Level Versus Matrix Converters Performance in Wind Energy Conversion Systems Employing DFIG

    Science.gov (United States)

    Reddy, Gongati Pandu Ranga; Kumar, M. Vijaya

    2017-10-01

    Wind power capacity has grown enormously during the past decades. With the substantial development of wind power, it is expected to provide a fifth of the world's electricity by the end of 2030. In wind energy conversion systems, power electronic converters play an important role. This paper presents the performance of two-level and matrix converters in a wind energy conversion system employing a Doubly Fed Induction Generator (DFIG). The DFIG is a wound-rotor induction generator; because of its advantages over other generators, it is used in most wind applications. This paper also discusses control of the converters using the space vector pulse width modulation technique. The MATLAB/SIMULINK® software is used to study the performance of the converters.

  10. Phonon induced optical gain in a current carrying two-level quantum dot

    Energy Technology Data Exchange (ETDEWEB)

    Eskandari-asl, Amir, E-mail: amir.eskandari.asl@gmail.com [Department of Physics, Shahid Beheshti University, G.C. Evin, Tehran 1983963113 (Iran, Islamic Republic of); School of Nano Science, Institute for Research in Fundamental Sciences (IPM), P.O. Box: 19395-5531, Tehran, Iran (Iran, Islamic Republic of)

    2017-05-15

    In this work we consider a current-carrying two-level quantum dot (QD) coupled to a single-mode phonon bath. Using the self-consistent Hartree-Fock approximation, we obtain the I-V curve of the QD. By considering the linear response of the system to an incoming classical light field, we find that, depending on the parametric regime, the system can show weak or strong light absorption or even lasing. This lasing occurs at sufficiently high bias voltages and is explained by a population inversion involving the side bands, even though the total electron population in the higher level is less than that in the lower one. The frequency at which the lasing is most significant depends on the level spacing and the phonon frequency, not on the electron-phonon coupling strength.

  11. TWO-LEVEL HIERARCHICAL COORDINATION QUEUING METHOD FOR TELECOMMUNICATION NETWORK NODES

    Directory of Open Access Journals (Sweden)

    M. V. Semenyaka

    2014-07-01

    The paper presents a hierarchical coordination queuing method. Within the proposed method, the queuing problem is reduced to solving an optimization problem posed as a two-level hierarchical structure. The required distribution of flows and the bandwidth allocation are calculated at the first level independently for each macro-queue; at the second level, the solutions obtained at the lower level for each queue are coordinated in order to prevent probable overload of network links. A goal-coordination method is defined for managing the multilevel structure, which makes it possible to specify the order in which queue-cooperation restrictions are considered and to distribute the calculation tasks between the levels of the hierarchy. Coordination of the decisions is performed by the method of Lagrange multipliers. The convergence of the method has been studied by analytical modeling.
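
The flavor of two-level coordination via Lagrange multipliers can be sketched with a toy dual decomposition; the log utilities, weights, step size and capacity below are invented for illustration and are not the paper's actual formulation:

```python
# Two "queues" each choose a rate x_i maximizing w_i*log(x_i) - lam*x_i;
# the coordinator adjusts the multiplier lam (a link price) until the
# shared capacity C is respected. All numbers are illustrative.
C = 10.0        # shared link capacity
w = [1.0, 2.0]  # per-queue utility weights
lam = 0.1       # initial Lagrange multiplier
for _ in range(200):
    # First level: each queue solves its local problem in closed form.
    x = [wi / lam for wi in w]
    # Second level: subgradient step on the shared-capacity constraint.
    lam = max(1e-6, lam + 0.01 * (sum(x) - C))
x = [wi / lam for wi in w]
print(sum(x))  # ≈ 10.0 at convergence
```

The coordinator never needs the queues' internal models, only their aggregate demand, which is the appeal of the hierarchical decomposition.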

  12. Intensity profiles of superdeformed bands in Pb isotopes in a two-level mixing model

    International Nuclear Information System (INIS)

    Wilson, A. N.; Szigeti, S. S.; Rogers, J. I.; Davidson, P. M.; Cardamone, D. M.

    2009-01-01

    A recently developed two-level mixing model of the decay out of superdeformed bands is applied to examine the loss of flux from the yrast superdeformed bands in 192 Pb, 194 Pb, and 196 Pb. Probability distributions for decay to states at normal deformations are calculated at each level. The sensitivity of the results to parameters describing the levels at normal deformation and their coupling to levels in the superdeformed well is explored. It is found that except for narrow ranges of the interaction strength coupling the states, the amount of intensity lost is primarily determined by the ratio of γ decay widths in the normal and superdeformed wells. It is also found that while the model can accommodate the observed fractional intensity loss profiles for decay from bands at relatively high excitation, it cannot accommodate the similarly abrupt decay from bands at lower energies if standard estimates of the properties of the states in the first minimum are employed

  13. Photon echo with a few photons in two-level atoms

    International Nuclear Information System (INIS)

    Bonarota, M; Dajczgewand, J; Louchet-Chauvet, A; Le Gouët, J-L; Chanelière, T

    2014-01-01

    To store and retrieve signals at the single-photon level, various photon echo schemes have resorted to complex preparation steps involving ancillary shelving states in multi-level atoms. For the first time, we experimentally demonstrate photon echo operation at such a low signal intensity without any preparation step, which allows us to work with mere two-level atoms. This simplified approach relies on the so-called ‘revival of silenced echo’ (ROSE) scheme. Low-noise conditions are obtained by returning the atoms to the ground state before the echo emission. In the present paper we operate ROSE in photon-counting conditions, showing that very strong control fields can be compatible with extremely weak signals, making ROSE consistent with quantum memory requirements. (paper)

  14. Dynamical Evolution of an Effective Two-Level System with PT Symmetry

    Science.gov (United States)

    Du, Lei; Xu, Zhihao; Yin, Chuanhao; Guo, Liping

    2018-05-01

    We investigate the dynamics of parity- and time-reversal (PT) symmetric two-level atoms in the presence of two optical fields and a radio-frequency (rf) field. The strength and relative phase of the fields can drive the system from the unbroken to the broken PT-symmetric region. Compared with the Hermitian model, Rabi-type oscillation is still observed, and the oscillation characteristics can likewise be adjusted by the strength and relative phase in the region of unbroken PT symmetry. At the exceptional point (EP), the oscillation breaks down. To better understand the underlying properties, we study the effective Bloch dynamics and find that the emergence of nonzero z components of the fixed points is the signature of PT-symmetry breaking, and that the projections in the x-y plane can be controlled with high flexibility compared with the standard two-level system with PT symmetry. This helps in studying the dynamic behavior of complex PT-symmetric models.

  15. Faithful state transfer between two-level systems via an actively cooled finite-temperature cavity

    Science.gov (United States)

    Sárkány, Lőrinc; Fortágh, József; Petrosyan, David

    2018-03-01

    We consider state transfer between two qubits—effective two-level systems represented by Rydberg atoms—via a common mode of a microwave cavity at finite temperature. We find that when both qubits have the same coupling strength to the cavity field, at large enough detuning from the cavity mode frequency, quantum interference between the transition paths makes the swap of the excitation between the qubits largely insensitive to the number of thermal photons in the cavity. When, however, the coupling strengths are different, the photon-number-dependent differential Stark shift of the transition frequencies precludes efficient transfer. Nevertheless, using an auxiliary cooling system to continuously extract the cavity photons, we can still achieve a high-fidelity state transfer between the qubits.

  16. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)

  17. Segmental and global lordosis changes with two-level axial lumbar interbody fusion and posterior instrumentation

    Science.gov (United States)

    Melgar, Miguel A; Tobler, William D; Ernst, Robert J; Raley, Thomas J; Anand, Neel; Miller, Larry E; Nasca, Richard J

    2014-01-01

    Background: Loss of lumbar lordosis has been reported after lumbar interbody fusion surgery and may portend poor clinical and radiographic outcome. The objective of this research was to measure changes in segmental and global lumbar lordosis in patients treated with presacral axial L4-S1 interbody fusion and posterior instrumentation and to determine if these changes influenced patient outcomes. Methods: We performed a retrospective, multi-center review of prospectively collected data in 58 consecutive patients with disabling lumbar pain and radiculopathy unresponsive to nonsurgical treatment who underwent L4-S1 interbody fusion with the AxiaLIF two-level system (Baxano Surgical, Raleigh NC). Main outcomes included back pain severity, Oswestry Disability Index (ODI), Odom's outcome criteria, and fusion status using flexion and extension radiographs and computed tomography scans. Segmental (L4-S1) and global (L1-S1) lumbar lordosis measurements were made using standing lateral radiographs. All patients were followed for at least 24 months (mean: 29 months, range 24-56 months). Results: There was no bowel injury, vascular injury, deep infection, neurologic complication or implant failure. Mean back pain severity improved from 7.8±1.7 at baseline to 3.3±2.6 at 2 years (p …). Maintained lordosis, defined as a change in Cobb angle ≤ 5°, was identified in 84% of patients at L4-S1 and 81% of patients at L1-S1. Patients with loss or gain in segmental or global lordosis experienced similar 2-year outcomes versus those with less than a 5° change. Conclusions/Clinical Relevance: Two-level axial interbody fusion supplemented with posterior fixation does not alter segmental or global lordosis in most patients. Patients with postoperative change in lordosis greater than 5° have similarly favorable long-term clinical outcomes and fusion rates compared to patients with less than 5° lordosis change. PMID:25694920

  18. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable ...

  19. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  20. DNCON2: improved protein contact prediction using two-level deep convolutional neural networks.

    Science.gov (United States)

    Adhikari, Badri; Hou, Jie; Cheng, Jianlin

    2018-05-01

    Significant improvements in the prediction of protein residue-residue contacts have been observed in recent years. These contacts, predicted using a variety of coevolution-based and machine learning methods, are the key contributors to the recent progress in ab initio protein structure prediction, as demonstrated in the recent CASP experiments. Continuing the development of new methods to reliably predict contact maps is essential to further improve ab initio structure prediction. In this paper we discuss DNCON2, an improved protein contact map predictor based on two-level deep convolutional neural networks. It consists of six convolutional neural networks: the first five predict contacts at 6, 7.5, 8, 8.5 and 10 Å distance thresholds, and the last one uses these five predictions as additional features to predict the final contact maps. On the free-modeling datasets in the CASP10, 11 and 12 experiments, DNCON2 achieves mean precisions of 35, 50 and 53.4%, respectively, higher than 30.6% by MetaPSICOV on the CASP10 dataset, 34% by MetaPSICOV on the CASP11 dataset and 46.3% by Raptor-X on the CASP12 dataset, when top L/5 long-range contacts are evaluated. We attribute the improved performance of DNCON2 to the inclusion of short- and medium-range contacts into training, the two-level approach to prediction, the use of state-of-the-art optimization and activation functions, and a novel deep learning architecture that allows each filter in a convolutional layer to access all the input features of a protein of arbitrary length. The web server of DNCON2 is at http://sysbio.rnet.missouri.edu/dncon2/ where training and testing datasets as well as the predictions for the CASP10, 11 and 12 free-modeling datasets can also be downloaded. Its source code is available at https://github.com/multicom-toolbox/DNCON2/. Contact: chengji@missouri.edu. Supplementary data are available at Bioinformatics online.
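
The evaluation metric quoted above, precision of the top L/5 long-range predicted contacts, can be sketched as follows; the 24-residue sequence-separation cutoff for "long-range" is the usual CASP convention, assumed here, and the tiny contact maps are invented:

```python
import numpy as np

def top_l5_precision(pred, truth, min_sep=24):
    # Rank long-range pairs (j - i >= min_sep) by predicted probability
    # and score the top L/5 of them against the true contact map.
    L = len(truth)
    pairs = [(pred[i, j], truth[i, j])
             for i in range(L) for j in range(i + min_sep, L)]
    pairs.sort(key=lambda p: -p[0])
    top = pairs[:max(1, L // 5)]
    return sum(t for _, t in top) / len(top)

L = 30
pred, truth = np.zeros((L, L)), np.zeros((L, L), dtype=int)
truth[0, 24] = truth[0, 25] = 1          # two true long-range contacts
pred[0, 24], pred[0, 25], pred[1, 25] = 0.9, 0.8, 0.7
p = top_l5_precision(pred, truth)
print(p)  # 2 of the top 6 ranked pairs are true contacts -> 0.333...
```

The same routine with different `min_sep` values scores medium- and short-range contacts.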

  1. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism-flattening ... -processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work...

  2. Spectral density of Cooper pairs in two level quantum dot–superconductors Josephson junction

    Energy Technology Data Exchange (ETDEWEB)

    Dhyani, A., E-mail: archana.d2003@gmail.com [Department of Physics, University of Petroleum and Energy Studies, Dehradun 248007, Uttarakhand (India); Rawat, P.S. [Department of Nuclear Science and Technology, University of Petroleum and Energy Studies, Dehradun 248007, Uttarakhand (India); Tewari, B.S., E-mail: bstewari@ddn.upes.ac.in [Department of Physics, University of Petroleum and Energy Studies, Dehradun 248007, Uttarakhand (India)

    2016-09-15

    Highlights: • The present work studies the electronic spectral density of electron pairs and its effect on charge transport in superconductor-quantum dot-superconductor junctions. • The charge transfer across such junctions can be controlled by changing the positions of the dot levels. • The Josephson supercurrent can also be tuned by controlling the position of the quantum dot energy levels. Abstract: In the present paper, we report the role of quantum dot energy levels in the electronic spectral density for a two-level quantum dot coupled to s-wave superconducting leads. The theoretical arguments in this work are based on the Anderson model, so it necessarily includes the dot energies, single-particle tunneling and the superconducting order parameter for BCS superconductors. The expression for the single-particle spectral function is obtained by using the Green's function equation-of-motion technique. On the basis of numerical computation of the spectral function of the superconducting leads, it has been found that the charge transfer across such junctions can be controlled by the positions and availability of the dot levels.

  3. Induced absorption and stimulated emission in a driven two-level atom

    International Nuclear Information System (INIS)

    Mavroyannis, C.

    1992-01-01

    We have considered the induced processes that occur in a driven two-level atom, where a laser photon is absorbed and emitted by the ground and excited states of the atom, respectively. In the low-intensity limit of the laser field, the induced spectra arising when a laser photon is absorbed by the ground state of the atom consist of two peaks describing induced absorption and stimulated-emission processes, respectively, where the former prevails over the latter. Asymmetry of the spectral lines occurs off resonance, and its extent depends on the detuning of the laser field. The physical process where a laser photon is emitted by the excited state is the reverse of that arising from the absorption of a laser photon by the ground state of the atom. The former differs from the latter in that the emission of a laser photon by the excited state occurs in the low-frequency regime and that the stimulated-emission process prevails over the induced absorption. In this case, amplification of ultrashort pulses is likely to occur without the need for population inversion between the optical transitions. The computed spectra are graphically presented and discussed. (author)

  4. A distributed monitoring system for photovoltaic arrays based on a two-level wireless sensor network

    Science.gov (United States)

    Su, F. P.; Chen, Z. C.; Zhou, H. F.; Wu, L. J.; Lin, P. J.; Cheng, S. Y.; Li, Y. F.

    2017-11-01

    In this paper, a distributed on-line monitoring system based on a two-level wireless sensor network (WSN) is proposed for real time status monitoring of photovoltaic (PV) arrays to support the fine management and maintenance of PV power plants. The system includes the sensing nodes installed on PV modules (PVM), sensing and routing nodes installed on combiner boxes of PV sub-arrays (PVA), a sink node and a data management centre (DMC) running on a host computer. The first level WSN is implemented by the low-cost wireless transceiver nRF24L01, and it is used to achieve single hop communication between the PVM nodes and their corresponding PVA nodes. The second level WSN is realized by the CC2530 based ZigBee network for multi-hop communication among PVA nodes and the sink node. The PVM nodes are used to monitor the PVM working voltage and backplane temperature, and they send the acquired data to their PVA node via the nRF24L01 based first level WSN. The PVA nodes are used to monitor the array voltage, PV string current and environment irradiance, and they send the acquired and received data to the DMC via the ZigBee based second level WSN. The DMC is designed using the MATLAB GUIDE and MySQL database. Laboratory experiment results show that the system can effectively acquire, display, store and manage the operating and environment parameters of PVA in real time.
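
The two-hop data flow, module readings collected at a sub-array node and forwarded with array-level readings to the sink, can be sketched with plain data structures; all field names and values here are illustrative stand-ins, not the system's actual packet format:

```python
# Toy data-flow sketch of the two-level WSN: PVM readings make one hop to
# their PVA node (nRF24L01 level), which appends array-level readings and
# forwards the combined packet toward the sink (ZigBee level).
def pva_aggregate(pva_id, module_readings, array_reading):
    # First level: collect single-hop packets from the PVM nodes.
    return {"pva": pva_id,
            "modules": module_readings,   # per-module voltage/temperature
            "array": array_reading}       # array voltage, current, irradiance

def sink_collect(packets):
    # Sink node hands the multi-hop traffic to the data management centre.
    return {p["pva"]: p for p in packets}

pvm = [{"v": 31.2, "t": 41.5}, {"v": 30.8, "t": 40.9}]
arr = {"v": 610.0, "i": 8.2, "irradiance": 950}
db = sink_collect([pva_aggregate("A1", pvm, arr)])
print(db["A1"]["array"]["i"])  # 8.2
```

Splitting cheap single-hop links at the module level from the routed ZigBee backbone is what keeps the per-module hardware cost low.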

  5. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    Science.gov (United States)

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
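
The M-estimation variant with Huber-type weights can be sketched as an iteratively reweighted least-squares loop; a single-level regression with a MAD scale estimate is used here as a simplification of the article's two-level model, and the data are invented:

```python
import numpy as np

def huber_irls(X, y, c=1.345, iters=50):
    # Huber-type M-estimation via iteratively reweighted least squares;
    # c = 1.345 is the customary tuning constant for ~95% normal efficiency.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12      # robust (MAD) scale
        w = np.minimum(1.0, c * s / np.maximum(np.abs(r), 1e-12))
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

x = np.arange(10.0)
X = np.column_stack([np.ones(10), x])
y = 1.0 + 2.0 * x
y[-1] += 50.0                      # one gross outlier
beta = huber_irls(X, y)
print(beta)                        # close to the true [1, 2]
```

Observations with large residuals get weights below one, so a single gross outlier barely moves the fit, which is the behavior the simulation study compares against normal-theory maximum likelihood.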

  6. Entanglement Criteria of Two Two-Level Atoms Interacting with Two Coupled Modes

    Science.gov (United States)

    Baghshahi, Hamid Reza; Tavassoly, Mohammad Kazem; Faghihi, Mohammad Javad

    2015-08-01

    In this paper, we study the interaction between two two-level atoms and two coupled modes of a quantized radiation field in the form of a parametric frequency converter injected within an optical cavity enclosed by a medium with Kerr nonlinearity. It is demonstrated that, by applying the Bogoliubov-Valatin canonical transformation, the introduced model is reduced to a well-known form of the generalized Jaynes-Cummings model. Then, under particular initial conditions for the atoms (in a coherent superposition of their ground and upper states) and the fields (in a standard coherent state) which may be prepared, the time evolution of the state vector of the entire system is analytically evaluated. In order to quantify the degree of entanglement between the subsystems (atom-field and atom-atom), the dynamics of entanglement is evaluated through different measures, namely von Neumann reduced entropy, concurrence and negativity. In each case, the effects of the Kerr nonlinearity and the detuning parameter on the above measures are numerically analyzed in detail. It is illustrated that the amount of entanglement can be tuned by choosing the evolved parameters appropriately.

  7. Multilevel Converter by Cascading Two-Level Three-Phase Voltage Source Converter

    Directory of Open Access Journals (Sweden)

    Abdullrahman A. Al-Shamma’a

    2018-04-01

    This paper proposes a topology using isolated, cascaded multilevel voltage source converters (VSCs) employing two-winding magnetic elements for high-power applications. The proposed topology synthesizes six two-level, three-phase VSCs, so the power capability of the presented converter is six times the capability of each VSC module. The characteristics of the proposed topology are demonstrated by analyzing its current relationships, voltage relationships and power capability in detail. The power rating is shared equally among the VSC modules without the need for a sharing algorithm; thus, the converter operates as a single three-phase VSC. A comparative analysis with the classical neutral-point-clamped, flying-capacitor and cascaded H-bridge topologies exhibits the superior features of fewer insulated gate bipolar transistors (IGBTs), a smaller capacitor requirement and fewer diodes. To validate the theoretical performance of the proposed converter, it is simulated in a MATLAB/Simulink environment and the results are experimentally demonstrated using a laboratory prototype.

  8. Acoustic interactions between inversion symmetric and asymmetric two-level systems

    International Nuclear Information System (INIS)

    Churkin, A; Schechter, M; Barash, D

    2014-01-01

    Amorphous solids, as well as many disordered lattices, display remarkable universality in their low temperature acoustic properties. This universality is attributed to the attenuation of phonons by tunneling two-level systems (TLSs), facilitated by the interaction of the TLSs with the phonon field. TLS-phonon interaction also mediates effective TLS–TLS interactions, which dictate the existence of a glassy phase and its low energy properties. Here we consider KBr:CN, the archetypal disordered lattice showing universality. We calculate numerically, using the conjugate gradient method, the effective TLS–TLS interactions for inversion symmetric (CN flips) and asymmetric (CN rotations) TLSs, in the absence and presence of disorder, in two and three dimensions. The observed dependence of the magnitude and spatial power law of the interaction on TLS symmetry, and its change with disorder, characterizes TLS–TLS interactions in disordered lattices in both extreme and moderate dilutions. Our results are in good agreement with the two-TLS model, recently introduced to explain long-standing questions regarding the quantitative universality of phonon attenuation and the energy scale of ≈1–3 K below which universality is observed. (paper)

  9. Generalized Heine–Stieltjes and Van Vleck polynomials associated with two-level, integrable BCS models

    International Nuclear Information System (INIS)

    Marquette, Ian; Links, Jon

    2012-01-01

    We study the Bethe ansatz/ordinary differential equation (BA/ODE) correspondence for Bethe ansatz equations that belong to a certain class of coupled, nonlinear, algebraic equations. Through this approach we numerically obtain the generalized Heine–Stieltjes and Van Vleck polynomials in the degenerate, two-level limit for four cases of integrable Bardeen–Cooper–Schrieffer (BCS) pairing models. These are the s-wave pairing model, the p + ip-wave pairing model, the p + ip pairing model coupled to a bosonic molecular pair degree of freedom, and a newly introduced extended d + id-wave pairing model with additional interactions. The zeros of the generalized Heine–Stieltjes polynomials provide solutions of the corresponding Bethe ansatz equations. We compare the roots of the ground states with curves obtained from the solution of a singular integral equation approximation, which allows for a characterization of ground-state phases in these systems. Our techniques also permit the computation of the roots of the excited states. These results illustrate how the BA/ODE correspondence can be used to provide new numerical methods to study a variety of integrable systems. (paper)

  10. SPONGY (SPam ONtoloGY: Email Classification Using Two-Level Dynamic Ontology

    Directory of Open Access Journals (Sweden)

    Seongwook Youn

    2014-01-01

    Email is one of the most common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system was designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that the proposed ontology-based approach indeed improves spam-filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. The global ontology filter alone filtered about 91% of spam, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam-filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded into many other systems for better performance.
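
The global-then-user-customized cascade can be sketched with plain keyword sets standing in for the two ontologies; the rule sets and messages below are invented for illustration and are far simpler than the paper's ontology reasoning:

```python
# Minimal sketch of a two-level filter chain: a global rule set shared by
# all users runs first, then a per-user rule set refines the decision.
GLOBAL_RULES = {"lottery", "viagra"}          # first level: global filter

def classify(text, user_rules):
    words = set(text.lower().split())
    if words & GLOBAL_RULES:                  # level 1: shared by all users
        return "spam"
    if words & user_rules:                    # level 2: user-customized
        return "spam"
    return "ham"

labels = [classify("win the lottery now", set()),
          classify("quarterly sales report", {"sales"}),
          classify("meeting at noon", {"sales"})]
print(labels)  # ['spam', 'spam', 'ham']
```

The second level lets one user treat "sales" as spam without affecting anyone else, which is the modularity the paper emphasizes.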

  11. Quasiparticle-induced decoherence of microscopic two-level-systems in superconducting qubits

    Energy Technology Data Exchange (ETDEWEB)

    Bilmes, Alexander; Lisenfeld, Juergen; Zanker, Sebastian; Weiss, Georg; Ustinov, Alexey V. [PHI, KIT, Karlsruhe (Germany); Marthaler, Michael; Schoen, Gerd [TFP, KIT, Karlsruhe (Germany)

    2016-07-01

    Parasitic two-level systems (TLSs) are one of the main sources of decoherence in superconducting nano-scale devices such as SQUIDs, resonators and quantum bits (qubits), although the TLSs' microscopic nature remains unclear. We use a superconducting phase qubit to detect TLSs contained within the tunnel barrier of the qubit's Al/AlOx/Al Josephson junction. If the TLS transition frequency lies within the 6-10 GHz range, we can coherently drive it by resonant microwave pulses and access its quantum state by utilizing the strong coupling to the qubit. Our previous measurements of TLS coherence as a function of temperature indicate that quasiparticles (QPs), which diffuse from the superconducting Al electrodes into the oxide layer, may give rise to TLS energy loss and dephasing. Here, we probe the TLS-QP interaction using a reliable method of in-situ QP injection via an on-chip dc-SQUID that is pulse-biased beyond its switching current. The QP density is calibrated by measuring the associated characteristic changes to the qubit's energy relaxation rate. We present experimental data which show the QP-induced TLS decoherence in good agreement with theoretical predictions.

  12. Two-Level Evaluation on Sensor Interoperability of Features in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Ya-Shuo Li

    2012-03-01

    Full Text Available Features used in fingerprint segmentation significantly affect the segmentation performance. Various features exhibit different discriminating abilities on fingerprint images derived from different sensors. One feature which has better discriminating ability on images derived from a certain sensor may not adapt to segment images derived from other sensors. This degrades the segmentation performance. This paper empirically analyzes the sensor interoperability problem of segmentation feature, which refers to the feature’s ability to adapt to the raw fingerprints captured by different sensors. To address this issue, this paper presents a two-level feature evaluation method, including the first level feature evaluation based on segmentation error rate and the second level feature evaluation based on decision tree. The proposed method is performed on a number of fingerprint databases which are obtained from various sensors. Experimental results show that the proposed method can effectively evaluate the sensor interoperability of features, and the features with good evaluation results acquire better segmentation accuracies of images originating from different sensors.

  13. Amplification without inversion, fast light and optical bistability in a duplicated two-level system

    International Nuclear Information System (INIS)

    Ebrahimi Zohravi, Lida; Vafafard, Azar; Mahmoudi, Mohammad

    2014-01-01

    The optical properties of a weak probe field in a duplicated two-level system are investigated in multi-photon resonance (MPR) condition and beyond it. It is shown that by changing the relative phase of applied fields, the absorption switches to the amplification without inversion in MPR condition. By applying the Floquet decomposition to the equations of motion beyond MPR condition, it is shown that the phase-dependent behavior is valid only in MPR condition. Moreover, it is demonstrated that the group velocity of light pulse can be controlled by the intensity of the applied fields and the gain-assisted superluminal light propagation (fast light) is obtained in this system. In addition, the optical bistability (OB) behavior of the system is studied beyond MPR condition. We apply an indirect incoherent pumping field to the system and it is found that the group velocity and OB behavior of the system can be controlled by the incoherent pumping rate. - Highlights: • We studied the optical properties of DTL system under MPR condition and beyond it. • By changing the relative phase, the absorption switches to the amplification without inversion in MPR condition. • The gain-assisted superluminal light propagation (fast light) is obtained in this system. • The optical bistability (OB) behavior of the system is studied beyond MPR condition. • The incoherent pumping rate has a major role in controlling the group velocity and OB behavior of the system

  14. Automatic QRS complex detection using two-level convolutional neural network.

    Science.gov (United States)

    Xiang, Yande; Lin, Zhitao; Meng, Jianyi

    2018-01-29

    The QRS complex is the most noticeable feature in the electrocardiogram (ECG) signal; therefore, its detection is critical for ECG signal analysis. The existing detection methods largely depend on hand-crafted manual features and parameters, which may introduce significant computational complexity, especially in the transform domains. In addition, fixed features and parameters are not suitable for detecting various kinds of QRS complexes under different circumstances. In this study, based on 1-D convolutional neural network (CNN), an accurate method for QRS complex detection is proposed. The CNN consists of object-level and part-level CNNs for extracting different grained ECG morphological features automatically. All the extracted morphological features are used by a multi-layer perceptron (MLP) for QRS complex detection. Additionally, a simple ECG signal preprocessing technique which only contains a difference operation in the temporal domain is adopted. Based on the MIT-BIH arrhythmia (MIT-BIH-AR) database, the proposed detection method achieves overall sensitivity Sen = 99.77%, positive predictivity rate PPR = 99.91%, and detection error rate DER = 0.32%. In addition, performance is evaluated under different signal-to-noise ratio (SNR) values. An automatic QRS detection method using a two-level 1-D CNN and a simple signal preprocessing technique is proposed for QRS complex detection. Compared with the state-of-the-art QRS complex detection approaches, experimental results show that the proposed method achieves comparable accuracy.
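
    The two-level idea above can be illustrated with a minimal, self-contained sketch (not the authors' network): a temporal difference as preprocessing, two 1-D convolutions with kernels of different widths standing in for the part-level and object-level CNNs, and a fixed-weight average standing in for the MLP head. Kernels, weights, and the toy beat are all invented for illustration.

    ```python
    # Hedged sketch of two-level 1-D feature extraction for a QRS-like spike.

    def conv1d(signal, kernel):
        """Valid-mode 1-D cross-correlation (the convolution used in CNNs)."""
        n, k = len(signal), len(kernel)
        return [sum(signal[i + j] * kernel[j] for j in range(k))
                for i in range(n - k + 1)]

    def difference(signal):
        """Simple temporal-difference preprocessing, as in the paper."""
        return [b - a for a, b in zip(signal, signal[1:])]

    def two_level_features(signal):
        x = difference(signal)
        fine = conv1d(x, [1, -1])          # part-level: sharp slope changes
        coarse = conv1d(x, [1, 1, 1, 1])   # object-level: broader deflection
        # Global max-pool each level into a single feature.
        return max(map(abs, fine)), max(map(abs, coarse))

    # Toy beat: flat baseline with one sharp R-like spike.
    beat = [0, 0, 0, 1, 5, 1, 0, 0, 0]
    f_fine, f_coarse = two_level_features(beat)
    score = 0.5 * f_fine + 0.5 * f_coarse  # stand-in for the MLP classifier
    ```

    The sharp spike dominates both feature levels; in the real method the MLP learns how to weight many such features rather than averaging two of them.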

  15. A Two-Level Task Scheduler on Multiple DSP System for OpenCL

    Directory of Open Access Journals (Sweden)

    Li Tian

    2014-04-01

    Full Text Available This paper addresses the problem that multiple DSP system does not support OpenCL programming. With the compiler, runtime, and the kernel scheduler proposed, an OpenCL application becomes portable not only between multiple CPU and GPU, but also between embedded multiple DSP systems. Firstly, the LLVM compiler was imported for source-to-source translation in which the translated source was supported by CCS. Secondly, two-level schedulers were proposed to support efficient OpenCL kernel execution. The DSP/BIOS is used to schedule system level tasks such as interrupts and drivers; however, the synchronization mechanism resulted in heavy overhead during task switching. So we designed an efficient second level scheduler especially for OpenCL kernel work-item scheduling. The context switch process utilizes the 8 functional units and cross-path links, which is superior to DSP/BIOS in terms of task switching. Finally, dynamic loading and software-managed cache were redesigned for OpenCL running on multiple DSP system. We evaluated the performance using some common OpenCL kernels from NVIDIA, AMD, NAS, and Parboil benchmarks. Experimental results show that the DSP OpenCL can efficiently exploit the computing resource of multiple cores.

  16. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    Science.gov (United States)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal attempts to pass the low-level authentication. The application of Orthogonal Matching Pursuit CS reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform can result in the output of a remarkable peak in the central location of the nonlinear correlation coefficient distributions of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.

  17. SPONGY (SPam ONtoloGY): email classification using two-level dynamic ontology.

    Science.gov (United States)

    Youn, Seongwook

    2014-01-01

    Email is one of the common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system has been designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that indeed the proposed ontology-based approach improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first level global ontology filter and a second level user-customized ontology filter. The use of the global ontology filter showed about 91% of spam filtered, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of ontology, which is user-customized, scalable, and modularized, so that it can be embedded into many other systems for better performance.
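
    The two-level filtering scheme described above can be sketched as a chain of predicates: a shared global filter runs first, and a per-user filter layered on top of it catches what the global level misses. This is only an illustration of the layering; the paper's filters are learned ontologies, and the keyword sets below are invented placeholders.

    ```python
    # Hedged sketch of a two-level (global + user-customized) spam filter.

    GLOBAL_SPAM_TERMS = {"lottery", "viagra", "prince"}   # first-level filter

    def global_filter(words):
        """Level 1: shared ontology, applied to every user's mail."""
        return bool(GLOBAL_SPAM_TERMS & set(words))

    def make_user_filter(user_spam_terms):
        """Level 2: a filter customised to one user, layered on the global one."""
        terms = set(user_spam_terms)
        def user_filter(words):
            return global_filter(words) or bool(terms & set(words))
        return user_filter

    alice = make_user_filter({"timeshare"})
    assert not global_filter("meeting at noon".split())
    assert global_filter("you won the lottery".split())
    assert alice("great timeshare offer".split())       # caught only at level 2
    ```

    Because the user level only adds to the global decision, improving the global ontology benefits every user while per-user customisation stays modular, which mirrors the paper's stated design goal.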

  18. Crises-induced intermittencies in a coherently driven system of two-level atoms

    International Nuclear Information System (INIS)

    Pando L, C.L.; Perez, G.; Cerdeira, H.A.

    1993-04-01

    We study the coherent dynamics of a thin layer of two-level atoms driven by an external coherent field and a phase conjugated mirror (PCM). Since the variables of the system are defined on the Bloch sphere, the third dimension is provided by the temporal modulation of the Rabi frequencies, which are induced by a PCM which reflects an electric field with a carrier frequency different from the incident one. We show that as the PCM gain coefficient is changed period doubling leading to chaos occurs. We find crises of attractor merging and attractor widening types related to homoclinic and heteroclinic tangencies respectively. For the attractor merging crises we find the critical exponent for the characteristic time of intermittency versus the control parameter which is given by the gain coefficient of the PCM. We show that during the crises of attractor widening type, another crisis due to attractor destruction occurs as the control parameter is changed. The latter is due to the collision of the old attractor with its basin boundary when a new attractor is created. This new attractor is stable only in a very small interval in the neighborhood of this second crisis. (author). 31 refs, 15 figs

  19. Dynamics of a quantum two-level system under the action of phase-diffusion field

    Energy Technology Data Exchange (ETDEWEB)

    Sobakinskaya, E.A. [Institute for Physics of Microstructures of RAS, Nizhny Novgorod, 603950 (Russian Federation); Pankratov, A.L., E-mail: alp@ipm.sci-nnov.ru [Institute for Physics of Microstructures of RAS, Nizhny Novgorod, 603950 (Russian Federation); Vaks, V.L. [Institute for Physics of Microstructures of RAS, Nizhny Novgorod, 603950 (Russian Federation)

    2012-01-09

    We study the behavior of a quantum two-level system interacting with a noisy phase-diffusion field. The dynamics is shown to split into two regimes, determined by the coherence time of the phase-diffusion field. For both regimes we present a model of quantum system behavior and discuss possible applications of the obtained effect for spectroscopy. In particular, the obtained analytical formula for the macroscopic polarization demonstrates that the phase-diffusion field does not affect the absorption line shape, which opens up an intriguing possibility of noisy spectroscopy, based on broadband sources with Lorentzian line shape. -- Highlights: ► We study dynamics of quantum system interacting with noisy phase-diffusion field. ► At short times the phase-diffusion field induces polarization in the quantum system. ► At long times the noise leads to polarization decay and heating of a quantum system. ► Simple model of interaction is derived. ► Application of the described effects for spectroscopy is discussed.

  20. Injury patterns of child abuse: Experience of two Level 1 pediatric trauma centers.

    Science.gov (United States)

    Yu, Yangyang R; DeMello, Annalyn S; Greeley, Christopher S; Cox, Charles S; Naik-Mathuria, Bindi J; Wesson, David E

    2018-05-01

    This study examines non-accidental trauma (NAT) fatalities as a percentage of all injury fatalities and identifies injury patterns in NAT admissions to two level 1 pediatric trauma centers. We reviewed all children (<5 years old) treated for NAT from 2011 to 2015. Patient demographics, injury sites, and survival were obtained from both institutional trauma registries. Of 4623 trauma admissions, 557 (12%) were due to NAT. However, 43 (46%) of 93 overall trauma fatalities were due to NAT. Head injuries were the most common injuries sustained (60%) and led to the greatest increased risk of death (RR 5.1, 95% CI 2.0-12.7). Less common injuries that increased the risk of death were facial injuries (14%, RR 2.9, 95% CI 1.6-5.3), abdominal injuries (8%, RR 2.8, 95% CI 1.4-5.6), and spinal injuries (3%, RR 3.9, 95% CI 1.8-8.8). Although 76% of head injuries occurred in infants <1 year, children ages 1-4 years old with head injuries had a significantly higher case fatality rate (27% vs. 6%, p<0.001). Child abuse accounts for a large proportion of trauma fatalities in children under 5 years of age. Intracranial injuries are common in child abuse and increase the risk of death substantially. Preventing NAT in infants and young children should be a public health priority. Retrospective Review. II. Copyright © 2018. Published by Elsevier Inc.

  1. Bayesian feedback versus Markovian feedback in a two-level atom

    International Nuclear Information System (INIS)

    Wiseman, H.M.; Mancini, Stefano; Wang Jin

    2002-01-01

    We compare two different approaches to the control of the dynamics of a continuously monitored open quantum system. The first is Markovian feedback, as introduced in quantum optics by Wiseman and Milburn [Phys. Rev. Lett. 70, 548 (1993)]. The second is feedback based on an estimate of the system state, developed recently by Doherty and Jacobs [Phys. Rev. A 60, 2700 (1999)]. Here we choose to call it, for brevity, Bayesian feedback. For systems with nonlinear dynamics, we expect these two methods of feedback control to give markedly different results. The simplest possible nonlinear system is a driven and damped two-level atom, so we choose this as our model system. The monitoring is taken to be homodyne detection of the atomic fluorescence, and the control is by modulating the driving. The aim of the feedback in both cases is to stabilize the internal state of the atom as close as possible to an arbitrarily chosen pure state, in the presence of inefficient detection and other forms of decoherence. Our results (obtained without recourse to stochastic simulations) prove that Bayesian feedback is never inferior, and is usually superior, to Markovian feedback. However, it would be far more difficult to implement than Markovian feedback and it loses its superiority when obvious simplifying approximations are made. It is thus not clear which form of feedback would be better in the face of inevitable experimental imperfections

  2. Risk Analysis of a Two-Level Supply Chain Subject to Misplaced Inventory

    Directory of Open Access Journals (Sweden)

    Lijing Zhu

    2017-06-01

    Full Text Available Misplaced inventory is prevalent in retail stores and may lead to the overall poor performance of the supply chain. We explore the impact of misplaced inventory on a two-level supply chain, which consists of a risk-neutral supplier and a risk-averse retailer. The supplier decides the wholesale price to maximize her profit, whereas the retailer decides the order quantity to maximize his utility. Under the Conditional Value-at-Risk (CVaR) criterion, we formulate the problem as a Stackelberg game model and obtain the equilibrium solutions in three cases: (i) information asymmetry about inventory errors exists; (ii) the retailer shares information about inventory errors with the supplier; and (iii) in order to reduce misplaced inventory, the supply chain deploys Radio-Frequency Identification (RFID) technology. The benefits of information sharing and RFID implementation are explored. A revenue and cost sharing contract is proposed to coordinate the supply chain and to allocate the cost savings from RFID implementation among supply chain participants. Finally, we provide managerial insights for risk-averse decision makers that are considering investing in the RFID technology.
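
    The CVaR criterion used above has a simple discrete-sample reading: the CVaR at a given tail level is the average of the worst fraction of losses. The sketch below uses a common sample estimator; the paper's exact formulation of the retailer's CVaR utility may differ.

    ```python
    import math

    def cvar(losses, tail=0.05):
        """Conditional Value-at-Risk: the average of the worst `tail`
        fraction of a sample of losses (a common discrete estimator)."""
        n = len(losses)
        k = max(1, math.ceil(tail * n))       # size of the worst-case tail
        worst = sorted(losses, reverse=True)[:k]
        return sum(worst) / k

    # Uniform loss sample 1..100; the 5% tail is the five largest losses.
    losses = list(range(1, 101))
    risk = cvar(losses, tail=0.05)            # (96+97+98+99+100)/5 = 98.0
    ```

    A risk-averse retailer choosing an order quantity to optimise CVaR rather than expected profit weights these tail outcomes heavily, which is why misplaced-inventory uncertainty changes the equilibrium in the Stackelberg game.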

  3. SPONGY (SPam ONtoloGY): Email Classification Using Two-Level Dynamic Ontology

    Science.gov (United States)

    2014-01-01

    Email is one of the common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system has been designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that indeed the proposed ontology-based approach improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first level global ontology filter and a second level user-customized ontology filter. The use of the global ontology filter showed about 91% of spam filtered, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of ontology, which is user-customized, scalable, and modularized, so that it can be embedded into many other systems for better performance. PMID:25254240

  4. Two level undercut-profile substrate-based filamentary coated conductors produced using metal organic chemical vapor deposition

    DEFF Research Database (Denmark)

    Insinga, Andrea R.; Sundaram, Aarthi; Hazelton, Drew W.

    2018-01-01

    The two level undercut-profile substrate (2LUPS) has been introduced as a concept for subdividing rare-earth-Ba$_{2}$Cu$_{3}$O$_{7}$ (REBCO) coated conductors (CC) into narrow filaments which reduces the AC losses and improves field stability for DC magnets. The 2LUPS consists of two levels...

  5. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
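
    The parallel weight update described above can be sketched in a few lines: independent walkers sample with the current multicanonical weights, their energy histograms are merged, and each weight is divided by the merged count to flatten the combined histogram. This is a schematic of the update rule only; the real GPU implementation, error weighting, and update schedule differ.

    ```python
    # Hedged sketch of merging walker histograms for a multicanonical
    # weight update: W(E) <- W(E) / H(E) on the combined histogram.

    from collections import Counter

    def merge_histograms(walker_hists):
        """Sum the energy histograms collected by independent walkers."""
        total = Counter()
        for h in walker_hists:
            total.update(h)
        return total

    def update_weights(weights, merged):
        """Flatten the merged histogram; unvisited bins keep their weight."""
        return {E: w / merged[E] if merged.get(E) else w
                for E, w in weights.items()}

    # Three walkers, toy histograms over energies E in {0, 1, 2}.
    hists = [Counter({0: 10, 1: 5}), Counter({1: 5, 2: 10}), Counter({0: 5, 2: 5})]
    merged = merge_histograms(hists)          # {0: 15, 1: 10, 2: 15}
    weights = {0: 1.0, 1: 1.0, 2: 1.0}
    new_w = update_weights(weights, merged)
    ```

    Because each walker only contributes counts, the merge step is embarrassingly parallel, which is what lets the scheme scale to thousands of GPU threads between weight updates.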

  6. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin

  7. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop Parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.

  8. A Novel Scheme to Minimize Hop Count for GAF in Wireless Sensor Networks: Two-Level GAF

    Directory of Open Access Journals (Sweden)

    Vaibhav Soni

    2015-01-01

    Full Text Available In wireless sensor networks, geographic adaptive fidelity (GAF) is one of the most popular energy-aware routing protocols. It conserves energy by identifying equivalence between sensors from a routing perspective and then turning off unnecessary sensors, while maintaining the connectivity of the network. Nevertheless, the traditional GAF still cannot reach the optimum energy usage since it needs a larger number of hops to transmit data packets to the sink. As a result, it also leads to higher packet delay. In this paper, we propose a modified version of GAF to minimize hop count for data routing, called two-level GAF (T-GAF). Furthermore, we use a generalized version of GAF called Diagonal-GAF (DGAF), where two diagonally adjacent grids can also directly communicate. It has an advantage of less overhead of coordinator election based on the residual energy of sensors. Analysis and simulation results show significant improvements of the proposed work comparing to traditional GAF in the aspect of total hop count, energy consumption, total distance covered by the data packet before reaching the sink, and packet delay. As a result, compared to traditional GAF, it needs 40% to 47% fewer hops and consumes 27% to 35% less energy to extend the network lifetime.
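
    The hop-count saving from diagonal grid links has a simple geometric reading: with only edge-adjacent grids a packet needs roughly the Manhattan distance in hops, while diagonal links cut this to the Chebyshev distance. The grid model below is an illustration of that argument, not the protocol itself.

    ```python
    # Hedged sketch: hop counts on a virtual grid with and without
    # diagonal-adjacent communication (as allowed in DGAF).

    def hops_rectilinear(src, dst):
        """Traditional GAF-style routing: only edge-adjacent grids talk."""
        return abs(dst[0] - src[0]) + abs(dst[1] - src[1])   # Manhattan

    def hops_with_diagonals(src, dst):
        """Diagonally adjacent grids may also communicate directly."""
        return max(abs(dst[0] - src[0]), abs(dst[1] - src[1]))  # Chebyshev

    src, sink = (0, 0), (6, 6)
    h1 = hops_rectilinear(src, sink)      # 12 hops
    h2 = hops_with_diagonals(src, sink)   # 6 hops
    saving = 1 - h2 / h1                  # 50% on a purely diagonal path
    ```

    Real deployments mix diagonal and straight segments, which is consistent with the 40% to 47% hop-count reduction reported above rather than the 50% upper bound of the purely diagonal case.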

  9. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction

  10. Lambda-Based Data Processing Architecture for Two-Level Load Forecasting in Residential Buildings

    Directory of Open Access Journals (Sweden)

    Gde Dharma Nugraha

    2018-03-01

    Full Text Available Building energy management systems (BEMS) have been intensively used to manage the electricity consumption of residential buildings more efficiently. However, the dynamic behavior of the occupants introduces uncertainty problems that affect the performance of the BEMS. To address this uncertainty problem, the BEMS may implement load forecasting as one of the BEMS modules. Load forecasting utilizes historical load data to compute model predictions for a specific time in the future. Recently, smart meters have been introduced to collect electricity consumption data. Smart meters not only capture aggregation data, but also individual data that is more frequently close to real-time. The processing of both smart meter data types for load forecasting can enhance the performance of the BEMS when confronted with uncertainty problems. The collection of smart meter data can be processed using a batch approach for short-term load forecasting, while the real-time smart meter data can be processed for very short-term load forecasting, which adjusts the short-term load forecasting to adapt to the dynamic behavior of the occupants. This approach requires different data processing techniques for aggregation and individual of smart meter data. In this paper, we propose a Lambda-based data processing architecture to process the different types of smart meter data and implement the two-level load forecasting approach, which combines short-term and very short-term load forecasting techniques on top of our proposed data processing architecture. The proposed approach is expected to enhance the BEMS to address the uncertainty problem in order to process data in less time. Our experiment showed that the proposed approaches improved the accuracy by 7% compared to a typical BEMS with only one load forecasting technique, and had the lowest computation time when processing the smart meter data.
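
    The two-level combination described above (a batch-trained short-term forecast corrected on-line by real-time readings) mirrors the Lambda pattern of a batch layer plus a speed layer. The sketch below is a deliberately minimal stand-in: the real system uses proper forecasting models, and the functions, blend weight, and numbers here are invented.

    ```python
    # Hedged sketch of two-level load forecasting on a Lambda-style split.

    def short_term_forecast(history_for_hour):
        """Batch layer: average load observed at this hour on past days."""
        return sum(history_for_hour) / len(history_for_hour)

    def very_short_term_adjust(baseline, recent_readings, weight=0.5):
        """Speed layer: blend the baseline with the latest real-time average
        to track the occupants' current behavior."""
        recent = sum(recent_readings) / len(recent_readings)
        return (1 - weight) * baseline + weight * recent

    history = [2.0, 2.2, 1.8]                 # kW at 18:00 on past days
    baseline = short_term_forecast(history)   # batch forecast: 2.0 kW
    forecast = very_short_term_adjust(baseline, [2.6, 2.6])  # occupants home early
    ```

    The blend weight controls how aggressively the real-time layer overrides the batch forecast; tuning it is one way such a system trades stability against responsiveness to occupant behavior.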

  11. Absorption spectrum of a two-level atom in a bad cavity with injected squeezed vacuum

    Science.gov (United States)

    Zhou, Peng; Swain, S.

    1996-02-01

    We study the absorption spectrum of a coherently driven two-level atom interacting with a resonant cavity mode which is coupled to a broadband squeezed vacuum through its input-output mirror in the bad cavity limit. We study the modification of the two-photon correlation strength of the injected squeezed vacuum inside the cavity, and show that the equations describing probe absorption in the cavity environment are formally identical to those in free space, but with modified parameters describing the squeezed vacuum. The two-photon correlations induced by the squeezed vacuum are always weaker than in free space. We pay particular attention to the spectral behaviour at line centre in the region of intermediate strength driving intensities, where anomalous spectral features such as hole-burning and dispersive profiles are displayed. These unusual spectral features are very sensitive to the squeezing phase and the Rabi frequency of the driving field. We also derive the threshold value of the Rabi frequency which gives rise to the transparency of the probe beam at the driving frequency. When the Rabi frequency is less than the threshold value, the probe beam is absorbed, whilst the probe beam is amplified (without population inversion under certain conditions) when the Rabi frequency is larger than this threshold. The anomalous spectral features all take place in the vicinity of the critical point dividing the different dynamical regimes, probe absorption and amplification, of the atomic radiation. The physical origin of the strong amplification without population inversion, and the feasibility of observing it, are discussed.

  12. A two level mutation-selection model of cultural evolution and diversity.

    Science.gov (United States)

    Salazar-Ciudad, Isaac

    2010-11-21

    Cultural evolution is a complex process that can happen at several levels. At the level of individuals in a population, each human bears a set of cultural traits that he or she can transmit to its offspring (vertical transmission) or to other members of his or her society (horizontal transmission). The relative frequency of a cultural trait in a population or society can thus increase or decrease with the relative reproductive success of its bearers (individual's level) or the relative success of transmission (called the idea's level). This article presents a mathematical model on the interplay between these two levels. The first aim of this article is to explore when cultural evolution is driven by the idea's level, when it is driven by the individual's level and when it is driven by both. These three possibilities are explored in relation to (a) the amount of interchange of cultural traits between individuals, (b) the selective pressure acting on individuals, (c) the rate of production of new cultural traits, (d) the individual's capacity to remember cultural traits and to the population size. The aim is to explore the conditions in which cultural evolution does not lead to a better adaptation of individuals to the environment. This is to contrast the spread of fitness-enhancing ideas, which make individual bearers better adapted to the environment, to the spread of "selfish" ideas, which spread well simply because they are easy to remember but do not help their individual bearers (and may even hurt them). At the same time this article explores in which conditions the adaptation of individuals is maximal. The second aim is to explore how these factors affect cultural diversity, or the amount of different cultural traits in a population. This study suggests that a larger interchange of cultural traits between populations could lead to cultural evolution not improving the adaptation of individuals to their environment and to a decrease of cultural diversity
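
    The interplay between the two selection levels discussed above can be made concrete with a toy replicator update: a trait's frequency grows with its bearers' reproductive success (individual level) and with how readily it is transmitted (idea level). The multiplicative update and all numbers below are an invented illustration, not the paper's model.

    ```python
    # Hedged toy of two-level selection: "selfish" ideas can spread even
    # when they lower the fitness of their bearers.

    def step(freqs, fitness, transmissibility):
        """One generation: weight each trait by both selection levels,
        then renormalise frequencies."""
        raw = {t: freqs[t] * fitness[t] * transmissibility[t] for t in freqs}
        total = sum(raw.values())
        return {t: v / total for t, v in raw.items()}

    freqs = {"useful": 0.5, "selfish": 0.5}
    fitness = {"useful": 1.2, "selfish": 0.9}           # helps vs hurts the bearer
    transmissibility = {"useful": 1.0, "selfish": 1.5}  # the selfish idea is catchy
    after = step(freqs, fitness, transmissibility)
    # The "selfish" trait gains ground despite reducing individual fitness.
    ```

    When the idea-level advantage (transmissibility) outweighs the individual-level disadvantage (fitness), cultural evolution stops improving the adaptation of individuals, which is exactly the regime the model is built to delineate.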

  13. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  14. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  15. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were

  16. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  17. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)]

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  18. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  19. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
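    The abstract above parallelizes k-means++ seed selection. As a minimal reference sketch of the serial algorithm (plain Python, not the authors' C++ code), the distance-refresh loop marked below is the embarrassingly parallel part that the GPU/OpenMP/XMT implementations target:

    ```python
    # k-means++ seeding (Arthur & Vassilvitskii, 2007): pick the first
    # center at random, then pick each further center with probability
    # proportional to its squared distance from the nearest chosen center.
    import random

    def dist2(a, b):
        # Squared Euclidean distance between two points.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def kmeans_pp_seeds(points, k, rng=random.Random(0)):
        centers = [rng.choice(points)]
        # d2[i] = squared distance from points[i] to its nearest center.
        d2 = [dist2(p, centers[0]) for p in points]
        while len(centers) < k:
            # Weighted sampling proportional to d2.
            r = rng.random() * sum(d2)
            acc = 0.0
            for i, w in enumerate(d2):
                acc += w
                if acc >= r:
                    break
            centers.append(points[i])
            # Distance refresh: each point's update is independent of the
            # others -- this is the loop the parallel versions distribute.
            d2 = [min(old, dist2(p, points[i])) for old, p in zip(d2, points)]
        return centers
    ```

    After seeding, ordinary Lloyd-style k-means iterations proceed from these centers.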

  20. Parallel plate detectors

    International Nuclear Information System (INIS)

    Gardes, D.; Volkov, P.

    1981-01-01

    Two parallel plate avalanche counters (PPACs), 5×3 cm² (timing only) and 15×5 cm² (timing and position), are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters [fr

  1. Two levels decision system for efficient planning and implementation of bioenergy production

    International Nuclear Information System (INIS)

    Ayoub, Nasser; Martins, Ricardo; Wang, Kefeng; Seki, Hiroya; Naka, Yuji

    2007-01-01

    When planning bioenergy production from biomass, planners should take into account each and every stakeholder along the biomass supply chains, e.g. biomass resource suppliers, transportation, conversion and electricity suppliers. The planners also have to consider the social concerns, environmental and economic impacts related to establishing biomass systems, and the specific difficulties of each country. To overcome these problems in a sustainable manner, a robust decision support system is required. For that purpose, a two-level general Bioenergy Decision System (gBEDS) for bioenergy production planning and implementation was developed. The core part of gBEDS is the information base, which includes the basic bioenergy information and the detailed decision information. The basic bioenergy information includes, for instance, the geographical information system (GIS) database, the biomass materials database, the biomass logistics database and the biomass conversion database. The detailed decision information comprises the parameter database, with default values, and the variable database, with values obtained by simulation and optimization. It also includes a scenario database, used for demonstration to new users and for case-based reasoning by planners and executers. Based on the information base, the following modules are included to support decision making: a simulation module with a graph interface based on the unit process (UP) definition, genetic algorithm (GA) methods for optimal decisions, and a Matlab module applying data mining methods (fuzzy C-means clustering and decision trees) to the biomass collection points to define the locations of storage and bioenergy conversion plants, based on the simulation and optimization model developed for the whole life cycle of bioenergy generation. Furthermore, Matlab is used to set up a calculation model with crucial biomass planning parameters (e.g. costs, CO2 emissions), over

  2. Evaluation of Circulating Current Suppression Methods for Parallel Interleaved Inverters

    DEFF Research Database (Denmark)

    Gohil, Ghanshyamsinh Vijaysinh; Bede, Lorand; Teodorescu, Remus

    2016-01-01

    Two-level Voltage Source Converters (VSCs) are often connected in parallel to achieve the desired current rating in a multi-megawatt Wind Energy Conversion System (WECS). A multi-level converter can be realized by interleaving the carrier signals of the parallel VSCs. As a result, the harmonic performance of the WECS can be significantly improved. However, the interleaving of the carrier signals may lead to the flow of circulating current between parallel VSCs, and it is highly desirable to avoid/suppress this unwanted circulating current. A comparative evaluation of the different methods to avoid/suppress the circulating current between the parallel interleaved VSCs is presented in this paper. The losses and the volume of the inductive components and the semiconductor losses are evaluated for the WECS with different circulating current suppression methods. Multi-objective optimizations of the inductive components...

  3. Parallel and Distributed Systems for Probabilistic Reasoning

    Science.gov (United States)

    2012-12-01

    Ranganathan et al. ... typically a random permutation over the vertices. Advances by Elidan et al. [2006] and Ranganathan et al. [2007] have focused on dynamic asynchronous ... The Wildfire algorithm shown in Alg. 3.6 is a direct parallelization of the algorithm proposed by [Ranganathan et al., 2007]. The Wildfire algorithm

  4. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
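    The two-phase method described above (first each processor classifies a distinct set of objects against grid portions, then each processor populates a distinct portion) can be sketched serially. This is a hedged illustration with interval objects on a one-dimensional [0, 1) grid and the processors emulated by loops, not the patented implementation:

    ```python
    def populate_grid_parallel(objects, n):
        """Two-phase grid population sketch with n emulated processors.

        objects: list of (lo, hi) intervals in [0, 1).
        The grid is split into n equal portions of width 1/n.
        """
        width = 1.0 / n
        overlaps = [[] for _ in range(n)]  # portion -> overlapping object ids
        # Phase 1: each processor takes a distinct SET OF OBJECTS
        # (round-robin here) and records which portions each object
        # at least partially overlaps.
        for proc in range(n):
            for oid in range(proc, len(objects), n):
                lo, hi = objects[oid]
                first = int(lo / width)
                last = min(int(hi / width), n - 1)
                for portion in range(first, last + 1):
                    overlaps[portion].append(oid)
        # Phase 2: each processor takes a distinct GRID PORTION and
        # populates it with the objects determined in phase 1.
        return [sorted(ids) for ids in overlaps]
    ```

    Note that an object spanning a portion boundary is recorded in every portion it touches, which is why phase 2 can proceed with no inter-processor communication.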

  5. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  6. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give

  7. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, Stewart platforms are notable as the oldest such systems: fast, solid and precise. This work outlines a few main elements of Stewart platforms. It begins with the geometry of the platform and its kinematic elements, and then presents a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform mechanism. The kinematics of the mobile elements are then recorded by a rotation matrix method. If a structural motoelement consists of two moving elements that translate relative to each other, for the drive train and especially the dynamics it is more convenient to represent the motoelement as a single moving component. We thus have seven moving parts (the six motoelements, or feet, plus the mobile platform 7) and one fixed part.

  8. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is (to the extent possible) exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  9. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  10. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  11. The off-resonant aspects of decoherence and a critique of the two-level approximation

    International Nuclear Information System (INIS)

    Savran, Kerim; Hakioglu, T; Mese, E; Sevincli, Haldun

    2006-01-01

    Conditions in favour of a realistic multilevelled description of a decohering quantum system are examined. In this regard the first crucial observation is that the thermal effects, contrary to the conventional belief, play a minor role at low temperatures in the decoherence properties. The system-environment coupling and the environmental energy spectrum dominantly affect the decoherence. In particular, zero temperature quantum fluctuations or non-equilibrium sources can be present and influential on the decoherence rates in a wide energy range allowed by the spectrum of the environment. A crucial observation against the validity of the two-level approximation is that the decoherence rates are found to be dominated not by the long time resonant but the short time off-resonant processes. This observation is demonstrated in two stages. Firstly, our zero temperature numerical results reveal that the calculated short time decoherence rates are Gaussian-like (the time dependence of the density matrix is led by the second time derivative at t = 0). Exact analytical results are also permitted in the short time limit, which, consistent with our numerical results, reveal that this specific Gaussian-like behaviour is a property of the non-Markovian correlations in the environment. These Gaussian-like rates have no dependence on any spectral parameter (position and the width of the spectrum) except, in totality, the spectral area itself. The dependence on the spectral area is a power law. Furthermore, the Gaussian-like character at short times is independent of the number of levels (N), but the numerical value of the decoherence rates is a monotonic function of N. In this context, we demonstrate that leakage, as a characteristic multilevel effect, is dominated by the non-resonant processes. The long time behaviour of decoherence is also examined. Since our spectral model allows Markovian environmental correlations at long times, the decoherence rates in this regime become

  12. Device for balancing parallel strings

    Science.gov (United States)

    Mashikian, Matthew S.

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  13. An investigation of two-level fracture in the blistering of D+ irradiated Cu

    International Nuclear Information System (INIS)

    Johnson, P.B.; Jones, W.R.

    1984-01-01

    The blisters produced by 200 keV D+ irradiation of Cu at 120 K and subsequent heating to room temperature are found to be of two distinct types: small semi-spherical blisters and large blister flakes. A simple method has been developed to remove blister flakes, enabling direct observation of the exposed underside of the flakes by scanning electron microscopy. The small semi-spherical blisters, which form before the more extensive blister flakes, have a consistently deeper plane of fracture than the flakes. To explain the different depths of fracture, two alternative models are proposed. Compressional stress may inhibit bubble nucleation and early growth near the depth region around the maxima in the damage and gas deposition profiles. It is proposed that in the later stages of the irradiation, shear introduced by differential expansion, caused by a combination of radiation-induced swelling and localised heating, plays a central role in fracture. (orig./RK)

  14. Geometric phase for a two-level system in photonic band gap crystal

    Science.gov (United States)

    Berrada, K.

    2018-05-01

    In this work, we investigate the geometric phase (GP) for a qubit system coupled to its own anisotropic and isotropic photonic band gap (PBG) crystal environment without Born or Markovian approximation. The qubit frequency affects the GP of the qubit directly through the effect of the PBG environment. The results show the deviation of the GP depends on the detuning parameter and this deviation will be large for relatively large detuning of atom frequency inside the gap with respect to the photonic band edge. Whereas for detunings outside the gap, the GP of the qubit changes abruptly to zero, exhibiting collapse phenomenon of the GP. Moreover, we find that the GP in the isotropic PBG photonic crystal is more robust than that in the anisotropic PBG under the same condition. Finally, we explore the relationship between the variation of the GP and population in terms of the physical parameters.

  15. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
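    Tables of whole-number parallel resistances like those the article advocates follow from the parallel-resistance formula 1/R_total = 1/R1 + 1/R2 + ... A small sketch of how such a table can be generated (this code is illustrative, not from the article):

    ```python
    # Exact parallel-resistance arithmetic with fractions, plus a search
    # for whole-ohm resistor pairs whose parallel total is also whole.
    from fractions import Fraction
    from itertools import combinations_with_replacement

    def parallel_resistance(resistors):
        """Total resistance of resistors in parallel: 1/R = sum(1/Ri)."""
        return 1 / sum(Fraction(1, r) for r in resistors)

    def whole_number_pairs(max_r=20):
        """All pairs of whole-ohm resistors up to max_r whose parallel
        combination is itself a whole number of ohms."""
        pairs = []
        for a, b in combinations_with_replacement(range(1, max_r + 1), 2):
            total = parallel_resistance([a, b])
            if total.denominator == 1:
                pairs.append((a, b, int(total)))
        return pairs
    ```

    For example, 3 Ω in parallel with 6 Ω gives exactly 2 Ω, the kind of clean entry such classroom tables are built from.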

  16. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of this research is tooling to support the development of parallel programs in C/C++. Methods and software that automate the process of designing parallel applications are proposed.

  17. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O-efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
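    List ranking, the fundamental problem named in the record above, is classically solved by pointer jumping, where every node repeatedly jumps over its successor in synchronized rounds. A serial sketch of that idea follows; it illustrates the technique only and is not the paper's PEM algorithm:

    ```python
    def list_rank(succ):
        """Rank of each node = number of links from it to the end of its list.

        succ[i] is the successor of node i; a terminal node points to itself.
        Each while-iteration is one 'parallel round': every node jumps over
        its successor simultaneously, so both arrays are rebuilt from
        snapshots rather than updated in place. O(log n) rounds suffice.
        """
        n = len(succ)
        rank = [0 if succ[i] == i else 1 for i in range(n)]
        while any(succ[i] != succ[succ[i]] for i in range(n)):
            rank = [rank[i] + rank[succ[i]] for i in range(n)]
            succ = [succ[succ[i]] for i in range(n)]
        return rank
    ```

    The invariant is that rank[i] always equals the distance from i to its current succ[i], and each round doubles that pointer's reach until every pointer rests on a terminal node.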

  18. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Parallel channel interactions are examined. Results of experimental research on nonstationary flow regimes in three parallel vertical channels are presented, with analysis of the phenomena and mechanisms of parallel channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow. (author)

  19. Dataflow Query Execution in a Parallel, Main-memory Environment

    NARCIS (Netherlands)

    Wilschut, A.N.; Apers, Peter M.G.

    In this paper, the performance and characteristics of the execution of various join-trees on a parallel DBMS are studied. The results of this study are a step in the direction of the design of a query optimization strategy that is fit for parallel execution of complex queries. Among others,

  20. Dataflow Query Execution in a Parallel Main-Memory Environment

    NARCIS (Netherlands)

    Wilschut, A.N.; Apers, Peter M.G.

    1991-01-01

    The performance and characteristics of the execution of various join-trees on a parallel DBMS are studied. The results are a step in the direction of the design of a query optimization strategy that is fit for parallel execution of complex queries. Among others, synchronization issues are identified

  1. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  2. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  3. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  4. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased 120,000 times, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores, it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...

  5. Multidirectional testing of one- and two-level ProDisc-L versus simulated fusions.

    Science.gov (United States)

    Panjabi, Manohar; Henderson, Gweneth; Abjornson, Celeste; Yue, James

    2007-05-20

    An in vitro human cadaveric biomechanical study. To evaluate intervertebral rotation changes due to lumbar ProDisc-L compared with simulated fusion, using follower load and multidirectional testing. Artificial discs, as opposed to the fusions, are thought to decrease the long-term accelerated degeneration at adjacent levels. A biomechanical assessment can be helpful, as the long-term clinical evaluation is impractical. Six fresh human cadaveric lumbar specimens (T12-S1) underwent multidirectional testing in flexion-extension, bilateral lateral bending, and bilateral torsion using the Hybrid test method. First, intact specimen total range of rotation (T12-S1) was determined. Second, using pure moments again, this range of rotation was achieved in each of the 5 constructs: A) ProDisc-L at L5-S1; B) fusion at L5-S1; C) ProDisc-L at L4-L5 and fusion at L5-S1; D) ProDisc-L at L4-L5 and L5-S1; and E) 2-level fusion at L4-L5 to L5-S1. Significant changes in the intervertebral rotations due to each construct were determined at the operated and nonoperated levels using repeated measures single factor ANOVA and Bonferroni statistical tests (P < 0.05). Adjacent-level effects (ALEs) were defined as the percentage changes in intervertebral rotations at the nonoperated levels due to the constructs. One- and 2-level ProDisc-L constructs showed only small ALE in any of the 3 rotations. In contrast, 1- and 2-level fusions showed increased ALE in all 3 directions (average, 7.8% and 35.3%, respectively, for 1 and 2 levels). In the disc plus fusion combination (construct C), the ALEs were similar to the 1-level fusion alone. In general, ProDisc-L preserved physiologic motions at all spinal levels, while the fusion simulations resulted in significant ALE.

  6. The Population Inversion and the Entropy of a Moving Two-Level Atom in Interaction with a Quantized Field

    Science.gov (United States)

    Abo-Kahla, D. A. M.; Abdel-Aty, M.; Farouk, A.

    2018-05-01

An atom with only two energy eigenvalues, whose two-dimensional state space is spanned by the two energy eigenstates, is called a two-level atom. We consider a two-level atom moving with constant velocity and interacting with a quantized field. An analytic solution of the system is provided. Furthermore, the significant effect of the temperature on the atomic inversion, the purity and the information entropy is discussed for an initial state that is either an excited state or a maximally mixed state. Additionally, the effect of the number of half wavelengths of the field mode is investigated.
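The atomic inversion for a two-level atom coupled to a quantized field can be illustrated with the textbook Jaynes-Cummings result for a field initially in a coherent state — a simpler, stationary-atom setting than the moving-atom model of this record; the function name and parameters below are illustrative, not the paper's:

```python
import math

def atomic_inversion(t, g=1.0, alpha=2.0, n_max=100):
    """Population inversion W(t) for an initially excited two-level atom
    coupled to a single field mode prepared in a coherent state |alpha>:
    W(t) = sum_n p_n cos(2 g sqrt(n+1) t), with Poissonian weights p_n."""
    nbar = abs(alpha) ** 2
    w, p = 0.0, math.exp(-nbar)            # p is p_0
    for n in range(n_max):
        w += p * math.cos(2.0 * g * math.sqrt(n + 1) * t)
        p *= nbar / (n + 1)                # Poisson recurrence -> p_{n+1}
    return w
```

Plotting `atomic_inversion(t)` over time exhibits the familiar collapse and revival of the Rabi oscillations.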

  7. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  8. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially separated polarization components of a laser with a digital micromirror device and subsequently beam combining them. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  9. Direct determination of k Q factors for cylindrical and plane-parallel ionization chambers in high-energy electron beams from 6 MeV to 20 MeV

    Science.gov (United States)

    Krauss, A.; Kapsch, R.-P.

    2018-02-01

    For the ionometric determination of the absorbed dose to water, D w, in high-energy electron beams from a clinical accelerator, beam quality dependent correction factors, k Q, are required. By using a water calorimeter, these factors can be determined experimentally and potentially with lower standard uncertainties than those of the calculated k Q factors, which are tabulated in various dosimetry protocols. However, one of the challenges of water calorimetry in electron beams is the small measurement depths in water, together with the steep dose gradients present especially at lower energies. In this investigation, water calorimetry was implemented in electron beams to determine k Q factors for different types of cylindrical and plane-parallel ionization chambers (NE2561, NE2571, FC65-G, TM34001) in 10 cm  ×  10 cm electron beams from 6 MeV to 20 MeV (corresponding beam quality index R 50 ranging from 1.9 cm to 7.5 cm). The measurements were carried out using the linear accelerator facility of the Physikalisch-Technische Bundesanstalt. Relative standard uncertainties for the k Q factors between 0.50% for the 20 MeV beam and 0.75% for the 6 MeV beam were achieved. For electron energies above 8 MeV, general agreement was found between the relative electron energy dependencies of the k Q factors measured and those derived from the AAPM TG-51 protocol and recent Monte Carlo-based studies, as well as those from other experimental investigations. However, towards lower energies, discrepancies of up to 2.0% occurred for the k Q factors of the TM34001 and the NE2571 chamber.

  10. Direct determination of k Q factors for cylindrical and plane-parallel ionization chambers in high-energy electron beams from 6 MeV to 20 MeV.

    Science.gov (United States)

    Krauss, A; Kapsch, R-P

    2018-02-06

For the ionometric determination of the absorbed dose to water, D w, in high-energy electron beams from a clinical accelerator, beam quality dependent correction factors, k Q, are required. By using a water calorimeter, these factors can be determined experimentally and potentially with lower standard uncertainties than those of the calculated k Q factors, which are tabulated in various dosimetry protocols. However, one of the challenges of water calorimetry in electron beams is the small measurement depths in water, together with the steep dose gradients present especially at lower energies. In this investigation, water calorimetry was implemented in electron beams to determine k Q factors for different types of cylindrical and plane-parallel ionization chambers (NE2561, NE2571, FC65-G, TM34001) in 10 cm  ×  10 cm electron beams from 6 MeV to 20 MeV (corresponding beam quality index R 50 ranging from 1.9 cm to 7.5 cm). The measurements were carried out using the linear accelerator facility of the Physikalisch-Technische Bundesanstalt. Relative standard uncertainties for the k Q factors between 0.50% for the 20 MeV beam and 0.75% for the 6 MeV beam were achieved. For electron energies above 8 MeV, general agreement was found between the relative electron energy dependencies of the k Q factors measured and those derived from the AAPM TG-51 protocol and recent Monte Carlo-based studies, as well as those from other experimental investigations. However, towards lower energies, discrepancies of up to 2.0% occurred for the k Q factors of the TM34001 and the NE2571 chamber.

  11. Open quantum systems and the two-level atom interacting with a single mode of the electromagnetic field

    International Nuclear Information System (INIS)

    Sandulescu, A.; Stefanescu, E.

    1987-07-01

On the basis of the Lindblad theory of open quantum systems, we obtain new optical equations for the system of a two-level atom interacting with a single mode of the electromagnetic field. The conventional Bloch equations, in a generalized form with field phases, are obtained under the hypothesis that all the terms are slowly varying in the rotating frame. (authors)

  12. Analytical Design of Passive LCL Filter for Three-phase Two-level Power Factor Correction Rectifiers

    DEFF Research Database (Denmark)

    Kouchaki, Alireza; Nymand, Morten

    2017-01-01

    This paper proposes a comprehensive analytical LCL filter design method for three-phase two-level power factor correction rectifiers (PFCs). The high frequency converter current ripple generates the high frequency current harmonics that need to be attenuated with respect to the grid standards...

  13. Quantum driving protocols for a two-level system: From generalized Landau-Zener sweeps to transitionless control

    DEFF Research Database (Denmark)

    Malossi, Nicola; Bason, Mark George; Viteau, Matthieu

    2013-01-01

    We present experimental results on the preparation of a desired quantum state in a two-level system with the maximum possible fidelity using driving protocols ranging from generalizations of the linear Landau-Zener protocol to transitionless driving protocols that ensure perfect following of the ...
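The linear Landau-Zener protocol mentioned in this record has a closed-form transition probability that a small numerical experiment can reproduce. The sketch below (illustrative parameters, ħ = 1; not the paper's experimental setup) integrates the two-level Schrödinger equation through an avoided crossing and compares the surviving diabatic population with the analytic formula exp(-πΔ²/2v):

```python
import math

def lz_analytic(delta=0.25, v=1.0):
    """Landau-Zener survival probability of the diabatic state:
    P = exp(-pi delta^2 / (2 v)) for H(t) = [[v t/2, delta/2],
    [delta/2, -v t/2]] with hbar = 1."""
    return math.exp(-math.pi * delta ** 2 / (2.0 * v))

def lz_numeric(delta=0.25, v=1.0, big_t=50.0, dt=0.002):
    """RK4 integration of i dpsi/dt = H(t) psi from -big_t to +big_t,
    starting in the diabatic state (1, 0)."""
    def deriv(t, psi):
        a, b = psi
        return (-1j * (0.5 * v * t * a + 0.5 * delta * b),
                -1j * (0.5 * delta * a - 0.5 * v * t * b))
    psi, t = (1.0 + 0j, 0j), -big_t
    for _ in range(int(round(2 * big_t / dt))):
        k1 = deriv(t, psi)
        k2 = deriv(t + dt / 2, (psi[0] + dt / 2 * k1[0], psi[1] + dt / 2 * k1[1]))
        k3 = deriv(t + dt / 2, (psi[0] + dt / 2 * k2[0], psi[1] + dt / 2 * k2[1]))
        k4 = deriv(t + dt, (psi[0] + dt * k3[0], psi[1] + dt * k3[1]))
        psi = (psi[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
               psi[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        t += dt
    return abs(psi[0]) ** 2
```

The numerical survival probability should approach the analytic value as the sweep window grows.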

  14. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with the efforts made. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  15. A Topological Model for Parallel Algorithm Design

    Science.gov (United States)

    1991-09-01

effort should be directed to planning, requirements analysis, specification and design, with 20% invested into the actual coding, and then the final 40...be one more language to learn. And by investing the effort into improving the utility of an existing language instead of creating a new one, this...193) it abandons the notion of a process as a fundamental concept of parallel program design and that it facilitates program derivation by rigorously

  16. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

Full Text Available This paper describes the work of an object oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and the work should be possible to split between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
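The master-slave, message-passing organization described in this record can be sketched in miniature. The version below is hypothetical: it uses Python threads and queues as the "processing units" rather than the framework's actual message-passing processes, but the cycle of splitting work, collecting partial results, and shutting down the slaves is the same pattern:

```python
import threading
import queue

def slave(task_q, result_q):
    """A 'processing unit': repeatedly takes a slice of work from the
    master, processes it, and sends back a partial result."""
    while True:
        task = task_q.get()
        if task is None:          # poison pill: no more work
            break
        lo, hi = task
        result_q.put(sum(i * i for i in range(lo, hi)))

def master(n=10000, n_slaves=4, chunk=1000):
    """The master splits the work between the slaves and combines the
    partial results (here: the sum of squares below n)."""
    task_q, result_q = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=slave, args=(task_q, result_q))
               for _ in range(n_slaves)]
    for w in workers:
        w.start()
    tasks = [(lo, min(lo + chunk, n)) for lo in range(0, n, chunk)]
    for t in tasks:
        task_q.put(t)
    total = sum(result_q.get() for _ in tasks)
    for _ in workers:
        task_q.put(None)          # shut the slaves down
    for w in workers:
        w.join()
    return total
```

`master()` returns the same value as the serial computation, independent of how the slaves interleave.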

  17. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
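One of the issues listed in this record, reproducibility, is commonly handled by giving each node its own independently seeded random stream, so the combined tally does not depend on scheduling. A toy illustration (hypothetical seeding scheme, a Monte Carlo pi estimate rather than the neutron transport codes studied in the paper):

```python
import random

def mc_pi_node(seed, n):
    """Samples handled by one 'node', drawn from its own private stream."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def mc_pi(n_nodes=4, n_per_node=20000, base_seed=1234):
    """Combine per-node tallies; because every node owns a seeded
    stream, the result is reproducible regardless of execution order."""
    hits = sum(mc_pi_node(base_seed + k, n_per_node) for k in range(n_nodes))
    return 4.0 * hits / (n_nodes * n_per_node)
```

Running `mc_pi()` twice yields bit-identical estimates, which is exactly the reproducibility property the per-node tallies would preserve under a real parallel scheduler.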

  18. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti... about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in the case of nucleobases Y and Z, a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one of the phosphate groups in the Watson-Crick strand. Also, it was shown that the nucleobase Y made good stacking and binding with the other nucleobases in the TFO and the Watson-Crick duplex, respectively. In contrast, the nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, which...

  19. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
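The stage-and-consensus structure described in this record can be mimicked with trivial stand-ins: several transforms of the input, one toy "stage classifier" per transform, and a weighted vote forming the consensual decision. All names and the nearest-centroid classifier below are illustrative, not the PCNN's actual stage neural networks or optimized weights:

```python
def make_stage(transform, train):
    """Train a toy nearest-centroid 'stage classifier' on transformed
    1-D samples (a stand-in for the PCNN's stage neural networks)."""
    groups = {}
    for x, label in train:
        groups.setdefault(label, []).append(transform(x))
    centroids = {lab: sum(v) / len(v) for lab, v in groups.items()}
    def classify(x):
        tx = transform(x)
        return min(centroids, key=lambda lab: abs(tx - centroids[lab]))
    return classify

def consensus(stages, weights, x):
    """Consensual decision: weight each stage's output and return the
    label with the largest combined weight."""
    votes = {}
    for stage, w in zip(stages, weights):
        label = stage(x)
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```

Each transformed copy of the input is classified independently, and only the weighted combination produces the final label, mirroring the consensus-theory design.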

  20. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

Schutte, J. F.; Fregly, B. J.; Haftka, R. T.; George, A. D.

    2003-01-01

    .... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...

  1. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  2. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

... adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...

  3. Quantum metrology of phase for accelerated two-level atom coupled with electromagnetic field with and without boundary

    Science.gov (United States)

    Yang, Ying; Liu, Xiaobao; Wang, Jieci; Jing, Jiliang

    2018-03-01

We study how to improve the precision of the quantum estimation of phase for a uniformly accelerated atom in a fluctuating electromagnetic field by means of reflecting boundaries. We find that, without a boundary, the precision decreases as the acceleration increases. In the presence of a reflecting boundary, the precision depends on the atomic polarization, position and acceleration, and can be effectively enhanced compared to the case without a boundary if we choose appropriate conditions. In particular, with two parallel reflecting boundaries, we obtain the optimal precision for parallel atomic polarization and a special distance between the two boundaries, as if the atom were shielded from the fluctuations.

  4. PSCAD modeling of a two-level space vector pulse width modulation algorithm for power electronics education

    Directory of Open Access Journals (Sweden)

    Ahmet Mete Vural

    2016-09-01

Full Text Available This paper presents the design details of a two-level space vector pulse width modulation algorithm in PSCAD that is able to generate pulses for three-phase two-level DC/AC converters with two different switching patterns. The presented FORTRAN code is generic and can be easily modified to meet many other kinds of space vector modulation strategies. The code is also editable for hardware programming. The new component is tested and verified by comparing its output as six gating signals with those of a similar component in the MATLAB library. Moreover, the component is used to generate digital signals for closed-loop control of a STATCOM for reactive power compensation in PSCAD. This add-on can be an effective tool to give students a better understanding of the space vector modulation algorithm for different control tasks in the power electronics area, and can motivate them to learn.
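The core of a two-level space vector modulator is locating the sector of the reference vector and computing the dwell times of the two adjacent active vectors and the zero vectors. A common textbook formulation is sketched below in Python (not the paper's FORTRAN component; the function name and unit-period default are illustrative):

```python
import math

def svpwm_dwell_times(v_ref, theta, v_dc, t_s=1.0):
    """Sector and dwell times for two-level space-vector PWM.
    v_ref: reference-vector magnitude, theta: its angle in rad,
    v_dc: DC-link voltage, t_s: switching period.
    Returns (sector, t1, t2, t0) with t1 + t2 + t0 == t_s."""
    theta = theta % (2.0 * math.pi)
    sector = int(theta // (math.pi / 3.0)) + 1      # sectors 1..6
    th = theta - (sector - 1) * math.pi / 3.0       # angle within the sector
    m = math.sqrt(3.0) * v_ref / v_dc               # modulation index
    t1 = t_s * m * math.sin(math.pi / 3.0 - th)     # first adjacent vector
    t2 = t_s * m * math.sin(th)                     # second adjacent vector
    t0 = t_s - t1 - t2                              # zero vectors (V0/V7)
    return sector, t1, t2, t0
```

At the middle of a sector (e.g. θ = 30°) the two active-vector dwell times are equal by symmetry, which is a handy sanity check for an implementation.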

  5. Non-Hermitian wave packet approximation for coupled two-level systems in weak and intense fields

    Energy Technology Data Exchange (ETDEWEB)

    Puthumpally-Joseph, Raiju; Charron, Eric [Institut des Sciences Moléculaires d’Orsay (ISMO), CNRS, Univ. Paris-Sud, Université Paris-Saclay, F-91405 Orsay (France); Sukharev, Maxim [Science and Mathematics Faculty, College of Letters and Sciences, Arizona State University, Mesa, Arizona 85212 (United States)

    2016-04-21

    We introduce a non-Hermitian Schrödinger-type approximation of optical Bloch equations for two-level systems. This approximation provides a complete and accurate description of the coherence and decoherence dynamics in both weak and strong laser fields at the cost of losing accuracy in the description of populations. In this approach, it is sufficient to propagate the wave function of the quantum system instead of the density matrix, providing that relaxation and dephasing are taken into account via automatically adjusted time-dependent gain and decay rates. The developed formalism is applied to the problem of scattering and absorption of electromagnetic radiation by a thin layer comprised of interacting two-level emitters.
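For reference, the optical Bloch equations that such a non-Hermitian scheme approximates can be integrated directly for a single, resonantly driven two-level emitter. The sketch below (illustrative parameters, ħ = 1; not the paper's formalism) recovers the well-known steady-state excited population Ω²/(Γ² + 2Ω²):

```python
def excited_population(omega=1.0, gamma=1.0, t_end=30.0, dt=0.01):
    """RK4 integration of the resonant optical Bloch equations for a
    two-level emitter (Rabi frequency omega, decay rate gamma):
        dv/dt = -gamma/2 * v - omega * w
        dw/dt =  omega * v - gamma * (w + 1)
    with w = rho_ee - rho_gg. Returns rho_ee at t_end."""
    def deriv(v, w):
        return (-0.5 * gamma * v - omega * w,
                omega * v - gamma * (w + 1.0))
    v, w = 0.0, -1.0                       # atom starts in the ground state
    for _ in range(int(round(t_end / dt))):
        k1 = deriv(v, w)
        k2 = deriv(v + dt / 2 * k1[0], w + dt / 2 * k1[1])
        k3 = deriv(v + dt / 2 * k2[0], w + dt / 2 * k2[1])
        k4 = deriv(v + dt * k3[0], w + dt * k3[1])
        v += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        w += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return (1.0 + w) / 2.0
```

A wave-function approximation with gain/decay rates, as in the paper, would aim to reproduce the coherence dynamics of this density-matrix solution while propagating only a two-component state.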

  6. Minimax approach problem with incomplete information for the two-level hierarchical discrete-time dynamical system

    Energy Technology Data Exchange (ETDEWEB)

    Shorikov, A. F. [Ural Federal University, 19 S. Mira, Ekaterinburg, 620002, Russia and Institute of Mathematics and Mechanics, Ural Division of Russian Academy of Sciences, 16 S. Kovalevskaya, Ekaterinburg, 620990 (Russian Federation)

    2014-11-18

We consider a discrete-time dynamical system consisting of three controllable objects. The motions of all objects are given by corresponding linear or convex discrete-time recurrent vector relations, and the control system has two levels: a basic (first, or I) level that is dominating and a subordinate (second, or II) level; the two levels have different criteria of functioning and are united a priori by informational and control connections determined in advance. For the dynamical system in question, we propose a mathematical formalization in the form of solving a multistep problem of two-level hierarchical minimax program control over the terminal approach process with incomplete information, and give a general scheme for its solution.

  7. Fabrication of Ni stamp with high aspect ratio, two-leveled, cylindrical microstructures using dry etching and electroplating

    DEFF Research Database (Denmark)

    Petersen, Ritika Singh; Keller, Stephan Sylvest; Hansen, Ole

    2015-01-01

obtained by defining a reservoir and a separating trench with different depths of 85 and 125 μm, respectively, in a single embossing step. The fabrication of the required two-leveled stamp is done using a modified DEEMO (dry etching, electroplating and molding) process. Dry etching using the Bosch process...... and electroplating are optimized to obtain a stamp with smooth surfaces and a positive sidewall profile. Using this stamp, hot embossing is performed successfully with excellent yield and high replication fidelity....

  8. An EOQ Model with Stock-Dependent Demand under Two Levels of Trade Credit and Time Value of Money

    OpenAIRE

HAO Jia-Qin; MO Jiangtao

    2013-01-01

Since the value of money changes with time, it is necessary to take account of the influence of the time factor in making the replenishment policy. In this study, to investigate the influence of the time value of money on the inventory strategy, an inventory system for deteriorating items with stock-dependent demand is investigated under two levels of trade credit. A method to efficiently determine the optimal cycle time is presented. Numerical examples are provided to demonstrate the model and...

  9. Coherent control of the group velocity in a dielectric slab doped with duplicated two-level atoms

    Science.gov (United States)

    Ziauddin; Chuang, You-Lin; Lee, Ray-Kuang; Qamar, Sajid

    2016-01-01

    Coherent control of reflected and transmitted pulses is investigated theoretically through a slab doped with atoms in a duplicated two-level configuration. When a strong control field and a relatively weak probe field are employed, coherent control of the group velocity is achieved via changing the phase shift ϕ between control and probe fields. Furthermore, the peak values in the delay time of the reflected and transmitted pulses are also studied by varying the phase shift ϕ.

  10. A parallel orbital-updating based plane-wave basis method for electronic structure calculations

    International Nuclear Information System (INIS)

    Pan, Yan; Dai, Xiaoying; Gironcoli, Stefano de; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui

    2017-01-01

Highlights: • We propose three parallel orbital-updating based plane-wave basis methods for electronic structure calculations. • These new methods avoid generating large scale eigenvalue problems and thereby reduce the computational cost. • These new methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. • Numerical experiments show that these new methods are reliable and efficient for large scale calculations on modern supercomputers. - Abstract: Motivated by the recently proposed parallel orbital-updating approach in the real space method, we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to the traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large scale calculations on modern supercomputers.

  11. Entropy squeezing for a two-level atom in two-mode Raman coupled model with intrinsic decoherence

    Institute of Scientific and Technical Information of China (English)

    Zhang Jian; Shao Bin; Zou Jian

    2009-01-01

In this paper, we investigate the entropy squeezing for a two-level atom interacting with two quantized fields through Raman coupling. We obtain the dynamical evolution of the total system under the influence of intrinsic decoherence when the two quantized fields are prepared in a two-mode squeezing vacuum state initially. The effects of the field squeezing factor, the two-level atomic transition frequency, the second field frequency and the intrinsic decoherence on the entropy squeezing are discussed. Without intrinsic decoherence, the increase of the field squeezing factor can break the entropy squeezing. The two-level atomic transition frequency changes only the period of oscillation but not the strength of entropy squeezing. The influence of the second field frequency is complicated. With the intrinsic decoherence taken into consideration, the results show that the stronger the intrinsic decoherence is, the more quickly the entropy squeezing will disappear. The increase of the atomic transition frequency can hasten the disappearance of entropy squeezing.

  12. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    Energy Technology Data Exchange (ETDEWEB)

    Schanen, Michel; Marin, Oana; Zhang, Hong; Anitescu, Mihai

    2016-01-01

Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
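The storage/recomputation trade-off behind such two-level schemes can be shown with a toy model: infrequent "disk" checkpoints plus a small rolling in-memory window, where the adjoint sweep recomputes forward from the nearest checkpoint whenever a state is not in memory. This is a deliberately simplified sketch with invented names, not the paper's binomial/asynchronous scheme:

```python
def step(state):
    """One forward time step of a toy simulation (stands in for the solver)."""
    return 3 * state + 1

def run_forward(state0, n_steps, disk_interval=10, mem_window=3):
    """Forward sweep with two storage levels: sparse 'disk' checkpoints
    every disk_interval steps plus a rolling in-memory window."""
    disk, mem = {0: state0}, {0: state0}
    s = state0
    for k in range(1, n_steps + 1):
        s = step(s)
        if k % disk_interval == 0:
            disk[k] = s
        mem[k] = s
        mem.pop(k - mem_window, None)     # forget states outside the window
    return disk, mem

def restore(k, disk, mem):
    """Adjoint-sweep access to the state at step k: use memory if present,
    otherwise recompute from the nearest earlier disk checkpoint."""
    if k in mem:
        return mem[k]
    base = max(c for c in disk if c <= k)
    s = disk[base]
    for _ in range(k - base):
        s = step(s)
    return s
```

The design choice mirrored here is that recomputation cost is bounded by the disk interval, while memory use is bounded by the window size; a real two-level scheme tunes both against disk bandwidth.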

  13. Performance and meat quality traits of beef heifers fed with two levels of concentrate and ruminally undegradable protein.

    Science.gov (United States)

    Duarte, Marcio de Souza; Paulino, Pedro Veiga Rodrigues; Valadares Filho, Sebastião de Campos; Paulino, Mario Fonseca; Detmann, Edenio; Zervoudakis, Joanis Tilemahos; Monnerat, João Paulo Ismerio dos Santos; Viana, Gabriel da Silva; Silva, Luiz Henrique P; Serão, Nicola Vergara Lopes

    2011-04-01

    The effects of two levels of concentrate and ruminally undegradable protein (RUP) on performance, intake, digestibility, carcass characteristics, meat quality traits, and commercial cuts yield were assessed. Twenty crossbred heifers (240 kg average body weight) were used. At the beginning of the trial, four animals were slaughtered as reference group and the 16 remaining animals were randomly assigned to four treatments, in a 2 × 2 factorial design: two levels of concentrate (40% and 80%, dry matter (DM) basis) and two levels of RUP (48.79% and 27.19% of CP). At the end of the trial, all the animals were slaughtered. There was no interaction (P > 0.05) between concentrate and RUP levels. Dry matter intake and nutrients digestibility was not affected (P > 0.05) by RUP level. Heifers fed the highest RUP level had greater (P  0.05) DMI and ADG. Heifers fed diets with 80% concentrate had greater intake of TDN and EE, and lower intake of NDF (P RUP levels did not affect (P > 0.05) the carcass characteristics and carcass gain composition. Heifers fed 80% concentrate diets had larger (P  0.05) the composition of carcass gain. There was no effect (P > 0.05) of RUP and concentrate levels on meat quality traits and commercial cut yields.

  14. Entropy squeezing for a two-level atom in two-mode Raman coupled model with intrinsic decoherence

    International Nuclear Information System (INIS)

    Jian, Zhang; Bin, Shao; Jian, Zou

    2009-01-01

    In this paper, we investigate the entropy squeezing for a two-level atom interacting with two quantized fields through Raman coupling. We obtain the dynamical evolution of the total system under the influence of intrinsic decoherence when the two quantized fields are prepared in a two-mode squeezing vacuum state initially. The effects of the field squeezing factor, the two-level atomic transition frequency, the second field frequency and the intrinsic decoherence on the entropy squeezing are discussed. Without intrinsic decoherence, the increase of field squeezing factor can break the entropy squeezing. The two-level atomic transition frequency changes only the period of oscillation but not the strength of entropy squeezing. The influence of the second field frequency is complicated. With the intrinsic decoherence taken into consideration, the results show that the stronger the intrinsic decoherence is, the more quickly the entropy squeezing will disappear. The increase of the atomic transition frequency can hasten the disappearance of entropy squeezing. (classical areas of phenomenology)

  15. An Indication of Reliability of the Two-Level Approach of the AWIN Welfare Assessment Protocol for Horses

    Directory of Open Access Journals (Sweden)

    Irena Czycholl

    2018-01-01

Full Text Available To enhance feasibility, the Animal Welfare Indicators (AWIN) assessment protocol for horses consists of two levels: the first is a visual inspection of a sample of horses performed from a distance, the second a close-up inspection of all horses. The aim was to analyse whether information would be lost if only the first level were performed. In this study, 112 first- and 112 second-level assessments carried out on a subsequent day by one observer were compared by calculating Spearman's Rank Correlation Coefficients (RS), Intraclass Correlation Coefficients (ICC), Smallest Detectable Changes (SDC) and Limits of Agreement (LoA). Most indicators demonstrated sufficient reliability between the two levels. Exceptions were the Horse Grimace Scale, the Avoidance Distance Test and the Voluntary Human Approach Test (e.g., Voluntary Human Approach Test: RS: 0.38, ICC: 0.38, SDC: 0.21, LoA: −0.25–0.17), which could, however, also be interpreted as a lack of test-retest reliability. Further disagreement was found for the indicator consistency of manure (RS: 0.31, ICC: 0.38, SDC: 0.36, LoA: −0.38–0.36). For these indicators, an adaptation of the first level would be beneficial. Overall, in this study, the division into two levels was reliable and may therefore have the potential to enhance feasibility in other welfare assessment schemes.
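Two of the agreement measures used in this record, Spearman's RS and the Bland-Altman limits of agreement, are straightforward to compute. A minimal sketch (no tie handling, illustrative data, not the study's statistical software):

```python
import statistics

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two assessments:
    mean difference +/- 1.96 * SD of the pairwise differences."""
    diffs = [x - y for x, y in zip(a, b)]
    m, sd = statistics.mean(diffs), statistics.stdev(diffs)
    return m - 1.96 * sd, m + 1.96 * sd

def spearman_rho(a, b):
    """Spearman rank correlation via the classic
    1 - 6*sum(d^2)/(n(n^2-1)) formula (assumes no ties)."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

Narrow limits of agreement centred near zero, together with a high rank correlation, are what "sufficient reliability between the two levels" amounts to numerically.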

  16. Parallel electric fields from ionospheric winds

    International Nuclear Information System (INIS)

    Nakada, M.P.

    1987-01-01

    The possible production of electric fields parallel to the magnetic field by dynamo winds in the E region is examined, using a jet stream wind model. Current return paths through the F region above the stream are examined as well as return paths through the conjugate ionosphere. The Wulf geometry with horizontal winds moving in opposite directions one above the other is also examined. Parallel electric fields are found to depend strongly on the width of current sheets at the edges of the jet stream. If these are narrow enough, appreciable parallel electric fields are produced. These appear to be sufficient to heat the electrons which reduces the conductivity and produces further increases in parallel electric fields and temperatures. Calculations indicate that high enough temperatures for optical emission can be produced in less than 0.3 s. Some properties of auroras that might be produced by dynamo winds are examined; one property is a time delay in brightening at higher and lower altitudes

  17. Comparison of Cervical Kinematics, Pain, and Functional Disability Between Single- and Two-level Anterior Cervical Discectomy and Fusion.

    Science.gov (United States)

    Chien, Andy; Lai, Dar-Ming; Wang, Shwu-Fen; Hsu, Wei-Li; Cheng, Chih-Hsiu; Wang, Jaw-Lin

    2016-08-01

    A prospective, time series design. The purpose of this study is two-fold: firstly, to investigate the impact of altered cervical alignment and range of motion (ROM) on patients’ self-reported outcomes after anterior cervical discectomy and fusion (ACDF), and secondly, to comparatively differentiate the influence of single- and two-level ACDF on the cervical ROM and adjacent segmental kinematics up to 12 months postoperatively. ACDF is one of the most commonly employed surgical interventions to treat degenerative disc disease. However, there are limited in vivo data on the impact of ACDF on cervical kinematics and its association with patient-reported clinical outcomes. Sixty-two patients (36 males; 55.63 ± 11.6 yrs) undergoing either a single- or consecutive two-level ACDF were recruited. The clinical outcomes were assessed with the Pain Visual Analogue Scale (VAS) and the Neck Disability Index (NDI). Radiological results included cervical lordosis, global C2-C7 ROM, ROM of the Functional Spinal Unit (FSU), and its adjacent segments. The outcome measures were collected preoperatively and then at 3, 6, and 12 months postoperatively. A significant reduction of both VAS and NDI was found for both groups from the preoperative to the 3-month period (P < 0.01). Pearson correlation revealed no significant correlation between global ROM and either VAS (P = 0.667) or NDI (P = 0.531). A significant reduction of global ROM was identified for the two-level ACDF group at 12 months (P = 0.017) but not for the single-level group. A significant interaction effect was identified for the upper adjacent segment ROM (P = 0.024) but not at the lower adjacent segment. The current study utilized dynamic radiographs to comparatively evaluate the biomechanical impact of single- and two-level ACDF. The results highlighted that the two-level group demonstrated a greater reduction of global ROM coupled with increased upper adjacent segmental compensatory motions that

  18. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  19. Parallelization and automatic data distribution for nuclear reactor simulations

    International Nuclear Information System (INIS)

    Liebrock, L.M.

    1997-01-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed

  20. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is a pressing question today. Legalizing parallel import in Russia is expedient; this conclusion is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of such a decision and to apply remedies to minimize them.

  1. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  2. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  3. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  4. Parallel execution of chemical software on EGEE Grid

    CERN Document Server

    Sterzel, Mariusz

    2008-01-01

    Constant interest within the chemical community in studying larger and larger molecules forces the parallelization of existing computational methods in chemistry and the development of new ones. These are the main reasons for frequent port updates and requests from the community for Grid ports of new packages to satisfy their computational demands. Unfortunately, some parallelization schemes used by chemical packages cannot be used directly in a Grid environment. Here we present a solution for the Gaussian package. The current state of development of Grid middleware allows easy parallel execution of software using any MPI flavour. Unfortunately, many chemical packages do not use MPI for parallelization, and therefore special treatment is needed. Gaussian can be executed in parallel on SMP architectures or via Linda. These require the reservation of a certain number of processors/cores on a given WN and an equal number of processors/cores on each WN, respectively. The current implementation of EGEE middleware does not offer such f...

  5. Two-level anterior lumbar interbody fusion with percutaneous pedicle screw fixation. A minimum 3-year follow-up study

    International Nuclear Information System (INIS)

    Lee, Dong-Yeob; Lee, Sang-Ho; Maeng, Dae-Hyeon

    2010-01-01

    The clinical and radiological outcomes of two-level anterior lumbar interbody fusion (ALIF) with percutaneous pedicle screw fixation (PSF) were evaluated in 24 consecutive patients who underwent two-level ALIF with percutaneous PSF for segmental instability and were followed up for more than 3 years. Clinical outcomes were assessed using a visual analogue scale (VAS) score and the Oswestry Disability Index (ODI). Sagittal alignment, bone union, and adjacent segment degeneration (ASD) were assessed using radiography and magnetic resonance imaging. The mean age of the patients at the time of operation was 56.3 years (range 39-70 years). Minor complications occurred in 2 patients in the perioperative period. At a mean follow-up duration of 39.4 months (range 36-42 months), VAS scores for back pain and leg pain, and the ODI score decreased significantly (from 6.5, 6.8, and 46.9% to 3.0, 1.9, and 16.3%, respectively). Clinical success was achieved in 22 of the 24 patients. The mean segmental lordosis, whole lumbar lordosis, and sacral tilt significantly increased after surgery (from 25.1°, 39.2°, and 32.6° to 32.9°, 44.5°, and 36.6°, respectively). Solid fusion was achieved in 21 patients. ASD was found in 8 of the 24 patients. No patient underwent revision surgery due to nonunion or ASD. Two-level ALIF with percutaneous PSF yielded satisfactory clinical and radiological outcomes and could be a useful alternative to posterior fusion surgery. (author)

  6. Two-level anterior lumbar interbody fusion with percutaneous pedicle screw fixation. A minimum 3-year follow-up study

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong-Yeob; Lee, Sang-Ho; Maeng, Dae-Hyeon [Wooridul Spine Hospital, Seoul (Korea, Republic of)

    2010-08-15

    The clinical and radiological outcomes of two-level anterior lumbar interbody fusion (ALIF) with percutaneous pedicle screw fixation (PSF) were evaluated in 24 consecutive patients who underwent two-level ALIF with percutaneous PSF for segmental instability and were followed up for more than 3 years. Clinical outcomes were assessed using a visual analogue scale (VAS) score and the Oswestry Disability Index (ODI). Sagittal alignment, bone union, and adjacent segment degeneration (ASD) were assessed using radiography and magnetic resonance imaging. The mean age of the patients at the time of operation was 56.3 years (range 39-70 years). Minor complications occurred in 2 patients in the perioperative period. At a mean follow-up duration of 39.4 months (range 36-42 months), VAS scores for back pain and leg pain, and the ODI score decreased significantly (from 6.5, 6.8, and 46.9% to 3.0, 1.9, and 16.3%, respectively). Clinical success was achieved in 22 of the 24 patients. The mean segmental lordosis, whole lumbar lordosis, and sacral tilt significantly increased after surgery (from 25.1°, 39.2°, and 32.6° to 32.9°, 44.5°, and 36.6°, respectively). Solid fusion was achieved in 21 patients. ASD was found in 8 of the 24 patients. No patient underwent revision surgery due to nonunion or ASD. Two-level ALIF with percutaneous PSF yielded satisfactory clinical and radiological outcomes and could be a useful alternative to posterior fusion surgery. (author)

  7. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
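
    The speedup limitation described above can be illustrated with a toy cost model: per-cycle tracking work divides across processors, while the end-of-cycle rendezvous (gathering the full fission bank) costs more as the number of ranks grows. All numbers and names below are illustrative assumptions, not taken from the paper.

    ```python
    def cycle_time(p, track_time=1.0, gather_per_rank=0.01):
        """Wall time of one cycle on p processors (toy model).

        track_time      -- particle-tracking work per cycle, perfectly divisible
        gather_per_rank -- rendezvous cost that grows linearly with p
                           (collecting the full fission source distribution).
        """
        return track_time / p + gather_per_rank * p

    def speedup(p, **kw):
        """Speedup relative to a single processor under the same cost model."""
        return cycle_time(1, **kw) / cycle_time(p, **kw)
    ```

    With these numbers the speedup peaks near p = 10 and then falls below 1, reproducing the observation that execution time can increase beyond a certain processor count.
    
    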

  8. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  9. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.

  10. Entropy as a measure of the noise extent in a two-level quantum feedback controlled system

    Institute of Scientific and Technical Information of China (English)

    Wang Tao-Bo; Fang Mao-Fa; Hu Yao-Hua

    2007-01-01

    By introducing the von Neumann entropy as a measure of the extent of noise, this paper discusses the entropy evolution in a two-level quantum feedback controlled system. The results show that the feedback control can reduce the degree of noise, that different control schemes exhibit different noise-controlling abilities, and that the extent of the reduction is also related to the position of the target state on the Bloch sphere. It is shown that the evolution of entropy can provide real-time noise observation and a systematic guideline for making a reasonable choice of control strategy.
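
    As an illustrative aside (not code from the paper), the von Neumann entropy of a two-level state depends only on the length r of its Bloch vector, since the density matrix eigenvalues are (1 ± r)/2:

    ```python
    import math

    def von_neumann_entropy(r):
        """Entropy (in bits) of a qubit whose Bloch vector has length r, 0 <= r <= 1.

        A pure state (r = 1) has zero entropy; the maximally mixed state
        (r = 0) has one bit, the maximal value of the noise measure above.
        """
        if r >= 1.0:
            return 0.0
        p = (1.0 + r) / 2.0
        entropy = 0.0
        for w in (p, 1.0 - p):
            if w > 0.0:
                entropy -= w * math.log2(w)
        return entropy
    ```

    A feedback scheme that lengthens the Bloch vector of the controlled state thus directly lowers this entropy.
    
    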

  11. Comparison of PI and PR current controllers applied on two-level VSC-HVDC transmission system

    DEFF Research Database (Denmark)

    Manoloiu, A.; Pereria, H.A.; Teodorescu, Remus

    2015-01-01

    This paper analyzes differences between αβ and dq reference frames regarding the control of the two-level VSC-HVDC current loop and dc-link voltage outer loop. In the first part, the voltage feedforward effect is considered with PI and PR controllers. In the second part, the feedforward effect is removed and the PR gains are tuned to keep the dynamic performance. Also, the power feedforward is removed and the outer-loop PI controller is tuned in order to maintain the system dynamic performance. The paper is completed with simulation results, which highlight the advantages of using the PR controller.

  12. On the deviation from the sech2 superradiant emission law in a two-level atomic system

    International Nuclear Information System (INIS)

    Goncalves, A.E.

    1990-01-01

    The atomic superradiant emission is treated in the single-particle mean field approximation. A single-particle Hamiltonian, which represents a dressed two-level atom in a radiation field, can be obtained, and it is verified that it describes the transient regime of the emission process. While the emission line shape for a bare atom follows the sech² law, for the dressed atom the line shape deviates appreciably from this law, and it is verified that the deviation depends crucially on the ratio of the dynamic frequency shift to the transition frequency. This kind of deviation is observed in experimental results. (Author) [pt

  13. 2L-PCA: a two-level principal component analyzer for quantitative drug design and its applications.

    Science.gov (United States)

    Du, Qi-Shi; Wang, Shu-Qing; Xie, Neng-Zhong; Wang, Qing-Yan; Huang, Ri-Bo; Chou, Kuo-Chen

    2017-09-19

    A two-level principal component predictor (2L-PCA) was proposed based on the principal component analysis (PCA) approach. It can be used to quantitatively analyze various compounds and peptides about their functions or potentials to become useful drugs. One level is for dealing with the physicochemical properties of drug molecules, while the other level is for dealing with their structural fragments. The predictor has the self-learning and feedback features to automatically improve its accuracy. It is anticipated that 2L-PCA will become a very useful tool for timely providing various useful clues during the process of drug development.

  14. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  15. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to +- 20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  16. On the Convergence of Asynchronous Parallel Pattern Search

    International Nuclear Information System (INIS)

    Tamara Gibson Kolda

    2002-01-01

    In this paper the authors prove global convergence for asynchronous parallel pattern search. In standard pattern search, decisions regarding the update of the iterate and the step-length control parameter are synchronized implicitly across all search directions. They lose this feature in asynchronous parallel pattern search since the search along each direction proceeds semi-autonomously. By bounding the value of the step-length control parameter after any step that produces decrease along a single search direction, they can prove that all the processes share a common accumulation point and that such a point is a stationary point of the standard nonlinear unconstrained optimization problem
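
    The step-length control that the convergence proof hinges on is easiest to see in the synchronous variant. Below is a minimal coordinate pattern-search sketch (an illustration of the general technique, not the authors' asynchronous implementation): poll ± each coordinate direction, and contract the step only after a poll that produces no decrease.

    ```python
    def pattern_search(f, x, step=1.0, tol=1e-6, max_iter=10_000):
        """Minimize f by polling +/- step along each coordinate direction.

        The step-length control parameter `step` is halved only when a full
        poll fails to find decrease, mirroring the synchronized update that
        the asynchronous variant relaxes.
        """
        n = len(x)
        fx = f(x)
        for _ in range(max_iter):
            if step < tol:
                break
            improved = False
            for i in range(n):                 # poll each search direction
                for s in (step, -step):
                    y = list(x)
                    y[i] += s
                    fy = f(y)
                    if fy < fx:                # opportunistic acceptance
                        x, fx, improved = y, fy, True
            if not improved:
                step *= 0.5                    # contract on a failed poll
        return x, fx
    ```

    On a smooth convex function this drives the iterate to a stationary point, e.g. `pattern_search(lambda v: sum(t * t for t in v), [3.0, -2.0])` converges to the origin.
    
    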

  17. MINARET: Towards a time-dependent neutron transport parallel solver

    International Nuclear Information System (INIS)

    Baudron, A.M.; Lautard, J.J.; Maday, Y.; Mula, O.

    2013-01-01

    We present the newly developed time-dependent 3D multigroup discrete ordinates neutron transport solver that has recently been implemented in the MINARET code. The solver is the support for a study about computing acceleration techniques that involve parallel architectures. In this work, we will focus on the parallelization of two of the variables involved in our equation: the angular directions and the time. This last variable has been parallelized by a (time) domain decomposition method called the para-real in time algorithm. (authors)
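
    The para-real (parareal) time decomposition mentioned above can be illustrated on a scalar model problem y' = −y, with a cheap coarse propagator G and an expensive fine propagator F, iterating the correction U[n+1] = G(U[n]) + F(U_old[n]) − G(U_old[n]). This is a generic sketch of the algorithm under these stand-in propagators, not MINARET code.

    ```python
    def coarse(y, dt):
        """Cheap propagator: one explicit Euler step of y' = -y."""
        return y + dt * (-y)

    def fine(y, dt, m=100):
        """Expensive propagator: m Euler substeps (stands in for the full solver)."""
        h = dt / m
        for _ in range(m):
            y = y + h * (-y)
        return y

    def parareal(y0, T=2.0, N=10, K=5):
        """Parareal iteration over N time slices, K correction sweeps."""
        dt = T / N
        U = [y0]
        for _ in range(N):                            # initial serial coarse sweep
            U.append(coarse(U[-1], dt))
        for _ in range(K):
            F = [fine(U[n], dt) for n in range(N)]    # parallel in time, in principle
            G = [coarse(U[n], dt) for n in range(N)]
            V = [y0]
            for n in range(N):
                V.append(coarse(V[-1], dt) + F[n] - G[n])
            U = V
        return U
    ```

    The fine solves within each sweep are independent across time slices, which is where the parallelism in time comes from; after K sweeps the result matches the serial fine solution on the early slices exactly and converges rapidly on the rest.
    
    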

  18. Transverse magnetic field effect on the giant Goos–Hänchen shifts based on a degenerate two-level system

    Science.gov (United States)

    Nasehi, R.

    2018-06-01

    We study the effect of the Goos–Hänchen (GH) shifts through a cavity with degenerate two-level systems in the line of . For this purpose, we focus on the transverse magnetic field (TMF) in a Floquet frame to obtain the giant GH shifts. Physically, the collisional effects of TMF lead to increasing the population trapping in the ground state. However, we demonstrate that the population trapping generates the large negative or positive GH shifts and simultaneously switches from superluminal to subluminal (or vice versa). Also, we investigate the other optical properties such as the longitudinal magnetic field (LMF), which plays an important role in the control of the GH shifts and leads to the generation of new subsystems. In the next step, we evaluate the GH shifts beyond the multi-photon resonance condition by the control of TMF. Moreover, we compute the appearance of negative and positive GH shifts by setting the width of the incident Gaussian beams in the presence of a multi-photon resonance condition. Our results show that superluminal or subluminal light propagation can be simultaneously controlled by adjusting the rates of the TMF and LMF. The significant effects of these factors on the degenerate two-level systems provide different applications such as slow light, optical switches and quantum information storage.

  19. Political legitimacy and European monetary union: contracts, constitutionalism and the normative logic of two-level games

    Science.gov (United States)

    Bellamy, Richard; Weale, Albert

    2015-01-01

    ABSTRACT The crisis of the euro area has severely tested the political authority of the European Union (EU). The crisis raises questions of normative legitimacy both because the EU is a normative order and because the construction of economic and monetary union (EMU) rested upon a theory that stressed the normative value of the depoliticization of money. However, this theory neglected the normative logic of the two-level game implicit in EMU. It also neglected the need for an impartial and publicly acceptable constitutional order to acknowledge reasonable disagreements. By contrast, we contend that any reconstruction of the EU's economic constitution has to pay attention to reconciling a European monetary order with the legitimacy of member state governance. The EU requires a two-level contract to meet this standard. Member states must treat each other as equals and be representative of and accountable to their citizens on an equitable basis. These criteria entail that the EU's political legitimacy requires a form of demoicracy that we call ‘republican intergovernmentalism’. Only rules that could be acceptable as the product of a political constitution among the peoples of Europe can ultimately meet the required standards of political legitimacy. Such a political constitution could be brought about through empowering national parliaments in EU decision-making. PMID:26924935

  20. Experimental study of magnetocaloric effect in the two-level quantum system KTm(MoO4)2

    Science.gov (United States)

    Tarasenko, R.; Tkáč, V.; Orendáčová, A.; Orendáč, M.; Valenta, J.; Sechovský, V.; Feher, A.

    2018-05-01

    KTm(MoO4)2 belongs to the family of binary alkaline rare-earth molybdates. This compound can be considered an almost ideal quantum two-level system at low temperatures. Magnetocaloric properties of KTm(MoO4)2 single crystals were investigated using specific heat and magnetization measurements in a magnetic field applied along the easy axis. A large conventional magnetocaloric effect (-ΔSM ≈ 10.3 J/(kg K)) was observed in a magnetic field of 5 T over a relatively wide temperature interval. An isothermal magnetic entropy change of about 8 J/(kg K) has already been achieved for a magnetic field of 2 T. The temperature dependence of the isothermal entropy change under different magnetic fields is in good agreement with theoretical predictions for a quantum two-level system with Δ ≈ 2.82 cm-1. The investigation of the magnetocaloric properties of KTm(MoO4)2 suggests that the studied system can be considered a good material for magnetic cooling at low temperatures.
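
    The theoretical prediction referred to above is the Schottky form of the two-level entropy. A minimal sketch of S/R as a function of temperature for a splitting Δ given in cm⁻¹ (the unit conversion via k_B ≈ 0.695 cm⁻¹/K is an assumption of this illustration, not a value from the paper):

    ```python
    import math

    K_B_CM = 0.695  # Boltzmann constant in cm^-1 per kelvin (illustrative unit choice)

    def entropy_two_level(T, delta_cm):
        """Schottky entropy S/R of a two-level system with splitting delta_cm."""
        x = delta_cm / (K_B_CM * T)            # dimensionless splitting Δ/kT
        z = 1.0 + math.exp(-x)                 # two-level partition function
        return math.log(z) + x * math.exp(-x) / z
    ```

    The entropy rises from zero at low temperature to its ln 2 ceiling at high temperature; a magnetic field that widens the splitting Δ pushes this crossover to higher temperature, which is the source of the isothermal entropy change.
    
    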

  1. Can centralized sanctioning promote trust in social dilemmas? A two-level trust game with incomplete information.

    Science.gov (United States)

    Wang, Raymond Yu; Ng, Cho Nam

    2015-01-01

    The problem of trust is a paradigmatic social dilemma. Previous literature has paid much attention to the effects of peer punishment and altruistic third-party punishment on trust and human cooperation in dyadic interactions. However, the effects of centralized sanctioning institutions on decentralized reciprocity in hierarchical interactions remain to be further explored. This paper presents a formal two-level trust game with incomplete information, which adds an authority as a strategic, purposive actor to the traditional trust game. This model allows scholars to examine the problem of trust in more complex game-theoretic configurations. The analysis demonstrates how centralized institutions might change the dynamics of reciprocity between the trustor and the trustee. Findings suggest that the sequential equilibria of the newly proposed two-level model simultaneously include the risk of placing trust for the trustor and the temptation of short-term defection for the trustee. Moreover, the analysis shows that even a slight uncertainty about the type of the newly introduced authority might facilitate the establishment of trust and reciprocity in social dilemmas.
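
    As a toy illustration of how an authority changes the dyadic equilibrium (all payoffs below are invented for the sketch, not the paper's model), backward induction over a one-shot trust game shows the sanction probability flipping the outcome from no-trust to trust:

    ```python
    def trust_game(sanction_prob, penalty=3.0):
        """Backward induction for a one-shot trust game with a sanctioning authority.

        Hypothetical payoffs (trustor, trustee): no trust (1, 1); trust honored
        (2, 2); trust abused (0, 4). The authority detects abuse with probability
        `sanction_prob` and levies `penalty` on the trustee.
        """
        expected_abuse = 4.0 - sanction_prob * penalty   # trustee's expected abuse payoff
        trustee_honors = 2.0 >= expected_abuse           # honor iff abuse no longer pays
        trustor_trusts = trustee_honors                  # trusting pays (2 > 1) only then
        return trustor_trusts, trustee_honors
    ```

    With these numbers the equilibrium flips once the sanction probability reaches 2/3; the two-level model in the paper enriches this by making the authority itself strategic and its type uncertain.
    
    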

  2. Can centralized sanctioning promote trust in social dilemmas? A two-level trust game with incomplete information.

    Directory of Open Access Journals (Sweden)

    Raymond Yu Wang

    Full Text Available The problem of trust is a paradigmatic social dilemma. Previous literature has paid much academic attention to the effects of peer punishment and altruistic third-party punishment on trust and human cooperation in dyadic interactions. However, the effects of centralized sanctioning institutions on decentralized reciprocity in hierarchical interactions remain to be further explored. This paper presents a formal two-level trust game with incomplete information, which adds an authority as a strategic, purposive actor to the traditional trust game. This model allows scholars to examine the problem of trust in more complex game-theoretic configurations. The analysis demonstrates how centralized institutions might change the dynamics of reciprocity between the trustor and the trustee. Findings suggest that the sequential equilibria of the newly proposed two-level model simultaneously include the risk of placing trust for the trustor and the temptation of short-term defection for the trustee. Moreover, they show that even a slight uncertainty about the type of the newly introduced authority might facilitate the establishment of trust and reciprocity in social dilemmas.

  3. An Economic Order Quantity Model with Completely Backordering and Nondecreasing Demand under Two-Level Trade Credit

    Directory of Open Access Journals (Sweden)

    Zohreh Molamohamadi

    2014-01-01

    Full Text Available In the traditional inventory system, it was implicitly assumed that the buyer pays the seller as soon as he receives the items. In today's competitive industry, however, the seller usually offers the buyer a delay period to settle the account for the goods. Not only the seller but also the buyer may apply trade credit as a strategic tool to stimulate his customers' demand. This paper investigates the effects of the latter policy, two-level trade credit, on a retailer's optimal ordering decisions within the economic order quantity framework with allowable shortages. Unlike most previous studies, the demand function of the customers is considered to increase with time. The objective of the retailer's inventory model is to maximize profit. The optimal replenishment decisions are obtained using a genetic algorithm. Two special cases of the proposed model are discussed, and the impacts of the parameters on the decision variables are finally investigated. Numerical examples demonstrate the profitability of the developed two-level supply chain with backorders.

  4. Political legitimacy and European monetary union: contracts, constitutionalism and the normative logic of two-level games.

    Science.gov (United States)

    Bellamy, Richard; Weale, Albert

    2015-02-07

    The crisis of the euro area has severely tested the political authority of the European Union (EU). The crisis raises questions of normative legitimacy both because the EU is a normative order and because the construction of economic and monetary union (EMU) rested upon a theory that stressed the normative value of the depoliticization of money. However, this theory neglected the normative logic of the two-level game implicit in EMU. It also neglected the need for an impartial and publicly acceptable constitutional order to acknowledge reasonable disagreements. By contrast, we contend that any reconstruction of the EU's economic constitution has to pay attention to reconciling a European monetary order with the legitimacy of member state governance. The EU requires a two-level contract to meet this standard. Member states must treat each other as equals and be representative of and accountable to their citizens on an equitable basis. These criteria entail that the EU's political legitimacy requires a form of demoi-cracy that we call 'republican intergovernmentalism'. Only rules that could be acceptable as the product of a political constitution among the peoples of Europe can ultimately meet the required standards of political legitimacy. Such a political constitution could be brought about through empowering national parliaments in EU decision-making.

  5. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, small self-weight/load ratio, good dynamic behavior and easy control; hence its range of application keeps extending. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limitation of link lengths has been introduced. This paper analyses the position workspace and orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the lengths of the branches of the parallel mechanism is a main means of increasing or decreasing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but will change its position.

  6. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
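    The combination rules the activity makes tangible are simple enough to state in code; a small sketch (helper names are illustrative, not from the article):

```python
def series(*resistances):
    """Resistances in series add: R = R1 + R2 + ..."""
    return sum(resistances)

def parallel(*resistances):
    """Resistances in parallel combine by reciprocals:
    1/R = 1/R1 + 1/R2 + ..."""
    return 1.0 / sum(1.0 / r for r in resistances)
```

    For example, series(2, 3) gives 5 ohms while parallel(6, 3) gives 2 ohms, mirroring the straw intuition that parallel paths widen the channel while series paths lengthen it.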

  7. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example of the construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs

  8. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  9. Event monitoring of parallel computations

    Directory of Open Access Journals (Sweden)

    Gruzlikov Alexander M.

    2015-06-01

    Full Text Available The paper considers the monitoring of parallel computations for the detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences.

  10. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  11. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.
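    Of the three paradigms named, the level-synchronous one is the simplest to illustrate. A serial Python sketch of a level-synchronous BFS follows; this is not STAPL's API, only the frontier-at-a-time traversal pattern that such frameworks parallelize:

```python
def level_synchronous_bfs(adj, source):
    """Level-synchronous BFS: process the frontier one level at a time,
    so all vertices at distance d are settled before any at d + 1.
    `adj` maps each vertex to a list of neighbors; returns a dict of
    vertex -> distance from `source`."""
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        for u in frontier:            # in a parallel runtime, this loop
            for v in adj[u]:          # is distributed across workers
                if v not in dist:
                    dist[v] = level
                    next_frontier.append(v)
        frontier = next_frontier      # barrier between levels
    return dist
```

    The implicit barrier between levels is what makes the paradigm "synchronous"; asynchronous and coarse-grained variants relax it.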

  12. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  13. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, and shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  14. Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation

    Science.gov (United States)

    Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.

    1996-01-01

    We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.

  15. Parallel direct solver for finite element modeling of manufacturing processes

    DEFF Research Database (Denmark)

    Nielsen, Chris Valentin; Martins, P.A.F.

    2017-01-01

    The central processing unit (CPU) time is of paramount importance in finite element modeling of manufacturing processes. Because the most significant part of the CPU time is consumed in solving the main system of equations resulting from finite element assemblies, different approaches have been...

  16. Parallelized Seeded Region Growing Using CUDA

    Directory of Open Access Journals (Sweden)

    Seongjin Park

    2014-01-01

    Full Text Available This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, suggesting that it can substantially assist segmentation during massive CT screening tests.
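    For reference, the serial SRG baseline being parallelized can be sketched as a breadth-first flood fill. A minimal Python version follows, assuming a 4-connected neighborhood and a fixed intensity tolerance around the seed (the paper's CUDA kernel and its similarity criterion may differ):

```python
from collections import deque

def seeded_region_grow(image, seed, tol):
    """Serial seeded region growing: BFS from `seed`, absorbing
    4-connected pixels whose intensity differs from the seed pixel's
    by at most `tol`.  `image` is a list of rows of numbers;
    returns the grown region as a set of (row, col) tuples."""
    rows, cols = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - base) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

    The sequential frontier expansion is exactly the dependency that makes the running time proportional to the region size, which is what the CUDA formulation attacks.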

  17. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  18. Supply chain model with price- and trade credit-sensitive demand under two-level permissible delay in payments

    Science.gov (United States)

    Giri, B. C.; Maiti, T.

    2013-05-01

    This article develops a single-manufacturer and single-retailer supply chain model under two-level permissible delay in payments when the manufacturer follows a lot-for-lot policy in response to the retailer's demand. The manufacturer offers a trade credit period to the retailer with the contract that the retailer must share a fraction of the profit earned during the trade credit period. On the other hand, the retailer provides his customer a partial trade credit which is less than that of the manufacturer. The demand at the retailer is assumed to be dependent on the selling price and the trade credit period offered to the customers. The average net profit of the supply chain is derived and an algorithm for finding the optimal solution is developed. Numerical examples are given to demonstrate the coordination policy of the supply chain and examine the sensitivity of key model-parameters.

  19. Inverse problem for a two-level medium with an inhomogeneously broadened transition in the field of a periodic wave

    International Nuclear Information System (INIS)

    Zabolotskii, A.A.

    1995-01-01

    The inverse problem is considered for a spectral problem which is formally equivalent to a system of Bloch equations for an inhomogeneously broadened transition interacting with the electric field. Two cases are considered to demonstrate that, for any given frequency interval, one can determine a pulse whose shape corresponds to the interaction with only this frequency interval. In the general case, the pulse shape is described by a nonlinear periodic wave. The first example is the resonance interaction of light with a gas of two-level atoms. The second example is the interaction of linearly polarized light with the molecular J-J transition, where J ≫ 1. In the latter case, the role of inhomogeneous broadening belongs to the frequency shift induced by the applied magnetic field. 10 refs

  20. Quantum correlations between each two-level system in a pair of atoms and general coherent fields

    Directory of Open Access Journals (Sweden)

    S. Abdel-Khalek

    Full Text Available A quantitative description of the quantum correlations between each two-level system in a two-atom system and coherent fields initially defined in a coherent state in the framework of power-law potentials (PLPCSs) is considered. Specifically, we consider two atoms locally interacting with PLPCSs and, taking into account the different interaction terms, study the entanglement and quantum discord including time-dependent coupling and photon transition effects. Using the monogamic relation between the entanglement of formation and quantum discord in tripartite systems, we show that the control and preservation of the different kinds of quantum correlations greatly benefit from the combined choice of the physical quantities. Finally, we explore the link between the dynamical behavior of quantum correlations and the nonclassicality of the fields with and without the atomic motion effect. Keywords: Quantum correlations, Monogamic relation, Coherent states, Power-law potentials, Wehrl entropy

  1. Cascaded two-photon nonlinearity in a one-dimensional waveguide with multiple two-level emitters

    Science.gov (United States)

    Roy, Dibyendu

    2013-01-01

    We propose and theoretically investigate a model to realize cascaded optical nonlinearity with few atoms and photons in one dimension (1D). The optical nonlinearity in our system is mediated by resonant interactions of photons with two-level emitters, such as atoms or quantum dots, in a 1D photonic waveguide. Multi-photon transmission in the waveguide is nonreciprocal when the emitters have different transition energies. Our theory provides a clear physical understanding of the origin of nonreciprocity in the presence of cascaded nonlinearity. We show how various two-photon nonlinear effects, including spatial attraction and repulsion between photons and background fluorescence, can be tuned by changing the number of emitters and the coupling between emitters (controlled by the separation). PMID:23948782

  2. Temporal Bell-type inequalities for two-level Rydberg atoms coupled to a high-Q resonator

    International Nuclear Information System (INIS)

    Huelga, S.F.; Marshall, T.W.; Santos, E.

    1996-01-01

    Following the strategy of showing specific quantum effects by means of the violation of a classical inequality, a pair of Bell-type inequalities is derived on the basis of certain additional assumptions, whose plausibility is discussed in detail. Such inequalities are violated by the quantum mechanical predictions for the interaction of a two-level Rydberg atom with a single mode sustained by a high-Q resonator. The experimental conditions required in order to show the existence of forbidden values, according to a hidden variables formalism, in a real experiment are analyzed for various initial field statistics. In particular, the revival dynamics expected for the interaction with a coherent field leads to classically forbidden values, which would indicate a purely quantum effect. copyright 1996 The American Physical Society

  3. Dynamics of a trapped two-level and three-level atom interacting with classical electromagnetic field

    International Nuclear Information System (INIS)

    Ray, Aditi

    2004-01-01

    The dynamics of a two-level atom driven by a single laser beam and of a three-level atom (Lambda configuration) irradiated by two laser beams are studied, taking into account the quantized center-of-mass motion of the atom. It is shown that the trapped-atom system under an appropriate resonance condition exhibits large time-scale revivals when the index of the vibrational sideband responsible for the atomic electronic transition is greater than unity. The revival times are shown to depend on the initial number of vibrational excitations and the magnitude of the Lamb-Dicke parameter. Sub-Poissonian statistics in the vibrational quantum number is observed at certain time intervals. The minimum interaction time for which squeezed states of the motional quadrature are generated is found to decrease with increasing Lamb-Dicke parameter

  4. LCL filter design for three-phase two-level power factor correction using line impedance stabilization network

    DEFF Research Database (Denmark)

    Kouchaki, Alireza; Nymand, Morten

    2016-01-01

    This paper presents an LCL filter design method for three-phase two-level power factor correction (PFC) using a line impedance stabilization network (LISN). A straightforward LCL filter design accounting for variation in grid impedance is not simply achievable and inevitably leads to an iterative solution… for the filter. With the introduction of fast power switches such as silicon-carbide for PFC applications, the major current harmonics around the switching frequency drop into the region in which the LISN can actively provide a well-defined impedance for measuring the harmonics (i.e. 9 kHz-30 MHz). Therefore, the LISN can be replaced… is derived using the current ripple behavior of the converter-side inductor. The grid-side inductor is obtained as a function of the LISN impedance to fulfill the grid regulation. To verify the analyses, an LCL filter is designed for a 5 kW SiC-based PFC. The simulation and experimental results support the validity…

  5. Loschmidt echo of a two-level qubit coupled to nonuniform anisotropic XY chains in a transverse field

    International Nuclear Information System (INIS)

    Zhong Ming; Tong Peiqing

    2011-01-01

    The Loschmidt echo (LE) of a central two-level qubit coupled to nonuniform anisotropic XY chains in a transverse field is studied. A general formula for LE is derived, which we use to discuss the influence of the criticality of the environment on LE. It is found that for the periodic XY chain the behaviors of LE in the vicinity of the critical points are similar to those of the uniform case. It is different for the disordered transverse Ising chains. For the aperiodic chains, if the surrounding systems are bounded chains, the behaviors of LE are similar to those of the uniform case, while if the surrounding systems are unbounded chains, they are similar to those of the disordered case.

  6. The EPQ model under conditions of two levels of trade credit and limited storage capacity in supply chain management

    Science.gov (United States)

    Chung, Kun-Jen

    2013-09-01

    An inventory problem involves many factors influencing inventory decisions. The traditional economic production quantity (EPQ) model plays a rather important role in inventory analysis. Although traditional EPQ models are still widely used in industry, practitioners frequently question the validity of their assumptions, so their use encounters challenges and difficulties. This article therefore presents a new inventory model that considers two levels of trade credit, a finite replenishment rate and limited storage capacity together, relaxing the basic assumptions of the traditional EPQ model. Keeping in mind a cost-minimisation strategy, four easy-to-use theorems are developed to characterise the optimal solution. Finally, sensitivity analyses are executed to investigate the effects of the various parameters on the ordering policies and the annual total relevant costs of the inventory system.
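    For context, the classical EPQ baseline that the article relaxes (no trade credit, unlimited storage) has a closed form; a hedged Python sketch follows (the article's own model, characterised by its four theorems, is not reproduced here):

```python
import math

def eoq(demand, setup_cost, holding_cost):
    """Classical economic order quantity (instantaneous replenishment):
        Q* = sqrt(2 D K / h)."""
    return math.sqrt(2 * demand * setup_cost / holding_cost)

def epq(demand, setup_cost, holding_cost, production_rate):
    """Classical economic production quantity with finite replenishment
    rate P (requires P > D):
        Q* = sqrt(2 D K / (h (1 - D/P)))."""
    return math.sqrt(2 * demand * setup_cost
                     / (holding_cost * (1 - demand / production_rate)))
```

    As the production rate P grows without bound the EPQ lot size reduces to the EOQ, which is the sense in which the EPQ generalises the EOQ.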

  7. Dynamics of Landau-Zener transitions in a two-level system driven by a dissipative environment

    Science.gov (United States)

    Ateuafack, M. E.; Diffo, J. T.; Fai, L. C.

    2016-02-01

    The paper investigates the effects on a two-level quantum system of coupling to transversal and longitudinal dissipative environments. The time-dependent phase accumulation, LZ transition probability and entropy in the presence of fast-ohmic, sub-ohmic and super-ohmic quantum noise are derived. Analytical results are obtained in terms of temperature, dissipation strength, LZ parameter and bath cutoff frequency. The bath is observed to modify the standard occupation difference by a decaying random phase factor and also produces dephasing during the transfer of population. The dephasing characteristics, i.e. the initial non-zero decoherence rate, are observed to increase in time with the bath temperature and to depend on the system-bath coupling strength and cutoff frequency. These parameters are found to strongly affect the memory and thus tailor the coherence process of the system.
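    The LZ transition probability mentioned here has, in the noiseless limit, a standard closed form; a minimal Python sketch using the common convention H(t) = (vt/2)σz + Δσx (the bath-modified result derived in the paper is not reproduced):

```python
import math

def lz_diabatic_probability(coupling, sweep_rate, hbar=1.0):
    """Standard (noiseless) Landau-Zener formula for
        H(t) = [[v t / 2, Delta], [Delta, -v t / 2]]:
        P_diabatic = exp(-2 pi Delta^2 / (hbar v)),
    where `coupling` is Delta (half the minimum adiabatic gap) and
    `sweep_rate` is v = d(E1 - E2)/dt along the diabats, in units
    consistent with `hbar` (natural units hbar = 1 by default)."""
    return math.exp(-2 * math.pi * coupling ** 2 / (hbar * sweep_rate))
```

    Fast sweeps (large v) leave the system on its diabatic state with probability near one, while slow sweeps drive the adiabatic transition; the dissipative corrections studied in the paper modify this picture through the bath temperature and cutoff.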

  8. Dynamics of Landau–Zener transitions in a two-level system driven by a dissipative environment

    Energy Technology Data Exchange (ETDEWEB)

    Ateuafack, M.E., E-mail: esouamath@yahoo.fr [Mesoscopic and Multilayer Structures Laboratory, Department of Physics, Faculty of Science, University of Dschang (Cameroon); Diffo, J.T., E-mail: diffojaures@yahoo.com [Mesoscopic and Multilayer Structures Laboratory, Department of Physics, Faculty of Science, University of Dschang (Cameroon); Department of Physics, Higher Teachers' Training College, The University of Maroua, PO Box 55 Maroua (Cameroon); Fai, L.C., E-mail: corneliusfai@yahoo.fr [Mesoscopic and Multilayer Structures Laboratory, Department of Physics, Faculty of Science, University of Dschang (Cameroon)

    2016-02-15

    The paper investigates the effects on a two-level quantum system of coupling to transversal and longitudinal dissipative environments. The time-dependent phase accumulation, LZ transition probability and entropy in the presence of fast-ohmic, sub-ohmic and super-ohmic quantum noise are derived. Analytical results are obtained in terms of temperature, dissipation strength, LZ parameter and bath cutoff frequency. The bath is observed to modify the standard occupation difference by a decaying random phase factor and also produces dephasing during the transfer of population. The dephasing characteristics, i.e. the initial non-zero decoherence rate, are observed to increase in time with the bath temperature and to depend on the system-bath coupling strength and cutoff frequency. These parameters are found to strongly affect the memory and thus tailor the coherence process of the system.

  9. An acceleration of the characteristics by a space-angle two-level method using surface discontinuity factors

    Energy Technology Data Exchange (ETDEWEB)

    Grassi, G. [Commissariat a l' Energie Atomique, CEA de Saclay, DM2S/SERMA/LENR, 91191, Gif-sur-Yvette (France)

    2006-07-01

    We present a non-linear space-angle two-level acceleration scheme for the method of characteristics (MOC). With the fine level on which the MOC transport calculation is performed, we associate a more coarsely discretized phase space in which a low-order problem is solved as an acceleration step. Cross sections on the coarse level are obtained by a flux-volume homogenisation technique, which entails the non-linearity of the acceleration. Discontinuity factors per surface are introduced as additional degrees of freedom on the coarse level in order to ensure the equivalence of the heterogeneous and the homogenised problem. After each fine transport iteration, a low-order transport problem is iteratively solved on the homogenised grid. The solution of this problem is then used to correct the angular moments of the flux resulting from the previous free transport sweep. Numerical tests for a given benchmark have been performed. Results are discussed. (authors)

  10. An acceleration of the characteristics by a space-angle two-level method using surface discontinuity factors

    International Nuclear Information System (INIS)

    Grassi, G.

    2006-01-01

    We present a non-linear space-angle two-level acceleration scheme for the method of characteristics (MOC). With the fine level on which the MOC transport calculation is performed, we associate a more coarsely discretized phase space in which a low-order problem is solved as an acceleration step. Cross sections on the coarse level are obtained by a flux-volume homogenisation technique, which entails the non-linearity of the acceleration. Discontinuity factors per surface are introduced as additional degrees of freedom on the coarse level in order to ensure the equivalence of the heterogeneous and the homogenised problem. After each fine transport iteration, a low-order transport problem is iteratively solved on the homogenised grid. The solution of this problem is then used to correct the angular moments of the flux resulting from the previous free transport sweep. Numerical tests for a given benchmark have been performed. Results are discussed. (authors)

  11. A Two-level-games Analysis of AFTA Agreements: What Caused ASEAN States to Move towards Economic Integration?

    Directory of Open Access Journals (Sweden)

    Yi-hung Chiou

    2010-04-01

    Full Text Available The goal of this article is to investigate the conditions under which ASEAN states are more likely to pursue regional economic integration, namely, a series of ASEAN Free Trade Area (AFTA agreements/ protocols. Adopting Putnam’s two-level-games model, this article examines the influences of domestic politics, political elites’ preferences, economic performance, and external impacts. Through the construction of a set of hypotheses, this article investigates five AFTA agreements/ protocols and the conditions of ASEAN states during the 1992–2003 period. The findings indicate that political leaders’ preferences have played a pivotal role in the development of the AFTA. Economic performance and domestic support in individual states has also affected the AFTA. The close link between AFTA agreements and external impacts reveals that the AFTA’s inherent nature is defensive.

  12. Design of a Two-level Adaptive Multi-Agent System for Malaria Vectors driven by an ontology

    Directory of Open Access Journals (Sweden)

    Etang Josiane

    2007-07-01

    Full Text Available Abstract Background Understanding the heterogeneities in disease transmission dynamics as far as malaria vectors are concerned is a big challenge. Many studies tackling this problem do not find exact models to explain malaria vector propagation. Methods To solve the problem we define an Adaptive Multi-Agent System (AMAS), which is elastic and is a two-level system as well. This AMAS is a dynamic system whose two levels are linked by an ontology, allowing it to function both as a reduced system and as an extended system. At the primary level, the AMAS comprises organization agents; at the secondary level, it consists of analysis agents. Its entry point, a User Interface Agent, can reproduce itself, because it is given a minimum of background knowledge and learns appropriate "behavior" from the user in the presence of ambiguous queries and from other agents of the AMAS in other situations. Results Some of the outputs of our system are tables and diagrams showing factors such as the entomological parameters of malaria transmission, the percentages of malaria transmission per malaria vector and the entomological inoculation rate. Many other parameters can be produced by the system depending on the input data. Conclusion Our approach is an intelligent one, differing from the statistical approaches sometimes used in the field, and aligns itself with distributed artificial intelligence. In terms of the fight against malaria, our system offers the opportunity to reduce the effort of human resources, who are not obliged to cover the entire territory while conducting surveys. Secondly, the AMAS can determine the presence or absence of malaria vectors even when specific data have not been collected in the geographical area. Unlike with a statistical technique, the projection of the results in the field can in our case sometimes be more general.

  13. A preventive maintenance model with a two-level inspection policy based on a three-stage failure process

    International Nuclear Information System (INIS)

    Wang, Wenbin; Zhao, Fei; Peng, Rui

    2014-01-01

    Inspection is always an important preventive maintenance (PM) activity and can have different depths and cover all or part of a plant system. This paper introduces a two-level inspection policy model for a single-component plant system based on a three-stage failure process. Such a failure process divides the system's life into three stages: the good, minor defective and severe defective stages. The first level of inspection, the minor inspection, can identify the minor defective stage only with a certain probability, but can always reveal the severe defective stage. The major inspection can, however, identify both defective stages perfectly. Once the system is found to be in the minor defective stage, a shortened inspection interval is adopted. If, however, the system is found to be in the severe defective stage, the maintenance action may be delayed if the time to the next planned PM window is less than a threshold level; otherwise, the system is replaced immediately. This corresponds to a maintenance policy widely adopted in practice, namely periodic inspections with planned PMs. A numerical example is presented to demonstrate the proposed model by comparison with other models. - Highlights: • The system's deterioration goes through a three-stage process, namely normal, minor defective and severe defective. • Two levels of inspection are proposed, i.e., minor and major inspections. • Once the minor defective stage is found, instead of taking a maintenance action, a shortened inspection interval is recommended. • When the severe defective stage is found, maintenance is delayed according to the threshold to the next PM. • The decision variables are the inspection intervals and the threshold to PM
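    The inspection logic above can be illustrated with a small Monte Carlo sketch. This is not the authors' model: it keeps only the minor-level inspection with probabilistic detection of the minor stage and a shortened interval after detection, uses exponential stage durations, and every rate, probability and interval below is a hypothetical placeholder.

```python
import random

def simulate_cycle(t_minor=1.0, t_short=0.5, p_detect_minor=0.7,
                   mean_good=10.0, mean_minor=4.0, mean_severe=2.0, rng=random):
    """Simulate one cycle of a two-level inspection policy over a three-stage
    (good -> minor defective -> severe defective -> failure) process.
    Returns the time at which the severe defective stage is first identified,
    or the failure time if the system fails before any inspection catches it."""
    x = rng.expovariate(1.0 / mean_good)        # end of the good stage
    y = x + rng.expovariate(1.0 / mean_minor)   # end of the minor defective stage
    z = y + rng.expovariate(1.0 / mean_severe)  # failure time
    t, interval = 0.0, t_minor
    while True:
        t += interval
        if t >= z:                 # failed before an inspection caught it
            return z
        if t >= y:                 # severe stage: any inspection reveals it
            return t
        if t >= x and rng.random() < p_detect_minor:
            interval = t_short     # minor defect found: shorten the interval
        # otherwise keep inspecting at the current interval

random.seed(1)
times = [simulate_cycle() for _ in range(10000)]
avg_detect = sum(times) / len(times)
print(round(avg_detect, 2))
```

A full version would add the perfect major inspection at a longer period and the PM-window threshold rule, and would optimize the intervals against the resulting cost rate.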

  14. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  15. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding times and the effectiveness of parallelization.
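    The parallelism in fractal coding comes from the fact that the search for the best-matching domain block is independent for each range block. A minimal sketch (not the paper's implementation) that farms that search out to a pool of workers, using a toy 1-D "image"; a real coder works on 2-D blocks with decimated domains and affine contrast/brightness transforms:

```python
from concurrent.futures import ThreadPoolExecutor

def best_domain(range_block, domains):
    """Brute-force search: index of the domain block minimizing squared error."""
    def err(d):
        return sum((r - x) ** 2 for r, x in zip(range_block, d))
    return min(range(len(domains)), key=lambda i: err(domains[i]))

# toy data: 1-D "blocks" of 4 pixels each (hypothetical values)
ranges = [[10, 12, 11, 13], [200, 198, 201, 199], [50, 52, 49, 51]]
domains = [[9, 11, 10, 12], [199, 199, 200, 200], [48, 51, 50, 52], [0, 0, 0, 0]]

# each range block is coded independently, so the map parallelizes trivially
with ThreadPoolExecutor(max_workers=4) as pool:
    code = list(pool.map(lambda r: best_domain(r, domains), ranges))
print(code)  # [0, 1, 2]: best-matching domain index per range block
```

Threads are used here only for brevity; a CPU-bound coder in Python would use processes, and the point is simply that the per-block searches share no state.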

  16. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel at higher throughput, and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  17. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  18. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
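    One naturally parallel component of the Davidon-Fletcher-Powell method is the gradient estimate: the finite-difference function evaluations are mutually independent, so they can be farmed out to separate processors (transputers, in the paper's setting). A minimal sketch, not the paper's code, using a thread pool and the standard Rosenbrock test function; threads are shown only for brevity, since pure-Python CPU work would need processes to gain real speed:

```python
from concurrent.futures import ThreadPoolExecutor

def rosenbrock(x):
    """Classic unconstrained test function with minimum at (1, ..., 1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def parallel_gradient(f, x, h=1e-6, workers=4):
    """Forward-difference gradient with one independent task per component."""
    fx = f(x)
    def partial(i):
        xi = list(x)
        xi[i] += h
        return (f(xi) - fx) / h
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(partial, range(len(x))))

g = parallel_gradient(rosenbrock, [1.0, 1.0])
print([round(v, 3) for v in g])  # ~[0, 0] at the minimum
```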

  19. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  20. A development framework for parallel CFD applications: TRIOU project

    International Nuclear Information System (INIS)

    Calvin, Ch.

    2003-01-01

    We present in this paper the parallel structure of a thermal-hydraulic framework, Trio-U. This development platform has been designed in order to solve large 3-dimensional structured or unstructured CFD (computational fluid dynamics) problems. The code is intrinsically parallel, and an object-oriented design, UML, is used. The implementation language chosen is C++. All the parallelism management and the communication routines have been encapsulated. Parallel I/O and communication classes over standard I/O streams of C++ have been defined, which allows the developer an easy use of the different modules of the application without dealing with basic parallel process management and communications. Moreover, the encapsulation of the communication routines guarantees the portability of the application and allows an efficient tuning of basic communication methods in order to achieve the best performance on the target architecture. The speed-ups of parallel applications designed using the Trio-U framework are very good: we obtained, for instance, an efficiency of up to 90% on 20 processors for complex turbulent-flow Large Eddy Simulation (LES) computations. The efficiencies obtained on direct numerical simulations of two-phase flows are similar, since the speed-up is nearly equal to 7.5 for a 3-dimensional simulation using a one-million-element mesh on 8 processors. The purpose of this paper is to focus on the main concepts, and their implementation, that were the guidelines of the design of the parallel architecture of the code. (author)
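    The two reported figures are consistent under the usual definition of parallel efficiency, E = S/p (speed-up divided by processor count):

```python
def efficiency(speedup, procs):
    """Parallel efficiency: achieved speed-up divided by processor count."""
    return speedup / procs

# LES case: 90% efficiency on 20 processors corresponds to a speed-up of ~18
les_speedup = 0.90 * 20
# two-phase DNS case: a speed-up of 7.5 on 8 processors is ~94% efficiency
dns_eff = efficiency(7.5, 8)
print(round(les_speedup, 1), round(dns_eff, 3))  # 18.0 0.938
```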

  1. Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python

    Science.gov (United States)

    Laura, Jason R.; Rey, Sergio J.

    2017-01-01

    Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyber infrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Monte Carlo Markov chain simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.

  2. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  3. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
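    The contiguous-mapping problem behind these complexity bounds can be illustrated with the straightforward dynamic program that minimizes the bottleneck (maximum per-processor load). This simple version runs in O(nm²) time; the paper's improved algorithm reaches O(nm log m) with more careful machinery:

```python
def min_bottleneck_mapping(weights, n):
    """Map m ordered pipeline modules onto n processors as contiguous groups,
    minimizing the maximum per-processor load (the bottleneck).
    Straightforward O(n*m^2) dynamic program."""
    m = len(weights)
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    INF = float("inf")
    # dp[k][j]: best bottleneck for the first j modules on k processors
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for k in range(1, n + 1):
        for j in range(1, m + 1):
            for i in range(j):  # modules i+1..j go on processor k
                load = prefix[j] - prefix[i]
                dp[k][j] = min(dp[k][j], max(dp[k - 1][i], load))
    return dp[n][m]

# hypothetical module weights: best split is [2,3,1] | [4,2], bottleneck 6
print(min_bottleneck_mapping([2, 3, 1, 4, 2], 2))  # 6
```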

  4. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  5. Comparison of the effects of growth hormone on acylated ghrelin and following acute intermittent exercise in two levels of obesity

    Directory of Open Access Journals (Sweden)

    Majid Gholipour

    2013-08-01

    Full Text Available Background: The prevalence of obesity has risen enormously over the past few decades. Both food intake (appetite) and energy expenditure can influence body weight. Acylated ghrelin enhances appetite, and its plasma level is suppressed by growth hormone. The present study examines the effects of intermittent exercise with progressive intensities on acylated ghrelin, appetite, and growth hormone in inactive male students with two levels of obesity. Methods: Eleven inactive males were allocated to two groups on the basis of their body mass index (BMI). Six subjects in group one, BMI = 31.18±0.92 kg/m2, and five subjects in group two, BMI = 36.94±2.25 kg/m2, ran on the treadmill at progressive intensities of 50, 60, 70 and 80% of VO2max for 10, 10, 5, and 2 min respectively. Blood samples were collected before the exercise (as the resting values), after each workload (during the exercise), and at 30, 60, and 120 min (during recovery). Results: Plasma acylated ghrelin concentrations and hunger ratings in the two groups decreased and remained significantly lower than resting values (P=0.008 and P=0.002 respectively) at the end of the trial, and there were no significant differences between groups. Growth hormone levels in the two groups increased and remained significantly higher than resting values (group one P=0.012, group two P=0.005) at the end of the trial, and there were no significant differences between groups. In addition, there were no significant differences between the area under the curve (AUC) values over the total period for acylated ghrelin, hunger ratings, and growth hormone in the two groups. Conclusion: These findings indicate that individuals with two levels of obesity have the same response to the different intensities of treadmill running and during the two-hour recovery period thereafter, which can be considered when designing a more effective weight-loss training program.

  6. Parallelization of a numerical simulation code for isotropic turbulence

    International Nuclear Information System (INIS)

    Sato, Shigeru; Yokokawa, Mitsuo; Watanabe, Tadashi; Kaburaki, Hideo.

    1996-03-01

    A parallel pseudospectral code which solves the three-dimensional Navier-Stokes equation by direct numerical simulation is developed, and its execution time, parallelization efficiency, load balance and scalability are evaluated. A vector-parallel supercomputer, the Fujitsu VPP500 with up to 16 processors, is used for this calculation with Fourier modes up to 256x256x256. Good scalability with the number of processors is achieved when the number of Fourier modes is fixed. For small numbers of Fourier modes, the calculation time of the program is proportional to N log N, which is the ideal computational complexity for a 3D FFT on vector-parallel processors. It is found that the calculation performance decreases as the number of Fourier modes increases. (author)
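    The core operation of a pseudospectral solver, differentiation in Fourier space, can be sketched in one dimension (the code in the record uses 3-D transforms of up to 256³ modes); this toy version is illustrative only:

```python
import numpy as np

# spectral derivative on a periodic grid: transform, multiply by i*k, invert
n = 64
x = 2 * np.pi * np.arange(n) / n
u = np.sin(3 * x)
k = np.fft.fftfreq(n, d=1.0 / n) * 1j          # wavenumbers times i
dudx = np.real(np.fft.ifft(k * np.fft.fft(u)))
err = np.max(np.abs(dudx - 3 * np.cos(3 * x)))
print(err < 1e-10)  # True: spectral accuracy for a band-limited field
```

In the parallel 3-D code the transforms dominate the cost, which is why the ideal run time scales as N log N and why transposes between processors limit scalability as the mode count grows.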

  7. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    Science.gov (United States)

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto-calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT's image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions. PMID:22345529
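    The per-iteration workhorse of such reconstructions is the soft-thresholding (shrinkage) operator, the proximal operator of the ℓ1 norm. A minimal complex-valued version is shown below for illustration; it is not the authors' code, which applies the operator to wavelet coefficients jointly across coil channels:

```python
def soft_threshold(x, lam):
    """Complex soft-thresholding: shrink the magnitude by lam, keep the phase.
    This is the proximal operator of the l1 norm used in iterative
    soft-thresholding reconstructions."""
    mag = abs(x)
    if mag <= lam:
        return 0j
    return x * (1 - lam / mag)

print(soft_threshold(3 + 4j, 1.0))  # magnitude 5 shrinks to 4, phase preserved
print(soft_threshold(0.5 + 0j, 1.0))  # small coefficients are zeroed
```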

  8. Controlled ultrafast transfer and stability degree of generalized coherent states of a kicked two-level ion

    Science.gov (United States)

    Chen, Hao; Kong, Chao; Hai, Wenhua

    2018-06-01

    We investigate quantum dynamics of a two-level ion trapped in the Lamb-Dicke regime of a δ -kicked optical lattice, based on the exact generalized coherent states rotated by a π / 2 pulse of Ramsey type experiment. The spatiotemporal evolutions of the spin-motion entangled states in different parameter regions are illustrated, and the parameter regions of different degrees of quantum stability described by the quantum fidelity are found. Time evolutions of the probability for the ion being in different pseudospin states reveal that the ultrafast entanglement generation and population transfers of the system can be analytically controlled by managing the laser pulses. The probability in an initially disentangled state shows periodic collapses (entanglement) and revivals (de-entanglement). Reduction of the stability degree results in enlarging the period of de-entanglement, while the instability and potential chaos will cause the sustained entanglement. The results could be justified experimentally in the existing setups and may be useful in engineering quantum dynamics for quantum information processing.

  9. A two-level discount model for coordinating a decentralized supply chain considering stochastic price-sensitive demand

    Science.gov (United States)

    Heydari, Jafar; Norouzinasab, Yousef

    2015-12-01

    In this paper, a discount model is proposed to coordinate pricing and ordering decisions in a two-echelon supply chain (SC). Demand is stochastic and price-sensitive, while lead times are fixed. Decentralized decision making, where the downstream member decides on selling price and order size, is investigated. Then, joint pricing and ordering decisions are derived, where both members act as a single entity aiming to maximize the whole SC profit. Finally, a coordination mechanism based on quantity discounts is proposed to coordinate both pricing and ordering decisions simultaneously. The proposed two-level discount policy can be characterized from two aspects: (1) the marketing viewpoint: a retail price discount to increase the demand, and (2) the operations management viewpoint: a wholesale price discount to induce the retailer to adjust its order quantity and selling price jointly. Results of numerical experiments demonstrate that the proposed policy is suitable for coordinating the SC and improves the profitability of the SC as well as of all SC members in comparison with decentralized decision making.
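    The inefficiency that the wholesale-price discount is designed to remove can be seen in a toy deterministic version of the problem (double marginalization). All numbers below are hypothetical, and demand is linear rather than stochastic as in the paper:

```python
def demand(p, a=100.0, b=2.0):
    """Linear price-sensitive demand (hypothetical parameters)."""
    return a - b * p

def chain_profit(p, c=10.0):
    """Whole-chain profit at retail price p with unit production cost c."""
    return (p - c) * demand(p)

def best_price(profit, grid):
    return max(grid, key=profit)

grid = [cents / 100 for cents in range(1000, 5001)]  # candidate prices 10..50

# centralized: a single decision maker prices at the chain optimum
p_star = best_price(chain_profit, grid)

# decentralized: at wholesale price w the retailer keeps only its own margin,
# so it prices higher and the chain as a whole earns less
w = 30.0
p_dec = best_price(lambda p: (p - w) * demand(p), grid)

print(p_star, p_dec)                                   # 30.0 40.0
print(chain_profit(p_star), chain_profit(p_dec))       # 800.0 600.0
```

A coordinating discount schedule lowers the effective wholesale price as the retailer moves its price and order toward the centralized solution, so the recovered profit (200 here) can be split between the members.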

  10. Statistical Analysis of the Figure of Merit of a Two-Level Thermoelectric System: A Random Matrix Approach

    KAUST Repository

    Abbout, Adel

    2016-08-05

    Using the tools of random matrix theory we develop a statistical analysis of the transport properties of thermoelectric low-dimensional systems made of two electron reservoirs set at different temperatures and chemical potentials, and connected through a low-density-of-states two-level quantum dot that acts as a conducting chaotic cavity. Our exact treatment of the chaotic behavior in such devices relies on the scattering matrix formalism and yields analytical expressions for the joint probability distribution functions of the Seebeck coefficient and the transmission profile, as well as the marginal distributions, at arbitrary Fermi energy. The scattering matrices belong to circular ensembles which we sample to numerically compute the transmission function, the Seebeck coefficient, and their relationship. The exact transport coefficient probability distributions are found to be highly non-Gaussian for small numbers of conduction modes, and the analytical and numerical results are in excellent agreement. The system performance is also studied, and we find that the optimum performance is obtained for half-transparent quantum dots; further, this optimum may be enhanced for systems with few conduction modes.
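    The circular-ensemble sampling step can be explored numerically in the simplest single-mode case, where for Haar-random (CUE) 2x2 scattering matrices the transmission T = |S₁₂|² is known to be uniformly distributed on [0, 1]. A sketch (not the authors' code) using the standard QR-based Haar construction:

```python
import numpy as np

def random_unitary(n, rng):
    """Haar-distributed (CUE) unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))  # fix column phases to get the Haar measure

rng = np.random.default_rng(0)
# one conduction mode per lead: 2x2 scattering matrix, transmission |S_12|^2
T = np.array([abs(random_unitary(2, rng)[0, 1]) ** 2 for _ in range(20000)])
print(round(T.mean(), 2))  # uniform P(T) for one mode, so the mean is ~0.5
```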

  11. Optomechanically induced transparency in multi-cavity optomechanical system with and without one two-level atom.

    Science.gov (United States)

    Sohail, Amjad; Zhang, Yang; Zhang, Jun; Yu, Chang-Shui

    2016-06-28

    We analytically study the optomechanically induced transparency (OMIT) in an N-cavity system with the Nth cavity driven by the pump and probe laser fields and the 1st cavity coupled to a mechanical oscillator. We also consider that one atom may be trapped in the ith cavity. Rather than only illustrating OMIT in such a system, we are interested in how the number of OMIT windows is influenced by the cavities and the atom, and in what roles the atom can play in different cavities. In the resolved-sideband regime, we find that the number of cavities precisely determines the maximal number of OMIT windows. Interestingly, when the two-level atom is trapped in an even-labeled cavity, the central absorptive peak (odd N) or dip (even N) is split and forms an extra OMIT window, but if the atom is trapped in an odd-labeled cavity, the central absorptive peak (odd N) or dip (even N) is only broadened and thus changes the width of the OMIT windows rather than inducing an extra window.

  12. Cavity quantum electrodynamics using a near-resonance two-level system: Emergence of the Glauber state

    Energy Technology Data Exchange (ETDEWEB)

    Sarabi, B.; Ramanayaka, A. N. [Laboratory for Physical Sciences, College Park, Maryland 20740 (United States); Department of Physics, University of Maryland, College Park, Maryland 20742 (United States); Burin, A. L. [Department of Chemistry, Tulane University, New Orleans, Louisiana 70118 (United States); Wellstood, F. C. [Department of Physics, University of Maryland, College Park, Maryland 20742 (United States); Joint Quantum Institute, University of Maryland, College Park, Maryland 20742 (United States); Osborn, K. D. [Laboratory for Physical Sciences, College Park, Maryland 20740 (United States); Joint Quantum Institute, University of Maryland, College Park, Maryland 20742 (United States)

    2015-04-27

    Random tunneling two-level systems (TLSs) in dielectrics have been of interest recently because they adversely affect the performance of superconducting qubits. The coupling of TLSs to qubits has allowed individual TLS characterization, which has previously been limited to TLSs within (thin) Josephson tunneling barriers made from aluminum oxide. Here, we report on the measurement of an individual TLS within the capacitor of a lumped-element LC microwave resonator, which forms a cavity quantum electrodynamics (CQED) system and allows for individual TLS characterization in a different structure and material than demonstrated with qubits. Due to the reduced volume of the dielectric (80 μm{sup 3}), even with a moderate dielectric thickness (250 nm), we achieve the strong coupling regime as evidenced by the vacuum Rabi splitting observed in the cavity spectrum. A TLS with a coherence time of 3.2 μs was observed in a film of silicon nitride as analyzed with a Jaynes-Cummings spectral model, which is larger than seen from superconducting qubits. As the drive power is increased, we observe an unusual but explicable set of continuous and discrete crossovers from the vacuum Rabi split transitions to the Glauber (coherent) state.

  13. Statistical Analysis of the Figure of Merit of a Two-Level Thermoelectric System: A Random Matrix Approach

    KAUST Repository

    Abbout, Adel; Ouerdane, Henni; Goupil, Christophe

    2016-01-01

    Using the tools of random matrix theory we develop a statistical analysis of the transport properties of thermoelectric low-dimensional systems made of two electron reservoirs set at different temperatures and chemical potentials, and connected through a low-density-of-states two-level quantum dot that acts as a conducting chaotic cavity. Our exact treatment of the chaotic behavior in such devices relies on the scattering matrix formalism and yields analytical expressions for the joint probability distribution functions of the Seebeck coefficient and the transmission profile, as well as the marginal distributions, at arbitrary Fermi energy. The scattering matrices belong to circular ensembles which we sample to numerically compute the transmission function, the Seebeck coefficient, and their relationship. The exact transport coefficient probability distributions are found to be highly non-Gaussian for small numbers of conduction modes, and the analytical and numerical results are in excellent agreement. The system performance is also studied, and we find that the optimum performance is obtained for half-transparent quantum dots; further, this optimum may be enhanced for systems with few conduction modes.

  14. Quantum phase transition in a coupled two-level system embedded in anisotropic three-dimensional photonic crystals.

    Science.gov (United States)

    Shen, H Z; Shao, X Q; Wang, G C; Zhao, X L; Yi, X X

    2016-01-01

    The quantum phase transition (QPT) describes a sudden qualitative change of the macroscopic properties mapped from the eigenspectrum of a quantum many-body system. It has been studied intensively in quantum systems with the spin-boson model, but it has barely been explored for systems in coupled spin-boson models. In this paper, we study the QPT with coupled spin-boson models consisting of coupled two-level atoms embedded in three-dimensional anisotropic photonic crystals. The dynamics of the system is derived exactly by means of the Laplace transform method, which has been proven to be equivalent to the dissipationless non-Markovian dynamics. Drawing on methods for analyzing the ground state, we obtain the phase diagrams through two exact critical equations, and two QPTs are found: one from the phase without a bound state to the phase with one bound state, and another from a phase in which the bound state has one eigenvalue to a phase in which it has two eigenvalues. Our analytical results also suggest a way to overcome the effect of decoherence by engineering the spectrum of the reservoirs to approach the non-Markovian regime and to form the bound state of the whole system, for quantum devices and quantum statistics.

  15. Phase-controlled all-optical switching based on coherent population oscillation in a two-level system

    International Nuclear Information System (INIS)

    Liao, Ping; Yu, Song; Luo, Bin; Shen, Jing; Gu, Wanyi; Guo, Hong

    2011-01-01

    We theoretically propose a scheme of phase-controlled all-optical switching due to the effect of degenerate four-wave mixing (FWM) and coherent population oscillation (CPO) in a two-level system driven by a strong coupling field and two weak symmetrically detuned fields. The results show that the phase of the FWM field can be utilized to switch between constructive and destructive interference, which can lead to the transmission or attenuation of the probe field and thus switch the field on or off. We also find the intensity of the coupling field and the propagation distance have great influence on the performance of the switching. In our scheme, due to the quick response in semiconductor systems, a fast all-optical switching can be realized at low light level. -- Highlights: ► We study a new all-optical switching based on coherent population oscillation. ► The phase of the FWM field can be utilized to switch the probe field on or off. ► A fast and low-light-level switching can be realized in semiconductors.

  16. Quantum driving of a two level system: quantum speed limit and superadiabatic protocols – an experimental investigation

    International Nuclear Information System (INIS)

    Malossi, N; Arimondo, E; Ciampini, D; Mannella, R; Bason, M G; Viteau, M; Morsch, O

    2013-01-01

    A fundamental requirement in quantum information processing and in many other areas of science is the capability of precisely controlling a quantum system by preparing a quantum state with the highest fidelity and/or in the fastest possible way. Here we present an experimental investigation of a two level system, characterized by a time-dependent Landau-Zener Hamiltonian, aiming to test general and optimal high-fidelity control protocols. The experiment is based on a Bose-Einstein condensate (BEC) loaded into an optical lattice, then accelerated, which provides a high degree of control over the experimental parameters. We implement generalized Landau-Zener sweeps, comparing them with the well-known linear Landau-Zener sweep. We drive the system from an initial state to a final state with fidelity close to unity in the shortest possible time (quantum brachistochrone), thus reaching the ultimate speed limit imposed by quantum mechanics. On the opposite extreme of the quantum control spectrum, the aim is not to minimize the total transition time but to maximize the adiabaticity during the time-evolution, the system being constrained to the adiabatic ground state at any time. We implement such transitionless superadiabatic protocols by an appropriate transformation of the Hamiltonian parameters. This transformation is general and independent of the physical system.
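    A linear Landau-Zener sweep is simple enough to check numerically against the closed-form transition probability P = exp(-πΔ²/2v) for H(t) = (vt/2)σz + (Δ/2)σx with ħ = 1 (diabatic slope v, gap Δ). A self-contained RK4 integration sketch with hypothetical parameter values, illustrative only and unrelated to the BEC implementation in the record:

```python
import math

def lz_numeric(delta, v, T=60.0, dt=0.001):
    """Integrate i d/dt (a, b) = H(t) (a, b) with
    H(t) = [[v*t/2, delta/2], [delta/2, -v*t/2]],
    starting in the lower diabatic state at t = -T. Returns the
    probability of remaining in that diabatic state at t = +T."""
    a, b = 1.0 + 0j, 0.0 + 0j
    t = -T

    def deriv(t, a, b):
        return (-1j * (v * t / 2 * a + delta / 2 * b),
                -1j * (delta / 2 * a - v * t / 2 * b))

    for _ in range(int(2 * T / dt)):
        # classic fourth-order Runge-Kutta step for the two amplitudes
        k1a, k1b = deriv(t, a, b)
        k2a, k2b = deriv(t + dt / 2, a + dt / 2 * k1a, b + dt / 2 * k1b)
        k3a, k3b = deriv(t + dt / 2, a + dt / 2 * k2a, b + dt / 2 * k2b)
        k4a, k4b = deriv(t + dt, a + dt * k3a, b + dt * k3b)
        a += dt / 6 * (k1a + 2 * k2a + 2 * k3a + k4a)
        b += dt / 6 * (k1b + 2 * k2b + 2 * k3b + k4b)
        t += dt
    return abs(a) ** 2

delta, v = 1.0, 2.0  # hypothetical gap and sweep rate
p_formula = math.exp(-math.pi * delta ** 2 / (2 * v))
p_num = lz_numeric(delta, v)
print(round(p_formula, 3), round(p_num, 3))  # the two values agree closely
```

The generalized (nonlinear) sweeps studied in the record replace v*t by a shaped function of time; the same integrator applies with deriv modified accordingly.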

  17. Dissipative two-level system under strong ac driving: A combination of Floquet and Van Vleck perturbation theory

    International Nuclear Information System (INIS)

    Hausinger, Johannes; Grifoni, Milena

    2010-01-01

    We study the dissipative dynamics of a two-level system (TLS) exposed to strong ac driving. By combining Floquet theory with Van Vleck perturbation theory in the TLS tunneling matrix element, we diagonalize the time-dependent Hamiltonian and provide corrections to the renormalized Rabi frequency of the TLS, which are valid for both a biased and an unbiased TLS and go beyond the known high-frequency and rotating-wave results. In order to mimic environmental influences on the TLS, we couple the system weakly to a thermal bath and solve analytically the corresponding Floquet-Bloch-Redfield master equation. We give a closed expression for the relaxation and dephasing rates of the TLS and discuss their behavior under variation of the driving amplitude. Further, we examine the robustness of coherent destruction of tunneling (CDT) and driving-induced tunneling oscillations (DITO). We show that even for a moderate driving frequency an almost complete suppression of tunneling can be achieved for short times, and we demonstrate the sensitivity of DITO to changes in the external parameters.
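    For orientation, the renormalized Rabi frequency that the Van Vleck corrections improve upon reduces, at weak driving, to the textbook rotating-wave result. The snippet below is only that baseline formula, not the paper's Floquet-Van Vleck expansion; the parameter values are illustrative.

    ```python
    import math

    def rabi_excited_population(t, omega, delta):
        """Textbook rotating-wave-approximation result for a driven TLS:
        excited-state population at time t for Rabi frequency omega and
        detuning delta (hbar = 1).  The cited Floquet/Van Vleck treatment
        supplies corrections beyond this formula at strong driving."""
        omega_r = math.sqrt(omega ** 2 + delta ** 2)   # generalized Rabi frequency
        return (omega / omega_r) ** 2 * math.sin(omega_r * t / 2) ** 2

    # On resonance (delta = 0) a pi-pulse of duration t = pi/omega inverts the TLS:
    p_inv = rabi_excited_population(math.pi / 1.0, omega=1.0, delta=0.0)
    ```

    Off resonance the oscillation amplitude is reduced by the factor Ω²/(Ω² + δ²), which is why the generalized Rabi frequency matters for strong, detuned driving.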

  18. Demand response strategy management with active and reactive power incentive in the smart grid: a two-level optimization approach

    Directory of Open Access Journals (Sweden)

    Ryuto Shigenobu

    2017-05-01

    Full Text Available High penetration of distributed generators (DGs) using renewable energy sources (RESs) raises important issues in the operation of modern power systems. The output power of RESs fluctuates steeply and carries weather-related uncertainty, causing voltage deviation and reverse power flow. Several methods have been proposed for solving these problems; fundamentally, they involve reactive power control for voltage deviation and/or installation of a large battery energy storage system (BESS) at the interconnection point for reverse power flow. In order to reduce the installation cost of static var compensators (SVCs), the distribution company (DisCo) gives a reactive power incentive to cooperating customers. On the other hand, photovoltaic (PV) generators, energy storage and electric vehicles (EVs) are introduced on the customer side with the aim of achieving zero net energy homes (ZEHs). This paper proposes not only reactive power control but also active power flow control using household BESSs and EVs. Moreover, an incentive method is proposed to promote customer participation in the control operation. The demand response (DR) system is verified with several DR menus. To create profit for both DisCo and customers, a two-level optimization approach is executed in this research. Mathematical modeling of price elasticity and detailed simulations are carried out in a case study. The effectiveness of the proposed incentive menu is demonstrated using a heuristic optimization method.

  19. Effects of phase memory in probe-field spectroscopy of a two-level system at low collision frequencies

    International Nuclear Information System (INIS)

    Parkhomenko, A.I.; Shalagin, A.M.

    2006-01-01

    The absorption (amplification) spectrum of a weak probe field is studied theoretically for two-level atoms moving in a strong resonant laser field and colliding with buffer-gas atoms. The analysis is performed for collision frequencies that are small compared with the Doppler width of the absorption line (low gas pressure), allowing for an arbitrary change of the radiation-induced dipole moment phase in elastic collisions of the gas particles. Phase-memory effects are found to produce a very strong quantitative and qualitative transformation of the probe-field spectrum even for infrequent collisions, when the well-known Dicke mechanism for the manifestation of phase-memory effects (elimination of Doppler broadening through collisional confinement of the particles' spatial motion) does not operate. The strong influence of phase-memory effects on spectral resonances at low gas pressure arises because phase-preserving collisions change the velocity dependence of the partial refractive index n(v) (the refractive index for particles moving with velocity v).

  20. Ordering policies of a deteriorating item in an EOQ model with backorder under two-level partial trade credit

    Science.gov (United States)

    Molamohamadi, Zohreh; Arshizadeh, Rahman; Ismail, Napsiah

    2015-05-01

    In the classical inventory model, it is assumed that the retailer must settle the accounts of purchased items as soon as they are received. In practice, however, the supplier usually offers the retailer a full or partial delay period for paying the purchasing costs. In the partial trade credit contract, which is mostly applied to avoid non-payment risks, the retailer must pay for a portion of the purchased goods at the time of ordering and may delay settling the rest until the end of a predefined agreed-upon period, the so-called credit period. This paper assumes a two-level partial trade credit in which both the supplier and the retailer offer a partial trade credit to their downstream members. The objective here is to determine the retailer's ordering policy for a deteriorating item by formulating his economic order quantity (EOQ) inventory system with backorder as a cost-minimization problem. The sensitivity of the decision variables to different parameters is also analyzed through numerical examples.
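    The paper's full model adds deterioration and two-level partial trade credit, but the underlying cost minimization has the classic EOQ-with-planned-backorders model as its simplest special case. The sketch below implements only that textbook baseline; all parameter values are illustrative, not from the paper.

    ```python
    import math

    def eoq_backorder(D, K, h, b):
        """Classic EOQ with planned backorders (no deterioration, no trade
        credit -- a simplified baseline, not the paper's full model).
        D: annual demand, K: ordering cost, h: holding cost per unit per
        year, b: backorder cost per unit per year.  Returns (Q, S, cost):
        order quantity, maximum backorder level, minimum average annual cost."""
        Q = math.sqrt(2 * D * K / h * (h + b) / b)
        S = Q * h / (h + b)
        return Q, S, avg_cost(Q, S, D, K, h, b)

    def avg_cost(Q, S, D, K, h, b):
        """Average annual cost: ordering + holding + backorder components."""
        return D * K / Q + h * (Q - S) ** 2 / (2 * Q) + b * S ** 2 / (2 * Q)

    Q, S, c = eoq_backorder(D=1200.0, K=50.0, h=2.0, b=10.0)
    ```

    At the optimum the cost reduces to sqrt(2DKh·b/(h+b)), and any perturbation of Q or S raises the average cost, which makes the solution easy to sanity-check numerically.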

  1. A SEMI-LAGRANGIAN TWO-LEVEL PRECONDITIONED NEWTON-KRYLOV SOLVER FOR CONSTRAINED DIFFEOMORPHIC IMAGE REGISTRATION.

    Science.gov (United States)

    Mang, Andreas; Biros, George

    2017-01-01

    We propose an efficient numerical algorithm for the solution of diffeomorphic image registration problems. We use a variational formulation constrained by a partial differential equation (PDE), where the constraints are a scalar transport equation. We use a pseudospectral discretization in space and a second-order accurate semi-Lagrangian time-stepping scheme for the transport equations. We solve for a stationary velocity field using a preconditioned, globalized, matrix-free Newton-Krylov scheme. We propose and test a two-level Hessian preconditioner. We consider two strategies for inverting the preconditioner on the coarse grid: a nested preconditioned conjugate gradient method (exact solve) and a nested Chebyshev iterative method (inexact solve) with a fixed number of iterations. We test the performance of our solver in different synthetic and real-world two-dimensional application scenarios. We study grid convergence and computational efficiency of our new scheme. We compare the performance of our solver against our initial implementation that uses the same spatial discretization but a standard, explicit, second-order Runge-Kutta scheme for the numerical time integration of the transport equations and a single-level preconditioner. Our improved scheme delivers significant speedups over our original implementation. As a highlight, we observe a 20× speedup for a two-dimensional, real-world multi-subject medical image registration problem.

  2. Parallelization Issues and Particle-In-Cell Codes.

    Science.gov (United States)

    Elster, Anne Cathrine

    1994-01-01

    the simulation may lead to further improvements. For example, in the case of mean particle drift, it is often advantageous to partition the grid primarily along the direction of the drift. The particle-in-cell codes for this study were tested using physical parameters, which lead to predictable phenomena including plasma oscillations and two-stream instabilities. An overview of the most central references related to parallel particle codes is also given.

  3. Combining Compile-Time and Run-Time Parallelization

    Directory of Open Access Journals (Sweden)

    Sungdo Moon

    1999-01-01

    Full Text Available This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high-quality compile-time analysis with low-cost run-time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler's automatic parallelization system. We present results of measurements on programs from two benchmark suites – SPECFP95 and NAS sample benchmarks – which identify inherently parallel loops in these programs that are missed by the compiler. We characterize the remaining parallelization opportunities and find that most of the loops require run-time testing, analysis of control flow, or some combination of the two. We present a new compile-time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed not only to improve the results of compile-time parallelization, but also to produce low-cost, directed run-time tests that allow the system to defer binding of parallelization until run time when safety cannot be proven statically. We call this approach predicated array data-flow analysis. We augment array data-flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data-flow values. Predicated array data-flow analysis allows the compiler to derive "optimistic" data-flow values guarded by predicates; these predicates can be used to derive a run-time test guaranteeing the safety of parallelization.
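    The kind of run-time safety test that predicated analysis defers to can be illustrated in miniature: record which array locations a candidate loop would write and read, and allow parallel execution only if no iteration reads a location written by a different iteration. This is a toy illustration of the idea, not the SUIF implementation; the function and its inputs are hypothetical.

    ```python
    def runtime_independence_test(indices_written, indices_read):
        """Toy run-time parallelization test: iteration i writes
        A[indices_written[i]] and reads A[indices_read[i]].  The loop is
        safe to parallelize if no iteration reads a location that a
        *different* iteration writes (no cross-iteration flow dependence)."""
        writers = {}
        for i, w in enumerate(indices_written):
            writers.setdefault(w, set()).add(i)
        for i, r in enumerate(indices_read):
            if writers.get(r, set()) - {i}:   # read of a location written elsewhere
                return False
        return True

    # A[i] = f(A[i]) is independent across iterations; A[i] = f(A[i-1]) is not:
    n = 8
    ok = runtime_independence_test(list(range(n)), list(range(n)))
    bad = runtime_independence_test(list(range(1, n)), list(range(0, n - 1)))
    ```

    A real system would emit such a test only when compile-time analysis cannot prove independence, and would guard the parallel version of the loop with it.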

  4. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed, and the report shows how these classes can be used...
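    A serial sketch of the routine in question may help: in the compressed sparse row (CSR) format, one value/column-index pair is stored per nonzero plus a row-pointer array, and the product loops over rows, which is the natural loop to split across MPI ranks or OpenMP threads. The layout below follows the standard CSR convention; it is not code from the report.

    ```python
    def csr_matvec(values, col_idx, row_ptr, x):
        """Sparse matrix-vector product y = A @ x with A stored in CSR.
        Row i owns the half-open slice row_ptr[i]:row_ptr[i+1] of
        values/col_idx.  In a parallel version this outer loop over rows
        is what gets distributed across ranks or threads."""
        n_rows = len(row_ptr) - 1
        y = [0.0] * n_rows
        for i in range(n_rows):
            s = 0.0
            for k in range(row_ptr[i], row_ptr[i + 1]):
                s += values[k] * x[col_idx[k]]
            y[i] = s
        return y

    # 3x3 example matrix [[2, 0, 1], [0, 3, 0], [4, 0, 5]]:
    values  = [2.0, 1.0, 3.0, 4.0, 5.0]
    col_idx = [0, 2, 1, 0, 2]
    row_ptr = [0, 2, 3, 5]
    y = csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0])   # -> [3.0, 3.0, 9.0]
    ```

    Because each output row depends only on its own slice of the arrays, the row loop has no cross-iteration dependences, which is what makes CSR SpMV embarrassingly parallel over rows.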

  5. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e.g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  6. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne

    2015-01-01

    for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...

  7. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  8. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (a technique known as phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x² − y² shim, and the data were reconstructed using adapted versions of the image-space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.
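    The k-space-spreading effect of quadratic phase can be seen in a one-dimensional toy example: a smooth object concentrates its DFT energy near DC, while the same object multiplied by a chirp spreads that energy over many samples. The signal, chirp strength, and sizes below are illustrative and unrelated to the actual acquisition.

    ```python
    import cmath, math

    def dft(x):
        """Naive discrete Fourier transform (fine for this tiny example)."""
        n = len(x)
        return [sum(x[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
                for k in range(n)]

    def peak_fraction(x):
        """Fraction of total k-space energy in the single largest bin."""
        mags = [abs(v) ** 2 for v in dft(x)]
        return max(mags) / sum(mags)

    n = 64
    # Smooth object: most of its k-space energy sits near DC.
    obj = [math.exp(-((m - n / 2) / 8.0) ** 2) for m in range(n)]
    alpha = 0.05   # chirp strength (illustrative value)
    scrambled = [v * cmath.exp(1j * alpha * (m - n / 2) ** 2)
                 for m, v in enumerate(obj)]

    plain = peak_fraction(obj)        # energy concentrated in few bins
    spread = peak_fraction(scrambled) # quadratic phase spreads it out
    ```

    The chirp only changes phases in image space, so the total energy is preserved; what changes is how evenly that energy is distributed across k-space, which is what makes a windowed low-resolution reconstruction possible.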

  9. Default Parallels Plesk Panel Page

    Science.gov (United States)


  10. Parallel plate transmission line transformer

    NARCIS (Netherlands)

    Voeten, S.J.; Brussaard, G.J.H.; Pemen, A.J.M.

    2011-01-01

    A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the

  11. Matpar: Parallel Extensions for MATLAB

    Science.gov (United States)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  12. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray
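    The core kernel of such a simulator is the single-qubit gate update on a state vector of 2^n amplitudes, which is what gets partitioned across nodes at large n. Below is a minimal serial sketch of that kernel; the distributed-memory partitioning of the cited work is omitted, and the function name is our own.

    ```python
    import math

    def apply_single_qubit_gate(state, gate, target):
        """Apply a 2x2 `gate` to qubit `target` of a state vector given as
        a list of 2**n amplitudes.  Amplitudes are updated in pairs whose
        indices differ only in the target bit; in a distributed simulator
        this loop is split over nodes."""
        stride = 1 << target
        new = state[:]
        for i in range(len(state)):
            if i & stride == 0:
                a, b = state[i], state[i | stride]
                new[i] = gate[0][0] * a + gate[0][1] * b
                new[i | stride] = gate[1][0] * a + gate[1][1] * b
        return new

    # Hadamard gate on qubit 0 of a 2-qubit register in |00>:
    H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
         [1 / math.sqrt(2), -1 / math.sqrt(2)]]
    state = apply_single_qubit_gate([1.0, 0.0, 0.0, 0.0], H, 0)
    ```

    Since H is its own inverse, applying it twice returns the register to |00>, which is a convenient correctness check for the kernel.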

  13. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on the topics of greatest concern in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, and network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  14. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  15. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups around the world to model collision phenomena involving the scattering of electrons or positrons by atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, are of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources of the target machine, in terms of both processor speed and memory. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable even on contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach was successful in the past, it is no longer considered a satisfactory solution, owing to the limitations of current (and future) von Neumann machines. As a consequence, there has been considerable interest in the high-performance multicomputers that have emerged over the last decade, which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as developing codes for successive classic supercomputers. The difficulty arises from the considerable differences between the computing models of the two types of machine, and as a result the programming of multicomputers is widely acknowledged to be a difficult, time-consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  16. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-05-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. 6 figs

  17. Portable programming on parallel/networked computers using the Application Portable Parallel Library (APPL)

    Science.gov (United States)

    Quealy, Angela; Cole, Gary L.; Blech, Richard A.

    1993-01-01

    The Application Portable Parallel Library (APPL) is a subroutine-based library of communication primitives that is callable from applications written in FORTRAN or C. APPL provides a consistent programmer interface to a variety of distributed and shared-memory multiprocessor MIMD machines. The objective of APPL is to minimize the effort required to move parallel applications from one machine to another, or to a network of homogeneous machines. APPL encompasses many of the message-passing primitives that are currently available on commercial multiprocessor systems. This paper describes APPL (version 2.3.1) and its usage, reports the status of the APPL project, and indicates possible directions for the future. Several applications using APPL are discussed, as well as performance and overhead results.

  18. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-01-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. (orig.)

  19. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    Science.gov (United States)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC system, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described.

  20. Concatenating algorithms for parallel numerical simulations coupling radiation hydrodynamics with neutron transport

    International Nuclear Information System (INIS)

    Mo Zeyao

    2004-11-01

    Multiphysics parallel numerical simulations are usually essential for simplifying research on complex physical phenomena in which several physics are tightly coupled. How to concatenate those coupled physics is very important for fully scalable parallel simulation. Meanwhile, three objectives should be balanced: the first is efficient data transfer among simulations; the second and third are efficient parallel execution and simultaneous development of the simulation codes. Two concatenating algorithms for multiphysics parallel numerical simulations coupling radiation hydrodynamics with neutron transport on unstructured grids are presented. The first algorithm, Fully Loosely Concatenation (FLC), focuses on independent code development and independent execution with optimal per-code performance. The second algorithm, Two-Level Tightly Concatenation (TLTC), focuses on optimal tradeoffs among the three objectives above. Theoretical analyses of communication complexity and parallel numerical experiments on hundreds of processors on two parallel machines have shown that these two algorithms are efficient and can be generalized to other multiphysics parallel numerical simulations. In particular, algorithm TLTC is linearly scalable and has achieved optimal parallel performance. (authors)

  1. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers with both shared memory and distributed memory are discussed. By analyzing the inherent laws of the mathematical and physical model of photon transport according to the structural features of parallel computers, using a divide-and-conquer strategy, adjusting the algorithm structure of the program, dissolving data dependences, finding parallelizable components and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various high-performance parallel computers, such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup has been obtained.

  2. Effects of apple branch biochar on soil C mineralization and nutrient cycling under two levels of N.

    Science.gov (United States)

    Li, Shuailin; Liang, Chutao; Shangguan, Zhouping

    2017-12-31

    The incorporation of biochar into soil has been proposed as a strategy for enhancing soil fertility and crop productivity. However, there is limited information regarding the responses of soil respiration and the C, N and P cycles to the addition of apple branch biochar at different rates to soil with different levels of N. A 108-day incubation experiment was conducted to investigate the effects of the rate of biochar addition (0, 1, 2 and 4% by mass) on soil respiration and nutrients and the activities of enzymes involved in C, N and P cycling under two levels of N. Our results showed that the application of apple branch biochar at rates of 2% and 4% increased the C-mineralization rate, while biochar amendment at 1% decreased the C-mineralization rate, regardless of the N level. The soil organic C and microbial biomass C and P contents increased as the rate of biochar addition was increased to 2%. The biochar had negative effects on β-glucosidase, N-acetyl-β-glucosaminidase and urease activity in N-poor soil but exerted a positive effect on all of these factors in N-rich soil. Alkaline phosphatase activity increased with an increase in the rate of biochar addition, but the available P contents after all biochar addition treatments were lower than those obtained in the treatments without biochar. Biochar application at rates of 2% and 4% reduced the soil nitrate content, particularly in N-rich soil. Thus, apple branch biochar has the potential to sequester C and improve soil fertility, but the responses of soil C mineralization and nutrient cycling depend on the rate of addition and soil N levels. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Productive and metabolic response to two levels of corn silage supplementation in grazing dairy cows in early lactation during autumn

    Directory of Open Access Journals (Sweden)

    Álvaro Morales

    2014-04-01

    Full Text Available Corn (Zea mays L.) silage (CS) is a nutritious feed that can be used as a supplement for dairy cows. The aim of this study was to determine the effect of supplementation with two amounts of CS on milk production and composition, live weight and body condition, as well as on some blood indicators of energy and protein metabolism, in dairy cows in early lactation grazing low-mass pasture during autumn. The study was carried out on 40 Holstein Friesian cows over 57 d. Prior to experimental treatment, milk production and days of lactation averaged 24.1 ± 2.8 kg d-1 and 62 ± 14 d, respectively. The dietary treatments consisted of two levels of supplementation with CS: 4.5 and 9 kg DM cow-1 d-1 (treatments LCS and HCS, respectively). Additionally, all the cows received a pasture allowance of 21 kg DM cow-1 d-1 and 3 kg DM cow-1 d-1 of concentrate. Milk composition was determined using infrared spectrophotometry, while blood indicators were obtained using an autoanalyzer. There were no differences between treatments in milk production or composition, or in total DM or energy intake. Herbage and protein intake were higher for the LCS treatment (P < 0.001). Increasing supplementation decreased (P < 0.001) daily weight gain but did not affect body condition. Plasma concentrations of βOH-butyrate were lower (P = 0.038) for the LCS treatment, while urea concentrations were higher (P = 0.003), with no differences in non-esterified fatty acid (NEFA) concentrations. Supplementation with 4.5 kg d-1 of CS was sufficient to meet the production requirements of the cows.

  4. Comparison of Helicopter Emergency Medical Services Transport Types and Delays on Patient Outcomes at Two Level I Trauma Centers.

    Science.gov (United States)

    Nolan, Brodie; Tien, Homer; Sawadsky, Bruce; Rizoli, Sandro; McFarlan, Amanda; Phillips, Andrea; Ackery, Alun

    2017-01-01

    Helicopter emergency medical services (HEMS) have become an ingrained component of trauma systems. In Ontario, transportation of trauma patients occurs in one of three ways: scene call, modified scene call, or interfacility transfer. We hypothesize that differences exist between these types of transports in both patient demographics and patient outcomes. This study compares the characteristics of patients transported by each of these methods to two level 1 trauma centers and assesses any impact on morbidity or mortality. As a secondary outcome, reasons for delay were identified. A local trauma registry was used to identify and abstract data for all patients transported to two trauma centers by HEMS over a 36-month period. Further chart abstraction using the HEMS patient care reports was done to identify causes of delay during HEMS transport. During the study period, HEMS transferred a total of 911 patients, of which 139 were scene calls, 333 were modified scene calls and 439 were interfacility transfers. Scene calls had more patients with an ISS of less than 15 and more patients discharged home from the ED. Modified scene calls had more patients with an ISS greater than 25. The most common modifiable delays included the sending physician performing a procedure, waiting to meet a land EMS crew, delays for diagnostic imaging, and confirming disposition or destination. Differences exist between the types of transports done by HEMS for trauma patients. Many identified reasons for delay to HEMS transport are modifiable and have practical solutions. Future research should focus on solutions to identified delays to HEMS transport. Key words: helicopter emergency medical services; trauma; prehospital care; delays.

  5. Strategic production modeling for defective items with imperfect inspection process, rework, and sales return under two-level trade credit

    Directory of Open Access Journals (Sweden)

    Aditi Khanna

    2017-01-01

    Full Text Available Quality decisions are among the major decisions in inventory management. They affect customer demand, loyalty and satisfaction, as well as inventory costs. Every manufacturing process is inherently subject to chance causes of variation, which may lead to some defectives in the lot. So, in order to supply customers with faultless products, an inspection process is inevitable, and that process may itself be prone to errors. Thus, for an operations manager, maintaining the quality of the lot and of the screening process becomes a challenging task when the objective is to determine the optimal order quantity for the inventory system. Besides these operational tasks, the goal is also to increase the customer base, which eventually leads to higher profits. As a promotional tool, trade credit is offered by both the retailer and the supplier to their respective customers to encourage more frequent and higher-volume purchases. Taking these facts into account, a strategic production model is formulated here to study the combined effects of imperfect-quality items, a faulty inspection process, rework, and sales returns under two-level trade credit. The present study provides a general framework for many articles and for the classical EPQ model. An analytical method is employed that jointly optimizes the retailer's credit period and order quantity so as to maximize the expected total profit per unit time. To study the behavior and application of the model, a numerical example is given and a comprehensive sensitivity analysis is performed. The model is widely applicable in manufacturing industries such as textiles, footwear, plastics, electronics, and furniture.

  6. On the Performance Optimization of Two-Level Three-Phase Grid-Feeding Voltage-Source Inverters

    Directory of Open Access Journals (Sweden)

    Issam A. Smadi

    2018-02-01

    Full Text Available The performance optimization of the two-level, three-phase, grid-feeding, voltage-source inverter (VSI) is studied in this paper, which adopts an online adaptive switching frequency algorithm (OASF). A new degree of freedom has been added to the employed OASF algorithm for optimal selection of the weighting factor and overall system design optimization. Toward that end, a full mathematical formulation, including the impact of the coupling inductor and the controller response time, is presented. At first, the weighting factor is selected to favor the switching losses, and the controller gains are optimized by minimizing the integral time-weighted absolute error (ITAE) of the output active and reactive power. Different loading and ambient temperature conditions are considered to validate the optimized controller and its fast response through online field-programmable gate array (FPGA)-in-the-loop testing. Then, the weighting factor is optimally selected to reduce the cost of the L-filter and the heat-sink. An optimization problem to minimize the design cost under the worst-case loading condition for the grid-feeding VSI is formulated. The results of this optimization problem are the filter inductance, the thermal resistance of the heat-sink, and the optimal switching frequency together with the optimal weighting factor. A VSI test-bed using the optimized parameters is used to verify the proposed work experimentally. Adopting the OASF algorithm with the optimal weighting factor for the grid-feeding VSI, the reductions in the slope of the steady-state junction temperature profile compared to fixed frequencies of 10 kHz, 14.434 kHz, and 20 kHz are about 6%, 30%, and 18%, respectively.
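As a minimal illustration of the ITAE criterion used above to tune the controller gains, here is the integral evaluated numerically; the error signal is a toy first-order decay, an assumption for illustration, not the inverter's actual power-loop response:

```python
import numpy as np

def itae(t, error):
    """Integral time-weighted absolute error, ITAE = integral of t*|e(t)| dt,
    evaluated with the trapezoidal rule."""
    y = t * np.abs(error)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

# Toy closed-loop error decaying as e(t) = exp(-t); analytically the
# integral over [0, inf) is exactly 1, so the score should be near 1.
t = np.linspace(0.0, 30.0, 30001)
score = itae(t, np.exp(-t))
```

The time weighting penalizes errors that persist late in the transient, which is why minimizing ITAE favors fast, well-damped responses.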

  7. The biopsychosocial characteristics preceding pregnancy in teenagers from two level-one medical centers in Popayán

    Directory of Open Access Journals (Sweden)

    Sandra Yamile Martínez

    2010-12-01

    Full Text Available Objective: To identify biopsychosocial characteristics preceding pregnancy in teenagers who attended two level-one medical centers in Popayán. Method: Descriptive study, gathering and analysing qualitative and quantitative information. Results: 38 teenagers with an average age of 16.37 years at conception. 90% (34) were first-time mothers, 73% (28) were attending high school, and 68% (26) were from a low socioeconomic background. 36.8% (14) were planning a future involving study and work; 46% (17) had dropped out of school. The girls' average ages at menarche and at commencing sexual activity were 12.89 and 15.32 years, respectively. 71% (27) had a sexual partner and mentioned that the main reasons for getting pregnant were falling in love and loneliness. Dysfunctional families were a notable feature, with 32% (12) coming from broken nuclear families. Social activities were the most frequent use of free time (22/38), and 34.2% (13) spent time with their boyfriends. 55% (21) did not use any contraceptive. 50% (19) had heard negative comments about teenage motherhood before their pregnancy. 63% (24) did not plan to get pregnant. 71% (27) had a mother, cousin or friend with a history of teenage pregnancy. Conclusions: In this population, pregnancy is perhaps a way to establish sexual identity. There is probably an influence of a repeated generational pattern of pregnancy at an early age. Teenagers find it viable to adopt adult roles to establish their identity, creating a false identity; in addition, the limited support from their parents leads them to marriage or pregnancy as a way to reaffirm their role.

  8. Two-Level Chebyshev Filter Based Complementary Subspace Method: Pushing the Envelope of Large-Scale Electronic Structure Calculations.

    Science.gov (United States)

    Banerjee, Amartya S; Lin, Lin; Suryanarayana, Phanish; Yang, Chao; Pask, John E

    2018-06-12

    We describe a novel iterative strategy for Kohn-Sham density functional theory calculations aimed at large systems (>1,000 electrons), applicable to metals and insulators alike. In lieu of explicit diagonalization of the Kohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ a two-level Chebyshev polynomial filter based complementary subspace strategy to (1) compute a set of vectors that span the occupied subspace of the Hamiltonian; (2) reduce subspace diagonalization to just partially occupied states; and (3) obtain those states in an efficient, scalable manner via an inner Chebyshev filter iteration. By reducing the necessary computation to just partially occupied states and obtaining these through an inner Chebyshev iteration, our approach reduces the cost of large metallic calculations significantly, while eliminating subspace diagonalization for insulating systems altogether. We describe the implementation of the method within the framework of the discontinuous Galerkin (DG) electronic structure method and show that this results in a computational scheme that can effectively tackle bulk and nano systems containing tens of thousands of electrons, with chemical accuracy, within a few minutes or less of wall clock time per SCF iteration on large-scale computing platforms. We anticipate that our method will be instrumental in pushing the envelope of large-scale ab initio molecular dynamics. As a demonstration of this, we simulate a bulk silicon system containing 8,000 atoms at finite temperature, and obtain an average SCF step wall time of 51 s on 34,560 processors; thus allowing us to carry out 1.0 ps of ab initio molecular dynamics in approximately 28 h (of wall time).
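A stripped-down sketch of the inner idea, the Chebyshev polynomial filter, may help: a three-term recurrence applies T_m of a shifted and scaled Hamiltonian to a block of vectors, leaving components in the unwanted spectral interval bounded while amplifying the occupied (low-energy) part. The toy diagonal Hamiltonian and all parameter values are assumptions for illustration only:

```python
import numpy as np

def chebyshev_filter(H, X, degree, a, b):
    """Apply the degree-m Chebyshev polynomial of a shifted/scaled H to
    the block X.  The unwanted spectral interval [a, b] is mapped onto
    [-1, 1], where T_m stays bounded, while components below `a` (the
    occupied, low-energy part) are amplified exponentially."""
    e = (b - a) / 2.0          # half-width of the damped interval
    c = (b + a) / 2.0          # its centre
    Y_prev = X                 # T_0 applied to X
    Y = (H @ X - c * X) / e    # T_1 applied to X
    for _ in range(2, degree + 1):
        Y_next = 2.0 * (H @ Y - c * Y) / e - Y_prev   # T_{k+1} = 2t T_k - T_{k-1}
        Y_prev, Y = Y, Y_next
    return Y

# Toy diagonal "Hamiltonian" with spectrum in [-1, 9]; filter a vector
# mixing the lowest eigenstate (index 0) with an interior one (index 25).
H = np.diag(np.linspace(-1.0, 9.0, 50))
v = np.zeros((50, 1))
v[0, 0] = 1.0
v[25, 0] = 1.0
y = chebyshev_filter(H, v, degree=10, a=0.0, b=9.0)
# the occupied component grows by hundreds of x; the interior one stays O(1)
```

In the actual method this filtering is applied blockwise within the SCF loop, with an inner filter iteration resolving only the partially occupied states.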

  9. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade, and further gains have instead come from the introduction of parallelism into processors. Multicore frameworks, along with graphics processing units, have broadened the scope for parallelism. Compilers are being updated to meet the emerging challenges of synchronization and threading. Appropriate program and algorithm classification can greatly help software engineers identify opportunities for effective parallelization. In the present work we investigate current species-based classifications of algorithms; related work on classification is discussed, along with a comparison of the issues that make classification challenging. A set of algorithms is chosen whose structure matches the different issues while performing a given task. We tested these algorithms using existing automatic species-extraction tools together with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant information that is not captured by the original species of algorithms. We implemented the new theory in the tool, enabling automatic characterization of program code.

  10. Parallel computing in plasma physics: Nonlinear instabilities

    International Nuclear Information System (INIS)

    Pohn, E.; Kamelander, G.; Shoucri, M.

    2000-01-01

    A Vlasov-Poisson system is used for studying the time evolution of the charge separation at a spatially one- as well as two-dimensional plasma edge. Ions are advanced in time using the Vlasov equation. The whole three-dimensional velocity space is considered, leading to very time-consuming four- resp. five-dimensional fully kinetic simulations. In the 1D simulations electrons are assumed to behave adiabatically, i.e. they are Boltzmann-distributed, leading to a nonlinear Poisson equation. In the 2D simulations a gyro-kinetic approximation is used for the electrons. The plasma is assumed to be initially neutral. The simulations are performed on an equidistant grid. A constant time-step is used for advancing the density-distribution function in time. The time evolution of the distribution function is performed using a splitting scheme. Each dimension (x, y, v_x, v_y, v_z) of the phase space is advanced in time separately. The value of the distribution function for the next time is calculated from the value of an - in general - interstitial point at the present time (fractional shift). One-dimensional cubic-spline interpolation is used for calculating the interstitial function values. After the fractional shifts are performed for each dimension of the phase space, a whole time-step for advancing the distribution function is finished. Afterwards the charge density is calculated, the Poisson equation is solved and the electric field is calculated before the next time-step is performed. The fractional-shift method sketched above was parallelized for p processors as follows. Considering first the shifts in the y-direction, a proper parallelization strategy is to split the grid into p disjoint v_z-slices, which are sub-grids, each containing a different 1/p-th part of the v_z range but the whole range of all other dimensions. Each processor is responsible for performing the y-shifts on a different slice, which can be done in parallel without any communication between
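The fractional-shift step described above can be sketched in one dimension as follows. Here np.interp's linear interpolation stands in for the one-dimensional cubic-spline interpolation of the paper, and the grid and profile are assumptions for illustration:

```python
import numpy as np

def fractional_shift(f, shift_cells):
    """Advance a 1-D slice of the distribution function by a (generally
    non-integer) number of grid cells on a periodic grid: the new value
    at each node is interpolated at the 'interstitial' departure point
    of the present time level.  np.interp's linear interpolation stands
    in for the cubic-spline interpolation used in the paper."""
    n = f.size
    departure = (np.arange(n) - shift_cells) % n   # departure points, in cell units
    # periodic interpolation: extend the grid by one wrap-around sample
    return np.interp(departure, np.arange(n + 1), np.append(f, f[0]))

# Shift a periodic Gaussian by 3.5 cells and compare with the exact result.
n = 200
x = np.arange(n) / n
f0 = np.exp(-0.5 * ((x - 0.5) / 0.1) ** 2)
f1 = fractional_shift(f0, 3.5)
exact = np.exp(-0.5 * ((((x - 3.5 / n) % 1.0) - 0.5) / 0.1) ** 2)
```

Because each one-dimensional shift only reads values along its own line of the grid, independent slices can be shifted concurrently, which is exactly the parallelization strategy the abstract describes.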

  11. Parallelizing AT with MatlabMPI

    International Nuclear Information System (INIS)

    2011-01-01

    The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems in computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to bring its performance up to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, establishing the prerequisites for multithreaded processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we demonstrated highly efficient per-processor speed increments in AT's beam-tracking functions. Extrapolating from these predictions, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the downfalls of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on the development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while

  12. Parallel Libraries to support High-Level Programming

    DEFF Research Database (Denmark)

    Larsen, Morten Nørgaard

    and the Microsoft .NET iv framework. Normally, one would not directly think of the .NET framework when talking about scientific applications, but Microsoft has in the last couple of versions of .NET introduced a number of tools for writing parallel and high-performance code. The first section examines how programmers can...

  13. Hardware packet pacing using a DMA in a parallel computer

    Science.gov (United States)

    Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos

    2013-08-13

    Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.
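A software sketch of the pacing idea, hedged as a reconstruction from the abstract rather than the patented hardware design: a counter tracks the total bytes in flight from remote-get operations, and further packets are injected only while the counter stays below a cap:

```python
from collections import deque

class PacketPacer:
    """Sketch of DMA-style packet pacing (an illustrative reconstruction,
    not IBM's implementation): a hardware-style token counter tracks the
    total bytes put on the network by remote-get operations, and further
    packets are injected only while the counter stays below a cap."""

    def __init__(self, max_inflight_bytes):
        self.max_inflight = max_inflight_bytes
        self.inflight = 0              # the "token counter"
        self.backlog = deque()         # packets waiting to be injected

    def submit(self, packet_bytes):
        """Queue a packet from a remote get, then inject what fits."""
        self.backlog.append(packet_bytes)
        self._drain()

    def _drain(self):
        while self.backlog and self.inflight + self.backlog[0] <= self.max_inflight:
            self.inflight += self.backlog.popleft()

    def on_ack(self, packet_bytes):
        """Bytes left the network; free tokens and inject more."""
        self.inflight -= packet_bytes
        self._drain()

pacer = PacketPacer(max_inflight_bytes=4096)
for _ in range(8):
    pacer.submit(1024)   # 8 KiB requested, but only 4 KiB ever in flight
```

The effect is that burst injection is smoothed to the rate at which earlier packets drain from the network, which is the purpose of pacing.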

  14. Parallel diffusion length on thermal neutrons in rod type lattices

    International Nuclear Information System (INIS)

    Ahmed, T.; Siddiqui, S.A.M.M.; Khan, A.M.

    1981-11-01

    Calculations of the diffusion length of thermal neutrons in lead-water and aluminum-water lattices in the direction parallel to the rods are performed using the one-group diffusion equation together with the Shevelev transport correction. The formalism is then applied to two practical cases, the Kawasaki (Hitachi) and Douglas Point (CANDU) reactor lattices. Our results are in good agreement with the observed values. (author)

  15. Implementing parallel elliptic solver on a Beowulf cluster

    Directory of Open Access Journals (Sweden)

    Marcin Paprzycki

    1999-12-01

    Full Text Available In a recent paper [zara] a parallel direct solver for the linear systems arising from elliptic partial differential equations was proposed. The aim of this note is to present an initial evaluation of the performance characteristics of this algorithm on a Beowulf-type cluster. In this context the performance of PVM- and MPI-based implementations is compared.

  16. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    Reliability-based design of structural systems is considered. Especially systems where the reliability model is a series system of parallel systems are analysed. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization...

  17. Parallel adaptation of a vectorised quantumchemical program system

    International Nuclear Information System (INIS)

    Van Corler, L.C.H.; Van Lenthe, J.H.

    1987-01-01

    Supercomputers like the CRAY 1 or the Cyber 205 have had, and still have, a marked influence on quantum chemistry. Vectorization has led to a considerable increase in the performance of quantum chemistry programs. However, clock-cycle times more than a factor of 10 smaller than those of present supercomputers are not to be expected. Therefore future supercomputers will have to depend on parallel structures. Recently, the first examples of such supercomputers have been installed. To be prepared for this new generation of (parallel) supercomputers, one should consider the concepts one wants to use and the kinds of problems one will encounter during implementation of existing vectorized programs on those parallel systems. The authors implemented four important parts of a large quantum-chemical program system (ATMOL), i.e. integrals, SCF, 4-index and Direct-CI, in the parallel environment at ECSEC (Rome, Italy). This system offers simulated parallelism on the host computer (IBM 4381) and real parallelism on at most 10 attached processors (FPS-164). Quantum-chemical programs usually handle large amounts of data and very large, often sparse, matrices. The transfer of that much data can cause problems concerning communication and overhead, in view of which shared memory and shared disks must be considered. The strategy and the tools that were used to parallelize the programs are shown. Also, some examples are presented to illustrate the effectiveness and performance of the system in Rome for this type of calculation

  18. GPU Parallel Bundle Block Adjustment

    Directory of Open Access Journals (Sweden)

    ZHENG Maoteng

    2017-09-01

    Full Text Available To deal with massive data in photogrammetry, we introduce GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton methods are also applied to decrease the number of iterations needed to solve the normal equation. A brand-new bundle adjustment workflow is developed to exploit GPU parallel computing. Our method avoids the storage and inversion of the big normal matrix, and computes the normal matrix in real time. The proposed method not only greatly decreases the memory requirement of the normal matrix, but also greatly improves the efficiency of bundle adjustment, while achieving the same accuracy as the conventional method. Preliminary experimental results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 1.5 minutes while achieving sub-pixel accuracy.
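The key numerical ingredient named above, a preconditioned conjugate gradient applied to the normal equations so the normal matrix is only ever used as a matrix-vector product, can be sketched as follows; the Jacobi preconditioner and the toy system are assumptions for illustration, not the authors' code:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient with a diagonal (Jacobi)
    preconditioner.  A is only ever applied as a matrix-vector
    product, so the normal matrix never needs to be inverted."""
    x = np.zeros_like(b)
    r = b - A @ x                   # initial residual
    z = M_inv_diag * r              # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # update the search direction
        rz = rz_new
    return x

# A tiny symmetric positive-definite system standing in for the normal
# equations of a (vastly smaller) bundle adjustment problem.
rng = np.random.default_rng(1)
G = rng.standard_normal((30, 30))
A = G.T @ G + 30.0 * np.eye(30)
b = rng.standard_normal(30)
x = pcg(A, b, 1.0 / np.diag(A))
```

Because the only operation on A is the product `A @ p`, the normal matrix can be formed and applied blockwise on the GPU without ever being stored whole, which is the memory saving the abstract emphasizes.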

  19. Quantum theory of phonon-mediated decoherence and relaxation of two-level systems in a structured electromagnetic reservoir

    Science.gov (United States)

    Roy, Chiranjeeb

    In this thesis we study the role of nonradiative degrees of freedom on quantum optical properties of mesoscopic quantum dots placed in the structured electromagnetic reservoir of a photonic crystal. We derive a quantum theory of the role of acoustic and optical phonons in modifying the optical absorption lineshape, polarization dynamics, and population dynamics of a two-level atom (quantum dot) in the "colored" electromagnetic vacuum of a photonic band gap (PBG) material. This is based on a microscopic Hamiltonian describing both radiative and vibrational processes quantum mechanically. Phonon sidebands in an ordinary electromagnetic reservoir are recaptured in a simple model of optical phonons using a mean-field factorization of the atomic and lattice displacement operators. Our formalism is then used to treat the non-Markovian dynamics of the same system within the structured electromagnetic density of states of a photonic crystal. We elucidate the extent to which phonon-assisted decay limits the lifetime of a single photon-atom bound state and derive the modified spontaneous emission dynamics due to coupling to various phonon baths. We demonstrate that coherent interaction with undamped phonons can lead to enhanced lifetime of a photon-atom bound state in a PBG by (i) dephasing and reducing the transition electric dipole moment of the atom and (ii) reducing the quantum mechanical overlap of the state vectors of the excited and ground state (polaronic shift). This results in reduction of the steady-state atomic polarization but an increase in the fractionalized upper state population in the photon-atom bound state. We demonstrate, on the other hand, that the lifetime of the photon-atom bound state in a PBG is limited by the lifetime of phonons due to lattice anharmonicities (break-up of phonons into lower energy phonons) and purely nonradiative decay. We demonstrate how these additional damping effects limit the extent of the polaronic (Franck-Condon) shift of

  20. A tandem parallel plate analyzer

    International Nuclear Information System (INIS)

    Hamada, Y.; Fujisawa, A.; Iguchi, H.; Nishizawa, A.; Kawasumi, Y.

    1996-11-01

    With a new modification of the parallel plate analyzer, second-order focus is obtained at an arbitrary injection angle. An analyzer of this kind with a small injection angle will have the advantage of a small operating voltage compared to the Proca and Green analyzer, where the injection angle is 30 degrees. Thus, the newly proposed analyzer will be very useful for the precise energy measurement of high-energy particles in the MeV range. (author)

  1. High-speed parallel counter

    International Nuclear Information System (INIS)

    Gus'kov, B.N.; Kalinnikov, V.A.; Krastev, V.R.; Maksimov, A.N.; Nikityuk, N.M.

    1985-01-01

    This paper describes a high-speed parallel counter that contains 31 inputs and 15 outputs and is implemented by integrated circuits of series 500. The counter is designed for fast sampling of events according to the number of particles that pass simultaneously through the hodoscopic plane of the detector. The minimum delay of the output signals relative to the input is 43 nsec. The duration of the output signals can be varied from 75 to 120 nsec

  2. An anthropologist in parallel structure

    Directory of Open Access Journals (Sweden)

    Noelle Molé Liston

    2016-08-01

    Full Text Available The essay examines the parallels between Molé Liston’s studies on labor and precarity in Italy and the United States’ anthropology job market. Probing the way economic shifts reshaped the field of the anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value of studying the hardships and daily lives of non-western populations in Europe.

  3. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic resonance imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we will use the Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.

  4. Wakefield calculations on parallel computers

    International Nuclear Information System (INIS)

    Schoessow, P.

    1990-01-01

    The use of parallelism in the solution of wakefield problems is illustrated for two different computer architectures (SIMD and MIMD). Results are given for finite difference codes which have been implemented on a Connection Machine and an Alliant FX/8 and which are used to compute wakefields in dielectric loaded structures. Benchmarks on code performance are presented for both cases. 4 refs., 3 figs., 2 tabs

  5. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  6. Integrated parallel reception, excitation, and shimming (iPRES).

    Science.gov (United States)

    Han, Hui; Song, Allen W; Truong, Trong-Kha

    2013-07-01

    To develop a new concept for a hardware platform that enables integrated parallel reception, excitation, and shimming. This concept uses a single coil array rather than separate arrays for parallel excitation/reception and B0 shimming. It relies on a novel design that allows a radiofrequency current (for excitation/reception) and a direct current (for B0 shimming) to coexist independently in the same coil. Proof-of-concept B0 shimming experiments were performed with a two-coil array in a phantom, whereas B0 shimming simulations were performed with a 48-coil array in the human brain. Our experiments show that individually optimized direct currents applied in each coil can reduce the B0 root-mean-square error by 62-81% and minimize distortions in echo-planar images. The simulations show that dynamic shimming with the 48-coil integrated parallel reception, excitation, and shimming array can reduce the B0 root-mean-square error in the prefrontal and temporal regions by 66-79% as compared with static second-order spherical harmonic shimming and by 12-23% as compared with dynamic shimming with a 48-coil conventional shim array. Our results demonstrate the feasibility of the integrated parallel reception, excitation, and shimming concept to perform parallel excitation/reception and B0 shimming with a unified coil system as well as its promise for in vivo applications. Copyright © 2013 Wiley Periodicals, Inc.
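The shimming step, choosing per-coil direct currents to minimize the B0 root-mean-square error, is at heart a least-squares problem. A sketch under assumed names, shapes and synthetic field maps (not the authors' reconstruction):

```python
import numpy as np

def optimize_shim_currents(coil_fields, delta_b0):
    """Least-squares choice of per-coil direct currents: minimize
    ||delta_b0 + coil_fields @ currents||_2, where column j of
    `coil_fields` is the B0 map produced by unit DC in coil j.
    (Names, shapes and units here are assumptions for illustration.)"""
    currents, *_ = np.linalg.lstsq(coil_fields, -delta_b0, rcond=None)
    return currents

# Synthetic example: 500 voxels, 8 coils, a measured inhomogeneity that
# is mostly spanned by the coil fields plus a little noise.
rng = np.random.default_rng(2)
F = rng.standard_normal((500, 8))
db0 = F @ rng.standard_normal(8) + 0.01 * rng.standard_normal(500)
i_opt = optimize_shim_currents(F, db0)
rmse_before = np.sqrt(np.mean(db0 ** 2))
rmse_after = np.sqrt(np.mean((db0 + F @ i_opt) ** 2))
```

In practice the optimization would also respect per-coil current limits and coexist with the radiofrequency currents, which is what the integrated coil design enables.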

  7. Parallel processing of genomics data

    Science.gov (United States)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and next-generation sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, and the analysis of this enormous flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face these issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze the data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data, able to handle high-dimensional data with good response times. The proposed system is able to find statistically significant biological markers that discriminate between classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
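The shape of such a parallel preprocessing pipeline can be sketched as a chunked map over a variant table; the allele-frequency statistic, chunk size and thread pool below are illustrative assumptions, not the authors' algorithm:

```python
from concurrent.futures import ThreadPoolExecutor

def allele_frequency(chunk):
    """Per-chunk preprocessing step: alternate-allele frequency at each
    variant, with genotypes coded 0/1/2 per sample."""
    return [sum(genotypes) / (2 * len(genotypes)) for genotypes in chunk]

def parallel_preprocess(variants, workers=4, chunk_size=250):
    """Split the variant table into chunks and preprocess them in
    parallel, then flatten the per-chunk results back into one list."""
    chunks = [variants[i:i + chunk_size]
              for i in range(0, len(variants), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(allele_frequency, chunks)
    return [freq for part in results for freq in part]

# 1000 variants x 20 samples, all heterozygous (coded 1): frequency 0.5
variants = [[1] * 20 for _ in range(1000)]
freqs = parallel_preprocess(variants)
```

Because each chunk is independent, the same structure scales from a thread pool on one machine to distributed workers, which is where the reported speed-up comes from.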

  8. Cosmic Shear With ACS Pure Parallels

    Science.gov (United States)

    Rhodes, Jason

    2002-07-01

    Small distortions in the shapes of background galaxies by foreground mass provide a powerful method of directly measuring the amount and distribution of dark matter. Several groups have recently detected this weak lensing by large-scale structure, also called cosmic shear. The high resolution and sensitivity of HST/ACS provide a unique opportunity to measure cosmic shear accurately on small scales. Using 260 parallel orbits in Sloan F775W we will measure for the first time the cosmic shear variance on small scales, with signal-to-noise (s/n) of about 20, and the mass density Omega_m with s/n = 4. These measurements will be done at small angular scales where non-linear effects dominate the power spectrum, providing a test of the gravitational instability paradigm for structure formation. Measurements on these scales are not possible from the ground, because of the systematic effects induced by PSF smearing from seeing. Having many independent lines of sight reduces the uncertainty due to cosmic variance, making parallel observations ideal.

  9. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to the implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at the implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuron-like information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for the implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after the raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  10. New parallel SOR method by domain partitioning

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Dexuan [Courant Inst. of Mathematical Sciences New York Univ., NY (United States)

    1996-12-31

    In this paper, we propose and analyze a new parallel SOR method, the PSOR method, formulated by using domain partitioning together with an interprocessor data-communication technique. For the 5-point approximation to the Poisson equation on a square, we show that the ordering of the PSOR based on the strip partition leads to a consistently ordered matrix, and hence the PSOR and the SOR using the row-wise ordering have the same convergence rate. However, in general, the ordering used in PSOR may not be "consistently ordered". So, there is a need to analyze the convergence of PSOR directly. In this paper, we present a PSOR theory, and show that the PSOR method can have the same asymptotic rate of convergence as the corresponding sequential SOR method for a wide class of linear systems in which the matrix is "consistently ordered". Finally, we demonstrate the parallel performance of the PSOR method on four different message-passing multiprocessors (a KSR1, the Intel Delta, an Intel Paragon and an IBM SP2), along with a comparison with the point Red-Black and four-color SOR methods.
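For readers unfamiliar with why orderings matter here, a minimal Red-Black SOR sweep for the 5-point Poisson stencil shows the parallelizable structure (a simplified stand-in, not the PSOR strip-partition algorithm itself; grid size and relaxation factor are illustrative):

```python
import numpy as np

def redblack_sor(f, h, omega=1.8, iters=500):
    """Red-Black SOR for the 5-point Poisson stencil with zero boundary:
    within one colour, every update depends only on the other colour, so
    all points of a colour can be updated in parallel -- the property
    that domain-partitioned SOR variants such as PSOR exploit."""
    u = np.zeros_like(f)
    n = f.shape[0]
    for _ in range(iters):
        for parity in (0, 1):                    # red sweep, then black
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 != parity:
                        continue
                    gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1]
                                 + u[i, j+1] + h * h * f[i, j])
                    u[i, j] += omega * (gs - u[i, j])
    return u

n = 17
h = 1.0 / (n - 1)
f = np.ones((n, n))                              # constant source term
u = redblack_sor(f, h)
```

The strip partition of PSOR plays the same role as the colouring here: it decouples updates so that processors only exchange boundary data between sweeps.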

  11. Parallelization of the Coupled Earthquake Model

    Science.gov (United States)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, predicting tsunamis over the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers, improving simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  12. Computational chaos in massively parallel neural networks

    Science.gov (United States)

    Barhen, Jacob; Gulati, Sandeep

    1989-01-01

    A fundamental issue which directly impacts the scalability of current theoretical neural network models to massively parallel embodiments, in both software and hardware, is the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the possible emergence of chaotic manifestations. Previous analyses attributed dynamical instability to the topology of the interconnection matrix, to parasitic components, or to propagation delays. However, the authors have observed the existence of emergent computational chaos in a concurrently asynchronous framework, independent of the network topology. They present a methodology enabling the effective asynchronous operation of large-scale neural networks. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are computed formally to characterize the underlying nonlinear dynamics. Simulation results are presented to illustrate network convergence to the correct results, even in the presence of large delays.
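The contracting-operator condition can be seen in a toy asynchronous iteration; the linear update map and all values here are illustrative assumptions, not the authors' network:

```python
import random

def asynchronous_fixed_point(W, b, sweeps=200, seed=0):
    """Asynchronous iteration x_i <- (W x + b)_i: components are updated
    one at a time in a random order, always reading whatever values are
    currently stored.  When the map is a contraction (here the absolute
    row sums of W are below 1), convergence is guaranteed regardless of
    update order -- the abstract's sufficient condition in miniature."""
    n = len(b)
    x = [0.0] * n
    rng = random.Random(seed)
    order = list(range(n))
    for _ in range(sweeps):
        rng.shuffle(order)     # a different asynchronous schedule each sweep
        for i in order:
            x[i] = sum(W[i][j] * x[j] for j in range(n)) + b[i]
    return x

# Contracting weights: every absolute row sum is 0.5, so the unique fixed
# point of x = Wx + b with b = 1 is x_i = 2 for all i.
n = 6
W = [[0.5 / n] * n for _ in range(n)]
b = [1.0] * n
x = asynchronous_fixed_point(W, b)
```

If the row sums exceeded 1 the same random schedules could diverge or oscillate chaotically, which is the instability the abstract is concerned with.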

  13. Effects of increased CO[sub 2] concentration and temperature on growth and yield of winter wheat at two levels of nitrogen application

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, R.A.C.; Mitchell, V.J.; Driscoll, S.P.; Franklin, J.; Lawlor, D.W. (Institute of Arable Crops Research, Harpenden (United Kingdom). Dept. of Biochemistry and Physiology)

    1993-06-01

    Winter wheat was grown in chambers under light and temperature conditions similar to the UK field environment for the 1990/1991 growing season at two levels each of atmospheric CO[sub 2] concentration (seasonal means: 361 and 692 [mu]mol mol[sup -1]), temperature (tracking ambient and ambient +4[degree]C) and nitrogen application (equivalent to 87 and 489 kg ha[sup -1] total N applied). Total dry matter productivity through the season, the maximum number of shoots and final ear number were stimulated by CO[sub 2] enrichment at both levels of the temperature and N treatments. At high N, there was a CO[sub 2]-induced stimulation of grain yield (+15%) similar to that for total crop dry mass (+12%), and there was no significant interaction with temperature. Temperature had a direct, negative effect on yield at both levels of the N and CO[sub 2] treatments. This could be explained by the temperature-dependent shortening of the phenological stages, and therefore of the time available for accumulating resources for grain formation. At high N, there was also a reduction in grain set at ambient +4[degree]C temperature, but the overall negative effect of warmer temperature was greater on the number of grains (-37%) than on yield (-18%), due to a compensating increase in average grain mass. At low N, despite increasing total crop dry mass and the number of ears, elevated CO[sub 2] did not increase grain yield and caused a significant decrease under ambient temperature conditions. This can be explained in terms of a stimulation of early vegetative growth by CO[sub 2] enrichment leading to a reduction in the amount of N available later for the formation and filling of grain.

  14. Parallel operation of voltage-source converters: issues and applications

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, F.C.B.; Silva, D.S. [Federal University of Juiz de Fora (UFJF), MG (Brazil)], Emails: felipe.brum@engenharia.ufjf.br, salomaoime@yahoo.com.br; Ribeiro, P.F. [Calvin College, Grand Rapids, MI (United States); Federal University of Juiz de Fora (UFJF), MG (Brazil)], E-mail: pfribeiro@ieee.org

    2009-07-01

    Technological advancements in power electronics have prompted the development of advanced AC/DC conversion systems with high efficiency and flexible performance. Among these devices, the Voltage-Source Converter (VSC) has become an essential building block. This paper considers the parallel operation of VSCs under different system conditions and how they can assist the operation of highly complex power networks. A multi-terminal VSC-based High Voltage Direct Current (M-VSC-HVDC) system is chosen to be modeled, simulated and then analyzed as an example of VSCs operating in parallel. (author)

  15. OPTIMIZATION OF AGGREGATION AND SEQUENTIAL-PARALLEL EXECUTION MODES OF INTERSECTING OPERATION SETS

    Directory of Open Access Journals (Sweden)

    G. М. Levin

    2016-01-01

    A mathematical model and a method are proposed for the problem of optimizing the aggregation and the sequential-parallel execution modes of intersecting operation sets. The proposed method is based on a two-level decomposition scheme: at the top level, the aggregation variant for groups of operations is selected, and at the lower level the execution modes of the operations are optimized for a fixed aggregation variant.
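
    The two-level decomposition can be sketched on a hypothetical toy instance (all operation names, variants, and costs below are invented for illustration, not from the paper): the top level enumerates aggregation variants, and the lower level independently picks the cheapest execution mode for each group under a fixed variant.

```python
# Toy two-level decomposition: top level chooses how operations are
# aggregated into groups; lower level chooses, per group, the cheapest
# of its admissible execution modes (sequential vs. parallel here).
variants = {                       # aggregation variant -> groups of operations
    "fine":   [("op1",), ("op2",), ("op3",)],
    "coarse": [("op1", "op2"), ("op3",)],
}
mode_cost = {                      # group -> cost under each execution mode
    ("op1",):        {"seq": 4, "par": 3},
    ("op2",):        {"seq": 5, "par": 4},
    ("op3",):        {"seq": 2, "par": 3},
    ("op1", "op2"):  {"seq": 8, "par": 5},
}

def lower_level(groups):
    # lower level: optimize execution modes for a fixed aggregation
    return sum(min(mode_cost[g].values()) for g in groups)

# top level: pick the aggregation variant with the best lower-level cost
best = min(variants, key=lambda v: lower_level(variants[v]))
best_cost = lower_level(variants[best])
```

    Here the "coarse" variant wins (cost 5 + 2 = 7 versus 3 + 4 + 2 = 9), showing how the top-level choice is evaluated only through the lower-level optimum.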

  16. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler's ability to generate loop-parallel code. We use this compilation system to modify two sequential benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...

  17. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics: Type, Kinematics, and Optimal Design presents the results of 15 years of research on parallel mechanisms and parallel kinematics machines. The book covers the systematic classification of parallel mechanisms (PMs) and provides a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also novel concepts and approaches, e.g. type synthesis based on evolution, performance evaluation and optimization based on screw theory, and singularity modeling taking into account motion and force transmissibility. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  18. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; NA NA NA Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  19. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  20. Parallel computation of rotating flows

    DEFF Research Database (Denmark)

    Lundin, Lars Kristian; Barker, Vincent A.; Sørensen, Jens Nørkær

    1999-01-01

    This paper deals with the simulation of 3‐D rotating flows based on the velocity‐vorticity formulation of the Navier‐Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two‐step process. ... is that of solving a singular, large, sparse, over‐determined linear system of equations, and the iterative method CGLS is applied for this purpose. We discuss some of the mathematical and numerical aspects of this procedure and report on the performance of our software on a wide range of parallel computers. ...
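
    The CGLS method named in the abstract, conjugate gradients applied to the normal equations A^T A x = A^T b, can be sketched as follows on a small dense over-determined system (a generic illustration of the algorithm, not the authors' solver; the matrix sizes are invented):

```python
import numpy as np

def cgls(A, b, iters=100, tol=1e-12):
    """CGLS: minimize ||Ax - b||_2 without ever forming A^T A explicitly."""
    x = np.zeros(A.shape[1])
    r = b - A @ x            # residual in the data space
    s = A.T @ r              # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    gamma0 = gamma
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < tol * gamma0:   # normal-equation residual small enough
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# over-determined test system: more equations (rows) than unknowns (columns)
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
x = cgls(A, b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)   # dense reference solution
```

    Each iteration needs only one product with A and one with A^T, which is what makes the method attractive for the large sparse systems described in the abstract; a singular system additionally yields the minimum-norm least-squares solution when started from zero.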