WorldWideScience

Sample records for rightward parallel shift

  1. Improving image quality of parallel phase-shifting digital holography

    International Nuclear Information System (INIS)

    Awatsuji, Yasuhiro; Tahara, Tatsuki; Kaneko, Atsushi; Koyama, Takamasa; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2008-01-01

The authors propose parallel two-step phase-shifting digital holography to improve the image quality of parallel phase-shifting digital holography. The proposed technique doubles the effective number of hologram pixels in comparison with the conventional parallel four-step technique. This increase in the number of pixels makes it possible to improve the quality of the image reconstructed by parallel phase-shifting digital holography. A numerical simulation and a preliminary experiment of the proposed technique were conducted, and the effectiveness of the technique was confirmed. The proposed technique is more practical than the conventional parallel phase-shifting digital holography, because the digital holographic system based on the proposed technique has a simpler composition.

  2. Hybrid parallel computing architecture for multiview phase shifting

    Science.gov (United States)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

The multiview phase-shifting method shows its powerful capability in achieving high-resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability comes at very high computation cost, and the 3-D computations have to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit cooperates with the graphics processing unit (GPU) to achieve hybrid parallel computing. The computationally expensive procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented on the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained with an NVIDIA GT560Ti graphics card relative to a sequential C implementation on a 3.4 GHz Intel Core i7 3770.

  3. Competitive action video game players display rightward error bias during on-line video game play.

    Science.gov (United States)

    Roebuck, Andrew J; Dubnyk, Aurora J B; Cochran, David; Mandryk, Regan L; Howland, John G; Harms, Victoria

    2017-09-12

Research in asymmetrical visuospatial attention has identified a leftward bias in the general population across a variety of measures, including visual attention and line-bisection tasks. In addition, increases in rightward collisions, or bumping, during visuospatial navigation tasks have been demonstrated in real-world and virtual environments. However, little research has investigated these biases beyond the laboratory. The present study uses a semi-naturalistic approach and the online video game streaming service Twitch to examine navigational errors and assaults as skilled action video game players (n = 60) compete in Counter Strike: Global Offensive. This study showed a significant rightward bias in both fatal assaults and navigational errors. Analysis using the in-game ranking system as a measure of skill failed to show a relationship between bias and skill. These results suggest that a leftward visuospatial bias may exist in skilled players during online video game play. However, the present study was unable to account for some factors, such as environmental symmetry and player handedness. In conclusion, video game streaming is a promising method for behavioural research in the future; however, further study is required before one can determine whether these results are an artefact of the method applied or representative of a genuine rightward bias.

  4. Parallel phase-shifting digital holography based on the fractional Talbot effect

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Leon, Lluis; Climent, Vicent; Lancis, Jesus; Tajahuerce, Enrique [GROC-UJI, Departament de Fisica, Universitat Jaume I, 12071 Castello (Spain); Araiza-E, Maria [Laboratorio de Procesamiento Digital de Senales, Universidad Autonoma de Zacatecas, Zacatecas (Mexico); Javidi, Bahram [Department of Electrical and Computer Engineering, University of Connecticut, CT 06269-2157 (United States); Andres, Pedro, E-mail: enrique.tajahuerce@uji.e [Departament d' Optica, Universitat de Valencia, 46100 Burjassot (Spain)

    2010-02-01

    A method for recording on-axis single-shot digital holograms based on the self-imaging phenomenon is reported. A simple binary two-dimensional periodic amplitude is used to codify the reference beam in a Mach-Zehnder interferometer, generating a periodic three-step phase distribution with uniform irradiance over the sensor plane by fractional Talbot effect. An image sensor records only one shot of the interference between the light field scattered by the object and the codified parallel reference beam. Images of the object are digitally reconstructed from the digital hologram through the numerical evaluation of the Fresnel diffraction integral. This scheme provides an efficient way to perform dynamic phase-shifting interferometric techniques to determine the amplitude and phase of the object light field. Unlike other parallel phase-shifting techniques, neither complex pixelated polarization devices nor special phase diffractive elements are required. Experimental results confirm the feasibility and flexibility of our method.

  5. Mechanical origins of rightward torsion in early chick brain development

    Science.gov (United States)

    Chen, Zi; Guo, Qiaohang; Dai, Eric; Taber, Larry

    2015-03-01

During early development, the neural tube of the chick embryo undergoes a combination of progressive ventral bending and rightward torsion. This torsional deformation is one of the major organ-level left-right asymmetry events in development. Previous studies suggested that bending is mainly due to differential growth; however, the mechanism for torsion remains poorly understood. Since the heart almost always loops rightward, in the same direction in which the brain twists, researchers have speculated that heart looping affects the direction of brain torsion. However, direct evidence is lacking, and the mechanical origin of such torsion is not understood. In our study, experimental perturbations show that the bending and torsional deformations in the brain are coupled and that the vitelline membrane applies an external load necessary for torsion to occur. Moreover, the asymmetry of the looping heart gives rise to the chirality of the twisted brain. A computational model and a 3D-printed physical model are employed to help interpret these findings. Our work clarifies the mechanical origins of brain torsion and the associated left-right asymmetry, and further reveals that asymmetric development in one organ can induce the asymmetry of another developing organ through mechanics, reminiscent of D'Arcy Thompson's view of biological form as a "diagram of forces". Z.C. is supported by the Society in Science - Branco Weiss fellowship, administered by ETH Zurich. L.A.T acknowledges the support from NIH Grants R01 GM075200 and R01 NS070918.

  6. Strong rightward lateralization of the dorsal attentional network in left-handers with right sighting-eye: an evolutionary advantage.

    Science.gov (United States)

    Petit, Laurent; Zago, Laure; Mellet, Emmanuel; Jobard, Gaël; Crivello, Fabrice; Joliot, Marc; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie

    2015-03-01

Hemispheric lateralization for spatial attention and its relationships with manual preference strength and eye preference were studied in a sample of 293 healthy individuals balanced for manual preference. Functional magnetic resonance imaging was used to map this large sample while performing visually guided saccadic eye movements. This activated a bilateral distributed cortico-subcortical network in which dorsal and ventral attentional/saccadic pathways elicited rightward asymmetrical activation depending on manual preference strength and sighting eye. While the ventral pathway showed a strong rightward asymmetry irrespective of both manual preference strength and eye preference, the dorsal frontoparietal network showed a robust rightward asymmetry in strong left-handers, even more pronounced in left-handed subjects with a right sighting-eye. Our findings bring support to the hypothesis that the rightward hemispheric dominance for spatial attention may have a manipulo-spatial origin, neither perceptual nor motor per se, but rather reflecting a mechanism by which a spatial context is mapped onto the perceptual and motor activities, including the exploration of the spatial environment with eyes and hands. Within this context, strong left-handers with a right sighting-eye may benefit from the advantage of having the same right-hemispheric control of their dominant hand and of visuospatial attention processing. We suggest that this phenomenon explains why left-handed, right sighting-eye athletes can outperform their competitors in sporting duels, and that the prehistoric and historical constancy of the proportion of left-handers in the general population may relate in part to the hemispheric specialization of spatial attention. © 2014 Wiley Periodicals, Inc.

  7. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    International Nuclear Information System (INIS)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-01-01

This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction, such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  8. Pupil dilations reflect why Rembrandt biased female portraits leftward and males rightward

    Directory of Open Access Journals (Sweden)

    James A Schirillo

    2014-01-01

Portrait painters are experts at examining faces, and since emotional content may be expressed differently on each side of the face, it is notable that Rembrandt biased his male portraits to show the right cheek more often and his female portraits to show the left cheek more often. This raises questions regarding the emotional significance of such biased positions. I presented rightward- and leftward-facing male and female portraits. I measured observers' pupil size while asking them to report how (dis)pleasing they found each image. This was a methodological improvement over the type of research initially done by Eckhard Hess, who claimed that pupils dilate to pleasant images and constrict to unpleasant images. His work was confounded because his images' luminances and contrasts were inconsistent across conditions, potentially affecting pupil size. To overcome this limitation, I presented rightward- or leftward-facing male and female portraits by Rembrandt to observers in either their original or mirror-reversed position. I found that, in viewing male portraits, pupil diameter was a function of arousal; that is, larger pupil diameters occurred for images rated both low and high in pleasantness. This was not the case with female portraits. I discuss these findings in regard to the perceived dominance of males and how emotional expressions may be driven by hemispheric laterality.

  9. Determination of accurate 1H positions of an alanine tripeptide with anti-parallel and parallel β-sheet structures by high resolution 1H solid state NMR and GIPAW chemical shift calculation.

    Science.gov (United States)

    Yazawa, Koji; Suzuki, Furitsu; Nishiyama, Yusuke; Ohata, Takuya; Aoki, Akihiro; Nishimura, Katsuyuki; Kaji, Hironori; Shimizu, Tadashi; Asakura, Tetsuo

    2012-11-25

The accurate 1H positions of an alanine tripeptide, A3, with anti-parallel and parallel β-sheet structures could be determined from highly resolved 1H DQMAS solid-state NMR spectra and 1H chemical shift calculations using the gauge-including projector augmented wave (GIPAW) method.

  10. Research on Gear Shifting Process without Disengaging Clutch for a Parallel Hybrid Electric Vehicle Equipped with AMT

    Directory of Open Access Journals (Sweden)

    Hui-Long Yu

    2014-01-01

Dynamic models of a single-shaft parallel hybrid electric vehicle (HEV) equipped with an automated mechanical transmission (AMT) are described for the different working stages of a gear shifting process carried out without disengaging the clutch. Parameters affecting the gear shifting time, component life, and gear shifting jerk in the different transient states of a gear shift are analyzed in depth. Mathematical models considering the detailed synchronizer working process, which can explain the gear shifting failures, long shifting times, and frequent synchronizer failures observed in HEVs, are derived. A dynamic coordinated control strategy for the engine, motor, and actuators in the different transient states, taking into account the detailed working stages of the synchronizer during a gear shift of an HEV, is proposed for the first time with respect to the state-of-the-art references. Bench tests and real road tests show that the proposed control strategy can significantly improve gear shifting quality in all of its evaluation indexes.

  11. Hardware system of parallel processing for fast CT image reconstruction based on circular shifting float memory architecture

    International Nuclear Information System (INIS)

    Wang Shi; Kang Kejun; Wang Jingjin

    1995-01-01

Computerized Tomography (CT) is expected to become an indispensable diagnostic technique in the future. However, the long time required to reconstruct an image has been one of the major drawbacks associated with this technique. Parallel processing is one of the best ways to solve this problem. This paper gives the architecture and hardware design of PIRS-4 (4-processor Parallel Image Reconstruction System), which is a parallel processing system for fast 3D-CT image reconstruction based on a circular shifting float memory architecture. It covers the structure and components of the system, the design of the crossbar switch and details of the control model. The test results are described.

  12. A comparison of temporal, spatial and parallel phase shifting algorithms for digital image plane holography

    International Nuclear Information System (INIS)

    Arroyo, M P; Lobera, J

    2008-01-01

This paper investigates the performance of several phase shifting (PS) techniques when digital image plane holography (DIPH) is used as a fluid velocimetry technique. The main focus is on increasing the recording system aperture in order to overcome the limitation imposed by the little light available in fluid applications. Experiments with small rotations of a fluid-like solid object have been used to test the ability of PS-DIPH to faithfully reconstruct the object complex amplitude. Holograms for several apertures and for different defocusing distances have been recorded using spatial phase shifting (SPS) or temporal phase shifting (TPS) techniques. The parallel phase-shifted holograms (H_PPS) have been generated from the TPS holograms (H_TPS). The data obtained from TPS-DIPH have been taken as the true object complex amplitude, which is used to benchmark that recovered using the other techniques. The findings of this work show that SPS and PPS are indeed very similar, and suggest that both can work for bigger apertures yet retain phase information.

  13. Shifting Control Algorithm for a Single-Axle Parallel Plug-In Hybrid Electric Bus Equipped with EMT

    Directory of Open Access Journals (Sweden)

    Yunyun Yang

    2014-01-01

Exploiting the fast response of the electric motor, an electric-drive automated mechanical transmission (EMT) is proposed as a novel type of transmission in this paper. Replacing the friction-synchronization shifting of the automated manual transmission (AMT) used in HEVs, the EMT can achieve active speed synchronization during shifting. The dynamic model of a single-axle parallel PHEV equipped with the EMT is built up, and the dynamic properties of the gearshift process are described. In addition, a control algorithm is developed to improve the shifting quality of the PHEV equipped with the EMT in all of its evaluation indexes. The key techniques of changing the driving-force gradient in the preshifting and shift-compensation phases, as well as of predicting the meshing speed in the gear-meshing phase, are also proposed. Results of simulation, bench tests, and real road tests demonstrate that the proposed control algorithm can noticeably reduce the gearshift jerk and the power interruption time.

  14. The parallel processing system for fast 3D-CT image reconstruction by circular shifting float memory architecture

    International Nuclear Information System (INIS)

    Wang Shi; Kang Kejun; Wang Jingjin

    1996-01-01

Computerized Tomography (CT) is expected to become an indispensable diagnostic technique in the future. However, the long time required to reconstruct an image has been one of the major drawbacks associated with this technique. Parallel processing is one of the best ways to solve this problem. This paper gives the architecture, hardware and software design of PIRS-4 (4-processor Parallel Image Reconstruction System), which is a parallel processing system for fast 3D-CT image reconstruction based on a circular shifting float memory architecture. It covers the structure and components of the system, the design of the crossbar switch and details of the control model, the description of RPBP image reconstruction, the choice of operating system (OS) and language, the principle of emulating EMS, direct memory read/write of floating-point data, and programming in protected mode. Finally, the test results are given.

  15. Optical path difference measurements with a two-step parallel phase shifting interferometer based on a modified Michelson configuration

    Science.gov (United States)

    Toto-Arellano, Noel Ivan; Serrano-Garcia, David I.; Rodriguez-Zurita, Gustavo

    2017-09-01

We report an optical implementation of a parallel phase-shifting quasi-common-path interferometer that uses two modified Michelson interferometers to generate two interferograms. By using a displaceable polarizer array placed in the image plane, we can obtain four phase-shifted interferograms in two captures. The system operates as a quasi-common-path interferometer generating four beams, which are made to interfere by means of alignment procedures on the mirrors of the Michelson configurations. The optical phase data are retrieved using the well-known four-step algorithm. To illustrate the capabilities of the system, experimental results obtained from transparent structures are presented.
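
For reference, the following is a minimal numerical sketch of the four-step phase-retrieval formula mentioned above, written in Python/NumPy; the synthetic test object, the frame model and the function names are illustrative assumptions, not details taken from this record.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Classic four-step phase-shifting formula.

    i0..i3 are interferograms recorded with reference-phase shifts of
    0, pi/2, pi and 3*pi/2 (one common sign convention; other texts
    differ only in the ordering of the differences). Returns the
    wrapped optical phase in (-pi, pi].
    """
    return np.arctan2(i3 - i1, i0 - i2)

# toy usage with a synthetic, smooth phase object
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
phi_true = 2.0 * np.exp(-(x**2 + y**2) / 0.2)              # assumed test phase
frames = [1.0 + np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_rec = four_step_phase(*frames)
print(np.allclose(phi_rec, phi_true, atol=1e-6))            # True
```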

  16. Shifting brain asymmetry: the link between meditation and structural lateralization.

    Science.gov (United States)

    Kurth, Florian; MacKenzie-Graham, Allan; Toga, Arthur W; Luders, Eileen

    2015-01-01

Previous studies have revealed increased fractional anisotropy and greater thickness in the anterior parts of the corpus callosum in meditation practitioners compared with control subjects. Altered callosal features may be associated with altered inter-hemispheric integration, and the degree of brain asymmetry may also be shifted in meditation practitioners. Therefore, we investigated differences in gray matter asymmetry as well as correlations between gray matter asymmetry and years of meditation practice in 50 long-term meditators and 50 controls. We detected a decreased rightward asymmetry in the precuneus in meditators compared with controls. In addition, we observed that a stronger leftward asymmetry near the posterior intraparietal sulcus was positively associated with the number of meditation practice years. In a further exploratory analysis, we observed that a stronger rightward asymmetry in the pregenual cingulate cortex was negatively associated with the number of practice years. The group difference within the precuneus, as well as the correlation with meditation years in the pregenual cingulate cortex, suggests an adaptation of the default mode network in meditators. The positive correlation between meditation practice years and asymmetry near the posterior intraparietal sulcus may suggest that meditation is accompanied by changes in attention processing. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  17. [Experiences in short-term dialyses with 2 capillary dialyzers in parallel and serial circuit in 2 and 3 shift operations].

    Science.gov (United States)

    Gerhardt, W; Krohs, G

    1976-02-01

We report on experience with 200 short-term dialyses, each performed with 2 capillary dialyzers. 167 dialyses, of which 112 were performed with the dialyzers arranged in parallel and 55 with them arranged in series, are analysed in detail. In dialysis effectiveness the two variants proved to be nearly equivalent, with the series connection offering practical advantages. With two nursing shifts, up to 3 shifts of patients could be treated. In large-area dialysis the dialysate must be adapted and the patients monitored more intensively. Advantages and disadvantages of this method are discussed.

  18. Comparison of microbial community shifts in two parallel multi-step drinking water treatment processes.

    Science.gov (United States)

    Xu, Jiajiong; Tang, Wei; Ma, Jun; Wang, Hong

    2017-07-01

Drinking water treatment processes remove undesirable chemicals and microorganisms from source water, which is vital to public health protection. The purpose of this study was to investigate the effects of treatment processes and configuration on the microbiome by comparing microbial community shifts in two series of different treatment processes operated in parallel within a full-scale drinking water treatment plant (DWTP) in Southeast China. Illumina sequencing of 16S rRNA genes of water samples demonstrated little effect of the coagulation/sedimentation and pre-oxidation steps on bacterial communities, in contrast to dramatic and concurrent microbial community shifts during ozonation, granular activated carbon treatment, sand filtration, and disinfection for both series. A large number of unique operational taxonomic units (OTUs) at these four treatment steps further illustrated their strong shaping power over the drinking water microbial communities. Interestingly, multidimensional scaling analysis revealed tight clustering of biofilm samples collected from different treatment steps, with Nitrospira, a nitrite-oxidizing bacterium, noted at higher relative abundances in biofilm compared to water samples. Overall, this study provides a snapshot of step-to-step microbial evolution in multi-step drinking water treatment systems, and the results provide insight into the control and manipulation of the drinking water microbiome via optimization of DWTP design and operation.

  19. Adapting algorithms to massively parallel hardware

    CERN Document Server

    Sioulas, Panagiotis

    2016-01-01

    In the recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming in which applications are programmed to exploit the power provided by multi-cores. Usually there is gain in terms of the time-to-solution and the memory footprint. Specifically, this trend has sparked an interest towards massively parallel systems that can provide a large number of processors, and possibly computing nodes, as in the GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.

  20. Parallel computing in plasma physics: Nonlinear instabilities

    International Nuclear Information System (INIS)

    Pohn, E.; Kamelander, G.; Shoucri, M.

    2000-01-01

A Vlasov-Poisson system is used for studying the time evolution of the charge separation at a spatially one- as well as two-dimensional plasma edge. Ions are advanced in time using the Vlasov equation. The whole three-dimensional velocity space is considered, leading to very time-consuming four- and five-dimensional fully kinetic simulations, respectively. In the 1D simulations the electrons are assumed to behave adiabatically, i.e. they are Boltzmann-distributed, leading to a nonlinear Poisson equation. In the 2D simulations a gyro-kinetic approximation is used for the electrons. The plasma is assumed to be initially neutral. The simulations are performed on an equidistant grid. A constant time step is used for advancing the density distribution function in time. The time evolution of the distribution function is performed using a splitting scheme. Each dimension (x, y, υx, υy, υz) of the phase space is advanced in time separately. The value of the distribution function at the next time step is calculated from the value at an, in general, interstitial point at the present time (fractional shift). One-dimensional cubic-spline interpolation is used for calculating the interstitial function values. After the fractional shifts have been performed for each dimension of the phase space, a whole time step for advancing the distribution function is finished. Afterwards the charge density is calculated, the Poisson equation is solved and the electric field is calculated before the next time step is performed. The fractional shift method sketched above was parallelized for p processors as follows. Considering first the shifts in the y-direction, a proper parallelization strategy is to split the grid into p disjoint υz-slices, which are sub-grids, each containing a different 1/p-th part of the υz range but the whole range of all other dimensions. Each processor is responsible for performing the y-shifts on a different slice, which can be done in parallel without any communication between
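
As a concrete illustration of the fractional-shift step described above, here is a minimal one-dimensional sketch in Python/NumPy using SciPy's cubic spline; the periodic boundary condition, grid and function names are assumptions made for the example, not details taken from the record.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fractional_shift(f, delta, x):
    """One 1-D fractional-shift step of a splitting scheme.

    f:     distribution-function values on the equidistant grid x
    delta: shift (same units as x), in general not a multiple of the spacing
    The new value at each grid point is taken from the interstitial point
    x - delta at the present time, via cubic-spline interpolation.
    Periodic boundaries are assumed here purely for illustration.
    """
    spline = CubicSpline(x, f, bc_type='periodic')
    length = x[-1] - x[0]
    return spline(x[0] + np.mod(x - delta - x[0], length))

# toy usage: advect a Gaussian bump by a non-integer number of cells
x = np.linspace(0.0, 1.0, 129)               # equidistant grid (periodic)
f = np.exp(-((x - 0.3) / 0.05) ** 2)
f[-1] = f[0]                                  # enforce periodicity for the spline
f_new = fractional_shift(f, 0.0137, x)
print(abs(x[np.argmax(f_new)] - (0.3 + 0.0137)) < 0.01)   # True
```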

  1. Efficient multitasking: parallel versus serial processing of multiple tasks.

    Science.gov (United States)

    Fischer, Rico; Plessow, Franziska

    2015-01-01

In the context of performance optimization in multitasking, a central debate has unfolded around whether cognitive processes related to different tasks proceed only sequentially (one at a time) or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their use in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected in the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.

  2. INVESTIGATION OF FLIP-FLOP PERFORMANCE ON DIFFERENT TYPE AND ARCHITECTURE IN SHIFT REGISTER WITH PARALLEL LOAD APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Dwi Purnomo

    2015-08-01

Registers are among the computer components that play a key role in computer organisation. Every computer contains millions of registers, which are implemented with flip-flops. This research focuses on the investigation of flip-flop performance based on type (D, T, S-R, and J-K) and architecture (structural, behavioural, and hybrid). Each type of flip-flop in each architecture was tested in shift registers with parallel load of different bit widths. The criteria assessed in the experiment are power consumption, required resources, required memory, latency, and efficiency. Based on the experiment, the D flip-flop and the hybrid architecture showed the best performance in required memory, latency, power consumption, and efficiency. In addition, the experimental results showed that the greater the number of register bits, the less efficient the system becomes.

  3. Three-dimensional motion-picture imaging of dynamic object by parallel-phase-shifting digital holographic microscopy using an inverted magnification optical system

    Science.gov (United States)

    Fukuda, Takahito; Shinomura, Masato; Xia, Peng; Awatsuji, Yasuhiro; Nishio, Kenzo; Matoba, Osamu

    2017-04-01

    We constructed a parallel-phase-shifting digital holographic microscopy (PPSDHM) system using an inverted magnification optical system, and succeeded in three-dimensional (3D) motion-picture imaging for 3D displacement of a microscopic object. In the PPSDHM system, the inverted and afocal magnification optical system consisted of a microscope objective (16.56 mm focal length and 0.25 numerical aperture) and a convex lens (300 mm focal length and 82 mm aperture diameter). A polarization-imaging camera was used to record multiple phase-shifted holograms with a single-shot exposure. We recorded an alum crystal, sinking down in aqueous solution of alum, by the constructed PPSDHM system at 60 frames/s for about 20 s and reconstructed high-quality 3D motion-picture image of the crystal. Then, we calculated amounts of displacement of the crystal from the amounts in the focus plane and the magnifications of the magnification optical system, and obtained the 3D trajectory of the crystal by that amounts.

  4. The specificity of learned parallelism in dual-memory retrieval.

    Science.gov (United States)

    Strobach, Tilo; Schubert, Torsten; Pashler, Harold; Rickard, Timothy

    2014-05-01

    Retrieval of two responses from one visually presented cue occurs sequentially at the outset of dual-retrieval practice. Exclusively for subjects who adopt a mode of grouping (i.e., synchronizing) their response execution, however, reaction times after dual-retrieval practice indicate a shift to learned retrieval parallelism (e.g., Nino & Rickard, in Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 373-388, 2003). In the present study, we investigated how this learned parallelism is achieved and why it appears to occur only for subjects who group their responses. Two main accounts were considered: a task-level versus a cue-level account. The task-level account assumes that learned retrieval parallelism occurs at the level of the task as a whole and is not limited to practiced cues. Grouping response execution may thus promote a general shift to parallel retrieval following practice. The cue-level account states that learned retrieval parallelism is specific to practiced cues. This type of parallelism may result from cue-specific response chunking that occurs uniquely as a consequence of grouped response execution. The results of two experiments favored the second account and were best interpreted in terms of a structural bottleneck model.

  5. No effect of pinealectomy on the parallel shift in circadian rhythms of adrenocortical activity and food intake in blinded rats.

    Science.gov (United States)

    Takahashi, K; Inoue, K; Takahashi, Y

    1976-10-01

    Twenty-four-hr patterns of plasma corticosterone levels were determined at 4-hr intervals every 3-4 weeks in sighted and blinded pinealectomized rats of adult age. Through the whole period of the experiment, 24-hr patterns of food intake were also measured weekly. The sighted rats manifested the same 24-hr patterns of plasma corticosterone levels and food intake for 15 weeks after pinealectomy as those observed in the intact control rats. The magnitude of peak levels of plasma corticosterone and the amount of food intake did not differ between the two groups. A phase shift in circadian rhythms of plasma corticosterone levels and food intake was observed in both groups of blinded rats, with and without pinealectomy. Between the two groups, the patterns of phase shift were essentially similar for 10 weeks examined after optic enucleation. The peak elevation of plasma levels took place at 11 p.m. at the end of the 4th week after optic enucleation. Thereafter, 4- to 8-hr delay of peak appearance was observed every 3 weeks. No significant differences were found in peak values between the two groups of blinded rats. Furthermore, the circadian rhythm of food intake shifted in parallel with that of plasma corticosterone levels. A phase reversal of these two activities was observed between the 8th and 10th week after the operation. These results indicate that the pineal gland does not play any important role either in the maintenance of normal circadian periodicities of adrenocortical activity and food intake or in the shift in circadian rhythms of the two activities in the blinded rats.

  6. Single-shot femtosecond-pulsed phase-shifting digital holography.

    Science.gov (United States)

    Kakue, Takashi; Itoh, Seiya; Xia, Peng; Tahara, Tatsuki; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2012-08-27

Parallel phase-shifting digital holography is capable of three-dimensional measurement of a dynamically moving object with a single-shot recording. In this letter, we demonstrate parallel phase-shifting digital holography using a single femtosecond light pulse whose central wavelength and temporal duration were 800 nm and 96 fs, respectively. As the object, we used a spark discharge in atmospheric-pressure air induced by applying a high voltage between two electrodes. The instantaneous change in phase caused by the spark discharge was clearly reconstructed. The reconstructed phase image shows that the change in the refractive index of air was -3.7 × 10⁻⁴.

  7. Signaling a Change of Heart

    DEFF Research Database (Denmark)

    Schumacher, Gijs

    2011-01-01

    introduced welfare state retrenchment measures. Social Democrats can win votes and join coalitions by shifting rightwards. In contrast, they can pursue policy objectives by shifting leftwards. To communicate these shifts, in other words, ‘changes of heart’, parties send signals to voters and other parties...... after having signalled ‘a change of heart’....

  8. Parallel computing and networking; Heiretsu keisanki to network

    Energy Technology Data Exchange (ETDEWEB)

    Asakawa, E; Tsuru, T [Japan National Oil Corp., Tokyo (Japan); Matsuoka, T [Japan Petroleum Exploration Co. Ltd., Tokyo (Japan)

    1996-05-01

This paper describes trends in the parallel computers used in geophysical exploration. Around 1993 marked the early days of parallel computers being used for geophysical exploration. In those days these computers were classified mainly as MIMD (multiple instruction stream, multiple data stream), SIMD (single instruction stream, multiple data stream) and the like. Parallel computers were presented at the 1994 meeting of the Geophysical Exploration Society as a `high precision imaging technology'. Concerning libraries for parallel computers, there was a shift to PVM (parallel virtual machine) in 1993 and to MPI (message passing interface) in 1995. In addition, a FORTRAN90 compiler was released with support for data-parallel and vector computers. In 1993, the networks used were Ethernet, FDDI, CDDI and HIPPI. In 1995, OC-3 products based on ATM began to propagate. However, ATM remains an interoffice high-speed network because ATM service has not yet spread to the public network. 1 ref.

  9. Digital tomosynthesis parallel imaging computational analysis with shift and add and back projection reconstruction algorithms.

    Science.gov (United States)

    Chen, Ying; Balla, Apuroop; Rayford II, Cleveland E; Zhou, Weihua; Fang, Jian; Cong, Linlin

    2010-01-01

Digital tomosynthesis is a novel technology that has been developed for various clinical applications. A parallel imaging configuration is utilised in a few tomosynthesis imaging areas such as digital chest tomosynthesis. Recently, parallel imaging configurations for breast tomosynthesis have begun to appear as well. In this paper, we present an investigation of the computational analysis of impulse response characterisation as the starting point of our research efforts to optimise parallel imaging configurations. Results suggest that impulse response computational analysis is an effective method to compare and optimise imaging configurations.
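
To make the shift-and-add idea named in the title concrete, here is a minimal sketch in Python/NumPy; the geometry, shift values and the impulse test object are illustrative assumptions rather than details from this record.

```python
import numpy as np

def shift_and_add(projections, shifts_px):
    """Minimal shift-and-add tomosynthesis reconstruction of one plane.

    projections: list of 2-D projection images (same shape).
    shifts_px:   per-projection integer pixel shifts along the scan axis
                 for the plane being reconstructed (they depend on the
                 source positions and plane height -- assumed precomputed).
    """
    plane = np.zeros_like(projections[0], dtype=float)
    for img, s in zip(projections, shifts_px):
        plane += np.roll(img, s, axis=1)    # shift along the tube-travel axis
    return plane / len(projections)

# toy usage: an impulse (point object) reconstructs sharply only in the
# plane whose shifts realign it across projections
proj = [np.zeros((64, 64)) for _ in range(9)]
for k, p in enumerate(proj):
    p[32, 32 + (k - 4)] = 1.0               # impulse displaced per projection
in_plane = shift_and_add(proj, [-(k - 4) for k in range(9)])
off_plane = shift_and_add(proj, [0] * 9)
print(in_plane.max(), off_plane.max())      # ~1.0 vs ~0.11
```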

  10. Radio frequency feedback method for parallelized droplet microfluidics

    KAUST Repository

    Conchouso Gonzalez, David

    2016-12-19

This paper reports on a radio frequency micro-strip T-resonator that is integrated to a parallel droplet microfluidic system. The T-resonator works as a feedback system to monitor uniform droplet production and to detect, in real-time, any malfunctions due to channel fouling or clogging. Emulsions at different W/O flow-rate ratios are generated in a microfluidic device containing 8 parallelized generators. These emulsions are then guided towards the RF sensor, which is then read using a Network Analyzer to obtain the frequency response of the system. The proposed T-resonator shows frequency shifts of 45MHz for only 5% change in the emulsion's water in oil content. These shifts can then be used as a feedback system to trigger alarms and notify production and quality control engineers about problems in the droplet generation process.

  11. Radio frequency feedback method for parallelized droplet microfluidics

    KAUST Repository

    Conchouso Gonzalez, David; Carreno, Armando Arpys Arevalo; McKerricher, Garret; Castro, David; Foulds, Ian G.

    2016-01-01

    This paper reports on a radio frequency micro-strip T-resonator that is integrated to a parallel droplet microfluidic system. The T-resonator works as a feedback system to monitor uniform droplet production and to detect, in real-time, any malfunctions due to channel fouling or clogging. Emulsions at different W/O flow-rate ratios are generated in a microfluidic device containing 8 parallelized generators. These emulsions are then guided towards the RF sensor, which is then read using a Network Analyzer to obtain the frequency response of the system. The proposed T-resonator shows frequency shifts of 45MHz for only 5% change in the emulsion's water in oil content. These shifts can then be used as a feedback system to trigger alarms and notify production and quality control engineers about problems in the droplet generation process.

  12. Effect of Phase Shift in Dual-Rail Perfect State Transfer

    International Nuclear Information System (INIS)

    Wang Zhao-Ming; Zhang Zhong-Jun; Gu Yong-Jian

    2014-01-01

    We investigate the effect of phase shift on the perfect state transfer through two parallel one-dimensional ring-shaped spin chains. We find that the total success probability can be significantly enhanced by phase shift control when the communication channel consists of two odd chains. The average time to gain unit success probability is discussed, showing that a proper phase shift can be used to enhance the efficiency of state transmission. (general)

  13. Laterality patterns of brain functional connectivity: gender effects.

    Science.gov (United States)

    Tomasi, Dardo; Volkow, Nora D

    2012-06-01

Lateralization of brain connectivity may be essential for normal brain function and may be sexually dimorphic. Here, we study the laterality patterns of short-range (implicated in functional specialization) and long-range (implicated in functional integration) connectivity and the gender effects on these laterality patterns. Parallel computing was used to quantify short- and long-range functional connectivity densities in 913 healthy subjects. Short-range connectivity was rightward lateralized and most asymmetrical in areas around the lateral sulcus, whereas long-range connectivity was rightward lateralized in the lateral sulcus and leftward lateralized in the inferior prefrontal cortex and angular gyrus. The posterior inferior occipital cortex was leftward lateralized (short- and long-range connectivity). Males had greater rightward lateralization of brain connectivity in superior temporal (short- and long-range), inferior frontal, and inferior occipital cortices (short-range), whereas females had greater leftward lateralization of long-range connectivity in the inferior frontal cortex. The greater lateralization of the male brain (rightward and predominantly short-range) may underlie males' greater vulnerability to disorders with disrupted brain asymmetries (schizophrenia, autism).

  14. Dynamic shifting in thalamocortical processing during different behavioural states.

    OpenAIRE

    Nicolelis, Miguel A L; Fanselow, Erika E

    2002-01-01

    Recent experiments in our laboratory have indicated that as rats shift the behavioural strategy employed to explore their surrounding environment, there is a parallel change in the physiological properties of the neuronal ensembles that define the main thalamocortical loop of the trigeminal somatosensory system. Based on experimental evidence from several laboratories, we propose that this concurrent shift in behavioural strategy and thalamocortical physiological properties provides rats with...

  15. An anthropologist in parallel structure

    Directory of Open Access Journals (Sweden)

    Noelle Molé Liston

    2016-08-01

The essay examines the parallels between Molé Liston's studies of labor and precarity in Italy and the United States' anthropology job market. Probing the way economic shifts reshaped the field of the anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value of studying the hardships and daily lives of non-western populations in Europe.

  16. A simple image-reject mixer based on two parallel phase modulators

    Science.gov (United States)

    Hu, Dapeng; Zhao, Shanghong; Zhu, Zihang; Li, Xuan; Qu, Kun; Lin, Tao; Zhang, Kun

    2018-02-01

A simple photonic microwave image-reject mixer (IRM) using two parallel phase modulators is proposed. First, a photonic microwave mixer with phase-shifting ability is realised using two parallel phase modulators (PMs), an optical bandpass filter, three polarization controllers, three polarization beam splitters and two balanced photodetectors. At the output of the mixer, two frequency-downconverted signals with a tunable phase difference can be obtained. By setting the phase difference to 90° and utilizing an electrical 90° hybrid, the unwanted components can be eliminated and image-reject operation is realized. The key advantage of the proposed scheme is the use of PMs, which avoids the DC-bias shifting problem and makes the system simple and stable. A simulation is performed to verify the proposed scheme: a relative -90° or 90° phase shift can be obtained between the two output ports of the photonic microwave mixer, and at the output of the IRM a 60 dB image-rejection ratio is obtained.
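
As background, the image-rejection principle this abstract relies on (two branches mixed 90° apart and recombined through a 90° hybrid so that the image sideband cancels) can be illustrated with a simple baseband numerical model. This is a generic Python/NumPy/SciPy sketch, not the photonic implementation of the paper, and all frequencies and names are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

fs, f_lo, f_if = 1.0e6, 100.0e3, 10.0e3       # assumed illustrative frequencies
t = np.arange(int(0.02 * fs)) / fs            # 20 ms record (integer cycles)

def lowpass(x, cutoff):
    # crude brick-wall low-pass filter via FFT masking
    spectrum = np.fft.rfft(x)
    spectrum[np.fft.rfftfreq(len(x), 1 / fs) > cutoff] = 0.0
    return np.fft.irfft(spectrum, len(x))

def irm_output(rf):
    # two branches mixed with LO components 90 degrees apart
    i_if = lowpass(rf * np.cos(2 * np.pi * f_lo * t), 3 * f_if)
    q_if = lowpass(rf * np.sin(2 * np.pi * f_lo * t), 3 * f_if)
    # electrical 90-degree hybrid: add the Q branch shifted by a further 90 deg
    return i_if + np.imag(hilbert(q_if))

desired = np.cos(2 * np.pi * (f_lo + f_if) * t)   # wanted sideband
image = np.cos(2 * np.pi * (f_lo - f_if) * t)     # image sideband
p_sig = np.mean(irm_output(desired) ** 2)
p_img = np.mean(irm_output(image) ** 2)
# the ratio is limited only by numerical precision in this idealised model
print("image rejection: %.1f dB" % (10 * np.log10(p_sig / p_img)))
```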

  17. Enhanced Phase-Shifted Current Control for Harmonic Cancellation in Three-Phase Multiple Adjustable Speed Drive Systems

    DEFF Research Database (Denmark)

    Yang, Yongheng; Davari, Pooya; Zare, Firuz

    2017-01-01

A phase-shifted current control can be employed to mitigate certain harmonics induced by the Diode Rectifiers (DR) and Silicon-Controlled Rectifiers (SCR) as the front-ends of multiple parallel Adjustable Speed Drive (ASD) systems. However, the effectiveness of the phase-shifted control relies on the loading condition of each drive unit as well as the number of drives in parallel. In order to enhance the harmonic cancellation by means of the phase-shifted current control, the currents drawn by the rectifiers should be maintained almost at the same level. Thus, this paper firstly analyzes the impact of unequal loading among the parallel drives, and a scheme to enhance the performance is introduced to improve the quality of the total grid current, where partial loading operation should be enabled. Simulation and experimental case studies on multidrive systems have demonstrated that the enhanced phase...

  18. Rightward shift in temporal order judgements in the wake of the attentional blink

    Directory of Open Access Journals (Sweden)

    Mitchell Valdés-Sosa

    2008-01-01

Rightward shift in temporal order judgements during the attentional blink. The temporal order of two events, each presented in a different visual hemifield, can be judged correctly by typical observers even when the time difference between the presentations is very small. The present work analyses the influence of an endogenous process on temporal order judgement (TOJ) and shows that the perception of temporal order is also affected when the available attentional resources are reduced by means of an attentional blink (AB) paradigm. Participants were presented with the following stimuli: a first visual stimulus (T1) at the fixation point and, after a variable time interval (280 or 1030 ms), a pair of lateralized stimuli (T2). For the dual task with the 280 ms interval between T1 and T2, accuracy in the TOJ deteriorated, evidencing an AB. However, during the AB, instead of the asymmetry favouring the left side, a significant bias against that side appeared.

  19. Universal shift register implementation using quantum dot cellular automata

    Directory of Open Access Journals (Sweden)

    Tamoghna Purkayastha

    2018-06-01

Quantum-dot Cellular Automata (QCA) is considered a promising alternative to CMOS for ultra-large-scale circuit integration. Arithmetic and logic unit designs using QCA are of high research interest. Layouts of four- and eight-bit universal shift registers (USR) are proposed. Initially, QCA layouts of a D flip-flop with clear and a 4-to-1 multiplexer are designed, which are then extended to design 4- and 8-bit parallel-in parallel-out (PIPO) shift registers. Finally, the PIPO design is utilized to build the 4-bit and 8-bit USR. Comparative analysis shows that the proposed D flip-flop achieves a 40% clock-delay improvement, whereas the modified layout of the 4-to-1 multiplexer achieves a 30% cell-count reduction and a 17% clock-delay reduction compared with previous works. This results in a 31% reduction in cell count, a 45% reduction in area and a 55% reduction in clock-cycle delay in the 8-bit USR layout.
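
For readers unfamiliar with the logic being laid out in QCA here, the following is a minimal behavioural sketch of a universal shift register (hold, shift-right, shift-left and parallel load, selected per bit by a 4-to-1 multiplexer feeding a D flip-flop). It is written in Python purely for illustration; the mode encoding and names are assumptions, not the paper's QCA design.

```python
HOLD, SHIFT_RIGHT, SHIFT_LEFT, LOAD = 0, 1, 2, 3   # assumed mode encoding

def usr_step(q, mode, parallel_in=None, ser_left=0, ser_right=0):
    """One clock edge of an n-bit universal shift register.

    q: current register state as a list of bits (index 0 = MSB here).
    """
    if mode == HOLD:
        return list(q)
    if mode == SHIFT_RIGHT:                 # MSB side takes the serial input
        return [ser_left] + q[:-1]
    if mode == SHIFT_LEFT:                  # LSB side takes the serial input
        return q[1:] + [ser_right]
    if mode == LOAD:
        return list(parallel_in)
    raise ValueError("unknown mode")

# toy usage for a 4-bit register
state = [0, 0, 0, 0]
state = usr_step(state, LOAD, parallel_in=[1, 0, 1, 1])
state = usr_step(state, SHIFT_RIGHT, ser_left=0)    # -> [0, 1, 0, 1]
state = usr_step(state, SHIFT_LEFT, ser_right=1)    # -> [1, 0, 1, 1]
print(state)
```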

  20. Anatomical sites of colorectal cancer in a Semi-Urban Nigerian ...

    African Journals Online (AJOL)

region from big urban cities have shown that the incidence of colorectal cancer is rising, with a proportionate rightward shift. Objective: To assess the sub-site distribution and surgical treatment patterns of colorectal cancer in a semi-urban ...

  1. Differences in cortisol profiles and circadian adjustment time between nurses working night shifts and regular day shifts: A prospective longitudinal study.

    Science.gov (United States)

    Niu, Shu-Fen; Chung, Min-Huey; Chu, Hsin; Tsai, Jui-Chen; Lin, Chun-Chieh; Liao, Yuan-Mei; Ou, Keng-Liang; O'Brien, Anthony Paul; Chou, Kuei-Ru

    2015-07-01

This study explored the differences in circadian salivary cortisol profiles between nurses working night shifts and those working regular day shifts on a slowly rotating shift schedule, in order to assess the number of days required for nurses working consecutive night shifts to adjust the circadian rhythm of their salivary cortisol levels and the number of days off required to restore the diurnal circadian rhythm of salivary cortisol levels. This was a prospective, longitudinal, parallel-group comparative study. The participants were randomly assigned to night-shift and day-shift groups, and saliva samples were collected to measure their cortisol levels and circadian secretion patterns. Significant differences were observed between the two groups in the overall salivary cortisol pattern parameters (cortisol awakening response, changes in cortisol profiles between 6 and 12 h after awakening, and changes in cortisol profiles between 30 min and 12 h after awakening) from Days 2 to 4 of the workdays. However, on Day 2 of the days off, both groups exhibited similar cortisol profiles and the cortisol profiles in the night-shift group were restored. Nurses working night shifts require at least 4 days to adjust the circadian rhythms of their cortisol secretion. Moreover, on changing from night shifts to other shifts, nurses must be allowed more than 2 days off work. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Effect of gear shift and engine start losses on control strategies for hybrid electric vehicles

    NARCIS (Netherlands)

    Ngo, V.; Hofman, T.; Steinbuch, M.; Serrarens, A.

    2012-01-01

In this paper, energetic loss models for the events of gear shifting and engine starting in a parallel Hybrid Electric Vehicle equipped with an Automated Manual Transmission (AMT) are introduced. The optimal control algorithm for the start-stop, power split and gear shift problem based on Dynamic

  3. How to detect the gravitationally induced phase shift of electromagnetic waves by optical-fiber interferometry

    International Nuclear Information System (INIS)

    Tanaka, K.

    1983-01-01

Attention is called to a laboratory experiment with an optical-fiber interferometer which can show the gravitationally induced phase shift of optical waves. A phase shift of approximately 10⁻⁶ rad is anticipated for the Earth's gravitational potential difference corresponding to a height of 1 m when a He-Ne laser and two multiple-turn optical-fiber loops of length 5 km are used. The phase shift can be varied by rotating the loops about an axis parallel to the Earth's surface. Phase shifts of this order can be detected by current optical-fiber interferometric techniques.
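
As a rough back-of-the-envelope sketch of where this order of magnitude comes from (my own reasoning, not the paper's derivation; a fiber index n ≈ 1.5 and a He-Ne wavelength λ ≈ 633 nm are assumed), the gravitational time dilation between paths separated in height by Δh multiplies the optical phase accumulated along a fiber of length L:

```latex
\Delta\varphi \;\simeq\; \frac{2\pi n L}{\lambda}\cdot\frac{g\,\Delta h}{c^{2}}
\;\approx\; \bigl(10^{10}\text{--}10^{11}\,\mathrm{rad}\bigr)\times 1.1\times10^{-16}
\;\sim\; 10^{-6}\text{--}10^{-5}\,\mathrm{rad}
```

for L = 5 km and Δh = 1 m, which is of the same order as the approximately 10⁻⁶ rad quoted in the abstract.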

  4. Islanding and strain-induced shifts in the infrared absorption peaks of cubic boron nitride thin films

    International Nuclear Information System (INIS)

    Fahy, S.; Taylor, C.A. II and; Clarke, R.

    1997-01-01

    Experimental and theoretical investigations of the infrared-active, polarization-dependent phonon frequencies of cubic boron nitride films have been performed in light of recent claims that large frequency shifts during initial nucleation are the result of strain caused by highly nonequilibrium growth conditions. We show that the formation of small, separate grains of cubic boron nitride during the initial growth leads to a frequency shift in the infrared-active transverse-optic mode, polarized normal to the substrate, which is opposite in sign and twice the magnitude of the shift for modes polarized parallel to the substrate. In contrast, film strain causes a frequency shift in the mode polarized normal to the substrate, which is much smaller in magnitude than the frequency shift for modes polarized parallel to the substrate. Normal and off-normal incidence absorption measurements, performed at different stages of nucleation and growth, show that large frequency shifts in the transverse-optic-phonon modes during the initial stage of growth are not compatible with the expected effects of strain, but are in large part due to nucleation of small isolated cubic BN grains which coalesce to form a uniform layer. Numerical results from a simple model of island nucleation and growth are in good agreement with experimental results. copyright 1997 The American Physical Society

  5. Wideband Dual-Polarization Patch Antenna Array With Parallel Strip Line Balun Feeding

    DEFF Research Database (Denmark)

    Zhang, Jin; Lin, Xianqi; Nie, Liying

    2016-01-01

A wideband dual-polarization patch antenna array is proposed in this letter. The array is fed by a parallel strip line balun, which is adopted to generate a 180° phase shift over a wide frequency range. In addition, this balun has a simple structure, very small phase-shift error, and good port isolation ... is higher than 30 dB. The simulation and measurement results turn out to be similar. This antenna array can be used in TD-LTE base stations, and the design methods are also useful for other wideband microstrip antennas.

  6. Lorentz transformations, sideways shift and massless spinning particles

    Science.gov (United States)

    Bolonek-Lasoń, K.; Kosiński, P.; Maślanka, P.

    2017-06-01

Recently (Stone et al. (2015) [16]) the influence of the so called "Wigner translations" (more generally-Lorentz transformations) on circularly polarized Gaussian packets (providing the solution to Maxwell equations in paraxial approximation) has been studied. It appears that, within this approximation, the Wigner translations have an effect of shifting the wave packet trajectory parallel to itself by an amount proportional to the photon helicity. It has been suggested that this shift may result from specific properties of the algebra of Poincare generators for massless particles. In the present letter we describe the general relation between transformation properties of electromagnetic field on quantum and classical levels. It allows for a straightforward derivation of the helicity-dependent transformation rules. We present also an elementary derivation of the formula for sideways shift based on classical Maxwell theory. Some comments are made concerning the generalization to higher helicities and the relation to the coordinate operator defined long time ago by Pryce.

  7. Lorentz transformations, sideways shift and massless spinning particles

    Directory of Open Access Journals (Sweden)

    K. Bolonek-Lasoń

    2017-06-01

Recently (Stone et al. (2015) [16]) the influence of the so called “Wigner translations” (more generally-Lorentz transformations) on circularly polarized Gaussian packets (providing the solution to Maxwell equations in paraxial approximation) has been studied. It appears that, within this approximation, the Wigner translations have an effect of shifting the wave packet trajectory parallel to itself by an amount proportional to the photon helicity. It has been suggested that this shift may result from specific properties of the algebra of Poincare generators for massless particles. In the present letter we describe the general relation between transformation properties of electromagnetic field on quantum and classical levels. It allows for a straightforward derivation of the helicity-dependent transformation rules. We present also an elementary derivation of the formula for sideways shift based on classical Maxwell theory. Some comments are made concerning the generalization to higher helicities and the relation to the coordinate operator defined long time ago by Pryce.

  8. Lorentz transformations, sideways shift and massless spinning particles

    Energy Technology Data Exchange (ETDEWEB)

    Bolonek-Lasoń, K. [Department of Statistical Methods, Faculty of Economics and Sociology (Poland); Kosiński, P. [Department of Computer Science, Faculty of Physics and Applied Informatics, University of Łódź, Pomorska 149/153, 90-236 Łódź (Poland); Maślanka, P., E-mail: pmaslan@uni.lodz.pl [Department of Computer Science, Faculty of Physics and Applied Informatics, University of Łódź, Pomorska 149/153, 90-236 Łódź (Poland)

    2017-06-10

    Recently (Stone et al. (2015)) the influence of the so-called “Wigner translations” (more generally, Lorentz transformations) on circularly polarized Gaussian packets (providing the solution to Maxwell equations in the paraxial approximation) has been studied. It appears that, within this approximation, the Wigner translations have the effect of shifting the wave packet trajectory parallel to itself by an amount proportional to the photon helicity. It has been suggested that this shift may result from specific properties of the algebra of Poincare generators for massless particles. In the present letter we describe the general relation between transformation properties of the electromagnetic field on the quantum and classical levels. It allows for a straightforward derivation of the helicity-dependent transformation rules. We also present an elementary derivation of the formula for the sideways shift based on classical Maxwell theory. Some comments are made concerning the generalization to higher helicities and the relation to the coordinate operator defined a long time ago by Pryce.

  9. In Vitro Functional Characterization of GET73 as Possible Negative Allosteric Modulator of Metabotropic Glutamate Receptor 5.

    Science.gov (United States)

    Beggiato, Sarah; Borelli, Andrea C; Tomasini, Maria C; Castelli, M Paola; Pintori, Nicholas; Cacciaglia, Roberto; Loche, Antonella; Ferraro, Luca

    2018-01-01

    The present study aimed to further characterize the pharmacological profile of N-[4-(trifluoromethyl)benzyl]-4-methoxybutyramide (GET73), a putative negative allosteric modulator (NAM) of the metabotropic glutamate subtype 5 receptor (mGluR5) under development as a novel medication for the treatment of alcohol dependence. This aim was accomplished by means of a series of in vitro functional assays measuring several downstream signaling readouts [intracellular Ca++ levels, inositol phosphate (IP) formation and CREB phosphorylation (pCREB)] that are generally affected by mGluR5 ligands. In particular, GET73 (0.1 nM-10 μM) was explored for its ability to displace the concentration-response curves of several mGluR5 agonists/probes (glutamate, L-quisqualate, CHPG) in different native preparations. GET73 produced a rightward shift of the concentration-response curves of glutamate- and CHPG-induced intracellular Ca++ increases in primary cultures of rat cortical astrocytes. The compound also induced a rightward shift of the concentration-response curves of glutamate- and L-quisqualate-induced increases in IP turnover in rat hippocampal slices, along with a reduction of the CHPG (10 mM)-induced increase in IP formation. Moreover, GET73 produced a rightward shift of the concentration-response curves of glutamate-, CHPG- and L-quisqualate-induced pCREB levels in rat cerebral cortex neurons. Although the engagement of other targets cannot be definitively ruled out, these data support the view that GET73 acts as an mGluR5 NAM and warrant further investigation of the compound's mechanism of action.
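    A rightward shift of a concentration-response curve of this kind is usually quantified as the fold-increase in the agonist EC50 measured with and without the putative NAM. The following is a minimal sketch of that calculation, assuming hypothetical response data and a standard four-parameter logistic (Hill) fit; the concentrations, noise level, and parameter values are illustrative and are not taken from the GET73 study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Four-parameter logistic (Hill) concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

rng = np.random.default_rng(0)
conc = np.logspace(-8, -3, 10)  # agonist concentrations, mol/L (illustrative)

# Hypothetical responses (% of max) without and with a fixed NAM concentration.
resp_ctrl = hill(conc, 0, 100, 1e-6, 1.0) + rng.normal(0, 2, conc.size)
resp_nam = hill(conc, 0, 100, 5e-6, 1.0) + rng.normal(0, 2, conc.size)

p0 = [0.0, 100.0, 1e-6, 1.0]
popt_ctrl, _ = curve_fit(hill, conc, resp_ctrl, p0=p0, maxfev=20000)
popt_nam, _ = curve_fit(hill, conc, resp_nam, p0=p0, maxfev=20000)

shift = popt_nam[2] / popt_ctrl[2]
print(f"EC50 control: {popt_ctrl[2]:.2e} M, with NAM: {popt_nam[2]:.2e} M")
print(f"rightward shift: {shift:.1f}-fold")
```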

  10. Non-Cartesian Parallel Imaging Reconstruction of Undersampled IDEAL Spiral 13C CSI Data

    DEFF Research Database (Denmark)

    Hansen, Rie Beck; Hanson, Lars G.; Ardenkjær-Larsen, Jan Henrik

    scan times based on spatial information inherent to each coil element. In this work, we explored the combination of non-cartesian parallel imaging reconstruction and spatially undersampled IDEAL spiral CSI1 acquisition for efficient encoding of multiple chemical shifts within a large FOV with high...

  11. Experimental investigation of zero phase shift effects for Coriolis flowmeters due to pipe imperfections

    DEFF Research Database (Denmark)

    Enz, Stephanie; Thomsen, Jon Juel; Neumeyer, Stefan

    2011-01-01

    mass as well as temperature changes could be causes contributing to a time-varying measured zero shift, as observed with some commercial CFMs. The conducted experimental tests of the theoretically based hypotheses have shown that simple mathematical models and approximate analysis allow general......, the flexural vibrations of two bent, parallel, non-fluid-conveying pipes are studied experimentally, employing an industrial CFM. Special attention has been paid on the phase shift in the case of zero mass flow, i.e. the zero shift, caused by various imperfections to the ‘‘perfect’’ CFM, i.e. non-uniform pipe...... damping and mass, and on ambient temperature changes. Experimental observations confirm the hypothesis that asymmetry in the axial distribution of damping will induce zero shifts similar to the phase shifts due to fluid flow. Axially symmetrically distributed damping was observed to influence phase shift...
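    For orientation, the zero shift discussed in this record is the residual phase difference between the two pickup signals at zero mass flow. Below is a minimal sketch of estimating such a phase difference from two synchronously sampled vibration signals; the sampling rate, drive frequency, noise level, and synthetic data are illustrative assumptions, not the experimental records used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000.0                        # sampling rate, Hz (illustrative)
f_drive = 150.0                      # drive-mode frequency, Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)

# Synthetic pickup signals with a small, constant phase offset ("zero shift").
true_shift = 2e-3                    # radians
s1 = np.sin(2 * np.pi * f_drive * t) + 0.01 * rng.standard_normal(t.size)
s2 = np.sin(2 * np.pi * f_drive * t + true_shift) + 0.01 * rng.standard_normal(t.size)

# Estimate each signal's phase at the drive frequency by projecting onto a
# complex exponential (a single-bin discrete Fourier transform).
ref = np.exp(-2j * np.pi * f_drive * t)
phase1 = np.angle(np.sum(s1 * ref))
phase2 = np.angle(np.sum(s2 * ref))

print(f"estimated zero shift: {phase2 - phase1:.2e} rad (true {true_shift:.2e})")
```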

  12. How directional change in reading/writing habits relates to directional change in displayed pictures.

    Science.gov (United States)

    Lee, Hachoung; Oh, Songjoo

    2016-01-01

    It has been suggested that reading/writing habits may influence the appreciation of pictures. For example, people who read and write in a rightward direction have an aesthetic preference for pictures that face rightward over pictures that face leftward, and vice versa. However, correlations for this phenomenon have only been found in cross-cultural studies. Will a directional change in reading/writing habits within a culture relate to changes in picture preference? Korea is a good place to research this question because the country underwent gradual changes in reading/writing direction habits, from leftward to rightward, during the 20th century. In this study, we analyzed the direction of drawings and photos published in the two oldest newspapers in Korea from 1920-2013. The results show that the direction of the drawings underwent a clear shift from the left to the right, but the direction of the photos did not change. This finding suggests a close psychological link between the habits of reading/writing and drawing that cannot be accounted for simply by an accidental correspondence across different cultures.

  13. Shift in Language Policy in Malaysia: Unravelling Reasons for Change, Conflict and Compromise in Mother-Tongue Education

    Science.gov (United States)

    Gill, Saran Kaur

    2007-01-01

    Malaysia experienced a major shift in language policy in 2003 for the subjects of science and maths. This meant a change in the language of education for both national and national-type schools. For national schools, this resulted in a shift from Bahasa Malaysia, the national language to English. Parallel with this, to ensure homogeneity of impact…

  14. Overnight shift work: factors contributing to diagnostic discrepancies.

    Science.gov (United States)

    Hanna, Tarek N; Loehfelm, Thomas; Khosa, Faisal; Rohatgi, Saurabh; Johnson, Jamlik-Omari

    2016-02-01

    The aims of the study are to identify factors contributing to preliminary interpretive discrepancies on overnight radiology resident shifts and to apply these data, in the context of the known literature, to draw parallels to attending overnight shift work schedules. Residents in one university-based training program provided preliminary interpretations of 18,488 overnight (11 pm–8 am) studies at a level 1 trauma center between July 1, 2013 and December 31, 2014. As part of their normal workflow and feedback, attendings scored the reports as major discrepancy, minor discrepancy, agree, and agree--good job. We retrospectively obtained the preliminary interpretation scores for each study. Total relative value units (RVUs) per shift were calculated as an indicator of overnight workload. The dataset was supplemented with information on trainee level, number of consecutive nights on night float, hour, modality, and per-shift RVU. The data were analyzed with proportional logistic regression and Fisher's exact test. There were 233 major discrepancies (1.26 %). Trainee level was associated with performance (senior vs. junior residents; major discrepancy rates of 1.08 vs. 1.38 %). Increased workload affected more junior residents' performance, with R3 residents performing significantly worse on busier nights. Hour of the night was not significantly associated with performance, but there was a trend toward best performance at 2 am, with subsequently decreased accuracy throughout the remaining shift hours. Improved performance occurred after the first six night float shifts, presumably as residents acclimated to a night schedule. As overnight shift work schedules increase in popularity for residents and attendings, focused attention to factors impacting interpretative accuracy is warranted.
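    For illustration, group comparisons of discrepancy rates of the kind reported here can be made with Fisher's exact test on a 2x2 contingency table. The counts below are invented for the sketch and are not the study's data.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: [major discrepancies, non-discrepant reads]
# for junior vs. senior residents (counts are illustrative only).
table = [[120, 8500],   # junior residents
         [ 90, 9700]]   # senior residents

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```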

  15. Optokinetic stimulation modulates neglect for the number space: Evidence from mental number interval bisection

    Directory of Open Access Journals (Sweden)

    Konstantinos ePriftis

    2012-02-01

    Full Text Available Behavioral, neuropsychological, and neuroimaging data support the idea that numbers are represented along a mental number line (MNL), an analogical, visuo-spatial representation of number magnitude. The MNL is left-to-right oriented, with small numbers on the left and larger numbers on the right. Left neglect patients are impaired in processing the left side of the MNL and show a rightward deviation in the mental bisection of numerical intervals. In the present study we investigated the effects of optokinetic stimulation (OKS) – a technique inducing spatial attention shifts by means of activation of the optokinetic nystagmus – on mental number interval bisection. One patient with left neglect following right hemisphere stroke (BG) and four control patients with right hemisphere damage, but without neglect, performed the mental number interval bisection task in three experimental conditions of OKS: static, leftward, and rightward. In the static condition, BG misbisected to the right of the true midpoint. BG misbisected to the left following leftward OKS, but again to the right of the midpoint following rightward OKS. In contrast, the performance of controls was not significantly affected by the direction of OKS. We argue that shifts of visuospatial attention, induced by OKS, may affect mental number interval bisection, suggesting an interaction between the processing of number magnitude and the processing of perceptual space in patients with neglect for the mental number space.

  16. Verification of the shift Monte Carlo code with the C5G7 reactor benchmark

    International Nuclear Information System (INIS)

    Sly, N. C.; Mervin, B. T.; Mosher, S. W.; Evans, T. M.; Wagner, J. C.; Maldonado, G. I.

    2012-01-01

    Shift is a new hybrid Monte Carlo/deterministic radiation transport code being developed at Oak Ridge National Laboratory. At its current stage of development, Shift includes a parallel Monte Carlo capability for simulating eigenvalue and fixed-source multigroup transport problems. This paper focuses on recent efforts to verify Shift's Monte Carlo component using the two-dimensional and three-dimensional C5G7 NEA benchmark problems. Comparisons were made between the benchmark eigenvalues and those output by the Shift code. In addition, mesh-based scalar flux tally results generated by Shift were compared to those obtained using MCNP5 on an identical model and tally grid. The Shift-generated eigenvalues were within three standard deviations of the benchmark and MCNP5-1.60 values in all cases. The flux tallies generated by Shift were found to be in very good agreement with those from MCNP. (authors)
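    The acceptance criterion described above — code eigenvalues falling within three standard deviations of the benchmark and MCNP5 values — reduces to a simple per-case check. A minimal sketch with made-up numbers (not the actual C5G7 or Shift results) follows.

```python
# Hypothetical k-eff comparison against benchmark values (illustrative numbers).
cases = [
    # (label, k_code, sigma_code, k_benchmark)
    ("2D unrodded", 1.18650, 0.00020, 1.18655),
    ("3D rodded A", 1.12800, 0.00025, 1.12806),
]

for label, k_code, sigma, k_ref in cases:
    diff_pcm = (k_code - k_ref) * 1e5            # difference in pcm
    within = abs(k_code - k_ref) <= 3.0 * sigma  # three-standard-deviation test
    print(f"{label}: diff = {diff_pcm:+.1f} pcm, within 3 sigma: {within}")
```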

  17. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  18. Novel dimeric bis(7)-tacrine proton-dependently inhibits NMDA-activated currents

    International Nuclear Information System (INIS)

    Luo, Jialie; Li, Wenming; Liu, Yuwei; Zhang, Wei; Fu, Hongjun; Lee, Nelson T.K.; Yu, Hua; Pang, Yuanping; Huang, Pingbo; Xia, Jun; Li, Zhi-Wang; Li, Chaoying; Han, Yifan

    2007-01-01

    Bis(7)-tacrine has been shown to prevent glutamate-induced neuronal apoptosis by blocking NMDA receptors. However, the characteristics of the inhibition have not been fully elucidated. In this study, we further characterize the features of bis(7)-tacrine inhibition of NMDA-activated current in cultured rat hippocampal neurons. The results show that as extracellular pH increases, the inhibitory effect decreases dramatically. At pH 8.0, the concentration-response curve of bis(7)-tacrine is shifted rightward, with the IC50 value increasing from 0.19 ± 0.03 μM to 0.41 ± 0.04 μM. In addition, bis(7)-tacrine shifts the proton inhibition curve rightward. Furthermore, the inhibitory effect of bis(7)-tacrine is not altered by the presence of the NMDA receptor proton-sensor shield spermidine. These results indicate that bis(7)-tacrine inhibits NMDA-activated current in a pH-dependent manner by sensitizing NMDA receptors to proton inhibition, suggesting potentially beneficial therapeutic effects under the acidic conditions associated with stroke and ischemia.

  19. QDP++: Data Parallel Interface for QCD

    Energy Technology Data Exchange (ETDEWEB)

    Robert Edwards

    2003-03-01

    This is a user's guide for the C++ binding for the QDP Data Parallel Applications Programmer Interface developed under the auspices of the US Department of Energy Scientific Discovery through Advanced Computing (SciDAC) program. The QDP Level 2 API has the following features: (1) Provides data parallel operations (logically SIMD) on all sites across the lattice or subsets of these sites. (2) Operates on lattice objects, which have an implementation-dependent data layout that is not visible above this API. (3) Hides details of how the implementation maps onto a given architecture, namely how the logical problem grid (i.e., lattice) is mapped onto the machine architecture. (4) Allows asynchronous (non-blocking) shifts of lattice-level objects over any permutation map of sites onto sites. However, from the user's view these instructions appear blocking and in fact may be so in some implementations. (5) Provides broadcast operations (filling a lattice quantity from a scalar value(s)), global reduction operations, and lattice-wide operations on various data-type primitives, such as matrices, vectors, and tensor products of matrices (propagators). (6) Operator syntax that supports complex expression constructions.
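    To make the data-parallel shift concept concrete, the following NumPy sketch applies a lattice-wide shift and a global reduction to a toy scalar field. It illustrates the programming model only; it is not QDP++ syntax or API, and the lattice size and field type are illustrative assumptions.

```python
import numpy as np

# A toy 2D "lattice" of scalar site values; QDP-style objects would carry
# richer per-site types (vectors, matrices) and a hidden data layout.
lattice = np.arange(16.0).reshape(4, 4)

# A data-parallel shift: every site receives the value of its neighbour one
# step in the +x direction (periodic boundaries), applied to all sites at
# once rather than via an explicit site loop.
shifted = np.roll(lattice, shift=-1, axis=1)

# A lattice-wide expression combining shifted and unshifted fields,
# followed by a global reduction (sum over all sites).
laplacian_x = shifted + np.roll(lattice, shift=1, axis=1) - 2.0 * lattice
print(laplacian_x.sum())
```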

  20. Plasmon Geometric Phase and Plasmon Hall Shift

    Science.gov (United States)

    Shi, Li-kun; Song, Justin C. W.

    2018-04-01

    The collective plasmonic modes of a metal comprise a simple pattern of oscillating charge density that yields enhanced light-matter interaction. Here we unveil that beneath this familiar facade plasmons possess a hidden internal structure that fundamentally alters their dynamics. In particular, we find that metals with nonzero Hall conductivity host plasmons with an intricate current density configuration that sharply departs from that of ordinary zero Hall conductivity metals. This nontrivial internal structure dramatically enriches the dynamics of plasmon propagation, enabling plasmon wave packets to acquire geometric phases as they scatter. At boundaries, these phases accumulate, allowing reflected plasmon waves to experience a nonreciprocal parallel shift. This plasmon Hall shift, tunable by Hall conductivity as well as plasmon wavelength, displaces the incident and reflected plasmon trajectories and can be readily probed by near-field photonics techniques. Anomalous plasmon geometric phases dramatically enrich the nanophotonics toolbox, and yield radical new means for directing plasmonic beams.

  1. Optokinetic Stimulation Modulates Neglect for the Number Space: Evidence from Mental Number Interval Bisection

    Science.gov (United States)

    Priftis, Konstantinos; Pitteri, Marco; Meneghello, Francesca; Umiltà, Carlo; Zorzi, Marco

    2012-01-01

    Behavioral, neuropsychological, and neuroimaging data support the idea that numbers are represented along a mental number line (MNL), an analogical, visuospatial representation of number magnitude. The MNL is left-to-right oriented in Western cultures, with small numbers on the left and larger numbers on the right. Left neglect patients are impaired in the mental bisection of numerical intervals, with a bias toward larger numbers that are relatively to the right on the MNL. In the present study we investigated the effects of optokinetic stimulation (OKS) – a technique inducing visuospatial attention shifts by means of activation of the optokinetic nystagmus – on number interval bisection. One patient with left neglect following right-hemisphere stroke (BG) and four control patients with right-hemisphere damage, but without neglect, performed the number interval bisection task in three conditions of OKS: static, leftward, and rightward. In the static condition, BG misbisected to the right of the true midpoint. BG misbisected to the left following leftward OKS, and again to the right of the midpoint following rightward OKS. Moreover, the variability of BG’s performance was smaller following both leftward and rightward OKS, suggesting that the attentional bias induced by OKS reduced the “indifference zone” that is thought to underlie the length effect reported in bisection tasks. We argue that shifts of visuospatial attention, induced by OKS, may affect number interval bisection, thereby revealing an interaction between the processing of the perceptual space and the processing of the number space. PMID:22363280

  2. The Mediterranean Sea regime shift at the end of the 1980s, and intriguing parallelisms with other European basins.

    Directory of Open Access Journals (Sweden)

    Alessandra Conversi

    Full Text Available BACKGROUND: Regime shifts are abrupt changes encompassing a multitude of physical properties and ecosystem variables, which lead to new regime conditions. Recent investigations focus on the changes in ecosystem diversity and functioning associated with such shifts. Of particular interest, because of the implication for climate drivers, are shifts that occur synchronously in separated basins. PRINCIPAL FINDINGS: In this work we analyze and review long-term records of Mediterranean ecological and hydro-climate variables and find that all point to a synchronous change in the late 1980s. A quantitative synthesis of the literature (including observed oceanic data, models and satellite analyses) shows that these years mark a major change in Mediterranean hydrographic properties, surface circulation, and deep water convection (the Eastern Mediterranean Transient). We provide novel analyses that link local, regional and basin scale hydrological properties with two major indicators of large scale climate, the North Atlantic Oscillation index and the Northern Hemisphere Temperature index, suggesting that the Mediterranean shift is part of a large scale change in the Northern Hemisphere. We provide a simplified scheme of the different effects of climate vs. temperature on pelagic ecosystems. CONCLUSIONS: Our results show that the Mediterranean Sea underwent a major change at the end of the 1980s that encompassed atmospheric, hydrological, and ecological systems, for which it can be considered a regime shift. We further provide evidence that the local hydrography is linked to the larger scale, northern hemisphere climate. These results suggest that the shifts that affected the North, Baltic, Black and Mediterranean (this work) Seas at the end of the 1980s, which have so far been only partly associated, are likely linked as part of a northern hemisphere change. These findings bear wide implications for the development of climate change scenarios, as synchronous shifts

  3. The Mediterranean Sea regime shift at the end of the 1980s, and intriguing parallelisms with other European basins.

    Science.gov (United States)

    Conversi, Alessandra; Fonda Umani, Serena; Peluso, Tiziana; Molinero, Juan Carlos; Santojanni, Alberto; Edwards, Martin

    2010-05-19

    Regime shifts are abrupt changes encompassing a multitude of physical properties and ecosystem variables, which lead to new regime conditions. Recent investigations focus on the changes in ecosystem diversity and functioning associated with such shifts. Of particular interest, because of the implication for climate drivers, are shifts that occur synchronously in separated basins. In this work we analyze and review long-term records of Mediterranean ecological and hydro-climate variables and find that all point to a synchronous change in the late 1980s. A quantitative synthesis of the literature (including observed oceanic data, models and satellite analyses) shows that these years mark a major change in Mediterranean hydrographic properties, surface circulation, and deep water convection (the Eastern Mediterranean Transient). We provide novel analyses that link local, regional and basin scale hydrological properties with two major indicators of large scale climate, the North Atlantic Oscillation index and the Northern Hemisphere Temperature index, suggesting that the Mediterranean shift is part of a large scale change in the Northern Hemisphere. We provide a simplified scheme of the different effects of climate vs. temperature on pelagic ecosystems. Our results show that the Mediterranean Sea underwent a major change at the end of the 1980s that encompassed atmospheric, hydrological, and ecological systems, for which it can be considered a regime shift. We further provide evidence that the local hydrography is linked to the larger scale, northern hemisphere climate. These results suggest that the shifts that affected the North, Baltic, Black and Mediterranean (this work) Seas at the end of the 1980s, which have so far been only partly associated, are likely linked as part of a northern hemisphere change. These findings bear wide implications for the development of climate change scenarios, as synchronous shifts may provide the key for distinguishing local (i.e., basin

  4. Robust time-shifted spoke pulse design in the presence of large B0 variations with simultaneous reduction of through-plane dephasing, B1+ effects, and the specific absorption rate using parallel transmission.

    Science.gov (United States)

    Guérin, Bastien; Stockmann, Jason P; Baboli, Mehran; Torrado-Carvajal, Angel; Stenger, Andrew V; Wald, Lawrence L

    2016-08-01

    To design parallel transmission spokes pulses with time-shifted profiles for joint mitigation of intensity variations due to B1+ effects, signal loss due to through-plane dephasing, and the specific absorption rate (SAR) at 7T. We derived a slice-averaged small tip angle (SA-STA) approximation of the magnetization signal at echo time that depends on the B1+ transmit profiles, the through-slice B0 gradient, and the amplitudes and time-shifts of the spoke waveforms. We minimize a magnitude least-squares objective based on this signal equation using a fast interior-point approach with analytical expressions for the Jacobian and Hessian. Our algorithm runs in less than three minutes for the design of two-spoke pulses subject to hundreds of local SAR constraints. On a B0/B1+ head phantom, joint optimization of the channel-dependent time-shifts and spoke amplitudes allowed signal recovery in high-B0 regions with no increase in SAR. Although the method creates uniform magnetization profiles (i.e., uniform intensity), the flip angle varies across the image, which makes it ill-suited to T1-weighted applications. The SA-STA approach presented in this study is best suited to T2*-weighted applications with long echo times that require signal recovery around high-B0 regions. Magn Reson Med 76:540-554, 2016. © 2015 Wiley Periodicals, Inc.

  5. Thermal residual stress evaluation based on phase-shift lateral shearing interferometry

    Science.gov (United States)

    Dai, Xiangjun; Yun, Hai; Shao, Xinxing; Wang, Yanxia; Zhang, Donghuan; Yang, Fujun; He, Xiaoyuan

    2018-06-01

    An interesting phase-shift lateral shearing interferometry system was proposed to evaluate the thermal residual stress distribution in a transparent specimen. The phase-shift interferograms were generated by moving a parallel plane plate. By analyzing the fringes deflected by the deformation and refractive index change, the stress distribution can be obtained. To verify the validity of the proposed method, a typical experiment was designed to determine the thermal residual stresses of a transparent PMMA plate subjected to the flame of a lighter. The distribution of the sum of the in-plane stresses was obtained, and the experimental data were compared with values measured by the digital gradient sensing method. Comparison of the results reveals the effectiveness and feasibility of the proposed method.
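    Phase-shift interferometry of this kind typically recovers the wrapped phase from interferograms recorded at known phase steps; for four steps of π/2 the standard estimator is φ = arctan[(I4 − I2)/(I1 − I3)]. The following is a minimal sketch with synthetic fringe data; the test phase, bias, and modulation are illustrative assumptions and this is not the authors' processing code.

```python
import numpy as np

# Synthetic wrapped phase and four interferograms stepped by pi/2.
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
phi_true = 6.0 * np.pi * (x**2 + 0.5 * y**2)   # arbitrary test phase
bias, mod = 0.5, 0.4                            # background and modulation

frames = [bias + mod * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
i1, i2, i3, i4 = frames

# Standard four-step estimator of the wrapped phase.
phi_wrapped = np.arctan2(i4 - i2, i1 - i3)

# Compare against the true phase, wrapped into (-pi, pi].
err = np.angle(np.exp(1j * (phi_wrapped - phi_true)))
print("max wrapping-consistent error:", np.abs(err).max())
```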

  6. Losing the left side of the world: rightward shift in human spatial attention with sleep onset.

    Science.gov (United States)

    Bareham, Corinne A; Manly, Tom; Pustovaya, Olga V; Scott, Sophie K; Bekinschtein, Tristan A

    2014-05-28

    Unilateral brain damage can lead to a striking deficit in awareness of stimuli on one side of space called Spatial Neglect. Patient studies show that neglect of the left is markedly more persistent than of the right and that its severity increases under states of low alertness. There have been suggestions that this alertness-spatial awareness link may be detectable in the general population. Here, healthy human volunteers performed an auditory spatial localisation task whilst transitioning in and out of sleep. We show, using independent electroencephalographic measures, that normal drowsiness is linked with a remarkable unidirectional tendency to mislocate left-sided stimuli to the right. The effect may form a useful healthy model of neglect and help in understanding why leftward inattention is disproportionately persistent after brain injury. The results also cast light on marked changes in conscious experience before full sleep onset.

  7. Impact of patient motion on myocardial perfusion SPECT

    International Nuclear Information System (INIS)

    Huang Kemin; Feng Yanlin; He Xiaohong; Wen Guanghua; Yu Fengwen; Liu Shusheng; Liu Dejun; Yuan Jianwei; Yang Ming

    2008-01-01

    Objective: It is well known that patient motion may cause artifacts in myocardial SPECT images and affect clinical diagnosis. The aim of the study was to evaluate the effects of motion on the quality and semi-quantitative results of myocardial perfusion images. Methods: Six healthy volunteers underwent myocardial perfusion SPECT. The raw data in each case were manually shifted by 1-6 frames and 1-4 pixels, respectively, using the motion correction software. The shifted raw data were then reconstructed. A semi-quantitative software package was used to assess the myocardial perfusion of the left ventricle. The quality and semi-quantitative results of the tomographic images reconstructed from the raw data with and without motion were compared and analyzed. SPSS 12.0 was used for data analysis. Results: There was no visible artifact or semi-quantitative difference for the data with a 1-frame and (or) 1-pixel shift when compared with the original data without shift. The image artifacts became significantly worse as the number of frames and (or) pixels shifted increased. In general, artifacts of the inferior and posterior wall were related to upward shift, those of the anterior and infero-posterior wall to downward shift, those of the septal, anterior, infero-posterior wall and apex to rightward shift, and those of the septal and infero-posterior wall to leftward shift. The differences along the x-axis shift were more prominent than along the y-axis (t=2.848, P<0.01), and the differences for downward and rightward shifts were more severe than for upward and leftward shifts (t=2.941, 6.598; all P<0.01), respectively. Conclusions: Image artifacts became significant when motion was induced by a manual shift of more than one frame and (or) one pixel. Different motion directions were closely related to different segments of the left ventricle. (authors)

  8. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment. Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  9. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  10. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  11. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  12. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
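    To make the SENSE-type reconstruction described above concrete, here is a minimal one-dimensional sketch for an acceleration factor of R = 2 with synthetic coil sensitivities. It illustrates the unfolding principle only; all data and sensitivity profiles are made up, and it is not a clinical implementation. In practice the same per-pixel least-squares unfolding is applied in 2D/3D with measured sensitivity maps and, usually, regularization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_coils = 128, 4                      # 1D field of view and coil count (toy)
image = rng.random(n)                    # "true" object (illustrative)

# Smooth synthetic coil sensitivity profiles along the fold direction.
x = np.linspace(0.0, 1.0, n)
centres = np.linspace(0.1, 0.9, n_coils)
sens = np.stack([np.exp(-((x - c) ** 2) / 0.08) for c in centres])

# R = 2 undersampling folds pixel p onto pixel p + n/2 in every coil image.
folded = sens[:, : n // 2] * image[: n // 2] + sens[:, n // 2 :] * image[n // 2 :]

# SENSE unfolding: solve a tiny least-squares system per folded pixel.
recon = np.zeros(n)
for p in range(n // 2):
    s = np.stack([sens[:, p], sens[:, p + n // 2]], axis=1)   # (n_coils, 2)
    recon[[p, p + n // 2]] = np.linalg.lstsq(s, folded[:, p], rcond=None)[0]

print("max unfolding error:", np.abs(recon - image).max())
```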

  13. Agmatine attenuates the discriminative stimulus and hyperthermic effects of methamphetamine in male rats.

    Science.gov (United States)

    Thorn, David A; Li, Jiuzhou; Qiu, Yanyan; Li, Jun-Xu

    2016-09-01

    Methamphetamine abuse remains an alarming public health challenge, with no approved pharmacotherapies available. Agmatine is a naturally occurring cationic polyamine that has previously been shown to attenuate the rewarding and psychomotor-sensitizing effects of methamphetamine. This study examined the effects of agmatine on the discriminative stimulus and hyperthermic effects of methamphetamine. Adult male rats were trained to discriminate 0.32 mg/kg methamphetamine from saline. Methamphetamine dose dependently increased drug-associated lever responding. The nonselective dopamine receptor antagonist haloperidol (0.1 mg/kg) significantly attenuated the discriminative stimulus effects of methamphetamine (5.9-fold rightward shift). Agmatine (10-100 mg/kg) did not substitute for methamphetamine, but significantly attenuated the stimulus effects of methamphetamine, leading to a maximum of a 3.5-fold rightward shift. Acute 10 mg/kg methamphetamine increased the rectal temperature by a maximum of 1.96±0.17°C. Agmatine (10-32 mg/kg) pretreatment significantly attenuated the hyperthermic effect of methamphetamine. Agmatine (10 mg/kg) also significantly reversed the methamphetamine-induced temperature increase. Together, these results support further exploration of the value that agmatine may have for the treatment of methamphetamine abuse and overdose.

  14. Is human sentence parsing serial or parallel? Evidence from event-related brain potentials.

    Science.gov (United States)

    Hopf, Jens-Max; Bader, Markus; Meng, Michael; Bayer, Josef

    2003-01-01

    In this ERP study we investigate the processes that occur in syntactically ambiguous German sentences at the point of disambiguation. Whereas most psycholinguistic theories agree on the view that processing difficulties arise when parsing preferences are disconfirmed (so-called garden-path effects), important differences exist with respect to theoretical assumptions about the parser's recovery from a misparse. A key distinction can be made between parsers that compute all alternative syntactic structures in parallel (parallel parsers) and parsers that compute only a single preferred analysis (serial parsers). To distinguish empirically between parallel and serial parsing models, we compare ERP responses to garden-path sentences with ERP responses to truly ungrammatical sentences. Garden-path sentences contain a temporary and ultimately curable ungrammaticality, whereas truly ungrammatical sentences remain so permanently--a difference which gives rise to different predictions in the two classes of parsing architectures. At the disambiguating word, ERPs in both sentence types show negative shifts of similar onset latency, amplitude, and scalp distribution in an initial time window between 300 and 500 ms. In a following time window (500-700 ms), the negative shift to garden-path sentences disappears at right central parietal sites, while it continues in permanently ungrammatical sentences. These data are taken as evidence for a strictly serial parser. The absence of a difference in the early time window indicates that temporary and permanent ungrammaticalities trigger the same kind of parsing responses. Later differences can be related to successful reanalysis in garden-path but not in ungrammatical sentences. Copyright 2003 Elsevier Science B.V.

  15. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
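    The parallel-E idea — split the data across workers, compute per-chunk expected sufficient statistics in parallel, then combine them for a single M step — can be sketched for a toy Gaussian mixture as follows. This illustrates the parallelization pattern only, not the report's latent variable models or code; the component count, fixed variance, and worker count are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def e_step(chunk, means, var, weights):
    """E step on one data chunk: responsibilities -> sufficient statistics."""
    logp = -0.5 * (chunk[:, None] - means[None, :]) ** 2 / var
    logp += np.log(weights)[None, :]
    resp = np.exp(logp - logp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    return resp.sum(axis=0), resp.T @ chunk          # per-component N_k, sum of x

def parallel_em(data, n_workers=4, n_iter=25):
    means = np.array([-1.0, 1.0])                    # two components (toy)
    var, weights = 1.0, np.array([0.5, 0.5])         # variance held fixed for brevity
    chunks = np.array_split(data, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(n_iter):
            work = partial(e_step, means=means, var=var, weights=weights)
            stats = list(pool.map(work, chunks))     # parallel E step
            n_k = sum(s[0] for s in stats)
            s_k = sum(s[1] for s in stats)
            means = s_k / n_k                        # cheap, serial M step
            weights = n_k / n_k.sum()
    return means, weights

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(3, 1, 5000)])
    print(parallel_em(data))
```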

  16. CONTRIBUTION OF QUADRATIC RESIDUE DIFFUSERS TO EFFICIENCY OF TILTED PROFILE PARALLEL HIGHWAY NOISE BARRIERS

    Directory of Open Access Journals (Sweden)

    M. R. Monazzam ، P. Nassiri

    2009-10-01

    Full Text Available This paper presents the results of an investigation on the acoustic performance of tilted profile parallel barriers with quadratic residue diffuser (QRD) tops and faces. A 2D boundary element method (BEM) is used to predict the barrier insertion loss. Results for rigid barriers and barriers with absorptive coverage are also calculated for comparison. Using QRD on the top surface and faces of all tilted profile parallel barrier models introduced here is found to improve the efficiency of barriers compared with the rigid equivalent parallel barrier at the examined receiver positions. Applying a QRD with a frequency design of 400 Hz on a 5-degree tilted parallel barrier improves the overall performance of its equivalent rigid barrier by 1.8 dB(A). Increasing the treated surfaces with reactive elements shifts the effective performance toward lower frequencies. It is found that by tilting the barriers from 0 to 10 degrees in a parallel set-up, the degradation effects in parallel barriers are reduced, but the absorption effect of fibrous materials and also the diffusivity of the quadratic residue diffuser are reduced significantly. In this case all the designed barriers have better performance with 10 degrees tilting in a parallel set-up. The most economic traffic noise parallel barrier, which produces significantly high performance, is achieved by covering the top surface of the barrier closest to the receiver with just a QRD with a frequency design of 400 Hz and a tilting angle of 10 degrees. The average A-weighted insertion loss of this barrier is predicted to be 16.3 dB(A).
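    For reference, a quadratic residue diffuser with N wells and design wavelength λ0 uses well depths d_n = (n² mod N)·λ0/(2N). Below is a minimal sketch of that calculation for the 400 Hz design frequency mentioned in the study; the 7-well choice is an illustrative assumption.

```python
# Quadratic residue diffuser (QRD) well depths for a given design frequency.
C = 343.0          # speed of sound in air, m/s
F_DESIGN = 400.0   # design frequency, Hz (as in the barrier study)
N = 7              # number of wells per period (prime; illustrative choice)

wavelength = C / F_DESIGN
depths = [((n * n) % N) * wavelength / (2 * N) for n in range(N)]

for n, d in enumerate(depths):
    print(f"well {n}: depth = {d * 100:.1f} cm")
```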

  17. Stimulus- and state-dependence of systematic bias in spatial attention: additive effects of stimulus-size and time-on-task.

    Science.gov (United States)

    Benwell, Christopher S Y; Harvey, Monika; Gardner, Stephanie; Thut, Gregor

    2013-03-01

    Systematic biases in spatial attention are a common finding. In the general population, a systematic leftward bias is typically observed (pseudoneglect), possibly as a consequence of right hemisphere dominance for visuospatial attention. However, this leftward bias can cross-over to a systematic rightward bias with changes in stimulus and state factors (such as line length and arousal). The processes governing these changes are still unknown. Here we tested models of spatial attention as to their ability to account for these effects. To this end, we experimentally manipulated both stimulus and state factors, while healthy participants performed a computerized version of a landmark task. State was manipulated by time-on-task (>1 h) leading to increased fatigue and a reliable left- to rightward shift in spatial bias. Stimulus was manipulated by presenting either long or short lines which was associated with a shift of subjective midpoint from a reliable leftward bias for long to a more rightward bias for short lines. Importantly, we found time-on-task and line length effects to be additive suggesting a common denominator for line bisection across all conditions, which is in disagreement with models that assume that bisection decisions in long and short lines are governed by distinct processes (Magnitude estimation vs Global/local distinction). Our findings emphasize the dynamic rather than static nature of spatial biases in midline judgement. They are best captured by theories of spatial attention positing that spatial bias is flexibly modulated, and subject to inter-hemispheric balance which can change over time or conditions to accommodate task demands or reflect fatigue. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. The Processing of Somatosensory Information shifts from an early parallel into a serial processing mode: a combined fMRI/MEG study.

    Directory of Open Access Journals (Sweden)

    Carsten Michael Klingner

    2016-12-01

    Full Text Available The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI.

  19. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study.

    Science.gov (United States)

    Klingner, Carsten M; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI.

  20. A SPECT reconstruction method for extending parallel to non-parallel geometries

    International Nuclear Information System (INIS)

    Wen Junhai; Liang Zhengrong

    2010-01-01

    Due to its simplicity, parallel-beam geometry is usually assumed for the development of image reconstruction algorithms. The established reconstruction methodologies are then extended to fan-beam, cone-beam and other non-parallel geometries for practical application. This situation occurs for quantitative SPECT (single photon emission computed tomography) imaging in inverting the attenuated Radon transform. Novikov reported an explicit parallel-beam formula for the inversion of the attenuated Radon transform in 2000. Thereafter, a formula for fan-beam geometry was reported by Bukhgeim and Kazantsev (2002 Preprint N. 99 Sobolev Institute of Mathematics). At the same time, we presented a formula for varying focal-length fan-beam geometry. Sometimes, the reconstruction formula is so implicit that we cannot obtain the explicit reconstruction formula in the non-parallel geometries. In this work, we propose a unified reconstruction framework for extending parallel-beam geometry to any non-parallel geometry using ray-driven techniques. Studies by computer simulations demonstrated the accuracy of the presented unified reconstruction framework for extending parallel-beam to non-parallel geometries in inverting the attenuated Radon transform.
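    As a rough illustration of a ray-driven technique for the attenuated Radon transform (the forward model whose inversion is discussed above), the following sketch computes one parallel-beam projection by stepping along rays and weighting the activity by the attenuation accumulated towards the detector. The discretization (nearest-neighbour sampling, step size, detector placement) is deliberately simplistic and is not the authors' unified framework.

```python
import numpy as np

def attenuated_projection(activity, mu, angle, step=0.5):
    """Ray-driven parallel-beam projection of `activity` with attenuation map `mu`.

    Each detector bin integrates activity * exp(-integral of mu from the
    emission point to the detector) along the ray direction `angle` (radians).
    """
    n = activity.shape[0]
    centre = (n - 1) / 2.0
    d = np.array([np.cos(angle), np.sin(angle)])   # ray direction
    p = np.array([-d[1], d[0]])                    # detector axis
    proj = np.zeros(n)
    ts = np.arange(-n, n, step)                    # sample positions along ray
    for b in range(n):
        offset = b - (n - 1) / 2.0
        xs = centre + offset * p[0] + ts * d[0]
        ys = centre + offset * p[1] + ts * d[1]
        inside = (xs >= 0) & (xs < n - 1) & (ys >= 0) & (ys < n - 1)
        xi, yi = xs[inside].astype(int), ys[inside].astype(int)
        act, att = activity[yi, xi], mu[yi, xi]
        # Cumulative attenuation from each sample towards the detector
        # (detector assumed at the far end of the ray, i.e. large t).
        downstream = np.cumsum(att[::-1])[::-1] * step
        proj[b] = np.sum(act * np.exp(-downstream) * step)
    return proj

# Toy example: uniform disk of activity inside a weakly attenuating square.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
activity = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2).astype(float)
mu = np.full((n, n), 0.01)
print(attenuated_projection(activity, mu, angle=0.0)[:8])
```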

  1. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  2. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.

  3. Tolerance to the Diuretic Effects of Cannabinoids and Cross-Tolerance to a κ-Opioid Agonist in THC-Treated Mice.

    Science.gov (United States)

    Chopda, Girish R; Parge, Viraj; Thakur, Ganesh A; Gatley, S John; Makriyannis, Alexandros; Paronis, Carol A

    2016-08-01

    Daily treatment with cannabinoids results in tolerance to many, but not all, of their behavioral and physiologic effects. The present studies investigated the effects of 7-day exposure to 10 mg/kg daily of Δ(9)-tetrahydrocannabinol (THC) on the diuretic and antinociceptive effects of THC and the synthetic cannabinoid AM4054. Comparison studies determined diuretic responses to the κ-opioid agonist U50,488 and furosemide. After determination of control dose-response functions, mice received 10 mg/kg daily of THC for 7 days, and dose-response functions were re-determined 24 hours, 7 days, or 14 days later. THC and AM4054 had biphasic diuretic effects under control conditions with maximum effects of 30 and 35 ml/kg of urine, respectively. In contrast, antinociceptive effects of both drugs increased monotonically with dose to >90% of maximal possible effect. Treatment with THC produced 9- and 7-fold rightward shifts of the diuresis and antinociception dose-response curves for THC and, respectively, 7- and 3-fold rightward shifts in the AM4054 dose-response functions. U50,488 and furosemide increased urine output to >35 ml/kg under control conditions. The effects of U50,488 were attenuated after 7-day treatment with THC, whereas the effects of furosemide were unaltered. Diuretic effects of THC and AM4054 recovered to near-baseline levels within 14 days after stopping daily THC injections, whereas tolerance to the antinociceptive effects persisted longer than 14 days. The tolerance induced by 7-day treatment with THC was accompanied by a 55% decrease in the Bmax value for cannabinoid receptors (CB1). These data indicate that repeated exposure to THC produces similar rightward shifts in the ascending and descending limbs of cannabinoid diuresis dose-effect curves and to antinociceptive effects while resulting in a flattening of the U50,488 diuresis dose-effect function. Copyright © 2016 by The American Society for Pharmacology and Experimental Therapeutics.

  4. Controlling nonsequential double ionization of Ne with parallel-polarized two-color laser pulses.

    Science.gov (United States)

    Luo, Siqiang; Ma, Xiaomeng; Xie, Hui; Li, Min; Zhou, Yueming; Cao, Wei; Lu, Peixiang

    2018-05-14

    We measure the recoil-ion momentum distributions from nonsequential double ionization of Ne by two-color laser pulses consisting of a strong 800-nm field and a weak 400-nm field with parallel polarizations. The ion momentum spectra show pronounced asymmetries in the emission direction, which depend sensitively on the relative phase of the two-color components. Moreover, the peak of the doubly charged ion momentum distribution shifts gradually with the relative phase. The shifted range is much larger than the maximal vector potential of the 400-nm laser field. Those features are well reproduced by a semiclassical model. Through analyzing the correlated electron dynamics, we found that the energy sharing between the two electrons is extremely unequal at the instant of recollision. We further show that the shift of the ion momentum corresponds to the change of the recollision time in the two-color laser field. By tuning the relative phase of the two-color components, the recollision time is controlled with attosecond precision.

  5. A Modular Active Front-End Rectifier with Electronic Phase-Shifting for Harmonic Mitigation in Motor Drive Applications

    DEFF Research Database (Denmark)

    Zare, Firuz; Davari, Pooya; Blaabjerg, Frede

    2017-01-01

    In this paper, an electronic phase-shifting strategy has been optimized for a multi-parallel configuration of line-commutated rectifiers with a common dc-bus voltage used in motor drive application. This feature makes the performance of the system independent of the load profile and maximizes its...

  6. Pair-breaking effects by parallel magnetic field in electric-field-induced surface superconductivity

    International Nuclear Information System (INIS)

    Nabeta, Masahiro; Tanaka, Kenta K.; Onari, Seiichiro; Ichioka, Masanori

    2016-01-01

    Highlights: • The Zeeman effect shifts the superconducting gaps of the sub-band system towards pair-breaking. • Higher-level sub-bands acquire normal-state-like electronic states under magnetic field. • The magnetic field dependence of the zero-energy DOS reflects the multi-gap superconductivity. - Abstract: We study paramagnetic pair-breaking in electric-field-induced surface superconductivity when a magnetic field is applied parallel to the surface. The calculation is performed using Bogoliubov-de Gennes theory with s-wave pairing, including the screening effect of the electric field by the induced carriers near the surface. Due to the Zeeman shift from the applied field, electronic states at higher-level sub-bands become normal-state-like. Therefore, the magnetic field dependence of the Fermi-energy density of states reflects the multi-gap structure of the surface superconductivity.

  7. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. Performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
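    Queue-sort and barrel-sort are specific to the architectures studied and are not reproduced here; as a generic point of comparison, the following sketch shows the bucket-sort pattern against which such algorithms are commonly benchmarked, parallelized over processes. The bucket count, key range, and data are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def sort_bucket(bucket):
    """Worker: sort one bucket locally."""
    return np.sort(bucket)

def parallel_bucket_sort(keys, n_buckets=4):
    # Partition keys by value range so bucket order implies global order.
    lo, hi = keys.min(), keys.max() + 1
    edges = np.linspace(lo, hi, n_buckets + 1)
    buckets = [keys[(keys >= edges[i]) & (keys < edges[i + 1])]
               for i in range(n_buckets)]
    with ProcessPoolExecutor(max_workers=n_buckets) as pool:
        sorted_buckets = list(pool.map(sort_bucket, buckets))
    return np.concatenate(sorted_buckets)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    keys = rng.integers(0, 1_000_000, size=200_000)
    out = parallel_bucket_sort(keys)
    print(bool(np.all(out[:-1] <= out[1:])))   # verify global order
```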

  8. Shift Colors

    Science.gov (United States)


  9. Contribution of diffuser surfaces to efficiency of tilted T shape parallel highway noise barriers

    Directory of Open Access Journals (Sweden)

    N. Javid Rouzi

    2009-04-01

    Full Text Available Background and aims: The paper presents the results of an investigation on the acoustic performance of tilted profile parallel barriers with quadratic residue diffuser tops and faces. Methods: A 2D boundary element method (BEM) is used to predict the barrier insertion loss. Results for rigid barriers and barriers with absorptive coverage are also calculated for comparison. Using QRD on the top surface and faces of all tilted profile parallel barrier models introduced here is found to improve the efficiency of barriers compared with the rigid equivalent parallel barrier at the examined receiver positions. Results: Applying a QRD with a frequency design of 400 Hz on a 5-degree tilted parallel barrier improves the overall performance of its equivalent rigid barrier by 1.8 dB(A). Increasing the treated surfaces with reactive elements shifts the effective performance toward lower frequencies. It is found that by tilting the barriers from 0 to 10 degrees in a parallel set-up, the degradation effects in parallel barriers are reduced, but the absorption effect of fibrous materials and also the diffusivity of the quadratic residue diffuser are reduced significantly. In this case all the designed barriers have better performance with 10 degrees tilting in a parallel set-up. Conclusion: The most economic traffic noise parallel barrier, which produces significantly high performance, is achieved by covering the top surface of the barrier closest to the receiver with just a QRD with a frequency design of 400 Hz and a tilting angle of 10 degrees. The average A-weighted insertion loss of this barrier is predicted to be 16.3 dB(A).

  10. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far do not match the effort invested. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  11. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers, with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  12. Lanthanide shift reagents, binding, shift mechanisms and exchange

    International Nuclear Information System (INIS)

    Boer, J.W.M. de

    1977-01-01

    Paramagnetic lanthanide shift reagents, when added to a solution of a substrate, induce shifts in the nuclear magnetic resonance (NMR) spectrum of the substrate molecules. The induced shifts contain information about the structure of the shift reagent-substrate complex. The structural information, however, may be difficult to extract because of the following effects: (1) different complexes between shift reagent and substrate may be present in solution, e.g. 1:1 and 1:2 complexes, and the shift observed is a weighted average of the shifts of the substrate nuclei in the different complexes; (2) the Fermi contact interaction, arising from the spin density at the nucleus, contributes to the induced shift; (3) chemical exchange effects may complicate the NMR spectrum. In this thesis, the results of an investigation into the influence of these effects on the NMR spectra of solutions containing a substrate and LSR are presented. The equations describing the pseudo contact and the Fermi contact shift are derived. In addition, it is shown how the modified Bloch equations describing the effect of the chemical exchange processes occurring in the systems studied can be reduced to the familiar equations for a two-site exchange case. The binding of mono- and bifunctional ethers to the shift reagent is reported. An analysis of the induced shifts is given. Finally, the results of the experiments performed to study the exchange behavior of dimethoxyethane and heptafluorodimethyloctanedionato ligands are presented
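
    For reference, the commonly quoted axially symmetric (McConnell-Robertson) form of the pseudo-contact shift, which the thesis treats in its general form, is

      \Delta\delta_{\mathrm{pc}} = K\,\frac{3\cos^2\theta - 1}{r^3},

    where r is the lanthanide-nucleus distance, \theta the angle between that vector and the principal magnetic axis, and K collects the magnetic anisotropy of the complex; the Fermi contact contribution adds to this and must be separated out before structural fitting.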

  13. Full-field parallel interferometry coherence probe microscope for high-speed optical metrology.

    Science.gov (United States)

    Safrani, A; Abdulhalim, I

    2015-06-01

    Parallel detection of several achromatic phase-shifted images is used to obtain a high-speed, high-resolution, full-field, optical coherence probe tomography system based on polarization interferometry. The high enface imaging speed, short coherence gate, and high lateral resolution provided by the system are exploited to determine microbump height uniformity in an integrated semiconductor chip at 50 frames per second. The technique is demonstrated using the Linnik microscope, although it can be implemented on any polarization-based interference microscopy system.

  14. Tilt shift determinations with spatial-carrier phase-shift method in temporal phase-shift interferometry

    International Nuclear Information System (INIS)

    Liu, Qian; Wang, Yang; He, Jianguo; Ji, Fang; Wang, Baorui

    2014-01-01

    An algorithm is proposed to deal with tilt-shift errors in temporal phase-shift interferometry (PSI). In the algorithm, the tilt shifts are detected with the spatial-carrier phase-shift (SCPS) method and then applied as a priori information in the least-squares fitting for phase retrieval. The algorithm combines the best features of SCPS and temporal PSI. It can be applied to interferograms of arbitrary aperture without data extrapolation, because no Fourier transform is involved. Simulations and experiments demonstrate the effectiveness of the algorithm. The statistics of the simulation results show satisfactory accuracy in detecting tilt-shift errors. Comparisons of measurements with and without environmental vibration show that the proposed algorithm can compensate tilt-shift errors and retrieve the wavefront phase accurately. The algorithm thus provides an approach to wavefront-phase retrieval for temporal PSI in a vibrating environment. (paper)
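
    A minimal sketch (assumed notation, not the authors' code) of the least-squares phase-retrieval step once the per-frame phase shifts are known, e.g. after the SCPS tilt detection; in the presence of tilt the shifts become pixel-dependent and the same normal equations are assembled and solved pixel by pixel.

      # Per-pixel model: I_k = A + B*cos(phi + delta_k)
      #                      = A + (B*cos phi)*cos(delta_k) - (B*sin phi)*sin(delta_k)
      import numpy as np

      def retrieve_phase(frames, deltas):
          """frames: (K, H, W) interferograms; deltas: (K,) known shifts in radians."""
          K = len(deltas)
          M = np.column_stack([np.ones(K), np.cos(deltas), -np.sin(deltas)])  # (K, 3)
          pixels = frames.reshape(K, -1)                                      # (K, H*W)
          coeffs, *_ = np.linalg.lstsq(M, pixels, rcond=None)  # [A, B*cos phi, B*sin phi]
          phi = np.arctan2(coeffs[2], coeffs[1])               # wrapped phase per pixel
          return phi.reshape(frames.shape[1:])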

  15. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  16. A Multi-Pulse Front-End Rectifier System with Electronic Phase-Shifting for Harmonic Mitigation in Motor Drive Applications

    DEFF Research Database (Denmark)

    Zare, Firuz; Davari, Pooya; Blaabjerg, Frede

    2016-01-01

    In this paper, an electronic phase-shifting strategy has been optimized for a multi-parallel configuration of line-commutated rectifiers with a common dc-bus voltage used in motor drive applications. This feature makes the performance of the system independent of the load profile and maximizes its...

  17. Effective slip for Stokes flow between two grooved walls with an arbitrary phase shift

    Energy Technology Data Exchange (ETDEWEB)

    Ng, Chiu-On, E-mail: cong@hku.hk [Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road (Hong Kong)

    2017-04-15

    This work aims to determine how the effective slip length for a wall-bounded flow may depend on, among other geometrical parameters, the phase shift between patterns on the two walls. An analytical model is developed for Stokes flow through a channel bounded by walls patterned with a regular array of rectangular ribs and grooves, where the patterns on the two walls can be misaligned by any phase shift. This study incorporates several previous studies as limiting or special cases. It is shown that the phase shift can have qualitatively different effects on the flow rate and effective slip length, depending on the flow direction. In a narrow channel, increasing the phase shift may mildly decrease the flow rate and effective slip length for flow parallel to the grooves, but can dramatically increase the flow rate and effective slip length for flow transverse to the grooves. It is found that unless the channel height is much larger than the period of the wall pattern, the effect due to wall confinement has to be taken into account on evaluating the effective slip lengths. (paper)

  18. Parallel, distributed and GPU computing technologies in single-particle electron microscopy

    International Nuclear Information System (INIS)

    Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger

    2009-01-01

    An introduction to the current paradigm shift towards concurrency in software. Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today’s technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined

  19. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  20. BitPAl: a bit-parallel, general integer-scoring sequence alignment algorithm.

    Science.gov (United States)

    Loving, Joshua; Hernandez, Yozen; Benson, Gary

    2014-11-15

    Mapping of high-throughput sequencing data and other bulk sequence comparison applications have motivated a search for high-efficiency sequence alignment algorithms. The bit-parallel approach represents individual cells in an alignment scoring matrix as bits in computer words and emulates the calculation of scores by a series of logic operations composed of AND, OR, XOR, complement, shift and addition. Bit-parallelism has been successfully applied to the longest common subsequence (LCS) and edit-distance problems, producing fast algorithms in practice. We have developed BitPAl, a bit-parallel algorithm for general, integer-scoring global alignment. Integer-scoring schemes assign integer weights for match, mismatch and insertion/deletion. The BitPAl method uses structural properties in the relationship between adjacent scores in the scoring matrix to construct classes of efficient algorithms, each designed for a particular set of weights. In timed tests, we show that BitPAl runs 7-25 times faster than a standard iterative algorithm. Source code is freely available for download at http://lobstah.bu.edu/BitPAl/BitPAl.html. BitPAl is implemented in C and runs on all major operating systems. jloving@bu.edu or yhernand@bu.edu or gbenson@bu.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
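
    To make the bit-parallel idea concrete, here is a hedged Python sketch of the classic bit-parallel LCS-length recurrence (Allison-Dix/Hyyro style), not BitPAl's general integer-scoring scheme: a whole row of the dynamic-programming matrix is packed into the bits of one integer and updated with a handful of word operations per character.

      def lcs_length_bitparallel(a, b):
          m = len(a)
          mask = (1 << m) - 1
          pm = {}                                  # match vectors: bit i set if a[i] == c
          for i, ch in enumerate(a):
              pm[ch] = pm.get(ch, 0) | (1 << i)
          v = mask                                 # all ones: nothing matched yet
          for ch in b:
              u = v & pm.get(ch, 0)
              v = ((v + u) | (v - u)) & mask       # one add, one subtract, two logical ops
          return m - bin(v).count("1")             # each cleared bit is one LCS unit

      assert lcs_length_bitparallel("abc", "abc") == 3
      assert lcs_length_bitparallel("abc", "cba") == 1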

  1. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  2. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  3. A 3D inversion for all-space magnetotelluric data with static shift correction

    Science.gov (United States)

    Zhang, Kun

    2017-04-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent MT parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed during the inversion. The method is an automatic computer processing technique with no extra cost; it avoids additional field work and indoor processing and gives good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure to the algorithm, improved its computational efficiency, reduced its memory requirements and added topographic and marine factors. As a result, the 3D inversion can run on an ordinary PC with high efficiency and accuracy, and MT data from surface stations, seabed stations and underground stations can all be used in the inversion algorithm.

  4. Interactions of numerical and temporal stimulus characteristics on the control of response location by brief flashes of light.

    Science.gov (United States)

    Fetterman, J Gregor; Killeen, P Richard

    2011-09-01

    Pigeons pecked on three keys, responses to one of which could be reinforced after 3 flashes of the houselight, to a second key after 6, and to a third key after 12. The flashes were arranged according to variable-interval schedules. Response allocation among the keys was a function of the number of flashes. When flashes were omitted, transitions occurred very late. Increasing flash duration produced a leftward shift in the transitions along a number axis. Increasing reinforcement probability produced a leftward shift, and decreasing reinforcement probability produced a rightward shift. Intermixing different flash rates within sessions separated allocations: Faster flash rates shifted the functions sooner in real time, but later in terms of flash count, and conversely for slower flash rates. A model of control by fading memories of number and time was proposed.

  5. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements are reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
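
    As a generic illustration of the kind of mapping problem involved (not the paper's improved algorithm), the sketch below assigns m pipeline modules, kept in order, to n processors so that the most heavily loaded processor is as light as possible, using binary search on the bottleneck value plus a greedy feasibility check.

      def min_bottleneck(weights, n_procs):
          def feasible(cap):
              used, load = 1, 0
              for w in weights:
                  if w > cap:
                      return False
                  if load + w > cap:       # close this processor, open the next
                      used += 1
                      load = 0
                  load += w
              return used <= n_procs
          lo, hi = max(weights), sum(weights)
          while lo < hi:                   # binary search for the smallest feasible bottleneck
              mid = (lo + hi) // 2
              if feasible(mid):
                  hi = mid
              else:
                  lo = mid + 1
          return lo

      assert min_bottleneck([2, 3, 7, 1, 4], 2) == 12   # e.g. {2, 3, 7} | {1, 4}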

  6. One-loop mass shifts in O(32) open superstring theory

    International Nuclear Information System (INIS)

    Yamamoto, Hisashi.

    1987-08-01

    One-loop amplitudes of O(N) open superstring with emission of massive bosons are studied. Divergences appearing at λ = 0 (λ: the over-all Teichmueller parameter) are shown to be canceled if N = 32 just as in the massless case. We explicitly evaluate the two-point on-shell amplitudes for all the levels of bosons lying on the leading (m² = 2l, J = l + 1; m: mass, J: spin, l: level number of an excited state) and the next-to-leading (m² = 2l, J = l) Regge trajectories and observe that they are nonvanishing even at N = 32. This implies that O(32) open superstring one-loop amplitudes with massive bosons generally suffer from external-line divergences. Further, the obtained expressions of the on-shell self energies (mass shifts δm²(l)) seem to have nontrivial dependences on l (being not proportional to l), although mass degeneracies remain. This strongly suggests that the Regge trajectories form a set of parallel polygonal lines at one-loop level, so that the mass shifts cannot be absorbed by a shift of the slope parameter. The divergences would have to be cured by vertex operator renormalizations at every excited level. (author)

  7. An isodose shift technique for obliquely incident electron beams

    International Nuclear Information System (INIS)

    Ulin, K.; Sternick, E.S.

    1989-01-01

    It is well known that when an electron beam is incident obliquely on the surface of a phantom, the depth dose curve measured normal to the surface is shifted toward the surface. Based on geometrical arguments alone, the depth of the nth isodose line for an electron beam incident at an angle θ should be equal to the product of cos θ and the depth of the nth isodose line at normal incidence. This method, however, ignores the effects of scatter and can lead to significant errors in isodose placement for beams at large angles of incidence. A semi-empirical functional relationship and a table of isodose shift factors have been developed with which one may easily calculate the depth of any isodose line for beams at incident angles of 0 degree to 60 degree. The isodose shift factors are tabulated in terms of beam energy (6-22 MeV) and isodose line (10%-90%) and are shown to be relatively independent of beam size and incident angle for angles <60 degree. Extensive measurements have been made on a Varian Clinac 2500 linear accelerator with a parallel-plate chamber and polystyrene phantom. The dependence of the chamber response on beam angulation has been checked, and the scaling factor of the polystyrene phantom has been determined to be equal to 1.00
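
    As a quick numerical illustration of the geometric rule (the numbers are invented for illustration, not taken from the paper): if the 80% isodose lies at a depth of 2.5 cm for normal incidence, the purely geometric estimate at θ = 40° places it at 2.5 × cos 40° ≈ 1.9 cm; the tabulated shift factors exist precisely because scatter makes this naive estimate increasingly inaccurate at large angles.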

  8. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  9. A design concept of parallel elasticity extracted from biological muscles for engineered actuators.

    Science.gov (United States)

    Chen, Jie; Jin, Hongzhe; Iida, Fumiya; Zhao, Jie

    2016-08-23

    Series elastic actuation that takes inspiration from biological muscle-tendon units has been extensively studied and used to address the challenges (e.g. energy efficiency, robustness) existing in purely stiff robots. However, there also exists another form of passive property in biological actuation, parallel elasticity within muscles themselves, and our knowledge of it is limited: for example, there is still no general design strategy for the elasticity profile. When we look at nature, on the other hand, there seems a universal agreement in biological systems: experimental evidence has suggested that a concave-upward elasticity behaviour is exhibited within the muscles of animals. Seeking to draw possible design clues for elasticity in parallel with actuators, we use a simplified joint model to investigate the mechanisms behind this biologically universal preference of muscles. Actuation of the model is identified from general biological joints and further reduced with a specific focus on muscle elasticity aspects, for the sake of easy implementation. By examining various elasticity scenarios, one without elasticity and three with elasticity of different profiles, we find that parallel elasticity generally exerts contradictory influences on energy efficiency and disturbance rejection, due to the mechanical impedance shift thus caused. The trade-off analysis between them also reveals that concave parallel elasticity is able to achieve a more advantageous balance than linear and convex ones. It is expected that the results could contribute to our further understanding of muscle elasticity and provide a theoretical guideline on how to properly design parallel elasticity behaviours for engineering systems such as artificial actuators and robotic joints.

  10. What is adaptive about adaptive decision making? A parallel constraint satisfaction account.

    Science.gov (United States)

    Glöckner, Andreas; Hilbig, Benjamin E; Jekel, Marc

    2014-12-01

    There is broad consensus that human cognition is adaptive. However, the vital question of how exactly this adaptivity is achieved has remained largely open. Herein, we contrast two frameworks which account for adaptive decision making, namely broad and general single-mechanism accounts vs. multi-strategy accounts. We propose and fully specify a single-mechanism model for decision making based on parallel constraint satisfaction processes (PCS-DM) and contrast it theoretically and empirically against a multi-strategy account. To achieve sufficiently sensitive tests, we rely on a multiple-measure methodology including choice, reaction time, and confidence data as well as eye-tracking. Results show that manipulating the environmental structure produces clear adaptive shifts in choice patterns - as both frameworks would predict. However, results on the process level (reaction time, confidence), in information acquisition (eye-tracking), and from cross-predicting choice consistently corroborate single-mechanisms accounts in general, and the proposed parallel constraint satisfaction model for decision making in particular. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
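
    A hedged Python sketch of the template idea in the rsync spirit (illustrative only, not the patented implementation): each node hashes fixed-size blocks of its checkpoint state and stores or transmits only the blocks whose checksums differ from the previously produced template checkpoint.

      import hashlib, zlib

      BLOCK = 64 * 1024  # assumed block size

      def block_digests(data):
          return [hashlib.sha1(data[i:i + BLOCK]).digest()
                  for i in range(0, len(data), BLOCK)]

      def delta_checkpoint(node_state, template_state):
          template = block_digests(template_state)
          delta = {}
          for idx, digest in enumerate(block_digests(node_state)):
              if idx >= len(template) or digest != template[idx]:
                  block = node_state[idx * BLOCK:(idx + 1) * BLOCK]
                  delta[idx] = zlib.compress(block)   # non-lossy compression, as in the abstract
          return delta                                # only the changed blocks are kept or sent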

  12. The Memory Trace Supporting Lose-Shift Responding Decays Rapidly after Reward Omission and Is Distinct from Other Learning Mechanisms in Rats.

    Science.gov (United States)

    Gruber, Aaron J; Thapa, Rajat

    2016-01-01

    The propensity of animals to shift choices immediately after unexpectedly poor reinforcement outcomes is a pervasive strategy across species and tasks. We report here that the memory supporting such lose-shift responding in rats rapidly decays during the intertrial interval and persists throughout training and testing on a binary choice task, despite being a suboptimal strategy. Lose-shift responding is not positively correlated with the prevalence and temporal dependence of win-stay responding, and it is inconsistent with predictions of reinforcement learning on the task. These data provide further evidence that win-stay and lose-shift are mediated by dissociated neural mechanisms and indicate that lose-shift responding presents a potential confound for the study of choice in the many operant choice tasks with short intertrial intervals. We propose that this immediate lose-shift responding is an intrinsic feature of the brain's choice mechanisms that is engaged as a choice reflex and works in parallel with reinforcement learning and other control mechanisms to guide action selection.

  13. Advances in non-Cartesian parallel magnetic resonance imaging using the GRAPPA operator

    International Nuclear Information System (INIS)

    Seiberlich, Nicole

    2008-01-01

    This thesis has presented several new non-Cartesian parallel imaging methods which simplify both gridding and the reconstruction of images from undersampled data. A novel approach which uses the concepts of parallel imaging to grid data sampled along a non-Cartesian trajectory called GRAPPA Operator Gridding (GROG) is described. GROG shifts any acquired k-space data point to its nearest Cartesian location, thereby converting non-Cartesian to Cartesian data. The only requirements for GROG are a multi-channel acquisition and a calibration dataset for the determination of the GROG weights. Then an extension of GRAPPA Operator Gridding, namely Self-Calibrating GRAPPA Operator Gridding (SC-GROG) is discussed. SC-GROG is a method by which non-Cartesian data can be gridded using spatial information from a multi-channel coil array without the need for an additional calibration dataset, as required in standard GROG. Although GROG can be used to grid undersampled datasets, it is important to note that this method uses parallel imaging only for gridding, and not to reconstruct artifact-free images from undersampled data. Thereafter a simple, novel method for performing modified Cartesian GRAPPA reconstructions on undersampled non-Cartesian k-space data gridded using GROG to arrive at a non-aliased image is introduced. Because the undersampled non-Cartesian data cannot be reconstructed using a single GRAPPA kernel, several Cartesian patterns are selected for the reconstruction. Finally a novel method of using GROG to mimic the bunched phase encoding acquisition (BPE) scheme is discussed. In MRI, it is generally assumed that an artifact-free image can be reconstructed only from sampled points which fulfill the Nyquist criterion. However, the BPE reconstruction is based on the Generalized Sampling Theorem of Papoulis, which states that a continuous signal can be reconstructed from sampled points as long as the points are on average sampled at the Nyquist frequency. A novel

  14. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  15. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  16. Immigration Ethnic Diversity and Political Outcomes

    DEFF Research Database (Denmark)

    Harmon, Nikolaj Arpe

    2017-01-01

    I study the impact of immigration and increasing ethnic diversity on political outcomes in immigrant-receiving countries, focusing on immigration and election outcomes in Danish municipalities 1981-2001. A rich set of control variables isolates ethnic diversity effects from those of other immigrant characteristics, and a novel IV strategy based on historical housing stock data addresses issues of endogenous location choices of immigrants. Increases in local ethnic diversity lead to rightward shifts in election outcomes by shifting electoral support away from traditional "big government" left-wing parties and towards anti-immigrant nationalist parties in particular. These effects appear in both local and national elections.

  17. Immigration Ethnic Diversity and Political Outcomes: Evidence from Denmark

    DEFF Research Database (Denmark)

    Harmon, Nikolaj Arpe

    I study the impact of immigration and increasing ethnic diversity on political outcomes in immigrant-receiving countries, focusing on immigration and election outcomes in Danish municipalities 1981-2001. A rich set of control variables isolates ethnic diversity effects from those of other immigrant characteristics, and a novel IV strategy based on historical housing stock data addresses issues of endogenous location choices of immigrants. Increases in local ethnic diversity lead to rightward shifts in election outcomes by shifting electoral support away from traditional "big government" left-wing parties and towards anti-immigrant nationalist parties in particular. These effects appear in both local and national elections.

  18. Ovariectomy induces a shift in fuel availability and metabolism in the hippocampus of the female transgenic model of familial Alzheimer's.

    Science.gov (United States)

    Ding, Fan; Yao, Jia; Zhao, Liqin; Mao, Zisu; Chen, Shuhua; Brinton, Roberta Diaz

    2013-01-01

    Previously, we demonstrated that reproductive senescence in female triple transgenic Alzheimer's (3×TgAD) mice was paralleled by a shift towards a ketogenic profile with a concomitant decline in mitochondrial activity in brain, suggesting a potential association between ovarian hormone loss and alteration in the bioenergetic profile of the brain. In the present study, we investigated the impact of ovariectomy and 17β-estradiol replacement on brain energy substrate availability and metabolism in a mouse model of familial Alzheimer's (3×TgAD). Results of these analyses indicated that ovarian hormones deprivation by ovariectomy (OVX) induced a significant decrease in brain glucose uptake indicated by decline in 2-[(18)F]fluoro-2-deoxy-D-glucose uptake measured by microPET-imaging. Mechanistically, OVX induced a significant decline in blood-brain-barrier specific glucose transporter expression, hexokinase expression and activity. The decline in glucose availability was accompanied by a significant rise in glial LDH5 expression and LDH5/LDH1 ratio indicative of lactate generation and utilization. In parallel, a significant rise in ketone body concentration in serum occurred which was coupled to an increase in neuronal MCT2 expression and 3-oxoacid-CoA transferase (SCOT) required for conversion of ketone bodies to acetyl-CoA. In addition, OVX-induced decline in glucose metabolism was paralleled by a significant increase in Aβ oligomer levels. 17β-estradiol preserved brain glucose-driven metabolic capacity and partially prevented the OVX-induced shift in bioenergetic substrate as evidenced by glucose uptake, glucose transporter expression and gene expression associated with aerobic glycolysis. 17β-estradiol also partially prevented the OVX-induced increase in Aβ oligomer levels. Collectively, these data indicate that ovarian hormone loss in a preclinical model of Alzheimer's was paralleled by a shift towards the metabolic pathway required for metabolism of

  19. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  20. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  1. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
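
    A hedged Python sketch of a block-decomposed sieve in the same spirit (illustrative, not the hypercube implementation described above): base primes up to sqrt(N) are found serially, then each worker marks composites in its own sub-interval; on an ensemble machine each segment would reside on a separate processing element.

      from math import isqrt
      from multiprocessing import Pool

      def base_primes(limit):
          flags = bytearray([1]) * (limit + 1)
          flags[0:2] = b"\x00\x00"
          for p in range(2, isqrt(limit) + 1):
              if flags[p]:
                  flags[p * p::p] = bytearray(len(flags[p * p::p]))
          return [i for i, f in enumerate(flags) if f]

      def sieve_segment(args):
          lo, hi, primes = args                     # half-open segment [lo, hi)
          flags = bytearray([1]) * (hi - lo)
          for p in primes:
              start = max(p * p, (lo + p - 1) // p * p)
              flags[start - lo::p] = bytearray(len(flags[start - lo::p]))
          return [lo + i for i, f in enumerate(flags) if f]

      def parallel_sieve(n, n_procs=4):
          primes = base_primes(isqrt(n))
          step = (n - 1) // n_procs + 1
          segments = [(lo, min(lo + step, n + 1), primes) for lo in range(2, n + 1, step)]
          with Pool(n_procs) as pool:
              return [p for seg in pool.map(sieve_segment, segments) for p in seg]

      if __name__ == "__main__":
          assert parallel_sieve(100) == base_primes(100)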

  2. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  3. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-11-12

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint composed of a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying in dependence upon the call site statistics a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application.

  4. Conversion and matched filter approximations for serial minimum-shift keyed modulation

    Science.gov (United States)

    Ziemer, R. E.; Ryan, C. R.; Stilwell, J. H.

    1982-01-01

    Serial minimum-shift keyed (MSK) modulation, a technique for generating and detecting MSK using series filtering, is ideally suited for high data rate applications provided the required conversion and matched filters can be closely approximated. Low-pass implementations of these filters as parallel inphase- and quadrature-mixer structures are characterized in this paper in terms of signal-to-noise ratio (SNR) degradation from ideal and envelope deviation. Several hardware implementation techniques utilizing microwave devices or lumped elements are presented. Optimization of parameter values results in realizations whose SNR degradation is less than 0.5 dB at error probabilities of 10^-6.

  5. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of … in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.

  6. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  7. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
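
    As a concrete instance of one of the patterns named above, a hedged Python sketch of an inclusive prefix scan in the Hillis-Steele style: each of the log2(n) sweeps consists of independent element-wise additions, which is what lets the pattern map onto SIMD or GPU hardware.

      def inclusive_scan(xs):
          out = list(xs)
          stride = 1
          while stride < len(out):
              prev = out[:]                    # read the previous sweep so updates stay independent
              for i in range(stride, len(out)):
                  out[i] = prev[i] + prev[i - stride]
              stride *= 2
          return out

      assert inclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]) == [3, 4, 11, 11, 15, 16, 22, 25]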

  8. Recent Improvements to the IMPACT-T Parallel Particle Tracking Code

    International Nuclear Information System (INIS)

    Qiang, J.; Pogorelov, I.V.; Ryne, R.

    2006-01-01

    The IMPACT-T code is a parallel three-dimensional quasi-static beam dynamics code for modeling high brightness beams in photoinjectors and RF linacs. Developed under the US DOE Scientific Discovery through Advanced Computing (SciDAC) program, it includes several key features including a self-consistent calculation of 3D space-charge forces using a shifted and integrated Green function method, multiple energy bins for beams with large energy spread, and models for treating RF standing wave and traveling wave structures. In this paper, we report on recent improvements to the IMPACT-T code including modeling traveling wave structures, short-range transverse and longitudinal wakefields, and longitudinal coherent synchrotron radiation through bending magnets

  9. Photoluminescence spectra of n-doped double quantum wells in a parallel magnetic field

    International Nuclear Information System (INIS)

    Huang, D.; Lyo, S.K.

    1999-01-01

    We show that the photoluminescence (PL) line shapes from tunnel-split ground sublevels of n-doped thin double quantum wells (DQWs) are sensitively modulated by an in-plane magnetic field B∥ at low temperatures (T). The modulation is caused by the B∥-induced distortion of the electronic structure. The latter arises from the relative shift of the energy-dispersion parabolas of the two quantum wells (QWs) in k space, both in the conduction and valence bands, and the formation of an anticrossing gap in the conduction band. Using a self-consistent density-functional theory, the PL spectra and the band-gap narrowing are calculated as a function of B∥, T, and the homogeneous linewidths. The PL spectra from symmetric and asymmetric DQWs are found to show strikingly different behavior. In symmetric DQWs with a high density of electrons, two PL peaks are obtained at B∥ = 0, representing the interband transitions between the pair of the upper (i.e., antisymmetric) levels and that of the lower (i.e., symmetric) levels of the ground doublets. As B∥ increases, the upper PL peak develops an N-type kink, namely a maximum followed by a minimum, and merges with the lower peak, which rises monotonically as a function of B∥ due to the diamagnetic energy. When the electron density is low, however, only a single PL peak, arising from the transitions between the lower levels, is obtained. In asymmetric DQWs, the PL spectra show mainly one dominant peak at all B∥. In this case, the holes are localized in one of the QWs at low T and recombine only with the electrons in the same QW. At high electron densities, the upper PL peak shows an N-type kink as in symmetric DQWs. However, the lower peak is absent at low B∥ because it arises from the inter-QW transitions. Reasonable agreement is obtained with recent

  10. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to here as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  11. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. A possibility of parallel and anti-parallel diffraction measurements on ...

    Indian Academy of Sciences (India)

    However, a bent perfect crystal (BPC) monochromator at monochromatic focusing condition can provide a quite flat and equal resolution property at both parallel and anti-parallel positions and thus one can have a chance to use both sides for the diffraction experiment. From the data of the FWHM and the / measured ...

  13. Parallel implementation of the PHOENIX generalized stellar atmosphere program. II. Wavelength parallelization

    International Nuclear Information System (INIS)

    Baron, E.; Hauschildt, Peter H.

    1998-01-01

    We describe an important addition to the parallel implementation of our generalized nonlocal thermodynamic equilibrium (NLTE) stellar atmosphere and radiative transfer computer program PHOENIX. In a previous paper in this series we described data and task parallel algorithms we have developed for radiative transfer, spectral line opacity, and NLTE opacity and rate calculations. These algorithms divided the work spatially or by spectral lines, that is, distributing the radial zones, individual spectral lines, or characteristic rays among different processors and employ, in addition, task parallelism for logically independent functions (such as atomic and molecular line opacities). For finite, monotonic velocity fields, the radiative transfer equation is an initial value problem in wavelength, and hence each wavelength point depends upon the previous one. However, for sophisticated NLTE models of both static and moving atmospheres needed to accurately describe, e.g., novae and supernovae, the number of wavelength points is very large (200,000 - 300,000) and hence parallelization over wavelength can lead both to considerable speedup in calculation time and the ability to make use of the aggregate memory available on massively parallel supercomputers. Here, we describe an implementation of a pipelined design for the wavelength parallelization of PHOENIX, where the necessary data from the processor working on a previous wavelength point is sent to the processor working on the succeeding wavelength point as soon as it is known. Our implementation uses a MIMD design based on a relatively small number of standard message passing interface (MPI) library calls and is fully portable between serial and parallel computers. copyright 1998 The American Astronomical Society
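
    A hedged sketch of such a wavelength pipeline using the mpi4py package (illustrative only; PHOENIX's actual MPI code is not reproduced here): wavelength points are dealt out round-robin, and before solving point i a rank receives the state of point i-1 from its neighbour, forwarding its own result as soon as it is available.

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      N_WAVELENGTHS = 1000                      # assumed number of wavelength points

      def solve_wavelength_point(i, previous_state):
          # placeholder for the radiative-transfer solution at point i, which
          # depends on the solution at point i-1 for a monotonic velocity field
          return {"index": i, "field": (previous_state["field"] if previous_state else 0.0) + 1.0}

      state = None
      for i in range(rank, N_WAVELENGTHS, size):
          if i > 0:
              state = comm.recv(source=(rank - 1) % size, tag=i - 1)   # wait for predecessor point
          state = solve_wavelength_point(i, state)
          if i + 1 < N_WAVELENGTHS:
              comm.send(state, dest=(rank + 1) % size, tag=i)          # hand off to the next point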

  14. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
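
    A hedged Python sketch of the serial k-means++ seeding logic that such implementations parallelize: the first centre is chosen uniformly at random, and each later centre is drawn with probability proportional to the squared distance to the nearest centre already chosen; the distance update is the step that parallelizes naturally across data points (GPU threads, OpenMP threads, or XMT streams in the record's versions).

      import numpy as np

      def kmeanspp_seeds(points, k, rng=None):
          rng = rng or np.random.default_rng()
          n = len(points)
          centers = [points[rng.integers(n)]]          # first centre: uniform random
          d2 = np.full(n, np.inf)
          for _ in range(k - 1):
              # embarrassingly parallel over points: distance to the newest centre
              d2 = np.minimum(d2, np.sum((points - centers[-1]) ** 2, axis=1))
              centers.append(points[rng.choice(n, p=d2 / d2.sum())])
          return np.array(centers)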

  15. Second-order processing of four-stroke apparent motion.

    Science.gov (United States)

    Mather, G; Murdoch, L

    1999-05-01

    In four-stroke apparent motion displays, pattern elements oscillate between two adjacent positions and synchronously reverse in contrast, but appear to move unidirectionally. For example, if rightward shifts preserve contrast but leftward shifts reverse contrast, consistent rightward motion is seen. In conventional first-order displays, elements reverse in luminance contrast (e.g. light elements become dark, and vice-versa). The resulting perception can be explained by responses in elementary motion detectors turned to spatio-temporal orientation. Second-order motion displays contain texture-defined elements, and there is some evidence that they excite second-order motion detectors that extract spatio-temporal orientation following the application of a non-linear 'texture-grabbing' transform by the visual system. We generated a variety of second-order four-stroke displays, containing texture-contrast reversals instead of luminance contrast reversals, and used their effectiveness as a diagnostic test for the presence of various forms of non-linear transform in the second-order motion system. Displays containing only forward or only reversed phi motion sequences were also tested. Displays defined by variation in luminance, contrast, orientation, and size were effective. Displays defined by variation in motion, dynamism, and stereo were partially or wholly ineffective. Results obtained with contrast-reversing and four-stroke displays indicate that only relatively simple non-linear transforms (involving spatial filtering and rectification) are available during second-order energy-based motion analysis.

  16. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)

  17. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that are probably not limited to this particular application and that seem unlikely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  18. Shift Work Disorder and Mental and Physical Effects of Shift Work

    Directory of Open Access Journals (Sweden)

    Pinar Guzel Ozdemir

    2018-03-01

    Full Text Available With the growing prevalence of shift work all over the world, the relationship between irregular lifestyles and disrupted daily rhythms is being investigated in shift workers and their families. The effect of shift work on physical and mental health has become a very important field of research in recent years. The onset and persistence of medical complications in shift workers involve impaired synchronization between work schedule rhythms and the circadian clock. In this context, studies have been carried out showing an increased risk of sleep-wake disorders, gastrointestinal problems, and cardiovascular diseases. There is little information about the actual frequency, health effects, and treatment of shift work disorder, a circadian rhythm sleep disorder. Shift work disorder includes insomnia and/or excessive sleepiness related to the work schedule. The aim of this review is to describe the physical and mental effects of shift work and to provide information about the diagnosis, clinical features, and treatment of shift work disorder.

  19. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.

  20. Influence of Paralleling Dies and Paralleling Half-Bridges on Transient Current Distribution in Multichip Power Modules

    DEFF Research Database (Denmark)

    Li, Helong; Zhou, Wei; Wang, Xiongfei

    2018-01-01

    This paper addresses the transient current distribution in the multichip half-bridge power modules, where two types of paralleling connections with different current commutation mechanisms are considered: paralleling dies and paralleling half-bridges. It reveals that with paralleling dies, both t...

  1. Backfilling with Fairness and Slack for Parallel Job Scheduling

    International Nuclear Information System (INIS)

    Sodan, Angela C; Wei Jin

    2010-01-01

    Parallel job scheduling typically combines a basic policy like FCFS with backfilling, i.e. moving jobs to a position earlier than their regular scheduling position if they do not delay the jobs ahead in the queue according to the rules of the backfilling approach applied. Commonly used are conservative and easy backfilling, which offer either worse response times but better predictability, or better response times but poor predictability. The paper proposes a relaxation of conservative backfilling that permits jobs to be shifted within certain constraints in order to backfill more jobs, reduce fragmentation, and subsequently obtain better response times. At the same time, deviation from fairness is kept low and predictability remains high. The results of the experimental evaluation show that these goals are met, with response-time performance lying, as expected, between conservative and easy backfilling.

  2. Backfilling with Fairness and Slack for Parallel Job Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Sodan, Angela C; Wei Jin, E-mail: acsodan@uwindsor.ca [University of Windsor, Computer Science, Windsor, Ontario (Canada)

    2010-11-01

    Parallel job scheduling typically combines a basic policy like FCFS with backfilling, i.e. moving jobs to a position earlier than their regular scheduling position if they do not delay the jobs ahead in the queue according to the rules of the backfilling approach applied. Commonly used are conservative and easy backfilling, which offer either worse response times but better predictability, or better response times but poor predictability. The paper proposes a relaxation of conservative backfilling that permits jobs to be shifted within certain constraints in order to backfill more jobs, reduce fragmentation, and subsequently obtain better response times. At the same time, deviation from fairness is kept low and predictability remains high. The results of the experimental evaluation show that these goals are met, with response-time performance lying, as expected, between conservative and easy backfilling.
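
    For orientation, the toy scheduler below illustrates plain conservative backfilling on a homogeneous cluster, the baseline policy that the paper relaxes: jobs receive reservations in FCFS order, and a later job may only be pulled forward into a hole that does not delay any existing reservation. The job fields, time units and brute-force feasibility test are purely illustrative.

      from dataclasses import dataclass

      @dataclass
      class Job:
          name: str
          nodes: int      # nodes requested
          runtime: int    # user runtime estimate (time units)

      def conservative_backfill(jobs, total_nodes):
          """Assign start times in queue order; each later job may start earlier
          only where it fits without delaying any previously made reservation."""
          reservations = []                      # (start, end, nodes) already booked
          schedule = {}
          for job in jobs:
              t = 0
              while True:
                  # Peak node usage during [t, t + runtime) given existing reservations.
                  busy = max(
                      (sum(n for s, e, n in reservations if s <= u < e)
                       for u in range(t, t + job.runtime)),
                      default=0,
                  )
                  if busy + job.nodes <= total_nodes:
                      break
                  t += 1
              reservations.append((t, t + job.runtime, job.nodes))
              schedule[job.name] = t
          return schedule

      jobs = [Job("A", 4, 5), Job("B", 6, 3), Job("C", 2, 2), Job("D", 3, 4)]
      # C backfills alongside A; D must wait so it does not delay B's reservation.
      print(conservative_backfill(jobs, total_nodes=8))   # {'A': 0, 'B': 5, 'C': 0, 'D': 8}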

  3. Non-occupational physical activity levels of shift workers compared with non-shift workers

    Science.gov (United States)

    Loef, Bette; Hulsegge, Gerben; Wendel-Vos, G C Wanda; Verschuren, W M Monique; Bakker, Marije F; van der Beek, Allard J; Proper, Karin I

    2017-01-01

    Objectives: Lack of physical activity (PA) has been hypothesised as an underlying mechanism in the adverse health effects of shift work. Therefore, our aim was to compare non-occupational PA levels between shift workers and non-shift workers. Furthermore, exposure–response relationships for frequency of night shifts and years of shift work regarding non-occupational PA levels were studied. Methods: Data of 5980 non-shift workers and 532 shift workers from the European Prospective Investigation into Cancer and Nutrition-Netherlands (EPIC-NL) were used in these cross-sectional analyses. Time spent (hours/week) in different PA types (walking/cycling/exercise/chores) and intensities (moderate/vigorous) were calculated based on self-reported PA. Furthermore, sports were operationalised as: playing sports (no/yes), individual versus non-individual sports, and non-vigorous-intensity versus vigorous-intensity sports. PA levels were compared between shift workers and non-shift workers using Generalized Estimating Equations and logistic regression. Results: Shift workers reported spending more time walking than non-shift workers (B=2.3 (95% CI 1.2 to 3.4)), but shift work was not associated with other PA types and any of the sports activities. Shift workers who worked 1–4 night shifts/month (B=2.4 (95% CI 0.6 to 4.3)) and ≥5 night shifts/month (B=3.7 (95% CI 1.8 to 5.6)) spent more time walking than non-shift workers. No exposure–response relationships were found between years of shift work and PA levels. Conclusions: Shift workers spent more time walking than non-shift workers, but we observed no differences in other non-occupational PA levels. To better understand if and how PA plays a role in the negative health consequences of shift work, our findings need to be confirmed in future studies. PMID:27872151

  4. Parallel sites implicate functional convergence of the hearing gene prestin among echolocating mammals.

    Science.gov (United States)

    Liu, Zhen; Qi, Fei-Yan; Zhou, Xin; Ren, Hai-Qing; Shi, Peng

    2014-09-01

    Echolocation is a sensory system whereby certain mammals navigate and forage using sound waves, usually in environments where visibility is limited. Curiously, echolocation has evolved independently in bats and whales, which occupy entirely different environments. Based on this phenotypic convergence, recent studies identified several echolocation-related genes with parallel sites at the protein sequence level among different echolocating mammals, and among these, prestin seems the most promising. Although previous studies analyzed the evolutionary mechanism of prestin, the functional roles of the parallel sites in the evolution of mammalian echolocation are not clear. By functional assays, we show that a key parameter of prestin function, 1/α, is increased in all echolocating mammals and that the N7T parallel substitution accounted for this functional convergence. Moreover, another parameter, V1/2, was shifted toward the depolarization direction in a toothed whale, the bottlenose dolphin (Tursiops truncatus) and a constant-frequency (CF) bat, the Stoliczka's trident bat (Aselliscus stoliczkanus). The parallel site of I384T between toothed whales and CF bats was responsible for this functional convergence. Furthermore, the two parameters (1/α and V1/2) were correlated with mammalian high-frequency hearing, suggesting that the convergent changes of the prestin function in echolocating mammals may play important roles in mammalian echolocation. To our knowledge, these findings present the functional patterns of echolocation-related genes in echolocating mammals for the first time and rigorously demonstrate adaptive parallel evolution at the protein sequence level, paving the way to insights into the molecular mechanism underlying mammalian echolocation. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high-level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  6. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  7. Data communications in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-29

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.

  8. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  9. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  10. Reduced Tolerance to Night Shift in Chronic Shift Workers: Insight From Fractal Regulation.

    Science.gov (United States)

    Li, Peng; Morris, Christopher J; Patxot, Melissa; Yugay, Tatiana; Mistretta, Joseph; Purvis, Taylor E; Scheer, Frank A J L; Hu, Kun

    2017-07-01

    Healthy physiology is characterized by fractal regulation (FR) that generates similar structures in the fluctuations of physiological outputs at different time scales. Perturbed FR is associated with aging and age-related pathological conditions. Shift work, involving repeated and chronic exposure to misaligned environmental and behavioral cycles, disrupts circadian coordination. We tested whether night shifts perturb FR in motor activity and whether night shifts affect FR in chronic shift workers and non-shift workers differently. We studied 13 chronic shift workers and 14 non-shift workers as controls using both field and in-laboratory experiments. In the in-laboratory study, simulated night shifts were used to induce a misalignment between the endogenous circadian pacemaker and the sleep-wake cycles (ie, circadian misalignment) while environmental conditions and food intake were controlled. In the field study, we found that FR was robust in controls but broke down in shift workers during night shifts, leading to more random activity fluctuations as observed in patients with dementia. The night shift effect was present even 2 days after ending night shifts. The in-laboratory study confirmed that night shifts perturbed FR in chronic shift workers and showed that FR in controls was more resilient to the circadian misalignment. Moreover, FR during real and simulated night shifts was more perturbed in those who started shift work at older ages. Chronic shift work causes night shift intolerance, which is probably linked to the degraded plasticity of the circadian control system. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.

  11. Vectorization, parallelization and porting of nuclear codes (vectorization and parallelization). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Kawai, Wataru; Nemoto, Toshiyuki; Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki; Kawasaki, Nobuo; Yatake, Yo-ichi

    2000-03-01

    Several computer codes in the nuclear field have been vectorized, parallelized and transported on the FUJITSU VPP500 system, the AP3000 system and the Paragon system at Center for Promotion of Computational Science and Engineering in Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this vectorization and parallelization on vector processors part, the vectorization of General Tokamak Circuit Simulation Program code GTCSP, the vectorization and parallelization of Molecular Dynamics NTV (n-particle, Temperature and Velocity) Simulation code MSP2, Eddy Current Analysis code EDDYCAL, Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2 and MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of Monte Carlo N-Particle Transport code MCNP4B2, Plasma Hydrodynamics code using Cubic Interpolated Propagation Method PHCIP and Vectorized Monte Carlo code (continuous energy model / multi-group model) MVP/GMVP on the Paragon are described. In the porting part, the porting of Monte Carlo N-Particle Transport code MCNP4B2 and Reactor Safety Analysis code RELAP5 on the AP3000 are described. (author)

  12. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
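
    The list-ranking primitive mentioned above can be illustrated with the classic pointer-jumping idea: every node adds the rank of the node its pointer currently targets and then doubles the distance its pointer looks ahead, so all ranks are obtained in O(log n) fully parallel rounds. The sketch below is a sequential simulation of those rounds for intuition only; it is not the I/O-efficient PEM algorithm analyzed in the paper.

      def list_rank(succ):
          """Rank every node of a linked list given successor indices.

          succ[i] is the index of the node after i, or -1 for the tail.
          Returns rank[i] = number of links from node i to the tail.
          Each round of the loop is conceptually executed in parallel over
          all nodes; after O(log n) rounds every pointer reaches the tail.
          """
          n = len(succ)
          rank = [0 if succ[i] == -1 else 1 for i in range(n)]
          nxt = list(succ)
          changed = True
          while changed:
              changed = False
              new_rank, new_nxt = rank[:], nxt[:]
              for i in range(n):                 # "for all i in parallel"
                  j = nxt[i]
                  if j != -1:
                      new_rank[i] = rank[i] + rank[j]   # jump over node j
                      new_nxt[i] = nxt[j]
                      changed = True
              rank, nxt = new_rank, new_nxt
          return rank

      # List stored out of order: 3 -> 0 -> 4 -> 1 -> 2 (tail)
      succ = [4, 2, -1, 0, 1]
      print(list_rank(succ))   # distances to the tail: [3, 1, 0, 4, 2]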

  13. Non-occupational physical activity levels of shift workers compared with non-shift workers.

    Science.gov (United States)

    Loef, Bette; Hulsegge, Gerben; Wendel-Vos, G C Wanda; Verschuren, W M Monique; Vermeulen, Roel C H; Bakker, Marije F; van der Beek, Allard J; Proper, Karin I

    2017-05-01

    Lack of physical activity (PA) has been hypothesised as an underlying mechanism in the adverse health effects of shift work. Therefore, our aim was to compare non-occupational PA levels between shift workers and non-shift workers. Furthermore, exposure-response relationships for frequency of night shifts and years of shift work regarding non-occupational PA levels were studied. Data of 5980 non-shift workers and 532 shift workers from the European Prospective Investigation into Cancer and Nutrition-Netherlands (EPIC-NL) were used in these cross-sectional analyses. Time spent (hours/week) in different PA types (walking/cycling/exercise/chores) and intensities (moderate/vigorous) were calculated based on self-reported PA. Furthermore, sports were operationalised as: playing sports (no/yes), individual versus non-individual sports, and non-vigorous-intensity versus vigorous-intensity sports. PA levels were compared between shift workers and non-shift workers using Generalized Estimating Equations and logistic regression. Shift workers reported spending more time walking than non-shift workers (B=2.3 (95% CI 1.2 to 3.4)), but shift work was not associated with other PA types and any of the sports activities. Shift workers who worked 1-4 night shifts/month (B=2.4 (95% CI 0.6 to 4.3)) and ≥5 night shifts/month (B=3.7 (95% CI 1.8 to 5.6)) spent more time walking than non-shift workers. No exposure-response relationships were found between years of shift work and PA levels. Shift workers spent more time walking than non-shift workers, but we observed no differences in other non-occupational PA levels. To better understand if and how PA plays a role in the negative health consequences of shift work, our findings need to be confirmed in future studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  14. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Interactions between parallel channels are examined. Results of experimental research on non-stationary flow regimes in three parallel vertical channels are presented, including an analysis of the phenomena and the mechanisms of parallel channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow. (author)

  15. Morningness/eveningness and the synchrony effect for spatial attention.

    Science.gov (United States)

    Dorrian, Jillian; McLean, Benjamin; Banks, Siobhan; Loetscher, Tobias

    2017-02-01

    There is evidence that a decrease in alertness is associated with a rightward shift of attention. Alertness fluctuates throughout the day and peak times differ between individuals. Some individuals feel most alert in the morning; others in the evening. Our aim was to investigate the influence of morningness/eveningness and time of testing on spatial attention. It was predicted that attention would shift rightwards when individuals were tested at their non-optimal time as compared to tests at peak times. A crowdsourcing internet marketplace, Amazon Mechanical Turk (AMT), was used to collect data. Given questions surrounding the quality of data drawn from such virtual environments, this study also investigated the sensitivity of data to demonstrate known effects from the literature. Five-hundred and thirty right-handed participants took part between 6 am and 11 pm. Participants answered demographic questions, completed a question from the Horne and Östberg Morningness/Eveningness Scale, and performed a spatial attentional task (landmark task). For the landmark task, participants indicated whether the left or right segment of each of 72 pre-bisected lines was longer (longer side counterbalanced). Response bias was calculated by subtracting the 'number of left responses' from the 'number of right responses', and dividing by the number of trials. Negative values indicate a leftward attentional bias, and positive values a rightward bias. Well-supported relationships between variables were reflected in the dataset. Controlling for age, there was a significant interaction between morningness/eveningness and time of testing (morning=6 am-2.30 pm, evening=2.30 pm-11 pm) (p < 0.05), with a shift in attentional bias from peak to off-peak times of testing for those identifying as morning types, but not evening types. Findings support the utility of crowdsourcing internet marketplaces as data collection vehicles for research. Results also suggest that the deployment of spatial attention is modulated by an
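
    For clarity, the response-bias measure described above amounts to a one-line computation per participant; a small illustrative sketch (the variable names and example responses are invented):

      def landmark_bias(responses):
          """responses: list of 'L'/'R' judgements over the 72 pre-bisected lines.
          Negative values indicate a leftward attentional bias, positive a rightward bias."""
          right = responses.count('R')
          left = responses.count('L')
          return (right - left) / len(responses)

      print(landmark_bias(['L'] * 40 + ['R'] * 32))   # -> about -0.111 (leftward bias)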

  16. Advances in non-Cartesian parallel magnetic resonance imaging using the GRAPPA operator

    Energy Technology Data Exchange (ETDEWEB)

    Seiberlich, Nicole

    2008-07-21

    This thesis has presented several new non-Cartesian parallel imaging methods which simplify both gridding and the reconstruction of images from undersampled data. A novel approach which uses the concepts of parallel imaging to grid data sampled along a non-Cartesian trajectory called GRAPPA Operator Gridding (GROG) is described. GROG shifts any acquired k-space data point to its nearest Cartesian location, thereby converting non-Cartesian to Cartesian data. The only requirements for GROG are a multi-channel acquisition and a calibration dataset for the determination of the GROG weights. Then an extension of GRAPPA Operator Gridding, namely Self-Calibrating GRAPPA Operator Gridding (SC-GROG) is discussed. SC-GROG is a method by which non-Cartesian data can be gridded using spatial information from a multi-channel coil array without the need for an additional calibration dataset, as required in standard GROG. Although GROG can be used to grid undersampled datasets, it is important to note that this method uses parallel imaging only for gridding, and not to reconstruct artifact-free images from undersampled data. Thereafter a simple, novel method for performing modified Cartesian GRAPPA reconstructions on undersampled non-Cartesian k-space data gridded using GROG to arrive at a non-aliased image is introduced. Because the undersampled non-Cartesian data cannot be reconstructed using a single GRAPPA kernel, several Cartesian patterns are selected for the reconstruction. Finally a novel method of using GROG to mimic the bunched phase encoding acquisition (BPE) scheme is discussed. In MRI, it is generally assumed that an artifact-free image can be reconstructed only from sampled points which fulfill the Nyquist criterion. However, the BPE reconstruction is based on the Generalized Sampling Theorem of Papoulis, which states that a continuous signal can be reconstructed from sampled points as long as the points are on average sampled at the Nyquist frequency. A novel

  17. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  18. Blue and red shifted temperature dependence of implicit phonon shifts in graphene

    Science.gov (United States)

    Mann, Sarita; Jindal, V. K.

    2017-07-01

    We have calculated the implicit shift for various frequency modes in a pure graphene sheet. The thermal expansion and Grüneisen parameter, which are required for the implicit shift calculation, have already been studied and reported. For this calculation, phonon frequencies are obtained using force constants derived from the dynamical matrix calculated with the VASP code, where density functional perturbation theory (DFPT) is used in interface with the phonopy software. The implicit phonon shift shows an unusual behavior compared to bulk materials. The frequency shift is strongly negative (red shift) for the ZA and ZO modes, and the magnitude of this negative shift increases with increasing temperature. On the other hand, a blue shift arises for all other longitudinal and transverse modes, with a similar trend of increase with temperature. The q dependence of the phonon shifts has also been studied. Such simultaneous red shifts in the out-of-plane modes and blue shifts in the in-plane (surface) modes lead to speculation of surface softening in the out-of-plane direction in preference to surface melting.

  19. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    ... adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...

  20. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers, both with shared memory and with distributed memory, are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in relation to the architecture of parallel computers, using a 'divide and conquer' strategy, adjusting the algorithm structure of the program, decoupling data dependences, identifying parallelizable components and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various HP parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup has been obtained.

  1. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up the calculation process, which usually takes a long time. Graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for arbitrary sizes, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of the graph partitioning technique. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and implemented on the GPU (graphics processing unit).
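
    As a baseline for what is being partitioned, the sketch below distributes a sparse matrix-vector product over worker processes by contiguous row blocks. This is a much simpler decomposition than the hypergraph partitioning studied in the paper and runs on CPU processes rather than a CUDA GPU; it is meant only to illustrate the parallelization pattern, and the matrix is generated synthetically.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def spmv_rows(args):
          """Multiply one block of CSR rows by the vector x."""
          indptr, indices, data, x = args
          y = np.empty(len(indptr) - 1)
          for r in range(len(indptr) - 1):
              lo, hi = indptr[r], indptr[r + 1]
              y[r] = np.dot(data[lo:hi], x[indices[lo:hi]])
          return y

      def parallel_spmv(indptr, indices, data, x, nworkers=4):
          """y = A @ x with A in CSR form, computed by row blocks in parallel."""
          n = len(indptr) - 1
          bounds = np.linspace(0, n, nworkers + 1, dtype=int)
          tasks = []
          for b in range(nworkers):
              lo, hi = bounds[b], bounds[b + 1]
              blk_ptr = indptr[lo:hi + 1] - indptr[lo]      # re-base the row pointer
              blk_idx = indices[indptr[lo]:indptr[hi]]
              blk_dat = data[indptr[lo]:indptr[hi]]
              tasks.append((blk_ptr, blk_idx, blk_dat, x))
          with ProcessPoolExecutor(max_workers=nworkers) as ex:
              return np.concatenate(list(ex.map(spmv_rows, tasks)))

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          A = (rng.random((200, 200)) < 0.05) * rng.random((200, 200))  # sparse-ish test matrix
          x = rng.random(200)
          # Build a CSR representation of A.
          indptr = np.concatenate(([0], np.cumsum((A != 0).sum(axis=1))))
          indices = np.nonzero(A)[1]
          data = A[A != 0]
          print(np.allclose(parallel_spmv(indptr, indices, data, x), A @ x))   # True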

  2. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  3. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes the work of an object oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and it must be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.

  4. OpenShift Workshop

    CERN Multimedia

    CERN. Geneva; Rodriguez Peon, Alberto

    2017-01-01

    Workshop to introduce developers to the OpenShift platform available at CERN. Several use cases will be shown, including deploying an existing application into OpenShift. We expect attendees to become familiar with OpenShift's features and the general architecture of the service.

  5. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL. Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  6. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  7. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  8. Contributors to shift work tolerance in South Korean nurses working rotating shift.

    Science.gov (United States)

    Jung, Hye-Sun; Lee, Bokim

    2015-05-01

    The number of shift workers has increased rapidly in South Korea; however, there is no published research exploring shift work tolerance among South Korean workers. This study aimed to investigate factors related to shift work tolerance in South Korean nurses. The sample comprised 660 nurses who worked shifts in a large hospital in South Korea. A structured questionnaire covered the following variables: demographic (age and number of children), individual (morningness and self-esteem), psychosocial (social support and job stress), lifestyle (alcohol consumption, physical activity, and BMI), and working condition factors (number of night shifts and working hours). Shift work tolerance was measured in terms of insomnia, fatigue, and depression. The results of hierarchical regressions indicate that all variables, except for three (number of children, BMI, and working hours), were related to at least one of the symptoms associated with shift work tolerance. Based on these results, we offer some practical implications to help improve the shift work tolerance of workers. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. The Halden Reactor Project Workshop on Studies of Operator Performance During Night Shifts

    International Nuclear Information System (INIS)

    Morisseau, Dolores S.; Braarud, Per Oeyvind; Collier, Steve; Droeivoldsmo, Asgeir; Larsen, Marit; Lirvall, Peter

    1996-01-01

    A workshop on Studies of Operator Performance during Night Shifts was organised in Halden, February 27-28, 1996. The purpose of the workshop was to discuss and make recommendations on specific needs for the study of operator cognitive performance at night and identify the relevant research issues for which Halden could provide resolution. The workshop began with presentations by several invited speakers with expertise in studies of shift work and was then divided into three working groups that discussed the following issues in parallel: (1) Lines of Research to Be Pursued; (2) Methods and Measures to Be Used in Research on Cognitive Performance at Night; and (3) Products of the Research on Operator Performance at Night. Each group produced specific recommendations that were summarised by the group's facilitator in a joint session of the workshop. This report summarises the presentations by the invited speakers, and the discussions and recommendations of the individual working groups. (author)

  10. The Glasgow Parallel Reduction Machine: Programming Shared-memory Many-core Systems using Parallel Task Composition

    Directory of Open Access Journals (Sweden)

    Ashkan Tousimojarad

    2013-12-01

    Full Text Available We present the Glasgow Parallel Reduction Machine (GPRM), a novel, flexible framework for parallel task-composition based many-core programming. We allow the programmer to structure programs into task code, written as C++ classes, and communication code, written in a restricted subset of C++ with functional semantics and parallel evaluation. In this paper we discuss the GPRM, the virtual machine framework that enables the parallel task composition approach. We focus the discussion on GPIR, the functional language used as the intermediate representation of the bytecode running on the GPRM. Using examples in this language we show the flexibility and power of our task composition framework. We demonstrate the potential using an implementation of a merge sort algorithm on a 64-core Tilera processor, as well as on a conventional Intel quad-core processor and an AMD 48-core processor system. We also compare our framework with OpenMP tasks in a parallel pointer chasing algorithm running on the Tilera processor. Our results show that the GPRM programs outperform the corresponding OpenMP codes on all test platforms, and can greatly facilitate the writing of parallel programs, in particular non-data parallel algorithms such as reductions.

  11. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy based on flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...

  12. Fixed field alternating gradient accelerator with small orbit shift and tune excursion

    Directory of Open Access Journals (Sweden)

    Suzanne L. Sheehy

    2010-04-01

    Full Text Available A new design principle of a nonscaling fixed field alternating gradient accelerator is proposed. It is based on optics that produce approximate scaling properties. A large field index k is chosen to squeeze the orbit shift as much as possible by setting the betatron oscillation frequency in the second stability region of Hill’s equation. Then, the lattice magnets and their alignment are simplified. To simplify the magnets, we expand the field profile of r^{k} into multipoles and keep only a few lower order terms. A rectangular-shaped magnet is assumed with lines of constant field parallel to the magnet axis. The lattice employs a triplet of rectangular magnets for focusing, which are parallel to one another to simplify alignment. These simplifications along with fringe fields introduce finite chromaticity and the fixed field alternating gradient accelerator is no longer a scaling one. However, the tune excursion of the whole ring can be within half an integer and we avoid the crossing of strong resonances.

  13. Expert system application for prioritizing preventive actions for shift work: shift expert.

    Science.gov (United States)

    Esen, Hatice; Hatipoğlu, Tuğçen; Cihan, Ahmet; Fiğlali, Nilgün

    2017-09-19

    Shift patterns, work hours, work arrangements and worker motivations have increasingly become key factors for job performance. The main objective of this article is to design an expert system that identifies the negative effects of shift work and prioritizes mitigation efforts according to their importance in preventing these negative effects. The proposed expert system will be referred to as the shift expert. A thorough literature review is conducted to determine the effects of shift work on workers. Our work indicates that shift work is linked to demographic variables, sleepiness and fatigue, health and well-being, and social and domestic conditions. These parameters constitute the sections of a questionnaire designed to focus on 26 important issues related to shift work. The shift expert is then constructed to provide prevention advice at the individual and organizational levels, and it prioritizes this advice using a fuzzy analytic hierarchy process model, which considers comparison matrices provided by users during the prioritization process. An empirical study of 61 workers working on three rotating shifts is performed. After administering the questionnaires, the collected data are analyzed statistically, and then the shift expert produces individual and organizational recommendations for these workers.
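
    The prioritization idea can be illustrated with the standard crisp AHP eigenvector method on a small pairwise comparison matrix; the sketch below is generic and is not the fuzzy analytic hierarchy process model used by the shift expert, and the three criteria in the example are invented for illustration.

      import numpy as np

      def ahp_weights(M):
          """Priority weights from a reciprocal pairwise comparison matrix.

          Uses the principal right eigenvector, normalized to sum to 1,
          and reports the consistency ratio (CR) using Saaty's random index.
          """
          n = M.shape[0]
          vals, vecs = np.linalg.eig(M)
          k = np.argmax(vals.real)
          w = np.abs(vecs[:, k].real)
          w /= w.sum()
          lam = vals[k].real
          ci = (lam - n) / (n - 1)                        # consistency index
          ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)    # Saaty's random index
          return w, ci / ri

      # Hypothetical criteria: sleepiness/fatigue, health, social/domestic conditions.
      M = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])
      w, cr = ahp_weights(M)
      print(w.round(3), "CR =", round(cr, 3))   # weights ~ [0.65, 0.23, 0.12], CR well below 0.1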

  14. Converging Paradigms: A Reflection on Parallel Theoretical Developments in Psychoanalytic Metapsychology and Empirical Dream Research.

    Science.gov (United States)

    Schmelowszky, Ágoston

    2016-08-01

    In the last decades one can perceive a striking parallelism between the shifting perspective of leading representatives of empirical dream research concerning their conceptualization of dreaming and the paradigm shift within clinically based psychoanalytic metapsychology with respect to its theory on the significance of dreaming. In metapsychology, dreaming becomes more and more a central metaphor of mental functioning in general. The theories of Klein, Bion, and Matte-Blanco can be considered as milestones of this paradigm shift. In empirical dream research, the competing theories of Hobson and of Solms respectively argued for and against the meaningfulness of the dream-work in the functioning of the mind. In the meantime, empirical data coming from various sources seemed to prove the significance of dream consciousness for the development and maintenance of adaptive waking consciousness. Metapsychological speculations and hypotheses based on empirical research data seem to point in the same direction, promising for contemporary psychoanalytic practice a more secure theoretical base. In this paper the author brings together these diverse theoretical developments and presents conclusions regarding psychoanalytic theory and technique, as well as proposing an outline of an empirical research plan for testing the specificity of psychoanalysis in developing dream formation.

  15. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming. Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  16. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  17. Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux

    International Nuclear Information System (INIS)

    Guo Zehua; Tang Xianzhu

    2012-01-01

    In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components with one associated with the parallel thermal energy and the other the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from low to high parallel temperature region.

  18. Implementing OpenShift

    CERN Document Server

    Miller, Adam

    2013-01-01

    A standard tutorial-based approach to using OpenShift and deploying custom or pre-built web applications to the OpenShift Online cloud. This book is for software developers and DevOps alike who are interested in learning how to use the OpenShift Platform-as-a-Service for developing and deploying applications, how the environment works on the back end, and how to deploy their very own open source Platform-as-a-Service based on the upstream OpenShift Origin project.

  19. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  20. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
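
    The per-cycle rendez-vous described above can be mimicked in a few lines: workers track their share of a cycle's histories independently, but every worker must finish before the combined result of the cycle (here a toy multiplication factor) can be formed and fed into the next cycle. The model below is a deliberately trivial stand-in for transport physics, intended only to show where the synchronization point sits; the names and constants are invented.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      K_TRUE = 1.02          # toy multiplication factor of the "system"
      N_SOURCE = 100_000     # source histories per cycle (population control target)

      def simulate_histories(args):
          """Follow one worker's share of the cycle's source histories.

          Toy model: each history produces a Poisson(K_TRUE) number of fission
          'offspring'.  Real transport work would happen here, independently
          on every processor.
          """
          n_histories, seed = args
          rng = np.random.default_rng(seed)
          return int(rng.poisson(K_TRUE, n_histories).sum())

      def run_cycles(n_cycles=10, n_workers=4):
          with ProcessPoolExecutor(max_workers=n_workers) as ex:
              for cycle in range(n_cycles):
                  share = N_SOURCE // n_workers
                  tasks = [(share, 1000 * cycle + w) for w in range(n_workers)]
                  # ---- rendez-vous: every worker must finish before k can be formed
                  offspring = sum(ex.map(simulate_histories, tasks))
                  k_cycle = offspring / (share * n_workers)
                  # The combined source (here just its size) feeds the next cycle,
                  # which is why the cycles cannot be run independently.
                  print(f"cycle {cycle:2d}  k = {k_cycle:.4f}")

      if __name__ == "__main__":
          run_cycles()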

  1. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  2. Parallel S/sub n/ iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (S/sub n/) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional S/sub n/ transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial S/sub n/ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial

  3. Determination of Shift/Bias in Digital Aerial Triangulation of UAV Imagery Sequences

    Science.gov (United States)

    Wierzbicki, Damian

    2017-12-01

    UAV photogrammetry is now characterized by largely automated and efficient data processing. Low-altitude imaging is increasingly used in applications such as city mapping, corridor mapping, road and pipeline inspections, and mapping of large areas, e.g. forests. In addition, high-resolution video imagery (HD and larger) is increasingly used for low-altitude acquisition; on the one hand it delivers a great amount of detail about ground-surface features, and on the other hand it presents new challenges in data processing. Determination of the elements of external orientation therefore plays a substantial role in the detail of digital terrain models and in artefact-free orthophoto generation. In parallel, research is conducted on the quality of images acquired from UAVs and on the quality of derived products such as orthophotos. Despite the rapid development of UAV photogrammetry, it is still necessary to perform Automatic Aerial Triangulation (AAT) on the basis of GPS/INS observations and ground control points. During a low-altitude photogrammetric flight, the approximate elements of external orientation registered by the UAV are affected by shift/bias errors. In this article, methods for determining the shift/bias error are presented. Two solutions are applied in the digital aerial triangulation. In the first method, the shift/bias error is determined together with the drift/bias error, the elements of external orientation and the coordinates of the ground control points. In the second method, the shift/bias error is determined together with the elements of external orientation and the coordinates of the ground control points, with the drift/bias error set equal to 0. When the two methods are compared, the difference in the shift/bias error is more than ±0.01 m for all terrain coordinates XYZ.
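
    As a minimal illustration of the shift/bias idea (a pure-shift model with synthetic arrays; this is not the paper's aerial triangulation adjustment), the constant offset between GPS/INS-recorded projection centres and the centres recovered from a GCP-controlled adjustment can be estimated by least squares:

```python
# Hypothetical sketch: estimate a constant shift/bias between camera positions
# logged by the UAV's GPS/INS and the corresponding positions from an aerial
# triangulation tied to ground control points. Array names and the simple
# model are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
true_shift = np.array([0.35, -0.20, 0.55])            # metres (synthetic)
adjusted_xyz = rng.uniform(0.0, 500.0, size=(40, 3))  # positions from AAT with GCPs
noise = rng.normal(0.0, 0.02, size=adjusted_xyz.shape)
gps_ins_xyz = adjusted_xyz + true_shift + noise       # positions logged by GPS/INS

# With a pure shift model (no drift term), the least-squares estimate of the
# bias is simply the mean per-axis difference between the two position sets.
shift_hat = (gps_ins_xyz - adjusted_xyz).mean(axis=0)
residuals = gps_ins_xyz - adjusted_xyz - shift_hat
print("estimated shift/bias [m]:", np.round(shift_hat, 3))
print("RMS residual per axis [m]:", np.round(np.sqrt((residuals**2).mean(axis=0)), 3))
```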

  4. Vectorization, parallelization and porting of nuclear codes. Vectorization and parallelization. Progress report fiscal 1999

    Energy Technology Data Exchange (ETDEWEB)

    Adachi, Masaaki; Ogasawara, Shinobu; Kume, Etsuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishizuki, Shigeru; Nemoto, Toshiyuki; Kawasaki, Nobuo; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan); Yatake, Yo-ichi [Hitachi Ltd., Tokyo (Japan)

    2001-02-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system, the SX-4 system and the Paragon system at the Center for Promotion of Computational Science and Engineering in the Japan Atomic Energy Research Institute. We dealt with 18 codes in fiscal 1999. These results are reported in 3 parts, i.e., the vectorization and parallelization part on vector processors, the parallelization part on scalar processors and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this part, the vectorization of the Relativistic Molecular Orbital Calculation code RSCAT, the microscopic transport code for high energy nuclear collisions JAM, the three-dimensional non-steady thermal-fluid analysis code STREAM, the Relativistic Density Functional Theory code RDFT and the High Speed Three-Dimensional Nodal Diffusion code MOSRA-Light on the VPP500 system and the SX-4 system is described. (author)

  5. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  6. Implementation and performance of parallelized elegant

    International Nuclear Information System (INIS)

    Wang, Y.; Borland, M.

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  7. Parallelizing the spectral transform method: A comparison of alternative parallel algorithms

    International Nuclear Information System (INIS)

    Foster, I.; Worley, P.H.

    1993-01-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on the sphere and is widely used in global climate modeling. In this paper, we outline different approaches to parallelizing the method and describe experiments that we are conducting to evaluate the efficiency of these approaches on parallel computers. The experiments are conducted using a testbed code that solves the nonlinear shallow water equations on a sphere, but are designed to permit evaluation in the context of a global model. They allow us to evaluate the relative merits of the approaches as a function of problem size and number of processors. The results of this study are guiding ongoing work on PCCM2, a parallel implementation of the Community Climate Model developed at the National Center for Atmospheric Research

  8. Task shifting of antiretroviral treatment from doctors to primary-care nurses in South Africa (STRETCH): a pragmatic, parallel, cluster-randomised trial.

    Science.gov (United States)

    Fairall, Lara; Bachmann, Max O; Lombard, Carl; Timmerman, Venessa; Uebel, Kerry; Zwarenstein, Merrick; Boulle, Andrew; Georgeu, Daniella; Colvin, Christopher J; Lewin, Simon; Faris, Gill; Cornick, Ruth; Draper, Beverly; Tshabalala, Mvula; Kotze, Eduan; van Vuuren, Cloete; Steyn, Dewald; Chapman, Ronald; Bateman, Eric

    2012-09-08

    Robust evidence of the effectiveness of task shifting of antiretroviral therapy (ART) from doctors to other health workers is scarce. We aimed to assess the effects on mortality, viral suppression, and other health outcomes and quality indicators of the Streamlining Tasks and Roles to Expand Treatment and Care for HIV (STRETCH) programme, which provides educational outreach training of nurses to initiate and represcribe ART, and to decentralise care. We undertook a pragmatic, parallel, cluster-randomised trial in South Africa between Jan 28, 2008, and June 30, 2010. We randomly assigned 31 primary-care ART clinics to implement the STRETCH programme (intervention group) or to continue with standard care (control group). The ratio of randomisation depended on how many clinics were in each of nine strata. Two cohorts were enrolled: eligible patients in cohort 1 were adults (aged ≥16 years) with CD4 counts of 350 cells per μL or less who were not receiving ART; those in cohort 2 were adults who had already received ART for at least 6 months and were being treated at enrolment. The primary outcome in cohort 1 was time to death (superiority analysis). The primary outcome in cohort 2 was the proportion with undetectable viral loads 12 months after enrolment (equivalence analysis). In patients with baseline CD4 counts of 201-350 cells per μL, mortality was slightly lower in the intervention group than in the control group (0·73, 0·54-1·00; p=0·052), but it did not differ between groups in patients with baseline CD4 counts of 200 cells per μL or less (0·94, 0·76-1·15; p=0·577). In cohort 2, viral load suppression 12 months after enrolment was equivalent in intervention (2156 [71%] of 3029 patients) and control groups (2230 [70%] of 3202; risk difference 1·1%, 95% CI -2·4 to 4·6). Expansion of primary-care nurses' roles to include ART initiation and represcription can be done safely, and can improve health outcomes and quality of care, but might not reduce time to ART or mortality. UK Medical Research Council, Development Cooperation

  9. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  10. Parallel processing for fluid dynamics applications

    International Nuclear Information System (INIS)

    Johnson, G.M.

    1989-01-01

    The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several examples are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices

  11. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  12. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    Full Text Available The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that are ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples to illustrate some parallel programming approaches using the Force are also presented.

  13. Are changes in objective working hour characteristics associated with changes in work-life conflict among hospital employees working shifts? A 7-year follow-up.

    Science.gov (United States)

    Karhula, Kati; Koskinen, Aki; Ojajärvi, Anneli; Ropponen, Annina; Puttonen, Sampsa; Kivimäki, Mika; Härmä, Mikko

    2018-06-01

    To investigate whether changes in objective working hour characteristics are associated with parallel changes in work-life conflict (WLC) among hospital employees. Survey responses from three waves of the Finnish Public Sector study (2008, 2012 and 2015) were combined with payroll data from 91 days preceding the surveys (n=2 482, 93% women). Time-dependent fixed effects regression models adjusted for marital status, number of children and stressfulness of the life situation were used to investigate whether changes in working hour characteristics were associated with parallel change in WLC. The working hour characteristics were dichotomised (with cut-points at <10% or >25% occurrence) and WLC was dichotomised to frequent versus seldom/none. Change in proportion of evening and night shifts and weekend work was significantly associated with parallel change in WLC (adjusted OR 2.19, 95% CI 1.62 to 2.96; OR 1.71, 95% CI 1.21 to 2.44; OR 1.63, 95% CI 1.194 to 2.22, respectively). Similarly, increase or decrease in proportion of quick returns (adjusted OR 1.45, 95% CI 1.10 to 1.89) and long work weeks (adjusted OR 1.26, 95% CI 1.04 to 1.52) was associated with parallel increase or decrease in WLC. Single days off and very long work weeks showed no association with WLC. Changes in unsocial working hour characteristics, especially in connection with evening shifts, are consistently associated with parallel changes in WLC. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  14. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  15. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.

  16. Shifting schedules: the health effects of reorganizing shift work.

    Science.gov (United States)

    Bambra, Clare L; Whitehead, Margaret M; Sowden, Amanda J; Akers, Joanne; Petticrew, Mark P

    2008-05-01

    Approximately one fifth of workers are engaged in some kind of shift work. The harmful effects of shift work on the health and work-life balance of employees are well known. A range of organizational interventions has been suggested to address these negative effects. This study undertook a systematic review (following the Quality of Reporting of Meta-analyses [QUOROM] guidelines) of experimental and quasi-experimental studies, from any country (in any language), that evaluated the effects on health and work-life balance of organizational-level interventions that redesign shift work schedules. Twenty-seven electronic databases (medical, social science, economic) were searched. Data extraction and quality appraisal were carried out by two independent reviewers. Narrative synthesis was performed. The review was conducted between October 2005 and November 2006. Twenty-six studies were found relating to a variety of organizational interventions. No one type of intervention was found to be consistently harmful to workers. However, three types were found to have beneficial effects on health and work-life balance: (1) switching from slow to fast rotation, (2) changing from backward to forward rotation, and (3) self-scheduling of shifts. Improvements were usually at little or no direct organizational cost. However, there were concerns about the generalizability of the evidence, and no studies reported on impacts on health inequalities. This review reinforces the findings of epidemiologic and laboratory-based research by suggesting that certain organizational-level interventions can improve the health of shift workers, their work-life balance, or both. This evidence could be useful when designing interventions to improve the experience of shift work.

  17. Design considerations for parallel graphics libraries

    Science.gov (United States)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  18. Role of endothelin-converting enzyme, chymase and neutral endopeptidase in the processing of big ET-1, ET-1(1-21) and ET-1(1-31) in the trachea of allergic mice.

    Science.gov (United States)

    De Campo, Benjamin A; Goldie, Roy G; Jeng, Arco Y; Henry, Peter J

    2002-08-01

    The present study examined the roles of endothelin-converting enzyme (ECE), neutral endopeptidase (NEP) and mast cell chymase as processors of the endothelin (ET) analogues ET-1(1-21), ET-1(1-31) and big ET-1 in the trachea of allergic mice. Male CBA/CaH mice were sensitized with ovalbumin (10 microg) delivered intraperitoneally on days 1 and 14, and exposed to aerosolized ovalbumin on days 14, 25, 26 and 27 (OVA mice). Mice were killed and the trachea excised for histological analysis and contraction studies on day 28. Tracheae from OVA mice had 40% more mast cells than vehicle-sensitized mice (sham mice). Ovalbumin (10 microg/ml) induced transient contractions (15+/-3% of the C(max)) in tracheae from OVA mice. The ECE inhibitor CGS35066 (10 microM) inhibited contractions induced by big ET-1 (4.8-fold rightward shift of the dose-response curve; P < 0.05), whereas chymase inhibition had no effect on contractions induced by any of the ET analogues used. The NEP inhibitor CGS24592 (10 microM) inhibited contractions induced by ET-1(1-31) (6.2-fold rightward shift; P < 0.05) but not those induced by big ET-1. These data suggest that big ET-1 is processed predominantly by a CGS35066-sensitive ECE within allergic airways rather than by mast cell-derived proteases such as chymase. If endogenous ET-1(1-31) is formed within allergic airways, it is likely to undergo further conversion by NEP to more active products.

  19. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities...... for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler’s ability to generate loop-parallel code. We use this compilation system to modify two sequential...... benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...
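
    The paper's system targets sequential C benchmarks and compiler feedback; purely as an illustration of the loop-level parallelism it relies on, the sketch below splits an independent-iteration loop across worker processes (hypothetical function names, not the benchmarks or tool chain from the paper).

```python
# Minimal illustration of loop-level parallelism: when loop iterations are
# independent (no cross-iteration dependences), the loop can be split across
# workers. This is a generic sketch, not the paper's compiler-guided approach.
from concurrent.futures import ProcessPoolExecutor

def body(i):
    # stand-in for one independent loop iteration
    return i * i

def run_serial(n):
    return [body(i) for i in range(n)]

def run_parallel(n, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(body, range(n)))

if __name__ == "__main__":
    assert run_serial(1000) == run_parallel(1000)
```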

  20. Stress Analysis of an Edge-Cracked Plate by using Photoelastic Fringe Phase Shifting Method

    International Nuclear Information System (INIS)

    Baek, Tae Hyun; Kim, Myung Soo; Cho, Sung Ho

    2000-01-01

    The method of photoelasticity allows one to obtain principal stress differences and principal stress directions in a photoelastic model. In the classical approach, the photoelastic parameters are measured manually point by point. The previous methods require much time and skill in the identification and measurement of photoelastic data. Fringe phase shifting has recently been developed and is widely used to measure and analyze fringe data in photo-mechanics. This paper presents the test results of the photoelastic fringe phase shifting technique for the stress analysis of a circular disk under compression and an edge-cracked plate subjected to tensile load. The technique used here requires four phase-stepped photoelastic images obtained from a circular polariscope by rotating the analyzer to 0°, 45°, 90°, and 135°. Experimental results are compared with those of FEM. Good agreement between the results can be observed. However, some error may be included if the technique is applied along a general direction that is not parallel to the isoclinic fringe
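
    For orientation, a generic four-step phase-shifting relation is sketched below, with intensities modelled as I_k = A + B cos(phi + k*pi/2); this is the textbook form, not the specific polariscope equations used in the paper.

```python
# Generic four-step phase-shifting relation (a sketch, not the paper's exact
# polariscope equations): with I_k = A + B*cos(phi + k*pi/2), k = 0..3, the
# wrapped phase follows from an arctangent of intensity differences.
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Return the wrapped phase in (-pi, pi] from four phase-stepped images."""
    return np.arctan2(i3 - i1, i0 - i2)

# quick self-check on synthetic fringe intensities
phi = np.linspace(-np.pi, np.pi, 7, endpoint=False)
frames = [1.0 + 0.5 * np.cos(phi + k * np.pi / 2) for k in range(4)]
assert np.allclose(wrapped_phase(*frames), phi)
```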

  1. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    Science.gov (United States)

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
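
    The momentum-with-adaptive-restart idea that the abstract builds on can be sketched for a generic l1-regularized least-squares problem. The code below uses a single Lipschitz constant rather than the B1-based, shift-variant majorizing matrices of BARISTA, so it only illustrates the restart mechanism, not the proposed method itself.

```python
# Generic sketch of momentum-accelerated iterative soft thresholding with
# adaptive restart for  min_x 0.5*||A x - y||^2 + lam*||x||_1.
# Illustrative only: not BARISTA's majorize-minimize construction.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_restart(A, y, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data term
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = soft(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        if np.dot(z - x_new, x_new - x) > 0:   # adaptive restart: drop momentum
            t_new, z = 1.0, x_new
        else:
            z = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

# small synthetic check
rng = np.random.default_rng(1)
A = rng.normal(size=(60, 100))
x_true = np.zeros(100)
x_true[:5] = 3.0
x_hat = fista_restart(A, A @ x_true, lam=0.1)
```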

  2. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  3. Cellular Automata-Based Parallel Random Number Generators Using FPGAs

    Directory of Open Access Journals (Sweden)

    David H. K. Hoe

    2012-01-01

    Full Text Available Cellular computing represents a new paradigm for implementing high-speed massively parallel machines. Cellular automata (CA, which consist of an array of locally connected processing elements, are a basic form of a cellular-based architecture. The use of field programmable gate arrays (FPGAs for implementing CA accelerators has shown promising results. This paper investigates the design of CA-based pseudo-random number generators (PRNGs using an FPGA platform. To improve the quality of the random numbers that are generated, the basic CA structure is enhanced in two ways. First, the addition of a superrule to each CA cell is considered. The resulting self-programmable CA (SPCA uses the superrule to determine when to make a dynamic rule change in each CA cell. The superrule takes its inputs from neighboring cells and can be considered itself a second CA working in parallel with the main CA. When implemented on an FPGA, the use of lookup tables in each logic cell removes any restrictions on how the super-rules should be defined. Second, a hybrid configuration is formed by combining a CA with a linear feedback shift register (LFSR. This is advantageous for FPGA designs due to the compactness of the LFSR implementations. A standard software package for statistically evaluating the quality of random number sequences known as Diehard is used to validate the results. Both the SPCA and the hybrid CA/LFSR were found to pass all the Diehard tests.
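
    A toy software model of the hybrid idea is sketched below: a one-dimensional CA stream (rule 30 here, an illustrative choice) XORed with an LFSR stream. The rule, register width and taps are assumptions for illustration; the paper's design adds per-cell superrules and targets FPGA logic cells, neither of which is reproduced here.

```python
# Toy hybrid CA/LFSR bit generator (software model only; illustrative choices
# of CA rule, register width and taps -- not the authors' FPGA configuration).
def step_rule30(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

def step_lfsr(state, taps=(0, 2, 3, 5), nbits=16):
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state >> 1) | (fb << (nbits - 1))) & ((1 << nbits) - 1)

def hybrid_bits(n, seed=0xACE1):
    cells = [(seed >> i) & 1 for i in range(16)]
    lfsr = seed
    out = []
    for _ in range(n):
        cells = step_rule30(cells)
        lfsr = step_lfsr(lfsr)
        out.append(cells[0] ^ (lfsr & 1))   # XOR the two streams
    return out

print(hybrid_bits(32))
```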

  4. Flatness based feedforward control of a parallel hybrid drivetrain; Flachheitsbasierter Vorsteuerungsentwurf fuer den Antriebsstrang eines Parallelhybriden

    Energy Technology Data Exchange (ETDEWEB)

    Gasper, Rainer; Hesseler, Frank; Abel, Dirk [RWTH Aachen Univ. (Germany). Inst. fuer Regelungstechnik

    2010-10-15

    The advantages of Hybrid Electric Vehicles (HEV) are fuel consumption reduction and minimization of exhaust emissions. Moreover, the drivability of a HEV is very important for consumer acceptance. The gear shifts and the start of the internal combustion engine are very important for the drivability of a HEV. Because these two tasks are automated, oscillations in the vehicle would be uncomfortable for the driver. In the paper at hand, feedforward controllers for the drivetrain control of a parallel hybrid with an automated manual transmission and a dry clutch are presented. (orig.)

  5. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  6. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  7. Josephson shift registers

    International Nuclear Information System (INIS)

    Przybysz, J.X.

    1989-01-01

    This paper gives a review of Josephson shift register circuits that were designed, fabricated, or tested, with emphasis on work in the 1980s. Operating speed is most important, since it often limits system performance. Older designs used square-wave clocks, but most modern designs use offset sine waves, with either two or three phases. Operating margins and gate bias uniformity are key concerns. The fastest measured Josephson shift register operated at 2.3 GHz, which compares well with a GaAs shift register that consumes 250 times more power. The difficulties of high-speed testing have prevented many Josephson shift registers from being operated at their highest speeds. Computer simulations suggest that 30-GHz operation is possible with current Nb/Al2O3/Nb technology. Junctions with critical current densities near 10 kA/cm2 would make 100-GHz shift registers feasible

  8. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two dimensional fluid flow are developed on vector parallel computer Fujitsu VPP500 and scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code to be vectorized along with the axis perpendicular to the direction of the decomposition. High parallel efficiency of 95.1% by the vector parallel calculation on 16 processors with 1152x1152 grid and 88.6% by the scalar parallel calculation on 100 processors with 800x800 grid are obtained. The performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors up to 100 processors. We also analyze the scalability in keeping the available memory size of one processor element at maximum. Our performance model predicts that the execution time of the vector parallel code increases about 3% on 500 processors. Although the 1-D domain decomposition method has in general a drawback in the interprocessor communication, the vector parallel LB code is still suitable for the large scale and/or high resolution simulations. (author)

  9. Parallelization of 2-D lattice Boltzmann codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two dimensional fluid flow are developed on vector parallel computer Fujitsu VPP500 and scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code to be vectorized along with the axis perpendicular to the direction of the decomposition. High parallel efficiency of 95.1% by the vector parallel calculation on 16 processors with 1152x1152 grid and 88.6% by the scalar parallel calculation on 100 processors with 800x800 grid are obtained. The performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors up to 100 processors. We also analyze the scalability in keeping the available memory size of one processor element at maximum. Our performance model predicts that the execution time of the vector parallel code increases about 3% on 500 processors. Although the 1-D domain decomposition method has in general a drawback in the interprocessor communication, the vector parallel LB code is still suitable for the large scale and/or high resolution simulations. (author).
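
    The 1-D domain decomposition described above amounts to giving each processor a slab of rows plus ghost rows that must be refreshed every time step. A minimal mpi4py sketch of that halo exchange (illustrative names and sizes, not the original vector-parallel code) is:

```python
# Sketch of the communication step of a 1-D domain decomposition: each rank
# owns a slab of rows and exchanges one ghost row with each neighbour per
# time step. mpi4py is assumed; run with e.g.  mpiexec -n 4 python halo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx, ny_local = 64, 16                       # global width, rows owned per rank
f = np.zeros((ny_local + 2, nx))            # +2 ghost rows (top and bottom)
f[1:-1, :] = rank                           # dummy interior data

up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# exchange ghost rows with neighbours
comm.Sendrecv(sendbuf=f[1, :].copy(), dest=up,
              recvbuf=f[-1, :], source=down)
comm.Sendrecv(sendbuf=f[-2, :].copy(), dest=down,
              recvbuf=f[0, :], source=up)
```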

  10. Explorations of the implementation of a parallel IDW interpolation algorithm in a Linux cluster-based parallel GIS

    Science.gov (United States)

    Huang, Fang; Liu, Dingsheng; Tan, Xicheng; Wang, Jian; Chen, Yunping; He, Binbin

    2011-04-01

    To design and implement an open-source parallel GIS (OP-GIS) based on a Linux cluster, the parallel inverse distance weighting (IDW) interpolation algorithm has been chosen as an example to explore the working model and the principle of algorithm parallel pattern (APP), one of the parallelization patterns for OP-GIS. Based on an analysis of the serial IDW interpolation algorithm of GRASS GIS, this paper has proposed and designed a specific parallel IDW interpolation algorithm, incorporating both single program, multiple data (SPMD) and master/slave (M/S) programming modes. The main steps of the parallel IDW interpolation algorithm are: (1) the master node packages the related information, and then broadcasts it to the slave nodes; (2) each node calculates its assigned data extent along one row using the serial algorithm; (3) the master node gathers the data from all nodes; and (4) iterations continue until all rows have been processed, after which the results are outputted. According to the experiments performed in the course of this work, the parallel IDW interpolation algorithm can attain an efficiency greater than 0.93 compared with similar algorithms, which indicates that the parallel algorithm can greatly reduce processing time and maximize speed and performance.
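
    Steps (1)-(4) can be mimicked in a few lines of serial code to show what each slave computes for its assigned rows; this is an illustrative Python sketch, whereas the actual implementation extends GRASS GIS in C with message passing.

```python
# Illustrative row-wise master/slave split for IDW interpolation (steps (1)-(4));
# names and the serial emulation of the workers are assumptions for clarity.
import numpy as np

def idw_row(y, xs, pts, vals, power=2.0, eps=1e-12):
    """Interpolate one output row at height y from sample points (pts, vals)."""
    out = np.empty(len(xs))
    for j, x in enumerate(xs):
        d2 = (pts[:, 0] - x) ** 2 + (pts[:, 1] - y) ** 2
        w = 1.0 / np.maximum(d2 ** (power / 2.0), eps)
        out[j] = np.sum(w * vals) / np.sum(w)
    return out

rng = np.random.default_rng(2)
pts, vals = rng.uniform(0, 100, (50, 2)), rng.uniform(0, 10, 50)
xs, ys = np.arange(0.0, 100.0), np.arange(0.0, 100.0)
n_workers = 4
rows_for = {r: ys[r::n_workers] for r in range(n_workers)}         # (1)-(2): assign rows
grid_rows = {y: idw_row(y, xs, pts, vals) for r in rows_for for y in rows_for[r]}
grid = np.vstack([grid_rows[y] for y in ys])                       # (3)-(4): gather rows
```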

  11. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  12. Parallel β-sheet vibrational couplings revealed by 2D IR spectroscopy of an isotopically labeled macrocycle: quantitative benchmark for the interpretation of amyloid and protein infrared spectra.

    Science.gov (United States)

    Woys, Ann Marie; Almeida, Aaron M; Wang, Lu; Chiu, Chi-Cheng; McGovern, Michael; de Pablo, Juan J; Skinner, James L; Gellman, Samuel H; Zanni, Martin T

    2012-11-21

    Infrared spectroscopy is playing an important role in the elucidation of amyloid fiber formation, but the coupling models that link spectra to structure are not well tested for parallel β-sheets. Using a synthetic macrocycle that enforces a two stranded parallel β-sheet conformation, we measured the lifetimes and frequency for six combinations of doubly (13)C═(18)O labeled amide I modes using 2D IR spectroscopy. The average vibrational lifetime of the isotope labeled residues was 550 fs. The frequencies of the labels ranged from 1585 to 1595 cm(-1), with the largest frequency shift occurring for in-register amino acids. The 2D IR spectra of the coupled isotope labels were calculated from molecular dynamics simulations of a series of macrocycle structures generated from replica exchange dynamics to fully sample the conformational distribution. The models used to simulate the spectra include through-space coupling, through-bond coupling, and local frequency shifts caused by environment electrostatics and hydrogen bonding. The calculated spectra predict the line widths and frequencies nearly quantitatively. Historically, the characteristic features of β-sheet infrared spectra have been attributed to through-space couplings such as transition dipole coupling. We find that frequency shifts of the local carbonyl groups due to nearest neighbor couplings and environmental factors are more important, while the through-space couplings dictate the spectral intensities. As a result, the characteristic absorption spectra empirically used for decades to assign parallel β-sheet secondary structure arises because of a redistribution of oscillator strength, but the through-space couplings do not themselves dramatically alter the frequency distribution of eigenstates much more than already exists in random coil structures. Moreover, solvent exposed residues have amide I bands with >20 cm(-1) line width. Narrower line widths indicate that the amide I backbone is solvent

  13. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSC library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSC during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSC framework.
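
    A compact Jacobian-free Newton-Krylov sketch (without the Schwarz preconditioner and without PETSC) shows how the inner Krylov iteration can access the Jacobian only through matrix-vector products. scipy, the finite-difference product and the small test problem are assumptions for illustration.

```python
# Jacobian-free Newton-Krylov sketch: GMRES only sees J(x) through
# finite-difference matrix-vector products. Not the project's solver.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov(F, x0, newton_tol=1e-10, max_newton=20, fd_eps=1e-7):
    x = x0.astype(float)
    for _ in range(max_newton):
        r = F(x)
        if np.linalg.norm(r) < newton_tol:
            break
        def jv(v, x=x, r=r):                 # J(x) @ v by finite differences
            return (F(x + fd_eps * v) - r) / fd_eps
        J = LinearOperator((x.size, x.size), matvec=jv)
        dx, _ = gmres(J, -r)                 # inner Krylov iteration
        x = x + dx
    return x

# small nonlinear test: solve x_i^3 + x_i - 1 = 0 component-wise
F = lambda x: x**3 + x - 1.0
print(newton_krylov(F, np.zeros(5)))
```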

  14. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics- Type, Kinematics, and Optimal Design presents the results of 15 year's research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, which is also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others.   This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  15. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    textabstractIn the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  16. Shifting Attention

    Science.gov (United States)

    Ingram, Jenni

    2014-01-01

    This article examines the shifts in attention and focus as one teacher introduces and explains an image that represents the processes involved in a numeric problem that his students have been working on. This paper takes a micro-analytic approach to examine how the focus of attention shifts through what the teacher and students do and say in the…

  17. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and relative to scalar calculations it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively on the CRAY C-90

  18. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  19. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....

  20. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....
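
    In assumed notation (the abstract itself does not fix symbols), the quantity studied and the stated limit can be written as:

```latex
% Parallel volume of a planar body K at distance r (B^2 the unit disc,
% \lambda_2 the area measure); the abstract's claim, in assumed notation:
V_r(K) = \lambda_2\bigl(K \oplus r B^2\bigr), \qquad
\lim_{r \to \infty} \bigl[ V_r(\operatorname{conv} K) - V_r(K) \bigr] = 0 .
```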

  1. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  2. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  3. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  4. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    For the moment, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly ease parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming technique.

  5. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The greater processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
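
    For reference, the update at the heart of the Davidon-Fletcher-Powell method, in its standard textbook form (symbols assumed here, not taken from the paper):

```latex
% DFP update of the inverse-Hessian approximation H_k, with step
% s_k = x_{k+1} - x_k and gradient change y_k = \nabla f(x_{k+1}) - \nabla f(x_k):
H_{k+1} = H_k
        + \frac{s_k s_k^{\mathsf T}}{s_k^{\mathsf T} y_k}
        - \frac{H_k y_k y_k^{\mathsf T} H_k}{y_k^{\mathsf T} H_k y_k}.
```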

  6. Comparative Study of Dynamic Programming and Pontryagin’s Minimum Principle on Energy Management for a Parallel Hybrid Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Huei Peng

    2013-04-01

    Full Text Available This paper compares two optimal energy management methods for parallel hybrid electric vehicles using an Automated Manual Transmission (AMT). A control-oriented model of the powertrain and vehicle dynamics is built first. The energy management is formulated as a typical optimal control problem to trade off the fuel consumption and gear shifting frequency under admissible constraints. Dynamic Programming (DP) and Pontryagin's Minimum Principle (PMP) are applied to obtain the optimal solutions. With appropriately tuned co-states, the PMP solution is found to be very close to that from DP. The solution for the gear shifting in PMP has an algebraic expression associated with the vehicular velocity and can be implemented more efficiently in the control algorithm. The computation time of PMP is significantly less than that of DP.
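
    In the usual PMP formulation for hybrid energy management (generic form with assumed symbols, not the paper's exact model), the instantaneous control minimizes a Hamiltonian that trades fuel rate against battery use:

```latex
% Generic PMP Hamiltonian for HEV energy management: \dot m_f is the fuel rate,
% SOC the battery state of charge, u the power-split/gear command, \lambda the co-state:
H\bigl(SOC, u, \lambda, t\bigr) = \dot m_f(u, t) + \lambda(t)\, \dot{SOC}(SOC, u, t),
\qquad u^*(t) = \arg\min_{u \in \mathcal U} H .
```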

  7. Programming massively parallel processors a hands-on approach

    CERN Document Server

    Kirk, David B

    2010-01-01

    Programming Massively Parallel Processors discusses basic concepts about parallel programming and GPU architecture. "Massively parallel" refers to the use of a large number of processors to perform a set of computations in a coordinated parallel way. The book details various techniques for constructing parallel programs. It also discusses the development process, performance level, floating-point format, parallel patterns, and dynamic parallelism. The book serves as a teaching guide where parallel programming is the main topic of the course. It builds on the basics of C programming for CUDA, a parallel programming environment that is supported on NVIDIA GPUs. Composed of 12 chapters, the book begins with basic information about the GPU as a parallel computer source. It also explains the main concepts of CUDA, data parallelism, and the importance of memory access efficiency using CUDA. The target audience of the book is graduate and undergraduate students from all science and engineering disciplines who ...

  8. Work shift duration: a review comparing eight hour and 12 hour shift systems.

    Science.gov (United States)

    Smith, L; Folkard, S; Tucker, P; Macdonald, I

    1998-04-01

    Shiftwork is now a major feature of working life across a broad range of industries. The features of the shift systems operated can impact on the wellbeing, performance, and sleep of shiftworkers. This paper reviews the current state of knowledge on one major characteristic of shift rotas, namely shift duration. Evidence comparing the relative effects of eight hour and 12 hour shifts on fatigue and job performance, safety, sleep, and physical and psychological health is considered. At the organisational level, factors such as the mode of system implementation, attitudes towards shift rotas, sickness absence and turnover, overtime, and moonlighting are discussed. Manual and electronic searches of the shiftwork research literature were conducted to obtain information on comparisons between eight hour and 12 hour shifts. The research findings are largely equivocal. The bulk of the evidence suggests few differences between eight and 12 hour shifts in the way they affect people. There may even be advantages to 12 hour shifts in terms of lower stress levels, better physical and psychological wellbeing, improved durations and quality of off-duty sleep as well as improvements in family relations. On the negative side, the main concerns are fatigue and safety. It is noted that a 12 hour shift does not equate with being active for only 12 hours. There can be considerable extension of the person's time awake either side of the shift. However, the effects of longer term exposure to extended work days have been relatively uncharted in any systematic way. Longitudinal comparative research into the chronic impact of the compressed working week is needed.

  9. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  10. Third ventricle midline shift on computed tomography as an alternative to septum pellucidum shift

    International Nuclear Information System (INIS)

    Santiago, Carlos Francis A.; Oropilla, Jean Quint L; Alvarez, Victor M.

    2000-01-01

    The cerebral midline shift is measured using the displacement from the midline of the third ventricle. It is an easily determined criterion by which CT scans of patients with spontaneous intracerebral hematoma may be investigated. Midline shift is a significant criterion with which to gauge the neurological status of patients. In a retrospective study of 32 patients with spontaneous unilateral intracerebral hemorrhage, a midline third ventricle shift correlated well with septum pellucidum shift. A greater than 7 mm midline third ventricle shift was associated with a significantly lower Glasgow Coma Scale score compared with a shift of less than 7 mm. For the septum pellucidum, a greater than 10 mm shift was similarly associated with a significantly lower Glasgow Coma Scale score. (Author)

  11. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  12. Insomnia in shift work.

    Science.gov (United States)

    Vallières, Annie; Azaiez, Aïda; Moreau, Vincent; LeBlanc, Mélanie; Morin, Charles M

    2014-12-01

    Shift work disorder involves insomnia and/or excessive sleepiness associated with the work schedule. The present study examined the impact of insomnia on the perceived physical and psychological health of adults working on night and rotating shift schedules compared to day workers. A total of 418 adults (51% women, mean age 41.4 years), including 51 night workers, 158 rotating shift workers, and 209 day workers were selected from an epidemiological study. An algorithm was used to classify each participant of the two groups (working night or rotating shifts) according to the presence or absence of insomnia symptoms. Each of these individuals was paired with a day worker according to gender, age, and income. Participants completed several questionnaires measuring sleep, health, and psychological variables. Night and rotating shift workers with insomnia presented a sleep profile similar to that of day workers with insomnia. Sleep time was more strongly related to insomnia than to shift work per se. Participants with insomnia in the three groups complained of anxiety, depression, and fatigue, and reported consuming equal amounts of sleep-aid medication. Insomnia also contributed to chronic pain and otorhinolaryngology problems, especially among rotating shift workers. Work productivity and absenteeism were more strongly related to insomnia. The present study highlights insomnia as an important component of the sleep difficulties experienced by shift workers. Insomnia may exacerbate certain physical and mental health problems of shift workers, and impair their quality of life. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R; Ratterman, Joseph D; Smith, Brian E

    2014-11-11

    Endpoint-based parallel data processing with non-blocking collective instructions in a PAMI of a parallel computer is disclosed. The PAMI is composed of data communications endpoints, each including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task. The compute nodes are coupled for data communications through the PAMI. The parallel application establishes a data communications geometry specifying a set of endpoints that are used in collective operations of the PAMI by associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.

  14. Effects of extended work shifts and shift work on patient safety, productivity, and employee health.

    Science.gov (United States)

    Keller, Simone M

    2009-12-01

    It is estimated 1.3 million health care errors occur each year and of those errors 48,000 to 98,000 result in the deaths of patients (Barger et al., 2006). Errors occur for a variety of reasons, including the effects of extended work hours and shift work. The need for around-the-clock staff coverage has resulted in creative ways to maintain quality patient care, keep health care errors or adverse events to a minimum, and still meet the needs of the organization. One way organizations have attempted to alleviate staff shortages is to create extended work shifts. Instead of the standard 8-hour shift, workers are now working 10, 12, 16, or more hours to provide continuous patient care. Although literature does support these staffing patterns, it cannot be denied that shifts beyond the traditional 8 hours increase staff fatigue, health care errors, and adverse events and outcomes and decrease alertness and productivity. This article includes a review of current literature on shift work, the definition of shift work, error rates and adverse outcomes related to shift work, health effects on shift workers, shift work effects on older workers, recommended optimal shift length, positive and negative effects of shift work on the shift worker, hazards associated with driving after extended shifts, and implications for occupational health nurses. Copyright 2009, SLACK Incorporated.

  15. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The objects of research are tools to support the development of parallel programs in C/C++. Methods and software which automate the process of designing parallel applications are proposed.

  16. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

  17. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and with reasonable fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.

  18. Verification of the absorbed dose values determined with plane parallel ionization chambers in therapeutic electron beams using ferrous sulfate dosimetry

    International Nuclear Information System (INIS)

    Plaetsen, A. van der; Thierens, H.; Palmans, H.

    2000-01-01

    Absolute and relative dosimetry measurements in clinical electron beams using different detectors were performed at a Philips SL18 accelerator. For absolute dosimetry, ionization chamber measurements with the PTW Markus and PTW Roos plane parallel chambers were performed in water following the recommendations of the TRS-381 Code of Practice, using different options for chamber calibration. The dose results obtained with these ionization chambers using the electron beam calibration method were compared with the dose response of the ferrous sulphate (Fricke) chemical dosimeter. The influence of the choice of detector type on the determination of physical quantities necessary for absolute dose determination was investigated and discussed. Results for d_max, R_50 and R_p were in agreement within statistical uncertainties when using a diode, diamond or plane parallel chamber. The effective point of measurement for the Markus chamber is found to be shifted 0.5 mm from the front surface of the cavity. Fluence correction factors, h_m, for dose determination in electron beams using a PMMA phantom were determined experimentally for both plane parallel chamber types. (author)

  19. Professional Parallel Programming with C# Master Parallel Extensions with NET 4

    CERN Document Server

    Hillar, Gastón

    2010-01-01

    Expert guidance for those programming today's dual-core processor PCs. As PC processors explode from one or two to now eight processors, there is an urgent need for programmers to master concurrent programming. This book dives deep into the latest technologies available to programmers for creating professional parallel applications using C#, .NET 4, and Visual Studio 2010. The book covers task-based programming, coordination data structures, PLINQ, thread pools, the asynchronous programming model, and more. It also teaches other parallel programming techniques, such as SIMD and vectorization.

  20. Parallelization for first principles electronic state calculation program

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Oguchi, Tamio.

    1997-03-01

    In this report we study the parallelization of a first-principles electronic state calculation program. The target machines are the NEC SX-4 for shared-memory parallelization and the FUJITSU VPP300 for distributed-memory parallelization. The features of each parallel machine are surveyed, and the parallelization methods suitable for each are proposed. It is shown that 1.60 times acceleration is achieved with 2-CPU parallelization on the SX-4 and 4.97 times acceleration is achieved with 12-PE parallelization on the VPP300. (author)
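
    For context, those accelerations correspond to the parallel efficiencies computed below (efficiency = speedup / number of processors); the snippet is only a worked restatement of the figures quoted above.

        # Parallel efficiency implied by the reported speedups.
        for machine, speedup, procs in [("NEC SX-4", 1.60, 2), ("FUJITSU VPP300", 4.97, 12)]:
            print(f"{machine}: efficiency = {speedup / procs:.2f}")   # 0.80 and 0.41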

  1. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    The work in the field of parallel processing has developed as research activities using several numerical Monte Carlo simulations related to basic or applied current problems of nuclear and particle physics. For the applications utilizing the GEANT code, development or improvement work was done on parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other works in this field refer to simulations for nuclear medicine applications such as, for instance, the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. In particular, these calculations are suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other works mentioned in the same field refer to the simulation of electron channelling in crystals and the simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment.

  2. Neoclassical parallel flow calculation in the presence of external parallel momentum sources in Heliotron J

    Energy Technology Data Exchange (ETDEWEB)

    Nishioka, K.; Nakamura, Y. [Graduate School of Energy Science, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan); Nishimura, S. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Lee, H. Y. [Korea Advanced Institute of Science and Technology, Daejeon 305-701 (Korea, Republic of); Kobayashi, S.; Mizuuchi, T.; Nagasaki, K.; Okada, H.; Minami, T.; Kado, S.; Yamamoto, S.; Ohshima, S.; Konoshima, S.; Sano, F. [Institute of Advanced Energy, Kyoto University, Gokasho, Uji, Kyoto 611-0011 (Japan)

    2016-03-15

    A moment approach to calculate neoclassical transport in non-axisymmetric torus plasmas composed of multiple ion species is extended to include the external parallel momentum sources due to unbalanced tangential neutral beam injections (NBIs). The momentum sources that are included in the parallel momentum balance are calculated from the collision operators of background particles with fast ions. This method is applied to clarify the physical mechanism of the neoclassical parallel ion flows and the multi-ion-species effect on them in Heliotron J NBI plasmas. It is found that the parallel ion flow can be determined by the balance between the parallel viscosity and the external momentum source in the region where the external source is much larger than the thermodynamic-force-driven source in collisional plasmas. This is because the friction between C^6+ and D^+ prevents a large difference between the C^6+ and D^+ flow velocities in such plasmas. The C^6+ flow velocities, which are measured by the charge exchange recombination spectroscopy system, are numerically evaluated with this method. It is shown that the experimentally measured C^6+ impurity flow velocities do not clearly contradict the neoclassical estimations, and the dependence of the parallel flow velocities on the magnetic field ripples is consistent in both results.

  3. Modelling a Nurse Shift Schedule with Multiple Preference Ranks for Shifts and Days-Off

    Directory of Open Access Journals (Sweden)

    Chun-Cheng Lin

    2014-01-01

    Full Text Available When it comes to nurse shift schedules, it is found that the nursing staff have diverse preferences about shift rotations and days-off. The previous studies only focused on the most preferred work shift and the number of satisfactory days-off of the schedule at the current schedule period but had few discussions on the previous schedule periods and other preference levels for shifts and days-off, which may affect fairness of shift schedules. As a result, this paper proposes a nurse scheduling model based upon integer programming that takes into account constraints of the schedule, different preference ranks towards each shift, and the historical data of previous schedule periods to maximize the satisfaction of all the nursing staff's preferences about the shift schedule. The main contribution of the proposed model is that we consider that the nursing staff’s satisfaction level is affected by multiple preference ranks and their priority ordering to be scheduled, so that the quality of the generated shift schedule is more reasonable. Numerical results show that the planned shifts and days-off are fair and successfully meet the preferences of all the nursing staff.
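
    A minimal integer-programming sketch of the scheduling structure described above is given below, written with the PuLP modelling library; the nurses, shifts, preference weights and coverage levels are invented for illustration and do not reproduce the paper's model or data.

        # Minimal nurse-scheduling integer program (PuLP); data are invented placeholders.
        from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

        nurses, days, shifts = ["N1", "N2", "N3"], range(3), ["day", "evening", "night"]
        pref = {(n, s): 1 for n in nurses for s in shifts}   # preference weight per nurse and shift
        pref[("N1", "night")] = 3                            # e.g. N1 strongly prefers nights

        x = {(n, d, s): LpVariable(f"x_{n}_{d}_{s}", cat=LpBinary)
             for n in nurses for d in days for s in shifts}

        model = LpProblem("nurse_schedule", LpMaximize)
        model += lpSum(pref[(n, s)] * x[(n, d, s)] for n in nurses for d in days for s in shifts)

        for n in nurses:
            for d in days:
                model += lpSum(x[(n, d, s)] for s in shifts) <= 1   # at most one shift per day
        for d in days:
            for s in shifts:
                model += lpSum(x[(n, d, s)] for n in nurses) >= 1   # every shift must be covered

        model.solve()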

  4. Structural Properties of G,T-Parallel Duplexes

    Directory of Open Access Journals (Sweden)

    Anna Aviñó

    2010-01-01

    Full Text Available The structure of G,T-parallel-stranded duplexes of DNA carrying similar amounts of adenine and guanine residues is studied by means of molecular dynamics (MD) simulations and UV and CD spectroscopies. In addition, the impact of the substitution of adenine by 8-aminoadenine and guanine by 8-aminoguanine is analyzed. The presence of 8-aminoadenine and 8-aminoguanine stabilizes the parallel duplex structure. Binding of these oligonucleotides to their target polypyrimidine sequences to form the corresponding G,T-parallel triplex was not observed. Instead, when unmodified parallel-stranded duplexes were mixed with their polypyrimidine target, an interstrand Watson-Crick duplex was formed. As predicted by theoretical calculations, parallel-stranded duplexes carrying 8-aminopurines did not bind to their target. The preference for the parallel duplex over the Watson-Crick antiparallel duplex is attributed to the strong stabilization of the parallel duplex produced by the 8-aminopurines. Theoretical studies show that the isomorphism of the triads is crucial for the stability of the parallel triplex.

  5. High-speed parallel solution of the neutron diffusion equation with the hierarchical domain decomposition boundary element method incorporating parallel communications

    International Nuclear Information System (INIS)

    Tsuji, Masashi; Chiba, Gou

    2000-01-01

    A hierarchical domain decomposition boundary element method (HDD-BEM) for solving the multiregion neutron diffusion equation (NDE) has been fully parallelized, both for numerical computations and for data communications, to accomplish a high parallel efficiency on distributed memory message passing parallel computers. Data exchanges between node processors that are repeated during iteration processes of HDD-BEM are implemented without any intervention of the host processor that was used to supervise parallel processing in the conventional parallelized HDD-BEM (P-HDD-BEM). Thus, the parallel processing can be executed with only cooperative operations of node processors. The communication overhead was the dominant time-consuming part in the conventional P-HDD-BEM, and the parallelization efficiency decreased steeply as the number of processors increased. With parallel data communication, the efficiency is affected only by the number of boundary elements assigned to decomposed subregions, and the communication overhead can be drastically reduced. This feature can be particularly advantageous in the analysis of three-dimensional problems where a large number of processors are required. The proposed P-HDD-BEM offers a promising solution to the problem of deteriorating parallel efficiency and opens a new path to parallel computations of NDEs on distributed memory message passing parallel computers. (author)

  6. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  7. Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.

    Science.gov (United States)

    Bhandarkar, S M; Chirravuri, S; Arnold, J

    1996-01-01

    Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048 processor MasPar MP-2 system which is an SIMD 2-D toroidal mesh architecture whereas the MIMD algorithms are implemented on an 8 processor Intel iPSC/860 which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs the best. For a coarse-grained MIMD architecture with high synchronization overhead such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.
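
    To make the "multiple independent searches" strategy concrete, the sketch below runs several simulated-annealing chains in separate processes and keeps the best result; the toy objective (ordering a permutation) merely stands in for the clone-ordering cost and is not the authors' formulation.

        # Multiple independent simulated-annealing searches run in parallel processes.
        import math
        import random
        from multiprocessing import Pool

        def anneal(seed, n=40, steps=20000, t0=5.0):
            rng = random.Random(seed)
            order = list(range(n))
            rng.shuffle(order)
            cost = lambda p: sum(abs(p[i] - i) for i in range(n))   # toy arrangement cost
            cur = cost(order)
            for step in range(steps):
                t = t0 * (1 - step / steps) + 1e-9                  # linear cooling schedule
                i, j = rng.randrange(n), rng.randrange(n)
                order[i], order[j] = order[j], order[i]             # propose a swap
                new = cost(order)
                if new <= cur or rng.random() < math.exp(-(new - cur) / t):
                    cur = new                                       # accept the move
                else:
                    order[i], order[j] = order[j], order[i]         # undo the move
            return cur, order

        if __name__ == "__main__":
            with Pool(4) as pool:                                   # four independent searches
                best_cost, best_order = min(pool.map(anneal, range(4)))
                print("best cost found:", best_cost)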

  8. Shift systems in nuclear power plants - aspects for planning, shift systems, utility practice

    International Nuclear Information System (INIS)

    Grauf, E.

    1986-01-01

    This lecture contains the most important aspects of shift structure and shift organisation. The criteria for shift planning, involving essential tasks, duties, laws and regulations, and medical and social aspects, will be presented. In the Federal Republic of Germany some basic models were established, which will be shown and explained with special reference to the number of teams, the size of shift crews and absence regulations. Moreover, the lecture will deal with rotation systems and provisions for the transfer of shift responsibilities. Using the example of a utility plant commissioning time scale (1300 MW PWR), the practice of shift installations will be shown, as well as the most important points of education and training. Within this compass the criteria and requirements for the training and education of operational personnel in the Federal Republic of Germany will also be touched upon. (orig.)

  9. On synchronous parallel computations with independent probabilistic choice

    International Nuclear Information System (INIS)

    Reif, J.H.

    1984-01-01

    This paper introduces probabilistic choice to synchronous parallel machine models, in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time, space, and processor bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity.

  10. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks along with Graphical Processing Units have empowered parallelism broadly. A number of compilers have been updated to address the emerging challenges of synchronization and threading. Appropriate program and algorithm classification is of great advantage to software engineers, giving them opportunities for effective parallelization. In the present work we investigated current species for the classification of algorithms; related work on classification is discussed along with a comparison of the issues that challenge classification. A set of algorithms was chosen which matches the structure with different issues and performs the given tasks. We have tested these algorithms utilizing existing automatic species extraction tools along with the Bones compiler. We have added functionalities to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented these new ideas in the tool, enabling automatic characterization of program code.

  11. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
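
    The underlying relation is 1/R_total = 1/R1 + 1/R2, i.e. R_total = R1*R2/(R1 + R2); the short loop below, a simple illustration rather than the article's actual tables, lists resistor pairs up to 100 ohms whose parallel combination is a whole number.

        # Enumerate parallel resistor pairs with whole-number total resistance.
        for r1 in range(1, 101):
            for r2 in range(r1, 101):
                if (r1 * r2) % (r1 + r2) == 0:
                    print(f"{r1} ohm || {r2} ohm = {r1 * r2 // (r1 + r2)} ohm")   # e.g. 3 || 6 = 2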

  12. Nurses' shift reports

    DEFF Research Database (Denmark)

    Buus, Niels; Hoeck, Bente; Hamilton, Bridget Elizabeth

    2017-01-01

    AIMS AND OBJECTIVES: To identify reporting practices that feature in studies of nurses' shift reports across diverse nursing specialities. The objectives were to perform an exhaustive systematic literature search and to critically review the quality and findings of qualitative field studies...... of nurses' shift reports. BACKGROUND: Nurses' shift reports are routine occurrences in healthcare organisations that are viewed as crucial for patient outcomes, patient safety and continuity of care. Studies of communication between nurses attend primarily to 1:1 communication and analyse the adequacy...... and accuracy of patient information and feature handovers at the bedside. Still, verbal reports between groups of nurses about patients are commonplace. Shift reports are obvious sites for studying the situated accomplishment of professional nursing at the group level. This review is focused exclusively...

  13. Parallelization methods study of thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Gaudart, Catherine

    2000-01-01

    The variety of parallelization methods and machines leads to a wide selection for programmers. In this study we suggest, in an industrial context, some solutions drawn from the experience acquired with different parallelization methods. The study concerns several scientific codes which simulate a large variety of thermal-hydraulics phenomena. A bibliographic survey of parallelization methods and a first analysis of the codes showed the difficulty of applying our process to the full set of applications under study. It was therefore necessary to identify and extract a representative part of these applications and parallelization methods. The linear solver part of the codes emerged as the natural candidate. On this particular part several parallelization methods were used. From these developments one can estimate the work required for a programmer new to parallelization to parallelize an application, and the impact of the development constraints. The parallelization methods tested are the numerical library PETSc, the parallelizer PAF, the language HPF, the formalism PEI, and the communication libraries MPI and PVM. In order to test several methods on different applications and to respect the constraint of minimizing the modifications in the codes, a tool called SPS (Server of Parallel Solvers) has been developed. We describe the different constraints on the optimization of codes in an industrial context, present the solutions provided by the SPS tool, show the development of the linear solver part with the tested parallelization methods and, lastly, compare the results against the imposed criteria. (author) [fr]

  14. Coupling of g proteins to reconstituted monomers and tetramers of the M2 muscarinic receptor.

    Science.gov (United States)

    Redka, Dar'ya S; Morizumi, Takefumi; Elmslie, Gwendolynne; Paranthaman, Pranavan; Shivnaraine, Rabindra V; Ellis, John; Ernst, Oliver P; Wells, James W

    2014-08-29

    G protein-coupled receptors can be reconstituted as monomers in nanodiscs and as tetramers in liposomes. When reconstituted with G proteins, both forms enable an allosteric interaction between agonists and guanylyl nucleotides. Both forms, therefore, are candidates for the complex that controls signaling at the level of the receptor. To identify the biologically relevant form, reconstituted monomers and tetramers of the purified M2 muscarinic receptor were compared with muscarinic receptors in sarcolemmal membranes for the effect of guanosine 5'-[β,γ-imido]triphosphate (GMP-PNP) on the inhibition of N-[(3)H]methylscopolamine by the agonist oxotremorine-M. With monomers, a stepwise increase in the concentration of GMP-PNP effected a lateral, rightward shift in the semilogarithmic binding profile (i.e. a progressive decrease in the apparent affinity of oxotremorine-M). With tetramers and receptors in sarcolemmal membranes, GMP-PNP effected a vertical, upward shift (i.e. an apparent redistribution of sites from a state of high affinity to one of low affinity with no change in affinity per se). The data were analyzed in terms of a mechanistic scheme based on a ligand-regulated equilibrium between uncoupled and G protein-coupled receptors (the "ternary complex model"). The model predicts a rightward shift in the presence of GMP-PNP and could not account for the effects at tetramers in vesicles or receptors in sarcolemmal membranes. Monomers present a special case of the model in which agonists and guanylyl nucleotides interact within a complex that is both constitutive and stable. The results favor oligomers of the M2 receptor over monomers as the biologically relevant state for coupling to G proteins. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.
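
    The distinction drawn above between a lateral, rightward shift and a vertical shift of the semilogarithmic binding profile can be visualised with simple one-site and two-site competition curves; the sketch below is purely illustrative, with all IC50 values, site fractions and labels invented for the example rather than taken from the study.

        # Illustrative competition-binding curves (parameters invented): a rightward shift
        # moves the curve along the log-concentration axis, whereas a vertical shift
        # redistributes sites between fixed high- and low-affinity states.
        import numpy as np
        import matplotlib.pyplot as plt

        A = np.logspace(-9, -3, 200)                    # agonist concentration (M)

        def one_site(ic50):
            return 1.0 / (1.0 + A / ic50)               # fraction of radioligand still bound

        def two_site(f_high, ic50_high=1e-7, ic50_low=1e-5):
            return f_high * one_site(ic50_high) + (1 - f_high) * one_site(ic50_low)

        plt.semilogx(A, one_site(1e-7), label="monomer-like, no nucleotide")
        plt.semilogx(A, one_site(1e-6), label="monomer-like, + nucleotide (rightward shift)")
        plt.semilogx(A, two_site(0.8), "--", label="tetramer-like, no nucleotide")
        plt.semilogx(A, two_site(0.2), "--", label="tetramer-like, + nucleotide (vertical shift)")
        plt.xlabel("agonist concentration (M)")
        plt.ylabel("fraction of radioligand bound")
        plt.legend()
        plt.show()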

  15. Simulation Exploration through Immersive Parallel Planes

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Smith, Steve [Los Alamos Visualization Associates

    2017-05-25

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
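
    For readers unfamiliar with the baseline graphic being generalized, the short sketch below draws a conventional two-dimensional parallel-coordinates plot with pandas and matplotlib over a small made-up dataset; it is only an illustration of the standard technique, not the immersive system described above.

        # Conventional parallel-coordinates plot: one axis per variable, one polyline per observation.
        import pandas as pd
        import matplotlib.pyplot as plt
        from pandas.plotting import parallel_coordinates

        df = pd.DataFrame({
            "run":   ["a", "a", "b", "b"],          # class column used only for colouring
            "power": [1.0, 1.2, 0.8, 0.9],
            "cost":  [3.0, 2.8, 3.5, 3.6],
            "yield": [0.70, 0.75, 0.60, 0.65],
        })
        parallel_coordinates(df, "run")
        plt.show()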

  16. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, small self-weight/load ratio, good dynamic behavior and easy control; hence its range of application keeps expanding. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematics solution and the limitation of link lengths has been introduced. This paper analyses the position workspace and orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the lengths of the branches of the parallel mechanism is the main means of increasing or decreasing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but will change its position.

  17. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  18. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  19. From antenna to antenna: lateral shift of olfactory memory recall by honeybees.

    Science.gov (United States)

    Rogers, Lesley J; Vallortigara, Giorgio

    2008-06-04

    Honeybees, Apis mellifera, readily learn to associate odours with sugar rewards and we show here that recall of the olfactory memory, as demonstrated by the bee extending its proboscis when presented with the trained odour, involves first the right and then the left antenna. At 1-2 hour after training using both antennae, recall is possible mainly when the bee uses its right antenna but by 6 hours after training a lateral shift has occurred and the memory can now be recalled mainly when the left antenna is in use. Long-term memory one day after training is also accessed mainly via the left antenna. This time-dependent shift from right to left antenna is also seen as side biases in responding to odour presented to the bee's left or right side. Hence, not only are the cellular events of memory formation similar in bees and vertebrate species but also the lateralized networks involved may be similar. These findings therefore seem to call for remarkable parallel evolution and suggest that the proper functioning of memory formation in a bilateral animal, either vertebrate or invertebrate, requires lateralization of processing.

  20. From antenna to antenna: lateral shift of olfactory memory recall by honeybees.

    Directory of Open Access Journals (Sweden)

    Lesley J Rogers

    Full Text Available Honeybees, Apis mellifera, readily learn to associate odours with sugar rewards and we show here that recall of the olfactory memory, as demonstrated by the bee extending its proboscis when presented with the trained odour, involves first the right and then the left antenna. At 1-2 hour after training using both antennae, recall is possible mainly when the bee uses its right antenna but by 6 hours after training a lateral shift has occurred and the memory can now be recalled mainly when the left antenna is in use. Long-term memory one day after training is also accessed mainly via the left antenna. This time-dependent shift from right to left antenna is also seen as side biases in responding to odour presented to the bee's left or right side. Hence, not only are the cellular events of memory formation similar in bees and vertebrate species but also the lateralized networks involved may be similar. These findings therefore seem to call for remarkable parallel evolution and suggest that the proper functioning of memory formation in a bilateral animal, either vertebrate or invertebrate, requires lateralization of processing.

  1. [Sleep quality of nurses working in shifts - Hungarian adaptation of the Bergen Shift Work Sleep Questionnaire].

    Science.gov (United States)

    Fusz, Katalin; Tóth, Ákos; Fullér, Noémi; Müller, Ágnes; Oláh, András

    2015-12-06

    Sleep disorders among shift workers are common problems due to the disturbed circadian rhythm. The Bergen Shift Work Sleep Questionnaire assesses discrete sleep problems related to work shifts (day, evening and night shifts) and rest days. The aim of the study was to develop the Hungarian version of this questionnaire and to compare the sleep quality of nurses in different work schedules. 326 nurses working in shifts filled in the questionnaire. The authors made convergent and discriminant validation of the questionnaire with the Athens Insomnia Scale and the Perceived Stress Questionnaire. Based on its psychometric characteristics, the questionnaire was suitable to assess sleep disorders associated with shift work in a Hungarian sample. The frequency of discrete symptoms differed significantly between the shifts. Nurses experienced the worst sleep quality and daytime fatigue after the night shift. Nurses working in an irregular shift system had worse sleep quality than nurses working in regular and flexible shift systems. The sleep of nurses working in shifts should be assessed with the Hungarian version of the Bergen Shift Work Sleep Questionnaire on a nationally representative sample, so that the least burdensome shift system could be established.

  2. Age differences in strategy shift: retrieval avoidance or general shift reluctance?

    Science.gov (United States)

    Frank, David J; Touron, Dayna R; Hertzog, Christopher

    2013-09-01

    Previous studies of metacognitive age differences in skill acquisition strategies have relied exclusively on tasks with a processing shift from an algorithm to retrieval strategy. Older adults' demonstrated reluctance to shift strategies in such tasks could reflect either a specific aversion to a memory retrieval strategy or a general, inertial resistance to strategy change. Haider and Frensch's (1999) alphabet verification task (AVT) affords a non-retrieval-based strategy shift. Participants verify the continuation of alphabet strings such as D E F G [4] L, with the bracketed digit indicating a number of letters to be skipped. When all deviations are restricted to the letter-digit-letter portion, participants can speed their responses by selectively attending to only that part of the stimulus. We adapted the AVT to include conditions that promoted shift to a retrieval strategy, a selective attention strategy, or both strategies. Item-level strategy reports were validated by eye movement data. Older adults shifted more slowly to the retrieval strategy but more quickly to the selective attention strategy than young adults, indicating a retrieval-strategy avoidance. Strategy confidence and perceived strategy difficulty correlated with shift to the two strategies in both age groups. Perceived speed of responses with each strategy specifically correlated with older adults' strategy choices, suggesting that some older adults avoid retrieval because they do not appreciate its efficiency benefits.

  3. Age Differences in Strategy Shift: Retrieval Avoidance or General Shift Reluctance?

    Science.gov (United States)

    Frank, David J.; Touron, Dayna R.; Hertzog, Christopher

    2013-01-01

    Previous studies of metacognitive age differences in skill acquisition strategies have relied exclusively on tasks with a processing shift from an algorithm to retrieval strategy. Older adults’ demonstrated reluctance to shift strategies in such tasks could reflect either a specific aversion to a memory retrieval strategy or a general, inertial resistance to strategy change. Haider and Frensch’s (1999) alphabet verification task (AVT) affords a non-retrieval-based strategy shift. Participants verify the continuation of alphabet strings such as D E F G [4] L, with the bracketed digit indicating a number of letters to be skipped. When all deviations are restricted to the letter-digit-letter portion, participants can speed their responses by selectively attending only to that part of the stimulus. We adapted the AVT to include conditions which promoted shift to a retrieval strategy, a selective attention strategy, or both strategies. Item-level strategy reports were validated by eye movement data. Older adults shifted more slowly to the retrieval strategy but more quickly to the selective attention strategy than young adults, indicating a retrieval-strategy avoidance. Strategy confidence and perceived strategy difficulty correlated with shift to the two strategies in both age groups. Perceived speed of responses with each strategy specifically correlated with older adults’ strategy choices, suggesting that some older adults avoid retrieval because they do not appreciate its efficiency benefits. PMID:23088195

  4. Non-occupational physical activity levels of shift workers compared with non-shift workers

    NARCIS (Netherlands)

    Loef, Bette; Hulsegge, Gerben; Wendel-Vos, G C Wanda; Verschuren, W M Monique; Vermeulen, Roel C H; Bakker, Marije F.; van der Beek, Allard J.; Proper, Karin I

    2017-01-01

    OBJECTIVES: Lack of physical activity (PA) has been hypothesised as an underlying mechanism in the adverse health effects of shift work. Therefore, our aim was to compare non-occupational PA levels between shift workers and non-shift workers. Furthermore, exposure-response relationships for

  5. Collectively loading an application in a parallel computer

    Science.gov (United States)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.; Mundy, Michael B.

    2016-01-05

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
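
    A hedged sketch of the same load pattern, using plain mpi4py rather than the parallel computer's control system described above: one assumed 'job leader' rank reads the application image (a hypothetical file name) and broadcasts it collectively to the other compute nodes.

        # Job-leader broadcast of an application image to all ranks (illustrative only).
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        leader = 0                                     # assumed job-leader rank

        if comm.Get_rank() == leader:
            with open("application.bin", "rb") as fh:  # hypothetical application image
                image = fh.read()
        else:
            image = None

        image = comm.bcast(image, root=leader)         # collective broadcast to every rank
        # each node could now write 'image' to local storage and execute it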

  6. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  7. Parallel-In-Time For Moving Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  8. Integrated Task And Data Parallel Programming: Language Design

    Science.gov (United States)

    Grimshaw, Andrew S.; West, Emily A.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments: In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities: During the fall I collaborated

  9. Performance of the Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  10. Metabolic profiling based on two-dimensional J-resolved 1H NMR data and parallel factor analysis

    DEFF Research Database (Denmark)

    Yilmaz, Ali; Nyberg, Nils T; Jaroszewski, Jerzy W.

    2011-01-01

    the intensity variances along the chemical shift axis are taken into account. Here, we describe the use of parallel factor analysis (PARAFAC) as a tool to preprocess a set of two-dimensional J-resolved spectra with the aim of keeping the J-coupling information intact. PARAFAC is a mathematical decomposition......-model was done automatically by evaluating amount of explained variance and core consistency values. Score plots showing the distribution of objects in relation to each other, and loading plots in the form of two-dimensional pseudo-spectra with the same appearance as the original J-resolved spectra...

  11. Unified Singularity Modeling and Reconfiguration of 3rTPS Metamorphic Parallel Mechanisms with Parallel Constraint Screws

    Directory of Open Access Journals (Sweden)

    Yufeng Zhuang

    2015-01-01

    Full Text Available This paper presents a unified singularity modeling and reconfiguration analysis of variable topologies of a class of metamorphic parallel mechanisms with parallel constraint screws. The new parallel mechanisms consist of three reconfigurable rTPS limbs that have two working phases stemming from the reconfigurable Hooke (rT) joint. While one phase has full mobility, the other supplies a constraint force to the platform. Based on these, the platform constraint screw systems show that the new metamorphic parallel mechanisms have four topologies by altering the limb phases, with mobility change among 1R2T (one rotation with two translations), 2R2T, and 3R2T, and mobility 6. Geometric conditions of the mechanism design are investigated, with some special topologies illustrated considering the limb arrangement. Following this and the actuation scheme analysis, a unified Jacobian matrix is formed using screw theory to include the change between geometric constraints and actuation constraints in the topology reconfiguration. Various singular configurations are identified by analyzing screw dependency in the Jacobian matrix. The work in this paper provides a basis for singularity-free workspace analysis and optimal design of the class of metamorphic parallel mechanisms with parallel constraint screws, which shows simple geometric constraints with potential simple kinematics and dynamics properties.

  12. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    Science.gov (United States)

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the Wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via poisson-disc undersampling in the two phase-encoded directions. PMID:22345529
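
    The cross-channel ("joint sparsity") shrinkage that such an iterative soft-thresholding reconstruction repeats can be written compactly; the sketch below is a generic group soft-thresholding step under assumed array shapes, not the authors' ℓ1-SPIRiT implementation.

        # Minimal sketch of a cross-channel ("joint sparsity") soft-thresholding step,
        # the kind of operation iterated inside an l1-SPIRiT-style reconstruction.
        # This is a generic illustration, not the authors' code; array shapes are assumed.
        import numpy as np

        def joint_soft_threshold(coeffs, lam):
            """coeffs: complex wavelet coefficients, shape (channels, ...).
            Shrinks the joint magnitude across channels by lam (group soft-thresholding)."""
            mag = np.sqrt(np.sum(np.abs(coeffs) ** 2, axis=0, keepdims=True))  # joint magnitude
            scale = np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)        # shrink factor
            return coeffs * scale

        # Example: 8 coil channels, 256x256 coefficient maps
        coeffs = np.random.randn(8, 256, 256) + 1j * np.random.randn(8, 256, 256)
        shrunk = joint_soft_threshold(coeffs, lam=0.05)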

  13. Parallel and non-parallel laminar mixed convection flow in an inclined tube: The effect of the boundary conditions

    International Nuclear Information System (INIS)

    Barletta, A.

    2008-01-01

    The necessary condition for the onset of parallel flow in the fully developed region of an inclined duct is applied to the case of a circular tube. Parallel flow in inclined ducts is an uncommon regime, since in most cases buoyancy tends to produce the onset of secondary flow. The present study shows how proper thermal boundary conditions may preserve the parallel flow regime. Mixed convection flow is studied for a special non-axisymmetric thermal boundary condition that, with a proper choice of a switch parameter, may be compatible with parallel flow. More precisely, a circumferentially variable heat flux distribution is prescribed on the tube wall, expressed as a sinusoidal function of the azimuthal coordinate θ with period 2π. A π/2 rotation in the position of the maximum heat flux, achieved by setting the switch parameter, may or may not allow the existence of parallel flow. Two cases are considered, corresponding to parallel and non-parallel flow. In the first case, the governing balance equations allow a simple analytical solution. On the contrary, in the second case, the local balance equations are solved numerically by employing a finite element method.

  14. Parallel programming with Easy Java Simulations

    Science.gov (United States)

    Esquembre, F.; Christian, W.; Belloni, M.

    2018-01-01

    Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.

  15. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately......The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chip. This means that parallel processing is required in application areas that traditionally have not used...

  16. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2009-01-01

    parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately......The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chips. This means that parallel processing is required in application areas that traditionally have not used...

  17. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
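
    For readers unfamiliar with the statistic itself, the sketch below computes a plain lag-k autocorrelation with NumPy; it only illustrates the quantity the engine estimates and makes no attempt to reproduce VTK's API or its parallel design.

        # Minimal sketch of a lag-k autocorrelation of a time series (the auto-correlative
        # statistic); illustrative only, not the VTK engine.
        import numpy as np

        def autocorrelation(x, lag):
            x = np.asarray(x, dtype=float)
            if lag == 0:
                return 1.0
            x0 = x[:-lag] - x[:-lag].mean()
            x1 = x[lag:] - x[lag:].mean()
            return float(np.sum(x0 * x1) / np.sqrt(np.sum(x0 ** 2) * np.sum(x1 ** 2)))

        series = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)
        print([round(autocorrelation(series, k), 3) for k in (1, 5, 25)])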

  18. Conformal pure radiation with parallel rays

    International Nuclear Information System (INIS)

    Leistner, Thomas; Paweł Nurowski

    2012-01-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves. (paper)

  19. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  20. Challenges of Transit Oriented Development (TOD) in Iran. The Need for a Paradigm Shift

    Directory of Open Access Journals (Sweden)

    Mahta Mirmoghtadaee

    2016-10-01

    The major contention of this paper is to discuss the general concept of TOD, its benefits and challenges in the Iranian urban context. It is argued here that TOD has several positive outcomes considering the existing urbanization trends in Iran. It may be used as a practical instrument to deal with a rapidly urbanizing country in which the motorization rate is increasing and air pollution is a serious cause of loss of life. However, there are several challenges which should be faced. The need for an Iranian version of TOD, which re-narrates the theory according to the local situation, is the first challenge. A paradigm shift in the government, shifting the priority from housing schemes to mass transit systems, is the second challenge that needs to be taken into consideration. The third challenge is the overlapping and parallel institutions dealing with mass transit systems in urban and regional transportation planning, and insufficient planning instruments. An integrated transportation and urban planning system is necessary here, and there is an urgent need to develop a national TOD guideline with the potential to develop local versions for each city.

  1. A task parallel implementation of fast multipole methods

    KAUST Repository

    Taura, Kenjiro

    2012-11-01

    This paper describes a task parallel implementation of ExaFMM, an open source implementation of fast multipole methods (FMM), using a lightweight task parallel library, MassiveThreads. Although there have been many attempts at parallelizing FMM, experience has almost exclusively been limited to formulations based on flat, homogeneous parallel loops. FMM in fact contains operations that cannot be readily expressed in such conventional but restrictive models. We show that task parallelism, or parallel recursion in particular, allows us to parallelize all operations of FMM naturally and scalably. Moreover, it allows us to parallelize a "mutual interaction" for force/potential evaluation, which is roughly twice as efficient as a more conventional, unidirectional force/potential evaluation. The net result is an open source FMM that is clearly among the fastest single node implementations, including those on GPUs; with a million particles on a 32-core Sandy Bridge 2.20 GHz node, it completes a single time step including tree construction and force/potential evaluation in 65 milliseconds. The study clearly showcases both programmability and performance benefits of flexible parallel constructs over more monolithic parallel loops. © 2012 IEEE.
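
    The factor-of-two saving from "mutual interaction" evaluation comes from applying each pairwise force to both partners; the direct-sum sketch below illustrates that idea only and is not ExaFMM or MassiveThreads code.

        # "Mutual" vs. unidirectional pairwise evaluation for a direct N-body sum.
        # The saving comes from applying each pair force to both particles (Newton's
        # third law). Illustrative only -- not ExaFMM code.
        import numpy as np

        def forces_unidirectional(pos):
            f = np.zeros_like(pos)
            n = len(pos)
            for i in range(n):
                for j in range(n):
                    if i != j:
                        d = pos[j] - pos[i]
                        f[i] += d / (np.dot(d, d) ** 1.5)   # every ordered pair evaluated
            return f

        def forces_mutual(pos):
            f = np.zeros_like(pos)
            n = len(pos)
            for i in range(n):
                for j in range(i + 1, n):                    # each unordered pair once
                    d = pos[j] - pos[i]
                    fij = d / (np.dot(d, d) ** 1.5)
                    f[i] += fij
                    f[j] -= fij                              # reuse the pair evaluation
            return f

        pos = np.random.rand(100, 3)
        assert np.allclose(forces_unidirectional(pos), forces_mutual(pos))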

  2. Bandwidth scalable, coherent transmitter based on the parallel synthesis of multiple spectral slices using optical arbitrary waveform generation.

    Science.gov (United States)

    Geisler, David J; Fontaine, Nicolas K; Scott, Ryan P; He, Tingting; Paraschis, Loukas; Gerstel, Ori; Heritage, Jonathan P; Yoo, S J B

    2011-04-25

    We demonstrate an optical transmitter based on dynamic optical arbitrary waveform generation (OAWG) which is capable of creating high-bandwidth (THz) data waveforms in any modulation format using the parallel synthesis of multiple coherent spectral slices. As an initial demonstration, the transmitter uses only 5.5 GHz of electrical bandwidth and two 10-GHz-wide spectral slices to create 100-ns duration, 20-GHz optical waveforms in various modulation formats including differential phase-shift keying (DPSK), quaternary phase-shift keying (QPSK), and eight phase-shift keying (8PSK) with only changes in software. The experimentally generated waveforms showed clear eye openings and separated constellation points when measured using a real-time digital coherent receiver. Bit-error-rate (BER) performance analysis resulted in a BER < 9.8 × 10⁻⁶ for DPSK and QPSK waveforms. Additionally, we experimentally demonstrate three-slice, 4-ns long waveforms that highlight the bandwidth scalable nature of the optical transmitter. The various generated waveforms show that the key transmitter properties (i.e., packet length, modulation format, data rate, and modulation filter shape) are software definable, and that the optical transmitter is capable of acting as a flexible bandwidth transmitter.

  3. Chemical shift imaging: a review

    International Nuclear Information System (INIS)

    Brateman, L.

    1986-01-01

    Chemical shift is the phenomenon that is seen when an isotope possessing a nuclear magnetic dipole moment resonates at a spectrum of resonance frequencies in a given magnetic field. These resonance frequencies, or chemical shifts, depend on the chemical environments of particular nuclei. Mapping the spatial distribution of nuclei associated with a particular chemical shift (e.g., hydrogen nuclei associated with water molecules or with lipid groups) is called chemical shift imaging. Several techniques of proton chemical shift imaging that have been applied in vivo are presented, and their clinical findings are reported and summarized. Acquiring high-resolution spectra for large numbers of volume elements in two or three dimensions may be prohibitive because of time constraints, but other methods of imaging lipid or water distributions (i.e., selective excitation, selective saturation, or variations in conventional magnetic resonance imaging pulse sequences) can provide chemical shift information. These techniques require less time, but they lack spectral information. Since fat deposition seen by chemical shift imaging may not be demonstrated by conventional magnetic resonance imaging, certain applications of chemical shift imaging, such as in the determination of fatty liver disease, have greater diagnostic utility than conventional magnetic resonance imaging. Furthermore, edge artifacts caused by chemical shift effects can be eliminated by certain selective methods of data acquisition employed in chemical shift imaging.

  4. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  5. Adaptation of Shift Sequence Based Method for High Number in Shifts Rostering Problem for Health Care Workers

    Directory of Open Access Journals (Sweden)

    Mindaugas Liogys

    2013-08-01

    Full Text Available Purpose—is to investigate the efficiency of a shift sequence-based approach when the problem consists of a high number of shifts. Research objectives: • Solve the health care workers rostering problem using a shift sequence-based method. • Measure its efficiency when the number of shifts increases. Design/methodology/approach—Usually rostering problems are highly constrained. Constraints are classified into soft and hard constraints. Soft and hard constraints of the problem are additionally classified into sequence constraints, schedule constraints and roster constraints. Sequence constraints are considered when constructing shift sequences. Schedule constraints are considered when constructing a schedule. Roster constraints are applied when constructing the overall solution, i.e. combining all schedules. The shift sequence-based approach consists of two stages: • shift sequence construction, • the construction of schedules. In the shift sequence construction stage, the shift sequences are constructed for each set of health care workers of a different skill, considering sequence constraints. Shift sequences are ranked by their penalties for easier retrieval in the later stage. In the schedule construction stage, schedules for each health care worker are constructed iteratively, using the shift sequences produced in stage 1. The shift sequence-based method is an adaptive iterative method where health care workers who received the highest schedule penalties in the last iteration are scheduled first at the current iteration. During the roster construction, and after a schedule has been generated for the current health care worker, an improvement method based on an efficient greedy local search is carried out on the partial roster. It simply swaps any pair of shifts between two health care workers in the (partial) roster, as long as the swaps satisfy hard constraints and decrease the roster penalty. Findings—Using the shift sequence method for solving the health care workers rostering problem
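
    The greedy swap improvement step described above can be sketched schematically as follows; the roster representation and the penalty and hard-constraint functions are hypothetical placeholders, not the authors' implementation.

        # Schematic sketch of the swap-based local search: try swapping the shifts of two
        # workers on the same day, keep the swap only if hard constraints still hold and
        # the roster penalty decreases. All names are illustrative placeholders.
        import itertools

        def improve_roster(roster, penalty, hard_ok):
            """roster: dict worker -> list of shift codes, one per day.
            penalty(roster) -> float; hard_ok(roster) -> bool. Both are placeholders."""
            best = penalty(roster)
            improved = True
            while improved:
                improved = False
                for w1, w2 in itertools.combinations(roster, 2):
                    for day in range(len(roster[w1])):
                        # tentatively swap the two workers' shifts on this day
                        roster[w1][day], roster[w2][day] = roster[w2][day], roster[w1][day]
                        new = penalty(roster)
                        if hard_ok(roster) and new < best:
                            best = new
                            improved = True      # keep the swap
                        else:
                            # undo the swap
                            roster[w1][day], roster[w2][day] = roster[w2][day], roster[w1][day]
            return roster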

  6. Adaptation of Shift Sequence Based Method for High Number in Shifts Rostering Problem for Health Care Workers

    Directory of Open Access Journals (Sweden)

    Mindaugas Liogys

    2011-08-01

    Full Text Available Purpose—is to investigate the efficiency of a shift sequence-based approach when the problem consists of a high number of shifts. Research objectives: • Solve the health care workers rostering problem using a shift sequence-based method. • Measure its efficiency when the number of shifts increases. Design/methodology/approach—Usually rostering problems are highly constrained. Constraints are classified into soft and hard constraints. Soft and hard constraints of the problem are additionally classified into sequence constraints, schedule constraints and roster constraints. Sequence constraints are considered when constructing shift sequences. Schedule constraints are considered when constructing a schedule. Roster constraints are applied when constructing the overall solution, i.e. combining all schedules. The shift sequence-based approach consists of two stages: • shift sequence construction, • the construction of schedules. In the shift sequence construction stage, the shift sequences are constructed for each set of health care workers of a different skill, considering sequence constraints. Shift sequences are ranked by their penalties for easier retrieval in the later stage. In the schedule construction stage, schedules for each health care worker are constructed iteratively, using the shift sequences produced in stage 1. The shift sequence-based method is an adaptive iterative method where health care workers who received the highest schedule penalties in the last iteration are scheduled first at the current iteration. During the roster construction, and after a schedule has been generated for the current health care worker, an improvement method based on an efficient greedy local search is carried out on the partial roster. It simply swaps any pair of shifts between two health care workers in the (partial) roster, as long as the swaps satisfy hard constraints and decrease the roster penalty. Findings—Using the shift sequence method for solving the health care workers rostering

  7. Parallelization of quantum molecular dynamics simulation code

    International Nuclear Information System (INIS)

    Kato, Kaori; Kunugi, Tomoaki; Shibahara, Masahiko; Kotake, Susumu

    1998-02-01

    A quantum molecular dynamics simulation code has been developed at the Kansai Research Establishment for the analysis of the thermalization of photon energies in molecules or materials. The simulation code is parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up has been obtained on both parallel computers by distributing work to processor units by division into particle groups. By distributing work to processor units not only by particle group but also by the fine-grained calculations performed for individual particles, high parallelization performance is achieved on the Intel Paragon XP/S75. (author)

  8. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding time and the effectiveness of parallelization.

  9. Differences Between Distributed and Parallel Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  10. Combined chemical shift changes and amino acid specific chemical shift mapping of protein-protein interactions

    Energy Technology Data Exchange (ETDEWEB)

    Schumann, Frank H.; Riepl, Hubert [University of Regensburg, Institute of Biophysics and Physical Biochemistry (Germany); Maurer, Till [Boehringer Ingelheim Pharma GmbH and Co. KG, Analytical Sciences Department (Germany); Gronwald, Wolfram [University of Regensburg, Institute of Biophysics and Physical Biochemistry (Germany); Neidig, Klaus-Peter [Bruker BioSpin GmbH, Software Department (Germany); Kalbitzer, Hans Robert [University of Regensburg, Institute of Biophysics and Physical Biochemistry (Germany)], E-mail: hans-robert.kalbitzer@biologie.uni-regensburg.de

    2007-12-15

    Protein-protein interactions are often studied by chemical shift mapping using solution NMR spectroscopy. When heteronuclear data are available, the interaction interface is usually predicted by combining the chemical shift changes of different nuclei into a single quantity, the combined chemical shift perturbation Δδ_comb. In this paper, different procedures (published and unpublished) to calculate Δδ_comb are examined that include a variety of different functional forms and weighting factors for each nucleus. The predictive power of all shift mapping methods depends on the magnitude of the overlap of the chemical shift distributions of interacting and non-interacting residues and the cut-off criterion used. In general, the quality of the prediction on the basis of chemical shift changes alone is rather unsatisfactory, but the combination of chemical shift changes on the basis of the Hamming or the Euclidean distance can improve the result. The corrected standard deviation to zero of the combined chemical shift changes can provide a reasonable cut-off criterion. As we show, combined chemical shifts can also be applied for a more reliable quantitative evaluation of titration data.
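
    One commonly used Euclidean form of the combined shift perturbation, with a conventional weighting of the nitrogen shift, is sketched below; it is shown only as an example of the kind of quantity the paper compares, and the 0.14 weighting is an assumption of this illustration rather than the paper's definition.

        # Sketch of a commonly used combined chemical shift perturbation (Euclidean form)
        # for 1H/15N data. The 0.14 nitrogen weighting is a conventional choice used here
        # purely for illustration; the paper itself compares several forms and weightings.
        import math

        def delta_comb(d_h, d_n, w_n=0.14):
            """Combined shift change from the 1H and 15N changes (both in ppm)."""
            return math.sqrt(d_h ** 2 + (w_n * d_n) ** 2)

        # Example: residue with a 0.05 ppm proton and a 0.6 ppm nitrogen change
        print(round(delta_comb(0.05, 0.6), 3))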

  11. Parallel processing from applications to systems

    CERN Document Server

    Moldovan, Dan I

    1993-01-01

    This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla

  12. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.

  13. Parallel computing by Monte Carlo codes MVP/GMVP

    International Nuclear Information System (INIS)

    Nagaya, Yasunobu; Nakagawa, Masayuki; Mori, Takamasa

    2001-01-01

    General-purpose Monte Carlo codes MVP/GMVP are well-vectorized and thus enable us to perform high-speed Monte Carlo calculations. In order to achieve further speedups, we parallelized the codes on different types of parallel computing platforms or by using the standard parallelization library MPI. The platforms used for the benchmark calculations are a distributed-memory vector-parallel computer (Fujitsu VPP500), a distributed-memory massively parallel computer (Intel Paragon) and distributed-memory scalar-parallel computers (Hitachi SR2201, IBM SP2). As is generally observed, linear speedup could be obtained for large-scale problems, but the parallelization efficiency decreased as the batch size per processing element (PE) became smaller. It was also found that the statistical uncertainty for assembly powers was less than 0.1% in the PWR full-core calculation with more than 10 million histories, which took about 1.5 hours using massively parallel computing. (author)

  14. Comparison of sleep disturbances in shift workers and people working with a fixed shift

    Directory of Open Access Journals (Sweden)

    Zohreh Yazdi

    2013-11-01

    Full Text Available Background: Different types of sleep disturbances can have a serious negative effect on a person's ability, function and overall well-being. One of the most important issues that can result in sleep disturbances is occupational causes, the most important among them being shift work. The objective of this study was to compare the prevalence of sleep disturbances between shift workers and non-shift workers. Material and Methods: This study was designed as a case-control study in 196 shift workers and 204 non-shift workers in a textile factory. The data were collected by using a comprehensive questionnaire including the Pittsburgh Sleep Quality Index questionnaire, Berlin Questionnaire, Epworth Sleepiness Scale, Insomnia Severity Index and Restless Legs Syndrome Questionnaire. Data analyses were carried out using the SPSS software version 13 by Student's t-test, Chi-square test and multiple logistic regression. Results: The duration of night sleep in shift workers was less than in day workers (p<0.001). The prevalence of poor sleep quality and insomnia was significantly higher in shift workers than in non-shift workers (p<0.001, OR=2.3, 95% CI: 1.7-2.9). The most prevalent type of insomnia was problems in initiating sleep (p=0.022, OR=2.2, 95% CI: 1.5-3.2). There was no difference in the prevalence of excessive daytime sleepiness, restless legs syndrome, snoring, obstructive sleep apnea or different types of parasomnias between the two groups. Conclusion: The reduced length of sleep and higher prevalence of poor sleep quality and insomnia in shift workers emphasize the importance of serious attention to sleep disorders in shift workers.

  15. The parallel processing of EGS4 code on distributed memory scalar parallel computer:Intel Paragon XP/S15-256

    Energy Technology Data Exchange (ETDEWEB)

    Takemiya, Hiroshi; Ohta, Hirofumi; Honma, Ichirou

    1996-03-01

    The parallelization of the electromagnetic cascade Monte Carlo simulation code EGS4 on the distributed-memory scalar parallel computer Intel Paragon XP/S15-256 is described. EGS4 has the feature that the calculation time for one incident particle differs greatly from particle to particle because of the dynamic generation of secondary particles and the different behavior of each particle. Granularity for parallel processing, the parallel programming model and the algorithm of parallel random number generation are discussed, and two kinds of method, which allocate particles either dynamically or statically, are used for the purpose of realizing high-speed parallel processing of this code. Among the four problems chosen for performance evaluation, the speedup factors for three problems reach nearly 100 with 128 processors. It has been found that when both the calculation time for each incident particle and its dispersion are large, it is preferable to use the dynamic particle allocation method, which can average the load over the processors. It has also been found that when they are small, it is preferable to use the static particle allocation method, which reduces the communication overhead. Moreover, it is pointed out that to obtain accurate results, it is necessary to use double precision variables in the EGS4 code. Finally, the workflow of program parallelization is analyzed, and tools for program parallelization are discussed through the experience of the EGS4 parallelization. (author).
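
    The dynamic-versus-static allocation trade-off described above can be mimicked, very loosely, with a task pool that hands out work either one item at a time or in large fixed chunks; the sketch below is only an analogy in Python and has nothing to do with the actual EGS4 or Paragon code.

        # Rough analogy (not the EGS4/Paragon implementation): handing histories to workers
        # one at a time (dynamic allocation: good load balance, more scheduling overhead)
        # versus in large fixed chunks (static allocation: less overhead, possible imbalance).
        from multiprocessing import Pool
        import random, time

        def simulate_history(seed):
            # stand-in for tracking one incident particle; run time varies per history
            random.seed(seed)
            t = random.uniform(0.0, 0.01)
            time.sleep(t)
            return t

        if __name__ == "__main__":
            seeds = range(200)
            with Pool(4) as pool:
                dynamic = pool.map(simulate_history, seeds, chunksize=1)    # dynamic allocation
                static = pool.map(simulate_history, seeds, chunksize=50)    # static allocation
            print(sum(dynamic), sum(static))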

  16. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism-flattening......The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism......-processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work...

  17. Massively Parallel Computing: A Sandia Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant break-throughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  18. Parallel Algorithms for the Exascale Era

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-10-19

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
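
    On the reproducibility of global sums: the sketch below only demonstrates why the problem exists (floating-point addition is not associative, so different summation orders, as produced by different parallel decompositions, give slightly different answers); it does not reproduce the algorithms developed in the cited work.

        # Tiny illustration of why global sums need care in parallel computing: summing the
        # same values in a different order can change the floating-point result. This only
        # demonstrates the problem the abstract refers to, not the LANL solutions.
        import random

        random.seed(1)
        values = [random.uniform(-1.0, 1.0) * 10 ** random.randint(-8, 8) for _ in range(100000)]

        forward = sum(values)
        shuffled = list(values)
        random.shuffle(shuffled)
        reordered = sum(shuffled)

        print(forward, reordered, forward == reordered)   # typically differs in the last digits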

  19. Human umbilical vein: involvement of cyclooxygenase-2 pathway in bradykinin B1 receptor-sensitized responses.

    Science.gov (United States)

    Errasti, A E; Rey-Ares, V; Daray, F M; Rogines-Velo, M P; Sardi, S P; Paz, C; Podestá, E J; Rothlin, R P

    2001-08-01

    In isolated human umbilical vein (HUV), the contractile response to des-Arg9-bradykinin (des-Arg9-BK), selective BK B1 receptor agonist, increases as a function of the incubation time. Here, we evaluated whether cyclooxygenase (COX) pathway is involved in BK B1-sensitized response obtained in 5-h incubated HUV rings. The effect of different concentrations of indomethacin, sodium salicylate, ibuprofen, meloxicam, lysine clonixinate or NS-398 administrated 30 min before concentration-response curves (CRC) was studied. All treatments produced a significant rightward shift of the CRC to des-Arg9-BK in a concentration-dependent manner, which provides pharmacological evidence that COX pathway is involved in the BK B1 responses. Moreover, in this tissue, the NS-398 pKb (5.2) observed suggests that COX-2 pathway is the most relevant. The strong correlation between published pIC50 for COX-2 and the NSAIDs' pKbs estimated further supports the hypothesis that COX-2 metabolites are involved in BK B1 receptor-mediated responses. In other rings, indomethacin (30, 100 micromol/l) or NS-398 (10, 30 micromol/l) produced a significant rightward shift of the CRC to BK, selective BK B2 agonist, and its pKbs were similar to the values to inhibit BK B1 receptor responses, suggesting that COX-2 pathway also is involved in BK B2 receptor responses. Western blot analysis shows that COX-1 and COX-2 isoenzymes are present before and after 5-h in vitro incubation and apparently COX-2 does not suffer additional induction.
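
    For readers outside pharmacology, an apparent pKb of the kind quoted above can be estimated from the rightward shift (concentration ratio) produced by a single antagonist concentration using the standard Gaddum/Schild relationship; the sketch below shows only that textbook calculation with invented numbers, not the analysis performed in the study.

        # Standard Gaddum/Schild-type estimate relating the rightward shift of a
        # concentration-response curve to an apparent antagonist affinity (pKb).
        # The numbers below are invented for illustration.
        import math

        def apparent_pkb(antagonist_conc_molar, concentration_ratio):
            """antagonist_conc_molar: inhibitor concentration [B] in mol/l.
            concentration_ratio: EC50 with inhibitor / EC50 without (the rightward shift)."""
            kb = antagonist_conc_molar / (concentration_ratio - 1.0)
            return -math.log10(kb)

        # Example: a 10 micromol/l inhibitor shifting the curve 3-fold to the right
        print(round(apparent_pkb(10e-6, 3.0), 2))   # ~5.3 for this illustrative example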

  20. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

    Full Text Available The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding time and the effectiveness of parallelization.

  1. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed memory highly parallel computer. Parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique to map a history to each processor dynamically and to map the control process to a certain processor was applied. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)

  2. Performance Analysis of Parallel Mathematical Subroutine library PARCEL

    International Nuclear Information System (INIS)

    Yamada, Susumu; Shimizu, Futoshi; Kobayashi, Kenichi; Kaburaki, Hideo; Kishida, Norio

    2000-01-01

    The parallel mathematical subroutine library PARCEL (Parallel Computing Elements) has been developed by Japan Atomic Energy Research Institute for easy use of typical parallelized mathematical codes in any application problems on distributed parallel computers. The PARCEL includes routines for linear equations, eigenvalue problems, pseudo-random number generation, and fast Fourier transforms. It is shown that the results of performance for linear equations routines exhibit good parallelization efficiency on vector, as well as scalar, parallel computers. A comparison of the efficiency results with the PETSc (Portable Extensible Tool kit for Scientific Computations) library has been reported. (author)

  3. Applications of the parallel computing system using network

    International Nuclear Information System (INIS)

    Ido, Shunji; Hasebe, Hiroki

    1994-01-01

    Parallel programming is applied to multiple processors connected via Ethernet. Data exchanges between tasks located in each processing element are realized in two ways. One is the socket interface, which is a standard library on recent UNIX operating systems. The other is network-connecting software named Parallel Virtual Machine (PVM), free software developed by ORNL that allows many workstations connected to a network to be used as a parallel computer. This paper discusses the availability of parallel computing using networked UNIX workstations, with a comparison against specialized parallel systems (Transputer and iPSC/860), in a Monte Carlo simulation, which generally shows a high parallelization ratio. (author)

  4. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  5. Bayer image parallel decoding based on GPU

    Science.gov (United States)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In the photoelectrical tracking system, Bayer images are decoded by a traditional CPU-based method. However, it is too slow when the images become large, for example, 2K×2K×16bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU), which supports the CUDA architecture. The decoding procedure can be divided into three parts: the first is a serial part, the second is a task-parallelism part, and the last is a data-parallelism part including inverse quantization, the inverse discrete wavelet transform (IDWT) and the image post-processing part. To reduce the execution time, the task-parallelism part is optimized with OpenMP techniques. The data-parallelism part advances its efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the 2D (two-dimensional) serial IDWT into a 1D parallel IDWT. In experiments with a 1K×1K×16bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the CPU serial method.
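
    The key idea behind "rewriting the 2D serial IDWT into 1D parallel IDWT" is separability: the two-dimensional synthesis can be done as many independent one-dimensional inverse transforms along columns and then along rows, and those independent transforms are what can be mapped to GPU threads. The sketch below illustrates this with single-level Haar filters in NumPy; the paper's actual wavelet, subband layout and CUDA kernels are not reproduced.

        # Separability sketch: a 2D inverse wavelet step as independent 1D inverse
        # transforms along columns, then along rows (single-level Haar, illustration only).
        import numpy as np

        def ihaar_1d(approx, detail):
            # inverse of one Haar analysis step along the last axis
            even = (approx + detail) / np.sqrt(2.0)
            odd = (approx - detail) / np.sqrt(2.0)
            out = np.empty(approx.shape[:-1] + (2 * approx.shape[-1],))
            out[..., 0::2] = even
            out[..., 1::2] = odd
            return out

        def ihaar_2d(ll, lh, hl, hh):
            # columns first: every column is an independent 1D inverse transform
            low = ihaar_1d(ll.T, lh.T).T
            high = ihaar_1d(hl.T, hh.T).T
            # then rows: every row is an independent 1D inverse transform
            return ihaar_1d(low, high)

        ll, lh, hl, hh = (np.random.rand(8, 8) for _ in range(4))
        print(ihaar_2d(ll, lh, hl, hh).shape)   # (16, 16)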

  6. Refinement of Parallel and Reactive Programs

    OpenAIRE

    Back, R. J. R.

    1992-01-01

    We show how to apply the refinement calculus to stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement c...

  7. Comments on X. Yin, A. Wen, Y. Chen, and T. Wang, `Studies in an optical millimeter-wave generation scheme via two parallel dual-parallel Mach-Zehnder modulators', Journal of Modern Optics, 58(8), 2011, pp. 665-673

    Science.gov (United States)

    Hasan, Mehedi; Maldonado-Basilio, Ramón; Hall, Trevor J.

    2015-04-01

    Yin et al. have described an innovative filter-less optical millimeter-wave generation scheme for octotupling of a 10 GHz RF oscillator, or sedecimtupling of a 5 GHz RF oscillator using two parallel dual-parallel Mach-Zehnder modulators (DP-MZMs). The great merit of their design is the suppression of all harmonics except those of order ? (octotupling) or all harmonics except those of order ? (sedecimtupling), where ? is an integer. A demerit of their scheme is the requirement to set a precise RF signal modulation index in order to suppress the zeroth order optical carrier. The purpose of this comment is to show that, in the case of the octotupling function, all harmonics may be suppressed except those of order ?, where ? is an odd integer, by the simple addition of an optical ? phase shift between the two DP-MZMs and an adjustment of the RF drive phases. Since the carrier is suppressed in the modified architecture, the octotupling circuit is thereby released of the strict requirement to set the drive level to a precise value without any significant increase in circuit complexity.

  8. Chemical shift homology in proteins

    International Nuclear Information System (INIS)

    Potts, Barbara C.M.; Chazin, Walter J.

    1998-01-01

    The degree of chemical shift similarity for homologous proteins has been determined from a chemical shift database of over 50 proteins representing a variety of families and folds, and spanning a wide range of sequence homologies. After sequence alignment, the similarity of the secondary chemical shifts of Cα protons was examined as a function of amino acid sequence identity for 37 pairs of structurally homologous proteins. A correlation between sequence identity and secondary chemical shift rmsd was observed. Important insights are provided by examining the sequence identity of homologous proteins versus the percentage of secondary chemical shifts that fall within 0.1 and 0.3 ppm thresholds. These results begin to establish practical guidelines for the extent of chemical shift similarity to expect among structurally homologous proteins.

  9. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network based MIMD processors. The parallelism is implemented using standard UNIX (tm) tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a ''nearly realistic'' lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs

  10. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  11. A Tutorial on Parallel and Concurrent Programming in Haskell

    Science.gov (United States)

    Peyton Jones, Simon; Singh, Satnam

    This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs which allows programmers to use rich data types in data parallel programs which are automatically transformed into flat data parallel versions for efficient execution on multi-core processors.

  12. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    Directory of Open Access Journals (Sweden)

    Svetlana Postnova

    Full Text Available Shift work has become an integral part of our life, with almost 20% of the population in developed countries being involved in different shift schedules. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep that leads to increased sleepiness, often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by a proper scheduling of light exposure and optimizing shift timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.

  13. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    Science.gov (United States)

    Postnova, Svetlana; Robinson, Peter A; Postnov, Dmitry D

    2013-01-01

    Shift work has become an integral part of our life with almost 20% of the population being involved in different shift schedules in developed countries. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep that leads to increase of sleepiness often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by a proper scheduling of light exposure and optimizing shifts timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g.; at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.

  14. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  15. Current distribution characteristics of superconducting parallel circuits

    International Nuclear Information System (INIS)

    Mori, K.; Suzuki, Y.; Hara, N.; Kitamura, M.; Tominaka, T.

    1994-01-01

    In order to increase the current carrying capacity of the current path of the superconducting magnet system, the portion of parallel circuits such as insulated multi-strand cables or parallel persistent current switches (PCS) are made. In superconducting parallel circuits of an insulated multi-strand cable or a parallel persistent current switch (PCS), the current distribution during the current sweep, the persistent mode, and the quench process were investigated. In order to measure the current distribution, two methods were used. (1) Each strand was surrounded with a pure iron core with the air gap. In the air gap, a Hall probe was located. The accuracy of this method was deteriorated by the magnetic hysteresis of iron. (2) The Rogowski coil without iron was used for the current measurement of each path in a 4-parallel PCS. As a result, it was shown that the current distribution characteristics of a parallel PCS is very similar to that of an insulated multi-strand cable for the quench process

  16. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  17. 6th International Parallel Tools Workshop

    CERN Document Server

    Brinkmann, Steffen; Gracia, José; Resch, Michael; Nagel, Wolfgang

    2013-01-01

    The latest advances in the High Performance Computing hardware have significantly raised the level of available compute performance. At the same time, the growing hardware capabilities of modern supercomputing architectures have caused an increasing complexity of the parallel application development. Despite numerous efforts to improve and simplify parallel programming, there is still a lot of manual debugging and  tuning work required. This process  is supported by special software tools, facilitating debugging, performance analysis, and optimization and thus  making a major contribution to the development of  robust and efficient parallel software. This book introduces a selection of the tools, which were presented and discussed at the 6th International Parallel Tools Workshop, held in Stuttgart, Germany, 25-26 September 2012.

  18. Shift Work, Chronotype, and Melatonin Patterns among Female Hospital Employees on Day and Night Shifts.

    Science.gov (United States)

    Leung, Michael; Tranmer, Joan; Hung, Eleanor; Korsiak, Jill; Day, Andrew G; Aronson, Kristan J

    2016-05-01

    Shift work-related carcinogenesis is hypothesized to be mediated by melatonin; however, few studies have considered the potential effect modification of this underlying pathway by chronotype or specific aspects of shift work such as the number of consecutive nights in a rotation. In this study, we examined melatonin patterns in relation to shift status, stratified by chronotype and number of consecutive night shifts, and cumulative lifetime exposure to shift work. Melatonin patterns of 261 female personnel (147 fixed-day and 114 on rotations, including nights) at Kingston General Hospital were analyzed using cosinor analysis. Urine samples were collected from all voids over a 48-hour specimen collection period for measurement of 6-sulfatoxymelatonin concentrations using the Buhlmann ELISA Kit. Chronotypes were assessed using mid-sleep time (MSF) derived from the Munich Chronotype Questionnaire (MCTQ). Sociodemographic, health, and occupational information were collected by questionnaire. Rotational shift nurses working nights had a lower mesor and an earlier time of peak melatonin production compared to day-only workers. More pronounced differences in mesor and acrophase were seen among later chronotypes, and shift workers working ≥3 consecutive nights. Among nurses, cumulative shift work was associated with a reduction in mesor. These results suggest that evening-types and/or shift workers working ≥3 consecutive nights are more susceptible to adverse light-at-night effects, whereas long-term shift work may also chronically reduce melatonin levels. Cumulative and current exposure to shift work, including nights, affects level and timing of melatonin production, which may be related to carcinogenesis and cancer risk. Cancer Epidemiol Biomarkers Prev; 25(5); 830-8. ©2016 AACR. ©2016 American Association for Cancer Research.
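
    Cosinor analysis, used above to derive the mesor and acrophase, fits a cosine curve with a fixed 24-hour period to the measurements. As a small, hypothetical illustration (not the study's analysis code), the Python sketch below fits such a curve by linear least squares using the identity M + A*cos(wt + phi) = M + b1*cos(wt) + b2*sin(wt).

        import numpy as np

        def cosinor_fit(t_hours, y, period=24.0):
            """Least-squares cosinor fit; returns mesor, amplitude, acrophase (hours)."""
            w = 2.0 * np.pi / period
            X = np.column_stack([np.ones_like(t_hours),
                                 np.cos(w * t_hours),
                                 np.sin(w * t_hours)])
            mesor, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
            amplitude = np.hypot(b1, b2)
            # acrophase: time of peak, where cos(w*t + phi) = 1, i.e. t = -phi / w
            phi = np.arctan2(-b2, b1)
            acrophase = (-phi / w) % period
            return mesor, amplitude, acrophase

        # toy data: a rhythm peaking around 03:00 with mesor 20 and amplitude 10
        t = np.arange(0, 48, 2.0)
        rng = np.random.default_rng(0)
        y = 20 + 10 * np.cos(2 * np.pi / 24 * (t - 3)) + rng.normal(0, 1, t.size)
        print(cosinor_fit(t, y))   # expect roughly (20, 10, 3)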

  19. Angular parallelization of a curvilinear Sn transport theory method

    International Nuclear Information System (INIS)

    Haghighat, A.

    1991-01-01

    In this paper a parallel algorithm for angular domain decomposition (or parallelization) of an r-dependent spherical Sn transport theory method is derived. The parallel formulation is incorporated into TWOTRAN-II using the IBM Parallel Fortran compiler and implemented on an IBM 3090/400 (with four processors). The behavior of the parallel algorithm for different physical problems is studied, and it is concluded that the parallel algorithm behaves differently in the presence of a fission source as opposed to the absence of a fission source; this is attributed to the relative contributions of the source and the angular redistribution terms in the Sn algorithm. Further, the parallel performance of the algorithm is measured for various problem sizes and different combinations of angular subdomains or processors. Poor parallel efficiencies between ∼35% and 50% are achieved in situations where the relative difference of parallel to serial iterations is ∼50%. High parallel efficiencies between ∼60% and 90% are obtained in situations where the relative difference of parallel to serial iterations is <35%.
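
    The parallel efficiencies quoted above are the usual ratio of speedup to processor count, with speedup defined as serial time over parallel time. A trivial Python helper makes the relationship explicit; the timings in the example are invented and are not taken from the paper.

        def speedup(t_serial, t_parallel):
            # ratio of serial to parallel wall-clock time
            return t_serial / t_parallel

        def efficiency(t_serial, t_parallel, n_procs):
            # speedup normalized by the number of processors
            return speedup(t_serial, t_parallel) / n_procs

        # example: a 400 s serial sweep that takes 130 s on 4 processors
        print(f"speedup    = {speedup(400, 130):.2f}")        # ~3.08
        print(f"efficiency = {efficiency(400, 130, 4):.0%}")  # ~77%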

  20. Combining Compile-Time and Run-Time Parallelization

    Directory of Open Access Journals (Sweden)

    Sungdo Moon

    1999-01-01

    This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high‐quality compile‐time analysis with low‐cost run‐time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler’s automatic parallelization system. We present results of measurements on programs from two benchmark suites – SPECFP95 and NAS sample benchmarks – which identify inherently parallel loops in these programs that are missed by the compiler. We characterize remaining parallelization opportunities, and find that most of the loops require run‐time testing, analysis of control flow, or some combination of the two. We present a new compile‐time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed to not only improve the results of compile‐time parallelization, but also to produce low‐cost, directed run‐time tests that allow the system to defer binding of parallelization until run‐time when safety cannot be proven statically. We call this approach predicated array data‐flow analysis. We augment array data‐flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data‐flow values. Predicated array data‐flow analysis allows the compiler to derive “optimistic” data‐flow values guarded by predicates; these predicates can be used to derive a run‐time test guaranteeing the safety of parallelization.
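
    Predicated array data-flow analysis itself is a compiler technique and is not reproduced here. As a loose, hypothetical illustration of the general idea of deferring the parallelization decision to a cheap run-time test, the Python sketch below checks at run time that no two loop iterations write the same array element before dispatching the iterations to worker processes; otherwise it falls back to the original serial order. The loop body, index arrays, and test are invented examples.

        from concurrent.futures import ProcessPoolExecutor

        def independent(n, idx):
            """Run-time test: the loop is safely parallel if no two iterations write the same element."""
            return len(set(idx[:n])) == n

        def body(args):
            # one loop iteration: a[idx[i]] = src[i] ** 2
            i, idx, src = args
            return idx[i], src[i] ** 2

        def run_loop(idx, src):
            n = len(src)
            a = [0] * (max(idx) + 1)
            if independent(n, idx):             # cheap test succeeded: execute in parallel
                with ProcessPoolExecutor() as ex:
                    for j, val in ex.map(body, [(i, idx, src) for i in range(n)]):
                        a[j] = val
            else:                               # possible output dependence: keep serial order
                for i in range(n):
                    a[idx[i]] = src[i] ** 2
            return a

        if __name__ == "__main__":
            print(run_loop(idx=[3, 1, 0, 2], src=[1, 2, 3, 4]))   # disjoint writes -> parallel
            print(run_loop(idx=[0, 1, 1, 2], src=[1, 2, 3, 4]))   # repeated index -> serial fallback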

  1. Parallelization characteristics of the DeCART code

    International Nuclear Information System (INIS)

    Cho, J. Y.; Joo, H. G.; Kim, H. Y.; Lee, C. C.; Chang, M. H.; Zee, S. Q.

    2003-12-01

    This report describes the parallelization characteristics of the DeCART code and examines its parallel performance. Parallel computing algorithms are implemented in DeCART to reduce the tremendous computational burden and memory requirement involved in the three-dimensional whole-core transport calculation. In the parallelization of the DeCART code, the axial domain decomposition is first realized by using MPI (Message Passing Interface), and then the azimuthal angle domain decomposition by using either MPI or OpenMP. When using MPI for both the axial and the angle domain decomposition, the concept of MPI grouping is employed for convenient communication within each communication world. For the parallel computation, most of the computing modules except for the thermal hydraulic module are parallelized. These parallelized computing modules include the MOC ray tracing, CMFD, NEM, region-wise cross section preparation and cell homogenization modules. For the distributed allocation, most of the MOC and CMFD/NEM variables are allocated only for the assigned planes, which reduces the required memory by the ratio of the number of assigned planes to the number of all planes. The parallel performance of the DeCART code is evaluated by solving two problems, a rodded variation of the C5G7 MOX three-dimensional benchmark problem and a simplified three-dimensional SMART PWR core problem. In terms of parallel performance, the DeCART code shows a good speedup of about 40.1 and 22.4 in the ray tracing module and about 37.3 and 20.2 in the total computing time when using 48 CPUs on the IBM Regatta and 24 CPUs on the LINUX cluster, respectively. In the comparison between MPI and OpenMP, OpenMP shows somewhat better performance than MPI. Therefore, it is concluded that the first priority in the parallel computation of the DeCART code is in the axial domain decomposition by using MPI, and then in the angular domain using OpenMP, and finally the angular
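
    The report combines axial-plane decomposition with angular decomposition and uses MPI grouping so that communication stays inside the relevant group. The mpi4py sketch below is a hypothetical illustration of that grouping pattern, not DeCART source code; the group sizes and the dummy flux values are assumptions.

        # Hypothetical two-level domain decomposition with mpi4py (not DeCART code).
        # Run with e.g.:  mpiexec -n 8 python decomp.py
        from mpi4py import MPI

        world = MPI.COMM_WORLD
        rank, size = world.Get_rank(), world.Get_size()

        N_ANGLE_GROUPS = 2                        # angular subdomains per axial plane (assumed)
        plane = rank // N_ANGLE_GROUPS            # which axial plane block this rank owns
        angle = rank % N_ANGLE_GROUPS             # which angular subdomain this rank owns

        # Communicator of all ranks working on the same axial plane (for angular sums)
        plane_comm = world.Split(color=plane, key=angle)
        # Communicator of all ranks holding the same angular subdomain (for axial sweeps)
        angle_comm = world.Split(color=angle, key=plane)

        # Each rank computes a partial scalar flux from its own angles ...
        partial_flux = float(rank + 1)
        # ... and the angular reduction only involves ranks of the same plane group.
        plane_flux = plane_comm.allreduce(partial_flux, op=MPI.SUM)

        print(f"rank {rank}: plane {plane}, angle {angle}, plane-summed flux {plane_flux}")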

  2. PSHED: a simplified approach to developing parallel programs

    International Nuclear Information System (INIS)

    Mahajan, S.M.; Ramesh, K.; Rajesh, K.; Somani, A.; Goel, M.

    1992-01-01

    This paper presents a simplified approach in the form of a tree-structured computational model for parallel application programs. An attempt is made to provide a standard user interface to execute programs on BARC Parallel Processing System (BPPS), a scalable distributed memory multiprocessor. The interface package called PSHED provides a basic framework for representing and executing parallel programs on different parallel architectures. The PSHED package incorporates concepts from a broad range of previous research in programming environments and parallel computations. (author). 6 refs

  3. Parallel evolutionary computation in bioinformatics applications.

    Science.gov (United States)

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational efforts. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of their efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of the parallelism-related modules allows the user to easily configure their environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  4. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    The problem of parallel import is currently an urgent question. Legalization of parallel import in Russia is expedient; this conclusion is based on an analysis of opposing expert opinions. At the same time, the negative consequences of such a decision must be considered and remedies applied to minimize them.

  5. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared against applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  6. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared against applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  7. Effect of Shift Work on Nocturia.

    Science.gov (United States)

    Kim, Jin Wook

    2016-01-01

    To identify the circadian-sensitive component of nocturia by comparing nocturia in patients who voluntarily choose a disrupted circadian rhythm, that is, shift workers, with those who maintain normal day-night cycles. Between 2011 and 2013, a total of 1741 untreated patients, 1376 nonshift workers and 365 shift workers, were compared for nocturia indices based on frequency volume charts (FVCs). General linear models of 8-hour interval urine production and frequency were compared between FVCs of nonshift workers, FVCs of night-shift workers, and FVCs of day-shift workers. Nocturia frequency was increased in the night-shift workers (2.38 ± 1.44) compared with nonshift workers (2.18 ± 1.04). While the nocturnal polyuria index did not differ significantly between groups (0.34 ± 0.13 for nonshift workers, P = .24), the nocturnal bladder capacity index was significantly higher in night-shift workers (1.41 ± 1.06) than in nonshift workers (1.26 ± 0.92); nocturia increased during the night shift but did not vary with shift changes (P = .35). Patients in alternating work shifts showed increased nocturia, especially during their night shift. These changes tended to be more associated with decreased nocturnal bladder capacity than increased nocturnal polyuria. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Associations between number of consecutive night shifts and impairment of neurobehavioral performance during a subsequent simulated night shift.

    Science.gov (United States)

    Magee, Michelle; Sletten, Tracey L; Ferguson, Sally A; Grunstein, Ronald R; Anderson, Clare; Kennaway, David J; Lockley, Steven W; Rajaratnam, Shantha Mw

    2016-05-01

    This study aimed to investigate sleep and circadian phase in the relationships between neurobehavioral performance and the number of consecutive shifts worked. Thirty-four shift workers [20 men, mean age 31.8 (SD 10.9) years] worked 2-7 consecutive night shifts immediately prior to a laboratory-based, simulated night shift. For 7 days prior, participants worked their usual shift sequence, and sleep was assessed with logs and actigraphy. Participants completed a 10-minute auditory psychomotor vigilance task (PVT) at the start (~21:00 hours) and end (~07:00 hours) of the simulated night shift. Mean reaction times (RT), number of lapses and RT distributions were compared between those who worked 2-3 consecutive night shifts versus those who worked 4-7 shifts. Following 4-7 shifts, night shift workers had significantly longer mean RT at the start and end of shift, compared to those who worked 2-3 shifts. The slowest and fastest 10% RT were significantly slower at the start, but not end, of shift among participants who worked 4-7 nights. Those working 4-7 nights also demonstrated a broader RT distribution at the start and end of shift and had significantly slower RT based on cumulative distribution analysis (5th, 25th, 50th, and 75th percentiles at the start of shift; 75th percentile at the end of shift). No group differences in sleep parameters were found for 7 days and 24 hours prior to the simulated night shift. A greater number of consecutive night shifts has a negative impact on neurobehavioral performance, likely due to cognitive slowing.

  9. Change from an 8-hour shift to a 12-hour shift, attitudes, sleep, sleepiness and performance.

    Science.gov (United States)

    Lowden, A; Kecklund, G; Axelsson, J; Akerstedt, T

    1998-01-01

    The present study sought to evaluate the effect of a change from a rotating 3-shift (8-hour) to a 2-shift (12-hour) schedule on sleep, sleepiness, performance, perceived health, and well-being. Thirty-two shift workers at a chemical plant (control room operators) responded to a questionnaire a few months before a change was made in their shift schedule and 10 months after the change. Fourteen workers also filled out a diary, carried activity loggers, and carried out reaction-time tests (beginning and end of shift). Fourteen day workers served as a reference group for the questionnaires and 9 were intensively studied during a week with workdays and a free weekend. The questionnaire data showed that the shift change increased satisfaction with workhours, sleep, and time for social activities. Health, perceived accident risk, and reaction-time performance were not negatively affected. Alertness improved and subjective recovery time after night work decreased. The quick changes in the 8-hour schedule greatly increased sleep problems and fatigue. Sleepiness integrated across the entire shift cycle showed that the shift workers were less alert than the day workers, across workdays and days off (although alertness increased with the 12-hour shift). The change from 8-hour to 12-hour shifts was positive in most respects, possibly due to the shorter sequences of the workdays, the longer sequences of consecutive days off, the fewer types of shifts (easier planning), and the elimination of quick changes. The results may differ in groups with a higher work load.

  10. Parallel pipeline algorithm of real time star map preprocessing

    Science.gov (United States)

    Wang, Hai-yong; Qin, Tian-mu; Liu, Jia-qi; Li, Zhi-feng; Li, Jian-hua

    2016-03-01

    To improve the preprocessing speed of star maps and reduce the resource consumption of the embedded system of a star tracker, a parallel pipeline real-time preprocessing algorithm is presented. Two characteristics of a star map, the mean and the standard deviation of the background gray level, are first obtained dynamically, with the contribution of the star images themselves removed from the background estimate in advance. A criterion is established for whether subsequent noise filtering is needed, and the extraction threshold is then assigned according to the level of background noise, so that centroiding accuracy is guaranteed. In the processing algorithm, as few as two lines of pixel data are buffered, and only 100 shift registers are used to record connected-domain labels, which avoids wasted resources and connected-domain overflow. The simulation results show that the necessary data of the selected bright stars can be accessed with a delay as short as 10 μs after the pipeline processing of a 496×496 star map at 50 Mb/s is finished, and the required memory and register resources total less than 80 kb. To verify the accuracy of the proposed algorithm, different levels of background noise are added to the processed ideal star map; the statistical centroiding error is smaller than 1/23 pixel when the signal-to-noise ratio is greater than 1. The parallel pipeline algorithm for real-time star map preprocessing helps to increase the data output speed and the anti-dynamic performance of a star tracker.
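
    The pipeline itself is hardware-oriented (line buffers and shift registers) and is not reproduced here. As a high-level, hypothetical software illustration of the same processing steps, the Python sketch below estimates the background statistics with bright pixels excluded, sets the extraction threshold from the noise level, and then labels and centroids the connected domains with scipy.ndimage. The image size, synthetic star, and threshold factor are made-up example values.

        import numpy as np
        from scipy import ndimage

        def preprocess_star_map(img, k_sigma=5.0):
            """Background estimate, thresholding, and centroiding of a star image."""
            # crude two-pass background: exclude obviously bright pixels, then take mean/std
            rough = img < np.median(img) + 3 * img.std()
            mean, sigma = img[rough].mean(), img[rough].std()
            threshold = mean + k_sigma * sigma          # extraction threshold from noise level
            mask = img > threshold
            labels, n_stars = ndimage.label(mask)       # connected-domain labeling
            centroids = ndimage.center_of_mass(img - mean, labels, range(1, n_stars + 1))
            return threshold, centroids

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            img = rng.normal(100.0, 5.0, (496, 496))    # noisy background
            img[200:203, 300:303] += 400.0              # one synthetic star
            thr, stars = preprocess_star_map(img)
            print(f"threshold {thr:.1f}, centroids {stars}")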

  11. SHIFT: server for hidden stops analysis in frame-shifted translation.

    Science.gov (United States)

    Gupta, Arun; Singh, Tiratha Raj

    2013-02-23

    Frameshift is one of the three classes of recoding. Frame-shifts lead to waste of energy, resources and activity of the biosynthetic machinery. In addition, some peptides synthesized after frame-shifts are probably cytotoxic, which makes them a plausible cause of numerous diseases and disorders such as muscular dystrophies, lysosomal storage disorders, and cancer. Hidden stop codons occur naturally in coding sequences among all organisms. These codons are associated with the early termination of translation after incorrect reading frame selection and help to reduce the metabolic cost related to frameshift events. Researchers have identified several consequences of hidden stop codons and their association with myriad disorders. However, the wealth of available information is scattered and not easily amenable to data mining. To reduce this gap, this work describes an algorithmic web based tool to study hidden stops in frameshifted translation for all lineages through their respective genetic code systems. This paper describes SHIFT, an algorithmic web application tool that provides a user-friendly interface for identifying and analyzing hidden stops in frameshifted translation of genomic sequences for all available genetic code systems. We have calculated the correlation between codon usage frequencies and the plausible contribution of codons towards hidden stops in an off-frame context. Markovian chains of various order have been used to model hidden stops in frameshifted peptides and their evolutionary association with naturally occurring hidden stops. In order to obtain reliable and persuasive estimates for the naturally occurring and predicted hidden stops, statistical measures have been implemented. This paper presented SHIFT, an algorithmic tool that allows user-friendly exploration, analysis, and visualization of hidden stop codons in frameshifted translations. It is expected that this web based tool would serve as a useful complement for
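
    SHIFT itself is a web server; the Python sketch below is only a minimal, hypothetical illustration of the core counting step, reading a coding sequence in the +1 and +2 shifted frames and counting the stop codons that appear there. The standard genetic code is assumed (SHIFT supports the other code tables); the example sequence is invented.

        STOPS = {"TAA", "TAG", "TGA"}   # standard genetic code stop codons (assumed)

        def hidden_stops(cds, frame):
            """Count stop codons seen when reading cds shifted by `frame` (1 or 2) bases."""
            shifted = cds[frame:]
            codons = [shifted[i:i + 3] for i in range(0, len(shifted) - 2, 3)]
            return sum(codon in STOPS for codon in codons)

        # toy CDS with no in-frame stops (ATG CTA ATG GTG AAA CCC)
        seq = "ATGCTAATGGTGAAACCC"
        print("+1 frame hidden stops:", hidden_stops(seq, 1))   # finds TAA and TGA -> 2
        print("+2 frame hidden stops:", hidden_stops(seq, 2))   # -> 0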

  12. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  13. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10⁴ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
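
    The published implementation targets GPUs and the two-dimensional Ising model, neither of which is reproduced here. As a much smaller, hypothetical stand-in for the communication pattern the abstract describes (independent walkers that periodically merge their statistics to update a shared weight function), the Python sketch below runs a few multicanonical walkers on a toy double-well energy with multiprocessing, merges their energy histograms, and pushes the log-weights toward a flat histogram. The toy energy, bin layout, and simplified update rule are assumptions for illustration.

        import numpy as np
        from multiprocessing import Pool

        BINS = np.linspace(0.0, 4.0, 41)                   # energy bins for the toy model

        def energy(x):
            # double-well toy energy with minima at x = +/- 1
            return (x * x - 1.0) ** 2

        def walker(args):
            """One independent multicanonical walker; returns its energy histogram."""
            log_w, steps, seed = args
            rng = np.random.default_rng(seed)
            x, hist = 0.0, np.zeros(len(BINS) - 1)
            for _ in range(steps):
                x_new = x + rng.normal(0.0, 0.4)
                b_old = min(np.digitize(energy(x), BINS) - 1, len(hist) - 1)
                b_new = min(np.digitize(energy(x_new), BINS) - 1, len(hist) - 1)
                # multicanonical acceptance with log-weights
                if np.log(rng.random()) < log_w[b_new] - log_w[b_old]:
                    x = x_new
                hist[min(np.digitize(energy(x), BINS) - 1, len(hist) - 1)] += 1
            return hist

        if __name__ == "__main__":
            n_walkers, log_w = 4, np.zeros(len(BINS) - 1)
            for it in range(5):                            # weight iterations
                with Pool(n_walkers) as pool:
                    hists = pool.map(walker,
                                     [(log_w, 10_000, 100 * it + k) for k in range(n_walkers)])
                merged = sum(hists)                        # communicate/merge walker statistics
                log_w -= np.log(merged + 1.0)              # simple update toward a flat histogram
                log_w -= log_w.max()
            print("final log-weights:", np.round(log_w, 2))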

  14. Parallel thermal radiation transport in two dimensions

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R.P.; Ball, S.R.

    2003-01-01

    This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state of the art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massive parallel processors) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  15. Parallel thermal radiation transport in two dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R.P.; Ball, S.R. [AWE Aldermaston (United Kingdom)

    2003-07-01

    This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state of the art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop MPP (massive parallel processors) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  16. Parallel processing for artificial intelligence 1

    CERN Document Server

    Kanal, LN; Kumar, V; Suttner, CB

    1994-01-01

    Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering and data mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discus

  17. Comparison of parallel viscosity with neoclassical theory

    International Nuclear Information System (INIS)

    Ida, K.; Nakajima, N.

    1996-04-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for plasmas heated with tangential NBI in the CHS heliotron/torsatron device in order to estimate the parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s). (author)

  18. Shifted-modified Chebyshev filters

    OpenAIRE

    ŞENGÜL, Metin

    2013-01-01

    This paper introduces a new type of filter approximation method that utilizes shifted-modified Chebyshev filters. Construction of the new filters involves the use of shifted-modified Chebyshev polynomials that are formed using the roots of conventional Chebyshev polynomials. The study also includes 2 tables containing the shifted-modified Chebyshev polynomials and the normalized element values for the low-pass prototype filters up to degree 6. The transducer power gain, group dela...

  19. Implementing Shared Memory Parallelism in MCBEND

    Directory of Open Access Journals (Sweden)

    Bird Adam

    2017-01-01

    MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To more effectively utilise parallel hardware, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.

  20. Parallel Task Processing on a Multicore Platform in a PC-based Control System for Parallel Kinematics

    Directory of Open Access Journals (Sweden)

    Harald Michalik

    2009-02-01

    Multicore platforms have one physical processor chip with multiple cores interconnected via a chip-level bus. Because they deliver greater computing power through concurrency and offer greater system density, multicore platforms are well suited to address the performance bottleneck encountered in PC-based control systems for parallel kinematic robots with heavy CPU load. Heavy-load control tasks are generated by new control approaches that include features like singularity prediction, structure control algorithms, vision data integration and similar tasks. In this paper we introduce the parallel task scheduling extension of a communication architecture specially tailored for the development of PC-based control of parallel kinematics. The scheduling is specially designed for processing on a multicore platform. It breaks down the serial task processing of the robot control cycle and extends it with parallel task processing paths in order to enhance the overall control performance.

  1. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  2. OpenShift cookbook

    CERN Document Server

    Gulati, Shekhar

    2014-01-01

    If you are a web application developer who wants to use the OpenShift platform to host your next big idea but are looking for guidance on how to achieve this, then this book is the first step you need to take. This is a very accessible cookbook where no previous knowledge of OpenShift is needed.

  3. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-05-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. 6 figs

  4. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-01-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. (orig.)

  5. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    Science.gov (United States)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC system, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described.

  6. Performance and sleepiness in nurses working 12-h day shifts or night shifts in a community hospital.

    Science.gov (United States)

    Wilson, Marian; Permito, Regan; English, Ashley; Albritton, Sandra; Coogle, Carlana; Van Dongen, Hans P A

    2017-10-05

    Hospitals are around-the-clock operations and nurses are required to care for patients night and day. The nursing shortage and desire for a more balanced work-to-home life have popularized 12-h shifts for nurses. The present study investigated sleep/wake cycles and fatigue levels in 22 nurses working 12-h shifts, comparing day versus night shifts. Nurses (11 day shift and 11 night shift) were recruited from a suburban acute-care medical center. Participants wore a wrist activity monitor and kept a diary to track their sleep/wake cycles for 2 weeks. They also completed a fatigue test battery, which included the Psychomotor Vigilance Test (PVT) and the Karolinska Sleepiness Scale (KSS), at the beginning, middle and end of 4 duty shifts. Daily sleep duration was 7.1 h on average. No overall difference in mean daily sleep duration was found between nurses working day shifts versus night shifts. Objective performance on the PVT remained relatively good and stable at the start, middle, and end of duty shifts in day shift workers, but gradually degraded across duty time in night shift workers. Compared to day shift workers, night shift workers also exhibited more performance variability among measurement days and between participants at each testing time point. The same pattern was observed for subjective sleepiness on the KSS. However, congruence between objective and subjective measures of fatigue was poor. Our findings suggest a need for organizations to evaluate practices and policies to mitigate the inevitable fatigue that occurs during long night shifts, in order to improve patient and healthcare worker safety. Examination of alternative shift lengths or sanctioned workplace napping may be strategies to consider. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Researching the Parallel Process in Supervision and Psychotherapy

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard

    Reflects upon how to do process research in supervision and in the parallel process. A single case study is presented illustrating how a study on parallel process can be carried out.

  8. Shift Verification and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Pandya, Tara M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Evans, Thomas M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Davidson, Gregory G [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Godfrey, Andrew T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation, we are confident that Shift can provide reference results for CASL benchmarking.

  9. Development of parallel/serial program analyzing tool

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Nagao, Saichi; Takigawa, Yoshio; Kumakura, Toshimasa

    1999-03-01

    Japan Atomic Energy Research Institute has been developing 'KMtool', a parallel/serial program analyzing tool, in order to promote the parallelization of science and engineering computation programs. KMtool analyzes the performance of programs written in FORTRAN77 and MPI, and it reduces the effort required for parallelization. This paper describes the development purpose, design, utilization and evaluation of KMtool. (author)

  10. Simulation Exploration through Immersive Parallel Planes: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny; Smith, Steve

    2016-03-01

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest: a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
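
    The immersive parallel-planes environment cannot be reproduced in a few lines; for reference, the Python sketch below draws the ordinary two-dimensional parallel-coordinates graphic that the system generalizes, using pandas and matplotlib, so that each multivariate observation becomes a polyline across the parallel axes. The data frame is a made-up stand-in for simulation output.

        import pandas as pd
        import matplotlib.pyplot as plt
        from pandas.plotting import parallel_coordinates

        # toy multivariate time series from three hypothetical simulation runs
        df = pd.DataFrame({
            "run":    ["a", "a", "b", "b", "c", "c"],
            "time":   [0, 1, 0, 1, 0, 1],
            "demand": [1.0, 1.2, 0.8, 0.9, 1.5, 1.7],
            "price":  [30, 32, 28, 29, 35, 38],
            "output": [5.0, 5.5, 4.2, 4.4, 6.1, 6.4],
        })

        # each row becomes a polyline whose vertices are its coordinate values
        parallel_coordinates(df, class_column="run",
                             cols=["time", "demand", "price", "output"])
        plt.title("Parallel coordinates of simulation observations")
        plt.show()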

  11. Choice Shifts in Groups

    OpenAIRE

    Kfir Eliaz; Debraj Ray

    2004-01-01

    The phenomenon of "choice shifts" in group decision-making is fairly ubiquitous in the social psychology literature. Faced with a choice between a "safe" and "risky" decision, group members appear to move to one extreme or the other, relative to the choices each member might have made on her own. Both risky and cautious shifts have been identified in different situations. This paper demonstrates that from an individual decision-making perspective, choice shifts may be viewed as a systematic...

  12. Parallel programming practical aspects, models and current limitations

    CERN Document Server

    Tarkov, Mikhail S

    2014-01-01

    Parallel programming is designed for the use of parallel computer systems for solving time-consuming problems that cannot be solved on a sequential computer in a reasonable time. These problems can be divided into two classes: (1) processing large data arrays (including processing images and signals in real time); (2) simulation of complex physical processes and chemical reactions. For each of these classes, prospective methods are designed for solving problems. For data processing, one of the most promising technologies is the use of artificial neural networks. The particle-in-cell method and cellular automata are very useful for simulation. Problems of scalability of parallel algorithms and the transfer of existing parallel programs to future parallel computers are very acute now. An important task is to optimize the use of the equipment (including the CPU cache) of parallel computers. Along with parallelizing information processing, it is essential to ensure the processing reliability by the relevant organization ...

  13. Parallelization of Subchannel Analysis Code MATRA

    International Nuclear Information System (INIS)

    Kim, Seongjin; Hwang, Daehyun; Kwon, Hyouk

    2014-01-01

    A stand-alone calculation with the MATRA code takes a reasonable computing time for thermal margin calculations, whereas a considerably longer time is needed to solve whole-core pin-by-pin problems. In addition, improving the computation speed of the MATRA code is strongly required to satisfy the overall performance of the multi-physics coupling calculations. Therefore, a parallel approach to improve and optimize the computability of the MATRA code is proposed and verified in this study. The parallel algorithm is embodied in the MATRA code using the MPI communication method, and the modification of the previous code structure was minimized. An improvement is confirmed by comparing the results between the single- and multiple-processor algorithms. The speedup and efficiency are also evaluated when increasing the number of processors. The parallel algorithm was implemented in the subchannel code MATRA using MPI. The performance of the parallel algorithm was verified by comparing the results with those from MATRA with a single processor. It is also noticed that the performance of the MATRA code was greatly improved by implementing the parallel algorithm for the 1/8-core and whole-core problems.

  14. Broadcasting a message in a parallel computer

    Science.gov (United States)

    Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN

    2011-08-02

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
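
    The patent describes broadcasting along a Hamiltonian path through the nodes of a multidimensional network; that path construction is not reproduced here. As a plain, hypothetical illustration of path-based (pipeline) broadcasting, the mpi4py sketch below simply treats the rank order as the path: each node receives the message from its predecessor and forwards it to its successor until the end of the path is reached.

        # Path (pipeline) broadcast sketch with mpi4py -- not the patented Hamiltonian-path logic.
        # Run with e.g.:  mpiexec -n 4 python path_bcast.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:                       # logical root originates the message
            message = {"payload": "hello from the logical root"}
        else:                               # other nodes receive it from their predecessor on the path
            message = comm.recv(source=rank - 1, tag=7)

        if rank < size - 1:                 # and forward it to their successor, until the path ends
            comm.send(message, dest=rank + 1, tag=7)

        print(f"rank {rank} has {message['payload']!r}")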

  15. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.

  16. Rapid parallel evolution overcomes global honey bee parasite.

    Science.gov (United States)

    Oddie, Melissa; Büchler, Ralph; Dahle, Bjørn; Kovacic, Marin; Le Conte, Yves; Locke, Barbara; de Miranda, Joachim R; Mondet, Fanny; Neumann, Peter

    2018-05-16

    In eusocial insect colonies nestmates cooperate to combat parasites, a trait called social immunity. However, social immunity failed for Western honey bees (Apis mellifera) when the ectoparasitic mite Varroa destructor switched hosts from Eastern honey bees (Apis cerana). This mite has since become the most severe threat to A. mellifera world-wide. Despite this, some isolated A. mellifera populations are known to survive infestations by means of natural selection, largely by suppressing mite reproduction, but the underlying mechanisms of this are poorly understood. Here, we show that a cost-effective social immunity mechanism has evolved rapidly and independently in four naturally V. destructor-surviving A. mellifera populations. Worker bees of all four 'surviving' populations uncapped/recapped worker brood cells more frequently and targeted mite-infested cells more effectively than workers in local susceptible colonies. Direct experiments confirmed the ability of uncapping/recapping to reduce mite reproductive success without sacrificing nestmates. Our results provide striking evidence that honey bees can overcome exotic parasites with simple qualitative and quantitative adaptive shifts in behaviour. Due to rapid, parallel evolution in four host populations this appears to be a key mechanism explaining survival of mite-infested colonies.

  17. Night shift work exposure profile and obesity: Baseline results from a Chinese night shift worker cohort.

    Science.gov (United States)

    Sun, Miaomiao; Feng, Wenting; Wang, Feng; Zhang, Liuzhuo; Wu, Zijun; Li, Zhimin; Zhang, Bo; He, Yonghua; Xie, Shaohua; Li, Mengjie; Fok, Joan P C; Tse, Gary; Wong, Martin C S; Tang, Jin-Ling; Wong, Samuel Y S; Vlaanderen, Jelle; Evans, Greg; Vermeulen, Roel; Tse, Lap Ah

    2018-01-01

    This study aimed to evaluate the associations between types of night shift work and different indices of obesity using the baseline information from a prospective cohort study of night shift workers in China. A total of 3,871 workers from five companies were recruited from the baseline survey. A structured self-administered questionnaire was employed to collect the participants' demographic information, lifetime working history, and lifestyle habits. Participants were grouped into rotating, permanent and irregular night shift work groups. Anthropometric parameters were assessed by healthcare professionals. Multiple logistic regression models were used to evaluate the associations between night shift work and different indices of obesity. Night shift workers had increased risk of overweight and obesity, and odds ratios (ORs) were 1.17 (95% CI, 0.97-1.41) and 1.27 (95% CI, 0.74-2.18), respectively. Abdominal obesity had a significant but marginal association with night shift work (OR = 1.20, 95% CI, 1.01-1.43). A positive gradient between the number of years of night shift work and overweight or abdominal obesity was observed. Permanent night shift work showed the highest odds of being overweight (OR = 3.94, 95% CI, 1.40-11.03) and having increased abdominal obesity (OR = 3.34, 95% CI, 1.19-9.37). Irregular night shift work was also significantly associated with overweight (OR = 1.56, 95% CI, 1.13-2.14), but its association with abdominal obesity was borderline (OR = 1.26, 95% CI, 0.94-1.69). By contrast, the association between rotating night shift work and these parameters was not significant. Permanent and irregular night shift work were more likely to be associated with overweight or abdominal obesity than rotating night shift work. These associations need to be verified in prospective cohort studies.

  18. Night shift work exposure profile and obesity: Baseline results from a Chinese night shift worker cohort

    Science.gov (United States)

    Feng, Wenting; Wang, Feng; Zhang, Liuzhuo; Wu, Zijun; Li, Zhimin; Zhang, Bo; He, Yonghua; Xie, Shaohua; Li, Mengjie; Fok, Joan P. C.; Tse, Gary; Wong, Martin C. S.; Tang, Jin-ling; Wong, Samuel Y. S.; Vlaanderen, Jelle; Evans, Greg; Vermeulen, Roel; Tse, Lap Ah

    2018-01-01

    Aims This study aimed to evaluate the associations between types of night shift work and different indices of obesity using the baseline information from a prospective cohort study of night shift workers in China. Methods A total of 3,871 workers from five companies were recruited from the baseline survey. A structured self-administered questionnaire was employed to collect the participants’ demographic information, lifetime working history, and lifestyle habits. Participants were grouped into rotating, permanent and irregular night shift work groups. Anthropometric parameters were assessed by healthcare professionals. Multiple logistic regression models were used to evaluate the associations between night shift work and different indices of obesity. Results Night shift workers had increased risk of overweight and obesity, and odds ratios (ORs) were 1.17 (95% CI, 0.97–1.41) and 1.27 (95% CI, 0.74–2.18), respectively. Abdominal obesity had a significant but marginal association with night shift work (OR = 1.20, 95% CI, 1.01–1.43). A positive gradient between the number of years of night shift work and overweight or abdominal obesity was observed. Permanent night shift work showed the highest odds of being overweight (OR = 3.94, 95% CI, 1.40–11.03) and having increased abdominal obesity (OR = 3.34, 95% CI, 1.19–9.37). Irregular night shift work was also significantly associated with overweight (OR = 1.56, 95% CI, 1.13–2.14), but its association with abdominal obesity was borderline (OR = 1.26, 95% CI, 0.94–1.69). By contrast, the association between rotating night shift work and these parameters was not significant. Conclusion Permanent and irregular night shift work were more likely to be associated with overweight or abdominal obesity than rotating night shift work. These associations need to be verified in prospective cohort studies. PMID:29763461
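
    The odds ratios above come from multiple logistic regression. The Python sketch below is a generic, hypothetical illustration of how such adjusted odds ratios and confidence intervals are obtained with statsmodels; the simulated exposure, covariate, and coefficients are invented and are unrelated to the study's data.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # simulated stand-in data: exposure = night shift work, outcome = overweight,
        # adjusted only for age (the real study adjusted for more covariates)
        rng = np.random.default_rng(42)
        n = 2000
        night_shift = rng.integers(0, 2, n)
        age = rng.normal(40, 10, n)
        logit_p = -2.0 + 0.25 * night_shift + 0.03 * (age - 40)
        overweight = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

        X = sm.add_constant(pd.DataFrame({"night_shift": night_shift, "age": age}))
        fit = sm.Logit(overweight, X).fit(disp=0)

        # exponentiated coefficients are the adjusted odds ratios with 95% CIs
        odds_ratios = np.exp(fit.params)
        conf_int = np.exp(fit.conf_int())
        print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))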

  19. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time-averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  20. Massively parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Krasheninnikov, S.I.; Craddock, G.G.; Djordjevic, V.

    1996-01-01

    The Fokker-Planck code ALLA, recently developed for workstations, simulates the temporal evolution of 1V, 2V and 1D2V collisional edge plasmas. In this work we present the results of code parallelization on the CRI T3D massively parallel platform (ALLAp version). Simultaneously we benchmark the 1D2V parallel version against an analytic self-similar solution of the collisional kinetic equation. This test is not trivial as it demands a very strong spatial temperature and density variation within the simulation domain. (orig.)

  1. Shift schedule, work-family relationships, marital communication, job satisfaction and health among transport service shift workers.

    Science.gov (United States)

    Iskra-Golec, Irena; Smith, Lawrence; Wilczek-Rużyczka, Ewa; Siemiginowska, Patrycja; Wątroba, Joanna

    2017-02-21

    Existing research has documented that shiftwork consequences may depend on the shift system parameters. Fast rotating systems (1-3 shifts of the same kind in a row) and day work have been found to be less disruptive biologically and socially than slower rotating systems and afternoon and night work. The aim of this study was to compare day workers and shift workers of different systems in terms of rotation speed and shifts worked with regard to work-family and family-work positive and negative spillover, marital communication style, job satisfaction and health. Employees (N = 168) of the maintenance workshops of transportation service working different shift systems (day shift, weekly rotating 2 and 3‑shift system, and fast rotating 3-shift system) participated in the study. They completed the Work- Family Spillover Questionnaire, Marital Communication Questionnaire, Minnesota Job Satisfaction Questionnaire and the Physical Health Questionnaire (a part of the Standard Shiftwork Index). The workers of quicker rotating 3-shift systems reported significantly higher scores of family-to-work facilitation (F(3, 165) = 4.175, p = 0.007) and a higher level of constructive style of marital communication (Engagement F(3, 165) = 2.761, p = 0.044) than the workers of slower rotating 2-shift systems. There were no differences between the groups of workers with regard to health and job satisfaction. A higher level of work-family facilitation and a more constructive style of marital communication were found among the workers of faster rotating 3-shift system when compared to the workers of a slower rotating 2-shift system (afternoon, night). This may indicate that the fast rotating shift system in contrary to the slower rotating one is more friendly for the work and family domains and for the relationship between them. Int J Occup Med Environ Health 2017;30(1):121-131. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  2. Shift schedule, work–family relationships, marital communication, job satisfaction and health among transport service shift workers

    Directory of Open Access Journals (Sweden)

    Irena Iskra-Golec

    2017-02-01

    Objectives: Existing research has documented that shiftwork consequences may depend on the shift system parameters. Fast rotating systems (1–3 shifts of the same kind in a row) and day work have been found to be less disruptive biologically and socially than slower rotating systems and afternoon and night work. The aim of this study was to compare day workers and shift workers of different systems in terms of rotation speed and shifts worked with regard to work–family and family–work positive and negative spillover, marital communication style, job satisfaction and health. Material and Methods: Employees (N = 168) of the maintenance workshops of transportation service working different shift systems (day shift, weekly rotating 2 and 3‑shift system, and fast rotating 3-shift system) participated in the study. They completed the Work–Family Spillover Questionnaire, Marital Communication Questionnaire, Minnesota Job Satisfaction Questionnaire and the Physical Health Questionnaire (a part of the Standard Shiftwork Index). Results: The workers of quicker rotating 3-shift systems reported significantly higher scores of family-to-work facilitation (F(3, 165) = 4.175, p = 0.007) and a higher level of constructive style of marital communication (Engagement F(3, 165) = 2.761, p = 0.044) than the workers of slower rotating 2-shift systems. There were no differences between the groups of workers with regard to health and job satisfaction. Conclusions: A higher level of work–family facilitation and a more constructive style of marital communication were found among the workers of faster rotating 3-shift system when compared to the workers of a slower rotating 2-shift system (afternoon, night). This may indicate that the fast rotating shift system in contrary to the slower rotating one is more friendly for the work and family domains and for the relationship between them. Int J Occup Med Environ Health 2017;30(1):121–131

  3. Parallel Algorithms for Groebner-Basis Reduction

    Science.gov (United States)

    1987-09-25

    Technical report: Parallel Algorithms for Groebner-Basis Reduction. Productivity Engineering in the UNIX Environment.

  4. A possibility of parallel and anti-parallel diffraction measurements on ...

    Indian Academy of Sciences (India)

    resolution property of the other one, anti-parallel position, is very poor. .... in a wide angular region using BPC monochromator at the MF condition by showing ... and N Nimura, Proceedings of the 7th World Conference on Neutron Radiography, ...

  5. Real life working shift assignment problem

    Science.gov (United States)

    Sze, San-Nah; Kwek, Yeek-Ling; Tiong, Wei-King; Chiew, Kang-Leng

    2017-07-01

    This study concerns the working shift assignment in an outlet of Supermarket X in Eastern Mall, Kuching. The working shift assignment needs to be solved at least once every month. The current approval process for working shifts is troublesome and time-consuming. Furthermore, the management staff cannot get an overview of manpower and the working shift schedule. Thus, the aim of this study is to develop a working shift assignment simulation and propose a working shift assignment solution. The main objective of this study is to fulfill manpower demand at minimum operation cost. Besides, the day-off and meal-break policies should be fulfilled accordingly. A demand-based heuristic is proposed to assign working shifts, and the quality of the solution is evaluated using real data.
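
    The abstract names only a demand-based heuristic without giving its details; the following is a minimal greedy sketch of one plausible reading of that idea, in which the hardest-to-cover shifts are filled first from the least-loaded staff. The function name, shift labels, demands and the per-person shift cap are all hypothetical, not the authors' algorithm.

```python
# Minimal sketch of a demand-based greedy shift assignment (illustrative only;
# the paper's actual heuristic, costs and policies are not specified here).
# All shift names, demands and staff data below are hypothetical.

def assign_shifts(staff, demand, max_shifts_per_person=5):
    """Greedily cover each shift's demand with the least-loaded available staff."""
    load = {person: 0 for person in staff}          # shifts assigned so far
    roster = {shift: [] for shift in demand}

    # Fill the hardest-to-cover shifts first (largest demand first).
    for shift in sorted(demand, key=demand.get, reverse=True):
        candidates = sorted(staff, key=lambda p: load[p])
        for person in candidates:
            if len(roster[shift]) >= demand[shift]:
                break
            if load[person] < max_shifts_per_person:
                roster[shift].append(person)
                load[person] += 1
    return roster

staff = ["Ana", "Ben", "Cai", "Dee"]
demand = {"Mon-AM": 2, "Mon-PM": 1, "Tue-AM": 2}     # required headcount per shift
print(assign_shifts(staff, demand))
```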

  6. Dynamics and computation in functional shifts

    Science.gov (United States)

    Namikawa, Jun; Hashimoto, Takashi

    2004-07-01

    We introduce a new type of shift dynamics as an extended model of symbolic dynamics, and investigate the characteristics of shift spaces from the viewpoints of both dynamics and computation. This shift dynamics is called a functional shift, which is defined by a set of bi-infinite sequences of some functions on a set of symbols. To analyse the complexity of functional shifts, we measure them in terms of topological entropy, and locate their languages in the Chomsky hierarchy. Through this study, we argue that considering functional shifts from the viewpoints of both dynamics and computation gives us opposite results about the complexity of systems. We also describe a new class of shift spaces whose languages are not recursively enumerable.
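
    For reference, the topological entropy used here to measure the complexity of a shift space X is conventionally defined from the number of admissible words of length n, written B_n(X); a standard statement of that definition (not the paper's functional-shift-specific machinery) is:

```latex
% Topological entropy of a shift space X, where B_n(X) is the set of
% admissible words (blocks) of length n appearing in points of X;
% the limit exists by subadditivity of log|B_n(X)|.
h(X) = \lim_{n \to \infty} \frac{1}{n} \log \left| B_n(X) \right|
```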

  7. Research in Parallel Algorithms and Software for Computational Aerosciences

    Science.gov (United States)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  8. A language for data-parallel and task parallel programming dedicated to multi-SIMD computers. Contributions to hydrodynamic simulation with lattice gases

    International Nuclear Information System (INIS)

    Pic, Marc Michel

    1995-01-01

    Parallel programming covers task-parallelism and data-parallelism. Many problems need both kinds of parallelism. Multi-SIMD computers allow a hierarchical approach to these parallelisms. The T++ language, based on C++, is dedicated to exploiting Multi-SIMD computers using a programming paradigm which extends array programming to task management. Our language introduces arrays of independent tasks executed separately (MIMD) on subsets of processors with identical behaviour (SIMD), in order to express the hierarchical inclusion of data-parallelism in task-parallelism. To manipulate tasks and data in a symmetrical way we propose meta-operations which have the same behaviour on task arrays and on data arrays. We explain how to implement this language on our parallel computer SYMPHONIE in order to profit from the locally-shared memory, the hardware virtualization, and the multiplicity of communication networks. We also analyse a typical application of such an architecture. Finite element schemes for fluid mechanics need powerful parallel computers and require large floating-point capabilities. Lattice gases are an alternative to such simulations. Boolean lattice gases are simple, stable and modular and need no floating-point computation, but include numerical noise. Boltzmann lattice gases offer high computational precision, but need floating point and are only locally stable. We propose a new scheme, called multi-bit, which keeps the advantages of each Boolean model to which it is applied, with high numerical precision and reduced noise. Experiments on viscosity, physical behaviour, noise reduction and spurious invariants are shown, and implementation techniques for parallel Multi-SIMD computers are detailed. (author) [fr

  9. A task parallel implementation of fast multipole methods

    KAUST Repository

    Taura, Kenjiro; Nakashima, Jun; Yokota, Rio; Maruyama, Naoya

    2012-01-01

    This paper describes a task parallel implementation of ExaFMM, an open source implementation of fast multipole methods (FMM), using a lightweight task parallel library MassiveThreads. Although there have been many attempts at parallelizing FMM

  10. Optimisation of a parallel ocean general circulation model

    OpenAIRE

    M. I. Beare; D. P. Stevens

    1997-01-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by...

  11. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (parallelization). Progress report fiscal 1996

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Hideo; Kawai, Wataru; Nemoto, Toshiyuki [Fujitsu Ltd., Tokyo (Japan); and others

    1997-12-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. The results are reported in three parts: the vectorization part, the parallelization part and the porting part. This report describes the parallelization. In the parallelization part, the parallelization of the 2-dimensional relativistic electromagnetic particle code EM2D, the cylindrical direct numerical simulation code CYLDNS and DGR, a molecular dynamics code for simulating radiation damage in diamond crystals, is described. In the vectorization part, the vectorization of the two- and three-dimensional discrete ordinates simulation code DORT-TORT, the gas dynamics analysis code FLOWGR and the relativistic Boltzmann-Uehling-Uhlenbeck simulation code RBUU is described. In the porting part, the porting of the reactor safety analysis codes RELAP5/MOD3.2 and RELAP5/MOD3.2.1.2, the nuclear data processing system NJOY and the 2-D multigroup discrete ordinates transport code TWOTRAN-II is described, together with a survey for the porting of the command-driven interactive data analysis plotting program IPLOT. (author)

  12. Associations of rotational shift work and night shift status with hypertension: a systematic review and meta-analysis.

    Science.gov (United States)

    Manohar, Sandhya; Thongprayoon, Charat; Cheungpasitporn, Wisit; Mao, Michael A; Herrmann, Sandra M

    2017-10-01

    The reported risks of hypertension (HTN) in rotating shift and night shift workers are controversial. The objective of this meta-analysis was to assess the association between shift work status and HTN. A literature search was performed using MEDLINE, EMBASE and Cochrane Database from inception through October 2016. Studies that reported odds ratios (OR) comparing the risk of HTN in shift workers were included. A prespecified subgroup analysis by rotating shift and night shift statuses was also performed. Pooled OR and 95% confidence interval (CI) were calculated using a random-effect, generic inverse variance method. The protocol for this study is registered with International Prospective Register of Systematic Reviews; no. CRD42016051843. Twenty-seven observational studies (nine cohort and 18 cross-sectional studies) with a total of 394 793 individuals were enrolled. The pooled ORs of HTN in shift workers in cohort and cross-sectional studies were 1.31 (95% CI, 1.07-1.60) and 1.10 (95% CI, 1.00-1.20), respectively. When meta-analysis was restricted only to cohort studies in rotating shift, the pooled OR of HTN in rotating shift workers was 1.34 (95% CI, 1.08-1.67). The data regarding night shift and HTN in cohort studies were limited. The pooled OR of HTN in night shift workers in cross-sectional studies was 1.07 (95% CI, 0.85-1.35). Based on the findings of our meta-analysis, shiftwork status may play an important role in HTN, as there is a significant association between rotating shift work and HTN. However, there is no significant association between night shift status and risk of HTN.
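
    The pooled estimates quoted above come from a random-effects, generic inverse-variance model; as a rough illustration of that calculation (not the authors' code), the sketch below pools log odds ratios with DerSimonian-Laird weights. The per-study ORs and confidence intervals in the example are invented.

```python
import math

# Sketch of a random-effects (DerSimonian-Laird) generic inverse-variance pooling
# of odds ratios, the method named in the abstract. The example ORs/CIs are invented.

def pool_random_effects(ors, ci_los, ci_his):
    y = [math.log(o) for o in ors]                           # log odds ratios
    se = [(math.log(hi) - math.log(lo)) / (2 * 1.96)         # SE from 95% CI width
          for lo, hi in zip(ci_los, ci_his)]
    w = [1 / s**2 for s in se]                               # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, y))     # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                  # between-study variance
    w_star = [1 / (s**2 + tau2) for s in se]                 # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se_mu = math.sqrt(1 / sum(w_star))
    return math.exp(mu), math.exp(mu - 1.96 * se_mu), math.exp(mu + 1.96 * se_mu)

# Hypothetical per-study odds ratios with their 95% CIs:
print(pool_random_effects([1.2, 1.4, 1.1], [0.9, 1.0, 0.8], [1.6, 2.0, 1.5]))
```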

  13. Configuration affects parallel stent grafting results.

    Science.gov (United States)

    Tanious, Adam; Wooster, Mathew; Armstrong, Paul A; Zwiebel, Bruce; Grundy, Shane; Back, Martin R; Shames, Murray L

    2018-05-01

    A number of adjunctive "off-the-shelf" procedures have been described to treat complex aortic diseases. Our goal was to evaluate parallel stent graft configurations and to determine an optimal formula for these procedures. This is a retrospective review of all patients at a single medical center treated with parallel stent grafts from January 2010 to September 2015. Outcomes were evaluated on the basis of parallel graft orientation, type, and main body device. Primary end points included parallel stent graft compromise and overall endovascular aneurysm repair (EVAR) compromise. There were 78 patients treated with a total of 144 parallel stents for a variety of pathologic processes. There was a significant correlation between main body oversizing and snorkel compromise (P = .0195) and overall procedural complication (P = .0019) but not with endoleak rates. Patients were organized into the following oversizing groups for further analysis: 0% to 10%, 10% to 20%, and >20%. Those oversized into the 0% to 10% group had the highest rate of overall EVAR complication (73%; P = .0003). There were no significant correlations between any one particular configuration and overall procedural complication. There was also no significant correlation between total number of parallel stents employed and overall complication. Composite EVAR configuration had no significant correlation with individual snorkel compromise, endoleak, or overall EVAR or procedural complication. The configuration most prone to individual snorkel compromise and overall EVAR complication was a four-stent configuration with two stents in an antegrade position and two stents in a retrograde position (60% complication rate). The configuration most prone to endoleak was one or two stents in retrograde position (33% endoleak rate), followed by three stents in an all-antegrade position (25%). There was a significant correlation between individual stent configuration and stent compromise (P = .0385), with 31

  14. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    International Nuclear Information System (INIS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-01-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines

  15. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    Science.gov (United States)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
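
    As a rough illustration of why polynomial smoothers parallelize more naturally than Gauss-Seidel, the sketch below contrasts a sequential Gauss-Seidel sweep with a Chebyshev iteration built only from matrix-vector products and vector updates. It is not the papers' implementation; the test matrix and the eigenvalue bounds passed to the smoother are assumed for illustration.

```python
import numpy as np

# Gauss-Seidel sweep vs. a Chebyshev polynomial smoother for an SPD system A x = b.
# The eigenvalue bounds (lmin, lmax) are assumed known, as is typical when Chebyshev
# is used as a multigrid smoother targeting the upper part of the spectrum.

def gauss_seidel_sweep(A, b, x):
    n = len(b)
    for i in range(n):                      # inherently sequential over unknowns
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

def chebyshev_smooth(A, b, x, lmin, lmax, steps=3):
    theta, delta = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)
    sigma = theta / delta
    rho = 1.0 / sigma
    r = b - A @ x
    d = r / theta
    for _ in range(steps):                  # only mat-vecs and axpys: parallel-friendly
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# Hypothetical 1D Poisson test problem; lmin = lmax/10 is a common smoother choice.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_cheb = chebyshev_smooth(A, b, np.zeros(n), lmin=0.4, lmax=4.0, steps=5)
x_gs = gauss_seidel_sweep(A, b, np.zeros(n))
```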

  16. Micropatch Antenna Phase Shifting

    National Research Council Canada - National Science Library

    Thursby, Michael

    2000-01-01

    .... We have been looking at the ability of embedded element to adjust the phase shift seen by the element with the goal of being able to remove the phase shifting devices from the antenna and replace...

  17. Micropatch Antenna Phase Shifting

    National Research Council Canada - National Science Library

    Thursby, Michael

    1999-01-01

    .... We have been looking at the ability of embedded element to adjust the phase shift seen by the element with the goal of being able to remove the phase shifting devices from the antenna and replace...

  18. Scalable parallel prefix solvers for discrete ordinates transport

    International Nuclear Information System (INIS)

    Pautz, S.; Pandya, T.; Adams, M.

    2009-01-01

    The well-known 'sweep' algorithm for inverting the streaming-plus-collision term in first-order deterministic radiation transport calculations has some desirable numerical properties. However, it suffers from parallel scaling issues caused by a lack of concurrency. The maximum degree of concurrency, and thus the maximum parallelism, grows more slowly than the problem size for sweeps-based solvers. We investigate a new class of parallel algorithms that involves recasting the streaming-plus-collision problem in prefix form and solving via cyclic reduction. This method, although computationally more expensive at low levels of parallelism than the sweep algorithm, offers better theoretical scalability properties. Previous work has demonstrated this approach for one-dimensional calculations; we show how to extend it to multidimensional calculations. Notably, for multiple dimensions it appears that this approach is limited to long-characteristics discretizations; other discretizations cannot be cast in prefix form. We implement two variants of the algorithm within the radlib/SCEPTRE transport code library at Sandia National Laboratories and show results on two different massively parallel systems. Both the 'forward' and 'symmetric' solvers behave similarly, scaling well to larger degrees of parallelism than sweeps-based solvers. We do observe some issues at the highest levels of parallelism (relative to the system size) and discuss possible causes. We conclude that this approach shows good potential for future parallel systems, but the parallel scalability will depend heavily on the architecture of the communication networks of these systems. (authors)
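
    The "prefix form" idea can be pictured on a single characteristic: a sweep is a first-order recurrence psi[i+1] = a[i]*psi[i] + b[i], and composing the affine maps (a[i], b[i]) with a parallel scan yields every prefix in logarithmic depth. The sketch below shows that reduction with made-up coefficients; it is not the radlib/SCEPTRE implementation described in the abstract.

```python
# A transport sweep along one characteristic is psi[i+1] = a[i]*psi[i] + b[i].
# Affine maps compose, so an inclusive parallel scan over (a, b) pairs gives all
# prefixes in O(log N) rounds. Coefficients below are hypothetical placeholders
# for the per-cell attenuation and source terms of the real discretization.

def compose(f, g):
    # (a1, b1) o (a2, b2) applied as f(g(x)) = a1*(a2*x + b2) + b1
    a1, b1 = f
    a2, b2 = g
    return (a1 * a2, a1 * b2 + b1)

def inclusive_scan(maps):
    """Log-depth (Hillis-Steele style) scan; each round is fully data parallel."""
    maps = list(maps)
    step = 1
    while step < len(maps):
        maps = [maps[i] if i < step else compose(maps[i], maps[i - step])
                for i in range(len(maps))]
        step *= 2
    return maps

a = [0.9, 0.8, 0.7, 0.95]          # per-cell attenuation (hypothetical)
b = [0.1, 0.2, 0.05, 0.3]          # per-cell source contribution (hypothetical)
prefix = inclusive_scan(list(zip(a, b)))
psi0 = 1.0                          # inflow boundary value
psi = [ai * psi0 + bi for ai, bi in prefix]   # psi[i] = flux leaving cell i
print(psi)                          # matches the sequential sweep result
```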

  19. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
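
    For reference, the conjugate gradient iteration mentioned above is short enough to sketch in full; this is a minimal unpreconditioned version on a hypothetical 1D Laplacian test matrix, without the domain-decomposition preconditioning the paper is actually about.

```python
import numpy as np

# Minimal (unpreconditioned) conjugate gradient for an SPD system A x = b.
# The test matrix is a hypothetical 1D Laplacian, not the paper's PDE systems.

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian (SPD)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))    # residual norm, should be near zero
```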

  20. Xyce parallel electronic simulator : users' guide.

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is

  1. Influences on Dietary Choices during Day versus Night Shift in Shift Workers: A Mixed Methods Study.

    Science.gov (United States)

    Bonnell, Emily K; Huggins, Catherine E; Huggins, Chris T; McCaffrey, Tracy A; Palermo, Claire; Bonham, Maxine P

    2017-02-26

    Shift work is associated with diet-related chronic conditions such as obesity and cardiovascular disease. This study aimed to explore factors influencing food choice and dietary intake in shift workers. A fixed mixed method study design was undertaken on a convenience sample of firefighters who continually work a rotating roster. Six focus groups (n = 41) were conducted to establish factors affecting dietary intake whilst at work. Dietary intake was assessed using repeated 24 h dietary recalls (n = 19). Interviews were audio recorded, transcribed verbatim, and interpreted using thematic analysis. Dietary data were entered into FoodWorks and analysed using the Wilcoxon signed-rank test (p < 0.05). Factors reported to influence food choice included the shift schedule; attitudes and decisions of co-workers; time and accessibility; and knowledge of the relationship between food and health. Participants reported consuming more discretionary foods and having limited availability of healthy food choices on night shift. Energy intakes (kJ/day) did not differ between days that included a day or night shift, but greater energy density (ED, kJ/g/day) of the diet was observed on night shift compared with day shift. This study has identified a number of dietary-specific shift-related factors that may contribute to an increase in unhealthy behaviours in a shift-working population. Given the increased risk of developing chronic diseases, organisational change to support workers in this environment is warranted.

  2. A compressed shift schedule: dealing with some of the problems of shift work

    Energy Technology Data Exchange (ETDEWEB)

    Cunningham, J B [Victoria University, Victoria, BC (Canada). School of Public Administration

    1989-07-01

    This study examines some of the psychological and behavioural effects of a 12-hour compressed shift schedule on coal miners in two organisations in Western Canada. It suggests that young, married compressed shift workers are more satisfied with their family relationship. They spend less of their leisure time with spouses when working shifts, and do not spend any more time with them on their days off. They have less time available for many leisure activities on their workdays. The extra time on days off is not reallocated to the leisure activities they were unable to do on their workdays. Some extra leisure time on days off may be spent on personal hobbies. There is no suggestion that the compressed shift schedule has any negative effect on the individual's health. 38 refs., 3 tabs.

  3. Shift work as an oxidative stressor

    OpenAIRE

    Pasalar Parvin; Farahani Saeed; Sharifian Akbar; Gharavi Marjan; Aminian Omid

    2005-01-01

    Abstract Background Some medical disorders have higher prevalence in shift workers than others. This study was designed to evaluate the effect of night-shift-working on total plasma antioxidant capacity, with respect to the causative role of oxidative stress in induction of some of these disorders. Methods Two blood samples were taken from 44 workers with a rotational shift schedule, one after their day shift and one after their night shift. The total plasma antioxidant capacity of each worke...

  4. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  5. Parallelization and automatic data distribution for nuclear reactor simulations

    International Nuclear Information System (INIS)

    Liebrock, L.M.

    1997-01-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed

  6. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    Science.gov (United States)

    Zhang, K.

    2016-12-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of apparent parameters (apparent resistivity and impedance phase) of MT in the high frequency range, and corrected during the inversion. The method is an automatic, computer-based processing technique with no extra cost; it avoids additional field work and indoor processing and gives good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure to the algorithm, improved its computational efficiency, reduced its memory requirements and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm. The verification and application example of the 3D inversion algorithm is shown in Figure 1. From the comparison in Figure 1, the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the type of data (impedance/tipper/impedance and tipper), and the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for terrain inversion, so it is very useful for the study of the continental shelf with continuous exploration of land, marine and underground domains. The three-dimensional electrical model of the ore zone reflects the basic information of strata, rock and structure. Although it cannot indicate the ore body position directly, important clues are provided for prospecting work by the delineation of the diorite pluton uplift range. The test results show that the high quality of

  7. Influences on Dietary Choices during Day versus Night Shift in Shift Workers: A Mixed Methods Study

    Science.gov (United States)

    Bonnell, Emily K.; Huggins, Catherine E.; Huggins, Chris T.; McCaffrey, Tracy A.; Palermo, Claire; Bonham, Maxine P.

    2017-01-01

    Shift work is associated with diet-related chronic conditions such as obesity and cardiovascular disease. This study aimed to explore factors influencing food choice and dietary intake in shift workers. A fixed mixed method study design was undertaken on a convenience sample of firefighters who continually work a rotating roster. Six focus groups (n = 41) were conducted to establish factors affecting dietary intake whilst at work. Dietary intake was assessed using repeated 24 h dietary recalls (n = 19). Interviews were audio recorded, transcribed verbatim, and interpreted using thematic analysis. Dietary data were entered into FoodWorks and analysed using the Wilcoxon signed-rank test (p < 0.05). Factors reported to influence food choice included the shift schedule; attitudes and decisions of co-workers; time and accessibility; and knowledge of the relationship between food and health. Participants reported consuming more discretionary foods and limited availability of healthy food choices on night shift. Energy intakes (kJ/day) did not differ between days that included a day or night shift but greater energy density (ED, kJ/g/day) of the diet was observed on night shift compared with day shift. This study has identified a number of dietary-specific shift-related factors that may contribute to an increase in unhealthy behaviours in a shift-working population. Given the increased risk of developing chronic diseases, organisational change to support workers in this environment is warranted. PMID:28245625

  8. Faktor Dan Penjadualan Shift Kerja

    OpenAIRE

    Maurits, Lientje Setyawati; Widodo, Imam Djati

    2008-01-01

    Shift work has negative effects on physical and mental health, work performance and job accidents. Disturbance of circadian rhythms is indicated as a source of these problems. This article reviews research related to the impacts of shift work and establishes basic principles of work shift scheduling that take human needs and limitations into account.

  9. Visual attention shifting in autism spectrum disorders.

    Science.gov (United States)

    Richard, Annette E; Lajiness-O'Neill, Renee

    2015-01-01

    Abnormal visual attention has been frequently observed in autism spectrum disorders (ASD). Abnormal shifting of visual attention is related to abnormal development of social cognition and has been identified as a key neuropsychological finding in ASD. Better characterizing attention shifting in ASD and its relationship with social functioning may help to identify new targets for intervention and improving social communication in these disorders. Thus, the current study investigated deficits in attention shifting in ASD as well as relationships between attention shifting and social communication in ASD and neurotypicals (NT). To investigate deficits in visual attention shifting in ASD, 20 ASD and 20 age- and gender-matched NT completed visual search (VS) and Navon tasks with attention-shifting demands as well as a set-shifting task. VS was a feature search task with targets defined in one of two dimensions; Navon required identification of a target letter presented at the global or local level. Psychomotor and processing speed were entered as covariates. Relationships between visual attention shifting, set shifting, and social functioning were also examined. ASD and NT showed comparable costs of shifting attention. However, psychomotor and processing speed were slower in ASD than in NT, and psychomotor and processing speed were positively correlated with attention-shifting costs on Navon and VS, respectively, for both groups. Attention shifting on VS and Navon were correlated among NT, while attention shifting on Navon was correlated with set shifting among ASD. Attention-shifting costs on Navon were positively correlated with restricted and repetitive behaviors among ASD. Relationships between attention shifting and psychomotor and processing speed, as well as relationships between measures of different aspects of visual attention shifting, suggest inefficient top-down influences over preattentive visual processing in ASD. Inefficient attention shifting may be

  10. The convergence of parallel Boltzmann machines

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.; Eckmiller, R.; Hartmann, G.; Hauske, G.

    1990-01-01

    We discuss the main results obtained in a study of a mathematical model of synchronously parallel Boltzmann machines. We present supporting evidence for the conjecture that a synchronously parallel Boltzmann machine maximizes a consensus function that consists of a weighted sum of the regular

  11. Implementations of BLAST for parallel computers.

    Science.gov (United States)

    Jülich, A

    1995-02-01

    The BLAST sequence comparison programs have been ported to a variety of parallel computers: the shared memory machine Cray Y-MP 8/864 and the distributed memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited for parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799 residue protein query sequence and the protein database PIR were used.
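
    The abstract describes a coarse-grained parallelization in which the comparison work is spread over processors; a toy sketch of one such scheme follows, splitting a made-up sequence database across worker processes. The scoring function is a trivial 3-mer counter standing in for BLAST's actual algorithm, and all names and data are hypothetical.

```python
from multiprocessing import Pool

# Toy illustration of the database-partitioning idea behind parallel BLAST:
# database entries are scored independently by worker processes.
# The scoring function below is a trivial stand-in, not the BLAST algorithm.

QUERY = "ACDEFGHIK"   # hypothetical query sequence

def toy_score(entry):
    name, seq = entry
    # count exact 3-mer matches between query and subject (hypothetical metric)
    kmers = {QUERY[i:i+3] for i in range(len(QUERY) - 2)}
    hits = sum(1 for i in range(len(seq) - 2) if seq[i:i+3] in kmers)
    return name, hits

if __name__ == "__main__":
    # Made-up database entries; in practice these would be read from a sequence file.
    database = [("seq%d" % i, ("ACDEFGHIK" * (i % 3 + 1)) + "LMNPQRSTVWY")
                for i in range(8)]
    with Pool(processes=4) as pool:
        results = pool.map(toy_score, database)       # entries scored in parallel
    print(sorted(results, key=lambda r: r[1], reverse=True)[:3])
```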

  12. Synchronization Of Parallel Discrete Event Simulations

    Science.gov (United States)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Algorithm processes events optimistically in time cycles adapting while simulation in progress. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.

  13. Distributed parallel messaging for multiprocessor systems

    Science.gov (United States)

    Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka

    2013-06-04

    A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.

  14. Shift work as an oxidative stressor

    Directory of Open Access Journals (Sweden)

    Pasalar Parvin

    2005-12-01

    Full Text Available Abstract Background Some medical disorders have higher prevalence in shift workers than others. This study was designed to evaluate the effect of night-shift-working on total plasma antioxidant capacity, with respect to the causative role of oxidative stress in induction of some of these disorders. Methods Two blood samples were taken from 44 workers with a rotational shift schedule, one after their day shift and one after their night shift. The total plasma antioxidant capacity of each worker was measured through the FRAP method. The impacts of age and weight were also assessed. Results The total plasma antioxidant capacity was measured in 44 shift-workers with a mean age of 36.57 years (SD: 10.18) and mean BMI of 26.06 (SD: 4.37) after their day and night shifts. The mean reduction of total plasma antioxidant capacity after the night shift was 105.8 μmol/L (SD: 146.39). Also, a significant correlation was shown between age and weight and total plasma antioxidant capacity. Age and weight were found to be inversely related to total plasma antioxidant capacity; as age and weight increased, the total plasma antioxidant capacity decreased. Conclusion Shift work can act as an oxidative stressor and may induce many medical disorders. Aging and obesity in shift workers make them more sensitive to this hazardous effect.

  15. Shift work-related health problems in

    Directory of Open Access Journals (Sweden)

    S. Khavaji

    2010-04-01

    Full Text Available Background and aims: Shift work is a major feature of working life that affects diverse aspects of human life. The main purposes of this study were to investigate shift work-related health problems and their risk factors among workers on a "12-hour shift" schedule. Methods: This cross-sectional study was carried out at 8 petrochemical industries in the Asalooyeh area. The study population consisted of 1203 workers, including 549 shift workers (46%) and 654 day workers (54%). Data on personal details, shift schedule and adverse effects of shift work were collected by anonymous questionnaire. Statistical analyses were performed using SPSS, version 11.5. The level of significance was set at 5%. Results: Although the results showed that health problems were more prevalent among shift workers than among day workers, the differences were significant only for gastrointestinal and musculoskeletal disorders (p < 0.05). Multiple linear regression indicated that, in addition to shift working, other variables such as long work hours, type of employment, second job, number of children and job title were associated with health problems. Conclusion: Prevalence rates of gastrointestinal and musculoskeletal problems among shift workers were significantly higher than those of day workers. Although working in a shift system was the main factor associated with the reported problems, other demographic and work variables were also found to be associated.

  16. Comparison of Mental Health and Quality of Life between Shift and Non-shift Employees of Service Industries

    Directory of Open Access Journals (Sweden)

    Nahal Salimi

    2016-12-01

    Full Text Available This study examined the relationship between employment practices and employees' mental health and quality of life in Iran. In particular, the study compared the mental health and quality of life of shift and non-shift workers in sensitive employment settings. Using a cross-sectional survey design, 120 individuals employed in two airline companies as either shift or non-shift employees completed the survey for the study. Data were collected using the General Health Questionnaire (GHQ28) for mental health, the Short Form (36) Health Survey (SF-36) for quality of life, and a demographic questionnaire. Multivariate analysis of variance (MANOVA) was used to analyze the collected data. The results showed that (1) type of work (shift or non-shift) has an effect on mental health and quality of life; and (2) there are significant differences in dimensions of quality of life and mental health between shift and non-shift staff.

  17. SimShiftDB; local conformational restraints derived from chemical shift similarity searches on a large synthetic database

    Energy Technology Data Exchange (ETDEWEB)

    Ginzinger, Simon W. [Center of Applied Molecular Engineering, University of Salzburg, Department of Molecular Biology, Division of Bioinformatics (Austria)], E-mail: simon@came.sbg.ac.at; Coles, Murray [Max-Planck-Institute for Developmental Biology, Department of Protein Evolution (Germany)], E-mail: Murray.Coles@tuebingen.mpg.de

    2009-03-15

    We present SimShiftDB, a new program to extract conformational data from protein chemical shifts using structural alignments. The alignments are obtained in searches of a large database containing 13,000 structures and corresponding back-calculated chemical shifts. SimShiftDB makes use of chemical shift data to provide accurate results even in the case of low sequence similarity, and with even coverage of the conformational search space. We compare SimShiftDB to HHSearch, a state-of-the-art sequence-based search tool, and to TALOS, the current standard tool for the task. We show that for a significant fraction of the predicted similarities, SimShiftDB outperforms the other two methods. Particularly, the high coverage afforded by the larger database often allows predictions to be made for residues not involved in canonical secondary structure, where TALOS predictions are both less frequent and more error prone. Thus SimShiftDB can be seen as a complement to currently available methods.

  18. SimShiftDB; local conformational restraints derived from chemical shift similarity searches on a large synthetic database

    International Nuclear Information System (INIS)

    Ginzinger, Simon W.; Coles, Murray

    2009-01-01

    We present SimShiftDB, a new program to extract conformational data from protein chemical shifts using structural alignments. The alignments are obtained in searches of a large database containing 13,000 structures and corresponding back-calculated chemical shifts. SimShiftDB makes use of chemical shift data to provide accurate results even in the case of low sequence similarity, and with even coverage of the conformational search space. We compare SimShiftDB to HHSearch, a state-of-the-art sequence-based search tool, and to TALOS, the current standard tool for the task. We show that for a significant fraction of the predicted similarities, SimShiftDB outperforms the other two methods. Particularly, the high coverage afforded by the larger database often allows predictions to be made for residues not involved in canonical secondary structure, where TALOS predictions are both less frequent and more error prone. Thus SimShiftDB can be seen as a complement to currently available methods

  19. Fuzzy Determination of Target Shifting Time and Torque Control of Shifting Phase for Dry Dual Clutch Transmission

    Directory of Open Access Journals (Sweden)

    Zhiguo Zhao

    2014-01-01

    Full Text Available Based on an independently developed five-speed dry dual clutch transmission (DDCT), the paper proposes a torque coordinating control strategy between the engine and the two clutches, which obtains the engine speed and the clutch transferred torque in the shifting process, adequately reflecting the driver's intention and improving shifting quality. A five-degree-of-freedom (DOF) shifting dynamics model of the DDCT with a single intermediate shaft is first established according to its physical characteristics. The quantitative control objectives of the shifting process are then presented. The fuzzy decision of shifting time and the model-based torque coordinating control strategy are proposed and verified by simulation under different driving intentions in up-/downshifting processes with the DCT model established in MATLAB/Simulink. Simulation results validate that the shifting control algorithm proposed in this paper not only meets the shifting quality requirements but also adapts to various shifting intentions, showing strong robustness.

  20. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  1. Seamless-merging-oriented parallel inverse lithography technology

    International Nuclear Information System (INIS)

    Yang Yiwei; Shi Zheng; Shen Shanhu

    2009-01-01

    Inverse lithography technology (ILT), a promising resolution enhancement technology (RET) used in next generations of IC manufacture, has the capability to push lithography to its limit. However, the existing methods of ILT are either time-consuming, due to the large layout handled in a single process, or not accurate enough, due to simple block merging in the parallel process. The seamless-merging-oriented parallel ILT method proposed in this paper is fast because of the parallel process; and most importantly, convergence enhancement penalty terms (CEPT) introduced in the parallel ILT optimization process take the environment into consideration as well as environmental change through target updating. This method increases the similarity of the overlapped area between guard-bands and work units, makes the merging process approach seamless and hence reduces hot-spots. The experimental results show that seamless-merging-oriented parallel ILT not only accelerates the optimization process, but also significantly improves the quality of ILT.

  2. Automatic Management of Parallel and Distributed System Resources

    Science.gov (United States)

    Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.

    1990-01-01

    Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gaseous cellular automata; and sparse matrix Cholesky factorization.

  3. Synchronization Techniques in Parallel Discrete Event Simulation

    OpenAIRE

    Lindén, Jonatan

    2018-01-01

    Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...

  4. Parallel sparse direct solver for integrated circuit simulation

    CERN Document Server

    Chen, Xiaoming; Yang, Huazhong

    2017-01-01

    This book describes algorithmic methods and parallelization techniques to design a parallel sparse direct solver which is specifically targeted at integrated circuit simulation problems. The authors describe a complete flow and detailed parallel algorithms of the sparse direct solver. They also show how to improve the performance by simple but effective numerical techniques. The sparse direct solver techniques described can be applied to any SPICE-like integrated circuit simulator and have been proven to be high-performance in actual circuit simulation. Readers will benefit from the state-of-the-art parallel integrated circuit simulation techniques described in this book, especially the latest parallel sparse matrix solution techniques. · Introduces complicated algorithms of sparse linear solvers, using concise principles and simple examples, without complex theory or lengthy derivations; · Describes a parallel sparse direct solver that can be adopted to accelerate any SPICE-like integrated circuit simulato...

  5. Streaming nested data parallelism on multicores

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2016-01-01

    The paradigm of nested data parallelism (NDP) allows a variety of semi-regular computation tasks to be mapped onto SIMD-style hardware, including GPUs and vector units. However, some care is needed to keep down space consumption in situations where the available parallelism may vastly exceed...

  6. Parallel Boltzmann machines : a mathematical model

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.

    1991-01-01

    A mathematical model is presented for the description of parallel Boltzmann machines. The framework is based on the theory of Markov chains and combines a number of previously known results into one generic model. It is argued that parallel Boltzmann machines maximize a function consisting of a

  7. 17 CFR 12.24 - Parallel proceedings.

    Science.gov (United States)

    2010-04-01

    ...) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration proceeding... the receivership includes the resolution of claims made by customers; or (3) A petition filed under... any of the foregoing with knowledge of a parallel proceeding shall promptly notify the Commission, by...

  8. Parallel computing solution of Boltzmann neutron transport equation

    International Nuclear Information System (INIS)

    Ansah-Narh, T.

    2010-01-01

    The focus of the research was on developing a parallel computing algorithm for solving eigenvalues of the Boltzmann Neutron Transport Equation (BNTE) in a slab geometry using a multi-grid approach. In response to the problem of slow execution of serial computing when solving large problems, such as the BNTE, the study focused on the design of parallel computing systems, an evolution of serial computing that uses multiple processing elements simultaneously to solve complex physical and mathematical problems. The finite element method (FEM) was used for the spatial discretization scheme, while angular discretization was accomplished by expanding the angular dependence in terms of Legendre polynomials. The eigenvalues representing the multiplication factors in the BNTE were determined by the power method. MATLAB Compiler Version 4.1 (R2009a) was used to compile the MATLAB codes of the BNTE. The implemented parallel algorithms were enabled with matlabpool, a Parallel Computing Toolbox function. The option UseParallel was set to 'always' (its default value is 'never'), so that the solvers computed estimated gradients in parallel. The parallel computing system was used to handle all the bottlenecks in the matrix generated from the finite element scheme and in each domain generated by the power method. The parallel algorithm was implemented on a Symmetric Multi Processor (SMP) cluster machine with Intel 32-bit quad-core x86 processors. Convergence rates and timings for the algorithm on the SMP cluster machine were obtained. Numerical experiments indicated that the designed parallel algorithm could reach perfect speedup and had good stability and scalability. (au)
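
    The power method named above is simple enough to illustrate directly; the sketch below iterates a small, hypothetical matrix standing in for the assembled FEM operator and returns its dominant eigenvalue, the analogue of the multiplication factor. It is not the MATLAB/matlabpool code described in the abstract.

```python
import numpy as np

# Minimal power-method sketch: repeated multiplication converges to the dominant
# eigenvalue/eigenvector pair. The matrix M is a small hypothetical stand-in for
# the operator assembled from the finite element discretization.

def power_method(M, tol=1e-10, max_iter=500):
    x = np.ones(M.shape[0])
    k = 1.0
    for _ in range(max_iter):
        y = M @ x
        k_new = np.linalg.norm(y) / np.linalg.norm(x)   # eigenvalue estimate
        x = y / np.linalg.norm(y)                       # renormalize the iterate
        if abs(k_new - k) < tol:
            return k_new, x
        k = k_new
    return k, x

M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
k_dom, mode = power_method(M)
print(k_dom)
```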

  9. Parallel electric fields from ionospheric winds

    International Nuclear Information System (INIS)

    Nakada, M.P.

    1987-01-01

    The possible production of electric fields parallel to the magnetic field by dynamo winds in the E region is examined, using a jet stream wind model. Current return paths through the F region above the stream are examined as well as return paths through the conjugate ionosphere. The Wulf geometry with horizontal winds moving in opposite directions one above the other is also examined. Parallel electric fields are found to depend strongly on the width of current sheets at the edges of the jet stream. If these are narrow enough, appreciable parallel electric fields are produced. These appear to be sufficient to heat the electrons which reduces the conductivity and produces further increases in parallel electric fields and temperatures. Calculations indicate that high enough temperatures for optical emission can be produced in less than 0.3 s. Some properties of auroras that might be produced by dynamo winds are examined; one property is a time delay in brightening at higher and lower altitudes

  10. Data parallel sorting for particle simulation

    Science.gov (United States)

    Dagum, Leonardo

    1992-01-01

    Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
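
    Since only integer keys (for example, cell indices) need sorting, the O(N) bound mentioned above is attainable sequentially with a counting sort; the sketch below shows such a version on made-up particle data, as a baseline for the data-parallel merge-based scheme the paper develops (which is not reproduced here).

```python
# Sketch of the O(N) integer (counting) sort that suffices sequentially for
# particle simulation: particles are binned by their integer cell index.
# The particle data below are hypothetical.

def counting_sort_by_cell(particles, num_cells):
    """particles: list of (cell_index, payload); returns the list ordered by cell."""
    counts = [0] * num_cells
    for cell, _ in particles:
        counts[cell] += 1
    starts, total = [], 0
    for c in counts:                      # exclusive prefix sum of the counts
        starts.append(total)
        total += c
    out = [None] * len(particles)
    for cell, payload in particles:       # scatter each particle to its slot
        out[starts[cell]] = (cell, payload)
        starts[cell] += 1
    return out

particles = [(3, "p0"), (1, "p1"), (3, "p2"), (0, "p3"), (2, "p4")]
print(counting_sort_by_cell(particles, num_cells=4))
```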

  11. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction

  12. Parallel 3-D method of characteristics in MPACT

    International Nuclear Information System (INIS)

    Kochunas, B.; Downar, T. J.; Liu, Z.

    2013-01-01

    A new parallel 3-D MOC kernel has been developed and implemented in MPACT which makes use of the modular ray tracing technique to reduce computational requirements and to facilitate parallel decomposition. The parallel model makes use of both distributed and shared memory parallelism which are implemented with the MPI and OpenMP standards, respectively. The kernel is capable of parallel decomposition of problems in space, angle, and by characteristic rays up to O(10^4) processors. Initial verification of the parallel 3-D MOC kernel was performed using the Takeda 3-D transport benchmark problems. The eigenvalues computed by MPACT are within the statistical uncertainty of the benchmark reference and agree well with the averages of other participants. The MPACT k-eff differs from the benchmark results for rodded and un-rodded cases by 11 and -40 pcm, respectively. The calculations were performed for various numbers of processors and parallel decompositions up to 15625 processors, all producing the same result at convergence. The parallel efficiency of the worst case was 60%, while very good efficiency (>95%) was observed for cases using 500 processors. The overall run time for the 500 processor case was 231 seconds and 19 seconds for the case with 15625 processors. Ongoing work is focused on developing theoretical performance models and the implementation of acceleration techniques to minimize the number of iterations to converge. (authors)

  13. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.
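
    For readers unfamiliar with the smoothers involved, the sketch below shows one point-relaxation (Jacobi) sweep for a 2-D Poisson problem, the kind of simple smoother the MSG algorithms rely on. It is a generic Python/NumPy illustration, not code from the paper; the grid size and right-hand side are arbitrary.

        # Minimal point-relaxation (Jacobi) sweep for -laplace(u) = f on a uniform grid;
        # every interior point updates independently, which is what makes this smoother
        # easy to map to massively parallel architectures.
        import numpy as np

        def jacobi_sweep(u, f, h):
            new = u.copy()
            new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                      u[1:-1, :-2] + u[1:-1, 2:] +
                                      h * h * f[1:-1, 1:-1])
            return new

        if __name__ == "__main__":
            n = 33
            u = np.zeros((n, n))
            f = np.ones((n, n))
            for _ in range(10):
                u = jacobi_sweep(u, f, 1.0 / (n - 1))
            print(u[n // 2, n // 2])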

  14. Individual vulnerability to insomnia, excessive sleepiness and shift work disorder amongst healthcare shift workers. A systematic review.

    Science.gov (United States)

    Booker, Lauren A; Magee, Michelle; Rajaratnam, Shantha M W; Sletten, Tracey L; Howard, Mark E

    2018-03-27

    Shift workers often experience reduced sleep quality, duration and/or excessive sleepiness due to the imposed conflict between work and their circadian system. About 20-30% of shift workers experience prominent insomnia symptoms and excessive daytime sleepiness consistent with the circadian rhythm sleep disorder known as shift work disorder. Individual factors may influence this vulnerability to shift work disorder or sleep-related impairment associated with shift work. This paper was registered with Prospero and was conducted using recommended standards for systematic reviews and meta-analyses. Published literature that measured sleep-related impairment associated with shift work including reduced sleep quality and duration and increased daytime sleepiness amongst healthcare shift workers and explored characteristics associated with individual variability were reviewed. Fifty-eight studies were included. Older age, morning-type, circadian flexibility, being married or having children, increased caffeine intake, higher scores on neuroticism and lower on hardiness were related to a higher risk of sleep-related impairment in response to shift work, whereas physical activity was a protective factor. The review highlights the diverse range of measurement tools used to evaluate the impact of shift work on sleep. Use of standardised and validated tools would enable cross-study comparisons. Longitudinal studies are required to establish causal relationships between individual factors and the development of shift work disorder. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Homogeneous bilateral block shifts

    Indian Academy of Sciences (India)

    Douglas class were classified in [3]; they are unilateral block shifts of arbitrary block size (i.e. dim H(n) can be anything). However, no examples of irreducible homogeneous bilateral block shifts of block size larger than 1 were known until now.

  16. Quantized beam shifts in graphene

    Energy Technology Data Exchange (ETDEWEB)

    de Melo Kort-Kamp, Wilton Junior [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sinitsyn, Nikolai [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dalvit, Diego Alejandro Roberto [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-08

    We predict the existence of quantized Imbert-Fedorov, Goos-Hänchen, and photonic spin Hall shifts for light beams impinging on a graphene-on-substrate system in an external magnetic field. In the quantum Hall regime the Imbert-Fedorov and photonic spin Hall shifts are quantized in integer multiples of the fine structure constant α, while the Goos-Hänchen ones in multiples of α². We investigate the influence on these shifts of magnetic field, temperature, and material dispersion and dissipation. An experimental demonstration of quantized beam shifts could be achieved at terahertz frequencies for moderate values of the magnetic field.

  17. A qualitative single case study of parallel processes

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard

    2007-01-01

    Parallel process in psychotherapy and supervision is a phenomenon manifest in relationships and interactions, that originates in one setting and is reflected in another. This article presents an explorative single case study of parallel processes based on qualitative analyses of two successive...... randomly chosen psychotherapy sessions with a schizophrenic patient and the supervision session given in between. The author's analysis is verified by an independent examiner's analysis. Parallel processes are identified and described. Reflections on the dynamics of parallel processes and supervisory...

  18. Parallel pic plasma simulation through particle decomposition techniques

    International Nuclear Information System (INIS)

    Briguglio, S.; Vlad, G.; Di Martino, B.; Naples, Univ. 'Federico II'

    1998-02-01

    Particle-in-cell (PIC) codes are among the major candidates to yield a satisfactory description of the detail of kinetic effects, such as the resonant wave-particle interaction, relevant in determining the transport mechanism in magnetically confined plasmas. A significant improvement of the simulation performance of such codes can be expected from parallelization, e.g., by distributing the particle population among several parallel processors. Parallelization of a hybrid magnetohydrodynamic-gyrokinetic code has been accomplished within the High Performance Fortran (HPF) framework, and tested on the IBM SP2 parallel system, using a 'particle decomposition' technique. The adopted technique requires a moderate effort in porting the code to parallel form and results in intrinsic load balancing and modest interprocessor communication. The performance tests obtained confirm the hypothesis of high effectiveness of the strategy, if targeted towards moderately parallel architectures. Optimal use of resources is also discussed with reference to a specific physics problem

  19. Fluid Shifts

    Science.gov (United States)

    Stenger, M. B.; Hargens, A. R.; Dulchavsky, S. A.; Arbeille, P.; Danielson, R. W.; Ebert, D. J.; Garcia, K. M.; Johnston, S. L.; Laurie, S. S.; Lee, S. M. C.

    2017-01-01

    Introduction. NASA's Human Research Program is focused on addressing health risks associated with long-duration missions on the International Space Station (ISS) and future exploration-class missions beyond low Earth orbit. Visual acuity changes observed after short-duration missions were largely transient, but now more than 50 percent of ISS astronauts have experienced more profound, chronic changes with objective structural findings such as optic disc edema, globe flattening and choroidal folds. These structural and functional changes are referred to as the visual impairment and intracranial pressure (VIIP) syndrome. Development of VIIP symptoms may be related to elevated intracranial pressure (ICP) secondary to spaceflight-induced cephalad fluid shifts, but this hypothesis has not been tested. The purpose of this study is to characterize fluid distribution and compartmentalization associated with long-duration spaceflight and to determine if a relation exists with vision changes and other elements of the VIIP syndrome. We also seek to determine whether the magnitude of fluid shifts during spaceflight, as well as any VIIP-related effects of those shifts, are predicted by the crewmember's pre-flight status and responses to acute hemodynamic manipulations, specifically posture changes and lower body negative pressure. Methods. We will examine a variety of physiologic variables in 10 long-duration ISS crewmembers using the test conditions and timeline presented in the figure below. Measures include: (1) fluid compartmentalization (total body water by D2O, extracellular fluid by NaBr, intracellular fluid by calculation, plasma volume by CO rebreathe, interstitial fluid by calculation); (2) forehead/eyelids, tibia, and calcaneus tissue thickness (by ultrasound); (3) vascular dimensions by ultrasound (jugular veins, cerebral and carotid arteries, vertebral arteries and veins, portal vein); (4) vascular dynamics by MRI (head/neck blood flow, cerebrospinal fluid

  20. Analysis of parallel computing performance of the code MCNP

    International Nuclear Information System (INIS)

    Wang Lei; Wang Kan; Yu Ganglin

    2006-01-01

    Parallel computing can reduce the running time of the code MCNP effectively. With the MPI message transmitting software, MCNP5 can achieve its parallel computing on a PC cluster with the Windows operating system. The parallel computing performance of MCNP is influenced by factors such as the type, the complexity level and the parameter configuration of the computing problem. This paper analyzes the parallel computing performance of MCNP with respect to these factors and gives measures to improve the MCNP parallel computing performance. (authors)
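
    One conventional way to reason about how the problem configuration limits parallel speedup is Amdahl's law; the snippet below uses hypothetical serial fractions and worker counts purely for illustration, not measurements from this study.

        # Back-of-the-envelope model (hypothetical numbers, not from the paper):
        # Amdahl's law relates the serial fraction of a run to the speedup available
        # from adding MPI worker processes.
        def amdahl_speedup(serial_fraction, workers):
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

        for s in (0.01, 0.05, 0.20):       # e.g. tally merging / I/O done serially
            for p in (4, 16, 64):
                print(f"serial={s:.2f}  workers={p:3d}  speedup={amdahl_speedup(s, p):5.1f}x")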

  1. The influence of internal current loop on transient response performance of I-V droop controlled paralleled DC-DC converters

    DEFF Research Database (Denmark)

    Wang, Haojie; Han, Minxiao; Guerrero, Josep M.

    2017-01-01

    The external droop control loop of I-V droop control is designed as a voltage loop with embedded virtual impedance, so the internal current loop plays a major role in the system bandwidth. Thus, in this paper, the influence of the internal current loop on the transient response performance of I-V droop controlled paralleled dc-dc converters is analyzed, which provides guidance for its industry application. The model which is used for dynamic analysis is built, and the root locus method is used based on the model to analyze the dynamic response of the system by shifting different control parameters...

  2. Isotope shifting capacity of rock

    International Nuclear Information System (INIS)

    Blattner, P.; Department of Scientific and Industrial Research, Lower Hutt

    1980-01-01

    Any oxygen isotope shifted rock volume exactly defines a past throughput of water. An expression is derived that relates the throughput of an open system to the isotope shift of reservoir rock and present-day output. The small isotope shift of Ngawha reservoir rock and the small, high delta oxygen-18 output are best accounted for by a magmatic water source

  3. Sleep and satisfaction in 8- and 12-h forward-rotating shift systems: Industrial employees prefer 12-h shifts.

    Science.gov (United States)

    Karhula, Kati; Härmä, Mikko; Ropponen, Annina; Hakola, Tarja; Sallinen, Mikael; Puttonen, Sampsa

    2016-01-01

    Twelve-hour shift systems have become more popular in industry. Survey data of shift length, shift rotation speed, self-rated sleep, satisfaction and perceived health were investigated for the associations among 599 predominantly male Finnish industrial employees. The studied forward-rotating shift systems were 12-h fast (12fast, DDNN------, n = 268), 8-h fast (8fast, MMEENN----, n = 161) and 8-h slow (8slow, MMMM-EEEE-NNNN, n = 170). Satisfaction with the shift system differed between the groups. Perceived negative effects on sleep and alertness were rare in the 12fast group (8%) compared with the 8fast (53%) and 8slow (66%) groups, as were perceived negative effects of the current shift system on general health (12fast 4%, 8fast 30%, 8slow 41%) and on work-life balance (12fast 8%, 8fast 52%, 8slow 63%). The effects of shift work were dependent on both shift length and shift rotation speed: employees in the 12-h rapidly forward-rotating shift system were most satisfied, perceived better work-life balance and slept better than the employees in the 8fast system and especially the employees in the 8-h slowly rotating system.

  4. Parallel knock-out schemes in networks

    NARCIS (Netherlands)

    Broersma, H.J.; Fomin, F.V.; Woeginger, G.J.

    2004-01-01

    We consider parallel knock-out schemes, a procedure on graphs introduced by Lampert and Slater in 1997 in which each vertex eliminates exactly one of its neighbors in each round. We are considering cases in which, after a finite number of rounds, where the minimum number is called the parallel

  5. Parallel and vector implementation of APROS simulator code

    International Nuclear Information System (INIS)

    Niemi, J.; Tommiska, J.

    1990-01-01

    In this paper the vector and parallel processing implementation of a general purpose simulator code is discussed. In this code the utilization of vector processing is straightforward. In addition to the loop level parallel processing, the functional decomposition and the domain decomposition have been considered. Results presented for a PWR-plant simulation illustrate the potential speed-up factors of the alternatives. It turns out that the loop level parallelism and the domain decomposition are the most promising alternatives for employing parallel processing. (author)

  6. Fast event recorder utilizing a CCD analog shift register

    International Nuclear Information System (INIS)

    Ducar, R.J.; McIntyre, P.M.

    1978-01-01

    A system of electronics has been developed to allow the capture and recording of relatively fast, low-amplitude analog events. The heart of the system is a dual 455-cell analog shift register charge-coupled device, Fairchild CCD321ADC-3. The CCD is operated in a dual clock mode. The input is sampled at a selectable clock rate of .25-20 MHz. The stored analog data is then clocked out at a slower rate, typically about .25 MHz. The time base expansion of the analog data allows for analog-to-digital conversion and memory storage using conventional medium-speed devices. The digital data is sequentially loaded into a static RAM and may then be block transferred to a computer. The analog electronics are housed in a single-width NIM module, and the RAM memory in a single-width CAMAC module. Each pair of modules provides six parallel channels. Cost is about $200.00 per channel. Applications are described for ionization imaging (TPC, IRC) and long-drift calorimetry in liquid argon
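
    The time-base expansion follows directly from the numbers quoted in the abstract; the short calculation below works out the captured window, the readout time, and the expansion factor for the fastest input clock.

        # Record length and time-base expansion implied by the figures in the abstract:
        # a 455-cell analog shift register sampled at 20 MHz and read out at 0.25 MHz.
        cells = 455
        f_sample = 20e6      # Hz, fastest quoted input clock
        f_readout = 0.25e6   # Hz, typical readout clock

        record_length = cells / f_sample   # seconds of signal captured
        readout_time = cells / f_readout   # seconds to clock the record out
        expansion = f_sample / f_readout   # time-base expansion factor

        print(f"captured window : {record_length * 1e6:.2f} us")
        print(f"readout time    : {readout_time * 1e3:.2f} ms")
        print(f"expansion factor: {expansion:.0f}x")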

  7. SWAMP+: multiple subsequence alignment using associative massive parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Steinfadt, Shannon Irene [Los Alamos National Laboratory; Baker, Johnnie W [KENT STATE UNIV.

    2010-10-18

    A new parallel algorithm SWAMP+ incorporates the Smith-Waterman sequence alignment on an associative parallel model known as ASC. It is a highly sensitive parallel approach that expands traditional pairwise sequence alignment. This is the first parallel algorithm to provide multiple non-overlapping, non-intersecting subsequence alignments with the accuracy of Smith-Waterman. The efficient algorithm provides multiple alignments similar to BLAST while creating a better workflow for the end users. The parallel portions of the code run in O(m+n) time using m processors. When m = n, the algorithmic analysis becomes O(n) with a coefficient of two, yielding a linear speedup. Implementation of the algorithm on the SIMD ClearSpeed CSX620 confirms this theoretical linear speedup with real timings.
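
    A compact way to see where the parallelism comes from is the anti-diagonal ("wavefront") structure of the Smith-Waterman recurrence: all cells on one anti-diagonal depend only on the previous two, so they can be scored simultaneously. The serial Python sketch below traverses the matrix in that order with a linear gap penalty; it illustrates the dependency structure only and is not the SWAMP+ algorithm or its associative (ASC) implementation.

        # Serial sketch of the wavefront order exploited by SIMD/associative
        # Smith-Waterman implementations; linear gap penalty, scores are examples.
        def smith_waterman_wavefront(a, b, match=2, mismatch=-1, gap=-1):
            m, n = len(a), len(b)
            H = [[0] * (n + 1) for _ in range(m + 1)]
            best = 0
            for d in range(2, m + n + 1):              # anti-diagonal index i + j = d
                lo, hi = max(1, d - n), min(m, d - 1)
                for i in range(lo, hi + 1):            # independent cells on one diagonal
                    j = d - i
                    diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    score = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                    H[i][j] = score
                    best = max(best, score)
            return best

        print(smith_waterman_wavefront("ACACACTA", "AGCACACA"))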

  8. A novel thromboxane A2 receptor N42S variant results in reduced surface expression and platelet dysfunction.

    Science.gov (United States)

    Nisar, Shaista P; Lordkipanidzé, Marie; Jones, Matthew L; Dawood, Ban; Murden, Sherina; Cunningham, Margaret R; Mumford, Andrew D; Wilde, Jonathan T; Watson, Steve P; Mundell, Stuart J; Lowe, Gillian C

    2014-05-05

    A small number of thromboxane receptor variants have been described in patients with a bleeding history that result in platelet dysfunction. We have identified a patient with a history of significant bleeding, who expresses a novel heterozygous thromboxane receptor variant that predicts an asparagine to serine substitution (N42S). This asparagine is conserved across all class A GPCRs, suggesting a vital role in receptor structure and function. We investigated the functional consequences of the heterozygous N42S TP receptor substitution by performing platelet function studies on platelet-rich plasma taken from the patient and healthy controls. We investigated the N42S mutation by expressing the wild-type (WT) and mutant receptor in human embryonic kidney (HEK) cells. Aggregation studies showed an ablation of arachidonic acid responses in the patient, whilst there was a rightward shift of the U46619 concentration-response curve (CRC). Thromboxane generation was unaffected. Calcium mobilisation studies in cell lines showed a rightward shift of the U46619 CRC in N42S-expressing cells compared to WT. Radioligand binding studies revealed a reduction in Bmax in platelets taken from the patient and in N42S-expressing cells, whilst cell studies confirmed poor surface expression. We have identified a novel thromboxane receptor variant, N42S, which results in platelet dysfunction due to reduced surface expression. It is associated with a significant bleeding history in the patient in whom it was identified. This is the first description of a naturally occurring variant that results in the substitution of this highly conserved residue and confirms the importance of this residue for correct GPCR function.
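
    To make the pharmacology concrete, a rightward shift of a concentration-response curve corresponds to a larger EC50 in a Hill model, i.e. more agonist is needed to reach the same response. The snippet below uses hypothetical EC50 values purely for illustration; they are not the patient data from this study.

        # Illustration only (hypothetical EC50 values): a rightward shift of a
        # concentration-response curve shows up as a larger EC50 in a Hill model.
        def hill(conc, ec50, emax=100.0, n=1.0):
            return emax * conc**n / (ec50**n + conc**n)

        concentrations = [1e-9, 1e-8, 1e-7, 1e-6, 1e-5]   # mol/L
        for c in concentrations:
            wt = hill(c, ec50=5e-8)       # hypothetical reference curve
            variant = hill(c, ec50=5e-7)  # hypothetical curve shifted rightward 10-fold
            print(f"{c:.0e} M  reference {wt:5.1f}%   shifted {variant:5.1f}%")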

  9. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    Science.gov (United States)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan was to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high order accuracy in numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm

  10. Tempo of Diversification of Global Amphibians: One-Constant Rate, One-Continuous Shift or Multiple-Discrete Shifts?

    Directory of Open Access Journals (Sweden)

    Youhua Chen

    2014-01-01

    In this brief report, alternative time-varying diversification rate models were fitted onto the phylogeny of global amphibians by considering one-constant-rate (OCR), one-continuous-shift (OCS) and multiple-discrete-shifts (MDS) situations. The OCS diversification model was rejected by the γ statistic (γ = -5.556, p < 0.001), implying the existence of shifting diversification rates for the global amphibian phylogeny. Through model selection, the MDS diversification model outperformed the OCS and OCR models using the "laser" package in the R environment. Moreover, MDS models, implemented using another R package, "MEDUSA", indicated that there were sixteen shifts over the internal nodes of the amphibian phylogeny. In conclusion, both OCS and MDS models are recommended for comparison so as to better quantify rate-shifting trends of species diversification. MDS diversification models, in which any arbitrary number of shifts can be modelled using the "MEDUSA" package, should be preferred for large phylogenies.

  11. Phenological shifts conserve thermal niches in North American birds and reshape expectations for climate-driven range shifts.

    Science.gov (United States)

    Socolar, Jacob B; Epanchin, Peter N; Beissinger, Steven R; Tingley, Morgan W

    2017-12-05

    Species respond to climate change in two dominant ways: range shifts in latitude or elevation and phenological shifts of life-history events. Range shifts are widely viewed as the principal mechanism for thermal niche tracking, and phenological shifts in birds and other consumers are widely understood as the principal mechanism for tracking temporal peaks in biotic resources. However, phenological and range shifts each present simultaneous opportunities for temperature and resource tracking, although the possible role for phenological shifts in thermal niche tracking has been widely overlooked. Using a canonical dataset of Californian bird surveys and a detectability-based approach for quantifying phenological signal, we show that Californian bird communities advanced their breeding phenology by 5-12 d over the last century. This phenological shift might track shifting resource peaks, but it also reduces average temperatures during nesting by over 1 °C, approximately the same magnitude that average temperatures have warmed over the same period. We further show that early-summer temperature anomalies are correlated with nest success in a continental-scale database of bird nests, suggesting avian thermal niches might be broadly limited by temperatures during nesting. These findings outline an adaptation surface where geographic range and breeding phenology respond jointly to constraints imposed by temperature and resource phenology. By stabilizing temperatures during nesting, phenological shifts might mitigate the need for range shifts. Global change ecology will benefit from further exploring phenological adjustment as a potential mechanism for thermal niche tracking and vice versa.

  12. Parallel-Processing Test Bed For Simulation Software

    Science.gov (United States)

    Blech, Richard; Cole, Gary; Townsend, Scott

    1996-01-01

    Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).

  13. A parallel nearly implicit time-stepping scheme

    OpenAIRE

    Botchev, Mike A.; van der Vorst, Henk A.

    2001-01-01

    Across-the-space parallelism still remains the most mature, convenient and natural way to parallelize large scale problems. One of the major problems here is that implicit time stepping is often difficult to parallelize due to the structure of the system. Approximate implicit schemes have been suggested to circumvent the problem. These schemes have attractive stability properties and they are also very well parallelizable. The purpose of this article is to give an overall assessment of the pa...

  14. [Three-dimensional parallel collagen scaffold promotes tendon extracellular matrix formation].

    Science.gov (United States)

    Zheng, Zefeng; Shen, Weiliang; Le, Huihui; Dai, Xuesong; Ouyang, Hongwei; Chen, Weishan

    2016-03-01

    To investigate the effects of a three-dimensional parallel collagen scaffold on the cell shape, arrangement and extracellular matrix formation of tendon stem cells. The parallel collagen scaffold was fabricated by a unidirectional freezing technique, while the random collagen scaffold was fabricated by a freeze-drying technique. The effects of the two scaffolds on cell shape and extracellular matrix formation were investigated in vitro by seeding tendon stem/progenitor cells and in vivo by ectopic implantation. Parallel and random collagen scaffolds were produced successfully. The parallel collagen scaffold was more akin to tendon than the random collagen scaffold. Tendon stem/progenitor cells were spindle-shaped and uniformly orientated in the parallel collagen scaffold, while cells on the random collagen scaffold had disordered orientation. Two weeks after ectopic implantation, cells had nearly the same orientation as the collagen substance. In the parallel collagen scaffold, cells had parallel arrangement, and more spindly cells were observed. By contrast, cells in the random collagen scaffold were disordered. The parallel collagen scaffold can induce cells to adopt a spindly shape and parallel arrangement, and promote parallel extracellular matrix formation, while the random collagen scaffold induces cells in random arrangement. The results indicate that the parallel collagen scaffold is an ideal structure to promote tendon repair.

  15. Processing data communications events by awakening threads in parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2016-03-15

    Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
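
    The wait/awaken pattern described here can be sketched with an ordinary condition variable; the Python below is a minimal illustration of that pattern (plain threading with invented event names), not the PAMI API or the patented implementation.

        # Minimal wait/notify sketch: the "advance" thread sleeps when no events are
        # pending for its context and is awakened when another thread posts an event.
        import threading, queue

        events = queue.Queue()
        cond = threading.Condition()

        def advance():
            while True:
                with cond:
                    while events.empty():        # no actionable events: enter wait state
                        cond.wait()
                item = events.get()
                if item is None:                 # shutdown sentinel
                    break
                print("processing event:", item)

        def post(event):
            with cond:
                events.put(event)
                cond.notify()                    # awaken the waiting advance thread

        t = threading.Thread(target=advance)
        t.start()
        post("send-complete")
        post("recv-complete")
        post(None)
        t.join()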

  16. Improvement of Parallel Algorithm for MATRA Code

    International Nuclear Information System (INIS)

    Kim, Seong-Jin; Seo, Kyong-Won; Kwon, Hyouk; Hwang, Dae-Hyun

    2014-01-01

    A feasibility study to parallelize the MATRA code was conducted in KAERI early this year. As a result, a parallel algorithm for the MATRA code has been developed to decrease the considerable computing time required to solve a big-size problem, such as a whole-core pin-by-pin problem of a general PWR reactor, and to improve the overall performance of multi-physics coupling calculations. It was shown that the performance of the MATRA code was greatly improved by implementing the parallel algorithm using MPI communication. For problems of a 1/8 core and the whole core of the SMART reactor, a speedup of about 10 was evaluated when the number of processors used was 25. However, it was also shown that the performance deteriorated as the axial node number increased. In this paper, the procedure of communication between processors is optimized to improve the previous parallel algorithm. To address the performance deterioration of the parallelized MATRA code, a new communication algorithm between processors is presented. It was shown that the speedup was improved and stable regardless of the axial node number

  17. Iteration schemes for parallelizing models of superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)

    1996-12-31

    The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-Tc superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.

  18. [Burden and health effects of shift work].

    Science.gov (United States)

    Heitmann, Jörg

    2010-10-01

    In Germany approx. 15% of all employees have irregular or flexible working hours. Disturbed sleep and/or hypersomnia are direct consequences of shift work and are therefore described as shift work disorder. Beyond this, shift work can also be associated with specific pathological disorders. There are individual differences in tolerance to shift work. Optimization of both shift schedules and sleep at "non-physiological" times of the day are measures to counteract the negative effects of shift work. There is still not enough evidence to recommend drugs for routine use in shift workers. © Georg Thieme Verlag Stuttgart · New York.

  19. A Parallel Priority Queue with Constant Time Operations

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Träff, Jesper Larsson; Zaroliagis, Christos D.

    1998-01-01

    We present a parallel priority queue that supports the following operations in constant time: parallel insertion of a sequence of elements ordered according to key, parallel decrease key for a sequence of elements ordered according to key, deletion of the minimum key element, and deletion of an arbitrary...... application is a parallel implementation of Dijkstra's algorithm for the single-source shortest path problem, which runs in O(n) time and O(m log n) work on a CREW PRAM on graphs with n vertices and m edges. This is a logarithmic factor improvement in the running time compared with previous approaches....

  20. An Automatic Instruction-Level Parallelization of Machine Code

    Directory of Open Access Journals (Sweden)

    MARINKOVIC, V.

    2018-02-01

    Prevailing multicores and novel manycores have made a great challenge of the modern day: parallelization of embedded software that is still written as sequential. In this paper, automatic code parallelization is considered, focusing on developing a parallelization tool at the binary level as well as on the validation of this approach. A novel instruction-level parallelization algorithm for assembly code is developed, which uses the register names after SSA to find independent blocks of code and then schedules the independent blocks using METIS to achieve good load balance. The sequential consistency is verified and the validation is done by measuring the program execution time on the target architecture. Great speedup, taken as the performance measure in the validation process, and optimal load balancing are achieved for multicore RISC processors with 2 to 16 cores (e.g. MIPS, MicroBlaze, etc.). In particular, for 16 cores, the average speedup is 7.92x, while in some cases it reaches 14x. The approach to automatic parallelization provided by this paper is useful to researchers and developers in the area of parallelization as the basis for further optimizations, as the back-end of a compiler, or as the code parallelization tool for an embedded system.

  1. Step by step parallel programming method for molecular dynamics code

    International Nuclear Information System (INIS)

    Orii, Shigeo; Ohta, Toshio

    1996-07-01

    Parallel programming of a numerical simulation program for molecular dynamics is carried out with a step-by-step programming technique using the two-phase method. As a result, within a certain range of computing parameters, parallel performance is obtained by using do-loop-level parallel programming, which decomposes the calculation according to the indices of do-loops onto each processor, on the vector-parallel computer VPP500 and the scalar-parallel computer Paragon. It is also found that VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts which cannot be reduced by do-loop-level parallel programming can be reduced to a negligible level by vectorization. After that, the time-consuming parts of the program are concentrated in fewer parts that can be accelerated by do-loop-level parallel programming. This report shows the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on VPP500 and Paragon. (author)

  2. A possibility of parallel and anti-parallel diffraction measurements on neutron diffractometer employing bent perfect crystal monochromator at the monochromatic focusing condition

    Science.gov (United States)

    Choi, Yong Nam; Kim, Shin Ae; Kim, Sung Kyu; Kim, Sung Baek; Lee, Chang-Hee; Mikula, Pavel

    2004-07-01

    In a conventional diffractometer having a single monochromator, only one position, the parallel position, is used for the diffraction experiment (i.e. detection) because the resolution property of the other one, the anti-parallel position, is very poor. However, a bent perfect crystal (BPC) monochromator at the monochromatic focusing condition can provide a quite flat and equal resolution property at both parallel and anti-parallel positions and thus one has a chance to use both sides for the diffraction experiment. From the data of the FWHM and the Δd/d measured on three diffraction geometries (symmetric, asymmetric compression and asymmetric expansion), we can conclude that simultaneous diffraction measurement in both parallel and anti-parallel positions can be achieved.

  3. Shifted Independent Component Analysis

    DEFF Research Database (Denmark)

    Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai

    2007-01-01

    Delayed mixing is a problem of theoretical interest and practical importance, e.g., in speech processing, bio-medical signal analysis and financial data modelling. Most previous analyses have been based on models with integer shifts, i.e., shifts by a number of samples, and have often been carried...

  4. The impact of shift work on the psychological and physical health of nurses in a general hospital: a comparison between rotating night shifts and day shifts.

    Science.gov (United States)

    Ferri, Paola; Guadi, Matteo; Marcheselli, Luigi; Balduzzi, Sara; Magnani, Daniela; Di Lorenzo, Rosaria

    2016-01-01

    Shift work is considered necessary to ensure continuity of care in hospitals and residential facilities. In particular, the night shift is one of the most frequent reasons for the disruption of circadian rhythms, causing significant alterations of sleep and biological functions that can affect physical and psychological well-being and negatively impact work performance. The aim of this study was to highlight whether shift work with nights, as compared with day work only, is associated with risk factors predisposing nurses to poorer health conditions and lower job satisfaction. This cross-sectional study was conducted from June 1, 2015 to July 31, 2015 in 17 wards of a general hospital and a residential facility of a northern Italian city. This study involved 213 nurses working in rotating night shifts and 65 in day shifts. The instrument used for data collection was the "Standard Shift Work Index," validated in Italian. Data were statistically analyzed. The response rate was 86%. The nurses engaged in rotating night shifts were statistically significantly younger, more frequently single, and more often held Bachelor's and Master's degrees in nursing. They reported the lowest mean scores on the items of job satisfaction and quality and quantity of sleep, with statistically significantly more frequent chronic fatigue and psychological and cardiovascular symptoms in comparison with the day-shift workers. Our results suggest that nurses with a rotating night schedule need special attention due to the higher risk for both job dissatisfaction and undesirable health effects.

  5. Broadcasting collective operation contributions throughout a parallel computer

    Science.gov (United States)

    Faraj, Ahmad [Rochester, MN

    2012-02-21

    Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
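
    A toy simulation of the two-phase pattern is sketched below using plain Python dictionaries rather than real intra-node and inter-node transports; the node and processor counts are invented. Phase 1 shares each processor's contribution within its own node, and phase 2 forwards it to the other nodes one processor at a time.

        # Toy simulation (no real MPI) of the two-phase broadcast pattern described above.
        nodes = 3
        procs_per_node = 4
        contrib = {(n, p): f"c{n}.{p}" for n in range(nodes) for p in range(procs_per_node)}
        received = {key: {key: contrib[key]} for key in contrib}   # each starts with its own

        # Phase 1: intra-node exchange of contributions.
        for n in range(nodes):
            for p in range(procs_per_node):
                for q in range(procs_per_node):
                    if q != p:
                        received[(n, q)][(n, p)] = contrib[(n, p)]

        # Phase 2: inter-node transfer, one processor at a time on the designated link.
        for p in range(procs_per_node):            # serial processor transmission sequence
            for n in range(nodes):
                for other in range(nodes):
                    if other != n:
                        for q in range(procs_per_node):
                            received[(other, q)][(n, p)] = contrib[(n, p)]

        assert all(len(v) == nodes * procs_per_node for v in received.values())
        print("every processor holds all", nodes * procs_per_node, "contributions")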

  6. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e. g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  7. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; NA NA NA Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96)

  8. Algorithms for computational fluid dynamics on parallel processors

    International Nuclear Information System (INIS)

    Van de Velde, E.F.

    1986-01-01

    A study of parallel algorithms for the numerical solution of partial differential equations arising in computational fluid dynamics is presented. The actual implementation on parallel processors of shared and nonshared memory design is discussed. The performance of these algorithms is analyzed in terms of machine efficiency, communication time, bottlenecks and software development costs. For elliptic equations, a parallel preconditioned conjugate gradient method is described, which has been used to solve pressure equations discretized with high order finite elements on irregular grids. A parallel full multigrid method and a parallel fast Poisson solver are also presented. Hyperbolic conservation laws were discretized with parallel versions of finite difference methods like the Lax-Wendroff scheme and with the Random Choice method. Techniques are developed for comparing the behavior of an algorithm on different architectures as a function of problem size and local computational effort. Effective use of these advanced architecture machines requires the use of machine dependent programming. It is shown that the portability problems can be minimized by introducing high level operations on vectors and matrices structured into program libraries
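
    For reference, the sketch below is a compact serial preconditioned conjugate gradient with a Jacobi (diagonal) preconditioner applied to a small 1-D Poisson matrix; in the parallel setting it is the matrix-vector products and dot products that get distributed. This is a generic illustration, not the solver developed in this work.

        # Preconditioned conjugate gradient with a Jacobi (diagonal) preconditioner.
        import numpy as np

        def pcg(A, b, tol=1e-10, max_iter=200):
            M_inv = 1.0 / np.diag(A)              # Jacobi preconditioner
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Small symmetric positive definite test problem (1-D Poisson matrix).
        n = 50
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x = pcg(A, b)
        print("residual:", np.linalg.norm(b - A @ x))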

  9. Distributed Parallel Architecture for "Big Data"

    Directory of Open Access Journals (Sweden)

    Catalin BOJA

    2012-01-01

    This paper is an extension to the "Distributed Parallel Architecture for Storing and Processing Large Datasets" paper presented at the WSEAS SEPADS’12 conference in Cambridge. In its original version the paper went over the benefits of using a distributed parallel architecture to store and process large datasets. This paper analyzes the problem of storing, processing and retrieving meaningful insight from petabytes of data. It provides a survey on current distributed and parallel data processing technologies and, based on them, will propose an architecture that can be used to solve the analyzed problem. In this version there is more emphasis put on distributed file systems and the ETL processes involved in a distributed environment.

  10. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H, E-mail: tpeterka@mcs.anl.go [Sandia National Laboratories, California, Livermore, CA 94551 (United States)

    2009-07-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  11. Parallel optoelectronic trinary signed-digit division

    Science.gov (United States)

    Alam, Mohammad S.

    1999-03-01

    The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of any arbitrary length operands in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator based architecture is suggested for implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of space bandwidth product of the spatial light modulators used in the optoelectronic implementation.

  12. Parallel visualization on leadership computing resources

    International Nuclear Information System (INIS)

    Peterka, T; Ross, R B; Shen, H-W; Ma, K-L; Kendall, W; Yu, H

    2009-01-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  13. Language constructs for modular parallel programs

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.

    1996-03-01

    We describe programming language constructs that facilitate the application of modular design techniques in parallel programming. These constructs allow us to isolate resource management and processor scheduling decisions from the specification of individual modules, which can themselves encapsulate design decisions concerned with concurrency, communication, process mapping, and data distribution. This approach permits development of libraries of reusable parallel program components and the reuse of these components in different contexts. In particular, alternative mapping strategies can be explored without modifying other aspects of program logic. We describe how these constructs are incorporated in two practical parallel programming languages, PCN and Fortran M. Compilers have been developed for both languages, allowing experimentation in substantial applications.

  14. Parallel debt in the Serbian finance law

    Directory of Open Access Journals (Sweden)

    Kuzman Miloš

    2014-01-01

    The purpose of this paper is to present the mechanism of parallel debt in Serbian financial law. While considering whether the mechanism of parallel debt exists under Serbian law, the Anglo-Saxon mechanism of trust is presented. Hence it is explained why the mechanism of trust is not allowed under Serbian law. Further on, the mechanism of parallel debt is introduced, as well as a debate on the permissibility of its cause in Serbian law. Comparative legal arguments about this issue are also presented in this paper. In conclusion, the author suggests that, on the basis of the conclusions drawn in this paper, the parallel debt mechanism is to be declared admissible if it is ever taken into consideration by the Serbian courts.

  15. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
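
    The two assignment steps can be sketched in a few lines; the Python below mimics them on a 1-D grid with invented portion boundaries and object extents, with comments marking the loops that each processor would own in the parallel version.

        # Sketch of the two-step grid-population scheme on a 1-D grid (plain Python;
        # the loops marked "parallel" are the ones each processor would own).
        n_procs = 4
        grid_min, grid_max = 0.0, 100.0
        portion_width = (grid_max - grid_min) / n_procs
        objects = [(3.0, 9.0), (22.0, 31.0), (48.0, 52.0), (75.0, 99.0)]   # (lo, hi) extents

        def portions_bounding(obj):
            lo, hi = obj
            first = int((lo - grid_min) // portion_width)
            last = int((hi - grid_min) // portion_width)
            return range(first, min(last, n_procs - 1) + 1)

        # Step 1 ("parallel" over the object sets): determine which grid portions
        # at least partially bound each object.
        membership = {p: [] for p in range(n_procs)}
        for obj in objects:
            for p in portions_bounding(obj):
                membership[p].append(obj)

        # Step 2 ("parallel" over the grid portions): populate each portion with the
        # objects found for it in step 1.
        for p in range(n_procs):
            print(f"portion {p}: {membership[p]}")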

  16. A high-speed linear algebra library with automatic parallelism

    Science.gov (United States)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  17. .NET 4.5 parallel extensions

    CERN Document Server

    Freeman, Bryan

    2013-01-01

    This book contains practical recipes on everything you will need to create task-based parallel programs using C#, .NET 4.5, and Visual Studio. The book is packed with illustrated code examples to create scalable programs.This book is intended to help experienced C# developers write applications that leverage the power of modern multicore processors. It provides the necessary knowledge for an experienced C# developer to work with .NET parallelism APIs. Previous experience of writing multithreaded applications is not necessary.

  18. Fringe Capacitance of a Parallel-Plate Capacitor.

    Science.gov (United States)

    Hale, D. P.

    1978-01-01

    Describes an experiment designed to measure the forces between charged parallel plates, and determines the relationship among the effective electrode area, the measured capacitance values, and the electrode spacing of a parallel plate capacitor. (GA)
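
    For orientation, the ideal fringe-free relation is C = ε0·A/d; the short calculation below (with an assumed 10 cm square plate, not a value from the article) shows how the ideal capacitance varies with spacing, the baseline against which a measured effective area or fringe contribution would be compared.

        # Ideal (fringe-free) parallel-plate relation C = eps0 * A / d.
        # Plate dimensions here are hypothetical, chosen only for illustration.
        eps0 = 8.854e-12          # F/m
        side = 0.10               # m, square plate edge (assumed)
        area = side * side        # m^2
        for d_mm in (0.5, 1.0, 2.0, 5.0):
            d = d_mm * 1e-3
            c_ideal = eps0 * area / d
            print(f"d = {d_mm:4.1f} mm  ->  C_ideal = {c_ideal * 1e12:6.1f} pF")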

  19. Phase-Shift Dynamics of Sea Urchin Overgrazing on Nutrified Reefs.

    Directory of Open Access Journals (Sweden)

    Nina Kriegisch

    Shifts from productive kelp beds to impoverished sea urchin barrens occur globally and represent a wholesale change to the ecology of sub-tidal temperate reefs. Although the theory of shifts between alternative stable states is well advanced, there are few field studies detailing the dynamics of these kinds of transitions. In this study, sea urchin herbivory (a 'top-down' driver of ecosystems) was manipulated over 12 months to estimate (1) the sea urchin density at which kelp beds collapse to sea urchin barrens, and (2) the minimum sea urchin density required to maintain urchin barrens on experimental reefs in the urbanised Port Phillip Bay, Australia. In parallel, the role of one of the 'bottom-up' drivers of ecosystem structure was examined by (3) manipulating local nutrient levels and thus attempting to alter primary production on the experimental reefs. It was found that densities of 8 or more urchins m-2 (≥ 427 g m-2 biomass) lead to complete overgrazing of kelp beds while kelp bed recovery occurred when densities were reduced to ≤ 4 urchins m-2 (≤ 213 g m-2 biomass). This experiment provided further insight into the dynamics of transition between urchin barrens and kelp beds by exploring possible tipping-points which in this system can be found between 4 and 8 urchins m-2 (213 and 427 g m-2 respectively). Local enhancement of nutrient loading did not change the urchin density required for overgrazing or kelp bed recovery, as algal growth was not affected by nutrient enhancement.

  20. Phase-Shift Dynamics of Sea Urchin Overgrazing on Nutrified Reefs.

    Science.gov (United States)

    Kriegisch, Nina; Reeves, Simon; Johnson, Craig R; Ling, Scott D

    2016-01-01

    Shifts from productive kelp beds to impoverished sea urchin barrens occur globally and represent a wholesale change to the ecology of sub-tidal temperate reefs. Although the theory of shifts between alternative stable states is well advanced, there are few field studies detailing the dynamics of these kinds of transitions. In this study, sea urchin herbivory (a 'top-down' driver of ecosystems) was manipulated over 12 months to estimate (1) the sea urchin density at which kelp beds collapse to sea urchin barrens, and (2) the minimum sea urchin density required to maintain urchin barrens on experimental reefs in the urbanised Port Phillip Bay, Australia. In parallel, the role of one of the 'bottom-up' drivers of ecosystem structure was examined by (3) manipulating local nutrient levels and thus attempting to alter primary production on the experimental reefs. It was found that densities of 8 or more urchins m-2 (≥ 427 g m-2 biomass) lead to complete overgrazing of kelp beds while kelp bed recovery occurred when densities were reduced to ≤ 4 urchins m-2 (≤ 213 g m-2 biomass). This experiment provided further insight into the dynamics of transition between urchin barrens and kelp beds by exploring possible tipping-points which in this system can be found between 4 and 8 urchins m-2 (213 and 427 g m-2 respectively). Local enhancement of nutrient loading did not change the urchin density required for overgrazing or kelp bed recovery, as algal growth was not affected by nutrient enhancement.

  1. Parallel processing of Monte Carlo code MCNP for particle transport problem

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Kawasaki, Takuji

    1996-06-01

    It is possible to vectorize or parallelize Monte Carlo (MC) codes for photon and neutron transport problems by making use of the independence of the calculation for each particle. The applicability of an existing MC code to parallel processing is discussed. As for parallel computers, both a vector-parallel processor and a scalar-parallel processor were used in the performance evaluation. We have carried out (i) vector-parallel processing of the MCNP code on the Monte Carlo machine Monte-4 with four vector processors, and (ii) parallel processing on the Paragon XP/S with 256 processors. In this report we describe the methodology and results of parallel processing on these two types of parallel or distributed-memory computers. In addition, we evaluate the parallel programming environments of the parallel computers used in the present work, as part of the work of developing the STA (Seamless Thinking Aid) Basic Software. (author)
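
    The key observation above—each particle history is independent—is what makes Monte Carlo transport embarrassingly parallel. A minimal sketch of that idea (a toy absorbing-slab model, not MCNP code; all names and parameters here are illustrative):

      import random
      from multiprocessing import Pool

      def count_transmitted(args):
          """Toy 1-D slab problem: count particles whose first-collision
          distance exceeds the slab thickness (purely absorbing medium)."""
          n_particles, thickness, seed = args
          rng = random.Random(seed)
          return sum(rng.expovariate(1.0) > thickness for _ in range(n_particles))

      if __name__ == "__main__":
          total, thickness, workers = 1_000_000, 2.0, 4
          chunks = [(total // workers, thickness, seed) for seed in range(workers)]
          with Pool(workers) as pool:                       # independent histories per worker
              crossed = sum(pool.map(count_transmitted, chunks))
          print("transmission estimate:", crossed / total)  # ~ exp(-2) ≈ 0.135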

  2. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad

    2017-01-01

    This report reflects my work on the parallelization of TMVA machine learning algorithms, integrated into the ROOT Data Analysis Framework, during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing was applied, the parallelization techniques, and the resulting changes in execution time with the number of workers.

  3. Parallel community climate model: Description and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others]

    1996-07-15

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.
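
    The patch-decomposition strategy described above can be pictured with a toy latitude-longitude grid split into rectangular geographic patches, one patch set per processor (a sketch only; the grid size and processor counts are made up and this is not PCCM2 code):

      import numpy as np

      def patch_owner(nlat, nlon, plat, plon):
          """Assign each (lat, lon) grid point to a processor rank by cutting
          the grid into plat x plon rectangular geographic patches."""
          owner = np.empty((nlat, nlon), dtype=int)
          for i in range(nlat):
              for j in range(nlon):
                  pi = i * plat // nlat            # patch row index
                  pj = j * plon // nlon            # patch column index
                  owner[i, j] = pi * plon + pj     # rank that owns this point
          return owner

      ranks = patch_owner(64, 128, 4, 8)           # 64 x 128 grid on 32 processors
      print(np.unique(ranks).size, "ranks,", (ranks == 0).sum(), "points on rank 0")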

  4. Distance-two interpolation for parallel algebraic multigrid

    International Nuclear Information System (INIS)

    Sterck, H de; Falgout, R D; Nolting, J W; Yang, U M

    2007-01-01

    In this paper we study the use of long distance interpolation methods with the low complexity coarsening algorithm PMIS. AMG performance and scalability are compared for classical as well as long distance interpolation methods on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and, in combination with complexity-reducing methods such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers.

  5. A Coupling Tool for Parallel Molecular Dynamics-Continuum Simulations

    KAUST Repository

    Neumann, Philipp

    2012-06-01

    We present a tool for coupling Molecular Dynamics and continuum solvers. It is written in C++ and is meant to support developers of hybrid molecular-continuum simulations in terms of both the realisation of the respective coupling algorithm and the parallel execution of the hybrid simulation. We describe the implementational concept of the tool and its parallel extensions. We particularly focus on the parallel execution of particle insertions into dense molecular systems and propose a respective parallel algorithm. Our implementations are validated for serial and parallel setups in two and three dimensions. © 2012 IEEE.

  6. Parallel processing of genomics data

    Science.gov (United States)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, so the analysis of this flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face those issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze the data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the parallel preprocessing and statistical analysis of genomics data, able to cope with high-dimensional data and to deliver good response times. The proposed system is able to find statistically significant biological markers that discriminate classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
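
    Because per-marker statistics are independent, the analysis step lends itself to data-parallel execution. A generic sketch of that pattern (synthetic data and an ordinary two-sample t-test; this is not the authors' pipeline):

      import numpy as np
      from multiprocessing import Pool
      from scipy import stats

      def marker_pvalue(args):
          """Welch t-test for one marker between two drug-response classes."""
          values, labels = args
          return stats.ttest_ind(values[labels == 0], values[labels == 1],
                                 equal_var=False).pvalue

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          X = rng.normal(size=(100, 5000))          # 100 patients x 5000 markers
          y = rng.integers(0, 2, size=100)          # response class labels
          with Pool() as pool:                      # one task per marker
              pvals = pool.map(marker_pvalue, [(X[:, j], y) for j in range(X.shape[1])])
          print("markers with p < 1e-4:", sum(p < 1e-4 for p in pvals))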

  7. Parallel science and engineering applications the Charm++ approach

    CERN Document Server

    Kale, Laxmikant V

    2016-01-01

    Developed in the context of science and engineering applications, with each abstraction motivated by and further honed by specific application needs, Charm++ is a production-quality system that runs on almost all parallel computers available. Parallel Science and Engineering Applications: The Charm++ Approach surveys a diverse and scalable collection of science and engineering applications, most of which are used regularly on supercomputers by scientists to further their research. After a brief introduction to Charm++, the book presents several parallel CSE codes written in the Charm++ model, along with their underlying scientific and numerical formulations, explaining their parallelization strategies and parallel performance. These chapters demonstrate the versatility of Charm++ and its utility for a wide variety of applications, including molecular dynamics, cosmology, quantum chemistry, fracture simulations, agent-based simulations, and weather modeling. The book is intended for a wide audience of people i...

  8. Parallel processing of two-dimensional Sn transport calculations

    International Nuclear Information System (INIS)

    Uematsu, M.

    1997-01-01

    A parallel processing method for the two-dimensional Sn transport code DOT3.5 has been developed to achieve a drastic reduction in computation time. In the proposed method, parallelization is achieved with angular domain decomposition and/or space domain decomposition. The calculational speed of parallel processing by angular domain decomposition is largely influenced by frequent communications between processing elements. To assess parallelization efficiency, sample problems with up to 32 x 32 spatial meshes were solved with a Sun workstation using the PVM message-passing library. As a result, parallel calculation using 16 processing elements, for example, was found to be nine times as fast as that with one processing element. As for parallel processing by geometry segmentation, the influence of processing element communications on computation time is small; however, discontinuity at the segment boundary degrades convergence speed. To accelerate the convergence, an alternate sweep of angular flux in conjunction with space domain decomposition and a two-step rescaling method consisting of segmentwise rescaling and ordinary pointwise rescaling have been developed. By applying the developed method, the number of iterations needed to obtain a converged flux solution was reduced by a factor of 2. As a result, parallel calculation using 16 processing elements was found to be 5.98 times as fast as the original DOT3.5 calculation.
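
    Angular domain decomposition is easy to see in a toy setting: each discrete ordinate can be swept independently and the partial scalar fluxes summed afterwards. A highly simplified 1-D, purely absorbing, fixed-source sketch (step differencing, equal quadrature weights; not DOT3.5 code):

      import numpy as np
      from multiprocessing import Pool

      def sweep_angles(args):
          """Sweep a subset of ordinates over a 1-D mesh (unit source, vacuum
          boundaries) and return their contribution to the scalar flux."""
          mus, sigma_t, dx, ncells = args
          phi = np.zeros(ncells)
          for mu in mus:
              psi = 0.0                                     # incoming vacuum flux
              cells = range(ncells) if mu > 0 else range(ncells - 1, -1, -1)
              for i in cells:                               # step-difference update
                  psi = (1.0 + psi * abs(mu) / dx) / (sigma_t + abs(mu) / dx)
                  phi[i] += 0.5 * psi                       # equal weights (toy quadrature)
          return phi

      if __name__ == "__main__":
          mus = np.array([-0.861, -0.340, 0.340, 0.861])    # S4-like ordinates
          chunks = [(mus[:2], 1.0, 0.1, 50), (mus[2:], 1.0, 0.1, 50)]
          with Pool(2) as pool:                             # one angular block per process
              phi = sum(pool.map(sweep_angles, chunks))
          print("scalar flux in first three cells:", phi[:3])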

  9. Parallel Algorithms for Switching Edges in Heterogeneous Graphs.

    Science.gov (United States)

    Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-06-01

    An edge switch is an operation on a graph (or network) in which two edges are selected at random and one end vertex of each is swapped with the other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and in studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors.
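
    The serial edge-switch move being parallelized can be stated directly; a sketch of one switch under the simplicity constraint (self-loops and parallel edges rejected), which is the kernel the distributed algorithm has to coordinate (illustrative code, not the authors' implementation):

      import random

      def edge_switch(edges, edge_set, rng):
          """Pick two edges and swap one endpoint of each, preserving all vertex
          degrees; reject moves that create self-loops or parallel edges."""
          i, j = rng.sample(range(len(edges)), 2)
          (u, v), (x, y) = edges[i], edges[j]
          new1, new2 = (u, y), (x, v)                  # swap endpoints v <-> y
          for a, b in (new1, new2):
              if a == b or (a, b) in edge_set or (b, a) in edge_set:
                  return False                         # would break simplicity
          edge_set.difference_update({(u, v), (x, y)})
          edge_set.update({new1, new2})
          edges[i], edges[j] = new1, new2
          return True

      rng = random.Random(1)
      edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
      edge_set = set(edges)
      accepted = sum(edge_switch(edges, edge_set, rng) for _ in range(1000))
      print("accepted switches:", accepted, "resulting edges:", edges)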

  10. Cα chemical shift tensors in helical peptides by dipolar-modulated chemical shift recoupling NMR

    International Nuclear Information System (INIS)

    Yao Xiaolan; Yamaguchi, Satoru; Hong Mei

    2002-01-01

    The Cα chemical shift tensors of proteins contain information on the backbone conformation. We have determined the magnitude and orientation of the Cα chemical shift tensors of two peptides with α-helical torsion angles: the Ala residue in G*AL (φ=-65.7 deg., ψ=-40 deg.), and the Val residue in GG*V (φ=-81.5 deg., ψ=-50.7 deg.). The magnitude of the tensors was determined from quasi-static powder patterns recoupled under magic-angle spinning, while the orientation of the tensors was extracted from Cα-Hα and Cα-N dipolar modulated powder patterns. The helical Ala Cα chemical shift tensor has a span of 36 ppm and an asymmetry parameter of 0.89. Its σ11 axis is 116 deg. ± 5 deg. from the Cα-Hα bond while the σ22 axis is 40 deg. ± 5 deg. from the Cα-N bond. The Val tensor has an anisotropic span of 25 ppm and an asymmetry parameter of 0.33, both much smaller than the values for β-sheet Val found recently (Yao and Hong, 2002). The Val σ33 axis is tilted by 115 deg. ± 5 deg. from the Cα-Hα bond and 98 deg. ± 5 deg. from the Cα-N bond. These represent the first completely experimentally determined Cα chemical shift tensors of helical peptides. Using an icosahedral representation, we compared the experimental chemical shift tensors with quantum chemical calculations and found overall good agreement. These solid-state chemical shift tensors confirm the observation from cross-correlated relaxation experiments that the projection of the Cα chemical shift tensor onto the Cα-Hα bond is much smaller in α-helices than in β-sheets.

  11. Parallel processing of structural integrity analysis codes

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.

    1996-01-01

    Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the Finite Element Method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high-speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising high-speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes play an important role in exploiting the parallel processing system capabilities. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. Codes in the first category, such as those used for harmonic analysis and mechanistic fuel performance codes, do not require parallelisation of individual modules. Codes in the second category, such as conventional FEM codes, require parallelisation of individual modules. In this category, parallelisation of the equation solution module poses major difficulties. Different solution schemes such as the domain decomposition method (DDM), the parallel active column solver and the substructuring method are currently used on parallel processing systems. Two codes, FAIR and TABS, one from each of these categories, have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab

  12. The series-parallel circuit in the treatment of fulminant hepatitis.

    Science.gov (United States)

    Nakae, Hajime; Yonekawa, Chikara; Moon, Sunkwi; Tajimi, Kimitaka

    2004-04-01

    We developed a series-parallel treatment method for combined plasma exchange (PE) and continuous hemodiafiltration (CHDF) therapy in fulminant hepatitis. We then compared total serum bilirubin, citrate, and cytokine levels obtained with the new method to those obtained with treatment by the single and reverse-parallel PE methods. Ten adult patients with fulminant hepatitis consented to participate. Plasma exchange was conducted 25 times by the single method (PE only), 16 times by the reverse-parallel method, and 37 times by the series-parallel method. The percentage of total bilirubin removed was highest with the single method, followed in order by that with the series-parallel and reverse-parallel methods; the differences were significant. The percentage increase in citrate level was highest with the single method, followed in order by that with the series-parallel and the reverse-parallel methods; these differences were also significant. There was no significant difference in serum interleukin (IL)-6 levels after PE by the single or the reverse-parallel methods. However, the IL-6 level decreased significantly following PE by the series-parallel method. The serum IL-18 level decreased significantly following PE by each of the three methods. Thus, removal of excess bilirubin, citrate, and cytokines by the series-parallel method, a simple maneuver with excellent removal rates, was considered effective.

  13. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and as modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massively parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  14. Parallel computation of nondeterministic algorithms in VLSI

    Energy Technology Data Exchange (ETDEWEB)

    Hortensius, P D

    1987-01-01

    This work examines parallel VLSI implementations of nondeterministic algorithms. It is demonstrated that conventional pseudorandom number generators are unsuitable for highly parallel applications. Efficient parallel pseudorandom sequence generation can be accomplished using certain classes of elementary one-dimensional cellular automata. The pseudorandom numbers appear in parallel on each clock cycle. Extensive study of the properties of these new pseudorandom number generators is made using standard empirical random number tests, cycle length tests, and implementation considerations. Furthermore, it is shown that these particular cellular automata can form the basis of efficient VLSI architectures for computations involved in the Monte Carlo simulation of both the percolation and Ising models from statistical mechanics. Finally, a variation on a Built-In Self-Test technique based upon cellular automata is presented. These Cellular Automata-Logic-Block-Observation (CALBO) circuits improve upon conventional design for testability circuitry.
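
    One common elementary CA used for this purpose is Wolfram's rule 30, where every cell updates simultaneously from its neighbourhood, so in hardware each cell can emit a pseudorandom bit every clock cycle. A small software sketch of the idea (the parameters are arbitrary and the exact rules used in the cited work may differ):

      def rule30_step(state):
          """One synchronous update of a 1-D CA under rule 30:
          new cell = left XOR (centre OR right), periodic boundaries."""
          n = len(state)
          return [state[(i - 1) % n] ^ (state[i] | state[(i + 1) % n])
                  for i in range(n)]

      def ca_bits(width=64, steps=32, tap=32):
          """Collect one pseudorandom bit per step from a single tap cell;
          in a parallel implementation every cell supplies a bit per cycle."""
          state = [0] * width
          state[tap] = 1                      # single-seed initial condition
          bits = []
          for _ in range(steps):
              state = rule30_step(state)
              bits.append(state[tap])
          return bits

      print(ca_bits())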

  15. Shifts in information processing level: the speed theory of intelligence revisited.

    Science.gov (United States)

    Sircar, S S

    2000-06-01

    A hypothesis is proposed here to reconcile the inconsistencies observed in the IQ-P3 latency relation. The hypothesis stems from the observation that task-induced increase in P3 latency correlates positively with IQ scores. It is hypothesised that: (a) there are several parallel information processing pathways of varying complexity which are associated with the generation of P3 waves of varying latencies; (b) with increasing workload, there is a shift in the 'information processing level' through progressive recruitment of more complex polysynaptic pathways with greater processing power and inhibition of the oligosynaptic pathways; (c) high-IQ subjects have a greater reserve of higher level processing pathways; (d) a given 'task-load' imposes a greater 'mental workload' in subjects with lower IQ than in those with higher IQ. According to this hypothesis, a meaningful comparison of the P3 correlates of IQ is possible only when the information processing level is pushed to its limits.

  16. Methodological aspects of shift-work research.

    Science.gov (United States)

    Knutsson, Anders

    2004-01-01

    A major issue in shift-work research is to understand the possible ways in which shift work can impact performance and health. Nearly all body functions, from those of the cellular level to those of the entire body, are circadian rhythmic. Disturbances of these rhythms as well as the social consequences of odd work hours are of importance for the health and well-being of shift workers. This article reviews a number of common methodological issues which are of relevance to epidemiological studies in this area of research. It discusses conceptual problems regarding the use of the term "shift work," and it underscores the need to develop models that explain the mechanisms of disease in shift workers.

  17. The BLAZE language - A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, Piyush; Van Rosendale, John

    1987-01-01

    A Pascal-like scientific programming language, BLAZE, is described. BLAZE contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus BLAZE should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of BLAZE is portability across a broad range of parallel architectures. The multiple levels of parallelism present in BLAZE code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of BLAZE are described and it is shown how this language would be used in typical scientific programming.
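
    BLAZE itself is not generally available, but the flavour of its fine-grained parallelism—whole-array arithmetic acting as an implicit forall over the index space, plus APL-style reductions—can be imitated with NumPy (illustration only; this is not BLAZE syntax):

      import numpy as np

      x = np.linspace(0.0, 1.0, 1_000_000)
      # Element-wise array arithmetic: an implicit "forall" over all indices,
      # leaving the actual parallel scheduling to the runtime or compiler.
      y = np.sin(x) ** 2 + np.cos(x) ** 2
      # APL-style accumulation (a reduction over the whole array).
      print(np.add.reduce(y) / y.size)        # ~ 1.0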

  18. The BLAZE language: A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, P.; Vanrosendale, J.

    1985-01-01

    A Pascal-like scientific programming language, Blaze, is described. Blaze contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus Blaze should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of Blaze is portability across a broad range of parallel architectures. The multiple levels of parallelism present in Blaze code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of Blaze are described, and it is shown how this language would be used in typical scientific programming.

  19. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give

  20. An Algorithm for Parallel Sn Sweeps on Unstructured Meshes

    International Nuclear Information System (INIS)

    Pautz, Shawn D.

    2002-01-01

    A new algorithm for performing parallel Sn sweeps on unstructured meshes is developed. The algorithm uses a low-complexity list ordering heuristic to determine a sweep ordering on any partitioned mesh. For typical problems and with 'normal' mesh partitionings, nearly linear speedups on up to 126 processors are observed. This is an important and desirable result, since although analyses of structured meshes indicate that parallel sweeps will not scale with normal partitioning approaches, no severe asymptotic degradation in the parallel efficiency is observed with modest (≤100) levels of parallelism. This result is a fundamental step in the development of efficient parallel Sn methods.

  1. 3D, parallel fluid-structure interaction code

    CSIR Research Space (South Africa)

    Oxtoby, Oliver F

    2011-01-01

    Full Text Available The authors describe the development of a 3D parallel Fluid–Structure–Interaction (FSI) solver and its application to benchmark problems. Fluid and solid domains are discretised using an edge-based finite-volume scheme for efficient parallel...

  2. Reliability allocation problem in a series-parallel system

    International Nuclear Information System (INIS)

    Yalaoui, Alice; Chu, Chengbin; Chatelet, Eric

    2005-01-01

    In order to improve system reliability, designers may introduce different technologies in parallel in a system. When each technology is composed of components in series, the configuration belongs to the series-parallel systems. This type of system has not been studied as much as the parallel-series architecture. There exist no methods dedicated to reliability allocation in series-parallel systems with different technologies. We propose in this paper theoretical and practical results for the allocation problem in a series-parallel system. Two resolution approaches are developed. Firstly, a one-stage problem is studied and the results are exploited for the multi-stage problem. A theoretical condition for obtaining the optimal allocation is developed. Since this condition is too restrictive, we secondly propose an alternative approach based on an approximated function and the results of the one-stage study. This second approach is applied to numerical examples.
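
    For the series-parallel structure considered—redundant technologies in parallel, each technology a series of components—system reliability follows directly from the component reliabilities, which is the quantity the allocation problem optimises. A worked sketch with made-up numbers (not the authors' allocation method):

      from math import prod

      def series_parallel_reliability(chains):
          """chains[k] lists the component reliabilities of technology k.
          A chain works only if every component works; the system works
          if at least one chain works."""
          r_chain = [prod(chain) for chain in chains]          # series parts
          return 1.0 - prod(1.0 - r for r in r_chain)          # parallel combination

      chains = [[0.95, 0.97, 0.99], [0.90, 0.92]]              # two technologies
      print(round(series_parallel_reliability(chains), 4))     # 0.9849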

  3. SAT in shift manager training

    International Nuclear Information System (INIS)

    Lecuyer, F.

    1995-01-01

    EDF has improved the organization of its operating shift teams by replacing the shift supervisor position with a shift manager function. The shift manager is not only responsible for the tasks associated with plant operation (production), but is also responsible for the safety of these tasks and for the management of the shift team members. A job analysis of this new position was performed in order to design the training programme. It resulted in a 10-month training programme that includes 8 weeks on safety-related topics and 12 weeks on soft-skills-related topics. The safety-related training courses are mandatory; the other courses are optional, depending on individual trainee needs. The training also includes the development of management competencies. During the 10-month period, each trainee develops an individual project that is evaluated by the NPP manager. In addition, a group project is undertaken by the trainees and overseen by a steering committee. The steering committee participates in the evaluation process and provides operational experience feedback to the trainee groups and to the overall programme.

  4. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x² − y² shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.

  5. Optimisation of a parallel ocean general circulation model

    Science.gov (United States)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  6. Individual differences in shift work tolerance

    NARCIS (Netherlands)

    Lammers-van der Holst, H.M.

    2016-01-01

    Shift work is a key feature of our contemporary 24/7 society, employing several successive work teams to sustain around-the-clock operations. However, numerous studies imply that frequently shifting the periods of sleep and wakefulness poses a serious threat to the shift worker’s physical, mental

  7. Discrete Hadamard transformation algorithm's parallelism analysis and achievement

    Science.gov (United States)

    Hu, Hui

    2009-07-01

    The Discrete Hadamard Transform (DHT) is widely applied in real-time signal processing, but its use is constrained by the operation speed of DSPs. This article investigates the parallelization of the DHT and analyses its parallel performance. Based on the programming structure of the multiprocessor platform TMS320C80, two kinds of parallel DHT algorithms are implemented. Several experiments demonstrated the effectiveness of the proposed algorithms.
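
    The transform in question has a radix-2 butterfly structure, and the butterflies within each stage are mutually independent, which is what makes it a natural fit for a multiprocessor DSP such as the TMS320C80. A minimal serial sketch of the fast Walsh-Hadamard transform (illustration only, not the platform-specific implementation):

      def fwht(a):
          """In-place fast Walsh-Hadamard transform; len(a) must be a power of two.
          The inner loop's butterflies are independent and could run in parallel."""
          h, n = 1, len(a)
          while h < n:
              for i in range(0, n, 2 * h):
                  for j in range(i, i + h):
                      a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
              h *= 2
          return a

      print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))   # [4, 2, 0, -2, 0, 2, 0, 2]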

  8. Rubus: A compiler for seamless and extensible parallelism

    Science.gov (United States)

    Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of code in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer’s expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. Whereas, for a matrix multiplication benchmark the average execution speedup of 84 times has been

  9. Rubus: A compiler for seamless and extensible parallelism.

    Directory of Open Access Journals (Sweden)

    Muhammad Adnan

    Full Text Available Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of code in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. Whereas, for a matrix multiplication benchmark the average execution speedup of 84

  10. Parallel object-oriented term rewriting : the booleans

    NARCIS (Netherlands)

    Rodenburg, P.H.; Vrancken, J.L.M.

    As a first case study in parallel object-oriented term rewriting, we give two implementations of term rewriting algorithms for boolean terms, using the parallel object-oriented features of the language Pool-T. The term rewriting systems are specified in the specification formalism

  11. 3D printed soft parallel actuator

    Science.gov (United States)

    Zolfagharian, Ali; Kouzani, Abbas Z.; Khoo, Sui Yang; Noshadi, Amin; Kaynak, Akif

    2018-04-01

    This paper presents a 3-dimensional (3D) printed soft parallel contactless actuator for the first time. The actuator involves an electro-responsive parallel mechanism made of two segments, namely an active chain and a passive chain, both 3D printed. The active chain is attached to the ground at one end and constitutes two actuator links made of responsive hydrogel. The passive chain, on the other hand, is attached to the active chain at one end and consists of two rigid links made of polymer. The actuator links are printed using an extrusion-based 3D-Bioplotter with polyelectrolyte hydrogel as printer ink. The rigid links are also printed by a 3D fused deposition modelling (FDM) printer with acrylonitrile butadiene styrene (ABS) as print material. The kinematics model of the soft parallel actuator is derived via transformation matrix notation to simulate and determine the workspace of the actuator. The printed soft parallel actuator is then immersed in NaOH solution with a specific voltage applied to it via two contactless electrodes. The experimental data are then collected and used to develop a parametric model to estimate the end-effector position and to regulate the kinematics model in response to a specific input voltage over time. It is observed that the electroactive actuator demonstrates the expected behaviour according to the simulation of its kinematics model. The use of 3D printing for the fabrication of parallel soft actuators opens a new chapter in manufacturing sophisticated soft actuators with high dexterity and mechanical robustness for biomedical applications such as cell manipulation and drug release.
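
    The kinematics modelling mentioned—chaining transformation matrices to locate the end-effector—can be illustrated for a generic planar two-link chain (the link lengths and angles here are invented and this is not the actuator's actual model):

      import numpy as np

      def link_transform(theta, length):
          """Homogeneous 2-D transform: rotate by theta, then translate along the link."""
          c, s = np.cos(theta), np.sin(theta)
          return np.array([[c, -s, length * c],
                           [s,  c, length * s],
                           [0,  0, 1.0]])

      def end_effector(thetas, lengths):
          """Chain the per-link transforms and read off the end-effector position."""
          T = np.eye(3)
          for theta, L in zip(thetas, lengths):
              T = T @ link_transform(theta, L)
          return T[0, 2], T[1, 2]

      print(end_effector([np.pi / 4, -np.pi / 6], [0.03, 0.02]))   # metres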

  12. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  13. MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program

    Science.gov (United States)

    Danehkar, Ashkbiz; Nowak, Michael A.; Lee, Julia C.; Smith, Randall K.

    2018-02-01

    We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code that is used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in the C++ language by using the Message Passing Interface (MPI) protocol, a conventional standard of parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR, in terms of the parallelization speedup and the computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution; however, the efficiency in terms of computing resource usage decreases with an increasing number of processors used in the parallel computing.
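
    Since the individual runs are independent, the natural MPI pattern is to farm the parameter grid out across ranks and let each rank launch its share of runs. A generic mpi4py sketch of that pattern (the grid, command line and file handling are placeholders, not the real XSTAR or MPI_XSTAR interface):

      from mpi4py import MPI
      import subprocess

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      grid = [0.5 * i for i in range(32)]          # hypothetical ionization-parameter grid
      my_points = grid[rank::size]                 # round-robin assignment of runs to ranks

      for logxi in my_points:
          # Placeholder command; a real driver would write an XSTAR input deck here.
          subprocess.run(["echo", f"run log(xi)={logxi}"], check=True)

      comm.Barrier()                               # wait until every rank has finished
      if rank == 0:
          print("all", len(grid), "runs completed")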

  14. An analysis of clock-shift experiments: is scatter increased and deflection reduced in clock-shifted homing pigeons?

    Science.gov (United States)

    Chappell

    1997-01-01

    Clock-shifting (altering the phase of the internal clock) in homing pigeons leads to a deflection in the vanishing bearing of the clock-shifted group relative to controls. However, two unexplained phenomena are common in clock-shift experiments: the vanishing bearings of the clock-shifted group are often more scattered (with a shorter vector length) than those of the control group, and the deflection of the mean bearing of the clock-shifted group from that of the controls is often smaller than expected theoretically. Here, an analysis of 55 clock-shift experiments performed in four countries over 21 years is reported. The bearings of the clock-shifted groups were significantly more scattered than those of controls and less deflected than expected, but these effects were not significantly different at familiar and unfamiliar sites. The possible causes of the effects are discussed and evaluated with reference to this analysis and other experiments. The most likely causes appear to be conflict between the directions indicated by the sun compass and either unshifted familiar visual landmarks (at familiar sites only) or the unshifted magnetic compass (possible at both familiar and unfamiliar sites).
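
    The two quantities in play here—mean bearing and mean vector length—are the standard circular statistics for vanishing bearings: r near 1 means a tight cluster, r near 0 a scattered one. A small sketch with invented bearings (not data from the analysed experiments):

      import numpy as np

      def circular_summary(bearings_deg):
          """Mean direction (deg) and mean vector length r of a set of bearings."""
          theta = np.deg2rad(bearings_deg)
          C, S = np.mean(np.cos(theta)), np.mean(np.sin(theta))
          return np.rad2deg(np.arctan2(S, C)) % 360.0, np.hypot(C, S)

      control = [350, 5, 10, 355, 0, 15]               # tightly clustered bearings
      shifted = [260, 300, 240, 310, 280, 200]         # more scattered, deflected group
      print("control:", circular_summary(control))
      print("clock-shifted:", circular_summary(shifted))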

  15. Examining paid sickness absence by shift workers.

    Science.gov (United States)

    Catano, V M; Bissonnette, A B

    2014-06-01

    Shift workers are at greater risk than day workers with respect to psychological and physical health, yet little research has linked shift work to increased sickness absence. To investigate the relationship between shift work and sickness absence while controlling for organizational and individual characteristics and shift work attributes that have confounded previous research. The study used archive data collected from three national surveys in Canada, each involving over 20000 employees and 6000 private-sector firms in 14 different occupational groups. The employees reported the number of paid sickness absence days in the past 12 months. Data were analysed using both chi-squared statistics and hierarchical regressions. Contrary to previous research, shift workers took less paid sickness absence than day workers. There were no differences in the length of the sickness absence between both groups or in sickness absence taken by female and male workers whether working days or shifts. Only job tenure, the presence of a union in the workplace and working rotating shifts predicted sickness absence in shift workers. The results were consistent across all three samples. In general, shift work does not seem to be linked to increased sickness absence. However, such associations may be true for specific industries. Male and female workers did not differ in the amount of sickness absence taken. Rotating shifts, regardless of industry, predicted sickness absence among shift workers. Consideration should be given to implementing scheduled time off between shift changes.

  16. Parallel image encryption algorithm based on discretized chaotic map

    International Nuclear Information System (INIS)

    Zhou Qing; Wong Kwokwo; Liao Xiaofeng; Xiang Tao; Hu Yue

    2008-01-01

    Recently, a variety of chaos-based algorithms were proposed for image encryption. Nevertheless, none of them works efficiently in a parallel computing environment. In this paper, we propose a framework for parallel image encryption. Based on this framework, a new algorithm is designed using the discretized Kolmogorov flow map. It fulfills all the requirements for a parallel image encryption algorithm. Moreover, it is secure and fast. These properties make it a good choice for image encryption on parallel computing platforms.

  17. Introduction to massively-parallel computing in high-energy physics

    CERN Document Server

    AUTHOR|(CDS)2083520

    1993-01-01

    Ever since computers were first used for scientific and numerical work, there has existed an "arms race" between the technical development of faster computing hardware, and the desires of scientists to solve larger problems in shorter time-scales. However, the vast leaps in processor performance achieved through advances in semi-conductor science have reached a hiatus as the technology comes up against the physical limits of the speed of light and quantum effects. This has led all high performance computer manufacturers to turn towards a parallel architecture for their new machines. In these lectures we will introduce the history and concepts behind parallel computing, and review the various parallel architectures and software environments currently available. We will then introduce programming methodologies that allow efficient exploitation of parallel machines, and present case studies of the parallelization of typical High Energy Physics codes for the two main classes of parallel computing architecture (S...

  18. Molecular Electronic Shift Registers

    Science.gov (United States)

    Beratan, David N.; Onuchic, Jose N.

    1990-01-01

    Molecular-scale shift registers may eventually be constructed as parts of high-density integrated memory circuits. In principle, the variety of organic molecules makes possible a large number of different configurations and modes of operation for such shift-register devices. Several classes of devices and implementations in some specific types of molecules are proposed. All are based on the transfer of electrons or holes along chains of repeating molecular units.

  19. Blue-shifted and red-shifted hydrogen bonds: Theoretical study of the CH3CHO···HNO complexes

    Science.gov (United States)

    Yang, Yong; Zhang, Weijun; Gao, Xiaoming

    The blue-shifted and red-shifted H-bonds have been studied in the complexes CH3CHO···HNO. At the MP2/6-31G(d), MP2/6-31+G(d,p), MP2/6-311++G(d,p), B3LYP/6-31G(d), B3LYP/6-31+G(d,p) and B3LYP/6-311++G(d,p) levels, the geometric structures and vibrational frequencies of the complexes CH3CHO···HNO are calculated by both standard and CP-corrected methods, respectively. Complex A exhibits simultaneously a red-shifted C–H···O and a blue-shifted N–H···O H-bond. Complex B possesses simultaneously two blue-shifted H-bonds: C–H···O and N–H···O. From NBO analysis, it becomes evident that the red-shifted C–H···O H-bond can be explained on the basis of two opposite effects: hyperconjugation and rehybridization. The blue-shifted C–H···O H-bond is a result of the conjunct C–H bond strengthening effects of the hyperconjugation and the rehybridization due to the existence of the significant electron density redistribution effect. For the blue-shifted N–H···O H-bonds, the hyperconjugation is inhibited due to the existence of the electron density redistribution effect. The large blue shift of the N–H stretching frequency is observed because the rehybridization dominates the hyperconjugation.

  20. Parallelism in computations in quantum and statistical mechanics

    International Nuclear Information System (INIS)

    Clementi, E.; Corongiu, G.; Detrich, J.H.

    1985-01-01

    Often very fundamental biochemical and biophysical problems defy simulation because of limitations in today's computers. We present and discuss a distributed system composed of two IBM 4341s and/or an IBM 4381 as front-end processors and ten FPS-164 attached array processors. This parallel system - called LCAP - presently has a peak performance of about 110 Mflops; extensions to higher performance are discussed. Presently, the system applications use a modified version of VM/SP as the operating system; a description of the modifications is given. Three application programs have been migrated from sequential to parallel: a molecular quantum mechanical, a Metropolis Monte Carlo and a molecular dynamics program. The parallel codes are briefly described. Use of these parallel codes has already opened up new capabilities for our research. The very positive performance comparisons with today's supercomputers allow us to conclude that parallel computers and programming, of the type we have considered, represent a pragmatic answer to many computationally intensive problems. (orig.)