WorldWideScience

Sample records for partially parallel acquisitions

  1. New partially parallel acquisition technique in cerebral imaging: preliminary findings

    Tintera, Jaroslav; Gawehn, Joachim; Bauermann, Thomas; Vucurevic, Goran; Stoeter, Peter

    2004-01-01

    In MRI applications where short acquisition time is necessary, the increase in acquisition speed often comes at the expense of image resolution and SNR. In such cases, the newly developed parallel acquisition techniques could provide images without these limitations in a reasonably shortened measurement time. A newly designed eight-channel head coil array (i-PAT coil) allowing parallel acquisition of independently reconstructed images (GRAPPA mode) was tested for its applicability in neuroradiology. Image homogeneity was tested in a standard phantom and in healthy volunteers. BOLD signal changes were studied in a group of six volunteers using finger-tapping stimulation. Phantom studies revealed a marked drop of signal in the center of the image, even after the use of a normalization filter, and a marked increase of artifact power with reduction of measurement time, strongly depending on the combination of acceleration parameters. The additional application of a parallel acquisition technique such as GRAPPA decreases measurement time by about 30%, but further reduction is often possible only at the expense of SNR. The technique performs best where imaging speed is important, such as in CE MRA, but its time resolution still does not allow the acquisition of angiograms separating the arterial and venous phases. Significantly larger areas of BOLD activation were found using the i-PAT coil than with the standard head coil. Because the i-PAT coil is an eight-channel surface coil array, peripheral cortical structures profit from its high SNR, as in high-resolution imaging of small cortical dysplasias and in BOLD imaging of functional activation in cortical areas. In BOLD contrast imaging, susceptibility artifacts are reduced, but only if an appropriate combination of acceleration parameters is used. (orig.)

  2. New partially parallel acquisition technique in cerebral imaging: preliminary findings

    Tintera, Jaroslav [Institute for Clinical and Experimental Medicine, Prague (Czech Republic); Gawehn, Joachim; Bauermann, Thomas; Vucurevic, Goran; Stoeter, Peter [University Clinic Mainz, Institute of Neuroradiology, Mainz (Germany)

    2004-12-01

    In MRI applications where short acquisition time is necessary, the increase in acquisition speed often comes at the expense of image resolution and SNR. In such cases, the newly developed parallel acquisition techniques could provide images without these limitations in a reasonably shortened measurement time. A newly designed eight-channel head coil array (i-PAT coil) allowing parallel acquisition of independently reconstructed images (GRAPPA mode) was tested for its applicability in neuroradiology. Image homogeneity was tested in a standard phantom and in healthy volunteers. BOLD signal changes were studied in a group of six volunteers using finger-tapping stimulation. Phantom studies revealed a marked drop of signal in the center of the image, even after the use of a normalization filter, and a marked increase of artifact power with reduction of measurement time, strongly depending on the combination of acceleration parameters. The additional application of a parallel acquisition technique such as GRAPPA decreases measurement time by about 30%, but further reduction is often possible only at the expense of SNR. The technique performs best where imaging speed is important, such as in CE MRA, but its time resolution still does not allow the acquisition of angiograms separating the arterial and venous phases. Significantly larger areas of BOLD activation were found using the i-PAT coil than with the standard head coil. Because the i-PAT coil is an eight-channel surface coil array, peripheral cortical structures profit from its high SNR, as in high-resolution imaging of small cortical dysplasias and in BOLD imaging of functional activation in cortical areas. In BOLD contrast imaging, susceptibility artifacts are reduced, but only if an appropriate combination of acceleration parameters is used. (orig.)

  3. PARALLEL SOLUTION METHODS OF PARTIAL DIFFERENTIAL EQUATIONS

    Korhan KARABULUT

    1998-03-01

    Partial differential equations arise in almost all fields of science and engineering. Computer time spent solving partial differential equations exceeds that of any other problem class. For this reason, partial differential equations are well suited to solution on parallel computers, which offer great computational power. In this study, parallel solution of partial differential equations with the Jacobi, Gauss-Seidel, SOR (Successive Over-Relaxation), and SSOR (Symmetric SOR) algorithms is studied.
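
    The relaxation schemes named in this abstract can be illustrated with a minimal NumPy sketch (not from the paper) for the 1-D Poisson equation; the grid size, source term, and relaxation factor are illustrative choices:

```python
import numpy as np

def jacobi_step(u, f, h):
    """One Jacobi sweep for -u'' = f on [0, 1] with zero boundary values."""
    new = u.copy()
    new[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return new

def sor_step(u, f, h, omega=1.5):
    """One in-place SOR sweep (omega = 1 reduces to Gauss-Seidel)."""
    for i in range(1, len(u) - 1):
        gs = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u[i] = (1 - omega) * u[i] + omega * gs
    return u

# Solve -u'' = pi^2 sin(pi x), whose exact solution is u = sin(pi x).
n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)

u_j = np.zeros(n)
u_s = np.zeros(n)
for _ in range(5000):
    u_j = jacobi_step(u_j, f, h)
    u_s = sor_step(u_s, f, h)

err_j = np.max(np.abs(u_j - np.sin(np.pi * x)))
err_s = np.max(np.abs(u_s - np.sin(np.pi * x)))
```

    Each sweep is independent per grid point in Jacobi's case, which is what makes it the natural starting point for parallelization.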

  4. Single breath-hold real-time cine MR imaging: improved temporal resolution using generalized autocalibrating partially parallel acquisition (GRAPPA) algorithm

    Wintersperger, Bernd J.; Nikolaou, Konstantin; Dietrich, Olaf; Reiser, Maximilian F.; Schoenberg, Stefan O.; Rieber, Johannes; Nittka, Matthias

    2003-01-01

    The purpose of this study was to test parallel imaging techniques for improving temporal resolution in multislice single breath-hold real-time cine steady-state free precession (SSFP) imaging, in comparison with a standard segmented single-slice SSFP technique. Eighteen subjects were examined on a 1.5-T scanner with a multislice real-time cine SSFP technique using the GRAPPA algorithm. Global left ventricular parameters (EDV, ESV, SV, EF) were evaluated and the results compared with a standard segmented single-slice SSFP technique. Results for EDV (r=0.93), ESV (r=0.99), SV (r=0.83), and EF (r=0.99) from real-time multislice SSFP imaging showed a high correlation with those of segmented SSFP acquisitions. Systematic differences between the two techniques were statistically non-significant. Single breath-hold multislice techniques using GRAPPA allow for improved temporal resolution and accurate assessment of global left ventricular functional parameters. (orig.)

  5. PARTIAL REINFORCEMENT (ACQUISITION) EFFECTS WITHIN SUBJECTS.

    AMSEL, A; MACKINNON, J R; RASHOTTE, M E; SURRIDGE, C T

    1964-03-01

    Acquisition performance of 22 rats in a straight alley runway was examined. The animals were subjected to partial reinforcement when the alley was black (B+/-) and continuous reinforcement when it was white (W+). The results indicated (a) higher terminal performance, for partial as against continuous reinforcement conditions, for starting-time and running-time measures, and (b) lower terminal performance under partial conditions for a goal-entry-time measure. These results confirm within subjects an effect previously demonstrated, in the runway, only in between-groups tests, where one group is run under partial reinforcement and a separate group is run under continuous reinforcement in the presence of the same external stimuli. Differences between the runway situation, employing a discrete-trial procedure and performance measures at three points in the response chain, and the Skinner box situation, used in its free-operant mode with a single performance measure, are discussed in relation to the present findings.

  6. Application of parallel preprocessors in data acquisition

    Butler, H.S.; Cooper, M.D.; Williams, R.A.; Hughes, E.B.; Rolfe, J.R.; Wilson, S.L.; Zeman, H.D.

    1981-01-01

    A data-acquisition system is being developed for a large-scale experiment at LAMPF. It will make use of four microprocessors running in parallel to acquire and preprocess data from 432 photomultiplier tubes (PMTs) attached to 396 NaI crystals. The microprocessors are LSI-11/23s operating through CAMAC Auxiliary Crate Controllers (ACCs). Data acquired by the microprocessors will be collected through a programmable Branch Driver (MBD), which also will read data from 52 scintillators (88 PMTs) and 728 wires comprising a drift chamber. The MBD will transfer data from each event into a PDP-11/44 for further processing and taping. The microprocessors will perform the secondary function of monitoring the calibration of the NaI PMTs. A special trigger circuit allows the system to stack data from a second event while the first is still being processed. Major components of the system were tested in April 1981. Timing measurements from this test are reported.

  7. Rocket measurement of auroral partial parallel distribution functions

    Lin, C.-A.

    1980-01-01

    The auroral partial parallel distribution functions are obtained by using the observed energy spectra of electrons. The experiment package was launched by a Nike-Tomahawk rocket from Poker Flat, Alaska, over a bright auroral band and covered an altitude range of up to 180 km. Calculated partial distribution functions are presented with emphasis on their slopes. The implications of the slopes are discussed. It should be pointed out that the slope of the partial parallel distribution function obtained from one energy spectrum will be changed by superposing another energy spectrum on it.

  8. Computational acceleration for MR image reconstruction in partially parallel imaging.

    Ye, Xiaojing; Chen, Yunmei; Huang, Feng

    2011-05-01

    In this paper, we present a fast numerical algorithm for solving total variation and l1 (TVL1) based image reconstruction with application in partially parallel magnetic resonance imaging. Our algorithm uses a variable splitting method to reduce computational cost. Moreover, the Barzilai-Borwein step size selection method is adopted in our algorithm for much faster convergence. Experimental results on clinical partially parallel imaging data demonstrate that the proposed algorithm requires far fewer iterations and/or less computational cost than the recently developed operator splitting and Bregman operator splitting methods, which can deal with a general sensing matrix in the reconstruction framework, to achieve similar or even better quality of reconstructed images.
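
    The Barzilai-Borwein step-size rule mentioned above can be sketched on a toy least-squares problem (a hedged illustration, not the authors' TVL1 solver; the matrix sizes and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)

def grad(x):
    """Gradient of the least-squares objective 0.5*||Ax - b||^2."""
    return A.T @ (A @ x - b)

# Barzilai-Borwein (BB1) step selection: the step mimics the inverse
# curvature seen along the last move, alpha = (s's) / (s'y).
x_prev = np.zeros(20)
g_prev = grad(x_prev)
x = x_prev - 1e-3 * g_prev          # one small fixed step to initialize
for _ in range(100):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:   # converged; avoid a 0/0 step size
        break
    s = x - x_prev                  # change in the iterate
    y = g - g_prev                  # change in the gradient
    alpha = (s @ s) / (s @ y)       # BB1 step size (positive for quadratics)
    x_prev, g_prev = x, g
    x = x - alpha * g

x_star = np.linalg.lstsq(A, b, rcond=None)[0]
```

    For a quadratic objective, s'y = s'A'As is always positive, so the step is well defined; convergence is nonmonotone but typically far faster than fixed-step gradient descent.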

  9. Acquisition with partial and continuous reinforcement in pigeon autoshaping.

    Gottlieb, Daniel A

    2004-08-01

    Contemporary time accumulation models make the unique prediction that acquisition of a conditioned response will be equally rapid with partial and continuous reinforcement, if the time between conditioned stimuli is held constant. To investigate this, acquisition of conditioned responding was examined in pigeon autoshaping under conditions of 100% and 25% reinforcement, holding intertrial interval constant. Contrary to what was predicted, evidence for slowed acquisition in partially reinforced animals was observed with several response measures. However, asymptotic performance was superior with 25% reinforcement. A switching of reinforcement contingencies after initial acquisition did not immediately affect responding. After further sessions, partial reinforcement augmented responding, whereas continuous reinforcement did not, irrespective of an animal's reinforcement history. Subsequent training with a novel stimulus maintained the response patterns. These acquisition results generally support associative, rather than time accumulation, accounts of conditioning.

  10. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled in the outer k-space for each temporal frame. In reconstruction, the navigator data are reconstructed from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partially separable model is used to obtain partial k-t data. Then the parallel imaging method is used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.
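
    The low-rank completion step described above can be illustrated, in highly simplified form, by iterative hard-thresholding matrix completion on a synthetic low-rank matrix (the sizes, rank, and sampling rate are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic rank-3 matrix standing in for the navigator k-t data.
M = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
mask = rng.random(M.shape) < 0.5          # 50% of entries observed

# Iterative hard thresholding: alternate projection onto rank-r
# matrices with re-imposing the observed entries (data consistency).
X = np.where(mask, M, 0.0)
r = 3
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]       # best rank-r approximation
    X[mask] = M[mask]                      # keep acquired entries exact

err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

    The structured low-rank completion in the paper operates on a Hankel-structured matrix, but the alternation between a rank projection and data consistency is the same basic mechanism.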

  11. A tomograph VMEbus parallel processing data acquisition system

    Wilkinson, N.A.; Rogers, J.G.; Atkins, M.S.

    1989-01-01

    This paper describes a VME-based data acquisition system suitable for the development of Positron Volume Imaging tomographs, which use 3-D data for improved image resolution over slice-oriented tomographs. The data acquisition must be flexible enough to accommodate several 3-D reconstruction algorithms; hence, a software-based system is most suitable. Furthermore, because of the increased dimensions and resolution of volume imaging tomographs, the raw data event rate is greater than that of slice-oriented machines. These dual requirements are met by our data acquisition system. Flexibility is achieved through an array of processors connected over a VMEbus, operating asynchronously and in parallel. High raw data throughput is achieved using a dedicated high-speed data transfer device available for the VMEbus. The device can attain a raw data rate of 2.5 million coincidence events per second for raw events which are 64 bits wide.

  12. A tomograph VMEbus parallel processing data acquisition system

    Atkins, M.S.; Wilkinson, N.A.; Rogers, J.G.

    1988-11-01

    This paper describes a VME-based data acquisition system suitable for the development of Positron Volume Imaging tomographs, which use 3-D data for improved image resolution over slice-oriented tomographs. The data acquisition must be flexible enough to accommodate several 3-D reconstruction algorithms; hence, a software-based system is most suitable. Furthermore, because of the increased dimensions and resolution of volume imaging tomographs, the raw data event rate is greater than that of slice-oriented machines. These dual requirements are met by our data acquisition system. Flexibility is achieved through an array of processors connected over a VMEbus, operating asynchronously and in parallel. High raw data throughput is achieved using a dedicated high-speed data transfer device available for the VMEbus. The device can attain a raw data rate of 2.5 million coincidence events per second for raw events which are 64 bits wide. Real-time data acquisition and pre-processing requirements can be met by about forty 20 MHz Motorola 68020/68881 processors.

  13. Partial fourier and parallel MR image reconstruction with integrated gradient nonlinearity correction.

    Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A

    2016-06-01

    To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by postreconstruction image domain based GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional postreconstruction correction. Phantom and in vivo results demonstrate that the integrated GNL correction reduces the image blurring introduced by the conventional GNL correction, while still correcting GNL-induced coarse-scale geometrical distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional postreconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016. © 2015 Wiley Periodicals, Inc.

  14. Fast MR image reconstruction for partially parallel imaging with arbitrary k-space trajectories.

    Ye, Xiaojing; Chen, Yunmei; Lin, Wei; Huang, Feng

    2011-03-01

    Both acquisition and reconstruction speed are crucial for magnetic resonance (MR) imaging in clinical applications. In this paper, we present a fast reconstruction algorithm for SENSE in partially parallel MR imaging with arbitrary k-space trajectories. The proposed method is a combination of variable splitting, the classical penalty technique and the optimal gradient method. Variable splitting and the penalty technique reformulate the SENSE model with sparsity regularization as an unconstrained minimization problem, which can be solved by alternating two simple minimizations: one is total variation and wavelet based denoising, which can be solved quickly by several recent numerical methods, whereas the other involves a linear inversion, which is solved by the optimal first-order gradient method in our algorithm to significantly improve performance. Comparisons with several recent parallel imaging algorithms indicate that the proposed method significantly improves computational efficiency and achieves state-of-the-art reconstruction quality.
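
    The variable-splitting/penalty scheme described above, alternating a shrinkage step with a linear solve, can be sketched on a toy l1-regularized least-squares problem (a hedged illustration with arbitrary sizes and parameters, not the authors' SENSE solver):

```python
import numpy as np

rng = np.random.default_rng(3)
# Split min_u ||Au - b||^2 + lam*||u||_1 into (u, w) with a quadratic
# penalty beta*||w - u||^2; alternate a shrinkage step in w with a
# linear solve in u.
A = rng.standard_normal((60, 50))
u_true = np.zeros(50)
u_true[[5, 17, 33]] = [2.0, -1.5, 3.0]    # sparse ground truth
b = A @ u_true

lam, beta = 0.01, 10.0
AtA, Atb = A.T @ A, A.T @ b
lhs = AtA + beta * np.eye(50)
u = np.zeros(50)
for _ in range(500):
    # w-step: soft-thresholding (shrinkage) with threshold lam/(2*beta)
    w = np.sign(u) * np.maximum(np.abs(u) - lam / (2 * beta), 0.0)
    # u-step: linear inversion (here a direct solve; the paper uses an
    # optimal first-order gradient method instead)
    u = np.linalg.solve(lhs, Atb + beta * w)

err = np.linalg.norm(u - u_true) / np.linalg.norm(u_true)
```

    The point of the splitting is that each subproblem is easy on its own: the shrinkage step has a closed form, and the linear step is a smooth quadratic.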

  15. Simulation of partially coherent light propagation using parallel computing devices

    Magalhães, Tiago C.; Rebordão, José M.

    2017-08-01

    Light acquires or loses coherence as it propagates, and coherence is one of the few optical observables: spectra can be derived from coherence functions, and the understanding of any interferometric experiment also relies on coherence functions. Beyond the two limiting cases (full coherence or incoherence), the coherence of light is always partial, and it changes with propagation. We have implemented a code to compute the propagation of partially coherent light from the source plane to the observation plane using parallel computing devices (PCDs). In this paper, we restrict the propagation to free space only. To this end, we used the Open Computing Language (OpenCL) and the open-source toolkit PyOpenCL, which gives access to OpenCL parallel computation through Python. To test our code, we chose two coherence source models: an incoherent source and a Gaussian Schell-model source. In the former case, we considered two different source shapes: circular and rectangular. The results were compared to the theoretical values. Our implemented code allows one to choose between the PyOpenCL implementation and a standard one, i.e. using the CPU only. To test the computation time for each implementation (PyOpenCL and standard), we used several computer systems with different CPUs and GPUs. We used powers of two for the dimensions of the cross-spectral density matrix (e.g. 32^4, 64^4), and a significant speed increase is observed in the PyOpenCL implementation when compared to the standard one. This can be an important tool for studying new source models.
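
    The link between an incoherent source and the coherence observed after propagation (the van Cittert-Zernike relation underlying the incoherent-source test case) can be sketched in plain NumPy, without the paper's PyOpenCL machinery; the grid size and source radius here are arbitrary:

```python
import numpy as np

# Van Cittert-Zernike sketch: for a spatially incoherent source, the
# complex degree of coherence in a distant plane is proportional to the
# Fourier transform of the source intensity distribution.
n = 256
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
intensity = (X**2 + Y**2 <= 0.25).astype(float)   # circular source, radius 0.5

# FFT with the source centered: ifftshift before, fftshift after.
mu = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(intensity)))
mu = np.abs(mu) / np.abs(mu).max()                # normalized degree of coherence
```

    For a circular source this yields the familiar Airy-type coherence profile: full coherence on axis, falling off with separation, with the coherence area shrinking as the source grows.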

  16. Microprocessor event analysis in parallel with Camac data acquisition

    Cords, D.; Eichler, R.; Riege, H.

    1981-01-01

    The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a Camac System (GEC-ELLIOTT System Crate) and shares the Camac access with a Nord-10S computer. Interfaces have been designed and tested for execution of Camac cycles, communication with the Nord-10S computer and DMA-transfer from Camac to the MIPROC-16 memory. The system is used in the JADE data-acquisition-system at PETRA where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the result of various checks is appended to the event. In case of spurious triggers or clear beam gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (orig.)

  17. Microprocessor event analysis in parallel with CAMAC data acquisition

    Cords, D; Riege, H

    1981-01-01

    The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a CAMAC System (GEC-ELLIOTT System Crate) and shares the CAMAC access with a Nord-10S computer. Interfaces have been designed and tested for execution of CAMAC cycles, communication with the Nord-10S computer and DMA-transfer from CAMAC to the MIPROC-16 memory. The system is used in the JADE data-acquisition-system at PETRA where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the results of various checks are appended to the event. In case of spurious triggers or clear beam gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (5 refs).

  18. Dynamic grid refinement for partial differential equations on parallel computers

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems. 6 refs

  19. Issues in developing parallel iterative algorithms for solving partial differential equations on a (transputer-based) distributed parallel computing system

    Rajagopalan, S.; Jethra, A.; Khare, A.N.; Ghodgaonkar, M.D.; Srivenkateshan, R.; Menon, S.V.G.

    1990-01-01

    Issues relating to implementing iterative procedures, for numerical solution of elliptic partial differential equations, on a distributed parallel computing system are discussed. Preliminary investigations show that a speed-up of about 3.85 is achievable on a four-transputer pipeline network. (author). 2 figs., 3 appendixes, 7 refs

  20. Parallel preprocessing in a nuclear data acquisition system

    Pichot, G.; Auriol, E.; Lemarchand, G.; Millaud, J.

    1977-01-01

    The appearance of microprocessors and large memory chips has somewhat modified the spectrum of tools available to the data acquisition system designer. This is particularly true in the nuclear research field, where the data flow has been growing continuously as a consequence of the increasing capabilities of new detectors. This paper deals with the insertion, between a data acquisition system and a computer, of a preprocessing structure based on microprocessors and large-capacity high-speed memories. The results show a significant improvement in several aspects of the operation of the system, with returns paying back the investment in 18 months.

  1. DAPHNE: a parallel multiprocessor data acquisition system for nuclear physics

    Welch, L.C.

    1984-01-01

    This paper describes a project to meet the data acquisition needs of a new accelerator, ATLAS, being built at Argonne National Laboratory. ATLAS is a heavy-ion linear superconducting accelerator providing beam energies up to 25 MeV/A with a relative spread in beam energy as good as .0001 and a time spread of less than 100 psec. Details about the hardware front end, command language, data structure, and the flow of event treatment are covered

  2. Single-Shot MR Spectroscopic Imaging with Partial Parallel Imaging

    Posse, Stefan; Otazo, Ricardo; Tsai, Shang-Yueh; Yoshimoto, Akio Ernesto; Lin, Fa-Hsuan

    2010-01-01

    An MR spectroscopic imaging (MRSI) pulse sequence based on Proton-Echo-Planar-Spectroscopic-Imaging (PEPSI) is introduced that measures 2-dimensional metabolite maps in a single excitation. Echo-planar spatial-spectral encoding was combined with interleaved phase encoding and parallel imaging using SENSE to reconstruct absorption mode spectra. The symmetrical k-space trajectory compensates phase errors due to convolution of spatial and spectral encoding. Single-shot MRSI at short TE was evaluated in phantoms and in vivo on a 3 T whole-body scanner equipped with a 12-channel array coil. Four-step interleaved phase encoding and 4-fold SENSE acceleration were used to encode a 16×16 spatial matrix with 390 Hz spectral width. Comparison with conventional PEPSI and PEPSI with 4-fold SENSE acceleration demonstrated comparable sensitivity per unit time when taking into account g-factor-related noise increases and differences in sampling efficiency. LCModel fitting enabled quantification of Inositol, Choline, Creatine and NAA in vivo with concentration values in the ranges measured with conventional PEPSI and SENSE-accelerated PEPSI. Cramer-Rao lower bounds were comparable to those obtained with conventional SENSE-accelerated PEPSI at the same voxel size and measurement time. This single-shot MRSI method is therefore suitable for applications that require high temporal resolution to monitor temporal dynamics or to reduce sensitivity to tissue movement. PMID:19097245
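
    The SENSE unfolding at the heart of this acceleration can be sketched on a toy example (illustrative only; the coil count, reduction factor, and sensitivity values are invented): with R = 2, two object pixels alias onto one measured pixel, and the coil sensitivity matrix is inverted in the least-squares sense to separate them.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy SENSE unfolding: each coil sees a sensitivity-weighted sum of the
# R pixels that fold onto the same aliased location.
n_coils, R = 4, 2
S = rng.standard_normal((n_coils, R)) + 1j * rng.standard_normal((n_coils, R))
true_pixels = np.array([1.0 + 0.5j, -0.3 + 2.0j])

aliased = S @ true_pixels                        # folded coil measurements
est, *_ = np.linalg.lstsq(S, aliased, rcond=None)  # least-squares unfold
```

    With more coils than the reduction factor, the system is overdetermined, and the conditioning of S (expressed by the g-factor) governs the noise amplification of the unfold.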

  3. DAPHNE: a parallel multiprocessor data acquisition system for nuclear physics. [Data Acquisition by Parallel Histogramming and NEtworking

    Welch, L.C.

    1984-01-01

    This paper describes a project to meet these data acquisition needs for a new accelerator, ATLAS, being built at Argonne National Laboratory. ATLAS is a heavy-ion linear superconducting accelerator providing beam energies up to 25 MeV/A with a relative spread in beam energy as good as .0001 and a time spread of less than 100 psec. Details about the hardware front end, command language, data structure, and the flow of event treatment are covered.

  4. Partial Overhaul and Initial Parallel Optimization of KINETICS, a Coupled Dynamics and Chemistry Atmosphere Model

    Nguyen, Howard; Willacy, Karen; Allen, Mark

    2012-01-01

    KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivates the adaptation from a serial version to a parallelized one. Although the initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before starting on the parallel optimization process, a partial overhaul was required because a large emphasis was placed on streamlining the code for user convenience and revising the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains are dependent on the size of the data, the number of processors requested, and the computer used.

  5. Characterization of Harmonic Signal Acquisition with Parallel Dipole and Multipole Detectors

    Park, Sung-Gun; Anderson, Gordon A.; Bruce, James E.

    2018-04-01

    Fourier transform ion cyclotron resonance mass spectrometry (FTICR-MS) is a powerful instrument for the study of complex biological samples due to its high resolution and mass measurement accuracy. However, the relatively long signal acquisition periods needed to achieve high resolution can serve to limit applications of FTICR-MS. The use of multiple pairs of detector electrodes enables detection of harmonic frequencies present at integer multiples of the fundamental cyclotron frequency, and the obtained resolving power for a given acquisition period increases linearly with the order of harmonic signal. However, harmonic signal detection also increases spectral complexity and presents challenges for interpretation. In the present work, ICR cells with independent dipole and harmonic detection electrodes and preamplifiers are demonstrated. A benefit of this approach is the ability to independently acquire fundamental and multiple harmonic signals in parallel using the same ions under identical conditions, enabling direct comparison of achieved performance as parameters are varied. Spectra from harmonic signals showed generally higher resolving power than spectra acquired with fundamental signals and equal signal duration. In addition, the maximum observed signal to noise (S/N) ratio from harmonic signals exceeded that of fundamental signals by 50 to 100%. Finally, parallel detection of fundamental and harmonic signals enables deconvolution of overlapping harmonic signals since observed fundamental frequencies can be used to unambiguously calculate all possible harmonic frequencies. Thus, the present application of parallel fundamental and harmonic signal acquisition offers a general approach to improve utilization of harmonic signals to yield high-resolution spectra with decreased acquisition time.
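
    The deconvolution idea in the last point, using observed fundamental frequencies to enumerate and match all possible harmonic frequencies, can be sketched as follows (the frequencies, tolerance, and order limit are illustrative):

```python
# Sketch: predict harmonic peaks from fundamental cyclotron frequencies
# and flag which observed harmonic-spectrum peaks each one explains.
def harmonic_candidates(fundamentals_hz, max_order=3):
    """All integer multiples (orders 2..max_order) of each fundamental."""
    return {
        (f, k): k * f
        for f in fundamentals_hz
        for k in range(2, max_order + 1)
    }

def assign_peaks(observed_hz, fundamentals_hz, tol_hz=0.5, max_order=3):
    """Map each observed peak to the (fundamental, order) pairs it matches."""
    cands = harmonic_candidates(fundamentals_hz, max_order)
    return {
        p: [key for key, f in cands.items() if abs(f - p) <= tol_hz]
        for p in observed_hz
    }

fund = [100_000.0, 150_000.0]       # fundamentals from the dipole detector
obs = [200_000.0, 300_000.0]        # peaks seen in the harmonic spectrum
assignment = assign_peaks(obs, fund)
```

    Here the 300 kHz peak is ambiguous on its own (3rd harmonic of 100 kHz or 2nd of 150 kHz), which is exactly the overlap that the parallel fundamental acquisition resolves.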

  6. Passive and partially active fault tolerance for massively parallel stream processing engines

    Su, Li; Zhou, Yongluan

    2018-01-01

    . On the other hand, an active approach usually employs backup nodes to run replicated tasks. Upon failure, the active replica can take over the processing of the failed task with minimal latency. However, both approaches have their own inadequacies in Massively Parallel Stream Processing Engines (MPSPE...... also propose effective and efficient algorithms to optimize a partially active replication plan to maximize the quality of tentative outputs. We implemented PPA on top of Storm, an open-source MPSPE and conducted extensive experiments using both real and synthetic datasets to verify the effectiveness...

  7. Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions

    Buddala, Santhoshi Snigdha

    Since the industrial revolution, fossil fuels like petroleum, coal, oil, and natural gas, along with other non-renewable energy sources, have been used as the primary energy source. The consumption of fossil fuels releases various harmful gases into the atmosphere as byproducts which are hazardous in nature; they tend to deplete the protective layers of the atmosphere and affect the overall environmental balance. Fossil fuels are also bounded resources of energy, and the rapid depletion of these sources has prompted the need to investigate alternate sources of energy, called renewable energy. One such promising source of renewable energy is solar/photovoltaic energy. This work focuses on investigating a new solar array architecture with solar cells connected in a parallel configuration. While retaining the structural simplicity of the parallel architecture, a theoretical small-signal model of the solar cell is proposed and used to analyze the variations in the module parameters when subjected to partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter and evaluating the performance of the architecture in terms of efficiency by comparing it with the traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.
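
    The effect of partial shading on a parallel string, where all cells share one voltage and their currents add, can be sketched with a simplified single-diode model (all parameter values are illustrative, not from this work):

```python
import numpy as np

def cell_current(v, irradiance, i_sc=3.0, i_0=1e-9, n_vt=0.026):
    """Simplified single-diode cell: photocurrent scales with irradiance,
    minus the exponential diode term (series/shunt resistance ignored)."""
    return irradiance * i_sc - i_0 * (np.exp(v / n_vt) - 1.0)

# In a parallel configuration all cells see the same voltage; the
# string current is the sum of the individual cell currents.
v = np.linspace(0.0, 0.6, 500)
i_full = sum(cell_current(v, 1.0) for _ in range(4))           # no shading
i_shaded = cell_current(v, 1.0) * 3 + cell_current(v, 0.4)     # one cell at 40%

p_full = np.max(v * i_full)      # maximum power point, unshaded
p_shaded = np.max(v * i_shaded)  # maximum power point, partially shaded
```

    Because the shaded cell only subtracts its lost photocurrent rather than choking the whole string (as in a series connection), the power loss is roughly proportional to the lost irradiance, which is one argument for the parallel architecture studied here.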

  8. Modeling, realization and evaluation of a parallel architecture for the data acquisition in multidetectors

    Guirande, Ph.; Aleonard, M-M.; Dien, Q-T.; Pedroza, J-L.

    1997-01-01

    The efficiency increase in 4π detectors (EUROGAM, EUROBALL, DIAMANT) is achieved by an increase in granularity, and hence in the event counting rate seen by the acquisition system. Consequently, the architecture of the readout, coding and software systems must evolve. To achieve the required throughput we have implemented a parallel architecture to check the quality of the events. The first application of this architecture was an improved data acquisition system for the DIAMANT multidetector. The DIAMANT data acquisition system is based on a set of VME cards which must manage event readout, storage on magnetic media and histogram construction. The ensemble consists of processors distributed in a network, a workstation to control the experiment and a display system for spectra and arrays. In such an architecture the VME bus quickly becomes a performance bottleneck, not only for data transfer but also for the coordination of the different processors. The parallel architecture used relieves the VME bus. It is based on three C40 DSPs (Digital Signal Processors) implemented on a commercial (LSI) VME card, provided with an external bus used to read the raw data from an interface card (ROCVI) between the 32-bit ECL bus and the real-time VME-based encoders. The tests performed revealed deadlock after data exchanges between the processors using two communication lines. Analysis of this problem indicated the need for dynamic task reallocation to avoid this blocking. Intrinsic evaluation (i.e., without transfer on the VME bus) has been carried out for two parallel topologies (processor farm and tree). Simulation software was used to generate event packets. The obtained rates are essentially equivalent (6 MB/s) independent of topology. The farm topology was chosen because it is simple to implement. The load evaluation reduced the rate in 'simplex' communication mode to 5.3 MB/s and

  9. Parallel imaging: is GRAPPA a useful acquisition tool for MR imaging intended for volumetric brain analysis?

    Frank Anders

    2009-08-01

    Full Text Available Abstract Background The work presented here investigates parallel imaging applied to T1-weighted high-resolution imaging for use in longitudinal volumetric clinical studies involving Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI) patients. This was in an effort to shorten acquisition times and so minimise the risk of motion artefacts caused by patient discomfort and disorientation. The principal question is, "Can parallel imaging be used to acquire images at 1.5 T of sufficient quality to allow volumetric analysis of patient brains?" Methods Optimisation studies were performed on a young healthy volunteer, and the selected protocol (including the use of two different parallel imaging acceleration factors) was then tested on a cohort of 15 elderly volunteers including MCI and AD patients. In addition to automatic brain segmentation, hippocampus volumes were manually outlined and measured in all patients. The 15 patients were scanned on a second occasion approximately one week later using the same protocol and evaluated in the same manner to test repeatability of measurement using images acquired with the GRAPPA parallel imaging technique applied to the MPRAGE sequence. Results Intraclass correlation tests show almost perfect agreement between repeated measurements of both segmented brain parenchyma fraction and regional measurements of the hippocampi. The protocol is suitable for both global and regional volumetric measurement in dementia patients. Conclusion In summary, these results indicate that parallel imaging can be used without detrimental effect on brain tissue segmentation and volumetric measurement and should be considered for both clinical and research studies where longitudinal measurements of brain tissue volumes are of interest.

  10. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

    We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors, and transfers data via Ethernet. This technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remotely controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG sensor based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control for data acquisition rate optimization. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased by several times without affecting the system operating performance on both local data acquisition and remote process control.
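The coordination idea behind self-adaptive sampling can be sketched as a toy producer/consumer loop (the thresholds, rates, and buffer sizes below are illustrative assumptions, not the paper's algorithm): the local acquisition rate is throttled by how full the transfer buffer is, so acquisition stays matched to what the remote link can consume.

```python
def adapt_rate(rate, buffer_fill, capacity, min_rate=125, max_rate=8000):
    """Self-adaptive sampling sketch (illustrative, not the paper's algorithm):
    back off when the transfer buffer fills, speed up when it drains."""
    fill = buffer_fill / capacity
    if fill >= 0.75:          # remote side is falling behind: sample slower
        rate = max(min_rate, rate // 2)
    elif fill <= 0.25:        # plenty of headroom: sample faster
        rate = min(max_rate, rate * 2)
    return rate

def simulate(steps=50, drain_per_step=2000, capacity=8000):
    """Toy loop: local FBG acquisition deposits `rate` samples per step;
    the Ethernet link drains a fixed amount per step."""
    rate, buf, trace = 1000, 0, []
    for _ in range(steps):
        buf = min(capacity, buf + rate)      # acquire (buffer saturates)
        buf = max(0, buf - drain_per_step)   # remote side consumes
        rate = adapt_rate(rate, buf, capacity)
        trace.append((rate, buf))
    return trace

trace = simulate()
# The sampling rate settles into a small oscillation around the link's
# drain capacity, and the buffer stays bounded instead of overflowing.
```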

  11. Evaluation of Parallel and Fan-Beam Data Acquisition Geometries and Strategies for Myocardial SPECT Imaging

    Qi, Yujin; Tsui, B. M. W.; Gilland, K. L.; Frey, E. C.; Gullberg, G. T.

    2004-06-01

    This study evaluates myocardial SPECT images obtained from parallel-hole (PH) and fan-beam (FB) collimator geometries using both circular-orbit (CO) and noncircular-orbit (NCO) acquisitions. A newly developed 4-D NURBS-based cardiac-torso (NCAT) phantom was used to simulate 99mTc-sestamibi uptake in the human torso with myocardial defects in the left ventricular (LV) wall. Two phantoms were generated to simulate patients with thick and thin body builds. Projection data including the effects of attenuation, collimator-detector response and scatter were generated using SIMSET Monte Carlo simulations. A large number of photon histories were generated such that the projection data were close to noise-free. Poisson noise fluctuations were then added to simulate the count densities found in clinical data. Noise-free and noisy projection data were reconstructed using the iterative OS-EM reconstruction algorithm with attenuation compensation. The reconstructed images from noisy projection data show that the noise levels are lower for the FB than for the PH collimator due to the increase in detected counts. The NCO acquisition method provides slightly better resolution and a small improvement in defect contrast compared to the CO acquisition method in noise-free reconstructed images. Despite lower projection counts, the NCO shows the same noise level as the CO in the attenuation-corrected reconstructed images. The results from the channelized Hotelling observer (CHO) study show that the FB collimator is superior to the PH collimator in myocardial defect detection, but the NCO shows no statistically significant difference from the CO for either the PH or FB collimator. In conclusion, our results indicate that data acquisition using NCO makes a very small improvement in resolution over CO for myocardial SPECT imaging. This small improvement does not make a significant difference in myocardial defect detection. 
However, an FB collimator provides better defect detection than a
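The OS-EM algorithm used for the reconstructions above can be sketched on a toy 1D problem (the system matrix, sizes, and noise-free data below are illustrative assumptions, not the study's simulation): each sub-iteration applies the multiplicative EM update using only a subset of the projection rows, which gives OS-EM its characteristic speed-up over plain MLEM.

```python
import numpy as np

# Toy OS-EM sketch: a small system matrix A maps a 1D "activity" vector
# to projection counts y = A @ x (noise-free, so the iterate should
# reproduce the projections closely).
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(8, 4))      # 8 projection bins, 4 pixels
x_true = np.array([1.0, 4.0, 2.0, 0.5])
y = A @ x_true

def os_em(A, y, n_iter=200, n_subsets=2):
    """Ordered-subsets EM: multiplicative update per subset of rows;
    the update preserves non-negativity of the estimate."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            x *= (As.T @ (ys / (As @ x))) / As.sum(axis=0)
    return x

x_hat = os_em(A, y)
print(np.round(x_hat, 3))   # close to x_true for consistent, noise-free data
```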

  12. The role of contextual associations in producing the partial reinforcement acquisition deficit.

    Miguez, Gonzalo; Witnauer, James E; Miller, Ralph R

    2012-01-01

    Three conditioned suppression experiments with rats as subjects assessed the contributions of the conditioned stimulus (CS)-context and context-unconditioned stimulus (US) associations to the degraded stimulus control by the CS that is observed following partial reinforcement relative to continuous reinforcement training. In Experiment 1, posttraining associative deflation (i.e., extinction) of the training context after partial reinforcement restored responding to a level comparable to the one produced by continuous reinforcement. In Experiment 2, posttraining associative inflation of the context (achieved by administering unsignaled outcome presentations in the context) enhanced the detrimental effect of partial reinforcement. Experiment 3 found that the training context must be an effective competitor to produce the partial reinforcement acquisition deficit. When the context was down-modulated, the target regained behavioral control thereby demonstrating higher-order retrospective revaluation. The results are discussed in terms of retrospective revaluation, and are used to contrast the predictions of a performance-focused model with those of an acquisition-focused model. (c) 2012 APA, all rights reserved.

  13. Parallels between control PDE's (Partial Differential Equations) and systems of ODE's (Ordinary Differential Equations)

    Hunt, L. R.; Villarreal, Ramiro

    1987-01-01

    System theorists understand that the same mathematical objects which determine controllability for nonlinear control systems of ordinary differential equations (ODEs) also determine hypoellipticity for linear partial differential equations (PDEs). Moreover, almost any study of ODE systems begins with linear systems. It is remarkable that Hormander's paper on hypoellipticity of second-order linear PDEs starts with equations due to Kolmogorov, which are shown to be analogous to the linear PDEs. Eigenvalue placement by state feedback for a controllable linear system can be paralleled for a Kolmogorov equation if an appropriate type of feedback is introduced. Results concerning transformations of nonlinear systems to linear systems are similar to results for transforming a linear PDE to a Kolmogorov equation.
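The parallel drawn in this abstract can be made concrete with a standard example (a textbook fact, not quoted from the paper): Kolmogorov's equation is hypoelliptic even though its second-order part is degenerate, and the Lie brackets that verify Hormander's condition are exactly those appearing in the Lie-algebra rank condition for controllability of the associated control system.

```latex
% Kolmogorov's equation (degenerate second-order part, yet hypoelliptic):
u_t = u_{xx} + x\,u_y , \qquad X_1 = \partial_x , \quad X_0 = x\,\partial_y .
% Hormander's bracket condition: X_1 and [X_1, X_0] = \partial_y span the
% plane, which is the Lie-algebra rank (controllability) condition for the
% associated system \dot{z} = X_0(z) + u\,X_1(z).
```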

  14. The design and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1987-05-01

    The ever increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of ''events,'' is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator, and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, CPU power equivalent to several VAXs is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 11/750 on which an extensive user-friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed

  15. The design, creation, and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1986-01-01

    The ever increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of ''events'', is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator, and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, multi-VAX CPU power is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 750 on which an extensive user-friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed. 5 refs., 2 figs

  16. Rapid musculoskeletal magnetic resonance imaging using integrated parallel acquisition techniques (IPAT) - Initial experiences

    Romaneehsen, B.; Oberholzer, K.; Kreitner, K.-F.; Mueller, L.P.

    2003-01-01

    Purpose: To investigate the feasibility of using multiple receiver coil elements for time-saving integrated parallel imaging techniques (iPAT) in traumatic musculoskeletal disorders. Material and methods: 6 patients with traumatic derangements of the knee, ankle and hip underwent MR imaging at 1.5 T. For signal detection at the knee and ankle, we used a 6-channel body array coil placed around the joints; for hip imaging, two 4-channel body array coils and two elements of the spine array coil were combined for signal detection. All patients were investigated with a standard imaging protocol that mainly consisted of different turbo spin-echo sequences (PD- and T2-weighted TSE with and without fat suppression, STIR). All sequences were repeated with an integrated parallel acquisition technique (iPAT) using a modified sensitivity encoding (mSENSE) technique with an acceleration factor of 2. Overall image quality was subjectively assessed using a five-point scale, as well as the ability to detect pathologic findings. Results: Regarding overall image quality, there were no significant differences between standard imaging and imaging using mSENSE. All pathologies (occult fracture, meniscal tear, torn and interpositioned Hoffa's cleft, cartilage damage) were detected by both techniques. iPAT led to a 48% reduction of acquisition time compared with the standard technique. Additionally, the time savings with iPAT led to a decrease of pain-induced motion artifacts in two cases. Conclusion: In times of increasing cost pressure, iPAT using multiple coil elements seems to be an efficient and economic tool for fast musculoskeletal imaging with diagnostic performance comparable to conventional techniques. (orig.)

  17. High temporal resolution magnetic resonance imaging: development of a parallel three dimensional acquisition method for functional neuroimaging

    Rabrait, C.

    2007-11-01

    Echo Planar Imaging is widely used to perform data acquisition in functional neuroimaging. This sequence allows the acquisition of a set of about 30 slices, covering the whole brain, at a spatial resolution ranging from 2 to 4 mm and a temporal resolution ranging from 1 to 2 s. It is thus well adapted to the mapping of activated brain areas but does not allow precise study of brain dynamics. Moreover, temporal interpolation is needed in order to correct for inter-slice delays, and 2-dimensional acquisition is subject to vascular inflow artifacts. To improve the estimation of the hemodynamic response functions associated with activation, this thesis aimed at developing a 3-dimensional high temporal resolution acquisition method. To do so, Echo Volume Imaging was combined with reduced field-of-view acquisition and parallel imaging. Indeed, E.V.I. allows the acquisition of a whole volume in Fourier space following a single excitation, but it requires very long echo trains. Parallel imaging and field-of-view reduction are used to reduce the echo train durations by a factor of 4, which allows the acquisition of a 3-dimensional brain volume with limited susceptibility-induced distortions and signal losses, in 200 ms. All imaging parameters have been optimized in order to reduce echo train durations and to maximize S.N.R., so that cerebral activation can be detected with a high level of confidence. Robust detection of brain activation was demonstrated with both visual and auditory paradigms. High temporal resolution hemodynamic response functions could be estimated through selective averaging of the response to the different trials of the stimulation. To further improve S.N.R., the matrix inversions required in parallel reconstruction were regularized, and the impact of the level of regularization on activation detection was investigated. Eventually, potential applications of parallel E.V.I. such as the study of non-stationary effects in the B.O.L.D. response

  18. Rapid musculoskeletal magnetic resonance imaging using integrated parallel acquisition techniques (IPAT) - Initial experiences

    Romaneehsen, B.; Oberholzer, K.; Kreitner, K.-F. [Johannes Gutenberg-Univ. Mainz (Germany). Klinik und Poliklinik fuer Radiologie; Mueller, L.P. [Johannes Gutenberg-Univ. Mainz (Germany). Klinik und Poliklinik fuer Unfallchirurgie

    2003-09-01

    Purpose: To investigate the feasibility of using multiple receiver coil elements for time-saving integrated parallel imaging techniques (iPAT) in traumatic musculoskeletal disorders. Material and methods: 6 patients with traumatic derangements of the knee, ankle and hip underwent MR imaging at 1.5 T. For signal detection at the knee and ankle, we used a 6-channel body array coil placed around the joints; for hip imaging, two 4-channel body array coils and two elements of the spine array coil were combined for signal detection. All patients were investigated with a standard imaging protocol that mainly consisted of different turbo spin-echo sequences (PD- and T2-weighted TSE with and without fat suppression, STIR). All sequences were repeated with an integrated parallel acquisition technique (iPAT) using a modified sensitivity encoding (mSENSE) technique with an acceleration factor of 2. Overall image quality was subjectively assessed using a five-point scale, as well as the ability to detect pathologic findings. Results: Regarding overall image quality, there were no significant differences between standard imaging and imaging using mSENSE. All pathologies (occult fracture, meniscal tear, torn and interpositioned Hoffa's cleft, cartilage damage) were detected by both techniques. iPAT led to a 48% reduction of acquisition time compared with the standard technique. Additionally, the time savings with iPAT led to a decrease of pain-induced motion artifacts in two cases. Conclusion: In times of increasing cost pressure, iPAT using multiple coil elements seems to be an efficient and economic tool for fast musculoskeletal imaging with diagnostic performance comparable to conventional techniques. (orig.) [German] Purpose: To use integrated parallel acquisition techniques (iPAT) to shorten examination time in musculoskeletal injuries. Material and methods: Six patients with knee, ankle or hip trauma were examined at 1.5 T

  19. Fast magnetic resonance imaging of the knee using a parallel acquisition technique (mSENSE): a prospective performance evaluation

    Kreitner, K.F.; Romaneehsen, Bernd; Oberholzer, Katja; Dueber, Christoph; Krummenauer, Frank; Mueller, L.P.

    2006-01-01

    The performance of a magnetic resonance (MR) imaging strategy that uses multiple receiver coil elements and integrated parallel imaging techniques (iPAT) in traumatic and degenerative disorders of the knee was evaluated and compared with a standard MR imaging protocol. Ninety patients with suspected internal derangements of the knee joint prospectively underwent MR imaging at 1.5 T. For signal detection, a 6-channel array coil was used. All patients were investigated with a standard imaging protocol consisting of different turbo spin-echo sequences (proton density (PD)- and T2-weighted TSE, with and without fat suppression) in three imaging planes. All sequences were repeated with an integrated parallel acquisition technique (iPAT) using the modified sensitivity encoding (mSENSE) algorithm with an acceleration factor of 2. Two radiologists independently evaluated and scored all images with regard to overall image quality, artefacts and pathologic findings. Agreement of the parallel ratings between readers and between imaging techniques was evaluated by means of pairwise kappa coefficients stratified for the area of evaluation. Agreement between the parallel readers for both the iPAT and the conventional technique, as well as between imaging techniques, was encouraging, with inter-observer kappa values ranging between 0.78 and 0.98 for both imaging techniques, and inter-method kappa values ranging between 0.88 and 1.00 for both clinical readers. All pathological findings (e.g. occult fractures, meniscal and cruciate ligament tears, torn and interpositioned Hoffa's cleft, cartilage damage) were detected by both techniques with comparable performance. The use of iPAT led to a 48% reduction of acquisition time compared with the standard technique. Parallel imaging using mSENSE proved to be an efficient and economic tool for fast musculoskeletal MR imaging of the knee joint with comparable

  20. Fast Time and Space Parallel Algorithms for Solution of Parabolic Partial Differential Equations

    Fijany, Amir

    1993-01-01

    In this paper, fast time- and space-parallel algorithms for the solution of linear parabolic PDEs are developed. It is shown that the seemingly strictly serial iterations of the time-stepping procedure for solution of the problem can be completely decoupled.
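The decoupling claim can be illustrated with a minimal sketch (this is a standard spectral argument, not the paper's specific algorithm): in the eigenbasis of the 1D Laplacian, each backward-Euler step for u_t = u_xx multiplies one sine-mode coefficient by an independent scalar factor, so all time steps collapse to a closed form per mode and the modes can be advanced in parallel.

```python
import numpy as np

# Backward-Euler for u_t = u_xx on sine modes: mode k has eigenvalue -k^2,
# so one step multiplies its coefficient by 1 / (1 + dt * k^2).
N, dt, T = 64, 1e-3, 100
k = np.arange(1, N + 1)                 # sine-mode wavenumbers
a0 = np.zeros(N)
a0[0], a0[2] = 1.0, 0.3                 # initial data: sin(x) + 0.3 sin(3x)

decay = 1.0 / (1.0 + dt * k**2)         # per-mode backward-Euler factor
aT = a0 * decay**T                      # all T time steps at once, fully decoupled

# Sequential time-stepping gives the identical result, one dt at a time.
a_seq = a0.copy()
for _ in range(T):
    a_seq *= decay
```

Since each mode evolves independently, the "serial" time loop is just a per-mode power, which a parallel machine can evaluate for all modes and all time points simultaneously.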

  1. Real-time data acquisition and parallel data processing solution for TJ-II Bolometer arrays diagnostic

    Barrera, E. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain)]. E-mail: eduardo.barrera@upm.es; Ruiz, M. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Lopez, S. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Machon, D. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain); Ochando, M. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain)

    2006-07-15

    Maps of local plasma emissivity of TJ-II plasmas are determined using three-array cameras of silicon photodiodes (AXUV type from IRD). They are assigned to the top and side ports of the same sector of the vacuum vessel. Each array consists of 20 unfiltered detectors. The signals from each of these detectors are the inputs to an iterative algorithm of tomographic reconstruction. Currently, these signals are acquired by a standard PXI system at approximately 50 kS/s, with 12 bits of resolution, and are stored for off-line processing. A 0.5 s discharge generates 3 Mbytes of raw data. The algorithm's load exceeds the CPU capacity of the PXI system's controller in continuous mode, making it unfeasible to process the samples in parallel with their acquisition in a standard PXI system. A new architecture model has been developed, making it possible to add one or several processing cards to a standard PXI system. With this model, it is possible to define how to distribute, in real time, the data from all acquired signals in the system among the processing cards and the PXI controller. This way, by distributing the data processing among the system controller and two processing cards, the data processing can be done in parallel with the acquisition. Hence, this system configuration would be able to measure even in long-pulse devices.
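The overlap of acquisition and processing described above can be sketched with a worker pool (all names and numbers below are illustrative, not the TJ-II code): while the main thread "acquires" blocks of bolometer samples, worker threads standing in for the processing cards consume already-acquired blocks.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

rng = np.random.default_rng(1)

def acquire_block(i, n_channels=20, n_samples=1000):
    """Stand-in for one PXI read: 20 detector channels per camera array."""
    return rng.normal(size=(n_channels, n_samples))

def process_block(block):
    """Stand-in for one step of the tomographic algorithm (here, per-channel means)."""
    return block.mean(axis=1)

# Two workers play the role of the two processing cards: each acquired block
# is handed off immediately, so processing runs in parallel with acquisition.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(process_block, acquire_block(i)) for i in range(8)]
    results = [f.result() for f in futures]
```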

  2. Parallel image-acquisition in continuous-wave electron paramagnetic resonance imaging with a surface coil array: Proof-of-concept experiments

    Enomoto, Ayano; Hirata, Hiroshi

    2014-02-01

    This article describes a feasibility study of parallel image-acquisition using a two-channel surface coil array in continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Parallel EPR imaging was performed by multiplexing of EPR detection in the frequency domain. The parallel acquisition system consists of two surface coil resonators and radiofrequency (RF) bridges for EPR detection. To demonstrate the feasibility of this method of parallel image-acquisition with a surface coil array, three-dimensional EPR imaging was carried out using a tube phantom. Technical issues in the multiplexing method of EPR detection were also clarified. We found that degradation in the signal-to-noise ratio due to the interference of RF carriers is a key problem to be solved.
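The frequency-domain multiplexing idea can be sketched numerically (carrier frequencies and amplitudes below are illustrative assumptions): two channels' signals ride on distinct carriers in one digitized stream, and the receiver separates them in the Fourier domain.

```python
import numpy as np

fs, n = 4096.0, 4096                     # 1 s of data; 1 Hz frequency bins
t = np.arange(n) / fs
f1, f2 = 1000.0, 1500.0                  # assumed carrier frequencies (Hz)
ch1, ch2 = 0.8, 0.3                      # per-coil signal amplitudes
mixed = ch1 * np.cos(2 * np.pi * f1 * t) + ch2 * np.cos(2 * np.pi * f2 * t)

# One-sided amplitude spectrum: each channel reappears at its own carrier.
spec = np.abs(np.fft.rfft(mixed)) * 2 / n
freqs = np.fft.rfftfreq(n, 1 / fs)
amp1 = spec[np.argmin(np.abs(freqs - f1))]
amp2 = spec[np.argmin(np.abs(freqs - f2))]
```

With ideal carriers the two amplitudes are recovered exactly; in practice, interference between the RF carriers degrades the signal-to-noise ratio, which is the key problem the article identifies.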

  3. VIBE with parallel acquisition technique - a novel approach to dynamic contrast-enhanced MR imaging of the liver

    Dobritz, M.; Radkow, T.; Bautz, W.; Fellner, F.A.; Nittka, M.

    2002-01-01

    Purpose: The VIBE (volume interpolated breath-hold examination) sequence in combination with parallel acquisition technique (iPAT: integrated parallel acquisition technique) allows dynamic contrast-enhanced MRI of the liver with high temporal and spatial resolution. The aim of this study was to obtain first clinical experience with this technique for the detection and characterization of focal liver lesions. Materials and Methods: We examined 10 consecutive patients using a 1.5 T MR system (gradient field strength 30 mT/m) with a phased-array coil combination. Following sequences- were acquired: T 2 -w TSE and T 1 -w FLASH, after administration of gadolinium, 6 VIBE sequences with iPAT (TR/TE/matrix/partition thickness/time of acquisition: 6.2 ms/ 3.2 ms/256 x 192/4 mm/13 s), as well as T 1 -weighted FLASH with fat saturation. Two observers evaluated the different sequences concerning the number of lesions and their dignity. Following lesions were found: hepatocellular carcinoma (5 patients), hemangioma (2), metastasis (1), cyst (1), adenoma (1). Results: The VIBE sequences were superior for the detection of lesions with arterial hyperperfusion with a total of 33 focal lesions. 21 lesions were found with T 2 -w TSE and 20 with plain T 1 -weighted FLASH. Diagnostic accuracy increased with the VIBE sequence in comparison to the other sequences. Conclusion: VIBE with iPAT allows MR imaging of the liver with high spatial and temporal resolution providing dynamic contrast-enhanced information about the whole liver. This may lead to improved detection of liver lesions, especially hepatocellular carcinoma. (orig.) [de

  4. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Yaser Afshar

    Full Text Available Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  5. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  6. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
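The sub-image decomposition used by such distributed methods can be sketched in a few lines (a generic halo/tile scheme with a simple local filter standing in for the segmentation step; this is not the paper's Discrete Region Competition code): each tile carries a one-pixel halo so that a local neighbourhood operation computed per tile agrees exactly with the same operation on the whole image.

```python
import numpy as np

def split_with_halo(img, tiles=4, halo=1):
    """Split rows into tiles; each tile keeps `halo` extra boundary rows."""
    bounds = np.linspace(0, img.shape[0], tiles + 1, dtype=int)
    return [(lo, hi, img[max(0, lo - halo):hi + halo])
            for lo, hi in zip(bounds, bounds[1:])]

def local_filter(tile):
    """Per-tile 3-row vertical mean; the halo supplies missing neighbours.
    Rows without both neighbours are copied unchanged."""
    out = tile.astype(float).copy()
    out[1:-1] = (tile[:-2] + tile[1:-1] + tile[2:]) / 3.0
    return out

def process_distributed(img):
    """Filter each tile independently (as separate 'nodes' would), then
    stitch only the real rows back together."""
    out = np.empty_like(img, dtype=float)
    for lo, hi, tile in split_with_halo(img):
        filt = local_filter(tile)
        off = lo - max(0, lo - 1)      # where the real rows start in the tile
        out[lo:hi] = filt[off:off + (hi - lo)]
    return out

img = np.arange(64, dtype=float).reshape(16, 4)
# Tile-wise processing with halos reproduces the whole-image result exactly.
```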

  7. Improvement of the repeatability of parallel transmission at 7T using interleaved acquisition in the calibration scan.

    Kameda, Hiroyuki; Kudo, Kohsuke; Matsuda, Tsuyoshi; Harada, Taisuke; Iwadate, Yuji; Uwano, Ikuko; Yamashita, Fumio; Yoshioka, Kunihiro; Sasaki, Makoto; Shirato, Hiroki

    2017-12-04

    Respiration-induced phase shift affects B0/B1+ mapping repeatability in parallel transmission (pTx) calibration for 7T brain MRI, but is improved by breath-holding (BH). However, BH cannot be applied during long scans. To examine whether interleaved acquisition during calibration scanning could improve pTx repeatability and image homogeneity. Prospective. Nine healthy subjects. 7T MRI with a two-channel RF transmission system was used. Calibration scanning for B0/B1+ mapping was performed under sequential acquisition/free-breathing (Seq-FB), Seq-BH, and interleaved acquisition/FB (Int-FB) conditions. The B0 map was calculated with two echo times, and the B1+ map was obtained using the Bloch-Siegert method. Actual flip-angle imaging (AFI) and gradient echo (GRE) imaging were performed using pTx and quadrature-Tx (qTx). All scans were acquired in five sessions. Repeatability was evaluated using intersession standard deviation (SD) or coefficient of variation (CV), and in-plane homogeneity was evaluated using the in-plane CV. A paired t-test with Bonferroni correction for multiple comparisons was used. The intersession CV/SDs for the B0/B1+ maps were significantly smaller in Int-FB than in Seq-FB, and smaller in Int-FB, Seq-BH, and qTx than in Seq-FB; the in-plane CVs of AFI and GRE in Seq-FB, Int-FB, and Seq-BH were significantly smaller than in qTx (Bonferroni-corrected P < 0.01 for all). Using interleaved acquisition during calibration scans of pTx for 7T brain MRI improved the repeatability of B0/B1+ mapping, AFI, and GRE images, without BH. 1 Technical Efficacy Stage 1 J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  8. Single-shot magnetic resonance spectroscopic imaging with partial parallel imaging.

    Posse, Stefan; Otazo, Ricardo; Tsai, Shang-Yueh; Yoshimoto, Akio Ernesto; Lin, Fa-Hsuan

    2009-03-01

    A magnetic resonance spectroscopic imaging (MRSI) pulse sequence based on proton-echo-planar-spectroscopic-imaging (PEPSI) is introduced that measures two-dimensional metabolite maps in a single excitation. Echo-planar spatial-spectral encoding was combined with interleaved phase encoding and parallel imaging using SENSE to reconstruct absorption mode spectra. The symmetrical k-space trajectory compensates phase errors due to convolution of spatial and spectral encoding. Single-shot MRSI at short TE was evaluated in phantoms and in vivo on a 3-T whole-body scanner equipped with a 12-channel array coil. Four-step interleaved phase encoding and fourfold SENSE acceleration were used to encode a 16 x 16 spatial matrix with a 390-Hz spectral width. Comparison with conventional PEPSI and PEPSI with fourfold SENSE acceleration demonstrated comparable sensitivity per unit time when taking into account g-factor-related noise increases and differences in sampling efficiency. LCModel fitting enabled quantification of inositol, choline, creatine, and N-acetyl-aspartate (NAA) in vivo with concentration values in the ranges measured with conventional PEPSI and SENSE-accelerated PEPSI. Cramer-Rao lower bounds were comparable to those obtained with conventional SENSE-accelerated PEPSI at the same voxel size and measurement time. This single-shot MRSI method is therefore suitable for applications that require high temporal resolution to monitor temporal dynamics or to reduce sensitivity to tissue movement.
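The SENSE acceleration used in this record resolves aliasing from undersampling by solving a small linear system per folded pixel, using the coil sensitivity profiles. A minimal 1-D noiseless sketch (toy object and assumed smooth sensitivities, R = 2, image-space folding model):

```python
import numpy as np

rng = np.random.default_rng(0)
N, R = 8, 2                      # 1-D image length, acceleration factor
img = rng.random(N)              # "true" object

# Hypothetical smooth sensitivities for two receive coils (assumed shapes).
x = np.linspace(0.0, 1.0, N)
sens = np.stack([1.0 + 0.5 * x, 1.5 - 0.5 * x])    # shape (coils, N)

# R = 2 undersampling folds pixel i onto pixel i + N/2 in image space:
# each coil sees s_c[i] = C_c[i]*img[i] + C_c[i+N/2]*img[i+N/2].
folded = np.stack([c[:N // 2] * img[:N // 2] + c[N // 2:] * img[N // 2:]
                   for c in sens])

# SENSE unfolding: solve a small least-squares system per folded pixel.
recon = np.empty(N)
for i in range(N // 2):
    A = np.stack([sens[:, i], sens[:, i + N // 2]], axis=1)   # (coils, R)
    sol, *_ = np.linalg.lstsq(A, folded[:, i], rcond=None)
    recon[i], recon[i + N // 2] = sol

assert np.allclose(recon, img)   # exact recovery in this noiseless toy
```

In practice the conditioning of each small system (the g-factor) determines the noise amplification mentioned in the abstract.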

  9. Regional alveolar partial pressure of oxygen measurement with parallel accelerated hyperpolarized gas MRI.

    Kadlecek, Stephen; Hamedani, Hooman; Xu, Yinan; Emami, Kiarash; Xin, Yi; Ishii, Masaru; Rizi, Rahim

    2013-10-01

    Alveolar oxygen tension (Pao2) is sensitive to the interplay between local ventilation, perfusion, and alveolar-capillary membrane permeability, and thus reflects physiologic heterogeneity of healthy and diseased lung function. Several hyperpolarized helium-3 (³He) magnetic resonance imaging (MRI)-based Pao2 mapping techniques have been reported, and considerable effort has gone toward reducing Pao2 measurement error. We present a new Pao2 imaging scheme, using parallel accelerated MRI, which significantly reduces measurement error. The proposed Pao2 mapping scheme was computer-simulated and was tested on both phantoms and five human subjects. Where possible, correspondence between actual local oxygen concentration and derived values was assessed for both bias (deviation from the true mean) and imaging artifact (deviation from the true spatial distribution). Phantom experiments demonstrated a significantly reduced coefficient of variation using the accelerated scheme. Simulation results support this observation and predict that correspondence between the true spatial distribution and the derived map is always superior using the accelerated scheme, although the improvement becomes less significant as the signal-to-noise ratio increases. Paired measurements in the human subjects, comparing accelerated and fully sampled schemes, show a reduced Pao2 distribution width for 41 of 46 slices. In contrast to proton MRI, acceleration of hyperpolarized imaging has no signal-to-noise penalty; its use in Pao2 measurement is therefore always beneficial. Comparison of multiple schemes shows that the benefit arises from a longer time-base during which oxygen-induced depolarization modifies the signal strength. Demonstration of the accelerated technique in human studies shows the feasibility of the method and suggests that measurement error is reduced here as well, particularly at low signal-to-noise levels. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.

  10. Effective Five Directional Partial Derivatives-Based Image Smoothing and a Parallel Structure Design.

    Choongsang Cho; Sangkeun Lee

    2016-04-01

    Image smoothing has been used for image segmentation, image reconstruction, object classification, and 3D content generation. Several smoothing approaches have been used at the pre-processing step to retain the critical edge, while removing noise and small details. However, they have limited performance, especially in removing small details and smoothing discrete regions. Therefore, to provide fast and accurate smoothing, we propose an effective scheme that uses a weighted combination of the gradient, Laplacian, and diagonal derivatives of a smoothed image. In addition, to reduce computational complexity, we designed and implemented a parallel processing structure for the proposed scheme on a graphics processing unit (GPU). For an objective evaluation of the smoothing performance, the images were linearly quantized into several layers to generate experimental images, and the quantized images were smoothed using several methods for reconstructing the smoothly changed shape and intensity of the original image. Experimental results showed that the proposed scheme has higher objective scores and better successful smoothing performance than similar schemes, while preserving and removing critical and trivial details, respectively. For computational complexity, the proposed smoothing scheme running on a GPU provided 18 and 16 times lower complexity than the proposed smoothing scheme running on a CPU and the L0-based smoothing scheme, respectively. In addition, a simple noise reduction test was conducted to show the characteristics of the proposed approach; it showed that the presented algorithm outperforms state-of-the-art algorithms by more than 5.4 dB. Therefore, we believe that the proposed scheme can be a useful tool for efficient image smoothing.
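The idea of mixing axial and diagonal neighbour differences can be sketched as a toy diffusion-style smoother. The weights, step size, and update rule below are assumptions for illustration, not the paper's exact five-derivative formulation:

```python
import numpy as np

def directional_smooth(img, w_axial=0.5, w_diag=0.25, step=0.1, iters=20):
    """Toy diffusion-style smoother mixing axial (gradient/Laplacian-like)
    and diagonal neighbour differences. Weights and update rule are
    illustrative assumptions, not the paper's exact formulation."""
    out = img.astype(float).copy()
    for _ in range(iters):
        p = np.pad(out, 1, mode='edge')
        axial = (p[:-2, 1:-1] + p[2:, 1:-1] +
                 p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * out)
        diag = (p[:-2, :-2] + p[:-2, 2:] +
                p[2:, :-2] + p[2:, 2:] - 4.0 * out)
        out += step * (w_axial * axial + w_diag * diag)   # small step: stable
    return out

noisy = np.eye(8) * 4.0 + 1.0        # bright diagonal "detail" on flat background
smooth = directional_smooth(noisy)
print(noisy.var(), smooth.var())     # variance drops as the detail is smoothed
```

Including the diagonal terms makes the smoothing less anisotropic than a purely axial Laplacian update, which is the motivation the abstract gives for combining several derivative directions.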

  11. Parallel search engine optimisation and pay-per-click campaigns: A comparison of cost per acquisition

    Wouter T. Kritzinger

    2017-07-01

    Background: It is imperative that commercial websites should rank highly in search engine result pages because these provide the main entry point to paying customers. There are two main methods to achieve high rankings: search engine optimisation (SEO) and pay-per-click (PPC) systems. Both require a financial investment – SEO mainly at the beginning, and PPC spread over time in regular amounts. If marketing budgets are applied in the wrong area, this could lead to losses and possibly financial ruin. Objectives: The objective of this research was to investigate, using three real-world case studies, the actual expenditure on and income from both SEO and PPC systems. These figures were then compared, and specifically, the cost per acquisition (CPA) was used to decide which system yielded the best results. Methodology: Three diverse websites were chosen, and analytics data for all three were compared over a 3-month period. Calculations were performed to reduce the figures to single ratios, to make comparisons between them possible. Results: Some of the resultant ratios varied widely between websites. However, the CPA was shown to be on average 52.1 times lower for SEO than for PPC systems. Conclusion: It was concluded that SEO should be the marketing system of preference for e-commerce-based websites. However, there are cases where PPC would yield better results – when instant traffic is required, and when a large initial expenditure is not possible.
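The cost-per-acquisition ratio the study compares is simple arithmetic: spend divided by the number of paying conversions it produced. The figures below are invented for illustration and merely chosen so the gap echoes the order of magnitude reported:

```python
# Cost per acquisition (CPA) = marketing spend / number of paying conversions.
# These figures are invented for illustration; they are not the study's data.
def cpa(spend, conversions):
    return spend / conversions

seo_cpa = cpa(spend=1500.0, conversions=300)   # SEO: up-front cost, many conversions
ppc_cpa = cpa(spend=2600.0, conversions=10)    # PPC: recurring cost, fewer conversions
ratio = ppc_cpa / seo_cpa                      # how many times cheaper SEO is per sale
print(seo_cpa, ppc_cpa, ratio)                 # 5.0 260.0 52.0
```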

  12. A proposed scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC detectors

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; Vanberg, R.

    1990-01-01

    A new era of high-energy physics research is beginning requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, both orders of magnitude higher data rates from the detector and online processing power, well beyond the capabilities of current high energy physics data acquisition systems, are required. This paper describes a proposed new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of Gigabytes per second from the detector and into an array of online processors (i.e., processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the proposed Scalable Parallel Open Architecture data acquisition system are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported and high-level language programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build a prototype of the proposed data acquisition system architecture is given in the paper. The major component of the system, a self-routing parallel event builder, is described in detail

  13. Acquisition and extinction of continuously and partially reinforced running in rats with lesions of the dorsal noradrenergic bundle.

    Owen, S; Boarder, M R; Gray, J A; Fillenz, M

    1982-05-01

    Local injection of 6-hydroxydopamine was used to selectively destroy the dorsal ascending noradrenergic bundle (DB) in rats. Two lesion procedures were used, differing in the extent of depletion of forebrain noradrenaline they produced (greater than 90% or 77%). In Experiments 1-3 the rats were run in a straight alley for food reward on continuous (CR) or partial (PR) reinforcement schedules. The smaller lesion reduced and the larger lesion eliminated the partial reinforcement acquisition effect (i.e. the faster start and run speeds produced by PR during training) and the partial reinforcement extinction effect (PREE, i.e. the greater resistance to extinction produced by PR training); these changes were due to altered performance only in the PR condition. Abolition of the PREE by the larger DB lesion occurred with 50 acquisition trials, but with 100 trials the lesion had no effect. In Experiment 4 rats were run in a double runway with food reward on CR in the second goal box, and on CR, PR or without reinforcement in the first. The larger lesion again eliminated the PREE in the first runway, but did not block the frustration effect in the second runway (i.e. the faster speeds observed in the PR condition after non-reward than after reward in the first goal box). These results are consistent with the hypothesis that DB lesions alter behavioural responses to signals of non-reward, but not to non-reward itself. They cannot be predicted from two other hypotheses: that the DB mediates responses to reward or that it subserves selective attention. Since septal and hippocampal, but not amygdalar, lesions have been reported to produce similar behavioural changes, it is proposed that the critical DB projection for the effects observed in these experiments is to the septo-hippocampal system.

  14. Sequential combination of k-t principle component analysis (PCA) and partial parallel imaging: k-t PCA GROWL.

    Qi, Haikun; Huang, Feng; Zhou, Hongmei; Chen, Huijun

    2017-03-01

    k-t principal component analysis (k-t PCA) is a distinguished method for high spatiotemporal resolution dynamic MRI. To further improve the accuracy of k-t PCA, a combination with partial parallel imaging (PPI), k-t PCA/SENSE, has been tested. However, k-t PCA/SENSE suffers from long reconstruction time and limited improvement. This study aims to improve the combination of k-t PCA and PPI in both reconstruction speed and accuracy. A sequential combination scheme called k-t PCA GROWL (GRAPPA operator for wider readout line) was proposed. The GRAPPA operator was performed before k-t PCA to extend each readout line into a wider band, which improved the condition of the encoding matrix in the following k-t PCA reconstruction. k-t PCA GROWL was tested and compared with k-t PCA and k-t PCA/SENSE on cardiac imaging. k-t PCA GROWL consistently resulted in better image quality compared with k-t PCA/SENSE at high acceleration factors for both retrospectively and prospectively undersampled cardiac imaging, with a much lower computation cost. The improvement in image quality became greater with the increase of acceleration factor. By sequentially combining the GRAPPA operator and k-t PCA, the proposed k-t PCA GROWL method outperformed k-t PCA/SENSE in both reconstruction speed and accuracy, suggesting that k-t PCA GROWL is a better combination scheme than k-t PCA/SENSE. Magn Reson Med 77:1058-1067, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  15. Novel iterative reconstruction method with optimal dose usage for partially redundant CT-acquisition

    Bruder, H.; Raupach, R.; Sunnegardh, J.; Allmendinger, T.; Klotz, E.; Stierstorfer, K.; Flohr, T.

    2015-11-01

    In CT imaging, a variety of applications exist which are strongly SNR limited. However, in some cases redundant data of the same body region provide additional quanta. Examples: in dual energy CT, the spatial resolution has to be compromised to provide good SNR for material decomposition. However, the respective spectral dataset of the same body region provides additional quanta which might be utilized to improve SNR of each spectral component. Perfusion CT is a high dose application, and dose reduction is highly desirable. However, a meaningful evaluation of perfusion parameters might be impaired by noisy time frames. On the other hand, the SNR of the average of all time frames is extremely high. In redundant CT acquisitions, multiple image datasets can be reconstructed and averaged to composite image data. These composite image data, however, might be compromised with respect to contrast resolution and/or spatial resolution and/or temporal resolution. These observations bring us to the idea of transferring high SNR of composite image data to low SNR ‘source’ image data, while maintaining their resolution. It has been shown that the noise characteristics of CT image data can be improved by iterative reconstruction (Popescu et al 2012 Book of Abstracts, 2nd CT Meeting (Salt Lake City, UT) p 148). In case of data dependent Gaussian noise it can be modelled with image-based iterative reconstruction at least in an approximate manner (Bruder et al 2011 Proc. SPIE 7961 79610J). We present a generalized update equation in image space, consisting of a linear combination of the previous update, a correction term which is constrained by the source image data, and a regularization prior, which is initialized by the composite image data. This iterative reconstruction approach we call bimodal reconstruction (BMR). Based on simulation data it is shown that BMR can improve low contrast detectability, substantially reduces the noise power and has the potential to recover
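The generalized update equation described above can be sketched in image space as a fixed-point iteration: each step adds a correction toward the noisy source data and a regularization pull toward the high-SNR composite, starting from the composite as prior. The weights and the simplistic identity forward model are assumptions, not the paper's exact operators:

```python
import numpy as np

def bmr_iterate(source, composite, alpha=0.5, beta=0.2, iters=50):
    """Sketch of a bimodal update in image space: previous estimate plus a
    correction constrained by the source data plus a regularization pull
    toward the composite. alpha/beta and the identity forward model are
    illustrative assumptions."""
    x = composite.copy()               # prior initialized by the composite
    for _ in range(iters):
        x = x + alpha * (source - x) + beta * (composite - x)
    return x

rng = np.random.default_rng(1)
truth = np.linspace(0.0, 1.0, 64)
source = truth + rng.normal(0.0, 0.30, 64)      # low-SNR "source" image
composite = truth + rng.normal(0.0, 0.03, 64)   # high-SNR composite image
result = bmr_iterate(source, composite)

mse = lambda a: np.mean((a - truth) ** 2)
print(mse(source), mse(result))    # the blended estimate is closer to truth
```

The iteration converges to a weighted blend of source and composite, which is the sense in which the high SNR of the composite is "transferred" to the source image while the source data term keeps its own contrast.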

  16. Novel iterative reconstruction method with optimal dose usage for partially redundant CT-acquisition

    Bruder, H; Raupach, R; Sunnegardh, J; Allmendinger, T; Klotz, E; Stierstorfer, K; Flohr, T

    2015-01-01

    In CT imaging, a variety of applications exist which are strongly SNR limited. However, in some cases redundant data of the same body region provide additional quanta. Examples: in dual energy CT, the spatial resolution has to be compromised to provide good SNR for material decomposition. However, the respective spectral dataset of the same body region provides additional quanta which might be utilized to improve SNR of each spectral component. Perfusion CT is a high dose application, and dose reduction is highly desirable. However, a meaningful evaluation of perfusion parameters might be impaired by noisy time frames. On the other hand, the SNR of the average of all time frames is extremely high. In redundant CT acquisitions, multiple image datasets can be reconstructed and averaged to composite image data. These composite image data, however, might be compromised with respect to contrast resolution and/or spatial resolution and/or temporal resolution. These observations bring us to the idea of transferring high SNR of composite image data to low SNR ‘source’ image data, while maintaining their resolution. It has been shown that the noise characteristics of CT image data can be improved by iterative reconstruction (Popescu et al 2012 Book of Abstracts, 2nd CT Meeting (Salt Lake City, UT) p 148). In case of data dependent Gaussian noise it can be modelled with image-based iterative reconstruction at least in an approximate manner (Bruder et al 2011 Proc. SPIE 7961 79610J). We present a generalized update equation in image space, consisting of a linear combination of the previous update, a correction term which is constrained by the source image data, and a regularization prior, which is initialized by the composite image data. This iterative reconstruction approach we call bimodal reconstruction (BMR). Based on simulation data it is shown that BMR can improve low contrast detectability, substantially reduces the noise power and has the potential to recover spatial

  17. High temporal resolution magnetic resonance imaging: development of a parallel three-dimensional acquisition method for functional neuroimaging

    Rabrait, C

    2007-11-15

    Echo Planar Imaging is widely used for data acquisition in functional neuroimaging. This sequence allows the acquisition of a set of about 30 slices covering the whole brain, at a spatial resolution ranging from 2 to 4 mm and a temporal resolution ranging from 1 to 2 s. It is thus well adapted to mapping activated brain areas but does not allow precise study of brain dynamics. Moreover, temporal interpolation is needed to correct for inter-slice delays, and 2-dimensional acquisition is subject to vascular inflow artifacts. To improve the estimation of the hemodynamic response functions associated with activation, this thesis aimed at developing a 3-dimensional high temporal resolution acquisition method. To do so, Echo Volume Imaging (EVI) was combined with reduced field-of-view acquisition and parallel imaging. Indeed, EVI allows the acquisition of a whole volume in Fourier space following a single excitation, but it requires very long echo trains. Parallel imaging and field-of-view reduction are used to reduce the echo train durations by a factor of 4, which allows the acquisition of a 3-dimensional brain volume with limited susceptibility-induced distortions and signal losses, in 200 ms. All imaging parameters were optimized to reduce echo train durations and to maximize SNR, so that cerebral activation can be detected with a high level of confidence. Robust detection of brain activation was demonstrated with both visual and auditory paradigms. High temporal resolution hemodynamic response functions could be estimated through selective averaging of the responses to the different trials of the stimulation. To further improve SNR, the matrix inversions required in parallel reconstruction were regularized, and the impact of the level of regularization on activation detection was investigated. Eventually, potential applications of parallel EVI such as the study of non-stationary effects in the BOLD response

  18. Investigation of variability in image acquisition and contouring during 3D ultrasound guidance for partial breast irradiation

    Landry, Anthony; Olivotto, Ivo; Beckham, Wayne; Berrang, Tanya; Gagne, Isabelle; Popescu, Carmen; Mitchell, Tracy; Vey, Hazel; Sand, Letricia; Soh, Siew Yan; Wark, Jill

    2014-01-01

    Three-dimensional ultrasound (3DUS) at simulation compared to 3DUS at treatment is an image guidance option for partial breast irradiation (PBI). This study assessed whether user dependence in acquiring and contouring 3DUS (operator variability) contributed to variation in seroma shifts calculated for breast IGRT. Eligible patients met breast criteria for current randomized PBI studies. Five operators participated in this study. For each patient, three operators were involved in scan acquisitions and five were involved in contouring. At CT simulation (CT1), a 3DUS (US1) was performed by a single radiation therapist (RT). Seven to 14 days after CT1, a second CT (CT2) and three sequential 3DUS scans (US2a,b,c) were acquired, one by each of three RTs. Seroma shifts between US1 and US2 scans were calculated by comparing the geometric centers of the seromas (centroids). Operator contouring variability was determined by comparing five RTs' contours for a single image set. Scanning variability was assessed by comparing shifts between multiple scans acquired at the same time point (US1-US2a,b,c). Shifts in seromas contoured on CT (CT1-CT2) were compared to the US data. Of an initial 28 patients, 15 had CT-visible seromas, met PBI dosimetric constraints, had complete US data, and were analyzed. Operator variability contributed more to the overall variability in seroma localization than the variability associated with multiple scan acquisitions (95% confidence mean uncertainty of 6.2 mm vs. 1.1 mm). The mean standard deviation in seroma shift was user dependent and ranged from 1.7 to 2.9 mm. Mean seroma shifts from simulation to treatment were comparable to CT. Variability in shifts due to different users acquiring and contouring 3DUS for PBI guidance was comparable to CT shifts. This substantial inter-observer effect needs to be considered during clinical implementation of 3DUS IGRT
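The seroma shifts in this record are computed by comparing geometric centers (centroids) of the contoured volumes. A minimal sketch on toy 2-D binary masks; a real implementation would multiply by the voxel size to express shifts in mm:

```python
import numpy as np

def centroid(mask):
    """Geometric center of a binary mask, in voxel coordinates."""
    return np.argwhere(mask).mean(axis=0)

# Toy seroma masks at simulation (US1) and at treatment (US2).
us1 = np.zeros((20, 20), dtype=bool); us1[5:9, 5:9] = True
us2 = np.zeros((20, 20), dtype=bool); us2[7:11, 6:10] = True

shift = centroid(us2) - centroid(us1)    # per-axis centroid shift
magnitude = float(np.linalg.norm(shift))
print(shift, magnitude)                  # [2. 1.] 2.236...
```

Repeating this for contours drawn by different operators on the same images gives the inter-observer spread the study quantifies.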

  19. A scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC [Superconducting Super Collider] detectors

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; VanBerg, R.

    1989-12-01

    A new era of high-energy physics research is beginning requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, both orders of magnitude higher data rates from the detector and online processing power, well beyond the capabilities of current high energy physics data acquisition systems, are required. This paper describes a new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of GigaBytes per second from the detector and into an array of online processors (i.e., processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the system architecture are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported and high-level language programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build the self-routing parallel event builder will also be given in the paper. 3 figs., 1 tab

  20. MR sialography: evaluation of an ultra-fast sequence in consideration of a parallel acquisition technique and different functional conditions in patients with salivary gland diseases

    Petridis, C.; Ries, T.; Cramer, M.C.; Graessner, J.; Petersen, K.U.; Reitmeier, F.; Jaehne, M.; Weiss, F.; Adam, G.; Habermann, C.R.

    2007-01-01

    Purpose: To evaluate an ultra-fast sequence for MR sialography requiring no post-processing and to compare the acquisition technique regarding the effect of oral stimulation with a parallel acquisition technique in patients with salivary gland diseases. Materials and Methods: 128 patients with salivary gland disease were prospectively examined using a 1.5-T superconducting system with a 30 mT/m maximum gradient capability and a maximum slew rate of 125 mT/m/sec. A single-shot turbo-spin-echo sequence (ss-TSE) with an acquisition time of 2.8 sec was used in transverse and oblique sagittal orientation. All images were obtained with and without a parallel imaging technique. The evaluation of the ductal system of the parotid and submandibular gland was performed using a visual scale of 1-5 for each side. The images were assessed by two independent experienced radiologists. An ANOVA with post-hoc comparisons and an overall two-tailed significance level of p=0.05 was used for the statistical evaluation. An intraclass correlation was computed to evaluate interobserver variability, with a correlation of >0.8 taken to indicate high agreement. Results: Depending on the diagnosed diseases and the absence of abruption of the ducts, all parts of the excretory ducts could be visualized in all patients using the developed technique, with an overall rating for all ducts of 2.70 (SD±0.89). A high correlation was achieved between the two observers, with an intraclass correlation of 0.73. Oral application of a sialogogum improved the visibility of the excretory ducts significantly (p<0.001). In contrast, the use of a parallel imaging technique led to a significant decrease in image quality (p=0.011). (orig.)

  1. Parallel imaging with phase scrambling.

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x² − y² shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.
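The core effect the method relies on, quadratic phase spreading the object's signal energy across k-space, is easy to demonstrate in 1-D; the chirp coefficient below is arbitrary and chosen only to make the spreading visible:

```python
import numpy as np

N = 256
x = np.arange(N) - N // 2
img = np.zeros(N)
img[96:160] = 1.0                      # simple 1-D rectangular object

# Quadratic ("chirp") phase spreads the object's energy across k-space;
# the coefficient alpha is arbitrary for this demonstration.
alpha = 0.02
scrambled = img * np.exp(1j * alpha * x ** 2)

k_plain = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(img)))
k_scrambled = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(scrambled)))

def central_energy_fraction(k, half_width=16):
    """Fraction of total k-space energy in the central 2*half_width samples."""
    centre = np.abs(k[N // 2 - half_width:N // 2 + half_width]) ** 2
    return centre.sum() / (np.abs(k) ** 2).sum()

# With scrambling, far less energy is concentrated at the k-space centre.
print(central_energy_fraction(k_plain), central_energy_fraction(k_scrambled))
```

This delocalization is what lets a windowed chirp convolution recover an alias-free low-resolution image from the undersampled, phase-scrambled data.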

  2. MR-sialography: optimisation and evaluation of an ultra-fast sequence in parallel acquisition technique and different functional conditions of salivary glands

    Habermann, C.R.; Cramer, M.C.; Aldefeld, D.; Weiss, F.; Kaul, M.G.; Adam, G. [Radiologisches Zentrum, Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie, Universitaetsklinikum Hamburg-Eppendorf (Germany); Graessner, J. [Siemens Medical Systems, Hamburg (Germany); Reitmeier, F.; Jaehne, M. [Kopf- und Hautzentrum, Klinik und Poliklinik fuer Hals-, Nasen- und Ohrenheilkunde, Universitaetsklinikum Hamburg-Eppendorf (Germany); Petersen, K.U. [Zentrum fuer Psychosoziale Medizin, Klinik und Poliklinik fuer Psychiatrie und Psychotherapie, Universitaetsklinikum Hamburg-Eppendorf (Germany)

    2005-04-01

    Purpose: To optimise a fast sequence for MR-sialography and to compare a parallel and non-parallel acquisition technique. Additionally, the effect of oral stimulation on image quality was evaluated. Material and Methods: All examinations were performed using a 1.5-T superconducting system. After developing a sufficient sequence for MR-sialography, a single-shot turbo-spin-echo sequence (ss-TSE) with an acquisition time of 2.8 sec was used in transverse and oblique sagittal orientation in 27 healthy volunteers. All images were performed with and without parallel imaging technique. The assessment of the ductal system of the submandibular and parotid gland was performed using a 1 to 5 visual scale for each side separately. Images were evaluated by four independent experienced radiologists. For statistical evaluation, an ANOVA with post-hoc comparisons was used with an overall two-tailed significance level of p=0.05. For evaluation of interobserver variability, an intraclass correlation was computed, with a correlation of >0.8 taken to indicate high agreement. Results: All parts of the salivary excretory ducts could be visualised in all volunteers, with an overall rating for all ducts of 2.26 (SD±1.09). Between the four observers a high correlation could be obtained, with an intraclass correlation of 0.9475. A significant influence of the slice angulation could not be observed (p=0.74). In all healthy volunteers the visibility of excretory ducts improved significantly after oral application of a Sialogogum (p<0.001; η²=0.049). The use of a parallel imaging technique did not lead to an improvement of visualisation, showing a significant loss of image quality compared to an acquisition technique without parallel imaging (p<0.001; η²=0.013). Conclusion: The optimised ss-TSE MR-sialography seems to be a fast and sufficient technique for visualisation of excretory ducts of the main salivary glands, with no elaborate post

  3. MR-sialography: optimisation and evaluation of an ultra-fast sequence in parallel acquisition technique and different functional conditions of salivary glands

    Habermann, C.R.; Cramer, M.C.; Aldefeld, D.; Weiss, F.; Kaul, M.G.; Adam, G.; Graessner, J.; Reitmeier, F.; Jaehne, M.; Petersen, K.U.

    2005-01-01

    Purpose: To optimise a fast sequence for MR-sialography and to compare a parallel and non-parallel acquisition technique. Additionally, the effect of oral stimulation on image quality was evaluated. Material and Methods: All examinations were performed using a 1.5-T superconducting system. After developing a sufficient sequence for MR-sialography, a single-shot turbo-spin-echo sequence (ss-TSE) with an acquisition time of 2.8 sec was used in transverse and oblique sagittal orientation in 27 healthy volunteers. All images were performed with and without parallel imaging technique. The assessment of the ductal system of the submandibular and parotid gland was performed using a 1 to 5 visual scale for each side separately. Images were evaluated by four independent experienced radiologists. For statistical evaluation, an ANOVA with post-hoc comparisons was used with an overall two-tailed significance level of p=0.05. For evaluation of interobserver variability, an intraclass correlation was computed, with a correlation of >0.8 taken to indicate high agreement. Results: All parts of the salivary excretory ducts could be visualised in all volunteers, with an overall rating for all ducts of 2.26 (SD±1.09). Between the four observers a high correlation could be obtained, with an intraclass correlation of 0.9475. A significant influence of the slice angulation could not be observed (p=0.74). In all healthy volunteers the visibility of excretory ducts improved significantly after oral application of a Sialogogum (p<0.001; η²=0.049). The use of a parallel imaging technique did not lead to an improvement of visualisation, showing a significant loss of image quality compared to an acquisition technique without parallel imaging (p<0.001; η²=0.013). Conclusion: The optimised ss-TSE MR-sialography seems to be a fast and sufficient technique for visualisation of excretory ducts of the main salivary glands, with no elaborate post-processing needed. To improve results of MR

  4. Partial structure of the phylloxin gene from the giant monkey frog, Phyllomedusa bicolor: parallel cloning of precursor cDNA and genomic DNA from lyophilized skin secretion.

    Chen, Tianbao; Gagliardo, Ron; Walker, Brian; Zhou, Mei; Shaw, Chris

    2005-12-01

    Phylloxin is a novel prototype antimicrobial peptide from the skin of Phyllomedusa bicolor. Here, we describe parallel identification and sequencing of phylloxin precursor transcript (mRNA) and partial gene structure (genomic DNA) from the same sample of lyophilized skin secretion using our recently-described cloning technique. The open-reading frame of the phylloxin precursor was identical in nucleotide sequence to that previously reported and alignment with the nucleotide sequence derived from genomic DNA indicated the presence of a 175 bp intron located in a near identical position to that found in the dermaseptins. The highly-conserved structural organization of skin secretion peptide genes in P. bicolor can thus be extended to include that encoding phylloxin (plx). These data further reinforce our assertion that application of the described methodology can provide robust genomic/transcriptomic/peptidomic data without the need for specimen sacrifice.

  5. Morphological Awareness in Vocabulary Acquisition among Chinese-Speaking Children: Testing Partial Mediation via Lexical Inference Ability

    Zhang, Haomin

    2015-01-01

    The goal of this study was to investigate the effect of Chinese-specific morphological awareness on vocabulary acquisition among young Chinese-speaking students. The participants were 288 Chinese-speaking second graders from three different cities in China. Multiple regression analysis and mediation analysis were used to uncover the mediated and…

  6. 16 CFR 802.42 - Partial exemption for acquisitions in connection with the formation of certain joint ventures or...

    2010-01-01

    ... connection with the formation of certain joint ventures or other corporations. 802.42 Section 802.42... acquisitions in connection with the formation of certain joint ventures or other corporations. (a) Whenever one or more of the contributors in the formation of a joint venture or other corporation which otherwise...

  7. Reducing contrast contamination in radial turbo-spin-echo acquisitions by combining a narrow-band KWIC filter with parallel imaging.

    Neumann, Daniel; Breuer, Felix A; Völker, Michael; Brandt, Tobias; Griswold, Mark A; Jakob, Peter M; Blaimer, Martin

    2014-12-01

    Cartesian turbo spin-echo (TSE) and radial TSE images are usually reconstructed by assembling data containing different contrast information into a single k-space. This approach results in mixed contrast contributions in the images, which may reduce their diagnostic value. The goal of this work is to improve the image contrast from radial TSE acquisitions by reducing the contribution of signals with undesired contrast information. Radial TSE acquisitions allow the reconstruction of multiple images with different T2 contrasts using the k-space weighted image contrast (KWIC) filter. In this work, the image contrast is improved by reducing the bandwidth of the KWIC filter. Data for the reconstruction of a single image are selected from within a small temporal range around the desired echo time. The resulting dataset is undersampled and, therefore, an iterative parallel imaging algorithm is applied to remove aliasing artifacts. Radial TSE images of the human brain reconstructed with the proposed method show an improved contrast when compared with Cartesian TSE images or radial TSE images with conventional KWIC reconstructions. The proposed method provides multi-contrast images from radial TSE data with contrasts similar to multi spin-echo images. Contaminations from unwanted contrast weightings are strongly reduced. © 2014 Wiley Periodicals, Inc.
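
    The narrow-band view selection described above can be sketched numerically. A minimal sketch, assuming a hypothetical radial TSE echo train (8 echoes, 10 ms echo spacing, 32 spokes per echo); the function name and all numbers are illustrative, not taken from the paper:

```python
import numpy as np

def kwic_select(spoke_echo_times, target_te, band_ms):
    """Indices of radial spokes whose echo time lies within +/- band_ms
    of the target TE (a narrow-band KWIC view selection)."""
    t = np.asarray(spoke_echo_times, dtype=float)
    return np.where(np.abs(t - target_te) <= band_ms)[0]

# Hypothetical radial TSE echo train: 8 echoes, 10 ms spacing, 32 spokes
# acquired per echo -> 256 spokes in total.
te_of_spoke = np.repeat(np.arange(1, 9) * 10.0, 32)

wide = kwic_select(te_of_spoke, target_te=40.0, band_ms=35.0)   # broad KWIC
narrow = kwic_select(te_of_spoke, target_te=40.0, band_ms=5.0)  # narrow band

# Narrowing the band sharpens the effective TE but leaves fewer spokes,
# i.e. an undersampled dataset that parallel imaging must then complete.
undersampling = len(wide) / len(narrow)
```

    Here the narrow filter keeps only the 32 spokes acquired at the desired echo time, a 7-fold undersampling relative to the broad filter, which is exactly the gap the iterative parallel imaging reconstruction fills.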

  8. Immediate versus delayed loading of strategic mini dental implants for the stabilization of partial removable dental prostheses: a patient cluster randomized, parallel-group 3-year trial.

    Mundt, Torsten; Al Jaghsi, Ahmad; Schwahn, Bernd; Hilgert, Janina; Lucas, Christian; Biffar, Reiner; Schwahn, Christian; Heinemann, Friedhelm

    2016-07-30

    Acceptable short-term survival rates (>90 %) of mini-implants have been reported, but data on mini-implants used as strategic abutments for better retention of partial removable dental prostheses (PRDP) are not available. The purpose of this study is to test the hypothesis that immediately loaded mini-implants show more bone loss and less success than strategic mini-implants with delayed loading. In this four-center (one university hospital, three dental practices in Germany), parallel-group, controlled clinical trial, which is cluster-randomized on the patient level, a total of 80 partially edentulous patients with an unfavourable number and distribution of remaining abutment teeth in at least one jaw will receive supplementary mini-implants to stabilize their PRDP. The mini-implants are either loaded immediately after implant placement (test group) or after a four-month delay (control group). Follow-up of the patients will be performed for 36 months. The primary outcome is the radiographic bone level change at the implants. The secondary outcome is implant success as a composite variable. Tertiary outcomes include clinical, subjective (quality of life, satisfaction, chewing ability) and dental or technical complications. Strategic implants under an existing PRDP are documented only for standard-diameter implants. Mini-implants could be a minimally invasive and low-cost solution for this treatment modality. The trial is registered at Deutsches Register Klinischer Studien (German register of clinical trials) under DRKS-ID: DRKS00007589 ( www.germanctr.de ) on January 13th, 2015.

  9. Experimental study on heat transfer enhancement of laminar ferrofluid flow in horizontal tube partially filled porous media under fixed parallel magnet bars

    Sheikhnejad, Yahya; Hosseini, Reza, E-mail: hoseinir@aut.ac.ir; Saffar Avval, Majid

    2017-02-15

    In this study, steady-state laminar ferroconvection through a horizontal circular tube partially filled with porous media under constant heat flux is experimentally investigated. Transverse magnetic fields were applied to the ferrofluid flow by two fixed parallel magnet bars positioned at a certain distance from the beginning of the test section. The results show a promising enhancement in heat transfer as a consequence of the partially filled porous media and the magnetic field: up to 2.2-fold and 1.4-fold increases in the heat transfer coefficient were observed, respectively. It was found that the simultaneous presence of both porous media and magnetic field can improve heat transfer by up to 2.4 fold. The porous media, of course, plays the major role in this configuration. Both the magnetic field and the porous media also introduce a higher pressure loss along the pipe, with the porous media contribution again higher than that of the magnetic field. - Highlights: • Porous media can improve the heat transfer coefficient by up to 2.2 fold. • Both porous media and nanoparticles have an undesired pressure drop effect. • Applying both porous media and a magnetic field to ferrofluid flow results in a significant heat transfer enhancement of up to 2.4 fold. • The magnet bar effect is mainly restricted to approximately one fourth of the test section. • Diluted ferrofluids (2%) result in over 1.4-fold enhancement of the heat transfer coefficient.

  10. Solving binary-state multi-objective reliability redundancy allocation series-parallel problem using efficient epsilon-constraint, multi-start partial bound enumeration algorithm, and DEA

    Khalili-Damghani, Kaveh; Amiri, Maghsoud

    2012-01-01

    In this paper, a procedure based on an efficient epsilon-constraint method and data envelopment analysis (DEA) is proposed for solving the binary-state multi-objective reliability redundancy allocation series-parallel problem (MORAP). In the first module, a set of qualified non-dominated solutions on the Pareto front of the binary-state MORAP is generated using an efficient epsilon-constraint method. In order to test the quality of the non-dominated solutions generated in this module, a multi-start partial bound enumeration algorithm is also proposed for MORAP. The performance of both procedures is compared using different metrics on a well-known benchmark instance. The statistical analysis shows that the proposed efficient epsilon-constraint method not only outperforms the multi-start partial bound enumeration algorithm but also improves the known upper bound of the benchmark instance. Then, in the second module, a DEA model is applied to prune the non-dominated solutions generated by the efficient epsilon-constraint method. This reduces the number of non-dominated solutions in a systematic manner and eases the decision-making process for practical implementations. - Highlights: ► A procedure based on an efficient epsilon-constraint method and DEA was proposed for solving MORAP. ► The performance of the proposed procedure was compared with a multi-start PBEA. ► The methods were statistically compared using multi-objective metrics.
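
    The epsilon-constraint idea used in the first module can be illustrated on a toy problem. A minimal sketch, assuming a made-up two-subsystem series-parallel system; the reliabilities, costs, and epsilon bounds are illustrative and are not the paper's benchmark instance:

```python
from itertools import product

# Hypothetical 2-subsystem series-parallel system: choose 1..3 identical
# redundant components per subsystem (made-up reliabilities and costs).
rel = {1: 0.9, 2: 0.8}   # single-component reliability per subsystem
cost = {1: 3.0, 2: 2.0}  # cost per component

def system_reliability(n1, n2):
    # Series connection of two parallel subsystems with n_i components each.
    r1 = 1 - (1 - rel[1]) ** n1
    r2 = 1 - (1 - rel[2]) ** n2
    return r1 * r2

def total_cost(n1, n2):
    return n1 * cost[1] + n2 * cost[2]

def epsilon_constraint(eps_cost):
    """Maximize reliability subject to the constraint cost <= eps_cost."""
    feasible = [(n1, n2) for n1, n2 in product(range(1, 4), repeat=2)
                if total_cost(n1, n2) <= eps_cost]
    return max(feasible, key=lambda x: system_reliability(*x), default=None)

# Sweeping the epsilon bound over the cost range traces the Pareto front.
front = {epsilon_constraint(e) for e in (5.0, 7.0, 9.0, 11.0, 13.0, 15.0)}
```

    Each epsilon value turns the bi-objective problem into a single-objective one; the set of optimizers over the sweep is the generated non-dominated set, which a DEA model could then prune further.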

  11. VIBE with parallel acquisition technique - a novel approach to dynamic contrast-enhanced MR imaging of the liver; VIBE mit paralleler Akquisitionstechnik - eine neue Moeglichkeit der dynamischen kontrastverstaerkten MRT der Leber

    Dobritz, M.; Radkow, T.; Bautz, W.; Fellner, F.A. [Inst. fuer Diagnostische Radiologie, Friedrich-Alexander-Univ. Erlangen-Nuernberg (Germany); Nittka, M. [Siemens Medical Solutions, Erlangen (Germany)

    2002-06-01

    Purpose: The VIBE (volume interpolated breath-hold examination) sequence in combination with a parallel acquisition technique (iPAT: integrated parallel acquisition technique) allows dynamic contrast-enhanced MRI of the liver with high temporal and spatial resolution. The aim of this study was to obtain first clinical experience with this technique for the detection and characterization of focal liver lesions. Materials and Methods: We examined 10 consecutive patients using a 1.5 T MR system (gradient field strength 30 mT/m) with a phased-array coil combination. The following sequences were acquired: T2-weighted TSE and T1-weighted FLASH; after administration of gadolinium, 6 VIBE sequences with iPAT (TR/TE/matrix/partition thickness/time of acquisition: 6.2 ms/3.2 ms/256 x 192/4 mm/13 s), as well as T1-weighted FLASH with fat saturation. Two observers evaluated the different sequences with respect to the number of lesions and their dignity. The following lesions were found: hepatocellular carcinoma (5 patients), hemangioma (2), metastasis (1), cyst (1), adenoma (1). Results: The VIBE sequences were superior for the detection of lesions with arterial hyperperfusion, with a total of 33 focal lesions. 21 lesions were found with T2-weighted TSE and 20 with plain T1-weighted FLASH. Diagnostic accuracy increased with the VIBE sequence in comparison to the other sequences. Conclusion: VIBE with iPAT allows MR imaging of the liver with high spatial and temporal resolution, providing dynamic contrast-enhanced information about the whole liver. This may lead to improved detection of liver lesions, especially hepatocellular carcinoma. (orig.)

  12. Non-Cartesian parallel imaging reconstruction.

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.
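
    The CG SENSE approach reviewed above solves the normal equations E^H E x = E^H y of the multi-coil encoding operator by conjugate gradients. A minimal 1D sketch, using a small explicit off-grid Fourier matrix in place of the NUFFT used in practice; the coil profiles, trajectory, and sizes are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                   # 1D "image" size (toy example)
x_true = rng.standard_normal(n)

# Two made-up smooth coil sensitivity profiles.
pos = np.arange(n)
coils = np.stack([np.exp(-((pos - 4) / 8.0) ** 2),
                  np.exp(-((pos - 12) / 8.0) ** 2)])

# Non-Cartesian 1D "trajectory": random off-grid spatial frequencies. A real
# implementation would apply a NUFFT here instead of an explicit matrix.
freqs = rng.uniform(-0.5, 0.5, size=10)
F = np.exp(-2j * np.pi * np.outer(freqs, pos))

# Stacked encoding operator: sensitivity weighting followed by sampling.
E = np.vstack([F * c for c in coils])    # (n_coils * n_samples, n)
y = E @ x_true                           # simulated multi-coil k-space data

def cg(apply_A, b, iters=200, tol=1e-20):
    """Conjugate gradients for the Hermitian positive system A x = b."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(iters):
        Ap = apply_A(p)
        alpha = rs / np.vdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r)
        if abs(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# CG SENSE core: solve the normal equations E^H E x = E^H y.
x_rec = cg(lambda v: E.conj().T @ (E @ v), E.conj().T @ y)
```

    With 2 coils and 10 off-grid samples there are 20 equations for 16 unknowns, so the coil sensitivities supply the spatial encoding that the undersampled trajectory lacks, and CG recovers the object.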

  13. Non-contrast-enhanced hepatic MR angiography: Do two-dimensional parallel imaging and short tau inversion recovery methods shorten acquisition time without image quality deterioration?

    Shimada, Kotaro, E-mail: kotaro@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Isoda, Hiroyoshi, E-mail: sayuki@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Okada, Tomohisa, E-mail: tomokada@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Kamae, Toshikazu, E-mail: toshi13@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Arizono, Shigeki, E-mail: arizono@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Hirokawa, Yuusuke, E-mail: yuusuke@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Shibata, Toshiya, E-mail: ksj@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Togashi, Kaori, E-mail: ktogashi@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan)

    2011-01-15

    Objective: To study whether shortening the acquisition time for selective hepatic artery visualization is feasible without image quality deterioration by adopting two-dimensional (2D) parallel imaging (PI) and short tau inversion recovery (STIR) methods. Materials and methods: Twenty-four healthy volunteers were enrolled. 3D true steady-state free-precession imaging with a time spatial labeling inversion pulse was conducted using 1D or 2D-PI and fat suppression by chemical shift selective (CHESS) or STIR methods. Three groups of different scan conditions were assigned and compared: group A (1D-PI factor 2 and CHESS), group B (2D-PI factor 2 x 2 and CHESS), and group C (2D-PI factor 2 x 2 and STIR). The artery-to-liver contrast was quantified, and the quality of artery visualization and overall image quality were scored. Results: The mean scan time was 9.5 ± 1.0 min (mean ± standard deviation), 5.9 ± 0.8 min, and 5.8 ± 0.5 min in groups A, B, and C, respectively, and was significantly shorter in groups B and C than in group A (P < 0.01). The artery-to-liver contrast was significantly better in group C than in groups A and B (P < 0.01). The scores for artery visualization and overall image quality were worse in group B than in groups A and C. The differences were statistically significant (P < 0.05) regarding the arterial branches of segments 4 and 8. Between group A and group C, which had similar scores, there were no statistically significant differences. Conclusion: Shortening the acquisition time for selective hepatic artery visualization was feasible without deterioration of the image quality by the combination of 2D-PI and STIR methods. It will facilitate using non-contrast-enhanced MRA in clinical practice.

  14. Non-contrast-enhanced hepatic MR angiography: Do two-dimensional parallel imaging and short tau inversion recovery methods shorten acquisition time without image quality deterioration?

    Shimada, Kotaro; Isoda, Hiroyoshi; Okada, Tomohisa; Kamae, Toshikazu; Arizono, Shigeki; Hirokawa, Yuusuke; Shibata, Toshiya; Togashi, Kaori

    2011-01-01

    Objective: To study whether shortening the acquisition time for selective hepatic artery visualization is feasible without image quality deterioration by adopting two-dimensional (2D) parallel imaging (PI) and short tau inversion recovery (STIR) methods. Materials and methods: Twenty-four healthy volunteers were enrolled. 3D true steady-state free-precession imaging with a time spatial labeling inversion pulse was conducted using 1D or 2D-PI and fat suppression by chemical shift selective (CHESS) or STIR methods. Three groups of different scan conditions were assigned and compared: group A (1D-PI factor 2 and CHESS), group B (2D-PI factor 2 x 2 and CHESS), and group C (2D-PI factor 2 x 2 and STIR). The artery-to-liver contrast was quantified, and the quality of artery visualization and overall image quality were scored. Results: The mean scan time was 9.5 ± 1.0 min (mean ± standard deviation), 5.9 ± 0.8 min, and 5.8 ± 0.5 min in groups A, B, and C, respectively, and was significantly shorter in groups B and C than in group A (P < 0.01). The artery-to-liver contrast was significantly better in group C than in groups A and B (P < 0.01). The scores for artery visualization and overall image quality were worse in group B than in groups A and C. The differences were statistically significant (P < 0.05) regarding the arterial branches of segments 4 and 8. Between group A and group C, which had similar scores, there were no statistically significant differences. Conclusion: Shortening the acquisition time for selective hepatic artery visualization was feasible without deterioration of the image quality by the combination of 2D-PI and STIR methods. It will facilitate using non-contrast-enhanced MRA in clinical practice.

  15. Selection and integration of a network of parallel processors in the real time acquisition system of the 4π DIAMANT multidetector: modeling, realization and evaluation of the software installed on this network

    Guirande, F.

    1997-01-01

    The increase in sensitivity of 4π arrays such as EUROBALL or DIAMANT has led to an increase in the data flow rate into the data acquisition system. While at the electronic level the data flow has been distributed over several data acquisition buses, it is necessary to increase the processing power in the data processing system. This work concerns the modelling and implementation of the software allocated onto an architecture of parallel processors. Object analysis and formal methods were used; benchmarks and the future evolution of this architecture are presented. The thesis consists of two parts. Part A, devoted to 'Nuclear Spectroscopy with 4π multidetectors', contains a first chapter entitled 'The Physics of 4π multidetectors' and a second chapter entitled 'Integral architecture of 4π multidetectors'. Part B, devoted to 'Parallel acquisition system of DIAMANT', contains three chapters entitled 'Material architecture', 'Software architecture' and 'Validation and Performances'. Four appendices and a glossary of terms close this work. (author)

  16. Selection and integration of a network of parallel processors in the real time acquisition system of the 4π DIAMANT multidetector: modeling, realization and evaluation of the software installed on this network; Choix et integration d'un reseau de processeurs paralleles dans le systeme d'acquisition temps reel du multidetecteur 4π DIAMANT: modelisation, realisation et evaluation du logiciel implante sur ce reseau

    Guirande, F. [Ecole Doctorale des Sciences Physiques et de l'Ingenieur, Bordeaux-1 Univ., 33 (France)]

    1997-07-11

    The increase in sensitivity of 4π arrays such as EUROBALL or DIAMANT has led to an increase in the data flow rate into the data acquisition system. While at the electronic level the data flow has been distributed over several data acquisition buses, it is necessary to increase the processing power in the data processing system. This work concerns the modelling and implementation of the software allocated onto an architecture of parallel processors. Object analysis and formal methods were used; benchmarks and the future evolution of this architecture are presented. The thesis consists of two parts. Part A, devoted to 'Nuclear Spectroscopy with 4π multidetectors', contains a first chapter entitled 'The Physics of 4π multidetectors' and a second chapter entitled 'Integral architecture of 4π multidetectors'. Part B, devoted to 'Parallel acquisition system of DIAMANT', contains three chapters entitled 'Material architecture', 'Software architecture' and 'Validation and Performances'. Four appendices and a term glossary close this work. (author) 58 refs.

  17. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique.

    Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-06-24

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.
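
    The word-length trade-off behind the partial fixed-point scheme can be sketched by quantizing the FFT input at several fractional bit widths and measuring the deviation from a double-precision reference. This is only a crude lower bound on the error of a true fixed-point FFT, which also quantizes every butterfly stage; the sizes and bit widths are illustrative:

```python
import numpy as np

def quantize(x, frac_bits):
    """Round to a signed fixed-point grid with frac_bits fractional bits."""
    scale = 2.0 ** frac_bits
    return (np.round(x.real * scale) / scale
            + 1j * np.round(x.imag * scale) / scale)

rng = np.random.default_rng(1)
n = 1024
signal = rng.uniform(-1, 1, n) + 1j * rng.uniform(-1, 1, n)
reference = np.fft.fft(signal)          # double-precision reference

errors = {}
for frac_bits in (8, 12, 16):
    approx = np.fft.fft(quantize(signal, frac_bits))
    errors[frac_bits] = (np.max(np.abs(approx - reference))
                         / np.max(np.abs(reference)))
# More fractional bits -> smaller spectral error; a word length would be
# chosen as the smallest width whose error meets the imaging-quality budget.
```

    An error propagation model like the paper's plays the same role analytically: it predicts how input and per-stage quantization noise accumulates through the FFT so the word length can be fixed without exhaustive simulation.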

  18. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique

    Chen Yang

    2017-06-01

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.

  19. Dynamic MRI of the liver with parallel acquisition technique. Characterization of focal liver lesions and analysis of the hepatic vasculature in a single MRI session

    Heilmaier, C.; Sutter, R.; Lutz, A.M.; Willmann, J.K.; Seifert, B.

    2008-01-01

    Purpose: To retrospectively evaluate the performance of breath-hold contrast-enhanced 3D dynamic parallel gradient echo MRI (pMRT) for the characterization of focal liver lesions (standard of reference: histology) and for the analysis of the hepatic vasculature (standard of reference: contrast-enhanced 64-detector row computed tomography; MSCT) in a single MRI session. Materials and methods: Two blinded readers independently analyzed preoperative pMRT data sets (1.5-T MRI) of 45 patients (23 men, 22 women; 28-77 years, average age 48 years) with a total of 68 focal liver lesions with regard to the image quality of the hepatic arteries, portal and hepatic veins, the presence of variant anatomy of the hepatic vasculature, as well as the presence of portal vein thrombosis and hemodynamically significant arterial stenosis. In addition, both readers were asked to identify and characterize focal liver lesions. Imaging parameters of pMRT were: TR/TE/matrix/slice thickness/acquisition time: 3.1 ms/1.4 ms/384 x 224/4 mm/15-17 s. MSCT was performed with a pitch of 1.2, an effective slice thickness of 1 mm and a matrix of 512 x 512. Results: Based on histology, the 68 liver lesions comprised 42 hepatocellular carcinomas (HCC), 20 metastases, 3 cholangiocellular carcinomas (CCC), as well as 1 dysplastic nodule, 1 focal nodular hyperplasia (FNH) and 1 atypical hemangioma. Overall, the diagnostic accuracy was high for both readers (91-100%) in the characterization of these focal liver lesions, with excellent interobserver agreement (κ-values of 0.89 [metastases], 0.97 [HCC] and 1 [CCC]). On average, the image quality of all vessels under consideration was rated good or excellent in 89% (reader 1) and 90% (reader 2). Anatomical variants of the hepatic arteries, hepatic veins and portal vein, as well as thrombosis of the portal vein, were reliably detected by pMRT. Significant arterial stenosis was found with a sensitivity between 86% and 100% and an excellent interobserver agreement (κ

  20. Improving parallel imaging by jointly reconstructing multi-contrast data.

    Bilgic, Berkin; Kim, Tae Hyung; Liao, Congyu; Manhard, Mary Kate; Wald, Lawrence L; Haldar, Justin P; Setsompop, Kawin

    2018-08-01

    To develop parallel imaging techniques that simultaneously exploit coil sensitivity encoding, image phase prior information, similarities across multiple images, and complementary k-space sampling for highly accelerated data acquisition. We introduce joint virtual coil (JVC)-generalized autocalibrating partially parallel acquisitions (GRAPPA) to jointly reconstruct data acquired with different contrast preparations, and show its application in 2D, 3D, and simultaneous multi-slice (SMS) acquisitions. We extend the joint parallel imaging concept to exploit limited support and smooth phase constraints through Joint (J-) LORAKS formulation. J-LORAKS allows joint parallel imaging from limited autocalibration signal region, as well as permitting partial Fourier sampling and calibrationless reconstruction. We demonstrate highly accelerated 2D balanced steady-state free precession with phase cycling, SMS multi-echo spin echo, 3D multi-echo magnetization-prepared rapid gradient echo, and multi-echo gradient recalled echo acquisitions in vivo. Compared to conventional GRAPPA, proposed joint acquisition/reconstruction techniques provide more than 2-fold reduction in reconstruction error. JVC-GRAPPA takes advantage of additional spatial encoding from phase information and image similarity, and employs different sampling patterns across acquisitions. J-LORAKS achieves a more parsimonious low-rank representation of local k-space by considering multiple images as additional coils. Both approaches provide dramatic improvement in artifact and noise mitigation over conventional single-contrast parallel imaging reconstruction. Magn Reson Med 80:619-632, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
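
    The GRAPPA building block that JVC-GRAPPA extends — learning k-space interpolation weights from a fully sampled autocalibration (ACS) region and applying them to un-acquired lines — can be sketched in 1D. A minimal sketch on synthetic data; the coil count, the sensitivity model (three spatial harmonics, chosen so the interpolation is exact here), and all indices are made up:

```python
import numpy as np

rng = np.random.default_rng(2)
n, ncoil = 64, 4
pos = np.arange(n)

# Synthetic object and smooth coil sensitivities (3 spatial harmonics each).
image = np.exp(-((pos - 30) / 1.5) ** 2)
harm = np.exp(2j * np.pi * np.outer([-1, 0, 1], pos) / n)
sens = (rng.standard_normal((ncoil, 3))
        + 1j * rng.standard_normal((ncoil, 3))) @ harm
kspace = np.fft.fft(sens * image, axis=1)      # (ncoil, n)

# R = 2 GRAPPA in 1D: learn weights that synthesize each odd k-space line
# from its two even neighbours across all coils, using the ACS region.
def sources(m):
    return kspace[:, [m - 1, m + 1]].ravel()   # (2 * ncoil,)

acs = range(21, 44, 2)                         # odd lines inside the ACS
A = np.array([sources(m) for m in acs])        # (n_acs, 2 * ncoil)
b = np.array([kspace[:, m] for m in acs])      # (n_acs, ncoil)
weights, *_ = np.linalg.lstsq(A, b, rcond=None)

# Synthesize a "missing" odd line outside the ACS and compare to the truth.
synth = sources(9) @ weights                   # (ncoil,)
err = np.max(np.abs(synth - kspace[:, 9]))
```

    JVC-GRAPPA generalizes exactly this calibration step by treating phase-augmented virtual coils and other contrasts as additional channels in the source vectors, and J-LORAKS replaces the explicit kernel with a low-rank model of the same local k-space neighbourhoods.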

  1. View-sharing in keyhole imaging: Partially compressed central k-space acquisition in time-resolved MRA at 3.0 T

    Hadizadeh, Dariusch R., E-mail: Dariusch.Hadizadeh@ukb.uni-bonn.de [University of Bonn, Department of Radiology, Sigmund-Freud-Strasse 25, 53127 Bonn (Germany); Gieseke, Juergen [University of Bonn, Department of Radiology, Sigmund-Freud-Strasse 25, 53127 Bonn (Germany); Philips Healthcare, Best (Netherlands); Beck, Gabriele; Geerts, Liesbeth [Philips Healthcare, Best (Netherlands); Kukuk, Guido M. [University of Bonn, Department of Radiology, Sigmund-Freud-Strasse 25, 53127 Bonn (Germany); Bostroem, Azize [Department of Neurosurgery, Sigmund-Freud-Strasse 25, 53127 Bonn, Deutschland (Germany); Urbach, Horst; Schild, Hans H.; Willinek, Winfried A. [University of Bonn, Department of Radiology, Sigmund-Freud-Strasse 25, 53127 Bonn (Germany)

    2011-11-15

    Introduction: Time-resolved contrast-enhanced magnetic resonance (MR) angiography (CEMRA) of the intracranial vasculature has proved its clinical value for the evaluation of cerebral vascular disease in cases where both flow hemodynamics and morphology are important. The purpose of this study was to evaluate a combination of view-sharing with keyhole imaging to increase spatial and temporal resolution of time-resolved CEMRA at 3.0 T. Methods: Alternating view-sharing was combined with randomly segmented k-space ordering, keyhole imaging, partial Fourier and parallel imaging (4DkvsMRA). 4DkvsMRA was evaluated using varying compression factors (80-100), resulting in spatial resolutions ranging from (1.1 x 1.1 x 1.4) to (0.96 x 0.96 x 0.95) mm³ and temporal resolutions ranging from 586 to 288 ms per dynamic scan in three protocols in 10 healthy volunteers and seven patients (17 subjects). DSA correlation was available in four patients with cerebral arteriovenous malformations (cAVMs) and one patient with cerebral teleangiectasia. Results: 4DkvsMRA was successfully performed in all subjects and showed clear depiction of arterial and venous phases with diagnostic image quality. At the maximum view-sharing compression factor (100), a 'flickering' artefact was observed. Conclusion: View-sharing in keyhole imaging allows for increased spatial and temporal resolution in time-resolved MRA.
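
    The keyhole principle used here — reacquiring only the contrast-dominated k-space centre each dynamic frame while sharing the periphery from a reference acquisition — can be sketched in 1D. The sizes and names are illustrative, not the study's protocol parameters:

```python
import numpy as np

n = 128                  # phase-encode lines in a full frame
keyhole = 24             # central lines reacquired every dynamic frame

rng = np.random.default_rng(3)
# Stand-in for a fully sampled reference k-space (one phase-encode axis).
reference = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def keyhole_frame(reference, centre_update):
    """Merge a freshly acquired k-space centre into the shared periphery."""
    frame = reference.copy()
    lo = n // 2 - keyhole // 2
    frame[lo:lo + keyhole] = centre_update   # contrast lives in the centre
    return frame

update = rng.standard_normal(keyhole) + 1j * rng.standard_normal(keyhole)
frame = keyhole_frame(reference, update)

# Only keyhole/n of the lines are reacquired per dynamic frame, which is
# where the gain in temporal resolution comes from.
speedup = n / keyhole
```

    View-sharing stacks on top of this by borrowing peripheral segments from neighbouring time frames rather than a single static reference, which pushes the compression factor higher at the cost of the temporal-blurring ("flickering") artefact noted above.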

  2. View-sharing in keyhole imaging: Partially compressed central k-space acquisition in time-resolved MRA at 3.0 T

    Hadizadeh, Dariusch R.; Gieseke, Juergen; Beck, Gabriele; Geerts, Liesbeth; Kukuk, Guido M.; Bostroem, Azize; Urbach, Horst; Schild, Hans H.; Willinek, Winfried A.

    2011-01-01

    Introduction: Time-resolved contrast-enhanced magnetic resonance (MR) angiography (CEMRA) of the intracranial vasculature has proved its clinical value for the evaluation of cerebral vascular disease in cases where both flow hemodynamics and morphology are important. The purpose of this study was to evaluate a combination of view-sharing with keyhole imaging to increase the spatial and temporal resolution of time-resolved CEMRA at 3.0 T. Methods: Alternating view-sharing was combined with randomly segmented k-space ordering, keyhole imaging, partial Fourier and parallel imaging (4DkvsMRA). 4DkvsMRA was evaluated using varying compression factors (80-100), resulting in spatial resolutions ranging from 1.1 x 1.1 x 1.4 mm³ to 0.96 x 0.96 x 0.95 mm³ and temporal resolutions ranging from 586 to 288 ms per dynamic scan in three protocols in 10 healthy volunteers and seven patients (17 subjects). DSA correlation was available in four patients with cerebral arteriovenous malformations (cAVMs) and one patient with cerebral telangiectasia. Results: 4DkvsMRA was successfully performed in all subjects and showed clear depiction of arterial and venous phases with diagnostic image quality. At the maximum view-sharing compression factor (=100), a 'flickering' artefact was observed. Conclusion: View-sharing in keyhole imaging allows for increased spatial and temporal resolution in time-resolved MRA.
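The keyhole principle underlying 4DkvsMRA (re-acquiring only the central, contrast-defining k-space lines for each dynamic frame and borrowing the periphery from a reference scan) can be sketched in one dimension. The helper below is a hypothetical numpy illustration, not the authors' implementation:

```python
import numpy as np

def keyhole_recon(ref_kspace, dyn_center, keep):
    """Merge the freshly acquired central 2*keep k-space lines of a dynamic
    frame into the reference periphery, then reconstruct (1D keyhole sketch)."""
    k = ref_kspace.copy()
    c = len(k) // 2
    k[c - keep:c + keep] = dyn_center  # only the center is re-acquired per frame
    return np.fft.ifft(np.fft.ifftshift(k))

# Reference object and its centered k-space
ref = np.cos(np.linspace(0, 2 * np.pi, 16, endpoint=False))
k_ref = np.fft.fftshift(np.fft.fft(ref))
# A frame whose re-acquired center matches the reference reproduces it exactly
frame = keyhole_recon(k_ref, k_ref[8 - 4:8 + 4], keep=4)
```

Per frame only 8 of 16 lines are acquired here; view-sharing as described above would additionally share acquired lines between neighboring frames to raise the apparent frame rate.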

  3. Utilizing generalized autocalibrating partial parallel acquisition (GRAPPA) to achieve high-resolution contrast-enhanced MR angiography of hepatic artery: Initial experience in orthotopic liver transplantation candidates

    Xu Pengju; Yan Fuhua; Wang Jianhua; Lin Jiang; Fan Jia

    2007-01-01

    Objective: To evaluate the feasibility of using GRAPPA to acquire high-resolution 3D contrast-enhanced MR angiography (CE-MRA) of the hepatic artery and the value of GRAPPA for displaying vessel anatomy. Materials and methods: High-resolution CE-MRA using GRAPPA was performed in 67 orthotopic liver transplantation recipient candidates. Signal intensity (SI) and relative SI, i.e., Cv-ro (vessel-to-liver contrast), of the aorta and the hepatic common artery (HCA) were measured. The SI and the relative SI were compared and analyzed using a t-test. For qualitative evaluation, the vessel visualization quality and the order of depicted hepatic artery branches were evaluated by two radiologists independently and assessed by weighted kappa analysis. The depiction of hepatic arterial anatomy and variations was evaluated, and the results were correlated with the findings at surgery. Results: The mean SI values were 283.29 ± 65.07 (mean ± S.D.) for the aorta and 283.16 ± 64.07 for the HCA, respectively. The mean relative SI values were 0.698 ± 0.09 for the aorta and 0.696 ± 0.09 for the HCA, respectively. Homogeneous enhancement between the aorta and the HCA was confirmed by statistically insignificant differences (p-values of 0.89 for mean SI and 0.12 for mean relative SI, respectively). The average score for vessel visualization ranged from good to excellent for the different artery segments. Overall interobserver agreement in the visualization of the different artery segments was excellent (kappa value > 0.80). The distal intrahepatic segmental arteries were well delineated for the majority of patients, with excellent interobserver agreement. Normal hepatic arterial anatomy was correctly demonstrated in 53 patients, and arterial anomalies were accurately detected on high-resolution MRA images in all 14 patients.
Conclusion: High-resolution hepatic artery MRA acquired using GRAPPA in a reproducible manner excellently depicts and delineates small vessels and can be routinely used for evaluating OLT candidates.

  4. Utilizing generalized autocalibrating partial parallel acquisition (GRAPPA) to achieve high-resolution contrast-enhanced MR angiography of hepatic artery: Initial experience in orthotopic liver transplantation candidates

    Xu Pengju [Department of Radiology, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai (China)]. E-mail: xpjbfc@163.com; Yan Fuhua [Department of Radiology, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai (China)]. E-mail: yanfuhua@yahoo.com; Wang Jianhua [Department of Radiology, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai (China); Lin Jiang [Department of Radiology, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai (China); Fan Jia [Liver Cancer Institute, Zhongshan Hospital, Fudan University, 180 Fenglin Road, Shanghai (China)

    2007-03-15

    Objective: To evaluate the feasibility of using GRAPPA to acquire high-resolution 3D contrast-enhanced MR angiography (CE-MRA) of the hepatic artery and the value of GRAPPA for displaying vessel anatomy. Materials and methods: High-resolution CE-MRA using GRAPPA was performed in 67 orthotopic liver transplantation recipient candidates. Signal intensity (SI) and relative SI, i.e., Cv-ro (vessel-to-liver contrast), of the aorta and the hepatic common artery (HCA) were measured. The SI and the relative SI were compared and analyzed using a t-test. For qualitative evaluation, the vessel visualization quality and the order of depicted hepatic artery branches were evaluated by two radiologists independently and assessed by weighted kappa analysis. The depiction of hepatic arterial anatomy and variations was evaluated, and the results were correlated with the findings at surgery. Results: The mean SI values were 283.29 ± 65.07 (mean ± S.D.) for the aorta and 283.16 ± 64.07 for the HCA, respectively. The mean relative SI values were 0.698 ± 0.09 for the aorta and 0.696 ± 0.09 for the HCA, respectively. Homogeneous enhancement between the aorta and the HCA was confirmed by statistically insignificant differences (p-values of 0.89 for mean SI and 0.12 for mean relative SI, respectively). The average score for vessel visualization ranged from good to excellent for the different artery segments. Overall interobserver agreement in the visualization of the different artery segments was excellent (kappa value > 0.80). The distal intrahepatic segmental arteries were well delineated for the majority of patients, with excellent interobserver agreement. Normal hepatic arterial anatomy was correctly demonstrated in 53 patients, and arterial anomalies were accurately detected on high-resolution MRA images in all 14 patients.
Conclusion: High-resolution hepatic artery MRA acquired using GRAPPA in a reproducible manner excellently depicts and delineates small vessels and can be routinely used for evaluating OLT candidates.

  5. Parallel MR imaging.

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
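The SENSE-type reconstruction described in this abstract can be made concrete in a few lines: with acceleration R = 2, each pixel of a half-FOV aliased coil image is a sensitivity-weighted sum of two true pixels, so the true values are recovered by solving a small linear system per location. A minimal 1D numpy sketch with simulated data (not from any of the studies listed here):

```python
import numpy as np

def sense_unfold(aliased, sens):
    """Unfold R=2 aliased coil images by solving a 2x2 system per pixel pair (SENSE)."""
    n_coils, n_half = aliased.shape
    x = np.zeros(2 * n_half, dtype=complex)
    for n in range(n_half):
        # Encoding matrix: rows = coils, columns = the two pixels aliased onto n
        E = np.stack([sens[:, n], sens[:, n + n_half]], axis=1).astype(complex)
        sol, *_ = np.linalg.lstsq(E, aliased[:, n], rcond=None)
        x[n], x[n + n_half] = sol
    return x

# Simulated 1D "image" and two coils with known, linearly varying sensitivities
N = 8
img = np.arange(1, N + 1, dtype=float)
sens = np.stack([np.linspace(1.0, 0.2, N), np.linspace(0.2, 1.0, N)])
coil_imgs = sens * img
# Undersample k-space by 2 (keep every other line) -> half-FOV aliased images
aliased = np.stack([np.fft.ifft(np.fft.fft(ci)[::2]) for ci in coil_imgs])
recon = sense_unfold(aliased, sens)
print(np.round(recon.real, 6))  # recovers the original image, 1..8
```

The per-pixel system is only solvable where the coil sensitivities differ; poorly conditioned systems are what the g-factor noise amplification mentioned in the surveys below quantifies.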

  6. Parallel magnetic resonance imaging

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)

  7. Pre-steady-state kinetic analysis of 1-deoxy-D-xylulose-5-phosphate reductoisomerase from Mycobacterium tuberculosis reveals partially rate-limiting product release by parallel pathways.

    Liu, Juan; Murkin, Andrew S

    2012-07-03

    As part of the non-mevalonate pathway for the biosynthesis of the isoprenoid precursor isopentenyl pyrophosphate, 1-deoxy-D-xylulose-5-phosphate (DXP) reductoisomerase (DXR) catalyzes the conversion of DXP into 2-C-methyl-D-erythritol 4-phosphate (MEP) by consecutive isomerization and NADPH-dependent reduction reactions. Because this pathway is essential to many infectious organisms but is absent in humans, DXR is a target for drug discovery. In an attempt to characterize its kinetic mechanism and identify rate-limiting steps, we present the first complete transient kinetic investigation of DXR. Stopped-flow fluorescence measurements with Mycobacterium tuberculosis DXR (MtDXR) revealed that NADPH and MEP bind to the free enzyme and that the two bind together to generate a nonproductive ternary complex. Unlike the Escherichia coli orthologue, MtDXR exhibited a burst in the oxidation of NADPH during pre-steady-state reactions, indicating a partially rate-limiting step follows chemistry. By monitoring NADPH fluorescence during these experiments, the transient generation of MtDXR·NADPH·MEP was observed. Global kinetic analysis supports a model involving random substrate binding and ordered release of NADP(+) followed by MEP. The partially rate-limiting release of MEP occurs via two pathways, directly from the binary complex and indirectly via the MtDXR·NADPH·MEP complex, with the partitioning dependent on NADPH concentration. Previous mechanistic studies, including kinetic isotope effects and product inhibition, are discussed in light of this kinetic mechanism.

  8. Molecular characterization of human T-cell lymphotropic virus type 1 full and partial genomes by Illumina massively parallel sequencing technology.

    Rodrigo Pessôa

    Full Text Available BACKGROUND: Here, we report on the partial and full-length genomic (FLG) variability of HTLV-1 sequences from 90 well-characterized subjects, including 48 HTLV-1 asymptomatic carriers (ACs), 35 HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP) and 7 adult T-cell leukemia/lymphoma (ATLL) patients, using an Illumina paired-end protocol. METHODS: Blood samples were collected from 90 individuals, and DNA was extracted from the PBMCs to measure the proviral load and to amplify the HTLV-1 FLG from two overlapping fragments. The amplified PCR products were subjected to deep sequencing. The sequencing data were assembled, aligned, and mapped against the HTLV-1 genome with sufficient genetic resemblance and utilized for further phylogenetic analysis. RESULTS: A high-throughput sequencing-by-synthesis instrument was used to obtain an average of 3210- and 5200-fold coverage of the partial (n = 14) and FLG (n = 76) data from the HTLV-1 strains, respectively. The results based on the phylogenetic trees of consensus sequences from partial and FLGs revealed that 86 (95.5%) individuals were infected with the transcontinental sub-subtypes of the cosmopolitan subtype (aA) and that 4 individuals (4.5%) were infected with the Japanese sub-subtypes (aB). A comparison of the nucleotide and amino acids of the FLG between the three clinical settings yielded no correlation between the sequenced genotype and clinical outcomes. The evolutionary relationships among the HTLV sequences were inferred from nucleotide sequence, and the results are consistent with the hypothesis that there were multiple introductions of the transcontinental subtype in Brazil. CONCLUSIONS: This study has increased the number of subtype aA full-length genomes from 8 to 81 and HTLV-1 aB from 2 to 5 sequences. The overall data confirmed that the cosmopolitan transcontinental sub-subtypes were the most prevalent in the Brazilian population. It is hoped that this valuable genomic data

  9. Molecular characterization of human T-cell lymphotropic virus type 1 full and partial genomes by Illumina massively parallel sequencing technology.

    Pessôa, Rodrigo; Watanabe, Jaqueline Tomoko; Nukui, Youko; Pereira, Juliana; Casseb, Jorge; Kasseb, Jorge; de Oliveira, Augusto César Penalva; Segurado, Aluisio Cotrim; Sanabani, Sabri Saeed

    2014-01-01

    Here, we report on the partial and full-length genomic (FLG) variability of HTLV-1 sequences from 90 well-characterized subjects, including 48 HTLV-1 asymptomatic carriers (ACs), 35 HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP) and 7 adult T-cell leukemia/lymphoma (ATLL) patients, using an Illumina paired-end protocol. Blood samples were collected from 90 individuals, and DNA was extracted from the PBMCs to measure the proviral load and to amplify the HTLV-1 FLG from two overlapping fragments. The amplified PCR products were subjected to deep sequencing. The sequencing data were assembled, aligned, and mapped against the HTLV-1 genome with sufficient genetic resemblance and utilized for further phylogenetic analysis. A high-throughput sequencing-by-synthesis instrument was used to obtain an average of 3210- and 5200-fold coverage of the partial (n = 14) and FLG (n = 76) data from the HTLV-1 strains, respectively. The results based on the phylogenetic trees of consensus sequences from partial and FLGs revealed that 86 (95.5%) individuals were infected with the transcontinental sub-subtypes of the cosmopolitan subtype (aA) and that 4 individuals (4.5%) were infected with the Japanese sub-subtypes (aB). A comparison of the nucleotide and amino acids of the FLG between the three clinical settings yielded no correlation between the sequenced genotype and clinical outcomes. The evolutionary relationships among the HTLV sequences were inferred from nucleotide sequence, and the results are consistent with the hypothesis that there were multiple introductions of the transcontinental subtype in Brazil. This study has increased the number of subtype aA full-length genomes from 8 to 81 and HTLV-1 aB from 2 to 5 sequences. The overall data confirmed that the cosmopolitan transcontinental sub-subtypes were the most prevalent in the Brazilian population. It is hoped that this valuable genomic data will add to our current understanding of the

  10. Parallel, Rapid Diffuse Optical Tomography of Breast

    Yodh, Arjun

    2001-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  11. Parallel, Rapid Diffuse Optical Tomography of Breast

    Yodh, Arjun

    2002-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  12. Estimating the Acquisition Price of Enshi Yulu Young Tea Shoots Using Near-Infrared Spectroscopy by the Back Propagation Artificial Neural Network Model in Conjunction with Backward Interval Partial Least Squares Algorithm

    Wang, Sh.-P.; Gong, Z.-M.; Su, X.-Zh.; Liao, J.-Zh.

    2017-09-01

    Near-infrared spectroscopy and the back propagation artificial neural network model in conjunction with the backward interval partial least squares algorithm were used to estimate the purchasing price of Enshi yulu young tea shoots. The near-infrared spectral regions most relevant to the tea shoot price model (5700.5-5935.8, 7613.6-7848.9, 8091.8-8327.1, 8331-8566.2, 9287.5-9522.5, and 9526.6-9761.9 cm⁻¹) were selected using the backward interval partial least squares algorithm. The first five principal components, which explained 99.96% of the variability in the selected spectral data, were then used to calibrate the back propagation artificial neural network model of the tea shoot purchasing price. The performance of this model (coefficient of determination for prediction 0.9724; root-mean-square error of prediction 4.727) was superior to those of the back propagation artificial neural network model (coefficient of determination for prediction 0.8653, root-mean-square error of prediction 5.125) and the backward interval partial least squares model (coefficient of determination for prediction 0.5932, root-mean-square error of prediction 25.125). The acquisition price model with the combined backward interval partial least squares-back propagation artificial neural network algorithms can evaluate the price of Enshi yulu tea shoots accurately, quickly and objectively.

  13. Dynamic motion analysis of fetuses with central nervous system disorders by cine magnetic resonance imaging using fast imaging employing steady-state acquisition and parallel imaging: a preliminary result.

    Guo, Wan-Yuo; Ono, Shigeki; Oi, Shizuo; Shen, Shu-Huei; Wong, Tai-Tong; Chung, Hsiao-Wen; Hung, Jeng-Hsiu

    2006-08-01

    The authors present a novel cine magnetic resonance (MR) imaging, two-dimensional (2D) fast imaging employing steady-state acquisition (FIESTA) technique with parallel imaging. It achieves temporal resolution at less than half a second as well as high spatial resolution cine imaging free of motion artifacts for evaluating the dynamic motion of fetuses in utero. The information obtained is used to predict postnatal outcome. Twenty-five fetuses with anomalies were studied. Ultrasonography demonstrated severe abnormalities in five of the fetuses; the other 20 fetuses constituted a control group. The cine fetal MR imaging demonstrated fetal head, neck, trunk, extremity, and finger as well as swallowing motions. Imaging findings were evaluated and compared in fetuses with major central nervous system (CNS) anomalies in five cases and minor CNS, non-CNS, or no anomalies in 20 cases. Normal motility was observed in the latter group. For fetuses in the former group, those with abnormal motility failed to survive after delivery, whereas those with normal motility survived with functioning preserved. The power deposition of radiofrequency, presented as specific absorption rate (SAR), was calculated. The SAR of FIESTA was approximately 13 times lower than that of conventional MR imaging of fetuses obtained using single-shot fast spin echo sequences. The following conclusions are drawn: 1) Fetal motion is no longer a limitation for prenatal imaging after the implementation of parallel imaging with 2D FIESTA, 2) Cine MR imaging illustrates fetal motion in utero with high clinical reliability, 3) For cases involving major CNS anomalies, cine MR imaging provides information on extremity motility in fetuses and serves as a prognostic indicator of postnatal outcome, and 4) The cine MR used to observe fetal activity is technically 2D and conceptually three-dimensional. It provides four-dimensional information for making proper and timely obstetrical and/or postnatal management

  14. Parallel rendering

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  15. Parallel computations

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  16. Parallel algorithms

    Casanova, Henri; Robert, Yves

    2008-01-01

    "…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  17. Parallel grid population

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
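The two phases described above can be sketched as follows: phase 1 has each worker bin its own distinct object set against all grid portions, and phase 2 has each worker populate one portion. This is a hypothetical 1D interval version using Python threads; all names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_populate(objects, grid_min, grid_max, n_workers):
    """Two-phase parallel grid population (1D sketch of the scheme above)."""
    span = (grid_max - grid_min) / n_workers
    portions = [(grid_min + i * span, grid_min + (i + 1) * span)
                for i in range(n_workers)]
    # divide the objects into n distinct sets, one per worker
    chunks = [objects[i::n_workers] for i in range(n_workers)]

    def phase1(chunk):
        # decide which portion(s) at least partially bound each object (an interval)
        return [(i, obj) for obj in chunk
                for i, (lo, hi) in enumerate(portions)
                if obj[0] < hi and obj[1] > lo]

    with ThreadPoolExecutor(n_workers) as pool:
        pairs = [p for res in pool.map(phase1, chunks) for p in res]
        inbox = {i: [obj for j, obj in pairs if j == i] for i in range(n_workers)}
        # phase 2: each worker populates (here: sorts and stores) its own portion
        cells = dict(pool.map(lambda i: (i, sorted(inbox[i])), range(n_workers)))
    return cells

cells = parallel_populate([(0.5, 1.5), (2.2, 2.8), (1.4, 2.1)], 0.0, 3.0, 3)
```

Note that an object spanning a portion boundary, such as (1.4, 2.1) here, is assigned to every portion that partially bounds it, matching the claim's wording.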

  18. Uma interface lab-made para aquisição de sinais analógicos instrumentais via porta paralela do microcomputador A lab-made interface for acquisition of instrumental analog signals at the parallel port of a microcomputer

    Edvaldo da Nóbrega Gaião

    2004-10-01

    Full Text Available A lab-made interface for acquisition of instrumental analog signals between 0 and 5 V at a frequency up to 670 kHz at the parallel port of a microcomputer is described. Since it uses few and small components, it was built into the connector of a printer parallel cable. Its performance was evaluated by monitoring the signals of four different instruments, and similar analytical curves were obtained with the interface and from readings of the instruments' displays. Because the components are cheap (~US$35.00) and easy to get, the proposed interface is a simple and economical alternative for data acquisition in small laboratories for routine work, research and teaching.

  19. A survey of parallel multigrid algorithms

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.
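As a concrete reference point for the "typical multigrid algorithm" mentioned above, a two-grid cycle for the 1D Poisson equation -u'' = f can be sketched with weighted-Jacobi smoothing, injection restriction, linear prolongation, and a direct coarse solve. This is a minimal serial illustration (all names are my own); a parallel variant would distribute these grids across processors:

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoother for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h, n_cycles=50):
    """Smooth, restrict the residual, solve the coarse problem directly,
    prolong the correction, and smooth again (one coarse level)."""
    n = len(u) - 1
    for _ in range(n_cycles):
        u = jacobi(u, f, h, 3)
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        rc = r[::2].copy()                      # simple injection restriction
        nc, hc = n // 2, 2 * h
        A = (np.diag(2.0 * np.ones(nc - 1))     # coarse 3-point Laplacian
             - np.diag(np.ones(nc - 2), 1)
             - np.diag(np.ones(nc - 2), -1)) / (hc * hc)
        ec = np.zeros(nc + 1)
        ec[1:-1] = np.linalg.solve(A, rc[1:-1])
        # linear prolongation of the coarse correction back to the fine grid
        u = u + np.interp(np.arange(n + 1) / 2.0, np.arange(nc + 1), ec)
        u = jacobi(u, f, h, 3)
    return u

# -u'' = pi^2 sin(pi x) on [0, 1] has the exact solution u = sin(pi x)
n = 32
h = 1.0 / n
x = np.arange(n + 1) * h
u = two_grid(np.zeros(n + 1), np.pi ** 2 * np.sin(np.pi * x), h)
```

The mapping question the survey raises is where, in a parallel setting, the fine- and coarse-grid portions of this cycle are placed across processors, since coarse grids leave many processors underutilized.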

  20. Combinatorics of spreads and parallelisms

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  1. Parallel computation

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    Work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to current basic and applied problems of nuclear and particle physics. For applications utilizing the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking in the range of low energies down to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field refers to simulations for nuclear medicine applications such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. In particular, these calculations are suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other work in the same field refers to the simulation of electron channelling in crystals and the simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment.

  2. Aspects of computation on asynchronous parallel processors

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  3. Parallel R

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  4. Parallel Lines

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  5. New algorithms for parallel MRI

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their variants, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we use Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.
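The Landweber-Kaczmarz idea can be illustrated on a toy linear model: cycle through the per-coil systems A_i x = y_i, applying one Landweber step to each in turn. The operators below are dense random stand-ins for the undersampled Fourier encoding combined with coil sensitivities, and the sparsity constraint is omitted; this is a sketch of the iteration scheme only, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.standard_normal(6)
# Two hypothetical "coil" forward operators and their measured data
A = [rng.standard_normal((4, 6)) for _ in range(2)]
y = [Ai @ x_true for Ai in A]

# Step sizes chosen as 1 / ||A_i||^2 so each Landweber step is contractive
steps = [1.0 / np.linalg.norm(Ai, 2) ** 2 for Ai in A]

x = np.zeros(6)
for sweep in range(20_000):
    for Ai, yi, s in zip(A, y, steps):  # Kaczmarz-style cyclic sweep over coils
        x = x + s * Ai.T @ (yi - Ai @ x)  # one Landweber step on this block
```

Because the stacked system is consistent and has full column rank here, the cyclic iteration converges to the true solution; derivatives of a non-linear forward model are never needed, which is the cost advantage the abstract mentions.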

  6. Data acquisition

    Clout, P.N.

    1982-01-01

    Data acquisition systems are discussed for molecular biology experiments using synchrotron radiation sources. The data acquisition system requirements are considered. The components of the solution are described including hardwired solutions and computer-based solutions. Finally, the considerations for the choice of the computer-based solution are outlined. (U.K.)

  7. The Chateau de Cristal data acquisition system

    Villard, M.M.

    1987-05-01

    This data acquisition system is built on several dedicated data transfer busses: ADC data readout through the FERA bus, parallel data processing in two VME crates. High data rates and selectivities are achieved via this acquisition structure and newly developed processing units. The system modularity allows various experiments with additional detectors

  8. Efficacy and safety of the partial PPARγ agonist balaglitazone compared with pioglitazone and placebo: A phase III, randomised, parallel-group study in patients with type 2 diabetes on stable insulin therapy

    Henriksen, Kim; Byrjalsen, Inger; Qvist, Per

    2011-01-01

    Treatment of patients with full PPARγ agonists is associated with weight gain, heart failure, peripheral oedema and bone loss. However, the safety of partial PPARγ agonists has not been established in a clinical trial. The BALLET trial aimed to establish the glucose-lowering effects and safety...... in all treatment arms. DXA analyses showed that balaglitazone 10 mg led to less fat and fluid accumulation and no change in bone mineral density when compared to pioglitazone. In the balaglitazone 10 mg treated group, clinically relevant reductions in HbA(1c) and glucose levels were observed, although...... it appeared to be slightly less potent than pioglitazone 45 mg. On the other hand, significantly less fluid and fat accumulation was observed, highlighting this treatment regimen for further studies....

  9. Partial Cancellation

    Full cancellation is desirable, but its complexity requirements are enormous: 4000 tones and 100 users imply billions of flops. Main idea and challenge: to determine which cross-talker to cancel on which "tone" for a given victim. Constraint: total complexity is ...

  10. Development and application of efficient strategies for parallel magnetic resonance imaging

    Breuer, F.

    2006-07-01

    Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required, which are technically challenging to provide. In addition to significant hardware costs, peripheral neuro-stimulation and the surpassing of admissible acoustic noise levels may occur. Today's whole-body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990s. Within the last few years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding, which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image artifacts.
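The coil-encoding principle described above can be illustrated with a toy one-dimensional, SENSE-style unfolding. Everything here is invented for illustration: the coil sensitivity maps are random, the data are noiseless, and the acceleration factor is two.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 128, 4                     # pixels along one column, number of coils
x = rng.random(N)                 # toy 1-D object
S = rng.random((C, N)) + 0.1      # hypothetical coil sensitivity maps

# Twofold undersampling folds pixel i onto pixel i + N//2 in each coil image.
half = N // 2
folded = S[:, :half] * x[:half] + S[:, half:] * x[half:]    # shape (C, half)

# SENSE-style unfolding: per folded pixel, solve a C x 2 least-squares system.
recon = np.empty(N)
for i in range(half):
    E = np.stack([S[:, i], S[:, half + i]], axis=1)         # coil encoding matrix
    sol, *_ = np.linalg.lstsq(E, folded[:, i], rcond=None)
    recon[i], recon[half + i] = sol
```

With four coils, the two unknowns per folded pixel are overdetermined, which is exactly how coil sensitivity information replaces part of the gradient encoding.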

  12. Partial processing

    1978-11-01

    This discussion paper considers the possibility of applying to the recycle of plutonium in thermal reactors a particular method of partial processing based on the PUREX process but named CIVEX to emphasise the differences. The CIVEX process is based primarily on the retention of short-lived fission products. The paper suggests: (1) the recycle of fission products with uranium and plutonium in thermal reactor fuel would be technically feasible; (2) it would, however, take ten years or more to develop the CIVEX process to the point where it could be launched on a commercial scale; (3) since the majority of spent fuel to be reprocessed this century will have been in storage for ten years or more, the recycling of short-lived fission products with the U-Pu would not provide an effective means of making refabricated fuel "inaccessible", because the radioactivity associated with the fission products would have decayed. There would therefore be no advantage in partial processing.

  13. Partial gigantism

    М.М. Karimova

    2017-05-01

    A girl with partial gigantism (enlarged first and second toes of the left foot) is being examined. This condition is a rare and unresolved problem, as the definite reason for its development has not been determined. A wait-and-see strategy is recommended, as well as corrective operations after closure of the growth zones, and the formation of a data pool for generalization and for developing schemes of drug and radiation therapy.

  14. Pattern-Driven Automatic Parallelization

    Christoph W. Kessler

    1996-01-01

    This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  15. Mergers + acquisitions.

    Hoppszallern, Suzanna

    2002-05-01

    The hospital sector in 2001 led the health care field in mergers and acquisitions. Most deals involved a network augmenting its presence within a specific region or in a market adjacent to its primary service area. Analysts expect M&A activity to increase in 2002.

  16. The FINUDA data acquisition system

    Cerello, P.; Marcello, S.; Filippini, V.; Fiore, L.; Gianotti, P.; Raimondo, A.

    1996-07-01

    A parallel, scalable data acquisition system, based on VME, has been developed for use in the FINUDA experiment, scheduled to run at the DAPHNE machine at Frascati starting from 1997. The acquisition software runs on embedded RTPC 8067 processors using the LynxOS operating system. The readout of event fragments is coordinated by a suitable trigger supervisor. Data read by different controllers are transported via a dedicated bus to a global event builder running on a UNIX machine. Commands from and to VME processors are sent via socket-based network protocols. The network hardware is presently Ethernet, but it can easily be changed to optical fiber.

  17. Iterative algorithms for large sparse linear systems on parallel computers

    Adams, L. M.

    1982-01-01

    Algorithms for assembling in parallel the sparse system of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering are developed. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.
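As a minimal illustration of the "linear stationary iterative algorithms" discussed here, a Jacobi sweep updates every unknown independently, so each sweep maps naturally onto an array of processors. This is a generic sketch (assuming a diagonally dominant matrix), not the paper's specific algorithms.

```python
import numpy as np

def jacobi(A, b, iters=1000):
    """Jacobi iteration: x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii.
    Every component update is independent, so a sweep parallelizes across rows."""
    D = np.diag(A)
    R = A - np.diag(D)                 # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D            # all rows could be updated concurrently
    return x

# 1-D Poisson (finite-difference) system, a typical elliptic test problem
n = 10
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = jacobi(A, np.ones(n))
```

On an array architecture, each processor would own a block of rows and exchange only boundary values of x between sweeps, which is the communication pattern such a model of parallel algorithms has to capture.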

  18. Instrument Variables for Reducing Noise in Parallel MRI Reconstruction

    Yuchou Chang

    2017-01-01

    Generalized autocalibrating partially parallel acquisition (GRAPPA) has been a widely used parallel MRI technique. However, noise deteriorates the reconstructed image as the reduction factor increases, or even at a low reduction factor for some noisy datasets. Noise, initially generated by the scanner, propagates noise-related errors during the fitting and interpolation procedures of GRAPPA and distorts the final reconstructed image quality. The basic idea we propose to improve GRAPPA is to remove noise from a system identification perspective. In this paper, we first analyze the GRAPPA noise problem from a noisy input-output system perspective; then, a new framework based on the errors-in-variables (EIV) model is developed for analyzing the noise generation mechanism in GRAPPA and designing a concrete method, instrument variables (IV) GRAPPA, to remove noise. The proposed EIV framework provides possibilities that noiseless GRAPPA reconstruction could be achieved by existing methods that solve the EIV problem other than the IV method. Experimental results show that the proposed reconstruction algorithm can better remove the noise compared to conventional GRAPPA, as validated with both phantom and in vivo brain data.
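The calibration step whose noise behavior the paper analyzes can be sketched as an ordinary least-squares fit on autocalibration (ACS) data. The sizes and the synthetic linear relation below are invented for illustration; a real GRAPPA kernel acts on local k-space neighborhoods across coils.

```python
import numpy as np

rng = np.random.default_rng(1)
C, L = 4, 48                           # coils and ACS columns (toy sizes)
W_true = rng.normal(size=(C, 2 * C))   # synthetic "true" GRAPPA-like kernel

acs_src = rng.normal(size=(2 * C, L))  # acquired neighbour samples, coils stacked
acs_tgt = W_true @ acs_src             # corresponding "missing" ACS samples

# Calibration: fit W so that tgt ~= W @ src, in the least-squares sense.
W = acs_tgt @ np.linalg.pinv(acs_src)

# Reconstruction: apply the fitted kernel to new undersampled source data.
src_new = rng.normal(size=(2 * C, 10))
filled = W @ src_new
```

In practice the source samples are noisy too, which is precisely the errors-in-variables situation the paper addresses: plain least squares on noisy inputs yields a biased kernel, and that bias propagates into every interpolated k-space line.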

  19. Parallel Programming with Intel Parallel Studio XE

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  20. Mergers & Acquisitions

    Fomcenco, Alex

    This dissertation is a legal dogmatic thesis, the goal of which is to describe and analyze the current state of law in Europe in regard to some relevant selected elements related to mergers and acquisitions, and the adviser’s counsel in this regard. Having regard to the topic of the dissertation...... and fiscal neutrality, group-related issues, holding-structure issues, employees, stock exchange listing issues, and corporate nationality....

  1. Parallel-In-Time For Moving Meshes

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  2. Ultrasound Vector Flow Imaging: Part II: Parallel Systems

    Jensen, Jørgen Arendt; Nikolov, Svetoslav Ivanov; Yu, Alfred C. H.

    2016-01-01

    The paper gives a review of the current state-of-the-art in ultrasound parallel acquisition systems for flow imaging using spherical and plane wave emissions. The imaging methods are explained along with the advantages of using these very fast and sensitive velocity estimators. These experimental...... ultrasound imaging for studying brain function in animals. The paper explains the underlying acquisition and estimation methods for fast 2-D and 3-D velocity imaging and gives a number of examples. Future challenges and the potentials of parallel acquisition systems for flow imaging are also discussed....

  3. Practical parallel computing

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  4. Parallel sorting algorithms

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
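For the linear-array model mentioned in this record, the classic example is odd-even transposition sort: each of the n rounds consists of compare-exchange steps between neighbouring positions that are mutually independent and could run on n processors at once. Below is a sequential sketch of that network, not an excerpt from the book.

```python
def odd_even_transposition_sort(a):
    """Odd-even transposition sort: n rounds of independent compare-exchange
    steps, the standard sorting network for a linear array of n processors."""
    a = list(a)
    n = len(a)
    for rnd in range(n):
        start = rnd % 2                      # alternate even and odd phases
        for i in range(start, n - 1, 2):     # these exchanges are independent
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

On an actual linear array, each phase is one parallel step, giving O(n) time with n processors, compared to O(n log n) sequentially.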

  5. Precipitation in partially stabilized zirconia

    Bansal, G.K.

    1975-01-01

    Transmission electron microscopy was used to study the substructure of partially stabilized ZrO₂ (PSZ) samples, i.e., two-phase systems containing both cubic and monoclinic modifications of zirconia, after various heat treatments. Monoclinic ZrO₂ exists as (1) isolated grains within the polycrystalline aggregate (a grain-boundary phase) and (2) small plate-like particles within cubic grains. These intragranular precipitates are believed to contribute to the useful properties of PSZ via a form of precipitation hardening. The precipitates initially form as tetragonal ZrO₂, with a habit plane parallel to the {100} matrix planes. The orientation relations between the tetragonal precipitates and the cubic matrix are {100}_matrix ∥ {100}_precipitate or (001)_precipitate, and ⟨100⟩_matrix ∥ ⟨100⟩_precipitate or [001]_precipitate. (U.S.)

  6. Introduction to parallel programming

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  7. Parallel computing works!

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  8. Partial dynamical systems, fell bundles and applications

    Exel, Ruy

    2017-01-01

    Partial dynamical systems, originally developed as a tool to study algebras of operators in Hilbert spaces, has recently become an important branch of algebra. Its most powerful results allow for understanding structural properties of algebras, both in the purely algebraic and in the C*-contexts, in terms of the dynamical properties of certain systems which are often hiding behind algebraic structures. The first indication that the study of an algebra using partial dynamical systems may be helpful is the presence of a grading. While the usual theory of graded algebras often requires gradings to be saturated, the theory of partial dynamical systems is especially well suited to treat nonsaturated graded algebras which are in fact the source of the notion of "partiality". One of the main results of the book states that every graded algebra satisfying suitable conditions may be reconstructed from a partial dynamical system via a process called the partial crossed product. Running in parallel with partial dynamica...

  9. Parallel Atomistic Simulations

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.

  10. Microcomputer data acquisition and control.

    East, T D

    1986-01-01

    In medicine and biology there are many tasks that involve routine, well-defined procedures. These tasks are ideal candidates for computerized data acquisition and control. As the performance of microcomputers rapidly increases and cost continues to go down, the temptation to automate the laboratory becomes great. To the novice computer user the choices of hardware and software are overwhelming, and sadly most computer salespersons are not at all familiar with real-time applications. If you want to bill your patients you have hundreds of packaged systems to choose from; however, if you want to do real-time data acquisition the choices are very limited and confusing. The purpose of this chapter is to provide the novice computer user with the basics needed to set up a real-time data acquisition system with the common microcomputers. This chapter will cover the following issues necessary to establish a real-time data acquisition and control system. Analysis of the research problem: definition of the problem; description of data and sampling requirements; cost/benefit analysis. Choice of microcomputer hardware and software: choice of microprocessor and bus structure; choice of operating system; choice of layered software. Digital data acquisition: parallel data transmission; serial data transmission; hardware and software available. Analog data acquisition: description of amplitude and frequency characteristics of the input signals; sampling theorem; specification of the analog-to-digital converter; hardware and software available; interface to the microcomputer. Microcomputer control: analog output; digital output; closed-loop control. Microcomputer data acquisition and control in the 21st century: what is in the future? High-speed digital medical equipment networks; medical decision making and artificial intelligence.
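The sampling-theorem constraint on the analog-to-digital stage mentioned above can be illustrated numerically. The rates below are hypothetical: a 1 kHz converter correctly captures a 60 Hz tone, while a 940 Hz tone aliases onto the same samples.

```python
import numpy as np

fs = 1000.0                          # hypothetical ADC sample rate, Hz
f_sig = 60.0                         # tone of interest; fs > 2 * f_sig as required
t = np.arange(0, 1.0, 1 / fs)        # one second of sample instants

x = np.sin(2 * np.pi * f_sig * t)
# A 940 Hz tone violates the Nyquist limit fs/2 and aliases onto 60 Hz:
# its samples are indistinguishable (up to sign) from the 60 Hz tone's.
x_alias = np.sin(2 * np.pi * (fs - f_sig) * t)
```

This is why the chapter insists on describing the frequency characteristics of the input signals before specifying the converter: an anti-aliasing filter must remove signal energy above fs/2, since no amount of post-processing can undo aliasing.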

  11. Domain decomposition methods and parallel computing

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from the discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite-dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset

  12. High temporal resolution functional MRI using parallel echo volumar imaging

    Rabrait, C.; Ciuciu, P.; Ribes, A.; Poupon, C.; Dehaine-Lambertz, G.; LeBihan, D.; Lethimonnier, F.; Le Roux, P.; Dehaine-Lambertz, G.

    2008-01-01

    Purpose: To combine parallel imaging with 3D single-shot acquisition (echo volumar imaging, EVI) in order to acquire high temporal resolution volumar functional MRI (fMRI) data. Materials and Methods: An improved EVI sequence was associated with parallel acquisition and field-of-view reduction in order to acquire a large brain volume in 200 msec. Temporal stability and functional sensitivity were increased through optimization of all imaging parameters and Tikhonov regularization of the parallel reconstruction. Two human volunteers were scanned with parallel EVI in a 1.5 T whole-body MR system while submitted to a slow event-related auditory paradigm. Results: Thanks to parallel acquisition, the EVI volumes display a low level of geometric distortions and signal losses. After removal of low-frequency drifts and physiological artifacts, activations were detected in the temporal lobes of both volunteers and voxel-wise hemodynamic response functions (HRF) could be computed. On these HRFs, different habituation behaviors in response to sentence repetition could be identified. Conclusion: This work demonstrates the feasibility of high temporal resolution 3D fMRI with parallel EVI. Combined with advanced estimation tools, this acquisition method should prove useful to measure neural activity timing differences or study the nonlinearities and non-stationarities of the BOLD response. (authors)
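Tikhonov regularization of a parallel reconstruction amounts to the penalized least-squares solve below. This is a generic sketch under the assumption of a linear encoding model, not the authors' exact implementation; E stands in for the coil-encoding matrix.

```python
import numpy as np

def tikhonov_solve(E, y, lam=1e-2):
    """Solve min_x ||E x - y||^2 + lam * ||x||^2 via the normal equations.
    E plays the role of the coil-encoding matrix in parallel reconstruction."""
    n = E.shape[1]
    return np.linalg.solve(E.conj().T @ E + lam * np.eye(n), E.conj().T @ y)
```

The regularization parameter lam trades noise amplification against bias: a larger lam tames the ill-conditioned directions of the encoding matrix (improving temporal stability, as in the abstract) at the cost of slightly blurring the solution.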

  13. Parallelization in Modern C++

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  14. Parallelism in matrix computations

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  15. A parallel buffer tree

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of...... in the optimal O(sort_P(N) + K/(PB)) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors....

  16. Parallel Algorithms and Patterns

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.

  17. Application Portable Parallel Library

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" here also includes heterogeneous collections of networked computers.) Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  18. 2017 NAIP Acquisition Map

    Farm Service Agency, Department of Agriculture — Planned States for 2017 NAIP acquisition and acquisition status layer (updated daily). Updates to the acquisition seasons may be made during the season to...

  19. Parallel preconditioning techniques for sparse CG solvers

    Basermann, A.; Reichel, B.; Schelthoff, C. [Central Institute for Applied Mathematics, Juelich (Germany)

    1996-12-31

    Conjugate gradient (CG) methods to solve sparse systems of linear equations play an important role in numerical methods for solving discretized partial differential equations. The large size and the condition of many technical or physical applications in this area result in the need for efficient parallelization and preconditioning techniques of the CG method. In particular for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy of CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iterations of the simply diagonally scaled CG and are shown to be well suited for massively parallel machines.
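The "simply diagonally scaled CG" baseline mentioned above corresponds to CG with a Jacobi (diagonal) preconditioner. A compact sketch, assuming a symmetric positive definite matrix; the polynomial and incomplete Cholesky variants in the abstract would replace the diagonal scaling step with a richer approximate inverse.

```python
import numpy as np

def pcg(A, b, M_inv_diag, iters=100, tol=1e-10):
    """Preconditioned conjugate gradient with a diagonal (Jacobi)
    preconditioner, i.e. the diagonally scaled CG baseline."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = M_inv_diag * r               # apply preconditioner: z = M^{-1} r
    p = z.copy()
    rz = r @ z
    for _ in range(iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Diagonal scaling parallelizes trivially (an elementwise product), whereas incomplete Cholesky requires triangular solves whose data dependencies make massively parallel implementation the hard part this record studies.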

  20. Data acquisition systems at Fermilab

    Votava, M.

    1999-01-01

    Experiments at Fermilab require an ongoing program of development for high speed, distributed data acquisition systems. The physics program at the lab has recently started the operation of a Fixed Target run in which experiments are running the DART [1] data acquisition system. The CDF and D0 experiments are preparing for the start of the next Collider run in mid-2000. Each will read out on the order of 1 million detector channels. In parallel, future experiments such as BTeV R&D and Minos have already started prototype and test beam work. BTeV in particular has challenging data acquisition system requirements, with an input rate of 1500 Gbytes/sec into Level 1 buffers and a logging rate of 200 Mbytes/sec. This paper will present a general overview of these data acquisition systems on three fronts: those currently in use, those to be deployed for the Collider run in 2000, and those proposed for future experiments. It will primarily focus on the CDF and D0 architectures and tools.

  1. Syntax acquisition.

    Crain, Stephen; Thornton, Rosalind

    2012-03-01

    Every normal child acquires a language in just a few years. By three or four years of age, children have effectively become adult-like in their ability to produce and understand endlessly many sentences in a variety of conversational contexts. There are two alternative accounts of the course of children's language development. These different perspectives can be traced back to the nature versus nurture debate about how knowledge is acquired in any cognitive domain. One perspective dates back to Plato's dialog 'The Meno'. In this dialog, the protagonist, Socrates, demonstrates to Meno, an aristocrat in Ancient Greece, that a young slave knows more about geometry than he could have learned from experience. By extension, Plato's Problem refers to any gap between experience and knowledge. How children fill in the gap in the case of language continues to be the subject of much controversy in cognitive science. Any model of language acquisition must address three factors, inter alia: 1. the knowledge children accrue; 2. the input children receive (often called the primary linguistic data); 3. the nonlinguistic capacities of children to form and test generalizations based on the input. According to the famous linguist Noam Chomsky, the main task of linguistics is to explain how children bridge the gap (Chomsky calls it a 'chasm') between what they come to know about language and what they could have learned from experience, even given optimistic assumptions about their cognitive abilities. Proponents of the alternative 'nurture' approach accuse nativists like Chomsky of overestimating the complexity of what children learn, underestimating the data children have to work with, and manifesting undue pessimism about children's abilities to extract information from the input. The modern 'nurture' approach is often referred to as the usage-based account. We discuss the usage-based account first, and then the nativist account. After that, we report and discuss the findings of several

  2. Parallel discrete event simulation

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  3. Parallel reservoir simulator computations

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90.

  4. Totally parallel multilevel algorithms

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  5. Parallel computing works

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  6. Massively parallel mathematical sieves

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
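    The decomposition idea is easy to see in serial form: compute the base primes up to √N once, then sieve independent blocks that, in the hypercube version, would be distributed across processing elements. A minimal Python sketch follows (illustrative only; the block size and the assignment of blocks to processors are assumptions, not details from the paper).

```python
import math

def small_sieve(limit):
    """Serial Sieve of Eratosthenes; supplies the base primes every block needs."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(limit) + 1):
        if flags[p]:
            flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
    return [i for i, f in enumerate(flags) if f]

def sieve_block(lo, hi, base):
    """Sieve the half-open block [lo, hi) using the base primes. Blocks share
    no state, so in the parallel scheme each processing element sieves its
    own block(s) independently."""
    flags = bytearray([1]) * (hi - lo)
    for p in base:
        start = max(p * p, ((lo + p - 1) // p) * p)   # first multiple of p >= lo
        for m in range(start, hi, p):
            flags[m - lo] = 0
    return [lo + i for i, f in enumerate(flags) if f and lo + i >= 2]

N = 1000
base = small_sieve(math.isqrt(N))
blocks = [(lo, min(lo + 100, N + 1)) for lo in range(0, N + 1, 100)]
# The loop below is the sequential simulation of the parallel step:
# every sieve_block call could run on a different processor.
primes = [p for lo, hi in blocks for p in sieve_block(lo, hi, base)]
```

    Only the small serial sieve is replicated; all the heavy marking work is embarrassingly parallel, which is why the paper's fixed-size-per-processor speedups approach the machine size.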

  7. Data acquisition system for SLD

    Sherden, D.J.

    1985-05-01

    This paper describes the data acquisition system planned for the SLD detector, which is being constructed for use with the SLAC Linear Collider (SLC). An exclusively FASTBUS front-end system is used together with a VAX-based host system. While the volume of data transferred does not challenge the bandwidth capabilities of FASTBUS, extensive use is made of the parallel processing capabilities allowed by FASTBUS to reduce the data to a size which can be handled by the host system. The low repetition rate of the SLC allows a relatively simple software-based trigger. The principal components and overall architecture of the hardware and software are described.

  8. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction

    Hollingsworth, Kieren Grant

    2015-11-01

    MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution are chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criterion. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse, then it is possible to use that sparsity to recover a high-definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review concludes by signposting other imaging acceleration techniques under present development, before a closing consideration of the potential impact of, and obstacles to, bringing compressed sensing into routine use in clinical MRI.
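    The core CS idea, recovering a sparse signal from undersampled Fourier data via an l1-regularized reconstruction, can be illustrated with a toy 1-D example using plain iterative soft thresholding (ISTA). This is a didactic sketch, not a clinical reconstruction: the signal length, sampling pattern, and regularization weight are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 128, 5, 80                 # signal length, sparsity, measurements

# Sparse "image": only k nonzero pixels (an angiogram-like signal).
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.uniform(1.0, 2.0, size=k)

# Randomly undersampled DFT: keep m of n k-space lines (unitary DFT matrix).
rows = rng.choice(n, size=m, replace=False)
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
A = F[rows]
y = A @ x_true                       # the undersampled "acquisition"

# ISTA: gradient step on ||Ax - y||^2 followed by soft thresholding,
# which enforces the sparsity prior.
x = np.zeros(n, dtype=complex)
lam = 1e-3
for _ in range(400):
    x = x + A.conj().T @ (y - A @ x)             # step size 1 (||A|| = 1 here)
    mag = np.abs(x)
    x = np.where(mag > lam, (1 - lam / np.maximum(mag, lam)) * x, 0)
x = x.real
```

    Simple zero-filling of the missing k-space lines would produce aliasing; the thresholding step is what lets far fewer than n samples reproduce the sparse signal.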

  9. Automated, parallel mass spectrometry imaging and structural identification of lipids

    Ellis, Shane R.; Paine, Martin R.L.; Eijkel, Gert B.

    2018-01-01

    We report a method that enables automated data-dependent acquisition of lipid tandem mass spectrometry data in parallel with a high-resolution mass spectrometry imaging experiment. The method does not increase the total image acquisition time and is combined with automatic structural assignments. This lipidome-per-pixel approach automatically identified and validated 104 unique molecular lipids and their spatial locations from rat cerebellar tissue.

  10. Partial Polarization in Interfered Plasmon Fields

    P. Martínez Vara

    2014-01-01

    We describe the polarization features of plasmon fields generated by interference between two elemental surface plasmon modes, obtaining a set of Stokes parameters that allows a parallel to be drawn with the traditional polarization model. With this analysis, we find the corresponding coherence matrix for plasmon fields, incorporating the study of partial-polarization effects into plasmon optics.
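    The parallel with traditional polarization theory can be made concrete: from samples of two orthogonal field components one forms the usual Stokes parameters and a degree of polarization. The sketch below is generic textbook optics in NumPy; the mode amplitudes and the decorrelating phase noise are invented for illustration, not taken from the paper.

```python
import numpy as np

def stokes(Ex, Ey):
    """Stokes parameters from samples of two orthogonal field components
    (here standing in for the two interfering plasmon modes)."""
    S0 = np.mean(np.abs(Ex) ** 2 + np.abs(Ey) ** 2)
    S1 = np.mean(np.abs(Ex) ** 2 - np.abs(Ey) ** 2)
    S2 = np.mean(2 * np.real(Ex * np.conj(Ey)))
    S3 = np.mean(2 * np.imag(Ex * np.conj(Ey)))
    return S0, S1, S2, S3

def degree_of_polarization(S0, S1, S2, S3):
    return np.sqrt(S1**2 + S2**2 + S3**2) / S0

t = np.linspace(0.0, 1.0, 2000)

# Coherent superposition of the two modes: fully polarized, P = 1.
Ex = np.exp(2j * np.pi * 5 * t)
Ey = 0.5j * np.exp(2j * np.pi * 5 * t)
P_coherent = degree_of_polarization(*stokes(Ex, Ey))

# Uncorrelated random phase between the modes: partially polarized, P < 1.
rng = np.random.default_rng(1)
Ey_noisy = Ey * np.exp(1j * rng.uniform(0, 2 * np.pi, t.size))
P_partial = degree_of_polarization(*stokes(Ex, Ey_noisy))
```

    Destroying the mutual coherence of the two modes drives the cross terms S2 and S3 toward zero, so the degree of polarization drops below one, which is the partial-polarization effect the abstract refers to.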

  11. Parallel data grabbing card based on PCI bus RS422

    Zhang Zhenghui; Shen Ji; Wei Dongshan; Chen Ziyu

    2005-01-01

    This article briefly introduces the development of a parallel data-grabbing card based on RS422 and the PCI bus. It can capture 14-bit parallel data at high speed from devices with an RS422 interface. The methods of data acquisition based on the PCI protocol, the functions and usage of the chips employed, and the principles of the hardware and software design are presented. (authors)

  12. Partial Discharge Monitoring on Metal-Enclosed Switchgear with Distributed Non-Contact Sensors

    Chongxing Zhang

    2018-02-01

    Metal-enclosed switchgear, which are widely used in the distribution of electrical energy, play an important role in power distribution networks. Their safe operation is directly related to the reliability of the power system as well as the power quality on the consumer side. Partial discharge (PD) detection is an effective way to identify potential faults and can be utilized for insulation diagnosis of metal-enclosed switchgear. The transient earth voltage (TEV) method, an effective non-intrusive method, has substantial engineering application value for estimating the insulation condition of switchgear. However, the practical effectiveness of TEV detection has been unsatisfactory for lack of an application method with sufficient technical grounding and analysis. This paper proposes an innovative online PD detection system and a corresponding application strategy based on an intelligent feedback distributed TEV wireless sensor network consisting of sensing, communication, and diagnosis layers. In the proposed system, the TEV signal or status data are wirelessly transmitted to the terminal following low-energy signal preprocessing and acquisition by TEV sensors. A central server then analyzes the correlation of the uploaded data and assigns a fault warning level according to quantity, trend, parallel analysis, and phase-resolved partial discharge pattern recognition. In this way, a TEV detection system and strategy with distributed acquisition, unitized fault warning, and centralized diagnosis is realized. The proposed system has positive significance for reducing the fault rate of medium voltage switchgear and improving its operation and maintenance level.

  13. Partial Discharge Monitoring on Metal-Enclosed Switchgear with Distributed Non-Contact Sensors.

    Zhang, Chongxing; Dong, Ming; Ren, Ming; Huang, Wenguang; Zhou, Jierui; Gao, Xuze; Albarracín, Ricardo

    2018-02-11


  14. Correction for Eddy Current-Induced Echo-Shifting Effect in Partial-Fourier Diffusion Tensor Imaging.

    Truong, Trong-Kha; Song, Allen W; Chen, Nan-Kuei

    2015-01-01

    In most diffusion tensor imaging (DTI) studies, images are acquired with either a partial-Fourier or a parallel partial-Fourier echo-planar imaging (EPI) sequence, in order to shorten the echo time and increase the signal-to-noise ratio (SNR). However, eddy currents induced by the diffusion-sensitizing gradients can often lead to a shift of the echo in k-space, resulting in three distinct types of artifacts in partial-Fourier DTI. Here, we present an improved DTI acquisition and reconstruction scheme, capable of generating high-quality and high-SNR DTI data without eddy current-induced artifacts. This new scheme consists of three components, respectively, addressing the three distinct types of artifacts. First, a k-space energy-anchored DTI sequence is designed to recover eddy current-induced signal loss (i.e., Type 1 artifact). Second, a multischeme partial-Fourier reconstruction is used to eliminate artificial signal elevation (i.e., Type 2 artifact) associated with the conventional partial-Fourier reconstruction. Third, a signal intensity correction is applied to remove artificial signal modulations due to eddy current-induced erroneous T2*-weighting (i.e., Type 3 artifact). These systematic improvements will greatly increase the consistency and accuracy of DTI measurements, expanding the utility of DTI in translational applications where quantitative robustness is much needed.
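    The premise of partial-Fourier imaging, and the reason an echo shift is so damaging, is that the k-space of a real-valued image is conjugate-symmetric, so the unmeasured half can be synthesized from the measured half. A minimal 1-D sketch follows; it is idealized (a real phantom with no phase errors), whereas the eddy currents discussed above violate exactly this assumption.

```python
import numpy as np

n = 64
x = np.zeros(n)
x[20:44] = 1.0                      # real-valued "image" (a box phantom)

k = np.fft.fft(x)                   # full k-space (never fully measured)

# Partial-Fourier acquisition: keep only frequencies 0..n//2 (just over half).
measured = np.zeros(n, dtype=complex)
measured[: n // 2 + 1] = k[: n // 2 + 1]

# Naive zero-filled reconstruction: blurred and phase-contaminated.
zero_filled = np.fft.ifft(measured)

# Conjugate-symmetry reconstruction: for a real image S(-k) = conj(S(k)),
# so the missing half of k-space is filled in from the measured half.
filled = measured.copy()
filled[n // 2 + 1 :] = np.conj(measured[1 : n // 2][::-1])
recon = np.fft.ifft(filled).real
```

    An eddy-current-induced echo shift moves signal energy out of the measured half and breaks the symmetry, which is what produces the three artifact types the paper classifies and corrects.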

  15. Algorithms for parallel computers

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently, almost all algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)

  16. Parallelism and array processing

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  17. 48 CFR 49.109-5 - Partial settlements.

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Partial settlements. 49... MANAGEMENT TERMINATION OF CONTRACTS General Principles 49.109-5 Partial settlements. The TCO should attempt... settlements covering particular items of the prime contractor's settlement proposal. However, when a TCO...

  18. Survival of the Partial Reinforcement Extinction Effect after Contextual Shifts

    Boughner, Robert L.; Papini, Mauricio R.

    2006-01-01

    The effects of contextual shifts on the partial reinforcement extinction effect (PREE) were studied in autoshaping with rats. Experiment 1 established that the two contexts used subsequently were easily discriminable and equally salient. In Experiment 2, independent groups of rats received acquisition training under partial reinforcement (PRF) or…

  19. The Future of Additive Manufacturing in Air Force Acquisition

    2017-03-22

    heal, if not cure, the acquisition woes and financial death spiral.7 Additive Manufacturing as a Partial Solution to Acquisition Woes: “As we... methodologies.”7 Additive manufacturing starts with a 3D model, derived either from a 3D scanner or created via software such as a Computer Aided Design (CAD

  20. The Civil Defense Acquisition Workforce: Enhancing Recruitment Through Hiring Flexibilities

    2016-11-22

    20 Other Aspects of Acquisition Workforce Improvement ... 21 Pay Flexibilities...a subset of civilian acquisition hires (external hires) and may contain some counting discrepancies. These limitations might be partially...potential discrepancies with department-level guidance. DOD has taken steps to encourage better use of hiring flexibilities department-wide. The USD(AT&L

  1. [First language acquisition research and theories of language acquisition].

    Miller, S; Jungheim, M; Ptok, M

    2014-04-01

    In principle, a child can seemingly easily acquire any given language. First language acquisition follows a certain pattern which to some extent is found to be language independent. Since time immemorial, it has been of interest why children are able to acquire language so easily. Different disciplinary and methodological orientations addressing this question can be identified. A selective literature search in PubMed and Scopus was carried out and relevant monographs were considered. Different, partially overlapping phases can be distinguished in language acquisition research: whereas in ancient times, deprivation experiments were carried out to discover the "original human language", the era of diary studies began in the mid-19th century. From the mid-1920s onwards, behaviouristic paradigms dominated this field of research; interests were focussed on the determination of normal, average language acquisition. The subsequent linguistic period was strongly influenced by the nativist view of Chomsky and the constructivist concepts of Piaget. Speech comprehension, the role of speech input and the relevance of genetic disposition became the centre of attention. The interactionist concept led to a revival of the convergence theory according to Stern. Each of these four major theories--behaviourism, cognitivism, interactionism and nativism--has given valuable and unique impulses, but no single theory is universally accepted to provide an explanation of all aspects of language acquisition. Moreover, it can be critically questioned whether clinicians consciously refer to one of these theories in daily routine work and whether therapies are then based on this concept. It remains to be seen whether or not new theories of grammar, such as the so-called construction grammar (CxG), will eventually change the general concept of language acquisition.

  2. Speed in Acquisitions

    Meglio, Olimpia; King, David R.; Risberg, Annette

    2017-01-01

    The advantage of speed is often invoked by academics and practitioners as an essential condition during post-acquisition integration, frequently without consideration of the impact earlier decisions have on acquisition speed. In this article, we examine the role speed plays in acquisitions across the acquisition process, using research organized around characteristics that display complexity with respect to acquisition speed. We incorporate existing research with a process perspective of acquisitions in order to present trade-offs, and consider the influence of both stakeholders and the pre-deal-completion context on acquisition speed, as well as the organization's capabilities to facilitate that speed. Observed trade-offs suggest both that acquisition speed often requires longer planning time before an acquisition and that associated decisions require managerial judgement. A framework for improving...

  3. The Primordial Soup Algorithm : a systematic approach to the specification of parallel parsers

    Janssen, Wil; Janssen, W.P.M.; Poel, Mannes; Sikkel, Nicolaas; Zwiers, Jakob

    1992-01-01

    A general framework for parallel parsing is presented, which allows for a unified, systematic approach. The Primordial Soup Algorithm creates trees by allowing partial parse trees to combine arbitrarily. By adding constraints to the general algorithm, a large class of parallel
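    The flavor of the algorithm can be conveyed with a toy sketch: parse items combine with any adjacent item, in any order, until a fixed point is reached. The grammar and sentence below are invented for illustration; the paper's framework additionally covers the constraints that specialize this "soup" into particular parallel parsers.

```python
# Toy binary grammar: (left category, right category) -> parent category.
rules = {("NP", "VP"): "S", ("Det", "N"): "NP", ("V", "NP"): "VP"}
words = [("Det", "the"), ("N", "cat"), ("V", "saw"), ("Det", "a"), ("N", "dog")]

# An item (category, i, j) is a partial parse tree covering words i..j-1.
# The soup starts with the lexical items; each combination step depends only
# on two existing items, so steps can proceed concurrently and in any order.
soup = {(cat, i, i + 1) for i, (cat, _) in enumerate(words)}
changed = True
while changed:
    changed = False
    for (a, i, j) in list(soup):
        for (b, j2, k) in list(soup):
            if j == j2 and (a, b) in rules:     # adjacent items that a rule joins
                item = (rules[(a, b)], i, k)
                if item not in soup:
                    soup.add(item)
                    changed = True
```

    A complete parse is an S item spanning the whole sentence; because combinations are order-independent, the same fixed point is reached however the work is distributed.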

  4. The STAPL Parallel Graph Library

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  5. Language Acquisition without an Acquisition Device

    O'Grady, William

    2012-01-01

    Most explanatory work on first and second language learning assumes the primacy of the acquisition phenomenon itself, and a good deal of work has been devoted to the search for an "acquisition device" that is specific to humans, and perhaps even to language. I will consider the possibility that this strategy is misguided and that language…

  6. LAMPF nuclear chemistry data acquisition system

    Giesler, G.C.

    1983-01-01

    The LAMPF Nuclear Chemistry Data Acquisition System (DAS) is designed to provide both real-time control of data acquisition and facilities for data processing for a large variety of users. It consists of a PDP-11/44 connected to a parallel CAMAC branch highway as well as to a large number of peripherals. The various types of radiation counters and spectrometers and their connections to the system will be described. Also discussed will be the various methods of connection considered and their advantages and disadvantages. The operation of the system from the standpoint of both hardware and software will be described, as well as plans for the future.

  7. The HyperCP data acquisition system

    Kaplan, D.M.

    1997-06-01

    For the HyperCP experiment at Fermilab, we have assembled a data acquisition system that records on up to 45 Exabyte 8505 tape drives in parallel at up to 17 MB/s. During the beam spill, data are acquired from the front-end digitization systems at ∼60 MB/s via five parallel data paths. The front-end systems achieve a typical readout deadtime of ∼1 μs per event, allowing operation at a 75-kHz trigger rate with ≲30% deadtime. Event building and tapewriting are handled by 15 Motorola MVME167 processors in 5 VME crates.

  8. Massively parallel multicanonical simulations

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are computationally inherently serial. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with on the order of 10⁴ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.

  9. Calo trigger acquisition system

    Franchini, Matteo

    2016-01-01

    Calo trigger acquisition system - evolution of the acquisition system from a multiple-board system (upper, orange cables) to a single-board one (below, light blue cables), where all the channels are collected on a single board.

  10. Modelling live forensic acquisition

    Grobler, MM

    2009-06-01

    This paper discusses the development of a South African model for Live Forensic Acquisition - Liforac. The Liforac model is a comprehensive model that presents a range of aspects related to Live Forensic Acquisition. The model provides forensic...

  11. Playing at Serial Acquisitions

    J.T.J. Smit (Han); T. Moraitis (Thras)

    2010-01-01

    Behavioral biases can result in suboptimal acquisition decisions, with the potential for errors exacerbated in consolidating industries, where consolidators design serial acquisition strategies and fight escalating takeover battles for platform companies that may determine their future

  12. Pattern recognition with parallel associative memory

    Toth, Charles K.; Schenk, Toni

    1990-01-01

    An examination is conducted of the feasibility of searching for targets in aerial photographs by means of a parallel associative memory (PAM) based on the nearest-neighbor algorithm; the Hamming distance is used as the measure of closeness for discriminating patterns. Attention has been given to targets typically used for ground-control points. During the data-acquisition process, the method developed sorts out approximate target positions where precise localization is needed. The majority of control points in different images were correctly identified.
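    The recall step is simply nearest-neighbor matching under Hamming distance; in the PAM, the distances to all stored patterns are computed in parallel. A minimal NumPy stand-in follows (pattern sizes and the corruption level are invented for illustration; vectorization plays the role of the hardware parallelism).

```python
import numpy as np

def pam_recall(memory, probe):
    """Nearest-neighbor associative recall: return the index of the stored
    binary pattern with the smallest Hamming distance to the probe.
    The XOR-and-count over all rows happens at once, mimicking the PAM's
    parallel distance computation."""
    distances = np.count_nonzero(memory != probe, axis=1)
    return int(np.argmin(distances)), distances

rng = np.random.default_rng(7)
memory = rng.integers(0, 2, size=(8, 64))   # 8 stored 64-bit target templates

probe = memory[3].copy()
probe[:5] ^= 1                               # corrupt 5 of 64 bits

best, d = pam_recall(memory, probe)          # recovers the stored template
```

    Because random 64-bit templates differ in roughly 32 bits on average, a probe corrupted in only a few bits is still unambiguously closest to its own template, which is what makes the scheme robust for locating ground-control targets.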

  13. SPINning parallel systems software

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes, and the connections among them, are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  14. Parallel programming with Python

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.

  15. Mergers and Acquisitions

    Frasch, Manfred; Leptin, Maria

    2000-01-01

    Mergers and acquisitions (M&As) are booming as a strategy of choice for organizations attempting to maintain a competitive advantage. Previous research on mergers and acquisitions indicates that acquirers do not normally benefit from acquisitions. Targets, on the other hand, have a tendency to gain positive returns in the few days surrounding merger announcements, due to several characteristics of the acquisition deal. The announcement-period wealth effect on acquiring firms, however, is as cle...

  16. High spatial resolution whole-body MR angiography featuring parallel imaging: initial experience

    Quick, H.H.; Vogt, F.M.; Maderwald, S.; Herborn, C.U.; Bosk, S.; Goehde, S.; Debatin, J.F.; Ladd, M.E.

    2004-01-01

    Materials and methods: Whole-body multi-station MRA was performed with a rolling table platform (AngioSURF) on 5 volunteers in two imaging series: 1) a standard imaging protocol, and 2) a modified high-resolution protocol employing PAT using the generalized autocalibrating partially parallel acquisitions (GRAPPA) algorithm with an acceleration factor of 3. For an intra-individual comparison of the two MR examinations, the arterial vasculature was divided into 30 segments. Signal-to-noise ratios (SNR) and contrast-to-noise ratios (CNR) were calculated for all 30 arterial segments of each subject. Vessel segment depiction was qualitatively assessed by applying a 5-point scale to each of the segments. Image reconstruction times were recorded for the standard as well as the PAT protocol. Results: Compared to the standard protocol, PAT allowed for increased spatial resolution through a 3-fold reduction in mean voxel size for each of the 5 stations. Mean SNR and CNR values over all specified vessel segments decreased by factors of 1.58 and 1.56, respectively. Despite the reduced SNR and CNR, the depiction of all specified vessel segments improved in PAT images, reflecting the increased spatial resolution. Qualitative comparison of standard and PAT images showed an increase in vessel segment conspicuity, with more detailed depiction of intramuscular arterial branches in all volunteers. The time for image data reconstruction of all 5 stations increased significantly, from about 10 minutes to 40 minutes, when using the PAT acquisition. (orig.)

  17. Research in Parallel Algorithms and Software for Computational Aerosciences

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, the SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol, for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
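
    The observation that partially parallelized and serial tasks dominate wall-clock time is Amdahl's law in action. A small sketch (function names are illustrative, not from the paper) inverting Amdahl's law to see what the reported 2.4x speedup on 8 processors implies:

```python
def amdahl_speedup(parallel_frac, n_procs):
    """Amdahl's law: overall speedup on n processors when a fraction
    parallel_frac of the work parallelizes perfectly."""
    return 1.0 / ((1.0 - parallel_frac) + parallel_frac / n_procs)

def parallel_fraction(speedup, n_procs):
    """Invert Amdahl's law: estimate the parallelizable fraction from
    an observed speedup on n processors."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n_procs)

# A 2.4x speedup on 8 processors is consistent with only about two
# thirds of the wall-clock work being parallelizable.
frac = parallel_fraction(2.4, 8)  # = 2/3 exactly
```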

  18. Partial tooth gear bearings

    Vranish, John M. (Inventor)

    2010-01-01

    A partial gear bearing including an upper half, comprising peak partial teeth, and a lower, or bottom, half, comprising valley partial teeth. The upper half also has an integrated roller section between each of the peak partial teeth with a radius equal to the gear pitch radius of the radially outwardly extending peak partial teeth. Conversely, the lower half has an integrated roller section between each of the valley half teeth with a radius also equal to the gear pitch radius of the peak partial teeth. The valley partial teeth extend radially inwardly from its roller section. The peak and valley partial teeth are exactly out of phase with each other, as are the roller sections of the upper and lower halves. Essentially, the end roller bearing of the typical gear bearing has been integrated into the normal gear tooth pattern.

  19. Expressing Parallelism with ROOT

    Piparo, D. [CERN]; Tejedor, E. [CERN]; Guiraud, E. [CERN]; Ganis, G. [CERN]; Mato, P. [CERN]; Moneta, L. [CERN]; Valls Pla, X. [CERN]; Canal, P. [Fermilab]

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
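
    The abstract compares the MultiProc framework with Python's multiprocessing module. A minimal sketch of that comparison point, using the standard-library multiprocessing pool rather than ROOT's own API (the analyze function is a hypothetical stand-in for per-event work):

```python
from multiprocessing import Pool

def analyze(event):
    """Hypothetical per-event analysis; a stand-in for the work a
    MultiProc-style map interface distributes over workers."""
    return event * event

if __name__ == "__main__":
    events = list(range(16))
    # Map the analysis over a pool of worker processes -- the same
    # map-style pattern that MultiProc exposes for ROOT workloads.
    with Pool(processes=4) as pool:
        results = pool.map(analyze, events)
```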

  20. Expressing Parallelism with ROOT

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  1. Parallel Fast Legendre Transform

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were

  2. Practical parallel programming

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  3. Parallel hierarchical radiosity rendering

    Carter, Michael [Iowa State Univ., Ames, IA (United States)]

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  4. Parallel universes beguile science

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  5. Parallel k-means++

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
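
    The D²-weighted seeding described above can be sketched in a few lines. This is a plain serial reference implementation of k-means++ seed selection (one-dimensional points for brevity), not the paper's CUDA/OpenMP/XMT code:

```python
import random

def kmeans_pp_seeds(points, k, seed=0):
    """k-means++ seeding: pick the first center uniformly at random,
    then pick each further center with probability proportional to its
    squared distance D^2 to the nearest center chosen so far."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # Squared distance from each point to its nearest center.
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

# Three well-separated groups; D^2 weighting tends to pick one seed
# from each group.
points = [0.0, 0.1, 0.2, 10.0, 10.1, 20.0]
seeds = kmeans_pp_seeds(points, 3)
```

    The repeated d2 recomputation over all points is the expensive, embarrassingly parallel step that the parallel versions map onto the GPU, multicore CPU, and XMT architectures.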

  6. Parallel plate detectors

    Gardes, D.; Volkov, P.

    1981-01-01

    A 5×3 cm² (timing only) and a 15×5 cm² (timing and position) parallel plate avalanche counter (PPAC) are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters

  7. Parallel hierarchical global illumination

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)]

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  8. Data acquisition for the D0 experiment

    Cutts, D.; Hoftun, J.S.; Johnson, C.R.; Zeller, R.T.; Trojak, T.; Van Berg, R.

    1985-01-01

    We describe the data acquisition system for the D0 experiment at Fermilab, focusing primarily on the second level, which is based on a large parallel array of MicroVAX-II's. In this design, data flows from the detector readout crates at a maximum rate of 320 Mbytes/sec into dual-port memories associated with one selected processor, in which a VAXELN-based program performs the filter analysis of a complete event.

  9. The UA1 VME data acquisition system

    Cittolin, S.

    1988-01-01

    The data acquisition system of a large-scale experiment such as UA1, running at the CERN proton-antiproton collider, has to cope with very high data rates and to perform sophisticated triggering and filtering in order to analyze interesting events. These functions are performed by a variety of programmable units organized in a parallel multiprocessor system whose central architecture is based on the industry-standard VME/VMXbus. (orig.)

  10. Essays on partial retirement

    Kantarci, T.

    2012-01-01

    The five essays in this dissertation address a range of topics in the micro-economic literature on partial retirement. The focus is on the labor market behavior of older age groups. The essays examine the economic and non-economic determinants of partial retirement behavior, the effect of partial

  11. Smart acquisition EELS

    Sader, Kasim; Schaffer, Bernhard; Vaughan, Gareth; Brydson, Rik; Brown, Andy; Bleloch, Andrew

    2010-01-01

    We have developed a novel acquisition methodology for the recording of electron energy loss spectra (EELS) using a scanning transmission electron microscope (STEM): 'Smart Acquisition'. Smart Acquisition allows the independent control of probe scanning procedures and the simultaneous acquisition of analytical signals such as EELS. The original motivation for this work arose from the need to control the electron dose experienced by beam-sensitive specimens whilst maintaining a sufficiently high signal-to-noise ratio in the EEL signal for the extraction of useful analytical information (such as energy loss near edge spectral features) from relatively undamaged areas. We have developed a flexible acquisition framework which separates beam position data input, beam positioning, and EELS acquisition. In this paper we demonstrate the effectiveness of this technique on beam-sensitive thin films of amorphous aluminium trifluoride. Smart Acquisition has been used to expose lines to the electron beam, followed by analysis of the structures created by line-integrating EELS acquisitions, and the results are compared to those derived from a standard EELS linescan. High angle annular dark-field images show clear reductions in damage for the Smart Acquisition areas compared to the conventional linescan, and the Smart Acquisition low loss EEL spectra are more representative of the undamaged material than those derived using a conventional linescan. Atomically resolved EELS of all four elements of CaNdTiO show the high resolution capabilities of Smart Acquisition.

  12. Smart acquisition EELS

    Sader, Kasim, E-mail: k.sader@leeds.ac.uk [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Schaffer, Bernhard [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Department of Physics and Astronomy, University of Glasgow (United Kingdom); Vaughan, Gareth [Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Brydson, Rik [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Brown, Andy [Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Bleloch, Andrew [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Department of Engineering, University of Liverpool, Liverpool (United Kingdom)

    2010-07-15

    We have developed a novel acquisition methodology for the recording of electron energy loss spectra (EELS) using a scanning transmission electron microscope (STEM): 'Smart Acquisition'. Smart Acquisition allows the independent control of probe scanning procedures and the simultaneous acquisition of analytical signals such as EELS. The original motivation for this work arose from the need to control the electron dose experienced by beam-sensitive specimens whilst maintaining a sufficiently high signal-to-noise ratio in the EEL signal for the extraction of useful analytical information (such as energy loss near edge spectral features) from relatively undamaged areas. We have developed a flexible acquisition framework which separates beam position data input, beam positioning, and EELS acquisition. In this paper we demonstrate the effectiveness of this technique on beam-sensitive thin films of amorphous aluminium trifluoride. Smart Acquisition has been used to expose lines to the electron beam, followed by analysis of the structures created by line-integrating EELS acquisitions, and the results are compared to those derived from a standard EELS linescan. High angle annular dark-field images show clear reductions in damage for the Smart Acquisition areas compared to the conventional linescan, and the Smart Acquisition low loss EEL spectra are more representative of the undamaged material than those derived using a conventional linescan. Atomically resolved EELS of all four elements of CaNdTiO show the high resolution capabilities of Smart Acquisition.

  13. An original approach to data acquisition CHADAC

    CERN. Geneva

    1981-01-01

    Many labs try to boost existing data acquisition systems by inserting high-performance intelligent devices at the important nodes of the system's structure. This strategy finds its limits in the system's architecture. The CHADAC project proposes a simple and efficient solution to this problem, using a multiprocessor modular architecture. CHADAC's main features are: parallel acquisition of data (CHADAC is fast: it dedicates one processor per branch, and each processor can read and store one 16-bit word in 800 ns); an original structure (each processor can work in its own private memory, in its own shared memory (double access) and in the shared memory of any other processor; simple and fast communications between processors are also provided by local DMAs); flexibility (each processor is autonomous and may be used as an independent acquisition system for a branch by connecting local peripherals to it; adjunction of fast trigger logic is possible). By its architecture and performance, CHADAC is designed to provide a g...

  14. Recurrent Partial Words

    Francine Blanchet-Sadri

    2011-08-01

    Partial words are sequences over a finite alphabet that may contain wildcard symbols, called holes, which match or are compatible with all letters; partial words without holes are said to be full words (or simply words). Given an infinite partial word w, the number of distinct full words over the alphabet that are compatible with factors of w of length n, called subwords of w, is a measure of the complexity of infinite partial words, the so-called subword complexity. This measure is of particular interest because we can construct partial words with subword complexities not achievable by full words. In this paper, we consider the notion of recurrence over infinite partial words; that is, we study whether all of the finite subwords of a given infinite partial word appear infinitely often, and we establish connections between subword complexity and recurrence in this more general framework.
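
    As an illustration of the subword notion, here is a sketch written for this summary (with '?' as the hole symbol; not code from the paper) that enumerates the full words compatible with the length-n factors of a partial word:

```python
from itertools import product

def subwords(w, n, alphabet="ab", hole="?"):
    """All full words of length n compatible with some length-n factor
    of the partial word w; the hole symbol matches every letter."""
    found = set()
    for i in range(len(w) - n + 1):
        factor = w[i:i + n]
        # Expand each hole into every possible letter of the alphabet.
        for fill in product(alphabet, repeat=factor.count(hole)):
            it = iter(fill)
            found.add("".join(next(it) if c == hole else c for c in factor))
    return found

# The hole gives "a?b" three distinct subwords of length 2 (aa, ab, bb);
# a full word of length 3 has at most two factors of length 2, so no
# full word of that length can match this complexity.
subs = subwords("a?b", 2)
```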

  15. Parallel asynchronous systems and image processing algorithms

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  16. Parallelizing the spectral transform method: A comparison of alternative parallel algorithms

    Foster, I.; Worley, P.H.

    1993-01-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on the sphere and is widely used in global climate modeling. In this paper, we outline different approaches to parallelizing the method and describe experiments that we are conducting to evaluate the efficiency of these approaches on parallel computers. The experiments are conducted using a testbed code that solves the nonlinear shallow water equations on a sphere, but are designed to permit evaluation in the context of a global model. They allow us to evaluate the relative merits of the approaches as a function of problem size and number of processors. The results of this study are guiding ongoing work on PCCM2, a parallel implementation of the Community Climate Model developed at the National Center for Atmospheric Research

  17. Ultrascalable petaflop parallel supercomputer

    Blumrich, Matthias A [Ridgefield, CT]; Chen, Dong [Croton On Hudson, NY]; Chiu, George [Cross River, NY]; Cipolla, Thomas M [Katonah, NY]; Coteus, Paul W [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Hall, Shawn [Pleasantville, NY]; Haring, Rudolf A [Cortlandt Manor, NY]; Heidelberger, Philip [Cortlandt Manor, NY]; Kopcsay, Gerard V [Yorktown Heights, NY]; Ohmacht, Martin [Yorktown Heights, NY]; Salapura, Valentina [Chappaqua, NY]; Sugavanam, Krishnan [Mahopac, NY]; Takken, Todd [Brewster, NY]

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  18. More parallel please

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    More parallel, please is the result of the work of an inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give

  19. PARALLEL MOVING MECHANICAL SYSTEMS

    Florian Ion Tiberius Petrescu

    2014-09-01

    Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, the Stewart platform stands out as the oldest, being fast, solid and precise. The work outlines a few main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform mechanism. The kinematics of the mobile platform are then recorded by a rotation-matrix method. If a structural motoelement consists of two moving elements which translate relative to each other, it is more convenient, for the drive train and especially for the dynamics, to represent the motoelement as a single moving component. We thus have seven moving parts (the six motoelements, or feet, plus the mobile platform as the seventh) and one fixed part.

  20. Xyce parallel electronic simulator.

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  1. Stability of parallel flows

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  2. Algorithmically specialized parallel computers

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing, and specialized architectures for numerical computations, are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  3. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, in particular, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm consisting of the minimization of a cost function, achieved by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, our algorithm is of a parallel Jacobi class with alternated minimizations. This strategy is known as the chessboard type, where red pixels can be updated in parallel in the same iteration since they are independent. Similarly, black pixels can be updated in parallel in the alternate iteration. We present parallel implementations of our algorithm for different parallel multicore architectures: multicore CPU, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, we obtain superior performance of our parallel algorithm when compared with the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
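
    The chessboard scheme itself is easy to illustrate on a model problem. Below is a sketch, written for this summary, that applies alternating red/black half-sweeps to the Laplace equation rather than to the authors' residual-map cost function; each half-sweep updates only mutually independent pixels, which is what makes it parallelizable:

```python
import numpy as np

def checkerboard_sweep(u, color):
    """Update every interior pixel of one chessboard color to the mean
    of its four neighbours. Same-color pixels share no neighbours, so
    all updates in a half-sweep can run in parallel."""
    i, j = np.indices(u.shape)
    mask = (i + j) % 2 == color
    mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False  # fixed boundary
    avg = np.zeros_like(u)
    avg[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                              + u[1:-1, :-2] + u[1:-1, 2:])
    u[mask] = avg[mask]

# Dirichlet problem: top edge held at 1, other edges at 0.
u = np.zeros((8, 8))
u[0, :] = 1.0
for _ in range(200):
    checkerboard_sweep(u, 0)  # red half-sweep
    checkerboard_sweep(u, 1)  # black half-sweep
```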

  4. Front-end data processing the SLD data acquisition system

    Nielsen, B.S.

    1986-07-01

    The data acquisition system for the SLD detector will make extensive use of parallel processing at the front-end level. Fastbus acquisition modules are being built with powerful processing capabilities for calibration, data reduction and further pre-processing of the large amount of analog data handled by each module. This paper describes the read-out electronics chain and the data pre-processing system adapted for most of the detector channels, exemplified by the central drift chamber waveform digitization and processing system.

  5. Fast image processing on parallel hardware

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, such as CT/MR image reconstruction or real-time DSA subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on how to find the appropriate algorithms. Finally, some results concerning the computation time and the usefulness of median filtering in radiographic imaging are given.
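
    A naive reference version of the filter makes the per-pixel independence, and hence the parallelism, explicit. This is a sketch written for this summary, not the hardware implementation described in the paper:

```python
import numpy as np

def median_filter_2d(img, k=3):
    """2-D median filter: each output pixel is the median (a rank-order
    statistic) of the k x k window around it, with reflected edges.
    Every pixel is computed independently, so the loop bodies could
    run fully in parallel."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.median(padded[r:r + k, c:c + k])
    return out

# A single-pixel impulse in a flat image is removed completely, the
# classic behaviour that distinguishes median from mean filtering.
img = np.zeros((5, 5))
img[2, 2] = 100.0
clean = median_filter_2d(img)
```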

  6. Resistor Combinations for Parallel Circuits.

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
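
    The whole-number combinations such tables are built from follow directly from the reciprocal formula 1/R_total = 1/R1 + 1/R2 + ... A small sketch using exact rational arithmetic (the function name is illustrative, not from the article):

```python
from fractions import Fraction

def parallel_resistance(*resistors):
    """Total resistance of resistors in parallel, computed exactly:
    1/R_total = sum of 1/R_i over all branches."""
    return 1 / sum(Fraction(1, r) for r in resistors)

# Combinations whose parallel total comes out to a whole number:
assert parallel_resistance(6, 3) == 2        # 1/6 + 1/3 = 1/2
assert parallel_resistance(12, 6, 4) == 2    # 1/12 + 1/6 + 1/4 = 1/2
```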

  7. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    M. K. Bouza

    2017-01-01

    The object of research is the tooling to support the development of parallel programs in C/C++. Methods and software which automate the process of designing parallel applications are proposed.

  8. Parallel External Memory Graph Algorithms

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  9. Implementing parallel elliptic solver on a Beowulf cluster

    Marcin Paprzycki

    1999-12-01

    In a recent paper [zara], a parallel direct solver for the linear systems arising from elliptic partial differential equations was proposed. The aim of this note is to present an initial evaluation of the performance characteristics of this algorithm on a Beowulf-type cluster. In this context, the performance of PVM- and MPI-based implementations is compared.

  10. Parallel inter channel interaction mechanisms

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Parallel channel interactions are examined. Results of experimental research on nonstationary flow regimes in three parallel vertical channels are presented, together with an analysis of the phenomena and the mechanisms of parallel channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow. (author)

  11. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  12. Massively Parallel QCD

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  13. A Parallel Butterfly Algorithm

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Hyperbolic Radon transforms using quasi-uniform sources and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  14. A Parallel Butterfly Algorithm

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Hyperbolic Radon transforms using quasi-uniform sources and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  15. Fast parallel event reconstruction

    CERN. Geneva

    2010-01-01

    On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...

  16. Acquisition Research Program Homepage

    2015-01-01

    Includes an image of the main page on this date and compressed file containing additional web pages. Established in 2003, Naval Postgraduate School’s (NPS) Acquisition Research Program provides leadership in innovation, creative problem solving and an ongoing dialogue, contributing to the evolution of Department of Defense acquisition strategies.

  17. Making Acquisition Measurable

    2011-04-30

    Corporation. All rights reserved. End Users: Administrator/Maintainer (A/M), Subject Matter Expert (SME), Trainer/Instructor, Manager, Evaluator, Supervisor... (CMMI) - Acquisition (AQ)... CMMI-Development: incremental iterative development (planning & execution)... objectives. Constructing games highlighting particular aspects of proposed CCOD® acquisition, and conducting exercises with Subject Matter Experts (SMEs

  18. An embedded control and acquisition system for multichannel detectors

    Gori, L.; Tommasini, R.; Cautero, G.; Giuressi, D.; Barnaba, M.; Accardo, A.; Carrato, S.; Paolucci, G.

    1999-01-01

    We present a pulse-counting multichannel data acquisition system characterized by a high number of high-speed acquisition channels and by a modular, embedded system architecture. The former leads to very fast acquisitions and allows sequences of snapshots to be obtained for the study of time-dependent phenomena. The latter, thanks to the integration of a CPU into the system, provides high computational capability, so that interfacing with the user computer is very simple and user friendly. Moreover, the user computer is freed from control and acquisition tasks. The system has been developed for one of the beamlines of the third-generation synchrotron radiation source ELETTRA and, because of its modular architecture, can be useful in various other kinds of experiments where parallel acquisition, high data rates, and user friendliness are required. First experimental results on a double-pass hemispherical electron analyser provided with a 96-channel detector confirm the validity of the approach. (author)

  19. Mergers and Acquisitions

    Risberg, Annette

    Introduction to the study of mergers and acquisitions. This book provides an understanding of the mergers and acquisitions process, how and why they occur, and also the broader implications for organizations. It presents issues including motives and planning, partner selection, integration......, employee experiences and communication. Mergers and acquisitions remain one of the most common forms of growth, yet they present considerable challenges for the companies and management involved. The effects on stakeholders, including shareholders, managers and employees, must be considered as well...... by editorial commentaries and reflects the important organizational and behavioural aspects which have often been ignored in the past. By providing this in-depth understanding of the mergers and acquisitions process, the reader understands not only how and why mergers and acquisitions occur, but also...

  20. Data Acquisition System

    Cirstea, C.D.; Buda, S.I.; Constantin, F.

    2005-01-01

    This paper deals with a multiparametric acquisition system developed for a four-input Analog to Digital Converter working in the CAMAC standard. The acquisition software is built in MS Visual C++ on a standard PC with a USB interface. It has a visual interface which permits Start/Stop of the acquisition, setting the type of acquisition (True/Live time), the time, and various menus for primary data acquisition. The spectrum is dynamically visualized with a moving cursor indicating the content and position. The microcontroller PIC16C765 is used for data transfer from the ADC to the PC. The microcontroller and the software create an embedded system which emulates the CAMAC protocol, programming the 4-input ADC for its operating modes ('zero suppression', 'addressed' and 'sequential') and handling the data transfers from the ADC to its internal memory. From its memory the data are transferred to the PC over the USB interface. The work is in progress. (authors)

  1. Data acquisition system

    Cirstea, D.C.; Buda, S.I.; Constantin, F.

    2005-01-01

    The topic of this paper is a multiparametric acquisition system developed around a four-input Analog to Digital Converter working in the CAMAC standard. The acquisition software is built in MS Visual C++ on a standard PC with a USB interface. It has a visual interface which permits Start/Stop of the acquisition, setting the type of acquisition (True/Live time), the time, and various menus for primary data acquisition. The spectrum is dynamically visualized with a moving cursor indicating the content and position. The microcontroller PIC16C765 is used for data transfer from the ADC to the PC. The microcontroller and the software create an embedded system which emulates the CAMAC protocol, programming the 4-input ADC for its operating modes ('zero suppression', 'addressed' and 'sequential') and handling the data transfers from the ADC to its internal memory. From its memory the data are transferred to the PC over the USB interface. The work is in progress. (authors)

  2. The acquisition of conditioned responding.

    Harris, Justin A

    2011-04-01

    This report analyzes the acquisition of conditioned responses in rats trained in a magazine approach paradigm. Following the suggestion by Gallistel, Fairhurst, and Balsam (2004), Weibull functions were fitted to the trial-by-trial response rates of individual rats. These showed that the emergence of responding was often delayed, after which the response rate would increase relatively gradually across trials. The fit of the Weibull function to the behavioral data of each rat was equaled by that of a cumulative exponential function incorporating a response threshold. Thus, the growth in conditioning strength on each trial can be modeled by the derivative of the exponential--a difference term of the form used in many models of associative learning (e.g., Rescorla & Wagner, 1972). Further analyses, comparing the acquisition of responding with a continuously reinforced stimulus (CRf) and a partially reinforced stimulus (PRf), provided further evidence in support of the difference term. In conclusion, the results are consistent with conventional models that describe learning as the growth of associative strength, incremented on each trial by an error-correction process.
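
    The difference term the report supports is the familiar error-correction update ΔV = α(λ − V) of Rescorla and Wagner (1972), whose trial-by-trial accumulation is a cumulative exponential; with a response threshold, responding emerges with a delay and then grows gradually, as observed. A minimal sketch (the parameter values and threshold are illustrative, not fitted values from the report):

    ```python
    def acquisition_curve(n_trials, alpha=0.2, lam=1.0, threshold=0.3):
        """Associative strength V under the error-correction rule
        dV = alpha * (lam - V), with responding emitted only once V
        exceeds a response threshold."""
        V = 0.0
        strengths, responding = [], []
        for _ in range(n_trials):
            V += alpha * (lam - V)                     # difference-term increment
            strengths.append(V)
            responding.append(V if V > threshold else 0.0)
        return strengths, responding
    ```

    After n trials V equals λ(1 − (1 − α)^n), the cumulative exponential to which the Weibull fits were compared; partial reinforcement corresponds to applying the increment only on reinforced trials.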

  3. Parallel Computing in SCALE

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  4. Parallel Polarization State Generation.

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
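
    The serial-versus-parallel contrast can be illustrated in Jones calculus: a cascade of elements acts as a product of Jones matrices, while split-modulate-recombine acts as a weighted sum. The sketch below uses standard horizontal/vertical polarizer matrices; the function names and the coherent-recombination idealization are assumptions for illustration, not the paper's apparatus:

    ```python
    import numpy as np

    def serial_sop(jones_matrices, e_in):
        """Serial architecture: cascaded elements compose as a *product*
        of Jones matrices applied to the input field."""
        e = e_in
        for J in jones_matrices:
            e = J @ e
        return e

    def parallel_sop(jones_matrices, weights, e_in):
        """Parallel architecture: the beam is split, each arm applies its
        own element with an adjustable intensity weight, and the arms are
        coherently recombined -- a weighted *sum* of Jones matrices."""
        J_total = sum(w * J for w, J in zip(weights, jones_matrices))
        return J_total @ e_in

    # Example elements: horizontal and vertical linear polarizers.
    H = np.array([[1, 0], [0, 0]], dtype=complex)
    V = np.array([[0, 0], [0, 1]], dtype=complex)
    ```

    The difference is stark for crossed polarizers: in series they extinguish any input (V·H = 0), while in parallel with equal weights they sum to the identity and pass the state unchanged.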

  5. Parallel imaging microfluidic cytometer.

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Hyperbolic partial differential equations

    Witten, Matthew

    1986-01-01

    Hyperbolic Partial Differential Equations III is a refereed journal issue that explores the applications, theory, and/or applied methods related to hyperbolic partial differential equations, or problems arising out of hyperbolic partial differential equations, in any area of research. This journal issue is interested in all types of articles in terms of review, mini-monograph, standard study, or short communication. Some studies presented in this journal include discretization of ideal fluid dynamics in the Eulerian representation; a Riemann problem in gas dynamics with bifurcation; periodic M

  7. Successful removable partial dentures.

    Lynch, Christopher D

    2012-03-01

    Removable partial dentures (RPDs) remain a mainstay of prosthodontic care for partially dentate patients. Appropriately designed, they can restore masticatory efficiency, improve aesthetics and speech, and help secure overall oral health. However, challenges remain in providing such treatments, including maintaining adequate plaque control, achieving adequate retention, and facilitating patient tolerance. The aim of this paper is to review the successful provision of RPDs. Removable partial dentures are a successful form of treatment for replacing missing teeth, and can be successfully provided with appropriate design and fabrication concepts in mind.

  8. Beginning partial differential equations

    O'Neil, Peter V

    2011-01-01

    A rigorous, yet accessible, introduction to partial differential equations-updated in a valuable new edition Beginning Partial Differential Equations, Second Edition provides a comprehensive introduction to partial differential equations (PDEs) with a special focus on the significance of characteristics, solutions by Fourier series, integrals and transforms, properties and physical interpretations of solutions, and a transition to the modern function space approach to PDEs. With its breadth of coverage, this new edition continues to present a broad introduction to the field, while also addres

  9. Dynamic surface-pressure instrumentation for rods in parallel flow

    Mulcahy, T.M.; Lawrence, W.

    1979-01-01

    Methods employed and experience gained in measuring random fluid boundary layer pressures on the surface of a small diameter cylindrical rod subject to dense, nonhomogeneous, turbulent, parallel flow in a relatively noise-contaminated flow loop are described. Emphasis is placed on identification of instrumentation problems; description of transducer construction, mounting, and waterproofing; and the pretest calibration required to achieve instrumentation capable of reliable data acquisition

  10. 3D Hyperpolarized C-13 EPI with Calibrationless Parallel Imaging

    Gordon, Jeremy W.; Hansen, Rie Beck; Shin, Peter J.

    2018-01-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and tem...... strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism....

  11. Indexing mergers and acquisitions

    Gang, Jianhua; Guo, Jie (Michael); Hu, Nan; Li, Xi

    2017-01-01

    We measure the efficiency of mergers and acquisitions by putting forward an index (the ‘M&A Index’) based on stochastic frontier analysis. The M&A Index is calculated for each takeover deal and is standardized between 0 and 1. An acquisition with a higher index encompasses higher efficiency. We find that takeover bids with higher M&A Indices are more likely to succeed. Moreover, the M&A Index shows a strong and positive relation with the acquirers’ post-acquisition stock perfo...

  12. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with these efforts. This paper aims to make a small contribution to them. We propose an overview of parallel programming, parallel execution and collaborative systems.

  13. ENHANCING THE INTERNATIONALIZATION OF THE GLOBAL INSURANCE MARKET: CHANGING DRIVERS OF MERGERS AND ACQUISITIONS

    D. Rasshyvalov

    2014-03-01

    Full Text Available One-third of worldwide mergers and acquisitions involve firms from different countries, making M&A one of the key drivers of internationalization. Over the past five years, cross-border insurance merger and acquisition activity has paralleled the deep global financial crisis.

  14. Partial knee replacement - slideshow

    Partial knee replacement - series: Normal anatomy (MedlinePlus presentation, //medlineplus.gov/ency/presentations/100225.htm; A.D.A.M. editorial team). Related MedlinePlus health topic: Knee Replacement.

  15. Beginning partial differential equations

    O'Neil, Peter V

    2014-01-01

    A broad introduction to PDEs with an emphasis on specialized topics and applications occurring in a variety of fields. Featuring a thoroughly revised presentation of topics, Beginning Partial Differential Equations, Third Edition provides a challenging, yet accessible, combination of techniques, applications, and introductory theory on the subject of partial differential equations. The new edition offers nonstandard coverage of material including Burgers' equation, the telegraph equation, damped wave motion, and the use of characteristics to solve nonhomogeneous problems. The Third Edition is or

  16. Acquisition Workforce Annual Report 2006

    General Services Administration — This is the Federal Acquisition Institute's (FAI's) Annual demographic report on the Federal acquisition workforce, showing trends by occupational series, employment...

  17. Acquisition Workforce Annual Report 2008

    General Services Administration — This is the Federal Acquisition Institute's (FAI's) Annual demographic report on the Federal acquisition workforce, showing trends by occupational series, employment...

  18. The Acquisition of Particles

    process of language acquisition on the basis of linguistic evidence the child is exposed to. ..... particle verbs are recognized in language processing differs from the way morphologically ..... In Natural Language and Linguistic Theory 11.

  19. High speed data acquisition

    Cooper, P.S.

    1997-07-01

    A general introduction to high-speed data acquisition system techniques in modern particle physics experiments is given. Examples are drawn from the SELEX (E781) high-statistics charmed baryon production and decay experiment now taking data at Fermilab

  20. Parallel Framework for Cooperative Processes

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes the work of an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and it should be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
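
    The master-slave, message-passing pattern the framework is built on can be sketched with Python's `multiprocessing` queues. This is only a minimal illustration of the pattern, not the paper's C++/MPI framework; the sum-of-squares payload stands in for one cycle's work of a real algorithm:

    ```python
    from multiprocessing import Process, Queue

    def slave(task_q, result_q):
        """A slave repeatedly receives a chunk of work, processes it, and
        sends the partial result back to the master."""
        while True:
            chunk = task_q.get()
            if chunk is None:                            # sentinel: shut down
                break
            result_q.put(sum(x * x for x in chunk))      # placeholder payload

    def master(data, n_slaves=4):
        """The master splits one cycle's work between the processing units
        and gathers the partial results."""
        task_q, result_q = Queue(), Queue()
        workers = [Process(target=slave, args=(task_q, result_q))
                   for _ in range(n_slaves)]
        for w in workers:
            w.start()
        step = (len(data) + n_slaves - 1) // n_slaves
        chunks = [data[i:i + step] for i in range(0, len(data), step)]
        for c in chunks:
            task_q.put(c)                                # scatter
        total = sum(result_q.get() for _ in chunks)      # gather
        for _ in workers:
            task_q.put(None)
        for w in workers:
            w.join()
        return total
    ```

    An iterative algorithm such as ACO would repeat the scatter/gather step once per cycle, with the master updating shared state (e.g. pheromone levels) between cycles.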

  1. Partial Acquisition as an Entry Mode in Transition Economies

    Jakobsen, Kristian; Meyer, Klaus E.

    2007-01-01

    Multinational enterprises often acquire stakes in an existing enterprise when entering emerging economies. This paper examines the determinants of entry mode choices with a special focus on these partial acquisitions, which have received little attention in the scholarly literature. We show...... negotiations are subject to significant stakeholder interference. (For more information, please contact: Kristian Jakobsen, Copenhagen Business School, Denmark: kj.int@cbs.dk)...

  2. Parallel Monte Carlo reactor neutronics

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
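
    One of the reproducibility strategies the abstract alludes to can be sketched: give each particle history its own deterministically seeded random stream, so the tally is identical however histories are distributed across processors. The toy slab-transmission model below (geometry, mean free path, absorption probability) is illustrative only, not from the codes discussed:

    ```python
    import math
    import random

    def simulate_history(index, base_seed=12345):
        """One neutron history with a private, deterministically seeded RNG
        stream keyed to the history index (toy 1D slab model)."""
        rng = random.Random(base_seed + index)
        position, slab_thickness, mfp = 0.0, 5.0, 1.0
        while True:
            # Sample an exponential flight distance; 1 - random() avoids log(0).
            position += -mfp * math.log(1.0 - rng.random())
            if position >= slab_thickness:
                return 1                     # transmitted through the slab
            if rng.random() < 0.5:
                return 0                     # absorbed; history ends

    def parallel_tally(history_indices):
        """Each node would take a disjoint slice of history indices; summing
        per-history results is order-independent, which makes the decomposed
        run reproducible."""
        return sum(simulate_history(i) for i in history_indices)
    ```

    Because every history's stream depends only on its index, running the indices forward, backward, or split across nodes yields bit-identical tallies, decoupling reproducibility from load balancing.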

  3. Extended data acquisition support at GSI

    Marinescu, D.C.; Busch, F.; Hultzsch, H.; Lowsky, J.; Richter, M.

    1984-01-01

    The Experiment Data Acquisition and Analysis System (EDAS) of GSI, designed to support the data processing associated with nuclear physics experiments, provides three modes of operation: real-time, interactive replay and batch replay. The real-time mode is used for data acquisition and data analysis during an experiment performed at the heavy ion accelerator at GSI. An experiment may be performed either in Stand Alone Mode, using only the Experiment Computers, or in Extended Mode using all computing resources available. The Extended Mode combines the advantages of the real-time response of a dedicated minicomputer with the availability of computing resources in a large computing environment. This paper first gives an overview of EDAS and presents the GSI High Speed Data Acquisition Network. Data Acquisition Modes and the Extended Mode are then introduced. The structure of the system components, their implementation and the functions pertinent to the Extended Mode are presented. The control functions of the Experiment Computer sub-system are discussed in detail. Two aspects of the design of the sub-system running on the mainframe are stressed, namely the use of a multi-user installation for real-time processing and the use of a high level programming language, PL/I, as an implementation language for a system which uses parallel processing. The experience accumulated is summarized in a number of conclusions

  4. An original approach to data acquisition: CHADAC

    Huppert, M.; Nayman, P.; Rivoal, M.

    1981-01-01

    Many labs try to boost existing data acquisition systems by inserting high-performance intelligent devices in the important nodes of the system's structure. This strategy finds its limits in the system's architecture. The CHADAC project proposes a simple and efficient solution to this problem, using a multiprocessor modular architecture. CHADAC's main features are: a) Parallel acquisition of data: CHADAC is fast; it dedicates one processor per branch; each processor can read and store one 16-bit word in 800 ns. b) Original structure: each processor can work in its own private memory, in its own shared memory (double access) and in the shared memory of any other processor (this feature being particularly useful to avoid wasteful data transfers). Simple and fast communications between processors are also provided by local DMAs. c) Flexibility: each processor is autonomous and may be used as an independent acquisition system for a branch, by connecting local peripherals to it. Adjunction of fast trigger logic is possible. By its architecture and performance, CHADAC is designed to provide good support for local intelligent devices and transfer operators developed elsewhere, providing a way to implement systems well fitted to various types of data acquisition. (orig.)

  5. Anti-parallel triplexes

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in case of nucleobases Y and Z a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one...... of the phosphate groups in Watson-Crick strand. Also, it was shown that the nucleobase Y made a good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with LNA moiety was forced to twist out of plane of Watson-Crick base pair which......The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When, the anti...

  6. Parallel consensual neural networks.

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
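    The staged, weighted-consensus scheme described above can be sketched compactly. In the sketch below, the least-squares "stage networks", the tanh transform, and the accuracy-based weights are illustrative stand-ins, not the paper's actual components:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_bias(X):
    return np.hstack([X, np.ones((X.shape[0], 1))])

def train_stage(X, y):
    # least-squares "stage network" (a stand-in for the paper's neural nets)
    w, *_ = np.linalg.lstsq(add_bias(X), y, rcond=None)
    return w

def stage_scores(X, w):
    return add_bias(X) @ w

X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# two transformed versions of the input, treated as independent inputs
views = [X, np.tanh(X)]
stages = [train_stage(v, y) for v in views]
scores = np.column_stack([stage_scores(v, w) for v, w in zip(views, stages)])

# consensual decision: weight each stage's output by its training accuracy
accs = np.array([np.mean((s > 0.5) == y) for s in scores.T])
weights = accs / accs.sum()
consensus = scores @ weights
pred = (consensus > 0.5).astype(float)
print(np.mean(pred == y))
```

The key structural point survives the simplification: the stage classifiers run independently (hence in parallel), and only the final weighted combination is sequential.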

  7. A Parallel Particle Swarm Optimizer

    Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D

    2003-01-01

    .... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...

  8. Patterns for Parallel Software Design

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  9. Seeing or moving in parallel

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...... adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas...

  10. Partial differential equations

    Evans, Lawrence C

    2010-01-01

    This text gives a comprehensive survey of modern techniques in the theoretical study of partial differential equations (PDEs) with particular emphasis on nonlinear equations. The exposition is divided into three parts: representation formulas for solutions; theory for linear partial differential equations; and theory for nonlinear partial differential equations. Included are complete treatments of the method of characteristics; energy methods within Sobolev spaces; regularity for second-order elliptic, parabolic, and hyperbolic equations; maximum principles; the multidimensional calculus of variations; viscosity solutions of Hamilton-Jacobi equations; shock waves and entropy criteria for conservation laws; and, much more.The author summarizes the relevant mathematics required to understand current research in PDEs, especially nonlinear PDEs. While he has reworked and simplified much of the classical theory (particularly the method of characteristics), he primarily emphasizes the modern interplay between funct...

  11. When do evolutionary algorithms optimize separable functions in parallel?

    Doerr, Benjamin; Sudholt, Dirk; Witt, Carsten

    2013-01-01

    is that evolutionary algorithms make progress on all subfunctions in parallel, so that optimizing a separable function does not take much longer than optimizing the hardest subfunction-subfunctions are optimized "in parallel." We show that this is only partially true, already for the simple (1+1) evolutionary...... algorithm ((1+1) EA). For separable functions composed of k Boolean functions indeed the optimization time is the maximum optimization time of these functions times a small O(log k) overhead. More generally, for sums of weighted subfunctions that each attain non-negative integer values less than r = o(log1...
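    For concreteness, here is a minimal (1+1) EA on OneMax (the sum-of-bits function, the simplest separable example). The mutation rate 1/n is the standard choice the abstract's setting assumes; the stopping test is specific to this toy fitness:

```python
import random

random.seed(1)

def one_plus_one_ea(f, n, max_iters=100000):
    """(1+1) EA: flip each bit independently with prob. 1/n,
    keep the offspring if it is not worse than the parent."""
    x = [random.randint(0, 1) for _ in range(n)]
    best = f(x)
    for it in range(max_iters):
        y = [b ^ (random.random() < 1.0 / n) for b in x]
        fy = f(y)
        if fy >= best:
            x, best = y, fy
        if best == n:          # OneMax optimum: all bits set
            return it + 1
    return max_iters

# OneMax is separable: a sum of n one-bit subfunctions.
iters = one_plus_one_ea(lambda x: sum(x), 40)
print(iters)
```

The expected optimization time here is Theta(n log n), which is what "a small O(log k) overhead" over the hardest subfunction amounts to in this trivial case.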

  12. Eigenvalues calculation algorithms for λ-modes determination. Parallelization approach

    Vidal, V. [Universidad Politecnica de Valencia (Spain). Departamento de Sistemas Informaticos y Computacion; Verdu, G.; Munoz-Cobo, J.L. [Universidad Politecnica de Valencia (Spain). Departamento de Ingenieria Quimica y Nuclear; Ginestart, D. [Universidad Politecnica de Valencia (Spain). Departamento de Matematica Aplicada

    1997-03-01

    In this paper, we review two methods to obtain the λ-modes of a nuclear reactor, the Subspace Iteration method and Arnoldi's method, which are popular methods to solve the partial eigenvalue problem for a given matrix. In the developed application for the neutron diffusion equation we include improved acceleration techniques for both methods. Also, we propose two parallelization approaches for these methods, a coarse grain parallelization and a fine grain one. We have tested the developed algorithms with two realistic problems, focusing on the efficiency of the methods according to the CPU times. (author).
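    A serial sketch of the subspace-iteration idea on a small symmetric stand-in matrix follows (the real problem is a generalized neutron-diffusion eigenproblem, so this is only the skeleton of the method):

```python
import numpy as np

rng = np.random.default_rng(2)

def subspace_iteration(A, k, iters=200):
    """Orthogonal (subspace) iteration for the k dominant eigenpairs of A."""
    n = A.shape[0]
    Q = np.linalg.qr(rng.normal(size=(n, k)))[0]
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)       # multiply, then re-orthogonalize
    lam = np.diag(Q.T @ A @ Q)           # Rayleigh-quotient estimates
    return lam, Q

# Symmetric test matrix with a known spectrum (a hypothetical stand-in
# for the discretized diffusion operator).
evals = np.array([5.0, 3.0, 1.0, 0.5])
V = np.linalg.qr(rng.normal(size=(4, 4)))[0]
A = V @ np.diag(evals) @ V.T

lam, _ = subspace_iteration(A, 2)
print(np.sort(lam)[::-1])
```

The matrix-vector products inside the loop are where both the coarse-grain and fine-grain parallelizations the paper proposes would apply.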

  13. PARALLEL IMPORT: REALITY FOR RUSSIA

    Т. А. Сухопарова

    2014-01-01

    The problem of parallel import is currently an urgent question. Legalization of parallel import in Russia is expedient; this conclusion is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of such a decision and to apply remedies to minimize them.

  14. Numerical Methods for Partial Differential Equations

    Guo, Ben-yu

    1987-01-01

    These Proceedings of the first Chinese Conference on Numerical Methods for Partial Differential Equations covers topics such as difference methods, finite element methods, spectral methods, splitting methods, parallel algorithm etc., their theoretical foundation and applications to engineering. Numerical methods both for boundary value problems of elliptic equations and for initial-boundary value problems of evolution equations, such as hyperbolic systems and parabolic equations, are involved. The 16 papers of this volume present recent or new unpublished results and provide a good overview of current research being done in this field in China.

  15. Optimization of partial search

    Korepin, Vladimir E

    2005-01-01

    A quantum Grover search algorithm can find a target item in a database faster than any classical algorithm. One can trade accuracy for speed and find a part of the database (a block) containing the target item even faster; this is partial search. A partial search algorithm was recently suggested by Grover and Radhakrishnan. Here we optimize it. Efficiency of the search algorithm is measured by the number of queries to the oracle. The author suggests a new version of the Grover-Radhakrishnan algorithm which uses a minimal number of such queries. The algorithm can run on the same hardware that is used for the usual Grover algorithm. (letter to the editor)

  16. Post-Acquisition IT Integration

    Henningsson, Stefan; Yetton, Philip

    2013-01-01

    The extant research on post-acquisition IT integration analyzes how acquirers realize IT-based value in individual acquisitions. However, serial acquirers make 60% of acquisitions. These acquisitions are not isolated events, but are components in growth-by-acquisition programs. To explain how...... serial acquirers realize IT-based value, we develop three propositions on the sequential effects on post-acquisition IT integration in acquisition programs. Their combined explanation is that serial acquirers must have a growth-by-acquisition strategy that includes the capability to improve...... IT integration capabilities, to sustain high alignment across acquisitions and to maintain a scalable IT infrastructure with a flat or decreasing cost structure. We begin the process of validating the three propositions by investigating a longitudinal case study of a growth-by-acquisition program....

  17. The Galley Parallel File System

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  18. Parallelization of the FLAPW method

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  19. Parallelization of the FLAPW method

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.
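    The data layout both FLAPW records describe, with the plane-wave components of each state divided among the processors, can be illustrated with a toy reduction. The array sizes and the norm computation below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy picture of the layout: one state's plane-wave coefficients are split
# across P "processors"; each computes a partial result, and a global
# reduction combines them.
coeffs = rng.normal(size=1000) + 1j * rng.normal(size=1000)
P = 8
chunks = np.array_split(coeffs, P)

partial = [np.sum(np.abs(c) ** 2) for c in chunks]  # per-processor work
norm_sq = sum(partial)                              # global reduction
print(norm_sq)
```

Because each processor holds only its slice of the coefficients, memory scales with the number of processors, which is what lifts the roughly hundred-atom limit the abstract mentions.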

  20. Seismic data acquisition systems

    Kolvankar, V.G.; Nadre, V.N.; Rao, D.S.

    1989-01-01

    Details of seismic data acquisition systems developed at the Bhabha Atomic Research Centre, Bombay are reported. The seismic signals acquired belong to different signal bandwidths in the band from 0.02 Hz to 250 Hz. All these acquisition systems are built around a unique technique of recording multichannel data on to a single track of an audio tape and in digital form. Techniques of how these signals in different bands of frequencies were acquired and recorded are described. Method of detecting seismic signals and its performance is also discussed. Seismic signals acquired in different set-ups are illustrated. Time indexing systems for different set-ups and multichannel waveform display systems which form essential part of the data acquisition systems are also discussed. (author). 13 refs., 6 figs., 1 tab

  1. Auxiliary partial liver transplantation

    C.B. Reuvers (Cornelis Bastiaan)

    1986-01-01

    textabstractIn this thesis studies on auxiliary partial liver transplantation in the dog and the pig are reported. The motive to perform this study was the fact that patients with acute hepatic failure or end-stage chronic liver disease are often considered to form too great a risk for successful

  2. Partial Remission Definition

    Andersen, Marie Louise Max; Hougaard, Philip; Pörksen, Sven

    2014-01-01

    OBJECTIVE: To validate the partial remission (PR) definition based on insulin dose-adjusted HbA1c (IDAA1c). SUBJECTS AND METHODS: The IDAA1c was developed using data in 251 children from the European Hvidoere cohort. For validation, 129 children from a Danish cohort were followed from the onset...

  3. Fundamental partial compositeness

    Sannino, Francesco; Strumia, Alessandro; Tesi, Andrea

    2016-01-01

    We construct renormalizable Standard Model extensions, valid up to the Planck scale, that give a composite Higgs from a new fundamental strong force acting on fermions and scalars. Yukawa interactions of these particles with Standard Model fermions realize the partial compositeness scenario. Unde...

  4. Partially ordered models

    Fernandez, R.; Deveaux, V.

    2010-01-01

    We provide a formal definition and study the basic properties of partially ordered chains (POC). These systems were proposed to model textures in image processing and to represent independence relations between random variables in statistics (in the later case they are known as Bayesian networks).

  5. Partially Hidden Markov Models

    Forchhammer, Søren Otto; Rissanen, Jorma

    1996-01-01

    Partially Hidden Markov Models (PHMM) are introduced. They differ from the ordinary HMM's in that both the transition probabilities of the hidden states and the output probabilities are conditioned on past observations. As an illustration they are applied to black and white image compression where...

  6. Honesty in partial logic

    W. van der Hoek (Wiebe); J.O.M. Jaspars; E. Thijsse

    1995-01-01

    textabstractWe propose an epistemic logic in which knowledge is fully introspective and implies truth, although truth need not imply epistemic possibility. The logic is presented in sequential format and is interpreted in a natural class of partial models, called balloon models. We examine the

  7. On Shaft Data Acquisition System (OSDAS)

    Pedings, Marc; DeHart, Shawn; Formby, Jason; Naumann, Charles

    2012-01-01

    On Shaft Data Acquisition System (OSDAS) is a rugged, compact, multiple-channel data acquisition computer system that is designed to record data from instrumentation while operating under extreme rotational centrifugal or gravitational acceleration forces. This system, which was developed for the Heritage Fuel Air Turbine Test (HFATT) program, addresses the problem of recording multiple channels of high-sample-rate data on most any rotating test article by mounting the entire acquisition computer onboard with the turbine test article. With the limited availability of slip ring wires for power and communication, OSDAS utilizes its own resources to provide independent power and amplification for each instrument. Since OSDAS utilizes standard PC technology as well as shared code interfaces with the next-generation, real-time health monitoring system (SPARTAA Scalable Parallel Architecture for Real Time Analysis and Acquisition), this system could be expanded beyond its current capabilities, such as providing advanced health monitoring capabilities for the test article. High-conductor-count slip rings are expensive to purchase and maintain, yet only provide a limited number of conductors for routing instrumentation off the article and to a stationary data acquisition system. In addition to being limited to a small number of instruments, slip rings are prone to wear quickly, and introduce noise and other undesirable characteristics to the signal data. This led to the development of a system capable of recording high-density instrumentation, at high sample rates, on the test article itself, all while under extreme rotational stress. OSDAS is a fully functional PC-based system with 48 channels of 24-bit, high-sample-rate input channels, phase synchronized, with an onboard storage capacity of over 1/2-terabyte of solid-state storage. This recording system takes a novel approach to the problem of recording multiple channels of instrumentation, integrated with the test

  8. Partial wave analysis using graphics processing units

    Berger, Niklaus; Liu Beijiang; Wang Jike, E-mail: nberger@ihep.ac.c [Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Lu, Shijingshan, 100049 Beijing (China)

    2010-04-01

    Partial wave analysis is an important tool for determining resonance properties in hadron spectroscopy. For large data samples however, the un-binned likelihood fits employed are computationally very expensive. At the Beijing Spectrometer (BES) III experiment, an increase in statistics compared to earlier experiments of up to two orders of magnitude is expected. In order to allow for a timely analysis of these datasets, additional computing power with short turnover times has to be made available. It turns out that graphics processing units (GPUs) originally developed for 3D computer games have an architecture of massively parallel single instruction multiple data floating point units that is almost ideally suited for the algorithms employed in partial wave analysis. We have implemented a framework for tensor manipulation and partial wave fits called GPUPWA. The user writes a program in pure C++ whilst the GPUPWA classes handle computations on the GPU, memory transfers, caching and other technical details. In conjunction with a recent graphics processor, the framework provides a speed-up of the partial wave fit by more than two orders of magnitude compared to legacy FORTRAN code.
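    The structure that makes partial wave fits GPU-friendly is easy to see in vectorized form: every event's intensity is an independent coherent sum over resonance amplitudes. In the sketch below, NumPy stands in for the GPU kernels and random numbers replace real amplitudes; all names are hypothetical, not GPUPWA's API:

```python
import numpy as np

rng = np.random.default_rng(4)

# Each event's intensity is |sum_r c_r * A_r(event)|^2; events are
# independent, so the sum over events maps directly onto SIMD hardware.
n_events, n_res = 10000, 3
amps = (rng.normal(size=(n_events, n_res))
        + 1j * rng.normal(size=(n_events, n_res)))   # placeholder amplitudes
coeffs = np.array([1.0 + 0.5j, 0.3 - 0.2j, 0.8 + 0.0j])

intensity = np.abs(amps @ coeffs) ** 2   # one data-parallel pass over events
nll = -np.sum(np.log(intensity))         # un-binned negative log-likelihood
print(nll)
```

In a fit, only `coeffs` changes between likelihood evaluations, so the large `amps` array can stay resident on the device, which is one reason the reported speed-ups are so large.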

  9. LEGS data acquisition facility

    LeVine, M.J.

    1985-01-01

    The data acquisition facility for the LEGS medium energy photonuclear beam line is composed of an auxiliary crate controller (ACC) acting as a front-end processor, loosely coupled to a time-sharing host computer based on a UNIX-like environment. The ACC services all real-time demands in the CAMAC crate: it responds to LAMs generated by data acquisition modules, to keyboard commands, and it refreshes the graphics display at frequent intervals. The host processor is needed only for printing histograms and recording event buffers on magnetic tape. The host also provides the environment for software development. The CAMAC crate is interfaced by a VERSAbus CAMAC branch driver

  10. Acquisition IT Integration

    Henningsson, Stefan; Øhrgaard, Christian

    2015-01-01

    of temporary agency workers. Following an analytic induction approach, theoretically grounded in the re-source-based view of the firm, we identify the complimentary and supplementary roles consultants can assume in acquisition IT integration. Through case studies of three acquirers, we investigate how...... the acquirers appropriate the use of agency workers as part of its acquisition strategy. For the investigated acquirers, assigning roles to agency workers is contingent on balancing the needs of knowledge induction and knowledge retention, as well as experience richness and in-depth under-standing. Composition...

  11. Algebraic partial Boolean algebras

    Smith, Derek

    2003-01-01

    Partial Boolean algebras, first studied by Kochen and Specker in the 1960s, provide the structure for Bell-Kochen-Specker theorems which deny the existence of non-contextual hidden variable theories. In this paper, we study partial Boolean algebras which are 'algebraic' in the sense that their elements have coordinates in an algebraic number field. Several of these algebras have been discussed recently in a debate on the validity of Bell-Kochen-Specker theorems in the context of finite precision measurements. The main result of this paper is that every algebraic finitely-generated partial Boolean algebra B(T) is finite when the underlying space H is three-dimensional, answering a question of Kochen and showing that Conway and Kochen's infinite algebraic partial Boolean algebra has minimum dimension. This result contrasts with the existence of an infinite (non-algebraic) B(T) generated by eight elements in an abstract orthomodular lattice of height 3. We then initiate a study of higher-dimensional algebraic partial Boolean algebras. First, we describe a restriction on the determinants of the elements of B(T) that are generated by a given set T. We then show that when the generating set T consists of the rays spanning the minimal vectors in a real irreducible root lattice, B(T) is infinite just if that root lattice has an A₅ sublattice. Finally, we characterize the rays of B(T) when T consists of the rays spanning the minimal vectors of the root lattice E₈.

  12. Parallelization Issues and Particle-In Codes.

    Elster, Anne Cathrine

    1994-01-01

    "Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses, with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache-line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache.
A consideration of the input data's effect on
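    The grid-locality property the dual-pointer scheme maintains, particles grouped by the cell they occupy, can be shown with a toy re-bucketing pass. This is a hypothetical sketch of the invariant, not the thesis's incremental KSR implementation:

```python
import random

random.seed(6)

def rebucket(particles, n_cells):
    """Group particle positions (in [0, 1)) by grid-cell index, so each
    cell's particles are stored contiguously -- the locality that keeps
    field-gather and scatter steps cache-friendly."""
    cells = [[] for _ in range(n_cells)]
    for x in particles:
        cells[int(x * n_cells)].append(x)
    return cells

particles = [random.random() for _ in range(1000)]
cells = rebucket(particles, 16)
print([len(c) for c in cells])
```

A full rebucket each step is the expensive baseline; the dual-pointer idea amortizes this by keeping the arrays only partially sorted as particles drift between neighboring cells.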

  13. Is Monte Carlo embarrassingly parallel?

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  14. Is Monte Carlo embarrassingly parallel?

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
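    The cycle-wise rendezvous both records describe has a simple shape: no processor can begin cycle i+1 until every partial fission bank from cycle i has been merged and the multiplication factor estimated. A serial toy model of that structure (not a real criticality code; the 0-3 fission sites per history are an arbitrary placeholder):

```python
import random

def track_histories(seed, n):
    """One processor's share of a cycle: each history yields 0-3 new
    fission sites (a toy stand-in for neutron transport)."""
    rng = random.Random(seed)
    return [rng.randint(0, 3) for _ in range(n)]

def run_cycles(n_procs=4, histories=100, cycles=5):
    k_estimates = []
    for cycle in range(cycles):
        # "parallel" phase: each processor fills a partial fission bank
        partials = [track_histories(1000 * cycle + p, histories)
                    for p in range(n_procs)]
        # rendezvous: nothing proceeds until all partial banks are merged,
        # k is estimated, and the source for the next cycle is fixed
        bank = [site for part in partials for site in part]
        k_estimates.append(sum(bank) / len(bank))
    return k_estimates

print(run_cycles())
```

The merge step is the serial fraction: as processor counts grow, its fixed cost (plus communication) caps the speedup, which is the abstract's central point.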

  15. Parallel integer sorting with medium and fine-scale parallelism

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
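    The key-range bucketing step underlying barrel-sort can be sketched serially; in the parallel version each bucket would be owned by one processor, and the per-bucket sorts run independently. The bucket count and key range below are arbitrary:

```python
import random

random.seed(5)

def bucket_sort(keys, n_buckets, key_max):
    """Route each key to the bucket owning its key range, then sort each
    bucket independently (one bucket per processor in the parallel case)."""
    width = (key_max + n_buckets) // n_buckets
    buckets = [[] for _ in range(n_buckets)]
    for k in keys:
        buckets[k // width].append(k)
    out = []
    for b in buckets:
        out.extend(sorted(b))   # per-bucket sorts are independent work
    return out

data = [random.randrange(1000) for _ in range(500)]
print(bucket_sort(data, 8, 999) == sorted(data))
```

The routing step is where message-passing overhead bites: every key may cross processors, which is why barrel-sort batches this traffic on medium-scale machines.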

  16. Template based parallel checkpointing in a massively parallel computer system

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
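    The template comparison can be sketched with block checksums: only blocks whose checksum differs from the previously stored template are shipped. The block size, SHA-256 choice, and byte layout below are illustrative, not the patent's actual protocol:

```python
import hashlib

def block_hashes(data, block=64):
    return [hashlib.sha256(data[i:i + block]).hexdigest()
            for i in range(0, len(data), block)]

def delta_checkpoint(state, template, block=64):
    """Keep only the blocks of `state` whose checksum differs from the
    corresponding block of the template checkpoint."""
    t_hashes = block_hashes(template, block)
    delta = {}
    for i in range(0, len(state), block):
        idx = i // block
        h = hashlib.sha256(state[i:i + block]).hexdigest()
        if idx >= len(t_hashes) or h != t_hashes[idx]:
            delta[idx] = state[i:i + block]
    return delta

template = bytes(1024)            # previous checkpoint, all zeros
state = bytearray(template)
state[100:104] = b"ABCD"          # one block changed on this node
delta = delta_checkpoint(bytes(state), template)
print(sorted(delta))
```

Since most nodes in a large job have nearly identical state, the delta is small, which is the source of the claimed reduction in checkpoint traffic and storage.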

  17. Algorithms for computational fluid dynamics on parallel processors

    Van de Velde, E.F.

    1986-01-01

    A study of parallel algorithms for the numerical solution of partial differential equations arising in computational fluid dynamics is presented. The actual implementation on parallel processors of shared and nonshared memory design is discussed. The performance of these algorithms is analyzed in terms of machine efficiency, communication time, bottlenecks and software development costs. For elliptic equations, a parallel preconditioned conjugate gradient method is described, which has been used to solve pressure equations discretized with high order finite elements on irregular grids. A parallel full multigrid method and a parallel fast Poisson solver are also presented. Hyperbolic conservation laws were discretized with parallel versions of finite difference methods like the Lax-Wendroff scheme and with the Random Choice method. Techniques are developed for comparing the behavior of an algorithm on different architectures as a function of problem size and local computational effort. Effective use of these advanced architecture machines requires the use of machine dependent programming. It is shown that the portability problems can be minimized by introducing high level operations on vectors and matrices structured into program libraries
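    A plain (unpreconditioned) conjugate gradient sketch shows where the parallelism lives: the matrix-vector product and the dot products are the operations distributed across processors. The 1D Poisson test system is a toy stand-in for a discretized pressure equation:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iters=500):
    """Textbook CG for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iters):
        Ap = A @ p                 # matvec: the main distributed operation
        alpha = rs / (p @ Ap)      # dot products: global reductions
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```

The global reductions in the dot products are the synchronization points that dominate communication cost on nonshared-memory machines, which motivates the chapter's efficiency analysis.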

  18. 48 CFR 352.234-4 - Partial earned value management system.

    2010-10-01

    ... management system. 352.234-4 Section 352.234-4 Federal Acquisition Regulations System HEALTH AND HUMAN....234-4 Partial earned value management system. As prescribed in 334.203-70(d), the Contracting Officer shall insert the following clause: Partial Earned Value Management System (October 2008) (a) The...

  19. Parallel education: what is it?

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  20. Balanced, parallel operation of flashlamps

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests.

  1. ACQUISITIONS LIST, MAY 1966.

    Harvard Univ., Cambridge, MA. Graduate School of Education.

    THIS ACQUISITIONS LIST IS A BIBLIOGRAPHY OF MATERIAL ON VARIOUS ASPECTS OF EDUCATION. OVER 300 UNANNOTATED REFERENCES ARE PROVIDED FOR DOCUMENTS DATING MAINLY FROM 1960 TO 1966. BOOKS, JOURNALS, REPORT MATERIALS, AND UNPUBLISHED MANUSCRIPTS ARE LISTED UNDER THE FOLLOWING HEADINGS--(1) ACHIEVEMENT, (2) ADOLESCENCE, (3) CHILD DEVELOPMENT, (4)…

  2. MAST data acquisition system

    Shibaev, S.; Counsell, G.; Cunningham, G.; Manhood, S.J.; Thomas-Davies, N.; Waterhouse, J.

    2006-01-01

    The data acquisition system of the Mega-Amp Spherical Tokamak (MAST) presently collects up to 400 MB of data in about 3000 data items per shot, and subsequent fast growth is expected. Since the start of MAST operations (in 1999) the system has changed dramatically. Though we continue to use legacy CAMAC hardware, newer VME, PCI, and PXI based sub-systems collect most of the data now. All legacy software has been redesigned and new software has been developed. Last year a major system improvement was made: the replacement of the message distribution system. The new message system provides easy connection of any sub-system independently of its platform and serves as a framework for many new applications. A new data acquisition controller provides full control of common sub-systems, central error logging, and data acquisition alarms for the MAST plant. A number of new sub-systems using Linux and Windows OSs on VME, PCI, and PXI platforms have been developed. A new PXI unit has been designed as a base sub-system accommodating any type of data acquisition and control devices. Several web applications for real-time MAST monitoring and data presentation have been developed.

  3. Data acquisition techniques

    Dougherty, R.C.

    1976-01-01

    Testing neutron generators and major subassemblies has undergone a transition in the past few years. Digital information is now used for storage and analysis. The key to the change is the availability of a high-speed digitizer system. The status of the Sandia Laboratory data acquisition and handling system as applied to this area is surveyed. 1 figure

  4. Surviving mergers & acquisitions.

    Dixon, Diane L

    2002-01-01

    Mergers and acquisitions are never easy to implement. The health care landscape is a minefield of failed mergers and uneasy alliances generating great turmoil and pain. But some mergers have been successful, creating health systems that benefit the communities they serve. Five prominent leaders offer their advice on minimizing the difficulties of M&As.

  5. General image acquisition parameters

    Teissier, J.M.; Lopez, F.M.; Langevin, J.F.

    1993-01-01

    The general parameters are of paramount importance in achieving image quality in terms of spatial resolution and contrast. They also play a role in the acquisition time of each sequence. We describe them separately, before associating them in a decision tree gathering the various options that are possible for diagnosis.

  6. Decentralized Blended Acquisition

    Berkhout, A.J.

    2013-01-01

    The concept of blending and deblending is reviewed, making use of traditional and dispersed source arrays. The network concept of distributed blended acquisition is introduced. A million-trace robot system is proposed, illustrating that decentralization may bring about a revolution in the way we

  7. MPS Data Acquisition System

    Eiseman, S.E.; Miller, W.J.

    1975-01-01

    A description is given of the data acquisition system used with the multiparticle spectrometer facility at Brookhaven. Detailed information is provided on that part of the system which connects the detectors to the data handler; namely, the detector electronics, device controller, and device port optical isolator

  8. [Acquisition of arithmetic knowledge].

    Fayol, Michel

    2008-01-01

    The focus of this paper is on contemporary research on the number, counting, and arithmetic competencies that emerge during infancy, the preschool years, and elementary school. I provide a brief overview of the evolution of children's conceptual knowledge of arithmetic, the acquisition and use of counting, and how they solve simple arithmetic problems (e.g. 4 + 3).

  9. Second Language Acquisition.

    McLaughlin, Barry; Harrington, Michael

    1989-01-01

    A distinction is drawn between representational and processing models of second-language acquisition. The first approach is derived primarily from linguistics, the second from psychology. Both fields, it is argued, need to collaborate more fully, overcoming disciplinary narrowness in order to achieve more fruitful research. (GLR)

  10. Parallel Element Agglomeration Algebraic Multigrid and Upscaling Library

    2017-10-24

    ParELAG is a parallel C++ library for numerical upscaling of finite element discretizations and element-based algebraic multigrid solvers. It provides optimal complexity algorithms to build multilevel hierarchies and solvers that can be used for solving a wide class of partial differential equations (elliptic, hyperbolic, saddle point problems) on general unstructured meshes. Additionally, a novel multilevel solver for saddle point problems with divergence constraint is implemented.

  11. Current and future state of FDA-CMS parallel reviews.

    Messner, D A; Tunis, S R

    2012-03-01

    The US Food and Drug Administration (FDA) and the Centers for Medicare and Medicaid Services (CMS) recently proposed a partial alignment of their respective review processes for new medical products. The proposed "parallel review" not only offers an opportunity for some products to reach the market with Medicare coverage more quickly but may also create new incentives for product developers to conduct studies designed to address simultaneously the information needs of regulators, payers, patients, and clinicians.

  12. Workspace Analysis for Parallel Robot

    Ying Sun

    2013-05-01

    As a relatively new type of robot, the parallel robot possesses advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, low self-weight/load ratio, good dynamic behavior and easy control; hence its domain of application is expanding. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limits on link lengths is introduced. This paper analyses the position workspace and orientation workspace of a six-degree-of-freedom parallel robot. The results show that changing the lengths of the branches of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform does not affect the size of the workspace but does change its position.
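    The boundary-searching idea can be illustrated with a deliberately simplified planar sketch: a point belongs to the workspace exactly when the inverse kinematics yields leg lengths within the actuator limits. The anchor layout, limits, and grid step below are invented for the example; the paper's analysis concerns a full six-degree-of-freedom mechanism.

```python
from math import hypot

# Hypothetical planar base-joint positions; a stand-in for the real 6-DOF geometry.
ANCHORS = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.5)]

def reachable(x, y, l_min, l_max):
    """Inverse-kinematics test: the point is in the workspace only if every
    leg length falls within the actuator limits."""
    return all(l_min <= hypot(x - ax, y - ay) <= l_max for ax, ay in ANCHORS)

def workspace_area(l_min, l_max, step=0.05):
    """Boundary search by grid scan over a bounding box around the anchors."""
    count, x = 0, -l_max
    while x <= 4.0 + l_max:
        y = -l_max
        while y <= 3.5 + l_max:
            if reachable(x, y, l_min, l_max):
                count += 1
            y += step
        x += step
    return count * step * step

small = workspace_area(1.0, 3.0)
large = workspace_area(1.0, 4.0)  # longer maximum leg length enlarges the workspace
```

    The comparison mirrors the paper's conclusion: relaxing the link-length limit enlarges the reachable region, while moving the anchors would translate it.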

  13. "Feeling" Series and Parallel Resistances.

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)

  14. Parallel encoders for pixel detectors

    Nikityuk, N.M.

    1991-01-01

    A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs

  15. Massively Parallel Finite Element Programming

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  16. Event monitoring of parallel computations

    Gruzlikov Alexander M.

    2015-06-01

    The paper considers the monitoring of parallel computations for detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences.

  17. Massively Parallel Finite Element Programming

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  18. The STAPL Parallel Graph Library

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  19. Partially composite Higgs models

    Alanne, Tommi; Buarque Franzosi, Diogo; Frandsen, Mads T.

    2018-01-01

    We study the phenomenology of partially composite-Higgs models where electroweak symmetry breaking is dynamically induced, and the Higgs is a mixture of a composite and an elementary state. The models considered have explicit realizations in terms of gauge-Yukawa theories with new strongly interacting fermions coupled to elementary scalars and allow for a very SM-like Higgs state. We study constraints on their parameter spaces from vacuum stability and perturbativity as well as from LHC results and find that requiring vacuum stability up to the compositeness scale already imposes relevant constraints. A small part of parameter space around the classically conformal limit is stable up to the Planck scale. This is, however, already strongly disfavored by LHC results. In different limits, the models realize both (partially) composite-Higgs and (bosonic) technicolor models and a dynamical extension…

  20. Parallel alternating direction preconditioner for isogeometric simulations of explicit dynamics

    Łoś, Marcin

    2015-04-27

    In this paper we present a parallel implementation of the alternating direction preconditioner for isogeometric simulations of explicit dynamics. The Alternating Direction Implicit (ADI) algorithm, which belongs to the category of matrix-splitting iterative methods, was proposed almost six decades ago for solving parabolic and elliptic partial differential equations, see [1–4]. A new version of this algorithm has recently been developed for isogeometric simulations of two-dimensional explicit dynamics [5] and steady-state diffusion equations with orthotropic heterogeneous coefficients [6]. In this paper we present a parallel version of the alternating direction implicit algorithm for three-dimensional simulations. The algorithm has been incorporated as a part of PetIGA, an isogeometric framework [7] built on top of PETSc [8]. We show the scalability of the parallel algorithm on the Stampede Linux cluster up to 10,000 processors, as well as the convergence rate of the PCG solver with the ADI algorithm as preconditioner.
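    A minimal sketch of one classical Peaceman-Rachford ADI step for the 2D heat equation shows why the splitting is attractive: each half-step reduces to independent tridiagonal solves, one per grid line, which is also what makes it easy to parallelize. This is a generic textbook illustration under invented grid parameters, not the isogeometric variant of the paper.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy, zero Dirichlet BCs:
    implicit in x / explicit in y, then implicit in y / explicit in x.
    Each half-step is a set of independent tridiagonal solves (one per line)."""
    n = len(u)
    a, b, c = [-r] * n, [1 + 2 * r] * n, [-r] * n
    val = lambda v, i, j: v[i][j] if 0 <= i < n and 0 <= j < n else 0.0
    half = [[0.0] * n for _ in range(n)]
    for j in range(n):  # solve along x for each fixed y-line
        d = [u[i][j] + r * (val(u, i, j + 1) - 2 * u[i][j] + val(u, i, j - 1))
             for i in range(n)]
        col = thomas(a, b, c, d)
        for i in range(n):
            half[i][j] = col[i]
    new = [[0.0] * n for _ in range(n)]
    for i in range(n):  # solve along y for each fixed x-line
        d = [half[i][j] + r * (val(half, i + 1, j) - 2 * half[i][j] + val(half, i - 1, j))
             for j in range(n)]
        new[i] = thomas(a, b, c, d)
    return new

# Diffuse a unit of heat placed at the centre of an 11x11 grid.
n = 11
u0 = [[0.0] * n for _ in range(n)]
u0[n // 2][n // 2] = 1.0
u1 = adi_step(u0, 0.5)
```

    In a parallel setting the independent line solves within each sweep can be distributed, with a data transposition between the two sweeps.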

  1. Photogenic partial seizures.

    Hennessy, M J; Binnie, C D

    2000-01-01

    To establish the incidence and symptoms of partial seizures in a cohort of patients investigated on account of known sensitivity to intermittent photic stimulation and/or precipitation of seizures by environmental visual stimuli such as television (TV) screens or computer monitors. We report 43 consecutive patients with epilepsy, who had exhibited a significant EEG photoparoxysmal response or who had seizures precipitated by environmental visual stimuli and underwent detailed assessment of their photosensitivity in the EEG laboratory, during which all were questioned concerning their ictal symptoms. All patients were considered on clinical grounds to have an idiopathic epilepsy syndrome. Twenty-eight (65%) patients reported visually precipitated attacks occurring initially with maintained consciousness, in some instances evolving to a period of confusion or to a secondarily generalized seizure. Visual symptoms were most commonly reported and included positive symptoms such as coloured circles or spots, but also blindness and subjective symptoms such as "eyes going funny." Other symptoms described included nonspecific cephalic sensations, deja-vu, auditory hallucinations, nausea, and vomiting. No patient reported any clear spontaneous partial seizures, and there were no grounds for supposing that any had partial epilepsy excepting the ictal phenomenology of some or all of the visually induced attacks. These findings provide clinical support for the physiological studies that indicate that the trigger mechanism for human photosensitivity involves binocularly innervated cells located in the visual cortex. Thus the visual cortex is the seat of the primary epileptogenic process, and the photically triggered discharges and seizures may be regarded as partial with secondary generalization.

  2. Arthroscopic partial medial meniscectomy

    Dašić Žarko

    2011-01-01

    Background/Aim. Meniscal injuries are common in professional and recreational sports as well as in daily activities. If meniscal lesions lead to physical impairment they usually require surgical treatment. Arthroscopic treatment of meniscal injuries is one of the most often performed orthopedic operative procedures. Methods. The study analyzed the results of arthroscopic partial medial meniscectomy in 213 patients over a 24-month period, from 2006 to 2008. Results. In our series of arthroscopically treated medial meniscus tears we noted 78 (36.62%) vertical complete bucket-handle lesions, 19 (8.92%) vertical incomplete lesions, 18 (8.45%) longitudinal tears, 35 (16.43%) oblique tears, 18 (8.45%) complex degenerative lesions, 17 (7.98%) radial lesions and 28 (13.14%) horizontal lesions. The mean preoperative International Knee Documentation Committee (IKDC) score was 49.81%; 1 month after the arthroscopic partial medial meniscectomy the mean IKDC score was 84.08%, and 6 months after, the mean IKDC score was 90.36%. Six months after the procedure 197 (92.49%) of the patients had good or excellent subjective postoperative clinical outcomes, while 14 (6.57%) patients subjectively did not notice a significant improvement after the intervention, and 2 (0.93%) patients had no subjective improvement after the partial medial meniscectomy at all. Conclusion. Arthroscopic partial medial meniscectomy is a minimally invasive diagnostic and therapeutic procedure and, in well-selected cases, is the method of choice for treatment of medial meniscus injuries when repair techniques are not a viable option. It has a low rate of complications, low morbidity and allows fast rehabilitation.

  3. Data acquisition system for a proton imaging apparatus

    Sipala, V; Bruzzi, M; Bucciolini, M; Candiano, G; Capineri, L; Cirrone, G A P; Civinini, C; Cuttone, G; Lo Presti, D; Marrazzo, L; Mazzaglia, E; Menichelli, D; Randazzo, N; Talamonti, C; Tesi, M; Valentini, S

    2009-01-01

    New developments in the proton-therapy field for cancer treatment led Italian physics researchers to build a proton imaging apparatus consisting of a silicon microstrip tracker to reconstruct the proton trajectories and a calorimeter to measure their residual energy. To meet clinical requirements, the detectors and the data acquisition system must be able to sustain a proton rate of about 1 MHz. The tracker read-out, based on an ASIC developed by the collaboration, acquires the detector signals and sends the data in parallel to an FPGA. The YAG:Ce calorimeter also generates the global trigger. The data acquisition system and the results obtained in the calibration phase are presented and discussed.

  4. Hierarchical partial order ranking

    Carlsen, Lars

    2008-01-01

    Assessing the potential impact on environmental and human health from the production and use of chemicals or from polluted sites involves a multi-criteria evaluation scheme. A priori, several parameters must be addressed, e.g., production tonnage, specific release scenarios, and geographical and site-specific factors, in addition to various substance-dependent parameters. Further socio-economic factors may be taken into consideration. The number of parameters to be included may well appear prohibitive for developing a sensible model. The study introduces hierarchical partial order ranking (HPOR), which remedies this problem. In HPOR the original parameters are initially grouped based on their mutual connection, and a set of meta-descriptors is derived representing the rankings corresponding to the single groups of descriptors, respectively. A second partial order ranking is carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritisation of polluted sites is given. - Hierarchical partial order ranking of polluted sites has been developed for prioritization based on a large number of parameters.
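    A toy sketch of the two-level idea follows, with a deliberately simplified scoring rule: dominance counts stand in for the averaged ranks used in the paper, and the site data and descriptor grouping are invented for illustration.

```python
def dominates(a, b):
    """a dominates b if a is at least as high in every criterion and higher in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def rank_scores(rows):
    """Crude rank surrogate: how many other objects each object dominates."""
    return [sum(dominates(r, s) for s in rows) for r in rows]

# Invented data: three polluted sites, descriptors pre-grouped into two groups
# (say, toxicity-related and exposure-related); higher values mean worse.
sites = {
    "A": ((0.9, 0.8), (0.7, 0.9)),
    "B": ((0.4, 0.5), (0.3, 0.2)),
    "C": ((0.6, 0.9), (0.5, 0.4)),
}
names = list(sites)

# Step 1: rank within each descriptor group; the group scores become meta-descriptors.
meta = {name: [] for name in names}
for g in range(2):
    for name, score in zip(names, rank_scores([sites[n][g] for n in names])):
        meta[name].append(score)

# Step 2: a second partial order ranking over the meta-descriptors gives the result.
final = dict(zip(names, rank_scores([meta[n] for n in names])))
```

    Site A scores highest (worst) at both levels here; the hierarchical step matters when groups disagree, since incomparabilities at the descriptor level can resolve at the meta-descriptor level.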

  5. Writing parallel programs that work

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  6. Exploiting Symmetry on Parallel Architectures.

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  7. Parallel algorithms for continuum dynamics

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  8. The JET fast central acquisition and trigger system

    Blackler, K.; Edwards, A.W.

    1994-01-01

    This paper describes a new data acquisition system at JET which uses Texas TMS320C40 parallel digital signal processors and the HELIOS parallel operating system to reduce the large amounts of experimental data produced by fast diagnostics. This unified system features a two level trigger system which performs real-time activity detection together with asynchronous event classification and selection. This provides automated data reduction during an experiment. The system's application to future fusion machines which have almost continuous operation is discussed

  9. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  10. Parallel Implicit Algorithms for CFD

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.

  11. Second derivative parallel block backward differentiation type ...

    Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  12. A Parallel Approach to Fractal Image Compression

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both achieved coding and decoding time and the effectiveness of parallelization.
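    Fractal coding searches, for each range block, the domain block that best reproduces it under a contractive transform; since every range block is coded independently, the search loop parallelizes trivially. The sketch below uses a bare least-squares brightness scaling and invented 1D "blocks" in place of a full affine transform on image tiles.

```python
from concurrent.futures import ThreadPoolExecutor

def best_match(range_block, domains):
    """Search all domain blocks for the one that best reproduces the range block
    under a least-squares brightness scaling s * d."""
    best = None
    for idx, d in enumerate(domains):
        denom = sum(x * x for x in d) or 1.0
        s = sum(x * y for x, y in zip(d, range_block)) / denom
        err = sum((s * x - y) ** 2 for x, y in zip(d, range_block))
        if best is None or err < best[0]:
            best = (err, idx, s)
    return best[1], best[2]  # the "code" for this range block: (domain index, scale)

def encode_parallel(ranges, domains, workers=4):
    """Range blocks are coded independently, so the search maps across workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda r: best_match(r, domains), ranges))

# Toy 1D "blocks": each range block is an exact 2x-brightened copy of a domain block.
domains = [(1.0, 2.0, 3.0, 4.0), (4.0, 3.0, 2.0, 1.0)]
ranges = [(2.0, 4.0, 6.0, 8.0), (8.0, 6.0, 4.0, 2.0)]
codes = encode_parallel(ranges, domains)
```

    For the CPU-bound search of a real coder, process-based workers (or a distributed split of the range-block list) would replace the thread pool used here for brevity.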

  13. Time-resolved 3D pulmonary perfusion MRI: comparison of different k-space acquisition strategies at 1.5 and 3 T.

    Attenberger, Ulrike I; Ingrisch, Michael; Dietrich, Olaf; Herrmann, Karin; Nikolaou, Konstantin; Reiser, Maximilian F; Schönberg, Stefan O; Fink, Christian

    2009-09-01

    Time-resolved pulmonary perfusion MRI requires both high temporal and spatial resolution, which can be achieved by using several nonconventional k-space acquisition techniques. The aim of this study is to compare the image quality of time-resolved 3D pulmonary perfusion MRI with different k-space acquisition techniques in healthy volunteers at 1.5 and 3 T. Ten healthy volunteers underwent contrast-enhanced time-resolved 3D pulmonary MRI at 1.5 and 3 T using the following k-space acquisition techniques: (a) generalized autocalibrating partially parallel acquisition (GRAPPA) with an internal acquisition of reference lines (IRS), (b) GRAPPA with a single "external" acquisition of reference lines (ERS) before the measurement, and (c) a combination of GRAPPA with an internal acquisition of reference lines and view sharing (VS). The spatial resolution was kept constant at both field strengths to exclusively evaluate the influences of the temporal resolution achieved with the different k-space sampling techniques on image quality. The temporal resolutions were 2.11 seconds IRS, 1.31 seconds ERS, and 1.07 seconds VS at 1.5 T and 2.04 seconds IRS, 1.30 seconds ERS, and 1.19 seconds VS at 3 T. Image quality was rated by 2 independent radiologists with regard to signal intensity, perfusion homogeneity, artifacts (eg, wrap around, noise), and visualization of pulmonary vessels using a 3-point scale (1 = nondiagnostic, 2 = moderate, 3 = good). Furthermore, the signal-to-noise ratio in the lungs was assessed. At 1.5 T the lowest image quality (sum score: 154) was observed for the ERS technique and the highest quality for the VS technique (sum score: 201). In contrast, at 3 T images acquired with VS were hampered by strong artifacts and image quality was rated significantly inferior (sum score: 137) compared with IRS (sum score: 180) and ERS (sum score: 174). Comparing 1.5 and 3 T, in particular the overall rating of the IRS technique (sum score: 180) was very similar at both field

  14. Data acquisition for PLT

    Thompson, P.A.

    1975-01-01

    DA/PLT, the data acquisition system for the Princeton Large Torus (PLT) fusion research device, consists of a PDP-10 host computer, five satellite PDP-11s connected to the host by a special high-speed interface, miscellaneous other minicomputers and commercially supplied instruments, and much PPPL produced hardware. The software consists of the standard PDP-10 monitor with local modifications and the special systems and applications programs to customize the DA/PLT for the specific job of supporting data acquisition, analysis, display, and archiving, with concurrent off-line analysis, program development, and, in the background, general batch and timesharing. Some details of the over-all architecture are presented, along with a status report of the different PLT experiments being supported

  15. Knowledge Transfers following Acquisition

    Gammelgaard, Jens

    2001-01-01

    Prior relations between the acquiring firm and the target company pave the way for knowledge transfers subsequent to the acquisitions. One major reason is that through the market-based relations the two actors build up mutual trust and simultaneously they learn how to communicate. An empirical study of 54 Danish acquisitions taking place abroad from 1994 to 1998 demonstrated that when there was a high level of trust between the acquiring firm and the target firm before the take-over, then medium and strong tie-binding knowledge transfer mechanisms, such as project groups and job rotation, were used more intensively. Further, the degree of stickiness was significantly lower in the case of prior trust-based relations.

  16. Amplitudes, acquisition and imaging

    Bloor, Robert

    1998-12-31

    Accurate seismic amplitude information is important for the successful evaluation of many prospects and the importance of such amplitude information is increasing with the advent of time-lapse seismic techniques. It is now widely accepted that the proper treatment of amplitudes requires seismic imaging in the form of either time or depth migration. A key factor in seismic imaging is the spatial sampling of the data and its relationship to the imaging algorithms. This presentation demonstrates that acquisition-caused spatial sampling irregularity can affect the seismic imaging and perturb amplitudes. Equalization helps to balance the amplitudes, and the dealiasing strategy improves the imaging further when there are azimuth variations. Equalization and dealiasing can also help with the acquisition irregularities caused by shot and receiver dislocation or missing traces. 2 refs., 2 figs.

  17. Data acquisition instruments: Psychopharmacology

    Hartley, D.S. III

    1998-01-01

    This report contains the results of a Direct Assistance Project performed by Lockheed Martin Energy Systems, Inc., for Dr. K. O. Jobson. The purpose of the project was to perform preliminary analysis of the data acquisition instruments used in the field of psychiatry, with the goal of identifying commonalities of data and strategies for handling and using the data in the most advantageous fashion. Data acquisition instruments from 12 sources were provided by Dr. Jobson. Several commonalities were identified and a potentially useful data strategy is reported here. Analysis of the information collected for utility in performing diagnoses is recommended. In addition, further work is recommended to refine the commonalities into a directly useful computer systems structure.

  18. First Language Acquisition and Teaching

    Cruz-Ferreira, Madalena

    2011-01-01

    "First language acquisition" commonly means the acquisition of a single language in childhood, regardless of the number of languages in a child's natural environment. Language acquisition is variously viewed as predetermined, wondrous, a source of concern, and as developing through formal processes. "First language teaching" concerns schooling in…

  19. Multiprocessor data acquisition system

    Haumann, J.R.; Crawford, R.K.

    1987-01-01

    A multiprocessor data acquisition system has been built to replace the single processor systems at the Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory. The multiprocessor system was needed to accommodate the higher data rates at IPNS brought about by improvements in the source and changes in instrument configurations. This paper describes the hardware configuration of the system and the method of task sharing and compares results to the single processor system

  20. Implementing acquisition strategies

    Montgomery, G. K.

    1997-01-01

    The objective of this paper is to address some of the strategies necessary to effect a successful asset or corporate acquisition. Understanding the corporate objective, the full potential of the asset, the specific strategies to be employed, the value of time, and most importantly the interaction of all these is crucial, for missed steps are likely to result in missed opportunities. The amount of factual information that can be obtained and utilized in a timely fashion is the largest single hurdle to the capture of value in the asset or corporate acquisition. Fact, familiarity and experience are key in this context. The importance of the due diligence process prior to title or data transfer cannot be overemphasized. Some of the most important assets acquired in a merger may be the people. To maximize effectiveness, it is essential to merge both existing staff and those that came with the new acquisition as soon as possible. By thinking together as a unit, knowledge and experience can be applied to realize the potential of the asset. Hence team building is one of the challenges; doing it quickly is usually the most effective. Developing new directions for the new enlarged company by combining the strengths of the old and the new creates more value, as well as a more efficient operation. Equally important to maximizing the potential of the new acquisition is the maintenance of the momentum generated by the need to grow that gave the impetus to acquiring new assets in the first place. In brief, the right mix of vision, facts and perceptions, quick enactment of the post-close strategies and keeping the momentum alive, are the principal ingredients of a focused strategy.

  1. Internationalize Mergers and Acquisitions

    Zhou, Lili

    2017-01-01

    As globalization proceeds, an increasing number of companies use mergers and acquisitions as a tool to achieve company growth in the international business world. The purpose of this thesis is to investigate the process of an international M&A and analyze the factors leading to success. The research started with reviewing different academic theory. The important aspects in both pre-M&A phase and post-M&A phase have been studied in depth. Because of the complexity in international...

  2. Renal magnetic resonance angiography at 3.0 Tesla using a 32-element phased-array coil system and parallel imaging in 2 directions.

    Fenchel, Michael; Nael, Kambiz; Deshpande, Vibhas S; Finn, J Paul; Kramer, Ulrich; Miller, Stephan; Ruehm, Stefan; Laub, Gerhard

    2006-09-01

    The aim of the present study was to assess the feasibility of renal magnetic resonance angiography at 3.0 T using a phased-array coil system with 32-coil elements. Specifically, high parallel imaging factors were used for an increased spatial resolution and anatomic coverage of the whole abdomen. Signal-to-noise values and the g-factor distribution of the 32-element coil were examined in phantom studies for the magnetic resonance angiography (MRA) sequence. Eleven volunteers (6 men, median age of 30.0 years) were examined on a 3.0-T MR scanner (Magnetom Trio, Siemens Medical Solutions, Malvern, PA) using a 32-element phased-array coil (prototype from In vivo Corp.). Contrast-enhanced 3D-MRA (TR 2.95 milliseconds, TE 1.12 milliseconds, flip angle 25-30 degrees, bandwidth 650 Hz/pixel) was acquired with integrated generalized autocalibrating partially parallel acquisition (GRAPPA) in both the phase- and slice-encoding directions. Images were assessed by 2 independent observers with regard to image quality, noise and presence of artifacts. Signal-to-noise levels of 22.2 +/- 22.0 and 57.9 +/- 49.0 were measured with (GRAPPAx6) and without parallel imaging, respectively. The mean g-factor of the 32-element coil for GRAPPA with an acceleration of 3 and 2 in the phase-encoding and slice-encoding direction, respectively, was 1.61. High image quality was found in 9 of 11 volunteers (2.6 +/- 0.8) with good overall interobserver agreement (k = 0.87). Relatively low image quality with higher noise levels was encountered in 2 volunteers. MRA at 3.0 T using a 32-element phased-array coil is feasible in healthy volunteers. High diagnostic image quality and extended anatomic coverage could be achieved with application of high parallel imaging factors.
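    The SNR cost of the acceleration reported above follows the textbook parallel-imaging relation SNR_acc = SNR_full / (g * sqrt(R)). A minimal sketch using the abstract's mean g-factor (1.61) and total acceleration R = 3 x 2 = 6; the relation is standard, and the measured accelerated SNR of 22.2 +/- 22.0 need not match this idealized estimate:

```python
import math

def accelerated_snr(snr_full, g_factor, r):
    """Ideal parallel-imaging SNR: the full-data SNR divided by g * sqrt(R)."""
    return snr_full / (g_factor * math.sqrt(r))

# Values from the abstract: unaccelerated SNR ~57.9, mean g-factor 1.61,
# total acceleration R = 6 (factor 3 in phase-, 2 in slice-encoding).
print(round(accelerated_snr(57.9, 1.61, 6), 1))  # -> 14.7
```

    The ideal estimate (about 14.7) sits below the measured 22.2, which is unsurprising given the large standard deviations quoted in the study.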

  3. Data Acquisition System

    Watwood, D.; Beatty, J.

    1991-01-01

    The Data Acquisition System (DAS) is comprised of a Hewlett-Packard (HP) model 9816, Series 200 Computer System with the appropriate software to acquire, control, and archive data from a Data Acquisition/Control Unit, models HP3497A and HP3498A. The primary storage medium is an HP9153 16-megabyte hard disc. The data is backed up on three floppy discs. One floppy disc drive is contained in the HP9153 chassis; the other two comprise an HP9122 dual disc drive. An HP82906A line printer supplies hard copy backup. A block diagram of the hardware setup is shown. The HP3497A/3498A Data Acquisition/Control Units read each input channel and transmit the raw voltage reading to the HP9816 CPU via the HPIB bus. The HP9816 converts this voltage to the appropriate engineering units using the calibration curves for the sensor being read. The HP9816 archives both the raw and processed data, along with the time the readings were taken, to hard and floppy discs. The processed values and reading time are printed on the line printer. This system is designed to accommodate several types of sensors; each type is discussed in the following sections.
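    The voltage-to-engineering-units step described above amounts to evaluating a stored per-sensor calibration curve. A minimal sketch assuming a polynomial curve; the HP system's actual curves are not given here, and the thermocouple-style numbers below are hypothetical:

```python
def to_engineering_units(raw_volts, coefficients):
    """Evaluate a polynomial calibration curve by Horner's method.

    `coefficients` are ordered highest power first, e.g. [a1, a0]
    for a1*v + a0. The specific curve is hypothetical; the DAS stored
    one calibration curve per sensor.
    """
    value = 0.0
    for c in coefficients:
        value = value * raw_volts + c
    return value

# Hypothetical linear curve: 25.0 degC per volt with a 2.0 degC offset.
print(to_engineering_units(4.0, [25.0, 2.0]))  # -> 102.0
```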

  4. Complexity in language acquisition.

    Clark, Alexander; Lappin, Shalom

    2013-01-01

    Learning theory has frequently been applied to language acquisition, but discussion has largely focused on information-theoretic problems, in particular the absence of direct negative evidence. Such arguments typically neglect the probabilistic nature of cognition and learning in general. We argue first that these arguments, and analyses based on them, suffer from a major flaw: they systematically conflate the hypothesis class and the learnable concept class. As a result, they do not allow one to draw significant conclusions about the learner. Second, we claim that the real problem for language learning is the computational complexity of constructing a hypothesis from input data. Studying this problem allows for a more direct approach to the object of study, the language acquisition device, rather than the learnable class of languages, which is epiphenomenal and possibly hard to characterize. The learnability results informed by complexity studies are much more insightful. They strongly suggest that target grammars need to be objective, in the sense that the primitive elements of these grammars are based on objectively definable properties of the language itself. These considerations support the view that language acquisition proceeds primarily through data-driven learning of some form. Copyright © 2013 Cognitive Science Society, Inc.

  5. MDSplus data acquisition system

    Stillerman, J.A.; Fredian, T.W.; Klare, K.; Manduchi, G.

    1997-01-01

    MDSplus, a tree based, distributed data acquisition system, was developed in collaboration with the ZTH Group at Los Alamos National Lab and the RFX Group at CNR in Padua, Italy. It is currently in use at MIT, RFX in Padua, TCV at EPFL in Lausanne, and KBSI in South Korea. MDSplus is made up of a set of X/motif based tools for data acquisition and display, as well as diagnostic configuration and management. It is based on a hierarchical experiment description which completely describes the data acquisition and analysis tasks and contains the results from these operations. These tools were designed to operate in a distributed, client/server environment with multiple concurrent readers and writers to the data store. While usually used over a Local Area Network, these tools can be used over the Internet to provide access for remote diagnosticians and even machine operators. An interface to a relational database is provided for storage and management of processed data. IDL is used as the primary data analysis and visualization tool. IDL is a registered trademark of Research Systems Inc. copyright 1996 American Institute of Physics

  6. Partially ordered algebraic systems

    Fuchs, Laszlo

    2011-01-01

    Originally published in an important series of books on pure and applied mathematics, this monograph by a distinguished mathematician explores a high-level area in algebra. It constitutes the first systematic summary of research concerning partially ordered groups, semigroups, rings, and fields. The self-contained treatment features numerous problems, complete proofs, a detailed bibliography, and indexes. It presumes some knowledge of abstract algebra, providing necessary background and references where appropriate. This inexpensive edition of a hard-to-find systematic survey will fill a gap i

  7. Infinite partial summations

    Sprung, D.W.L.

    1975-01-01

    This paper is a brief review of those aspects of the effective interaction problem that can be grouped under the heading of infinite partial summations of the perturbation series. After a brief mention of the classic examples of infinite summations, the author turns to the effective interaction problem for two extra core particles. Their direct interaction is summed to produce the G matrix, while their indirect interaction through the core is summed in a variety of ways under the heading of core polarization. (orig./WL) [de

  8. On universal partial words

    Chen, Herman Z. Q.; Kitaev, Sergey; Mütze, Torsten; Sun, Brian Y.

    2016-01-01

    A universal word for a finite alphabet $A$ and some integer $n\\geq 1$ is a word over $A$ such that every word in $A^n$ appears exactly once as a subword (cyclically or linearly). It is well-known and easy to prove that universal words exist for any $A$ and $n$. In this work we initiate the systematic study of universal partial words. These are words that in addition to the letters from $A$ may contain an arbitrary number of occurrences of a special `joker' symbol $\\Diamond\
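    The defining property of a universal (partial) word can be checked mechanically. A small sketch, using `*` in place of the paper's diamond joker symbol and linear (non-cyclic) reading:

```python
from itertools import product

JOKER = "*"  # stands in for the paper's diamond symbol

def covered_words(w, alphabet, n):
    """Count how often each length-n word over `alphabet` is matched by a
    length-n factor of w, where the joker matches any letter."""
    counts = {"".join(p): 0 for p in product(alphabet, repeat=n)}
    for i in range(len(w) - n + 1):
        factor = w[i:i + n]
        for target in counts:
            if all(a == JOKER or a == b for a, b in zip(factor, target)):
                counts[target] += 1
    return counts

def is_universal_partial_word(w, alphabet, n):
    """True iff every length-n word is covered exactly once."""
    return all(c == 1 for c in covered_words(w, alphabet, n).values())

print(is_universal_partial_word("*011", "01", 2))  # -> True
```

    For the binary alphabet and n = 2, `00110` is an ordinary universal word, while `*011` is a universal partial word: its single joker factor `*0` covers both `00` and `10` at once.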

  9. Partial differential equations

    Agranovich, M S

    2002-01-01

    Mark Vishik's Partial Differential Equations seminar held at Moscow State University was one of the world's leading seminars in PDEs for over 40 years. This book celebrates Vishik's eightieth birthday. It comprises new results and survey papers written by many renowned specialists who actively participated over the years in Vishik's seminars. Contributions include original developments and methods in PDEs and related fields, such as mathematical physics, tomography, and symplectic geometry. Papers discuss linear and nonlinear equations, particularly linear elliptic problems in angles and gener

  10. Partial differential equations

    Levine, Harold

    1997-01-01

    The subject matter, partial differential equations (PDEs), has a long history (dating from the 18th century) and an active contemporary phase. An early phase (with a separate focus on taut string vibrations and heat flow through solid bodies) stimulated developments of great importance for mathematical analysis, such as a wider concept of functions and integration and the existence of trigonometric or Fourier series representations. The direct relevance of PDEs to all manner of mathematical, physical and technical problems continues. This book presents a reasonably broad introductory account of the subject, with due regard for analytical detail, applications and historical matters.

  11. Partial differential equations

    Sloan, D; Süli, E

    2001-01-01

    Over the second half of the 20th century the subject area loosely referred to as numerical analysis of partial differential equations (PDEs) has undergone unprecedented development. At its practical end, the vigorous growth and steady diversification of the field were stimulated by the demand for accurate and reliable tools for computational modelling in physical sciences and engineering, and by the rapid development of computer hardware and architecture. At the more theoretical end, the analytical insight in

  12. Elliptic partial differential equations

    Han, Qing

    2011-01-01

    Elliptic Partial Differential Equations by Qing Han and FangHua Lin is one of the best textbooks I know. It is the perfect introduction to PDE. In 150 pages or so it covers an amazing amount of wonderful and extraordinary useful material. I have used it as a textbook at both graduate and undergraduate levels which is possible since it only requires very little background material yet it covers an enormous amount of material. In my opinion it is a must read for all interested in analysis and geometry, and for all of my own PhD students it is indeed just that. I cannot say enough good things abo

  13. Generalized Partial Volume

    Darkner, Sune; Sporring, Jon

    2011-01-01

    Mutual Information (MI) and normalized mutual information (NMI) are popular choices as similarity measure for multimodal image registration. Presently, one of two approaches is often used for estimating these measures: The Parzen Window (PW) and the Generalized Partial Volume (GPV). Their theoret...... of view as well as w.r.t. computational complexity. Finally, we present algorithms for both approaches for NMI which is comparable in speed to Sum of Squared Differences (SSD), and we illustrate the differences between PW and GPV on a number of registration examples....

  14. Frames of reference in spatial language acquisition.

    Shusterman, Anna; Li, Peggy

    2016-08-01

    Languages differ in how they encode spatial frames of reference. It is unknown how children acquire the particular frame-of-reference terms in their language (e.g., left/right, north/south). The present paper uses a word-learning paradigm to investigate 4-year-old English-speaking children's acquisition of such terms. In Part I, with five experiments, we contrasted children's acquisition of novel word pairs meaning left-right and north-south to examine their initial hypotheses and the relative ease of learning the meanings of these terms. Children interpreted ambiguous spatial terms as having environment-based meanings akin to north and south, and they readily learned and generalized north-south meanings. These studies provide the first direct evidence that children invoke geocentric representations in spatial language acquisition. However, the studies leave unanswered how children ultimately acquire "left" and "right." In Part II, with three more experiments, we investigated why children struggle to master body-based frame-of-reference words. Children successfully learned "left" and "right" when the novel words were systematically introduced on their own bodies and extended these words to novel (intrinsic and relative) uses; however, they had difficulty learning to talk about the left and right sides of a doll. This difficulty was paralleled in identifying the left and right sides of the doll in a non-linguistic memory task. In contrast, children had no difficulties learning to label the front and back sides of a doll. These studies begin to paint a detailed account of the acquisition of spatial terms in English, and provide insights into the origins of diverse spatial reference frames in the world's languages. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Parallel fabrication of macroporous scaffolds.

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  16. Parallel plasma fluid turbulence calculations

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  17. Evaluating parallel optimization on transputers

    A.G. Chalmers

    2003-12-01

    The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
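    A first-order way to reason about when parallelising only some components of an algorithm pays off is Amdahl's law; this is a standard bound, not a result of the paper, and the numbers below are hypothetical:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: the best-case speedup when a fraction p of the work
    (here, the parallelisable components) runs on n processors while the
    rest remains sequential."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)

# Hypothetical numbers: if 80% of the work parallelises across 4 transputers,
# the ceiling is a 2.5x speedup, however inexpensive the extra processors are.
print(round(amdahl_speedup(0.8, 4), 2))  # -> 2.5
```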

  18. Parallel imaging enhanced MR colonography using a phantom model.

    Morrin, Martina M

    2008-09-01

    To compare various Array Spatial and Sensitivity Encoding Technique (ASSET)-enhanced T2W SSFSE (single shot fast spin echo) and T1-weighted (T1W) 3D SPGR (spoiled gradient recalled echo) sequences for polyp detection and image quality at MR colonography (MRC) in a phantom model. Limitations of MRC using standard 3D SPGR T1W imaging include the long breath-hold required to cover the entire colon within one acquisition and the relatively low spatial resolution due to the long acquisition time. Parallel imaging using ASSET-enhanced T2W SSFSE and 3D T1W SPGR imaging results in much shorter imaging times, which allows for increased spatial resolution.

  19. Non-Cartesian Parallel Imaging Reconstruction of Undersampled IDEAL Spiral 13C CSI Data

    Hansen, Rie Beck; Hanson, Lars G.; Ardenkjær-Larsen, Jan Henrik

    scan times based on spatial information inherent to each coil element. In this work, we explored the combination of non-Cartesian parallel imaging reconstruction and spatially undersampled IDEAL spiral CSI acquisition for efficient encoding of multiple chemical shifts within a large FOV with high...

  20. Parallel computing for homogeneous diffusion and transport equations in neutronics

    Pinchedez, K.

    1999-06-01

    Parallel computing meets the ever-increasing requirements for neutronic computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with PN simplified transport equations by a mixed finite element method. Several parallel algorithms have been developed on distributed memory machines. The performances of the parallel algorithms have been studied experimentally by implementation on a T3D Cray and theoretically by complexity models. A comparison of various parallel algorithms has confirmed the chosen implementations. We next applied a domain sub-division technique to the two-group diffusion eigenvalue problem. In the modal synthesis-based method, the global spectrum is determined from the partial spectra associated with sub-domains. Then the eigenvalue problem is expanded on a family composed, on the one hand, of eigenfunctions associated with the sub-domains and, on the other hand, of functions corresponding to the contribution from the interface between the sub-domains. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)
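    The dominant-eigenvalue computation at the heart of such codes can be illustrated with the classic power iteration. This pure-Python sketch is only a toy; CRONOS's mixed finite element solver and the parallel algorithms studied in the thesis are far more elaborate:

```python
def power_iteration(matrix, iterations=200):
    """Estimate the dominant eigenvalue of a small dense matrix by
    repeated multiplication and max-norm normalisation."""
    n = len(matrix)
    v = [1.0] * n
    eigenvalue = 0.0
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        eigenvalue = max(abs(x) for x in w)
        v = [x / eigenvalue for x in w]
    return eigenvalue

# 2x2 symmetric example with eigenvalues 3 and 1:
print(round(power_iteration([[2.0, 1.0], [1.0, 2.0]]), 6))  # -> 3.0
```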

  1. D0 experiment: its trigger, data acquisition, and computers

    Cutts, D.; Zeller, R.; Schamberger, D.; Van Berg, R.

    1984-05-01

    The new collider facility to be built at Fermilab's Tevatron-I D0 region is described. The data acquisition requirements are discussed, as well as the hardware and software triggers designed to meet these needs. An array of MicroVAX computers running VAXELN will filter in parallel (a complete event in each microcomputer) and transmit accepted events via Ethernet to a host. This system, together with its subsequent offline needs, is briefly presented

  2. Unilateral removable partial dentures.

    Goodall, W A; Greer, A C; Martin, N

    2017-01-27

    Removable partial dentures (RPDs) are widely used to replace missing teeth in order to restore both function and aesthetics for the partially dentate patient. Conventional RPD design is frequently bilateral and consists of a major connector that bridges both sides of the arch. Some patients cannot and will not tolerate such an extensive appliance. For these patients, bridgework may not be a predictable option and it is not always possible to provide implant-retained restorations. This article presents unilateral RPDs as a potential treatment modality for such patients and explores indications and contraindications for their use, including factors relating to patient history, clinical presentation and patient wishes. Through case examples, design, material and fabrication considerations will be discussed. While their use is not widespread, there are a number of patients who benefit from the provision of unilateral RPDs. They are a useful treatment to have in the clinician's armamentarium, but a highly-skilled dental team and a specific patient presentation is required in order for them to be a reasonable and predictable prosthetic option.

  3. Parallel artificial liquid membrane extraction

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  4. Expansion of the data acquisition system for the 20 MV tandem accelerator

    Tomita, Yoshiaki

    1981-02-01

    This report describes an expansion of the program of the data acquisition system for the 20 MV tandem accelerator. By the present expansion it became possible to change the acquisition mode or to use non-standard CAMAC modules with partial modification of the program according to well defined prescriptions. The modification can be made by writing microprograms for the MBD or appending subroutines for the reduced spectra in the LIST mode data acquisition. The new program can handle up to 32 ADC's in the standard LIST mode data acquisition. The present expansion aimed to increase the flexibility in data acquisition. It can also be applied to control experimental devices. (author)

  5. Parallel algorithms for mapping pipelined and parallel computations

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
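    A baseline for the mapping problem discussed above: a simple dynamic program assigns m ordered modules to n processors in contiguous blocks, minimising the heaviest block (the pipeline bottleneck). This O(nm^2) sketch is a naive baseline under assumed uniform module loads, not the paper's improved O(nm log m) algorithm:

```python
def min_bottleneck_partition(loads, n_processors):
    """Split an ordered list of module loads into n contiguous groups,
    minimising the load of the heaviest group."""
    m = len(loads)
    prefix = [0]
    for x in loads:
        prefix.append(prefix[-1] + x)
    INF = float("inf")
    # best[k][i]: minimal bottleneck assigning the first i modules to k processors
    best = [[INF] * (m + 1) for _ in range(n_processors + 1)]
    best[0][0] = 0
    for k in range(1, n_processors + 1):
        for i in range(1, m + 1):
            for j in range(k - 1, i):
                cost = max(best[k - 1][j], prefix[i] - prefix[j])
                best[k][i] = min(best[k][i], cost)
    return best[n_processors][m]

# Hypothetical loads for 5 modules on 2 processors: best split is [2,3,1 | 4,2].
print(min_bottleneck_partition([2, 3, 1, 4, 2], 2))  # -> 6
```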

  6. Cellular automata a parallel model

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.
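    The parallel model itself is easy to state: every cell updates simultaneously from its local neighborhood. A minimal sketch of one step of an elementary (binary, radius-1) cellular automaton with periodic boundaries, using the standard Wolfram rule numbering (an illustration, not an algorithm from the volume):

```python
def ca_step(cells, rule):
    """One synchronous update of an elementary cellular automaton.
    `rule` is the Wolfram rule number: bit v of `rule` gives the new
    state for a neighborhood whose (left, center, right) bits encode v."""
    n = len(cells)
    table = [(rule >> v) & 1 for v in range(8)]
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

# Rule 110, a standard example of an elementary rule with rich dynamics:
print(ca_step([0, 0, 0, 1, 0, 0, 0], 110))  # -> [0, 0, 1, 1, 0, 0, 0]
```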

  7. Tutorial on Online Partial Evaluation

    William R. Cook

    2011-09-01

    This paper is a short tutorial introduction to online partial evaluation. We show how to write a simple online partial evaluator for a simple, pure, first-order, functional programming language. In particular, we show that the partial evaluator can be derived as a variation on a compositionally defined interpreter. We demonstrate the use of the resulting partial evaluator for program optimization in the context of model-driven development.
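
The core idea can be sketched for a tiny first-order expression language (this mini language and representation are illustrative assumptions, not the paper's). The evaluator decides *online*, per operation, whether both operands are static (and folds them to a literal) or not (and emits residual code):

```python
# Online partial evaluator for expressions:
#   ('lit', n) | ('var', name) | ('add', e1, e2) | ('mul', e1, e2)
# env maps the statically-known variables to values.
def peval(expr, env):
    tag = expr[0]
    if tag == 'lit':
        return expr
    if tag == 'var':
        # static variable: fold to its value; dynamic: residualize as-is
        return ('lit', env[expr[1]]) if expr[1] in env else expr
    op, e1, e2 = expr
    r1, r2 = peval(e1, env), peval(e2, env)
    if r1[0] == 'lit' and r2[0] == 'lit':   # both operands known: compute now
        fn = {'add': lambda a, b: a + b, 'mul': lambda a, b: a * b}[op]
        return ('lit', fn(r1[1], r2[1]))
    return (op, r1, r2)                     # otherwise emit residual code
```

Specializing `2*3 + x` with `x` unknown yields the residual `('add', ('lit', 6), ('var', 'x'))`; with `x = 4` known it folds completely to `('lit', 10)`.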

  8. Parallel algorithms for online trackfinding at PANDA

    Bianchi, Ludovico; Ritman, James; Stockmanns, Tobias [IKP, Forschungszentrum Juelich GmbH (Germany); Herten, Andreas [JSC, Forschungszentrum Juelich GmbH (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

    The PANDA experiment, one of the four scientific pillars of the FAIR facility currently under construction in Darmstadt, is a next-generation particle detector that will study collisions of antiprotons with beam momenta of 1.5-15 GeV/c on a fixed proton target. Because of the broad physics scope and the similar signature of signal and background events, PANDA's strategy for data acquisition is to continuously record data from the whole detector and use this global information to perform online event reconstruction and filtering. A real-time rejection factor of up to 1000 must be achieved to match the incoming data rate for offline storage, making all components of the data processing system computationally very challenging. Online particle track identification and reconstruction is an essential step, since track information is used as input in all following phases. Online tracking algorithms must ensure a delicate balance between high tracking efficiency and quality, and minimal computational footprint. For this reason, a massively parallel solution exploiting multiple Graphics Processing Units (GPUs) is under investigation. The talk presents the core concepts of the algorithms being developed for primary trackfinding, along with details of their implementation on GPUs.

  9. Type-Directed Partial Evaluation

    Danvy, Olivier

    1998-01-01

    Type-directed partial evaluation uses a normalization function to achieve partial evaluation. These lecture notes review its background, foundations, practice, and applications. Of specific interest is the modular technique of offline and online type-directed partial evaluation in Standard ML...

  11. Data acquisition and real-time bolometer tomography using LabVIEW RT

    Giannone, L.; Eich, T.; Fuchs, J.C.; Ravindran, M.; Ruan, Q.; Wenzel, L.; Cerna, M.; Concezzi, S.

    2011-01-01

    The currently available multi-core PCI Express systems running LabVIEW RT (real-time), equipped with FPGA cards for data acquisition and real-time parallel signal processing, greatly shorten the design and implementation cycles of large-scale, real-time data acquisition and control systems. This paper details a data acquisition and real-time tomography system using LabVIEW RT for the bolometer diagnostic on the ASDEX Upgrade tokamak (Max Planck Institute for Plasma Physics, Garching, Germany). The transformation matrix for tomography is pre-computed based on the geometry of distributed radiation sources and sensors. A parallelized iterative algorithm is adapted to solve a constrained linear system for the reconstruction of the radiated power density. Real-time bolometer tomography is performed with LabVIEW RT. Using multi-core machines to execute the parallelized algorithm, a cycle time well below 1 ms is reached.
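
The kind of constrained iterative reconstruction described above can be sketched as a projected Landweber iteration solving T p = b for a non-negative radiated power density p, given a precomputed geometry matrix T (line-of-sight weights) and bolometer measurements b. The step size and iteration count here are illustrative assumptions, not the ASDEX Upgrade implementation.

```python
# Projected Landweber iteration: minimize ||T p - b||^2 subject to p >= 0.
def reconstruct(T, b, iters=200):
    nrows, ncols = len(T), len(T[0])
    p = [0.0] * ncols
    # any step below 2/||T||^2 converges; use the crude Frobenius bound
    norm2 = sum(T[i][j] ** 2 for i in range(nrows) for j in range(ncols))
    step = 1.0 / norm2
    for _ in range(iters):
        residual = [sum(T[i][j] * p[j] for j in range(ncols)) - b[i]
                    for i in range(nrows)]
        for j in range(ncols):
            grad = sum(T[i][j] * residual[i] for i in range(nrows))
            p[j] = max(0.0, p[j] - step * grad)   # project onto p >= 0
    return p
```

Both the residual and the gradient computations are row- and column-parallel, which is what makes a multi-core real-time cycle below 1 ms plausible.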

  12. A fast data acquisition system for PHA and MCS measurements

    Eijk, P.J.A. van; Keyser, C.J.; Rigterink, B.J.; Hasper, H.

    1985-01-01

    A microprocessor controlled data acquisition system for pulse height analysis and multichannel scaling is described. A 4K x 24 bit static memory is used to obtain a fast data acquisition rate. The system can store 12 bit ADC or TDC data within 150 ns. Operating commands can be entered via a small keyboard or via an RS-232-C interface. An oscilloscope is used to display a spectrum. The display of a spectrum or the transmission of spectrum data to an external computer causes only a short interruption of a measurement in progress and is accomplished by using a DMA circuit. The program is written in Modular Pascal and is divided into 15 modules. These implement 9 parallel processes which are synchronized by using semaphores. Hardware interrupts from the data acquisition, DMA, keyboard and RS-232-C circuits are used to signal these processes. (orig.)
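
The semaphore discipline described above can be illustrated with a small sketch (Python threads standing in for the original Modular Pascal processes, purely as an assumption for illustration): an "acquisition" process signals a counting semaphore for each stored event, and a consumer process waits on it, so the consumer never reads an event that has not been written yet.

```python
import threading

events = []
ready = threading.Semaphore(0)   # counts events stored but not yet consumed

def acquire(n):
    for i in range(n):
        events.append(i)         # simulate storing one ADC/TDC word
        ready.release()          # signal: one more event is available

def consume(n, out):
    for _ in range(n):
        ready.acquire()          # block until an event is available
        out.append(events[len(out)])

results = []
t1 = threading.Thread(target=acquire, args=(5,))
t2 = threading.Thread(target=consume, args=(5, results))
t1.start(); t2.start(); t1.join(); t2.join()
```

In the real system the `release` side corresponds to a hardware interrupt handler rather than a thread.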

  13. Applied partial differential equations

    Logan, J David

    2004-01-01

    This primer on elementary partial differential equations presents the standard material usually covered in a one-semester, undergraduate course on boundary value problems and PDEs. What makes this book unique is that it is a brief treatment, yet it covers all the major ideas: the wave equation, the diffusion equation, the Laplace equation, and the advection equation on bounded and unbounded domains. Methods include eigenfunction expansions, integral transforms, and characteristics. Mathematical ideas are motivated from physical problems, and the exposition is presented in a concise style accessible to science and engineering students; emphasis is on motivation, concepts, methods, and interpretation, rather than formal theory. This second edition contains new and additional exercises, and it includes a new chapter on the applications of PDEs to biology: age structured models, pattern formation; epidemic wave fronts, and advection-diffusion processes. The student who reads through this book and solves many of t...
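
As a one-formula illustration of the eigenfunction-expansion method the book covers, the diffusion equation u_t = k u_xx on a bounded domain 0 < x < L with zero boundary conditions has the standard textbook solution:

```latex
u(x,t) = \sum_{n=1}^{\infty} b_n \, e^{-k (n\pi/L)^2 t} \sin\frac{n\pi x}{L},
\qquad
b_n = \frac{2}{L} \int_0^L u(x,0)\, \sin\frac{n\pi x}{L}\, dx .
```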

  14. Inductance loop and partial

    Paul, Clayton R

    2010-01-01

    "Inductance is an unprecedented text, thoroughly discussing "loop" inductance as well as the increasingly important "partial" inductance. These concepts and their proper calculation are crucial in designing modern high-speed digital systems. World-renowned leader in electromagnetics Clayton Paul provides the knowledge and tools necessary to understand and calculate inductance." "With the present and increasing emphasis on high-speed digital systems and high-frequency analog systems, it is imperative that system designers develop an intimate understanding of the concepts and methods in this book. Inductance is a much-needed textbook designed for senior and graduate-level engineering students, as well as a hands-on guide for working engineers and professionals engaged in the design of high-speed digital and high-frequency analog systems."--Jacket.

  15. Fundamental partial compositeness

    Sannino, Francesco

    2016-11-07

    We construct renormalizable Standard Model extensions, valid up to the Planck scale, that give a composite Higgs from a new fundamental strong force acting on fermions and scalars. Yukawa interactions of these particles with Standard Model fermions realize the partial compositeness scenario. Successful models exist because gauge quantum numbers of Standard Model fermions admit a minimal enough 'square root'. Furthermore, right-handed SM fermions have an SU(2)$_R$-like structure, yielding a custodially-protected composite Higgs. Baryon and lepton numbers arise accidentally. Standard Model fermions acquire mass at tree level, while the Higgs potential and flavor violations are generated by quantum corrections. We further discuss accidental symmetries and other dynamical features stemming from the new strongly interacting scalars. If the same phenomenology can be obtained from models without our elementary scalars, they would reappear as composite states.

  16. Fundamental partial compositeness

    Sannino, Francesco; Strumia, Alessandro; Tesi, Andrea; Vigiani, Elena

    2016-01-01

    We construct renormalizable Standard Model extensions, valid up to the Planck scale, that give a composite Higgs from a new fundamental strong force acting on fermions and scalars. Yukawa interactions of these particles with Standard Model fermions realize the partial compositeness scenario. Under certain assumptions on the dynamics of the scalars, successful models exist because gauge quantum numbers of Standard Model fermions admit a minimal enough ‘square root’. Furthermore, right-handed SM fermions have an SU(2)_R-like structure, yielding a custodially-protected composite Higgs. Baryon and lepton numbers arise accidentally. Standard Model fermions acquire mass at tree level, while the Higgs potential and flavor violations are generated by quantum corrections. We further discuss accidental symmetries and other dynamical features stemming from the new strongly interacting scalars. If the same phenomenology can be obtained from models without our elementary scalars, they would reappear as composite states.

  17. The NUSTAR data acquisition

    Loeher, B.; Toernqvist, H.T. [TU Darmstadt (Germany); GSI (Germany); Agramunt, J. [IFIC, CSIC (Spain); Bendel, M.; Gernhaeuser, R.; Le Bleis, T.; Winkel, M. [TU Muenchen (Germany); Charpy, A.; Heinz, A.; Johansson, H.T. [Chalmers University of Technology (Sweden); Coleman-Smith, P.; Lazarus, I.H.; Pucknell, V.F.E. [STFC Daresbury (United Kingdom); Czermak, A. [IFJ (Poland); Kurz, N.; Nociforo, C.; Pietri, S.; Schaffner, H.; Simon, H. [GSI (Germany); Scheit, H. [TU Darmstadt (Germany); Taieb, J. [CEA (France)

    2015-07-01

    The NUSTAR (NUclear STructure, Astrophysics and Reactions) collaboration represents one of the four pillars motivating the construction of the international FAIR facility. The diversity of upcoming NUSTAR experiments, including experiments in storage rings, reactions at relativistic energies, and high-precision spectroscopy, is reflected in the diversity of the required detection systems. A challenging task is to incorporate the different needs of individual detectors and components under the umbrella of the unified NUSTAR Data AcQuisition (NDAQ) infrastructure. NDAQ takes up this challenge by providing a high degree of availability via continuously running systems, high flexibility via experiment-specific configuration files for data streams and trigger logic, and distributed timestamps and trigger information over km distances, all built on the solid basis of the GSI Multi-Branch System (MBS). NDAQ ensures interoperability between individual NUSTAR detectors and allows merging of formerly separate data streams according to the needs of all experiments, increasing reliability in NUSTAR data acquisition. An overview of the NDAQ infrastructure and the current progress is presented.

  18. Partial oxidation process

    Najjar, M.S.

    1987-01-01

    A process is described for the production of gaseous mixtures comprising H₂+CO by the partial oxidation of a fuel feedstock comprising a heavy liquid hydrocarbonaceous fuel having a nickel-, iron-, and vanadium-containing ash, or petroleum coke having a nickel-, iron-, and vanadium-containing ash, or mixtures thereof. The feedstock includes a minimum of 0.5 wt.% of sulfur, and the ash includes a minimum of 5.0 wt.% vanadium, a minimum of 0.5 ppm nickel, and a minimum of 0.5 ppm iron. The process comprises: (1) mixing together a copper-containing additive with the fuel feedstock, wherein the weight ratio of copper-containing additive to ash in the fuel feedstock is in the range of about 1.0-10.0, and there is at least 10 parts by weight of copper for each part by weight of vanadium; (2) reacting the mixture from (1) at a temperature in the range of 2200°F to 2900°F and a pressure in the range of about 5 to 250 atmospheres in a free-flow refractory-lined partial oxidation reaction zone with a free-oxygen-containing gas in the presence of a temperature moderator and in a reducing atmosphere to produce a hot raw effluent gas stream comprising H₂+CO and entrained molten slag, wherein in the reaction zone the copper-containing additive combines with at least a portion of the nickel and iron constituents and sulfur found in the feedstock to produce a liquid-phase washing agent that collects and transports at least a portion of the vanadium-containing oxide laths and spinels and other ash components and refractory out of the reaction zone; and (3) separating nongaseous materials from the hot raw effluent gas stream.

  19. TCABR data acquisition system

    Fagundes, A.N. E-mail: fagundes@if.usp.br; Sa, W.P.; Coelho, P.M.S.A

    2000-08-01

    A brief description of the design of the data acquisition system for the TCABR tokamak is presented. The system comprises VME standard instrumentation incorporating CAMAC instrumentation through the use of a GPIB interface. All the necessary data for programming different parts of the equipment, as well as the repertoire of actions for the machine control, are stored in a DBMS with friendly interfaces. Publicly available software is used, where feasible, in the development of codes. A distinguishing feature of the TCABR system is the virtual absence of barriers to upgrading, in either hardware or software.

  20. Flexible data acquisition system

    Clout, P N; Ridley, P A [Science Research Council, Daresbury (UK). Daresbury Lab.

    1978-06-01

    A data acquisition system has been developed which enables several independent experiments to be controlled by a 24 K word PDP-11 computer. Significant features of the system are the use of CAMAC, a high level language (RTL/2) and a general-purpose operating system executive which assist the rapid implementation of new experiments. This system has been used successfully for EXAFS and photo-electron spectroscopy experiments. It is intended to provide powerful concurrent data analysis and feedback facilities to the experimenter by on-line connection to the central IBM 370/165 computer.

  1. Getting Defense Acquisition Right

    2017-01-01

    on top of events and steer them to get where we need to go as efficiently as possible. Program management is not a spectator sport. Frank ...I made in the e-mail above and discusses some of the proactive steps a Program Manager can take, ahead of time, to reduce the potential... The Congress will rescind funds that are not obligated in a timely way. This puts pressure on the DoD's acquisition managers to put money on

  2. Parallel Sparse Matrix - Vector Product

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed, and the report shows how these classes can be used...
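
The routine under study can be sketched as follows: a sparse matrix-vector product y = A x with A in compressed sparse row (CSR) form. The outer loop over rows is independent per row, which is precisely what the MPI and hybrid MPI-OpenMP versions exploit; this sketch is illustrative and does not reproduce the report's C++ classes.

```python
# CSR sparse matrix-vector product.
# indptr[row]..indptr[row+1] delimits the nonzeros of each row;
# indices holds their column positions, data their values.
def csr_spmv(indptr, indices, data, x):
    y = []
    for row in range(len(indptr) - 1):      # parallelizable over rows
        s = 0.0
        for k in range(indptr[row], indptr[row + 1]):
            s += data[k] * x[indices[k]]
        y.append(s)
    return y
```

For the matrix [[1, 0, 2], [0, 3, 0]] the CSR arrays are `indptr=[0, 2, 3]`, `indices=[0, 2, 1]`, `data=[1, 2, 3]`.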

  3. [Falsified medicines in parallel trade].

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e. g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  4. The parallel adult education system

    Wahlgren, Bjarne

    2015-01-01

    for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...

  5. Where are the parallel algorithms?

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.
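
As a toy illustration of the divide-and-conquer paradigm mentioned above (an illustrative sketch, not an example from the paper): summing n numbers by pairwise combination takes O(log n) parallel steps, since each level of the tree contains only independent additions. The sketch runs the levels sequentially, but every inner-loop iteration within a level could execute concurrently.

```python
# Tree (pairwise) reduction: each while-iteration is one parallel step.
def tree_sum(values):
    vals = list(values)
    while len(vals) > 1:
        # combine disjoint pairs; all pairs are independent
        pairs = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:                    # odd element carried forward
            pairs.append(vals[-1])
        vals = pairs
    return vals[0] if vals else 0
```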

  7. Parallel plate transmission line transformer

    Voeten, S.J.; Brussaard, G.J.H.; Pemen, A.J.M.

    2011-01-01

    A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the
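
For an ideal TLT built from n identical lines of characteristic impedance Z₀, connected in parallel at the input and in series at the output, the standard textbook relations (a general result, not taken from this paper) are:

```latex
Z_{\text{in}} = \frac{Z_0}{n}, \qquad
Z_{\text{out}} = n Z_0, \qquad
\frac{V_{\text{out}}}{V_{\text{in}}} = n,
```

i.e. an impedance transformation ratio of n².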

  8. Matpar: Parallel Extensions for MATLAB

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  9. Massively parallel quantum computer simulator

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray
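
The state-vector technique such simulators use can be sketched in a few lines (a serial toy version; the cited simulator distributes the amplitude vector over many nodes): the n-qubit state is a vector of 2^n amplitudes, and a single-qubit gate updates the amplitude pairs that differ only in the target bit.

```python
import math

# Apply a 2x2 gate matrix to `target` qubit of an n-qubit state vector.
def apply_1q_gate(state, gate, target, n_qubits):
    step = 1 << target
    for base in range(0, 1 << n_qubits, step << 1):
        for off in range(step):
            i0, i1 = base + off, base + off + step   # pair differing in target bit
            a0, a1 = state[i0], state[i1]
            state[i0] = gate[0][0] * a0 + gate[0][1] * a1
            state[i1] = gate[1][0] * a0 + gate[1][1] * a1
    return state

# Hadamard gate
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
```

Applying H to qubit 0 of |00> puts the register into an equal superposition of |00> and |01>.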

  10. Parallel multiscale simulations of a brain aneurysm

    Grinberg, Leopold [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States); Fedosov, Dmitry A. [Institute of Complex Systems and Institute for Advanced Simulation, Forschungszentrum Jülich, Jülich 52425 (Germany); Karniadakis, George Em, E-mail: george_karniadakis@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)

    2013-07-01

    Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multiscale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier–Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300 K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in

  12. Parallel computing: numerics, applications, and trends

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  13. Experiments with parallel algorithms for combinatorial problems

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  14. Parallel R-matrix computation

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  15. computational study of Couette flow between parallel plates for steady and unsteady cases

    Rihan, Y.

    2008-01-01

    Couette flow between parallel plates is a classical problem that has important applications in various industrial processes. In this investigation, solutions were obtained to predict the steady and unsteady Couette flow between parallel plates. One of the plates was stationary and the other plate moved with constant velocity. The governing partial differential equations were solved numerically using the Crank-Nicolson implicit method to represent the flow behavior of the fluid.
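
The numerical approach described can be sketched as follows: unsteady Couette flow u_t = ν u_yy between a moving plate at y = 0 (u = U) and a fixed plate at y = h (u = 0), advanced with the Crank-Nicolson scheme and a tridiagonal (Thomas) solve. The grid size, viscosity, plate speed, and time step below are illustrative assumptions, not values from the paper.

```python
# Crank-Nicolson solution of impulsively started plane Couette flow.
def couette_cn(ny=21, nu=1.0, h=1.0, U=1.0, dt=0.01, steps=2000):
    dy = h / (ny - 1)
    r = nu * dt / (2 * dy * dy)
    u = [0.0] * ny
    u[0] = U                          # impulsively started lower plate
    for _ in range(steps):
        # right-hand side: the explicit half of the scheme
        rhs = [u[j] + r * (u[j - 1] - 2 * u[j] + u[j + 1])
               for j in range(1, ny - 1)]
        rhs[0] += r * U               # moving-plate boundary, implicit half
        # Thomas algorithm: diagonal (1+2r), off-diagonals -r
        n = ny - 2
        c, d = [0.0] * n, [0.0] * n
        beta = 1 + 2 * r
        c[0], d[0] = -r / beta, rhs[0] / beta
        for i in range(1, n):
            m = beta + r * c[i - 1]
            c[i] = -r / m
            d[i] = (rhs[i] + r * d[i - 1]) / m
        for i in range(n - 2, -1, -1):
            d[i] -= c[i] * d[i + 1]
        u[1:ny - 1] = d
    return u
```

Run long enough, the profile relaxes to the linear steady Couette solution u = U(1 - y/h).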

  16. The numerical parallel computing of photon transport

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers both with shared memory and with distributed memory are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the architecture of parallel computers, using a divide-and-conquer strategy, adjusting the algorithmic structure of the program, breaking data dependencies, identifying parallelizable components, and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various high-performance parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup was obtained.
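
The "large-grain parallel subtask" idea can be illustrated with a toy Monte Carlo transport problem (an assumption for illustration, not the paper's code): estimating photon transmission through a 1D absorbing slab, split into independent batches that a parallel machine could run concurrently.

```python
import random

# Absorption-only toy model: a photon travels an exponentially
# distributed free path (mean = mfp) and is transmitted if the path
# exceeds the slab thickness; expected fraction is exp(-thickness/mfp).
def transmitted_fraction(n_photons, slab_thickness=2.0, mfp=1.0, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_photons):
        if rng.expovariate(1.0 / mfp) > slab_thickness:
            hits += 1
    return hits / n_photons

def run_batches(batches, per_batch):
    # each batch is fully independent -> a natural large-grain subtask
    results = [transmitted_fraction(per_batch, seed=b) for b in range(batches)]
    return sum(results) / batches
```

With a slab two mean free paths thick, the estimate converges to e⁻² ≈ 0.135.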

  17. Data-acquisition systems

    Cyborski, D.R.; Teh, K.M.

    1995-01-01

    Up to now, DAPHNE, the data-acquisition system developed for ATLAS, has been used routinely for experiments at ATLAS and the Dynamitron. More recently, the Division implemented two MSU/DAPHNE systems. The MSU/DAPHNE system is a hybrid data-acquisition system which combines the front-end of the Michigan State University (MSU) DA system with the traditional DAPHNE back-end. The MSU front-end is based on commercially available modules. This alleviates the problems encountered with the DAPHNE front-end, which is based on custom designed electronics. The first MSU system was obtained for the APEX experiment and was used there successfully. A second MSU front-end, purchased as a backup for the APEX experiment, was installed as a fully-independent second MSU/DAPHNE system with the procurement of a DEC 3000 Alpha host computer, and was used successfully for data-taking in an experiment at ATLAS. Additional hardware for a third system was bought and will be installed. With the availability of two MSU/DAPHNE systems in addition to the existing APEX setup, it is planned that the existing DAPHNE front-end will be decommissioned

  18. Continued Data Acquisition Development

    Schwellenbach, David [National Security Technologies, LLC. (NSTec), Mercury, NV (United States)

    2017-11-27

    This task focused on improving techniques for integrating data acquisition of secondary particles correlated in time with detected cosmic-ray muons. Scintillation detectors with Pulse Shape Discrimination (PSD) capability show the most promise as a detector technology based on work in FY13. Typically, PSD parameters are determined prior to an experiment, and the results are based on these parameters. By saving data in list mode, including the fully digitized waveform, any experiment can effectively be replayed to adjust PSD and other parameters for the best data capture. List mode requires time synchronization of two independent data acquisition (DAQ) systems: the muon tracker and the particle detector system. Techniques to synchronize these systems were studied. Two basic techniques were identified: real-time mode and sequential mode. Real-time mode is the preferred system but has proven to be a significant challenge, since two FPGA systems with different clocking parameters must be synchronized. Sequential processing is expected to work with virtually any DAQ but requires more post-processing to extract the data.

  19. Unsupervised Language Acquisition

    de Marcken, Carl

    1996-11-01

    This thesis presents a computational theory of unsupervised language acquisition, precisely defining procedures for learning language from ordinary spoken or written utterances, with no explicit help from a teacher. The theory is based heavily on concepts borrowed from machine learning and statistical estimation. In particular, learning takes place by fitting a stochastic, generative model of language to the evidence. Much of the thesis is devoted to explaining conditions that must hold for this general learning strategy to arrive at linguistically desirable grammars. The thesis introduces a variety of technical innovations, among them a common representation for evidence and grammars, and a learning strategy that separates the "content" of linguistic parameters from their representation. Algorithms based on it suffer from few of the search problems that have plagued other computational approaches to language acquisition. The theory has been tested on problems of learning vocabularies and grammars from unsegmented text and continuous speech, and mappings between sound and representations of meaning. It performs extremely well on various objective criteria, acquiring knowledge that causes it to assign almost exactly the same structure to utterances as humans do. This work has application to data compression, language modeling, speech recognition, machine translation, information retrieval, and other tasks that rely on either structural or stochastic descriptions of language.

  20. Application of Pfortran and Co-Array Fortran in the Parallelization of the GROMOS96 Molecular Dynamics Module

    Piotr Bała

    2001-01-01

    After at least a decade of parallel tool development, parallelization of scientific applications remains a significant undertaking. Typically parallelization is a specialized activity supported only partially by the programming tool set, with the programmer involved with parallel issues in addition to sequential ones. The details of concern range from algorithm design down to low-level data movement details. The aim of parallel programming tools is to automate the latter without sacrificing performance and portability, allowing the programmer to focus on algorithm specification and development. We present our use of two similar parallelization tools, Pfortran and Cray's Co-Array Fortran, in the parallelization of the GROMOS96 molecular dynamics module. Our parallelization started from the GROMOS96 distribution's shared-memory implementation of the replicated algorithm, but used little of that existing parallel structure. Consequently, our parallelization was close to starting with the sequential version. We found the intuitive extensions to Pfortran and Co-Array Fortran helpful in the rapid parallelization of the project. We present performance figures for both the Pfortran and Co-Array Fortran parallelizations showing linear speedup within the range expected by these parallelization methods.
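    The replicated algorithm mentioned above can be sketched in miniature (a Python illustration, not Pfortran, Co-Array Fortran, or GROMOS96 code): every worker sees all coordinates, computes forces for its share of the pair list into a private accumulator, and the partial results are then reduced.

```python
# Illustrative sketch of the replicated-data pattern: each worker holds a
# private force accumulator for its slice of the pair list; a reduction step
# sums the replicated accumulators. The 1-D "force" is a toy stand-in.
from concurrent.futures import ThreadPoolExecutor

def pair_force(xi, xj):
    # toy 1-D interaction: linear spring between particles (illustrative only)
    return xj - xi

def partial_forces(x, pairs):
    f = [0.0] * len(x)               # private accumulator per worker
    for i, j in pairs:
        fij = pair_force(x[i], x[j])
        f[i] += fij                  # Newton's third law: equal and opposite
        f[j] -= fij
    return f

def forces_parallel(x, pairs, nworkers=4):
    chunks = [pairs[k::nworkers] for k in range(nworkers)]  # split pair list
    with ThreadPoolExecutor(max_workers=nworkers) as ex:
        partials = list(ex.map(lambda c: partial_forces(x, c), chunks))
    # reduction step: sum the replicated accumulators
    return [sum(p[i] for p in partials) for i in range(len(x))]

x = [0.0, 1.0, 3.0]
pairs = [(0, 1), (0, 2), (1, 2)]
print(forces_parallel(x, pairs))  # -> [4.0, 1.0, -5.0]
```

    The communication cost of the final reduction is what ultimately bounds the linear-speedup range quoted in the abstract.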

  1. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Mustafa Basthikodi

    2016-04-01

    Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have broadly expanded the scope for parallelism. Compilers are being updated to meet the resulting synchronization and threading challenges. Appropriate program and algorithm classification can greatly help software engineers identify opportunities for effective parallelization. In the present work we investigated current species-based classification of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms was chosen whose structures match different issues and perform the given tasks. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant information that is not captured by the original species of algorithms. We implemented these new capabilities in the tool, enabling automatic characterization of program code.

  2. On the Impact of Partial Shading on PV Output Power

    Sera, Dezso; Baghzouz, Yahia

    2008-01-01

    This paper clarifies the mechanism of partial shading on a number of PV cells connected in series and/or parallel, with and without bypass diodes. The analysis is presented in simple terms and can be useful to someone who wishes to determine the impact of some shading geometry on a PV system. The analysis...... is illustrated by measurements on a commercial 70 W panel, and a 14.4 kW PV array....

  3. Experts' understanding of partial derivatives using the Partial Derivative Machine

    Roundy, David; Dorko, Allison; Dray, Tevian; Manogue, Corinne A.; Weber, Eric

    2014-01-01

    Partial derivatives are used in a variety of different ways within physics. Most notably, thermodynamics uses partial derivatives in ways that students often find confusing. As part of a collaboration with mathematics faculty, we are at the beginning of a study of the teaching of partial derivatives, a goal of better aligning the teaching of multivariable calculus with the needs of students in STEM disciplines. As a part of this project, we have performed a pilot study of expert understanding...

  4. Collection assessment and acquisitions budgets

    Lee, Sul H

    2013-01-01

    This invaluable new book contains timely information about the assessment of academic library collections and the relationship of collection assessment to acquisition budgets. The rising cost of information significantly influences academic libraries' abilities to acquire the necessary materials for students and faculty, and public libraries' abilities to acquire material for their clientele. Collection Assessment and Acquisitions Budgets examines different aspects of the relationship between the assessment of academic library collections and the management of library acquisition budgets. Librar

  5. Acquisitions in the Electricity Sector: Active vs. Passive Owners

    Nese, Gjermund

    2002-07-01

    The starting point of this paper is a mixed oligopoly market consisting of n privately owned profit maximizing firms and 1 state-owned welfare maximizing firm. Motivated by the trend of mergers and acquisitions in the liberalized electricity markets, and by the debate about public or private ownership, the paper looks at two cases. In Case 1, the state-owned company acquires an ownership share in one of the private companies. In Case 2, the state-owned company is partially privatised. The paper focuses on differences in generated quantities and social surplus, depending on whether the investors behind the acquisitions are behaving as active or passive owners. One result shows that in the case of partial privatization, passive ownership provides the highest total industry generation, while active ownership induces maximum social surplus. (author)

  7. The DISTO data acquisition system at SATURNE

    Balestra, F.; Bedfer, Y.; Bertini, R.

    1998-01-01

    The DISTO collaboration has built a large-acceptance magnetic spectrometer designed to provide broad kinematic coverage of multiparticle final states produced in pp scattering. The spectrometer has been installed in the polarized proton beam of the Saturne accelerator in Saclay to study polarization observables in the polarized reaction pp → pK⁺Y (Y = Λ, Σ⁰ or Y*) and vector meson production (φ, ω and ρ) in pp collisions. The data acquisition system is based on a VME 68030 CPU running the OS/9 operating system, housed in a single VME crate together with the CAMAC interface, the triple-port ECL memories, and four RISC R3000 CPUs. The digitization of signals from the detectors is made by PCOS III and FERA front-end electronics. Data of several events belonging to a single Saturne extraction are stored in VME triple-port ECL memories using a hardwired fast sequencer. The buffer, optionally filtered by the RISC R3000 CPUs, is recorded on a DLT cassette by the DAQ CPU using the on-board SCSI interface during the acceleration cycle. Two UNIX workstations are connected to the VME CPUs through a fast parallel bus and the Local Area Network. They analyze a subset of events for on-line monitoring. The data acquisition system is able to read and record 3500 events/burst in the present configuration with a dead time of 15%

  8. Structural synthesis of parallel robots

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  9. GPU Parallel Bundle Block Adjustment

    ZHENG Maoteng

    2017-09-01

    To deal with massive data in photogrammetry, we introduce GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton methods are also applied to decrease the number of iterations while solving the normal equation. A brand new workflow of bundle adjustment is developed to utilize GPU parallel computing technology. Our method avoids the storage and inversion of the big normal matrix by computing the normal matrix in real time. The proposed method not only largely decreases the memory requirement of the normal matrix, but also largely improves the efficiency of bundle adjustment. It also achieves the same accuracy as the conventional method. Preliminary experimental results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 1.5 minutes while achieving sub-pixel accuracy.
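    The preconditioned conjugate gradient iteration named above can be sketched as follows; this is a generic dense, pure-Python toy with a Jacobi preconditioner, not the paper's GPU implementation.

```python
# Minimal sketch of a Jacobi-preconditioned conjugate gradient solver, the
# kind of iteration applied to the bundle-adjustment normal equations
# (dense, pure-Python toy; the real system is sparse and GPU-resident).

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                     # residual b - A*0
    minv = [1.0 / A[i][i] for i in range(n)]     # Jacobi preconditioner M^-1
    z = [mi * ri for mi, ri in zip(minv, r)]
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:               # converged
            break
        z = [mi * ri for mi, ri in zip(minv, r)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # small SPD stand-in for a normal matrix
b = [1.0, 2.0]
print(pcg(A, b))               # exact solution is [1/11, 7/11]
```

    Each iteration needs only matrix-vector products, which is why the normal matrix can be applied block-by-block in real time instead of being stored and inverted.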

  10. A tandem parallel plate analyzer

    Hamada, Y.; Fujisawa, A.; Iguchi, H.; Nishizawa, A.; Kawasumi, Y.

    1996-11-01

    By a new modification of a parallel plate analyzer, second-order focus is obtained at an arbitrary injection angle. This kind of analyzer with a small injection angle has the advantage of a small operating voltage, compared to the Proca and Green analyzer where the injection angle is 30 degrees. Thus, the newly proposed analyzer will be very useful for the precise energy measurement of high-energy particles in the MeV range. (author)

  11. High-speed parallel counter

    Gus'kov, B.N.; Kalinnikov, V.A.; Krastev, V.R.; Maksimov, A.N.; Nikityuk, N.M.

    1985-01-01

    This paper describes a high-speed parallel counter that has 31 inputs and 15 outputs and is implemented with series-500 integrated circuits. The counter is designed for fast sampling of events according to the number of particles that pass simultaneously through the hodoscopic plane of the detector. The minimum delay of the output signals relative to the input is 43 nsec. The duration of the output signals can be varied from 75 to 120 nsec

  12. An anthropologist in parallel structure

    Noelle Molé Liston

    2016-08-01

    The essay examines the parallels between Molé Liston’s studies on labor and precarity in Italy and the United States’ anthropology job market. Probing the way economic shifts reshaped the field of the anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value of studying the hardships and daily lives of non-western populations in Europe.

  13. Wakefield calculations on parallel computers

    Schoessow, P.

    1990-01-01

    The use of parallelism in the solution of wakefield problems is illustrated for two different computer architectures (SIMD and MIMD). Results are given for finite difference codes which have been implemented on a Connection Machine and an Alliant FX/8 and which are used to compute wakefields in dielectric loaded structures. Benchmarks on code performance are presented for both cases. 4 refs., 3 figs., 2 tabs

  14. Soudan 2 data acquisition and trigger electronics

    Dawson, J.; Haberichter, W.; Laird, R.

    1985-01-01

    The 1.1 kton Soudan 2 calorimetric drift-chamber detector is read out by 16K anode wires and 32K cathode strips. Preamps from each wire or strip are bussed together in groups of 8 to reduce the number of ADC channels. The resulting 6144 channels of ionization signal are flash-digitized every 200 ns and stored in RAM. The raw data hit patterns are continually compared with programmable trigger multiplicity and adjacency conditions. The data acquisition process is managed in a system of 24 parallel crates each containing an Intel 80C86 microprocessor, which supervises a pipe-lined data compactor, and allows transfer of the compacted data via CAMAC to the host computer. The 80C86's also manage the local trigger conditions and can perform some parallel processing of the data. Due to the scale of the system and multiplicity of identical channels, semi-custom gate array chips are used for much of the logic, utilizing 2.5 micron CMOS technology
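    The programmable multiplicity and adjacency conditions described above can be sketched as a simple predicate over the raw hit pattern; the thresholds and channel layout below are illustrative assumptions, not the Soudan 2 trigger tables.

```python
# Hedged sketch of the kind of programmable trigger test described above:
# fire when at least `multiplicity` channels are hit AND at least `adjacency`
# of them are contiguous. Thresholds and channel layout are illustrative.

def trigger(hits, multiplicity=3, adjacency=2):
    """hits: list of 0/1 values, one per readout channel."""
    if sum(hits) < multiplicity:        # multiplicity condition
        return False
    run = best = 0
    for h in hits:                      # longest run of adjacent hit channels
        run = run + 1 if h else 0
        best = max(best, run)
    return best >= adjacency            # adjacency condition

print(trigger([0, 1, 1, 0, 1, 0, 0, 0]))  # 3 hits with an adjacent pair -> True
print(trigger([1, 0, 1, 0, 1, 0, 0, 0]))  # 3 hits but none adjacent     -> False
```

    In the real system this comparison runs continuously in hardware on every 200 ns digitization frame; the sketch only shows the logic being evaluated.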

  15. Soudan 2 data acquisition and trigger electronics

    Dawson, J.; Laird, R.; May, E.; Mondal, N.; Schlereth, J.; Solomey, N.; Thron, J.; Heppelmann, S.

    1985-01-01

    The 1.1 kton Soudan 2 detector is read out by 16K anode wires and 32K cathode strips. Preamps from each wire or strip are bussed together in groups of 8 to reduce the number of ADC channels. The resulting 6144 channels of ionization signal are flash-digitized every 150 ns and stored in RAM. The raw data hit patterns are continually compared with programmable trigger multiplicity and adjacency conditions. The data acquisition process is managed in a system of 24 parallel crates each containing an Intel 8086 microprocessor, which supervises a pipe-lined data compactor, and allows transfer of the compacted data via CAMAC to the host computer. The 8086's also manage the local trigger conditions and can perform some parallel processing of the data. Due to the scale of the system and multiplicity of identical channels, semi-custom gate array chips are used for much of the logic, utilizing 2.5 micron CMOS technology

  17. Parallel processing of genomics data

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, and the analysis of this flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face those issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data that copes with high-dimensional data and achieves good response times. The proposed system is able to find statistically significant biological markers that discriminate classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
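    The chunk-parallel preprocessing idea can be sketched as follows; the record layout, quality threshold, and per-chromosome partitioning are assumptions for illustration, not the paper's pipeline.

```python
# Illustrative sketch of chunk-parallel preprocessing: split variant records
# by chromosome and filter each chunk concurrently, then merge the results.
# The record layout and quality threshold are assumptions, not the paper's.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

records = [
    ("chr1", 101, 0.92), ("chr1", 250, 0.40),
    ("chr2", 77, 0.88),  ("chr2", 90, 0.95), ("chr3", 5, 0.10),
]

def preprocess(chunk, min_qual=0.5):
    # keep only calls above the quality threshold
    return [r for r in chunk if r[2] >= min_qual]

by_chrom = defaultdict(list)
for r in records:
    by_chrom[r[0]].append(r)            # partition by chromosome

with ThreadPoolExecutor() as ex:        # one task per chromosome
    kept = [r for chunk in ex.map(preprocess, by_chrom.values()) for r in chunk]

print(sorted(kept))
```

    Because each chromosome's records are filtered independently, the same structure scales to process pools or a cluster without changing the per-chunk logic.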

  18. Partial Actions and Power Sets

    Jesús Ávila

    2013-01-01

    We consider a partial action (X, α) with enveloping action (T, β). In this work we extend α to a partial action on the ring (P(X), Δ, ∩) and find its enveloping action (E, β). Finally, we introduce the concept of a partial action of finite type to investigate the relationship between (E, β) and (P(T), β).

  19. Algorithms over partially ordered sets

    Baer, Robert M.; Østerby, Ole

    1969-01-01

    in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms....

  20. Anatomic partial nephrectomy: technique evolution.

    Azhar, Raed A; Metcalfe, Charles; Gill, Inderbir S

    2015-03-01

    Partial nephrectomy provides long-term oncologic outcomes equivalent to, and functional outcomes superior to, those of radical nephrectomy for T1a renal masses. Herein, we review the various vascular clamping techniques employed during minimally invasive partial nephrectomy, describe the evolution of our partial nephrectomy technique and provide an update on contemporary thinking about the impact of ischemia on renal function. Recently, partial nephrectomy surgical technique has shifted away from main artery clamping and towards minimizing/eliminating global renal ischemia during partial nephrectomy. Supported by high-fidelity three-dimensional imaging, novel anatomic-based partial nephrectomy techniques have recently been developed, wherein partial nephrectomy can now be performed with segmental, minimal or zero global ischemia to the renal remnant. Sequential innovations have included early unclamping, segmental clamping and super-selective clamping, culminating in anatomic zero-ischemia surgery. By eliminating 'under-the-gun' time pressure of ischemia for the surgeon, these techniques allow an unhurried, tightly contoured tumour excision with point-specific sutured haemostasis. Recent data indicate that zero-ischemia partial nephrectomy may provide better functional outcomes by minimizing/eliminating global ischemia and preserving greater vascularized kidney volume. Contemporary partial nephrectomy includes a spectrum of surgical techniques ranging from conventional-clamped to novel zero-ischemia approaches. Technique selection should be tailored to each individual case on the basis of tumour characteristics, surgical feasibility, surgeon experience, patient demographics and baseline renal function.

  1. Partial order infinitary term rewriting

    Bahr, Patrick

    2014-01-01

    We study an alternative model of infinitary term rewriting. Instead of a metric on terms, a partial order on partial terms is employed to formalise convergence of reductions. We consider both a weak and a strong notion of convergence and show that the metric model of convergence coincides with th...... to the metric setting -- orthogonal systems are both infinitarily confluent and infinitarily normalising in the partial order setting. The unique infinitary normal forms that the partial order model admits are Böhm trees....

  2. Succinct partial sums and fenwick trees

    Bille, Philip; Christiansen, Anders Roy; Prezza, Nicola

    2017-01-01

    We consider the well-studied partial sums problem in succinct space, where one is to maintain an array of n k-bit integers subject to updates such that partial sums queries can be answered efficiently. We present two succinct versions of the Fenwick Tree – which is known for its simplicity...... and practicality. Our results hold in the encoding model where one is allowed to reuse the space from the input data. Our main result is the first that only requires nk + o(n) bits of space while still supporting sum/update in O(log_b n)/O(b log_b n) time, where 2 ≤ b ≤ log^O(1) n. The second result shows how optimal...... time for sum/update can be achieved while only slightly increasing the space usage to nk + o(nk) bits. Beyond Fenwick Trees, the results are primarily based on bit-packing and sampling – making them very practical – and they also allow for simple optimal parallelization....
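    For readers unfamiliar with the underlying structure, a plain (non-succinct) Fenwick Tree supporting point update and prefix sum in O(log n) looks like this; the paper's contribution is packing this structure into nk + o(n) bits, which this sketch does not attempt.

```python
# A plain (non-succinct) Fenwick Tree for comparison: point update and prefix
# sum in O(log n) via the classic lowest-set-bit index arithmetic.

class Fenwick:
    def __init__(self, n):
        self.t = [0] * (n + 1)           # 1-indexed internal array

    def update(self, i, delta):          # add delta at position i (1-based)
        while i < len(self.t):
            self.t[i] += delta
            i += i & (-i)                # jump to the next responsible node

    def prefix_sum(self, i):             # sum of positions 1..i
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & (-i)                # strip the lowest set bit
        return s

f = Fenwick(8)
for pos, val in enumerate([5, 3, 7, 2, 1, 4, 6, 8], start=1):
    f.update(pos, val)
print(f.prefix_sum(4))   # 5 + 3 + 7 + 2 = 17
```

    Each internal node t[i] covers a block of i & (-i) positions, which is exactly the structure the succinct versions compress with bit-packing and sampling.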

  3. Fast-Acquisition/Weak-Signal-Tracking GPS Receiver for HEO

    Winternitz, Luke; Boegner, Greg; Sirotzky, Steve

    2004-01-01

    A report discusses the technical background and design of the Navigator Global Positioning System (GPS) receiver: a radiation-hardened receiver intended for use aboard spacecraft. Navigator is capable of weak-signal acquisition and tracking as well as much faster acquisition of strong or weak signals with no a priori knowledge or external aiding. Weak-signal acquisition and tracking enable GPS use in high Earth orbits (HEO), and fast acquisition allows the receiver to remain without power until needed in any orbit. Signal acquisition and signal tracking are, respectively, the processes of finding and demodulating a signal. Acquisition is the more computationally difficult process. Previous GPS receivers employ the method of sequentially searching the two-dimensional signal parameter space (code phase and Doppler). Navigator exploits properties of the Fourier transform in a massively parallel search for the GPS signal. This method results in far faster acquisition times [in the lab, 12 GPS satellites have been acquired with no a priori knowledge in a low-Earth-orbit (LEO) scenario in less than one second]. Modeling has shown that Navigator will be capable of acquiring signals down to 25 dB-Hz, appropriate for HEO missions. Navigator is built using the radiation-hardened ColdFire microprocessor, with the most computationally intense functions housed in dedicated field-programmable gate arrays. The high performance of the algorithm and of the receiver as a whole is made possible by optimizing computational efficiency and carefully weighing tradeoffs among the sampling rate, data format, and data-path bit width.
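    The parallel code-phase search can be illustrated in miniature: the receiver correlates the incoming samples against every circular shift of the PRN code and picks the peak. An FFT evaluates all shifts at once via the convolution theorem; this pure-Python toy computes the same circular correlation directly, with a made-up 8-chip code standing in for a real PRN sequence.

```python
# Toy sketch of code-phase search: correlate the received samples against
# every circular shift of the spreading code and pick the peak. An FFT
# computes all shifts at once via the convolution theorem; this brute-force
# version gives the identical result for a tiny illustrative code.

def circular_correlation(signal, code):
    n = len(code)
    return [sum(signal[(k + tau) % n] * code[k] for k in range(n))
            for tau in range(n)]

def acquire(signal, code):
    corr = circular_correlation(signal, code)
    peak = max(range(len(corr)), key=lambda t: corr[t])
    return peak, corr[peak]

code = [1, -1, 1, 1, -1, -1, 1, -1]             # made-up chips, not a real PRN
true_phase = 3
signal = code[-true_phase:] + code[:-true_phase]  # received: code delayed by 3

print(acquire(signal, code))  # -> (3, 8): correct phase, full-length peak
```

    A real acquisition repeats this correlation across a grid of Doppler bins; the FFT trick is what turns the two-dimensional sequential search into a parallel one.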

  5. Foreign Acquisition, Wages and Productivity

    Bandick, Roger

    2011-01-01

    This paper studies the effect of foreign acquisition on wages and total factor productivity (TFP) in the years following a takeover by using unique detailed firm-level data for Sweden for the period 1993-2002. The paper takes particular account of the potential endogeneity of the acquisition...

  6. Human parallels to experimental myopia?

    Fledelius, Hans C; Goldschmidt, Ernst; Haargaard, Birgitte

    2014-01-01

    acquiring new and basic knowledge, the practical object of the research is to reduce the burden of human myopia around the world. Acquisition and cost of optical correction is one issue, but associated morbidity counts more, with its global load of myopia-associated visual loss and blindness. The object......Raviola and Wiesel's monkey eyelid suture studies of the 1970s laid the cornerstone for the experimental myopia science undertaken since then. The aim has been to clarify the basic humoral and neuronal mechanisms behind induced myopization, its eye tissue transmitters in particular. Besides...... serve as inspiration to the laboratory research, which aims at solving the basic enigmas on a tissue level....

  7. Solving the Selective Multi-Category Parallel-Servicing Problem

    Range, Troels Martin; Lusby, Richard Martin; Larsen, Jesper

    In this paper we present a new scheduling problem and describe a shortest path based heuristic as well as a dynamic programming based exact optimization algorithm to solve it. The Selective Multi-Category Parallel-Servicing Problem (SMCPSP) arises when a set of jobs has to be scheduled on a server...... (machine) with limited capacity. Each job requests service in a prespecified time window and belongs to a certain category. Jobs may be serviced partially, incurring a penalty; however, only jobs of the same category can be processed simultaneously. One must identify the best subset of jobs to process...

  8. Solving the selective multi-category parallel-servicing problem

    Range, Troels Martin; Lusby, Richard Martin; Larsen, Jesper

    2015-01-01

    In this paper, we present a new scheduling problem and describe a shortest path-based heuristic as well as a dynamic programming-based exact optimization algorithm to solve it. The selective multi-category parallel-servicing problem arises when a set of jobs has to be scheduled on a server (machine......) with limited capacity. Each job requests service in a prespecified time window and belongs to a certain category. Jobs may be serviced partially, incurring a penalty; however, only jobs of the same category can be processed simultaneously. One must identify the best subset of jobs to process in each time...

  9. Developing Acquisition IS Integration Capabilities

    Wynne, Peter J.

    2016-01-01

    An under-researched yet critical challenge of mergers and acquisitions (M&A) is what to do with the two organisations’ information systems (IS) post-acquisition, commonly referred to as acquisition IS integration. Existing theory suggests that to integrate the information systems successfully......, an acquiring company must leverage two high-level capabilities: diagnosis and integration execution. Through a case study, this paper identifies how a novice acquirer develops these capabilities in anticipation of an acquisition by examining its use of learning processes. The study finds the novice acquirer...... applies trial and error, experimental, and vicarious learning processes, while actively avoiding improvisational learning. The results of the study contribute to the acquisition IS integration literature specifically by exploring it from a new perspective: the learning processes used by novice acquirers...

  10. Data driven parallelism in experimental high energy physics applications

    Pohl, M.

    1987-01-01

    I present global design principles for the implementation of high energy physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of high energy physics tasks is identified with granularity varying from a few times 10^8 instructions all the way down to a few times 10^4 instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this - yet preserving the necessary portability of the code - I propose a computational model with purely data-driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme allows the internal layout of the data structure to be mapped closely onto the layout of local and shared memory in a parallel architecture. It thus allows the application to be optimized with respect to synchronization as well as data transport overheads. I present a coarse top-level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms). (orig.)
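    A minimal sketch of the data-driven SPMD scheme, assuming a shared in-memory event structure guarded by per-part locks (illustrative Python threads, not the original implementation): every worker runs the same code and claims unprocessed parts by taking the corresponding lock, so load balancing emerges naturally.

```python
# Hedged sketch of data-driven SPMD concurrency: workers all run the same
# code and claim unprocessed parts of a shared event structure by taking a
# non-blocking per-part lock; whichever worker gets there first does the
# work, so load balancing among processes occurs naturally.
import threading

events = [{"id": i, "done": False, "result": None} for i in range(20)]
locks = [threading.Lock() for _ in events]

def worker():
    for ev, lock in zip(events, locks):
        if lock.acquire(blocking=False):   # claim this part of the structure
            try:
                if not ev["done"]:
                    ev["result"] = ev["id"] ** 2   # stand-in analysis step
                    ev["done"] = True
            finally:
                lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(all(ev["done"] for ev in events))  # every event processed exactly once
```

    Because a failed non-blocking acquire means some other worker is already processing that part, no central scheduler is needed, matching the asynchronous lock-based coordination described above.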

  11. Distributed parallel computing in stochastic modeling of groundwater systems.

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% of those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
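    The batch-processing pattern described above can be sketched generically: each Monte Carlo realization is independent, so realizations map cleanly onto a worker pool. The toy capture rule and parameter distribution below are assumptions for illustration, not the Pinggu Basin model.

```python
# Minimal sketch of batch-parallel Monte Carlo: each realization draws its
# own random parameter field and reports whether a point is "captured"; the
# realizations are independent, so they map cleanly onto a worker pool.
# The capture rule below is a toy assumption, not the paper's model.
import random
from concurrent.futures import ThreadPoolExecutor

def realization(seed):
    rng = random.Random(seed)                  # private RNG per realization
    conductivity = rng.uniform(0.5, 2.0)       # stand-in random parameter
    radius = 10.0 * conductivity               # toy capture-zone radius
    return radius >= 15.0                      # is the observation point captured?

seeds = range(500)
with ThreadPoolExecutor(max_workers=8) as ex:
    captured = list(ex.map(realization, seeds))

probability = sum(captured) / len(captured)    # stochastic capture probability
print(0.0 <= probability <= 1.0)
```

    Seeding each realization independently keeps the batch reproducible regardless of how the pool schedules the work, which is essential when runs are distributed across nodes.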

  12. Data driven parallelism in experimental high energy physics applications

    Pohl, Martin

    1987-08-01

    I present global design principles for the implementation of High Energy Physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of High Energy Physics tasks is identified with granularity varying from a few times 10^8 instructions all the way down to a few times 10^4 instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this - yet preserving the necessary portability of the code - I propose a computational model with purely data-driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme allows the internal layout of the data structure to be mapped closely onto the layout of local and shared memory in a parallel architecture. It thus allows the application to be optimized with respect to synchronization as well as data transport overheads. I present a coarse top-level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms).

  13. On Degenerate Partial Differential Equations

    Chen, Gui-Qiang G.

    2010-01-01

    Some recent developments, including recent results, ideas, techniques, and approaches, in the study of degenerate partial differential equations are surveyed and analyzed. Several examples of nonlinear degenerate, even mixed, partial differential equations are presented, which arise naturally in some longstanding, fundamental problems in fluid mechanics and differential geometry. The solution of these fundamental problems requires a deep understanding of nonlinear degenerate parti...

  14. [Acrylic resin removable partial dentures

    Baat, C. de; Witter, D.J.; Creugers, N.H.J.

    2011-01-01

    An acrylic resin removable partial denture is distinguished from other types of removable partial dentures by an all-acrylic resin base which is, in principle, solely supported by the edentulous regions of the tooth arch and in the maxilla also by the hard palate. When compared to the other types of

  15. Partial Epilepsy with Auditory Features

    J Gordon Millichap

    2004-07-01

    The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  16. 48 CFR 873.105 - Acquisition planning.

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Acquisition planning. 873.105 Section 873.105 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS DEPARTMENT... planning. (a) Acquisition planning is an indispensable component of the total acquisition process. (b) For...

  17. 48 CFR 34.004 - Acquisition strategy.

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Acquisition strategy. 34... CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION General 34.004 Acquisition strategy. The program manager, as specified in agency procedures, shall develop an acquisition strategy tailored to the particular...

  18. 48 CFR 3034.004 - Acquisition strategy.

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Acquisition strategy. 3034.004 Section 3034.004 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Acquisition strategy. See (HSAR) 48 CFR 3009.570 for policy applicable to acquisition strategies that consider...

  19. 48 CFR 434.004 - Acquisition strategy.

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Acquisition strategy. 434.004 Section 434.004 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE SPECIAL CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION General 434.004 Acquisition strategy. (a) The program...

  20. 48 CFR 234.004 - Acquisition strategy.

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Acquisition strategy. 234..., DEPARTMENT OF DEFENSE SPECIAL CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION 234.004 Acquisition strategy. (1) See 209.570 for policy applicable to acquisition strategies that consider the use of lead system...

  1. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    Lu Liuyan; Lantz, Steven R.; Ren Zhuyin; Pope, Stephen B.

    2009-01-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
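
The URAN idea above (assign each evaluation to a uniformly random processor so the expected load is balanced) can be sketched as below. This is an illustration only, not the x2f_mpi implementation; the particle IDs and processor count are invented.

```python
# Sketch: uniformly random (URAN-style) distribution of work to processors.
import random

def uran_distribute(particle_ids, n_procs, seed=42):
    # Each particle's chemistry evaluation goes to a uniformly random
    # "processor", so the expected load per processor is equal regardless
    # of where the particles originate.
    rng = random.Random(seed)
    buckets = [[] for _ in range(n_procs)]
    for pid in particle_ids:
        buckets[rng.randrange(n_procs)].append(pid)
    return buckets

buckets = uran_distribute(range(10000), 8)
loads = [len(b) for b in buckets]
```

In a real message-passing setting the buckets would be sent to the owning ranks; here they simply illustrate how the statistical balance arises.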

  2. Comparison of continuous with step and shoot acquisition in SPECT scanning

    McCarthy, L.; Cotterill, T.; Chu, J.M.G.

    1998-01-01

    Following the recent advent of continuous acquisition for performing SPECT scanning, it was decided to compare the commonly used Step and Shoot mode of acquisition with the new continuous acquisition mode. The aim of the study is to assess any difference in resolution in the resulting images acquired using the two modes of acquisition. Sequential series of studies were performed on a SPECT phantom using both modes of acquisition. Separate sets of data were collected for both high resolution parallel hole and ultra high resolution fan beam collimators. Clinical data were collected on patients undergoing routine gallium, 99mTc-MDP bone and 99mTc-HMPAO brain studies. Separate sequential acquisitions in both modes were collected for each patient. The sequence of collection was also alternated. Reconstruction was performed utilising the same parameters for each acquisition. The reconstructed data were assessed visually by blinded observers to detect differences in resolution and image quality. No significant difference between the studies collected by the two acquisition modes was detected. The time saved by continuous acquisition could be an advantage.

  3. Overview of the Force Scientific Parallel Language

    Gita Alaghband

    1994-01-01

    The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that have been ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples to illustrate some parallel programming approaches using the Force are also presented.

  4. Automatic Loop Parallelization via Compiler Guided Refactoring

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler’s ability to generate loop-parallel code. We use this compilation system to modify two sequential benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...
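
The kind of refactoring such a feedback system suggests can be illustrated schematically. This hypothetical example is not from the paper: the first loop carries a dependency through its accumulator, while the refactored form exposes independent per-iteration work followed by a commutative reduction, which is the shape a compiler can parallelize.

```python
# Sketch: refactoring a loop-carried dependency into map + reduce.
from concurrent.futures import ThreadPoolExecutor

data = list(range(1, 1001))

def serial_version():
    acc = 0
    for x in data:
        acc += x * x        # dependency on acc carried across iterations
    return acc

def parallel_friendly():
    # Independent per-iteration work is mapped across workers;
    # the commutative reduction (sum) happens afterwards.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return sum(pool.map(lambda x: x * x, data))

assert serial_version() == parallel_friendly()
```

The transformation changes no results, only the dependence structure the compiler (or runtime) sees.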

  5. Parallel kinematics type, kinematics, and optimal design

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics - Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, which is also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others.   This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  6. Applied Parallel Computing Industrial Computation and Optimization

    Madsen, Kaj; Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  7. Partial twisting for scalar mesons

    Agadjanov, Dimitri; Meißner, Ulf-G.; Rusetsky, Akaki

    2014-01-01

    The possibility of imposing partially twisted boundary conditions is investigated for the scalar sector of lattice QCD. According to the commonly shared belief, the presence of quark-antiquark annihilation diagrams in the intermediate state generally hinders the use of partial twisting. Using effective field theory techniques in a finite volume, and studying the scalar sector of QCD with total isospin I=1, we however demonstrate that partial twisting can still be performed, despite the fact that annihilation diagrams are present. The reason for this is delicate cancellations, which emerge due to the graded symmetry in partially quenched QCD with valence, sea and ghost quarks. The modified Lüscher equation in the case of partial twisting is given.

  8. Parallel algorithms and cluster computing

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  9. Parallel computation of rotating flows

    Lundin, Lars Kristian; Barker, Vincent A.; Sørensen, Jens Nørkær

    1999-01-01

    This paper deals with the simulation of 3‐D rotating flows based on the velocity‐vorticity formulation of the Navier‐Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two‐step process...... is that of solving a singular, large, sparse, over‐determined linear system of equations, and the iterative method CGLS is applied for this purpose. We discuss some of the mathematical and numerical aspects of this procedure and report on the performance of our software on a wide range of parallel computers. Darbe...

  10. The parallel volume at large distances

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof for the fact that a planar body can only have polynomial parallel volume if it is convex. Extensions to Minkowski spaces and random sets are also discussed.

  12. Model-independent partial wave analysis using a massively-parallel fitting framework

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D+ → h+h+h-. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m^2(h+h-) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
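
The control-point idea can be sketched as below: a complex S-wave amplitude is anchored at a few m^2 points and interpolated in between. GooFit uses a cubic spline; piecewise-linear interpolation is shown here only to keep the sketch short, and all anchor values are made up.

```python
# Sketch: complex amplitude anchored at control points, interpolated between.
import bisect
import cmath

control_m2 = [0.4, 0.8, 1.2, 1.6]            # m^2(h+h-) anchors (GeV^2), invented
control_amp = [1.0 * cmath.exp(1j * p)       # unit magnitude, made-up phases
               for p in (0.0, 0.5, 1.4, 2.0)]

def s_wave(m2):
    # Linear interpolation of the complex amplitude between anchors
    # (a stand-in for the cubic spline used in the real analysis).
    i = bisect.bisect_right(control_m2, m2) - 1
    i = max(0, min(i, len(control_m2) - 2))
    t = (m2 - control_m2[i]) / (control_m2[i + 1] - control_m2[i])
    return (1 - t) * control_amp[i] + t * control_amp[i + 1]

amp = s_wave(1.0)   # amplitude midway between the 0.8 and 1.2 anchors
```

The fit then floats the anchor magnitudes and phases as free parameters, while the interpolation fixes the amplitude everywhere else.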

  13. Trigger and data acquisition

    CERN. Geneva; Gaspar, C

    2001-01-01

    Past LEP experiments generated data at 0.5 MByte/s from particle detectors with over a quarter of a million readout channels. The process of reading out the electronic channels, treating them, and storing the data produced by each collision for further analysis by the physicists is called "Data Acquisition". Not all beam crossings produce interesting physics "events"; picking the interesting ones is the task of the "Trigger" system. In order to make sure that the data are collected in good conditions, the experiment's operation has to be constantly verified. In all, at LEP experiments over 100 000 parameters were monitored, controlled, and synchronized by the "Monitoring and Control" system. In the future, LHC experiments will produce as much data in a single day as a LEP detector did in a full year's running, with a raw data rate of 10 - 100 MBytes/s, and will have to cope with some 800 million proton-proton collisions a second. Of these collisions, only one in 100 million million is interesting for new particle se...

  14. DATA ACQUISITION (DAQ)

    Gerry Bauer

    The CMS Storage Manager System. The tail-end of the CMS Data Acquisition System is the Storage Manager (SM), which collects output from the HLT and stages the data at Cessy for transfer to its ultimate home in the Tier-0 center. An SM system has been used by CMS for several years with steadily evolving software within the XDAQ framework but, until relatively recently, only with provisional hardware. The SM is well known to much of the collaboration through the ‘MiniDAQ’ system, which served as the central DAQ system in 2007 and lives on in 2008 for dedicated sub-detector commissioning. Since March of 2008 a first phase of the final hardware was commissioned and used in CMS Global Runs. The system originally planned for 2008 aimed at recording ~1MB events at a few hundred Hz. The building blocks to achieve this are based on Nexsan's SATABeast storage array - a device housing up to 40 disks of 1TB each and possessing two controllers, each capable of almost 200 MB/sec throughput....

  15. IPNS data acquisition system

    Worlton, T.G.; Crawford, R.K.; Haumann, J.R.; Daly, R.

    1983-01-01

    The IPNS Data Acquisition System (DAS) was designed to be reliable, flexible, and easy to use. It provides unique methods of acquiring Time-of-Flight neutron scattering data and allows collection, storage, display, and analysis of very large data arrays with a minimum of user input. Data can be collected from normal detectors, linear position-sensitive detectors, and/or area detectors. The data can be corrected for time-delays and can be time-focussed before being binned. Corrections to be made to the data and selection of inputs to be summed are entirely software controlled, as are the time ranges and resolutions for each detector element. Each system can be configured to collect data into millions of channels. Maximum continuous data rates are greater than 2000 counts/sec with full corrections, or 16,000 counts/sec for the simpler binning scheme used with area detectors. Live displays of the data may be made as a function of time, wavevector, wavelength, lattice spacing, or energy. In most cases the complete data analysis can be done on the DAS host computer. The IPNS DAS became operational for four neutron scattering instruments in 1981 and has since been expanded to seven instruments
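
The software-controlled time-delay correction and binning described above can be sketched schematically. This is not the IPNS code: the detector delays, bin settings and event data below are all invented for the illustration.

```python
# Sketch: bin time-of-flight events with a per-detector delay correction.
def bin_tof_events(events, delays, t_min=0.0, t_max=20000.0, n_bins=200):
    """events: (detector_id, raw_time_us) pairs -> per-detector histograms."""
    width = (t_max - t_min) / n_bins
    hists = {det: [0] * n_bins for det in delays}
    for det, t_raw in events:
        t = t_raw - delays[det]            # software time-delay correction
        if t_min <= t < t_max:
            hists[det][int((t - t_min) / width)] += 1
    return hists

delays = {0: 5.0, 1: 7.5}                  # per-detector delays (invented)
events = [(0, 1005.0), (0, 1006.0), (1, 1007.5), (1, 19999.0)]
hists = bin_tof_events(events, delays)
```

In a real system the corrections, time ranges, and resolutions would be configured per detector element, exactly as the abstract describes; the mechanics of the binning are the same.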

  16. Advanced data acquisition system for SEVAN

    Chilingaryan, Suren; Chilingarian, Ashot; Danielyan, Varuzhan; Eppler, Wolfgang

    2009-02-01

    Huge magnetic clouds of plasma emitted by the Sun dominate intense geomagnetic storm occurrences, and simultaneously they are correlated with variations of the spectra of particles and nuclei in interplanetary space, ranging from subthermal solar wind ions to GeV energy galactic cosmic rays. For a reliable and fast forecast of Space Weather, world-wide networks of particle detectors are operated at different latitudes, longitudes, and altitudes. Based on a new type of hybrid particle detector developed in the context of the International Heliophysical Year (IHY 2007) at the Aragats Space Environmental Center (ASEC), we start to prepare hardware and software for the first sites of the Space Environmental Viewing and Analysis Network (SEVAN). In the paper the architecture of the newly developed data acquisition system for SEVAN is presented. We plan to run the SEVAN network under one and the same data acquisition system, enabling fast integration of data for on-line analysis of Solar Flare Events. The Advanced Data Acquisition System (ADAS) is designed as a distributed network of uniform components connected by Web Services. Its main component is the Unified Readout and Control Server (URCS), which controls the underlying electronics by means of detector-specific drivers and makes a preliminary analysis of the on-line data. The lower level components of URCS are implemented in C, and a fast binary representation is used for the data exchange with electronics. However, after preprocessing, the data are converted to a self-describing hybrid XML/Binary format. To achieve better reliability, all URCS run on embedded computers without disks and fans, to avoid the limited lifetime of moving mechanical parts. The data storage is carried out by means of high performance servers working in parallel to provide data security. These servers periodically inquire the data from all URCS and store it in a MySQL database. The implementation of the control interface is based on high level

  17. A data parallel pseudo-spectral semi-implicit magnetohydrodynamics code

    Keppens, R.; Poedts, S.; Meijer, P. M.; Goedbloed, J. P.; Hertzberger, B.; Sloot, P.

    1997-01-01

    The set of eight nonlinear partial differential equations of magnetohydrodynamics (MHD) is used for time dependent simulations of three-dimensional (3D) fluid flow in a magnetic field. A data parallel code is presented, which integrates the MHD equations in cylindrical geometry, combining a

  18. A Parallel Approach to Fractal Image Compression

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both achieved coding and decoding time and the effectiveness of parallelization.

  19. Parallel Computing Using Web Servers and "Servlets".

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  20. An Introduction to Parallel Computation

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

  1. Comparison of parallel viscosity with neoclassical theory

    Ida, K.; Nakajima, N.

    1996-04-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for the plasma heated with tangential NBI in the CHS heliotron/torsatron device to estimate parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s). (author)

  2. Advances in randomized parallel computing

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
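
The quicksort example above has a classic randomized variant: choosing the pivot at random makes the O(n log n) expected running time hold for every input, rather than only on average over inputs. A minimal sketch:

```python
# Sketch: quicksort with a random pivot (expected O(n log n) on any input).
import random

def randomized_quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)             # the randomization step
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

sorted_list = randomized_quicksort([5, 3, 8, 1, 9, 2, 7])
```

No adversarially chosen input can force the worst case reliably, because the pivot choice is not predictable.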

  3. Xyce parallel electronic simulator design.

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  4. Cast Partial Denture versus Acrylic Partial Denture for Replacement of Missing Teeth in Partially Edentulous Patients

    Pramita Suwal

    2017-03-01

    Aim: To compare the effects of cast partial dentures with conventional all-acrylic dentures in respect of retention, stability, masticatory efficiency, comfort and periodontal health of abutments. Methods: 50 adult partially edentulous patients seeking replacement of missing teeth, having Kennedy class I and II arches with or without modification areas, were selected for the study. Group A was treated with cast partial dentures and Group B with acrylic partial dentures. Data were collected during follow-up visits at 3 months, 6 months, and 1 year by evaluating retention, stability, masticatory efficiency, comfort, and periodontal health of abutments. Results: The chi-square test was applied to find differences between the groups at the 95% confidence interval, where p = 0.05. The one-year comparison shows that cast partial dentures maintained retention and stability better than acrylic partial dentures (p < 0.05). The masticatory efficiency was significantly compromised from the 3rd month to 1 year in the all-acrylic partial denture group (p < 0.05). The comfort of patients with cast partial dentures was maintained better during the observation period (p < 0.05). Periodontal health of abutments gradually deteriorated in the all-acrylic denture group (p

  5. Construction of a FASTBUS data-acquisition system for the ELAN experiment

    Noel, A.

    1992-06-01

    To use the FASTBUS data acquisition system for the ELAN experiment at the electron stretcher accelerator ELSA, a new software tool has been developed. This tool manages the parallel readout of CAMAC with a VME front-end processor and of FASTBUS with the special FASTBUS processor segment AEB. Both processors are connected by a 32-bit high-speed VSB data bus. (orig.)

  6. A simple low cost speed log interface for oceanographic data acquisition system

    Khedekar, V.D.; Phadte, G.M.

    A speed log interface is designed with parallel Binary Coded Decimal output. This design was mainly required for the oceanographic data acquisition system as an interface between the speed log and the computer. However, this can also be used as a...
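
Reading a parallel Binary Coded Decimal output amounts to decoding one 4-bit nibble per decimal digit. A minimal sketch of that decoding is below; the bit patterns and the fixed-point speed value are invented for illustration, not taken from the interface described above.

```python
# Sketch: decode a parallel BCD output, one 4-bit nibble per decimal digit.
def bcd_to_int(nibbles):
    """nibbles: most-significant digit first, each digit 0-9 in 4 bits."""
    value = 0
    for n in nibbles:
        digit = n & 0x0F
        if digit > 9:
            raise ValueError("invalid BCD nibble: %#x" % n)
        value = value * 10 + digit
    return value

# A speed of 12.5 knots might arrive as digits 1, 2, 5 (fixed decimal point).
speed_tenths = bcd_to_int([0b0001, 0b0010, 0b0101])
```

The computer side then only needs to latch the parallel lines and apply this digit-by-digit conversion.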

  7. Expanded Understanding of IS/IT Related Challenges in Mergers and Acquisitions

    Toppenberg, Gustav

    2015-01-01

    Organizational Mergers and Acquisitions (M&As) occur at an increasingly frequent pace in today’s business life. Paralleling this development, M&As have increasingly attracted attention from the Information Systems (IS) domain. This emerging line of research has started from an understanding...

  8. Acquisition: Acquisition of Targets at the Missile Defense Agency

    Ugone, Mary L; Meling, John E; James, Harold C; Haynes, Christine L; Heller, Brad M; Pomietto, Kenneth M; Bobbio, Jaime; Chang, Bill; Pugh, Jacqueline

    2005-01-01

    Who Should Read This Report and Why? Missile Defense Agency program managers who are responsible for the acquisition and management of targets used to test the Ballistic Missile Defense System should be interested in this report...

  9. The Acquisition Experiences of Kazoil

    Minbaeva, Dana; Muratbekova-Touron, Maral

    2016-01-01

    This case describes two diverging post-acquisition experiences of KazOil, an oil drilling company in Kazakhstan, in the years after the dissolution of the Soviet Union. When the company was bought by the Canadian corporation Hydrocarbons Ltd in 1996, exposed to new human resource strategies...... among students that cultural distance is not the main determinant for the success of social integration mechanisms in post-acquisition situations. On the contrary, the relationship between integration instrument and integration success is also governed by contextual factors such as the attractiveness...... of the acquisition target or state of development of HRM in the target country....

  10. Data acquisition techniques using PC

    Austerlitz, Howard

    1991-01-01

    Data Acquisition Techniques Using Personal Computers contains all the information required by a technical professional (engineer, scientist, technician) to implement a PC-based acquisition system. Including both basic tutorial information as well as some advanced topics, this work is suitable as a reference book for engineers or as a supplemental text for engineering students. It gives the reader enough understanding of the topics to implement a data acquisition system based on commercial products. A reader can alternatively learn how to custom build hardware or write his or her own software.

  11. Bootstrapping language acquisition.

    Abend, Omri; Kwiatkowski, Tom; Smith, Nathaniel J; Goldwater, Sharon; Steedman, Mark

    2017-07-01

    The semantic bootstrapping hypothesis proposes that children acquire their native language through exposure to sentences of the language paired with structured representations of their meaning, whose component substructures can be associated with words and syntactic structures used to express these concepts. The child's task is then to learn a language-specific grammar and lexicon based on (probably contextually ambiguous, possibly somewhat noisy) pairs of sentences and their meaning representations (logical forms). Starting from these assumptions, we develop a Bayesian probabilistic account of semantically bootstrapped first-language acquisition in the child, based on techniques from computational parsing and interpretation of unrestricted text. Our learner jointly models (a) word learning: the mapping between components of the given sentential meaning and lexical words (or phrases) of the language, and (b) syntax learning: the projection of lexical elements onto sentences by universal construction-free syntactic rules. Using an incremental learning algorithm, we apply the model to a dataset of real syntactically complex child-directed utterances and (pseudo) logical forms, the latter including contextually plausible but irrelevant distractors. Taking the Eve section of the CHILDES corpus as input, the model simulates several well-documented phenomena from the developmental literature. In particular, the model exhibits syntactic bootstrapping effects (in which previously learned constructions facilitate the learning of novel words), sudden jumps in learning without explicit parameter setting, acceleration of word-learning (the "vocabulary spurt"), an initial bias favoring the learning of nouns over verbs, and one-shot learning of words and their meanings. The learner thus demonstrates how statistical learning over structured representations can provide a unified account for these seemingly disparate phenomena. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. PDDP, A Data Parallel Programming Model

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
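PDDP's data-parallel idiom is the Fortran 90 whole-array style. As a rough illustration of that style (a NumPy analog, not PDDP or Fortran itself), whole-array assignment, the WHERE construct, and FORALL correspond to vectorized operations and masked assignment:

```python
import numpy as np

# Fortran 90:  a = b + c          (whole-array operation)
b = np.arange(8, dtype=float)   # [0, 1, ..., 7]
c = np.ones(8)
a = b + c                        # [1, 2, ..., 8]

# Fortran 90:  WHERE (a > 4.0) a = 0.0
# Masked assignment is the NumPy analog of the WHERE construct.
a[a > 4.0] = 0.0

# Fortran 90:  FORALL (i = 1:8) d(i) = 2 * a(i)
# An independent elementwise update, vectorized here.
d = 2 * a
```

In both settings the parallelism is implicit in the array operation itself, which is what lets a preprocessor like PDDP distribute the work across processors.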

  13. Parallelization of quantum molecular dynamics simulation code

    Kato, Kaori; Kunugi, Tomoaki; Shibahara, Masahiko; Kotake, Susumu

    1998-02-01

A quantum molecular dynamics simulation code has been developed at the Kansai Research Establishment for analyzing the thermalization of photon energy in molecules and materials. The code has been parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up was obtained on both machines by distributing groups of particles across processor units. On the Intel Paragon XP/S75, even higher parallel performance was achieved by distributing the work not only by particle group but also by the individual particle calculations, which consist of many fine-grained computations. (author)
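The particle-group distribution described above can be sketched generically. The following Python fragment (hypothetical illustration, not the authors' code) splits particles into contiguous groups, lets each worker reduce its own group, and combines the partial results:

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(seq, n_workers):
    """Split seq into n_workers contiguous groups (the particle groups)."""
    k, m = divmod(len(seq), n_workers)
    groups, start = [], 0
    for i in range(n_workers):
        end = start + k + (1 if i < m else 0)
        groups.append(seq[start:end])
        start = end
    return groups

def kinetic_energy(group):
    # Each worker handles one particle group: sum of 0.5 * m * v^2.
    return sum(0.5 * m * v * v for m, v in group)

def parallel_kinetic_energy(particles, n_workers=4):
    """Distribute particle groups to workers and combine partial sums."""
    groups = chunk(particles, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(kinetic_energy, groups)
    return sum(partials)
```

The same decomposition pattern applies whether the workers are threads, MPI ranks, or the processor units of the machines named in the abstract.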

  14. Implementation and performance of parallelized elegant

    Wang, Y.; Borland, M.

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  15. Implementation of PHENIX trigger algorithms on massively parallel computers

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms, being parallel in nature, can be simulated off-line using massively parallel computers. By studying collisions of heavy nuclei at ultra-relativistic energies, the PHENIX experiment intends to investigate the possible existence of a new phase of matter, called the quark-gluon plasma, which has been theorized to have existed in the very early stages of the evolution of the universe. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and also simulate them at rates similar to those of data collection impose enormous computational demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine- and coarse-grain approaches have been studied and evaluated. Depending on the application, the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single-instruction and multiple-instruction computers is also made, and possible applications of single-instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  16. Pediatric bowel MRI - accelerated parallel imaging in a single breathhold

    Hohl, C.; Honnef, D.; Krombach, G.; Muehlenbruch, G.; Guenther, R.W.; Niendorf, T.; Ocklenburg, C.; Wenzl, T.G.

    2008-01-01

Purpose: To compare highly accelerated parallel MRI of the bowel with conventional balanced FFE sequences in children with inflammatory bowel disease (IBD). Materials and methods: 20 children with suspected or proven IBD underwent MRI on a 1.5 T scanner after oral administration of 700-1000 ml of a mannitol solution and an additional enema. The examination started with a 4-channel receiver coil and a conventional balanced FFE sequence in axial (2.5 s/slice) and coronal (4.7 s/slice) planes. Afterwards, highly accelerated (R = 5) balanced FFE sequences in axial (0.5 s/slice) and coronal (0.9 s/slice) planes were performed using a 32-channel receiver coil and parallel imaging (SENSE). Both receiver coils achieved a resolution of 0.88 x 0.88 mm with a slice thickness of 5 mm (coronal) and 6 mm (axial), respectively. With the conventional imaging technique, 4-8 breathholds were needed to cover the whole abdomen, while parallel imaging shortened the acquisition time to a single breathhold. Two blinded radiologists performed a consensus reading of the images regarding pathological findings, image quality, susceptibility to artifacts and bowel distension. The results for the two coil systems were compared using the kappa (κ) coefficient, and differences in the susceptibility to artifacts were checked with the Wilcoxon signed rank test. Statistical significance was assumed at p = 0.05. Results: 13 of the 20 children had inflammatory bowel wall changes at the time of the examination, which were correctly diagnosed with both coil systems in 12 of 13 cases (92%). The comparison of the two coil systems showed good agreement for pathological findings (κ = 0.74 - 1.0) and image quality. With parallel imaging, however, significantly more artifacts were observed (κ = 0.47)

  17. Parallelization of 2-D lattice Boltzmann codes

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies of 95.1% for the vector parallel calculation on 16 processors with a 1152x1152 grid and 88.6% for the scalar parallel calculation on 100 processors with an 800x800 grid are obtained. Performance models are developed to analyze the performance of the LB codes. These models show that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze scalability while keeping the memory usage of each processor element at its maximum. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method generally has a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author)
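The quoted parallel efficiencies follow the standard definitions: speedup S = T1/Tp and efficiency E = S/p on p processors. A minimal helper makes the arithmetic concrete (illustrative only; the timings below are made-up numbers, not the paper's measurements):

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T1 / Tp."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency E = S / p = T1 / (p * Tp)."""
    return speedup(t_serial, t_parallel) / n_procs

# Example: a run that is 15.2x faster on 16 processors
# corresponds to an efficiency of 15.2 / 16 = 95%.
e = efficiency(t_serial=160.0, t_parallel=160.0 / 15.2, n_procs=16)
```

An efficiency near 1.0 means the processors are almost fully utilized; communication and load imbalance are what pull it below 1.0 as the processor count grows.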

  19. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  20. Rhythm in language acquisition.

    Langus, Alan; Mehler, Jacques; Nespor, Marina

    2017-10-01

    Spoken language is governed by rhythm. Linguistic rhythm is hierarchical and the rhythmic hierarchy partially mimics the prosodic as well as the morpho-syntactic hierarchy of spoken language. It can thus provide learners with cues about the structure of the language they are acquiring. We identify three universal levels of linguistic rhythm - the segmental level, the level of the metrical feet and the phonological phrase level - and discuss why primary lexical stress is not rhythmic. We survey experimental evidence on rhythm perception in young infants and native speakers of various languages to determine the properties of linguistic rhythm that are present at birth, those that mature during the first year of life and those that are shaped by the linguistic environment of language learners. We conclude with a discussion of the major gaps in current knowledge on linguistic rhythm and highlight areas of interest for future research that are most likely to yield significant insights into the nature, the perception, and the usefulness of linguistic rhythm. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Experiences in Data-Parallel Programming

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  2. Streaming for Functional Data-Parallel Languages

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...
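The space issue can be illustrated outside NESL: a fully materializing strategy would create all n^3 scalar products of an n x n matrix multiplication at once, whereas a streaming, piecewise evaluation keeps only one output row live at a time. A Python sketch of the streaming idea (not NESL or Accelerate code):

```python
def matmul_streaming(A, B):
    """Multiply two n x n matrices row by row.

    Only one output row (O(n) auxiliary space) is live at a time,
    instead of materializing all n^3 scalar products before summing.
    """
    n = len(A)
    C = []
    for i in range(n):
        # The generator expression streams the scalar products into
        # sum(), which consumes them one at a time rather than
        # building the full list of products for this row.
        row = [sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
        C.append(row)
    return C
```

A dataflow runtime, as in the thesis, generalizes this idea: it can slide anywhere between fully sequential and fully parallel evaluation of the same bulk operation.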

  3. Evolutionary Acquisition and Spiral Development Tutorial

    Hantos, P

    2005-01-01

    .... NSS Acquisition Policy 03-01 provided some space-oriented customization and, similarly to the original DOD directives, also positioned Evolutionary Acquisition and Spiral Development as preferred...

  4. Platform attitude data acquisition system

    Afzulpurkar, S.

    A system for automatic acquisition of underwater platform attitude data has been designed, developed and tested in the laboratory. This is a micro controller based system interfacing dual axis inclinometer, high-resolution digital compass...

  5. Portable Data Acquisition System Project

    National Aeronautics and Space Administration — Armstrong researchers have developed a portable data acquisition system (PDAT) that can be easily transported and set up at remote locations to display and archive...

  6. New KENS data acquisition system

    Arai, M.; Furusaka, M.; Satoh, S.

    1989-01-01

In this report, the authors discuss a data acquisition system, KENSnet, which has been newly introduced to the KENS facility. The criteria for the data acquisition system were about 1 MIPS of CPU speed and 150 Mbytes of storage capacity per spectrometer computer. VAX computers were chosen, running their proprietary operating system, VMS. The VAX computers are connected in a DECnet network over Ethernet. Apple Macintosh Plus and Macintosh II machines were chosen as front-end computers for their ease of use and local intelligence. New CAMAC-based data acquisition electronics were developed. The data acquisition control program (ICP) and the general data analysis program (Genie) were both developed at ISIS and have been installed. 2 refs., 3 figs., 1 tab

  7. Schizophrenia and second language acquisition.

    Bersudsky, Yuly; Fine, Jonathan; Gorjaltsan, Igor; Chen, Osnat; Walters, Joel

    2005-05-01

    Language acquisition involves brain processes that can be affected by lesions or dysfunctions in several brain systems and second language acquisition may depend on different brain substrates than first language acquisition in childhood. A total of 16 Russian immigrants to Israel, 8 diagnosed schizophrenics and 8 healthy immigrants, were compared. The primary data for this study were collected via sociolinguistic interviews. The two groups use language and learn language in very much the same way. Only exophoric reference and blocking revealed meaningful differences between the schizophrenics and healthy counterparts. This does not mean of course that schizophrenia does not induce language abnormalities. Our study focuses on those aspects of language that are typically difficult to acquire in second language acquisition. Despite the cognitive compromises in schizophrenia and the manifest atypicalities in language of speakers with schizophrenia, the process of acquiring a second language seems relatively unaffected by schizophrenia.

  8. Massively parallel diffuse optical tomography

    Sandusky, John V.; Pitts, Todd A.

    2017-09-05

Diffuse optical tomography systems and methods are described herein. In a general embodiment, the diffuse optical tomography system comprises a plurality of sensor heads, the plurality of sensor heads comprising respective optical emitter systems and respective sensor systems. A sensor head in the plurality of sensor heads is caused to act as an illuminator, such that its optical emitter system transmits a transillumination beam towards a portion of a sample. Other sensor heads in the plurality of sensor heads act as observers, detecting portions of the transillumination beam that radiate from the sample in the fields of view of the respective sensor systems of the other sensor heads. Thus, sensor heads in the plurality of sensor heads generate sensor data in parallel.

  9. Embodied and Distributed Parallel DJing.

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or groups of persons with special needs performing in traditional ways. The latter might be people with disabilities, being musicians playing traditional instruments, or actors playing theatre. In this paper we focus on the innovative potential of including people with special needs, when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments for Empowering Multi-Sensorial Things.

  10. Device for balancing parallel strings

    Mashikian, Matthew S.

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  11. Physics of partially ionized plasmas

    Krishan, Vinod

    2016-01-01

Plasma is one of the four fundamental states of matter; the other three being solid, liquid and gas. Several components, such as molecular clouds, diffuse interstellar gas, the solar atmosphere, the Earth's ionosphere and laboratory plasmas, including fusion plasmas, constitute the partially ionized plasmas. This book discusses different aspects of partially ionized plasmas including multi-fluid description, equilibrium and types of waves. The discussion goes on to cover the reionization phase of the universe, along with a brief description of high discharge plasmas, tokamak plasmas and laser plasmas. Various elastic and inelastic collisions amongst the three particle species are also presented. In addition, the author demonstrates the novelty of partially ionized plasmas using many examples; for instance, in partially ionized plasma the magnetic induction is subjected to the ambipolar diffusion and the Hall effect, as well as the usual resistive dissipation. Also included is an observation of kinematic dynam...

  12. Partially massless fields during inflation

    Baumann, Daniel; Goon, Garrett; Lee, Hayden; Pimentel, Guilherme L.

    2018-04-01

    The representation theory of de Sitter space allows for a category of partially massless particles which have no flat space analog, but could have existed during inflation. We study the couplings of these exotic particles to inflationary perturbations and determine the resulting signatures in cosmological correlators. When inflationary perturbations interact through the exchange of these fields, their correlation functions inherit scalings that cannot be mimicked by extra massive fields. We discuss in detail the squeezed limit of the tensor-scalar-scalar bispectrum, and show that certain partially massless fields can violate the tensor consistency relation of single-field inflation. We also consider the collapsed limit of the scalar trispectrum, and find that the exchange of partially massless fields enhances its magnitude, while giving no contribution to the scalar bispectrum. These characteristic signatures provide clean detection channels for partially massless fields during inflation.

  13. A review of snapshot multidimensional optical imaging: Measuring photon tags in parallel

    Gao, Liang, E-mail: gaol@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois at Urbana–Champaign, 306 N. Wright St., Urbana, IL 61801 (United States); Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana–Champaign, 405 North Mathews Avenue, Urbana, IL 61801 (United States); Wang, Lihong V., E-mail: lhwang@wustl.edu [Optical imaging laboratory, Department of Biomedical Engineering, Washington University in St. Louis, One Brookings Dr., MO, 63130 (United States)

    2016-02-29

Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons' spatial coordinates, emittance angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or by parallel acquisition. Compared with scanning-based imagers, parallel acquisition, also dubbed snapshot imaging, has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally discuss their state-of-the-art implementations and applications.

  14. Linear parallel processing machines I

    Von Kunze, M

    1984-01-01

As is well known, non-context-free grammars for generating formal languages have a certain intrinsic computational power that poses serious difficulties for efficient parsing algorithms as well as for the development of an algebraic theory of context-sensitive languages. In this paper a framework is given for investigating the computational power of formal grammars, in order to start a thorough analysis of grammars consisting of derivation rules of the form aB -> A1 ... An b1 ... bm. These grammars may be thought of as automata by means of parallel processing, if one considers the variables as operators acting on the terminals while reading them right-to-left. This kind of automaton and its 2-dimensional programming language prove to be useful by allowing a concise linear-time algorithm for integer multiplication. Linear parallel processing machines (LP-machines), which are, in their general form, equivalent to Turing machines, include finite automata and pushdown automata (with states encoded) as special cases. Bounded LP-machines yield deterministic accepting automata for nondeterministic context-free languages, and they define an interesting class of context-sensitive languages. A characterization of this class in terms of generating grammars is established by using derivation trees with crossings as a helpful tool. From the algebraic point of view, deterministic LP-machines are effectively represented semigroups with distinguished subsets. Concerning the dualism between generating and accepting devices of formal languages within the algebraic setting, the concept of accepting automata turns out to reduce essentially to embeddability in an effectively represented extension monoid, even in the classical cases.

  15. Parallel computing in enterprise modeling.

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  16. Processes Asunder: Acquisition & Planning Misfits

    2009-03-26

Establishing six Business Enterprise Priorities (BEPs) to focus the Department's business transformation efforts, which now guide DoD investment decisions ... three phases which look very much like Milestone A, B, and C of the previously existing Life Cycle Management Framework. With this obvious redundancy ... February 2002). Defense Acquisition University, "Integrated Defense Acquisition, Technology, & Logistics Life Cycle Management Framework," version 5.2

  17. Introduction to partial differential equations

    Greenspan, Donald

    2000-01-01

    Designed for use in a one-semester course by seniors and beginning graduate students, this rigorous presentation explores practical methods of solving differential equations, plus the unifying theory underlying the mathematical superstructure. Topics include basic concepts, Fourier series, second-order partial differential equations, wave equation, potential equation, heat equation, approximate solution of partial differential equations, and more. Exercises appear at the ends of most chapters. 1961 edition.

  18. Experience from Tore Supra acquisition system and evolutions

    Guillerminet, B.; Buravand, Y.; Chatelier, E.; Leroux, F.

    2004-01-01

The Tore Supra tokamak has been upgraded to explore long-duration plasma discharges of up to 1000 s. Since summer 2001, the acquisition system has operated in continuous mode, apart from the data processing, which is still done after the pulse. In the first part, we explore a few solutions for processing the data continuously during the pulse, based on parallel processing on a Linux farm and then on a CONDOR system. The second part is devoted to the Web service exposing the Tore Supra operation. In the last part, the VME acquisition system has been redesigned to keep up with the high data rates required by a few diagnostics. The workflow is now distributed among a few computers. Finally, we give the current status of the realisation and the future planning

  19. Multiplexed capillary microfluidic immunoassay with smartphone data acquisition for parallel mycotoxin detection.

    Machado, Jessica M D; Soares, Ruben R G; Chu, Virginia; Conde, João P

    2018-01-15

The field of microfluidics holds great promise for the development of simple and portable lab-on-a-chip systems. The use of capillarity as a means of fluidic manipulation in lab-on-a-chip systems can potentially reduce the complexity of the instrumentation and allow the development of user-friendly devices for point-of-need analyses. In this work, a PDMS microchannel-based, colorimetric, autonomous capillary chip provides a multiplexed and semi-quantitative immunodetection assay. Results are acquired using a standard smartphone camera and analyzed with a simple gray scale quantification procedure. The performance of this device was tested for the simultaneous detection of the mycotoxins ochratoxin A (OTA), aflatoxin B1 (AFB1) and deoxynivalenol (DON), which are strictly regulated food contaminants with severe detrimental effects on human and animal health. The multiplexed assay was performed within approximately 10 min, and the achieved sensitivities of <40, 0.1-0.2 and <10 ng/mL for OTA, AFB1 and DON, respectively, fall within the majority of currently enforced regulatory and/or recommended limits. Furthermore, to assess the potential of the device to analyze real samples, the immunoassay was successfully validated for these 3 mycotoxins in a corn-based feed sample after a simple sample preparation procedure. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. [Acrylic resin removable partial dentures].

    de Baat, C; Witter, D J; Creugers, N H J

    2011-01-01

An acrylic resin removable partial denture is distinguished from other types of removable partial dentures by an all-acrylic resin base which is, in principle, solely supported by the edentulous regions of the tooth arch and in the maxilla also by the hard palate. When compared to the other types of removable partial dentures, the acrylic resin removable partial denture has 3 favourable aspects: the economic aspect, its aesthetic quality and the ease with which it can be extended and adjusted. Disadvantages are an increased risk of developing caries, gingivitis, periodontal disease, denture stomatitis, alveolar bone reduction, tooth migration, triggering of the gag reflex and damage to the acrylic resin base. Present-day indications are of a temporary or palliative nature or are motivated by economic factors. Special varieties of the acrylic resin removable partial denture are the spoon denture, the flexible denture fabricated of non-rigid acrylic resin, and the two-piece sectional denture. Furthermore, acrylic resin removable partial dentures can be supplied with clasps or reinforced by fibers or metal wires.

  1. Performance Confirmation Data Acquisition System

    D.W. Markman

    2000-01-01

    The purpose of this analysis is to identify and analyze concepts for the acquisition of data in support of the Performance Confirmation (PC) program at the potential subsurface nuclear waste repository at Yucca Mountain. The scope and primary objectives of this analysis are to: (1) Review the criteria for design as presented in the Performance Confirmation Data Acquisition/Monitoring System Description Document, by way of the Input Transmittal, Performance Confirmation Input Criteria (CRWMS M and O 1999c). (2) Identify and describe existing and potential new trends in data acquisition system software and hardware that would support the PC plan. The data acquisition software and hardware will support the field instruments and equipment that will be installed for the observation and perimeter drift borehole monitoring, and in-situ monitoring within the emplacement drifts. The exhaust air monitoring requirements will be supported by a data communication network interface with the ventilation monitoring system database. (3) Identify the concepts and features that a data acquisition system should have in order to support the PC process and its activities. (4) Based on PC monitoring needs and available technologies, further develop concepts of a potential data acquisition system network in support of the PC program and the Site Recommendation and License Application

  2. Exploiting fine-grain parallelism in recursive LU factorization

    Dongarra, Jack

    2012-01-01

    The LU factorization is an important numerical algorithm for solving systems of linear equations. This paper proposes a novel approach for computing the LU factorization in parallel on multicore architectures. It improves the overall performance and also achieves the numerical quality of the standard LU factorization with partial pivoting. While the update of the trailing submatrix is computationally intensive and highly parallel, the inherently problematic portion of the LU factorization is the panel factorization, due to its memory-bound characteristic and the atomicity of selecting the appropriate pivots. We remedy this in our new approach to LU factorization of (narrow and tall) panel submatrices. We use a parallel fine-grained recursive formulation of the factorization. It is based on conflict-free partitioning of the data and lock-less synchronization mechanisms. Our implementation lets the overall computation naturally flow with limited contention. Our recursive panel factorization provides the necessary performance increase for the inherently problematic portion of the LU factorization of square matrices. As our experiments have revealed, a large panel width results in a larger Amdahl's fraction, which is consistent with related efforts. The performance results of our implementation reveal superlinear speedup and far exceed what can be achieved with equivalent MKL and/or LAPACK routines. © 2012 The authors and IOS Press. All rights reserved.
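    The paper's fine-grained parallel panel scheme is not reproduced in this record, but the numerics any such scheme must match, LU with partial pivoting, can be sketched in plain Python (the function name and the small test matrix below are illustrative, not from the paper):

```python
def lu_partial_pivot(A):
    """Sequential LU factorization with partial pivoting (Doolittle form).

    Returns (perm, L, U) such that A[perm[i]] row-by-row equals (L @ U)[i].
    This is only the baseline numerics; the paper's contribution is a
    parallel, recursive formulation of the same factorization.
    """
    n = len(A)
    U = [row[:] for row in A]            # working copy, becomes U
    L = [[0.0] * n for _ in range(n)]
    perm = list(range(n))
    for k in range(n):
        # Partial pivoting: bring the largest |entry| in column k to row k.
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        U[k], U[p] = U[p], U[k]
        L[k], L[p] = L[p], L[k]          # rows k, p hold only multipliers so far
        perm[k], perm[p] = perm[p], perm[k]
        L[k][k] = 1.0
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]        # elimination multiplier
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return perm, L, U
```

    A parallel variant, including the recursive panel factorization described above, must produce the same permutation and factors up to rounding.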

  3. Classification and data acquisition with incomplete data

    Williams, David P.

    In remote-sensing applications, incomplete data can result when only a subset of sensors (e.g., radar, infrared, acoustic) are deployed at certain regions. The limitations of single sensor systems have spurred interest in employing multiple sensor modalities simultaneously. For example, in land mine detection tasks, different sensor modalities are better-suited to capture different aspects of the underlying physics of the mines. Synthetic aperture radar sensors may be better at detecting surface mines, while infrared sensors may be better at detecting buried mines. By employing multiple sensor modalities to address the detection task, the strengths of the disparate sensors can be exploited in a synergistic manner to improve performance beyond that which would be achievable with either single sensor alone. When multi-sensor approaches are employed, however, incomplete data can be manifested. If each sensor is located on a separate platform (e.g., aircraft), each sensor may interrogate---and hence collect data over---only partially overlapping areas of land. As a result, some data points may be characterized by data (i.e., features) from only a subset of the possible sensors employed in the task. Equivalently, this scenario implies that some data points will be missing features. Increasing focus in the future on using---and fusing data from---multiple sensors will make such incomplete-data problems commonplace. In many applications involving incomplete data, it is possible to acquire the missing data at a cost. In multi-sensor remote-sensing applications, data is acquired by deploying sensors to data points. Acquiring data is usually an expensive, time-consuming task, a fact that necessitates an intelligent data acquisition process. Incomplete data is not limited to remote-sensing applications, but rather, can arise in virtually any data set. In this dissertation, we address the general problem of classification when faced with incomplete data.
We also address the

  4. 48 CFR 49.603-5 - Cost-reimbursement contracts-partial termination.

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Cost-reimbursement....603-5 Cost-reimbursement contracts—partial termination. [Insert the following in Block 14 of SF 30, Amendment of Solicitation/Modification of Contract, for settlement agreements for cost-reimbursement...

  5. Compiler Technology for Parallel Scientific Computation

    Can Özturan

    1994-01-01

    There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with the equational programming language (EPL). Our approach is based on program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  6. Computer-Aided Parallelizer and Optimizer

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  7. Unpacking the cognitive map: the parallel map theory of hippocampal function.

    Jacobs, Lucia F; Schenk, Françoise

    2003-04-01

    In the parallel map theory, the hippocampus encodes space with 2 mapping systems. The bearing map is constructed primarily in the dentate gyrus from directional cues such as stimulus gradients. The sketch map is constructed within the hippocampus proper from positional cues. The integrated map emerges when data from the bearing and sketch maps are combined. Because the component maps work in parallel, the impairment of one can reveal residual learning by the other. Such parallel function may explain paradoxes of spatial learning, such as learning after partial hippocampal lesions, taxonomic and sex differences in spatial learning, and the function of hippocampal neurogenesis. By integrating evidence from physiology to phylogeny, the parallel map theory offers a unified explanation for hippocampal function.

  8. Sparse Parallel MRI Based on Accelerated Operator Splitting Schemes.

    Cai, Nian; Xie, Weisi; Su, Zhenghang; Wang, Shanshan; Liang, Dong

    2016-01-01

    Recently, the sparsity which is implicit in MR images has been successfully exploited for fast MR imaging with incomplete acquisitions. In this paper, two novel algorithms are proposed to solve the sparse parallel MR imaging problem, which consists of l1 regularization and fidelity terms. The two algorithms combine forward-backward operator splitting and Barzilai-Borwein schemes. Theoretically, the presented algorithms overcome the nondifferentiable property of the l1 regularization term. Meanwhile, they are able to treat a general matrix operator that may not be diagonalized by the fast Fourier transform and to ensure that a well-conditioned optimization system of equations is simply solved. In addition, we build connections between the proposed algorithms and the state-of-the-art existing methods and prove their convergence with a constant stepsize in the Appendix. Numerical results and comparisons with the advanced methods demonstrate the efficiency of the proposed algorithms.
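    The record does not include the algorithms themselves; as a hedged sketch of the forward-backward splitting idea they build on, here is plain ISTA for an l1-regularized least-squares problem with a fixed stepsize (the paper adds Barzilai-Borwein stepsize selection and handles general parallel-MRI operators; the names below are illustrative):

```python
def soft_threshold(v, t):
    """Proximal (backward) step for the nonsmooth t*||.||_1 term, elementwise."""
    return [max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0) for x in v]

def ista(A, b, lam, step, iters=100):
    """Forward-backward splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    step should be at most 1/L, with L the largest eigenvalue of A^T A.
    A fixed step is used here; Barzilai-Borwein schemes adapt it per iteration.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Forward (gradient) step on the smooth fidelity term: grad = A^T (Ax - b)
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Backward (proximal) step on the l1 term
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x
```

    The splitting is what handles the nondifferentiability: the gradient step never touches the l1 term, and the shrinkage step solves its proximal subproblem in closed form.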

  9. Parallelization of Rocket Engine Simulator Software (PRESS)

    Cezzar, Ruknet

    1998-01-01

    We have outlined our work in the last half of the funding period. We have shown how a demo package for RESSAP using MPI can be done. However, we also mentioned the difficulties with the UNIX platform. We have reiterated some of the suggestions made during the presentation of the progress of the project at the Fourth Annual HBCU Conference. Although we have discussed, in some detail, how TURBDES/PUMPDES software can be run in parallel using MPI, at present, we are unable to experiment any further with either MPI or PVM. Due to X windows not being implemented, we are also not able to experiment further with XPVM, which, it will be recalled, has a nice GUI interface. There are also some concerns, on our part, about MPI being an appropriate tool. The best thing about MPI is that it is public domain. Although plenty of documentation exists for the intricacies of using MPI, little information is available on its actual implementations. Other than very typical, somewhat contrived examples, such as the Jacobi algorithm for solving Laplace's equation, there are few examples which can readily be applied to real situations, such as in our case. In effect, the review of literature on both MPI and PVM, and there is a lot, indicates something similar to the enormous effort which was spent on LISP and LISP-like languages as tools for artificial intelligence research. During the development of a book on programming languages [12], when we searched the literature for very simple examples like taking averages, reading and writing records, multiplying matrices, etc., we could hardly find any! Yet, so much was said and done on that topic in academic circles. It appears that we faced the same problem with MPI, where despite significant documentation, we could not find even a simple example which supports coarse-grain parallelism involving only a few processes.
From the foregoing, it appears that a new direction may be required for more productive research during the extension period (10/19/98 - 10
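    The simple coarse-grain example the report could not find in the MPI literature, taking an average across a few workers, can at least be sketched with the Python standard library; threads stand in for MPI ranks so the sketch runs without MPI installed (with mpi4py, the per-rank partial sums would instead be combined by a reduce operation):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_mean(values, workers=4):
    """Coarse-grain mean: each worker reduces one chunk, the root combines.

    This mirrors the scatter / local-reduce / root-combine pattern of a
    small-process-count MPI program; ThreadPoolExecutor workers play the
    role of ranks in this dependency-free sketch.
    """
    n = len(values)
    chunk = (n + workers - 1) // workers
    parts = [values[i:i + chunk] for i in range(0, n, chunk)]

    def local_reduce(part):          # the per-"rank" work: one local reduction
        return sum(part), len(part)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(local_reduce, parts))
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count
```

    Each worker does one substantial local reduction and communicates once, which is exactly the coarse-grain, few-process structure the report describes.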

  10. Performance assessment of the SIMFAP parallel cluster at IFIN-HH Bucharest

    Adam, Gh.; Adam, S.; Ayriyan, A.; Dushanov, E.; Hayryan, E.; Korenkov, V.; Lutsenko, A.; Mitsyn, V.; Sapozhnikova, T.; Sapozhnikov, A.; Streltsova, O.; Buzatu, F.; Dulea, M.; Vasile, I.; Sima, A.; Visan, C.; Busa, J.; Pokorny, I.

    2008-01-01

    Performance assessment and case study outputs of the parallel SIMFAP cluster at IFIN-HH Bucharest point to its effective and reliable operation. A comparison with results on the supercomputing system in LIT-JINR Dubna adds insight on resource allocation for problem solving by parallel computing. The solution of models asking for very large numbers of knots in the discretization mesh needs the migration to high performance computing based on parallel cluster architectures. The acquisition of ready-to-use parallel computing facilities being beyond limited budgetary resources, the solution at IFIN-HH was to buy the hardware and the inter-processor network, and to implement through its own efforts the open software concerning both the operating system and the parallel computing standard. The present paper provides a report demonstrating the successful solution of these tasks. The implementation of the well-known HPL (High Performance LINPACK) Benchmark points to the effective and reliable operation of the cluster. The comparison of HPL outputs obtained on parallel clusters of different magnitudes shows that there is an optimum range of the order N of the linear algebraic system over which a given parallel cluster provides optimum parallel solutions. For the SIMFAP cluster, this range can be inferred to correspond to about 10^4 to 2 x 10^4 linear algebraic equations. For an algorithm of polynomial complexity N^α, the task sharing among p processors within a parallel solution mainly follows an (N/p)^α behaviour under peak performance achievement. Thus, while the problem complexity remains the same, a substantial decrease of the coefficient of the leading order of the polynomial complexity is achieved. (authors)
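    The cost model quoted in the abstract can be made concrete in one line (the constant c is illustrative): the complexity order N^α is unchanged, but the leading coefficient shrinks by p^α.

```python
def parallel_cost(N, p, alpha, c=1.0):
    """Peak-performance cost model from the abstract: c * (N/p)**alpha.

    Algebraically this equals (c / p**alpha) * N**alpha, i.e. the same
    polynomial order in N with its leading coefficient reduced by p**alpha.
    """
    return c * (N / p) ** alpha
```

    For an O(N^3) dense solver on p = 8 processors, the model predicts the per-processor cost coefficient drops by a factor of 8^3 = 512 while the cubic growth in N remains.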

  11. Parallel processing for fluid dynamics applications

    Johnson, G.M.

    1989-01-01

    The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several examples are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices.

  12. Design considerations for parallel graphics libraries

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  13. 48 CFR 970.2301 - Sustainable acquisition.

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Sustainable acquisition. 970.2301 Section 970.2301 Federal Acquisition Regulations System DEPARTMENT OF ENERGY AGENCY..., Renewable Energy Technologies, Occupational Safety and Drug-Free Work Place 970.2301 Sustainable acquisition...

  14. Program design of data acquisition in Windows

    Cai Jianxin; Yan Huawen

    2004-01-01

    Several methods for the design of data acquisition programs based on Microsoft Windows are introduced, and their respective advantages and disadvantages are analyzed. At the same time, the data acquisition modes applicable to each method are pointed out. This makes it convenient for programmers to develop data acquisition systems. (authors)

  15. Parallelism at Cern: real-time and off-line applications in the GP-MIMD2 project

    Calafiura, P.

    1997-01-01

    A wide range of general purpose high-energy physics applications, ranging from Monte Carlo simulation to data acquisition, from interactive data analysis to on-line filtering, have been ported, or developed, and run in parallel on IBM SP-2 and Meiko CS-2 CERN large multi-processor machines. The ESPRIT project GP-MIMD2 has been a catalyst for the interest in parallel computing at CERN. The project provided the 128-processor Meiko CS-2 system that is now successfully integrated in the CERN computing environment. The CERN experiment NA48 has been involved in the GP-MIMD2 project from the beginning. NA48 physicists run, as part of their day-to-day work, simulation and analysis programs parallelized using the message passing interface MPI. The CS-2 is also a vital component of the experiment's data acquisition system and will be used to calibrate in real time the 13000-channel liquid krypton calorimeter. (orig.)

  16. Three-dimensional SPECT [single photon emission computed tomography] reconstruction of combined cone beam and parallel beam data

    Jaszczak, R.J.; Jianying Li; Huili Wang; Coleman, R.E.

    1992-01-01

    Single photon emission computed tomography (SPECT) using cone beam (CB) collimation exhibits increased sensitivity compared with acquisition geometries using parallel (P) hole collimation. However, CB collimation has a smaller field-of-view which may result in truncated projections and image artifacts. A primary objective of this work is to investigate maximum likelihood-expectation maximization (ML-EM) methods to reconstruct simultaneously acquired parallel and cone beam (P and CB) SPECT data. Simultaneous P and CB acquisition can be performed with commercially available triple camera systems by using two cone-beam collimators and a single parallel-hole collimator. The loss in overall sensitivity (relative to the use of three CB collimators) is about 15 to 20%. The authors have developed three methods to combine P and CB data using modified ML-EM algorithms. (author)
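    The modified ML-EM algorithms themselves are not given in this record; a generic, geometry-agnostic ML-EM iteration, of which they are refinements, looks like this in plain Python (combining P and CB data amounts to stacking both geometries' projection rows into one system matrix; names are illustrative):

```python
def ml_em(a, y, iters=50):
    """Generic ML-EM reconstruction for emission tomography.

    a[i][j]: system matrix entry (probability that an emission in voxel j
             is detected in projection bin i); y[i]: measured counts.
    Update rule: x_j <- (x_j / sum_i a_ij) * sum_i a_ij * y_i / (A x)_i
    Rows of `a` may come from any mix of parallel-hole and cone-beam
    geometries; the paper's combined P-and-CB algorithms refine this idea.
    """
    nbins, nvox = len(a), len(a[0])
    x = [1.0] * nvox                                   # uniform initial estimate
    sens = [sum(a[i][j] for i in range(nbins)) for j in range(nvox)]
    for _ in range(iters):
        proj = [sum(a[i][j] * x[j] for j in range(nvox)) for i in range(nbins)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(nbins)]
        x = [x[j] * sum(a[i][j] * ratio[i] for i in range(nbins)) / sens[j]
             for j in range(nvox)]
    return x
```

    The multiplicative update keeps the estimate nonnegative, which is why ML-EM is a natural fit for count data in SPECT.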

  17. Parachute technique for partial penectomy

    Fernando Korkes

    2010-04-01

    PURPOSE: Penile carcinoma is a rare but mutilating malignancy. In this context, partial penectomy is the most commonly applied approach for best oncological results. We herein propose a simple modification of the classic technique of partial penectomy, for better cosmetic and functional results. TECHNIQUE: If partial penectomy is indicated, the present technique can bring additional benefits. Different from the classical technique, the urethra is spatulated only ventrally. An inverted "V" skin flap with 0.5 cm of extension is sectioned ventrally. The suture is performed with vicryl 4-0 in a "parachute" fashion, beginning from the ventral portion of the urethra and the "V" flap, followed by the "V" flap angles and then by the dorsal portion of the penis. After completion of the suture, a Foley catheter and light dressing are placed for 24 hours. CONCLUSIONS: Several complex reconstructive techniques have been previously proposed, but normally require specific surgical abilities, adequate patient selection and staged procedures. We believe that these reconstructive techniques are very useful in some specific subsets of patients. However, the technique herein proposed is a simple alternative that can be applied to all men after a partial penectomy, and takes the same amount of time as that in the classic technique. In conclusion, the "parachute" technique for penile reconstruction after partial amputation not only improves the appearance of the penis, but also maintains an adequate function.

  18. Synchronization Techniques in Parallel Discrete Event Simulation

    Lindén, Jonatan

    2018-01-01

    Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...
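    The abstract is truncated, but the sequential core that parallel discrete event simulation distributes can be sketched with the standard-library heap (illustrative names; a PDES partitions this event queue across processing elements and must synchronize so no element handles an event out of timestamp order):

```python
import heapq

def run_simulation(initial_events):
    """Minimal sequential discrete event simulation loop.

    Events are (time, seq, action) tuples in a priority queue; each action,
    when fired, may schedule further events as (delay, new_action) pairs.
    The integer seq breaks timestamp ties so tuples never compare functions.
    A parallel DES distributes this loop over processing elements, which is
    where the synchronization techniques of the thesis come in.
    """
    queue = list(initial_events)
    heapq.heapify(queue)
    fired_times, seq = [], len(queue)
    while queue:
        time, _, action = heapq.heappop(queue)   # always the earliest event
        fired_times.append(time)
        for delay, new_action in action(time):   # schedule follow-up events
            heapq.heappush(queue, (time + delay, seq, new_action))
            seq += 1
    return fired_times
```

    The causality constraint the thesis refers to is visible here: correctness depends on popping events in nondecreasing timestamp order, which is trivial with one queue and the central difficulty with many.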

  19. Parallel processing from applications to systems

    Moldovan, Dan I

    1993-01-01

    This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla

  20. Parallel processing for artificial intelligence 1

    Kanal, LN; Kumar, V; Suttner, CB

    1994-01-01

    Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering and data mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discus