WorldWideScience

Sample records for parallel acquisition technique

  1. New partially parallel acquisition technique in cerebral imaging: preliminary findings

    International Nuclear Information System (INIS)

    Tintera, Jaroslav; Gawehn, Joachim; Bauermann, Thomas; Vucurevic, Goran; Stoeter, Peter

    2004-01-01

    In MRI applications where short acquisition times are essential, increased acquisition speed often comes at the expense of image resolution and SNR. In such cases, the newly developed parallel acquisition techniques can provide images without these limitations in reasonably shortened measurement time. A newly designed eight-channel head coil array (i-PAT coil) allowing parallel acquisition of independently reconstructed images (GRAPPA mode) was tested for its applicability in neuroradiology. Image homogeneity was tested in a standard phantom and in healthy volunteers. BOLD signal changes were studied in a group of six volunteers using finger-tapping stimulation. Phantom studies revealed a marked drop of signal in the center of the image, even after the use of a normalization filter, and a marked increase of artifact power with reduction of measurement time, strongly depending on the combination of acceleration parameters. The additional application of a parallel acquisition technique such as GRAPPA decreases measurement time by about 30%; further reduction is often possible only at the expense of SNR. The technique performs best where imaging speed is paramount, such as CE MRA, although the time resolution still does not allow the acquisition of angiograms separating the arterial and venous phases. Significantly larger areas of BOLD activation were found using the i-PAT coil compared to the standard head coil. Because it is an eight-channel surface coil array, peripheral cortical structures profit from its high SNR, as in high-resolution imaging of small cortical dysplasias and in BOLD-contrast imaging of functional activation of cortical areas. In BOLD contrast imaging, susceptibility artifacts are reduced, but only if an appropriate combination of acceleration parameters is used. (orig.)
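
    A minimal sketch of the GRAPPA principle named above may help: missing phase-encode lines are synthesized, coil by coil, as linear combinations of acquired neighbouring lines, with weights fitted on a fully sampled autocalibration (ACS) block. Everything below (NumPy, the 1D kernel, the synthetic data and array sizes) is an illustrative assumption, not the vendor implementation tested in the study.

      import numpy as np

      # Hypothetical k-space: nc coils, undersampled by R = 2 along ky.
      rng = np.random.default_rng(0)
      nc, ny, nx = 8, 64, 64
      kspace = rng.standard_normal((nc, ny, nx)) + 1j * rng.standard_normal((nc, ny, nx))

      acs = slice(24, 40)  # fully sampled autocalibration block

      # Fit weights that predict a line from its two neighbours (all coils).
      src, tgt = [], []
      for j in range(acs.start + 1, acs.stop - 1):
          neigh = np.concatenate([kspace[:, j - 1, :], kspace[:, j + 1, :]])  # (2*nc, nx)
          src.append(neigh.T)
          tgt.append(kspace[:, j, :].T)
      W = np.linalg.lstsq(np.concatenate(src), np.concatenate(tgt), rcond=None)[0]

      # Apply the kernel: synthesize the skipped (odd) lines from acquired ones.
      recon = kspace.copy()
      for j in range(1, ny - 1, 2):
          neigh = np.concatenate([kspace[:, j - 1, :], kspace[:, j + 1, :]])
          recon[:, j, :] = (neigh.T @ W).T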

  2. New partially parallel acquisition technique in cerebral imaging: preliminary findings

    Energy Technology Data Exchange (ETDEWEB)

    Tintera, Jaroslav [Institute for Clinical and Experimental Medicine, Prague (Czech Republic); Gawehn, Joachim; Bauermann, Thomas; Vucurevic, Goran; Stoeter, Peter [University Clinic Mainz, Institute of Neuroradiology, Mainz (Germany)

    2004-12-01

    In MRI applications where short acquisition times are essential, increased acquisition speed often comes at the expense of image resolution and SNR. In such cases, the newly developed parallel acquisition techniques can provide images without these limitations in reasonably shortened measurement time. A newly designed eight-channel head coil array (i-PAT coil) allowing parallel acquisition of independently reconstructed images (GRAPPA mode) was tested for its applicability in neuroradiology. Image homogeneity was tested in a standard phantom and in healthy volunteers. BOLD signal changes were studied in a group of six volunteers using finger-tapping stimulation. Phantom studies revealed a marked drop of signal in the center of the image, even after the use of a normalization filter, and a marked increase of artifact power with reduction of measurement time, strongly depending on the combination of acceleration parameters. The additional application of a parallel acquisition technique such as GRAPPA decreases measurement time by about 30%; further reduction is often possible only at the expense of SNR. The technique performs best where imaging speed is paramount, such as CE MRA, although the time resolution still does not allow the acquisition of angiograms separating the arterial and venous phases. Significantly larger areas of BOLD activation were found using the i-PAT coil compared to the standard head coil. Because it is an eight-channel surface coil array, peripheral cortical structures profit from its high SNR, as in high-resolution imaging of small cortical dysplasias and in BOLD-contrast imaging of functional activation of cortical areas. In BOLD contrast imaging, susceptibility artifacts are reduced, but only if an appropriate combination of acceleration parameters is used. (orig.)

  3. Rapid musculoskeletal magnetic resonance imaging using integrated parallel acquisition techniques (IPAT) - Initial experiences

    International Nuclear Information System (INIS)

    Romaneehsen, B.; Oberholzer, K.; Kreitner, K.-F.; Mueller, L.P.

    2003-01-01

    Purpose: To investigate the feasibility of using multiple receiver coil elements for time-saving integrated parallel imaging techniques (iPAT) in traumatic musculoskeletal disorders. Material and methods: 6 patients with traumatic derangements of the knee, ankle and hip underwent MR imaging at 1.5 T. For signal detection in the knee and ankle, we used a 6-channel body array coil placed around the joints; for hip imaging, two 4-channel body array coils and two elements of the spine array coil were combined for signal detection. All patients were investigated with a standard imaging protocol that mainly consisted of different turbo spin-echo sequences (PD- and T2-weighted TSE with and without fat suppression, STIR). All sequences were repeated with an integrated parallel acquisition technique (iPAT) using a modified sensitivity encoding (mSENSE) technique with an acceleration factor of 2. Overall image quality was subjectively assessed using a five-point scale, as was the ability to detect pathologic findings. Results: Regarding overall image quality, there were no significant differences between standard imaging and imaging using mSENSE. All pathologies (occult fracture, meniscal tear, torn and interpositioned Hoffa's cleft, cartilage damage) were detected by both techniques. iPAT led to a 48% reduction of acquisition time compared with the standard technique. Additionally, the time savings with iPAT led to a decrease of pain-induced motion artifacts in two cases. Conclusion: In times of increasing cost pressure, iPAT using multiple coil elements appears to be an efficient and economical tool for fast musculoskeletal imaging with diagnostic performance comparable to conventional techniques. (orig.)

  4. Fast magnetic resonance imaging of the knee using a parallel acquisition technique (mSENSE): a prospective performance evaluation

    International Nuclear Information System (INIS)

    Kreitner, K.F.; Romaneehsen, Bernd; Oberholzer, Katja; Dueber, Christoph; Krummenauer, Frank; Mueller, L.P.

    2006-01-01

    The performance of a magnetic resonance (MR) imaging strategy using multiple receiver coil elements and integrated parallel imaging techniques (iPAT) in traumatic and degenerative disorders of the knee was evaluated and compared with a standard MR imaging protocol. Ninety patients with suspected internal derangements of the knee joint prospectively underwent MR imaging at 1.5 T. For signal detection, a 6-channel array coil was used. All patients were investigated with a standard imaging protocol consisting of different turbo spin-echo sequences (proton density (PD)- and T2-weighted TSE, with and without fat suppression) in three imaging planes. All sequences were repeated with an integrated parallel acquisition technique (iPAT) using the modified sensitivity encoding (mSENSE) algorithm with an acceleration factor of 2. Two radiologists independently evaluated and scored all images with regard to overall image quality, artefacts and pathologic findings. Agreement of the parallel ratings between readers and between imaging techniques was evaluated by means of pairwise kappa coefficients stratified for the area of evaluation. Agreement between the parallel readers for both iPAT imaging and the conventional technique, as well as between the imaging techniques, was encouraging, with inter-observer kappa values ranging between 0.78 and 0.98 for both imaging techniques and inter-method kappa values ranging between 0.88 and 1.00 for both clinical readers. All pathological findings (e.g. occult fractures, meniscal and cruciate ligament tears, torn and interpositioned Hoffa's cleft, cartilage damage) were detected by both techniques with comparable performance. The use of iPAT led to a 48% reduction of acquisition time compared with the standard technique. Parallel imaging using mSENSE proved to be an efficient and economic tool for fast musculoskeletal MR imaging of the knee joint with comparable
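
    mSENSE is an image-space parallel imaging method; its core unfolding step for an acceleration factor of 2 can be sketched in a few lines. Each aliased pixel is the sum of two object pixels half a field of view apart, and the coil sensitivity profiles allow a per-pixel least-squares solve to separate them. The sensitivities and object below are synthetic stand-ins, so this shows the principle only, not the scanner's algorithm.

      import numpy as np

      rng = np.random.default_rng(1)
      nc, ny, nx = 6, 128, 128
      sens = rng.standard_normal((nc, ny, nx)) + 1j * rng.standard_normal((nc, ny, nx))
      obj = rng.standard_normal((ny, nx)) + 1j * rng.standard_normal((ny, nx))

      # Simulate R = 2 folding along y: coil images alias pixels y and y + ny/2.
      half = ny // 2
      folded = (sens * obj)[:, :half, :] + (sens * obj)[:, half:, :]  # (nc, ny/2, nx)

      recon = np.zeros((ny, nx), dtype=complex)
      for y in range(half):
          for x in range(nx):
              A = np.stack([sens[:, y, x], sens[:, y + half, x]], axis=1)  # (nc, 2)
              rho, *_ = np.linalg.lstsq(A, folded[:, y, x], rcond=None)
              recon[y, x], recon[y + half, x] = rho

      print("max unfolding error:", np.abs(recon - obj).max())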

  5. Rapid musculoskeletal magnetic resonance imaging using integrated parallel acquisition techniques (IPAT) - Initial experiences

    Energy Technology Data Exchange (ETDEWEB)

    Romaneehsen, B.; Oberholzer, K.; Kreitner, K.-F. [Johannes Gutenberg-Univ. Mainz (Germany). Klinik und Poliklinik fuer Radiologie; Mueller, L.P. [Johannes Gutenberg-Univ. Mainz (Germany). Klinik und Poliklinik fuer Unfallchirurgie

    2003-09-01

    Purpose: To investigate the feasibility of using multiple receiver coil elements for time-saving integrated parallel imaging techniques (iPAT) in traumatic musculoskeletal disorders. Material and methods: 6 patients with traumatic derangements of the knee, ankle and hip underwent MR imaging at 1.5 T. For signal detection in the knee and ankle, we used a 6-channel body array coil placed around the joints; for hip imaging, two 4-channel body array coils and two elements of the spine array coil were combined for signal detection. All patients were investigated with a standard imaging protocol that mainly consisted of different turbo spin-echo sequences (PD- and T2-weighted TSE with and without fat suppression, STIR). All sequences were repeated with an integrated parallel acquisition technique (iPAT) using a modified sensitivity encoding (mSENSE) technique with an acceleration factor of 2. Overall image quality was subjectively assessed using a five-point scale, as was the ability to detect pathologic findings. Results: Regarding overall image quality, there were no significant differences between standard imaging and imaging using mSENSE. All pathologies (occult fracture, meniscal tear, torn and interpositioned Hoffa's cleft, cartilage damage) were detected by both techniques. iPAT led to a 48% reduction of acquisition time compared with the standard technique. Additionally, the time savings with iPAT led to a decrease of pain-induced motion artifacts in two cases. Conclusion: In times of increasing cost pressure, iPAT using multiple coil elements appears to be an efficient and economical tool for fast musculoskeletal imaging with diagnostic performance comparable to conventional techniques. (orig.) [German original, translated: Purpose: Use of integrated parallel acquisition techniques (iPAT) to shorten examination times in musculoskeletal injuries. Material and methods: 6 patients with knee, ankle or hip trauma were examined at 1.5 T

  6. VIBE with parallel acquisition technique - a novel approach to dynamic contrast-enhanced MR imaging of the liver

    International Nuclear Information System (INIS)

    Dobritz, M.; Radkow, T.; Bautz, W.; Fellner, F.A.; Nittka, M.

    2002-01-01

    Purpose: The VIBE (volume-interpolated breath-hold examination) sequence in combination with a parallel acquisition technique (iPAT: integrated parallel acquisition technique) allows dynamic contrast-enhanced MRI of the liver with high temporal and spatial resolution. The aim of this study was to obtain first clinical experience with this technique for the detection and characterization of focal liver lesions. Materials and Methods: We examined 10 consecutive patients using a 1.5 T MR system (gradient field strength 30 mT/m) with a phased-array coil combination. The following sequences were acquired: T2-w TSE and T1-w FLASH; after administration of gadolinium, 6 VIBE sequences with iPAT (TR/TE/matrix/partition thickness/time of acquisition: 6.2 ms/3.2 ms/256 x 192/4 mm/13 s), as well as T1-weighted FLASH with fat saturation. Two observers evaluated the different sequences with regard to the number of lesions and their nature (benign or malignant). The following lesions were found: hepatocellular carcinoma (5 patients), hemangioma (2), metastasis (1), cyst (1), adenoma (1). Results: The VIBE sequences were superior for the detection of lesions with arterial hyperperfusion, with a total of 33 focal lesions; 21 lesions were found with T2-w TSE and 20 with plain T1-weighted FLASH. Diagnostic accuracy increased with the VIBE sequence in comparison to the other sequences. Conclusion: VIBE with iPAT allows MR imaging of the liver with high spatial and temporal resolution, providing dynamic contrast-enhanced information about the whole liver. This may lead to improved detection of liver lesions, especially hepatocellular carcinoma. (orig.)

  7. MR-sialography: optimisation and evaluation of an ultra-fast sequence in parallel acquisition technique and different functional conditions of salivary glands

    International Nuclear Information System (INIS)

    Habermann, C.R.; Cramer, M.C.; Aldefeld, D.; Weiss, F.; Kaul, M.G.; Adam, G.; Graessner, J.; Reitmeier, F.; Jaehne, M.; Petersen, K.U.

    2005-01-01

    Purpose: To optimise a fast sequence for MR sialography and to compare parallel and non-parallel acquisition techniques. Additionally, the effect of oral stimulation on image quality was evaluated. Material and Methods: All examinations were performed using a 1.5-T superconducting system. After development of a sufficient sequence for MR sialography, a single-shot turbo spin-echo sequence (ss-TSE) with an acquisition time of 2.8 s was used in transverse and oblique sagittal orientation in 27 healthy volunteers. All images were acquired with and without a parallel imaging technique. The assessment of the ductal system of the submandibular and parotid glands was performed using a 1 to 5 visual scale for each side separately. Images were evaluated by four independent, experienced radiologists. For statistical evaluation, an ANOVA with post-hoc comparisons was used with an overall two-tailed significance level of p=0.05. For evaluation of interobserver variability, an intraclass correlation was computed, with a correlation >0.8 taken to indicate high correlation. Results: All parts of the salivary excretory ducts could be visualised in all volunteers, with an overall rating for all ducts of 2.26 (SD±1.09). A high correlation was obtained between the four observers, with an intraclass correlation of 0.9475. No significant influence of the slice angulation was found (p=0.74). In all healthy volunteers the visibility of the excretory ducts improved significantly after oral administration of a sialogogue (p<0.001; η²=0.049). The use of a parallel imaging technique did not improve visualisation, showing a significant loss of image quality compared to an acquisition technique without parallel imaging (p<0.001; η²=0.013). Conclusion: The optimised ss-TSE MR sialography appears to be a fast and sufficient technique for visualisation of the excretory ducts of the main salivary glands, with no elaborate post-processing needed. To improve results of MR

  8. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    International Nuclear Information System (INIS)

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

    We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors and transfers data via Ethernet. The technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remotely controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG-sensor-based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control so as to optimize the data acquisition rate. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased several-fold without affecting the system's operating performance in either local data acquisition or remote process control.
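
    A hedged sketch of what a self-adaptive sampling loop of this kind could look like: the local acquisition rate is throttled by the occupancy of the buffer feeding the remote link. All names, thresholds and the back-off policy below are illustrative assumptions, not the authors' implementation.

      import time
      from collections import deque

      buffer = deque(maxlen=10_000)
      rate_hz = 1000.0                      # current FBG interrogation rate
      MIN_HZ, MAX_HZ = 10.0, 5000.0

      def read_fbg_sample():
          return 1550.0                     # placeholder wavelength reading (nm)

      def remote_link_drain(n):
          # placeholder: pretend the Ethernet link ships out up to n samples
          for _ in range(min(n, len(buffer))):
              buffer.popleft()

      for _ in range(100):                  # a few iterations of the control loop
          buffer.append(read_fbg_sample())
          remote_link_drain(800)
          occupancy = len(buffer) / buffer.maxlen
          if occupancy > 0.8:               # falling behind: back off
              rate_hz = max(MIN_HZ, rate_hz * 0.5)
          elif occupancy < 0.2:             # headroom: speed back up
              rate_hz = min(MAX_HZ, rate_hz * 1.1)
          time.sleep(1.0 / rate_hz)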

  9. MR sialography: evaluation of an ultra-fast sequence in consideration of a parallel acquisition technique and different functional conditions in patients with salivary gland diseases

    International Nuclear Information System (INIS)

    Petridis, C.; Ries, T.; Cramer, M.C.; Graessner, J.; Petersen, K.U.; Reitmeier, F.; Jaehne, M.; Weiss, F.; Adam, G.; Habermann, C.R.

    2007-01-01

    Purpose: To evaluate an ultra-fast sequence for MR sialography requiring no post-processing, and to compare the acquisition technique with regard to the effect of oral stimulation and of a parallel acquisition technique in patients with salivary gland diseases. Materials and Methods: 128 patients with salivary gland disease were prospectively examined using a 1.5-T superconducting system with a 30 mT/m maximum gradient capability and a maximum slew rate of 125 mT/m/s. A single-shot turbo spin-echo sequence (ss-TSE) with an acquisition time of 2.8 s was used in transverse and oblique sagittal orientation. All images were obtained with and without a parallel imaging technique. The evaluation of the ductal system of the parotid and submandibular glands was performed using a visual scale of 1-5 for each side. The images were assessed by two independent, experienced radiologists. An ANOVA with post-hoc comparisons and an overall two-tailed significance level of p=0.05 was used for the statistical evaluation. An intraclass correlation was computed to evaluate interobserver variability, with a correlation of >0.8 taken to indicate high correlation. Results: Depending on the diagnosed disease and in the absence of duct disruption, all parts of the excretory ducts could be visualized in all patients using the developed technique, with an overall rating for all ducts of 2.70 (SD±0.89). A high correlation was achieved between the two observers, with an intraclass correlation of 0.73. Oral administration of a sialogogue improved the visibility of the excretory ducts significantly (p<0.001). In contrast, the use of a parallel imaging technique led to a significant decrease in image quality (p=0.011). (orig.)

  10. MR-sialography: optimisation and evaluation of an ultra-fast sequence in parallel acquisition technique and different functional conditions of salivary glands; MR-Sialographie: Optimierung und Bewertung ultraschneller Sequenzen mit paralleler Bildgebung und oraler Stimulation

    Energy Technology Data Exchange (ETDEWEB)

    Habermann, C.R.; Cramer, M.C.; Aldefeld, D.; Weiss, F.; Kaul, M.G.; Adam, G. [Radiologisches Zentrum, Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie, Universitaetsklinikum Hamburg-Eppendorf (Germany); Graessner, J. [Siemens Medical Systems, Hamburg (Germany); Reitmeier, F.; Jaehne, M. [Kopf- und Hautzentrum, Klinik und Poliklinik fuer Hals-, Nasen- und Ohrenheilkunde, Universitaetsklinikum Hamburg-Eppendorf (Germany); Petersen, K.U. [Zentrum fuer Psychosoziale Medizin, Klinik und Poliklinik fuer Psychiatrie und Psychotherapie, Universitaetsklinikum Hamburg-Eppendorf (Germany)

    2005-04-01

    Purpose: To optimise a fast sequence for MR sialography and to compare parallel and non-parallel acquisition techniques. Additionally, the effect of oral stimulation on image quality was evaluated. Material and Methods: All examinations were performed using a 1.5-T superconducting system. After development of a sufficient sequence for MR sialography, a single-shot turbo spin-echo sequence (ss-TSE) with an acquisition time of 2.8 s was used in transverse and oblique sagittal orientation in 27 healthy volunteers. All images were acquired with and without a parallel imaging technique. The assessment of the ductal system of the submandibular and parotid glands was performed using a 1 to 5 visual scale for each side separately. Images were evaluated by four independent, experienced radiologists. For statistical evaluation, an ANOVA with post-hoc comparisons was used with an overall two-tailed significance level of p=0.05. For evaluation of interobserver variability, an intraclass correlation was computed, with a correlation >0.8 taken to indicate high correlation. Results: All parts of the salivary excretory ducts could be visualised in all volunteers, with an overall rating for all ducts of 2.26 (SD±1.09). A high correlation was obtained between the four observers, with an intraclass correlation of 0.9475. No significant influence of the slice angulation was found (p=0.74). In all healthy volunteers the visibility of the excretory ducts improved significantly after oral administration of a sialogogue (p<0.001; η²=0.049). The use of a parallel imaging technique did not improve visualisation, showing a significant loss of image quality compared to an acquisition technique without parallel imaging (p<0.001; η²=0.013). Conclusion: The optimised ss-TSE MR sialography appears to be a fast and sufficient technique for visualisation of the excretory ducts of the main salivary glands, with no elaborate post-processing needed

  11. VIBE with parallel acquisition technique - a novel approach to dynamic contrast-enhanced MR imaging of the liver; VIBE mit paralleler Akquisitionstechnik - eine neue Moeglichkeit der dynamischen kontrastverstaerkten MRT der Leber

    Energy Technology Data Exchange (ETDEWEB)

    Dobritz, M.; Radkow, T.; Bautz, W.; Fellner, F.A. [Inst. fuer Diagnostische Radiologie, Friedrich-Alexander-Univ. Erlangen-Nuernberg (Germany); Nittka, M. [Siemens Medical Solutions, Erlangen (Germany)

    2002-06-01

    Purpose: The VIBE (volume-interpolated breath-hold examination) sequence in combination with a parallel acquisition technique (iPAT: integrated parallel acquisition technique) allows dynamic contrast-enhanced MRI of the liver with high temporal and spatial resolution. The aim of this study was to obtain first clinical experience with this technique for the detection and characterization of focal liver lesions. Materials and Methods: We examined 10 consecutive patients using a 1.5 T MR system (gradient field strength 30 mT/m) with a phased-array coil combination. The following sequences were acquired: T2-w TSE and T1-w FLASH; after administration of gadolinium, 6 VIBE sequences with iPAT (TR/TE/matrix/partition thickness/time of acquisition: 6.2 ms/3.2 ms/256 x 192/4 mm/13 s), as well as T1-weighted FLASH with fat saturation. Two observers evaluated the different sequences with regard to the number of lesions and their nature (benign or malignant). The following lesions were found: hepatocellular carcinoma (5 patients), hemangioma (2), metastasis (1), cyst (1), adenoma (1). Results: The VIBE sequences were superior for the detection of lesions with arterial hyperperfusion, with a total of 33 focal lesions; 21 lesions were found with T2-w TSE and 20 with plain T1-weighted FLASH. Diagnostic accuracy increased with the VIBE sequence in comparison to the other sequences. Conclusion: VIBE with iPAT allows MR imaging of the liver with high spatial and temporal resolution, providing dynamic contrast-enhanced information about the whole liver. This may lead to improved detection of liver lesions, especially hepatocellular carcinoma. (orig.) [German original, translated: Purpose: The VIBE sequence (volume-interpolated breath-hold examination) combined with parallel imaging (iPAT) enables dynamic contrast-enhanced examination of the liver at high temporal and spatial resolution. The aim was to gain initial clinical experience with this technique in the detection of focal liver lesions

  12. Fast implementations of 3D PET reconstruction using vector and parallel programming techniques

    International Nuclear Information System (INIS)

    Guerrero, T.M.; Cherry, S.R.; Dahlbom, M.; Ricci, A.R.; Hoffman, E.J.

    1993-01-01

    Computationally intensive techniques that offer potential clinical use have arisen in nuclear medicine. Examples include iterative reconstruction, 3D PET data acquisition and reconstruction, and 3D image volume manipulation including image registration. One obstacle to achieving clinical acceptance of these techniques is the computation time required. This study focuses on methods to reduce the computation time for 3D PET reconstruction through the use of fast computer hardware, vector and parallel programming techniques, and algorithm optimization. The strengths and weaknesses of i860-microprocessor-based workstation accelerator boards are investigated in implementations of 3D PET reconstruction
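
    As one illustration of the parallel-programming approach described, the sketch below distributes projection angles of a backprojection-style reconstruction over worker processes and sums the partial volumes. The "backprojection" is a trivial stand-in for the real 3D PET operator, and Python multiprocessing stands in for the paper's i860 accelerator boards.

      import numpy as np
      from multiprocessing import Pool

      N = 64                                           # volume is N^3

      def backproject(angles):
          vol = np.zeros((N, N, N))
          for theta in angles:
              # stand-in for ray-driven backprojection of one projection view
              vol += np.cos(theta) ** 2 / len(angles)
          return vol

      if __name__ == "__main__":
          all_angles = np.linspace(0, np.pi, 192, endpoint=False)
          chunks = np.array_split(all_angles, 4)       # four workers, static split
          with Pool(4) as pool:
              partials = pool.map(backproject, chunks)
          volume = np.sum(partials, axis=0)            # combine partial volumes
          print(volume.shape)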

  13. A tomograph VMEbus parallel processing data acquisition system

    International Nuclear Information System (INIS)

    Atkins, M.S.; Wilkinson, N.A.; Rogers, J.G.

    1988-11-01

    This paper describes a VME-based data acquisition system suitable for the development of Positron Volume Imaging tomographs, which use 3-D data for improved image resolution over slice-oriented tomographs. The data acquisition must be flexible enough to accommodate several 3-D reconstruction algorithms; hence, a software-based system is most suitable. Furthermore, because of the increased dimensions and resolution of volume imaging tomographs, the raw data event rate is greater than that of slice-oriented machines. These dual requirements are met by our data acquisition system. Flexibility is achieved through an array of processors connected over a VMEbus, operating asynchronously and in parallel. High raw data throughput is achieved using a dedicated high-speed data transfer device available for the VMEbus. The device can attain a raw data rate of 2.5 million coincidence events per second for raw events which are 64 bits wide. Real-time data acquisition and pre-processing requirements can be met by about forty 20 MHz Motorola 68020/68881 processors
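
    The quoted figures can be sanity-checked with a two-line calculation (assuming 64-bit events and decimal megabytes):

      # 2.5 million coincidence events/s at 8 bytes each:
      events_per_s = 2.5e6
      bytes_per_event = 64 / 8
      print(events_per_s * bytes_per_event / 1e6, "MB/s")        # 20.0 MB/s raw rate

      # If ~40 processors share the stream, the per-event budget per CPU is:
      print(40 / events_per_s * 1e6, "microseconds per event")   # 16.0 us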

  14. A tomograph VMEbus parallel processing data acquisition system

    International Nuclear Information System (INIS)

    Wilkinson, N.A.; Rogers, J.G.; Atkins, M.S.

    1989-01-01

    This paper describes a VME-based data acquisition system suitable for the development of Positron Volume Imaging tomographs, which use 3-D data for improved image resolution over slice-oriented tomographs. The data acquisition must be flexible enough to accommodate several 3-D reconstruction algorithms; hence, a software-based system is most suitable. Furthermore, because of the increased dimensions and resolution of volume imaging tomographs, the raw data event rate is greater than that of slice-oriented machines. These dual requirements are met by our data acquisition system. Flexibility is achieved through an array of processors connected over a VMEbus, operating asynchronously and in parallel. High raw data throughput is achieved using a dedicated high-speed data transfer device available for the VMEbus. The device can attain a raw data rate of 2.5 million coincidence events per second for raw events which are 64 bits wide

  15. Characterization of Harmonic Signal Acquisition with Parallel Dipole and Multipole Detectors

    Science.gov (United States)

    Park, Sung-Gun; Anderson, Gordon A.; Bruce, James E.

    2018-04-01

    Fourier transform ion cyclotron resonance mass spectrometry (FTICR-MS) is a powerful instrument for the study of complex biological samples due to its high resolution and mass measurement accuracy. However, the relatively long signal acquisition periods needed to achieve high resolution can serve to limit applications of FTICR-MS. The use of multiple pairs of detector electrodes enables detection of harmonic frequencies present at integer multiples of the fundamental cyclotron frequency, and the obtained resolving power for a given acquisition period increases linearly with the order of harmonic signal. However, harmonic signal detection also increases spectral complexity and presents challenges for interpretation. In the present work, ICR cells with independent dipole and harmonic detection electrodes and preamplifiers are demonstrated. A benefit of this approach is the ability to independently acquire fundamental and multiple harmonic signals in parallel using the same ions under identical conditions, enabling direct comparison of achieved performance as parameters are varied. Spectra from harmonic signals showed generally higher resolving power than spectra acquired with fundamental signals and equal signal duration. In addition, the maximum observed signal to noise (S/N) ratio from harmonic signals exceeded that of fundamental signals by 50 to 100%. Finally, parallel detection of fundamental and harmonic signals enables deconvolution of overlapping harmonic signals since observed fundamental frequencies can be used to unambiguously calculate all possible harmonic frequencies. Thus, the present application of parallel fundamental and harmonic signal acquisition offers a general approach to improve utilization of harmonic signals to yield high-resolution spectra with decreased acquisition time.
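
    The deconvolution argument at the end of the abstract can be made concrete: fundamental frequencies observed on the dipole detector fix where every harmonic must appear, so peaks in a harmonic spectrum can be assigned by nearest match. The frequencies below are invented for illustration, not instrument data.

      import numpy as np

      fundamentals = np.array([107_000.0, 154_300.0, 229_100.0])    # Hz, dipole pair
      harmonic_peaks = np.array([214_050.0, 308_640.0, 458_210.0])  # Hz, 2nd-harmonic spectrum

      order = 2
      predicted = order * fundamentals  # harmonics sit at integer multiples
      for peak in harmonic_peaks:
          i = np.argmin(np.abs(predicted - peak))
          print(f"peak {peak:9.0f} Hz -> {order}x fundamental {fundamentals[i]:9.0f} Hz "
                f"(offset {peak - predicted[i]:+.0f} Hz)")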

  16. Effects of various event building techniques on data acquisition system architectures

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.

    1990-04-01

    The preliminary specifications for various new detectors throughout the world including those at the Superconducting Super Collider (SSC) already make it clear that existing event building techniques will be inadequate for the high trigger and data rates anticipated for these detectors. In the world of high-energy physics many approaches have been taken to solving the problem of reading out data from a whole detector and presenting a complete event to the physicist, while simultaneously keeping deadtime to a minimum. This paper includes a review of multiprocessor and telecommunications interconnection networks and how these networks relate to event building in general, illustrating advantages of the various approaches. It presents a more detailed study of recent research into new event building techniques which incorporate much greater parallelism to better accommodate high data rates. The future in areas such as front-end electronics architectures, high speed data links, event building and online processor arrays is also examined. Finally, details of a scalable parallel data acquisition system architecture being developed at Fermilab are given. 35 refs., 31 figs., 1 tab

  17. Data acquisition techniques using PC

    CERN Document Server

    Austerlitz, Howard

    1991-01-01

    Data Acquisition Techniques Using Personal Computers contains all the information required by a technical professional (engineer, scientist, technician) to implement a PC-based acquisition system. Including both basic tutorial information as well as some advanced topics, this work is suitable as a reference book for engineers or as a supplemental text for engineering students. It gives the reader enough understanding of the topics to implement a data acquisition system based on commercial products. A reader can alternatively learn how to custom-build hardware or write his or her own software.

  18. Parallel image-acquisition in continuous-wave electron paramagnetic resonance imaging with a surface coil array: Proof-of-concept experiments

    Science.gov (United States)

    Enomoto, Ayano; Hirata, Hiroshi

    2014-02-01

    This article describes a feasibility study of parallel image-acquisition using a two-channel surface coil array in continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Parallel EPR imaging was performed by multiplexing of EPR detection in the frequency domain. The parallel acquisition system consists of two surface coil resonators and radiofrequency (RF) bridges for EPR detection. To demonstrate the feasibility of this method of parallel image-acquisition with a surface coil array, three-dimensional EPR imaging was carried out using a tube phantom. Technical issues in the multiplexing method of EPR detection were also clarified. We found that degradation in the signal-to-noise ratio due to the interference of RF carriers is a key problem to be solved.
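
    A toy model of the frequency-domain multiplexing scheme described above: each channel's signal rides on its own carrier, and band-pass masks in the FFT domain separate them again. Carriers, bandwidths and signals are assumptions for the demo, and the final low-pass filtering after demodulation is omitted for brevity.

      import numpy as np

      fs = 1e6
      t = np.arange(8192) / fs
      ch1 = np.sin(2 * np.pi * 3e3 * t)            # channel-1 signal (baseband)
      ch2 = np.sin(2 * np.pi * 5e3 * t)            # channel-2 signal (baseband)
      f1, f2 = 100e3, 200e3                        # distinct RF carriers
      mux = ch1 * np.cos(2 * np.pi * f1 * t) + ch2 * np.cos(2 * np.pi * f2 * t)

      spec = np.fft.rfft(mux)
      freqs = np.fft.rfftfreq(len(t), 1 / fs)

      def demux(carrier, bw=20e3):
          mask = np.abs(freqs - carrier) < bw      # band-pass around the carrier
          band = np.fft.irfft(spec * mask, n=len(t))
          return band * np.cos(2 * np.pi * carrier * t) * 2  # coherent demodulation

      rec1, rec2 = demux(f1), demux(f2)            # low-pass filtering still needed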

  19. Improving parallel imaging by jointly reconstructing multi-contrast data.

    Science.gov (United States)

    Bilgic, Berkin; Kim, Tae Hyung; Liao, Congyu; Manhard, Mary Kate; Wald, Lawrence L; Haldar, Justin P; Setsompop, Kawin

    2018-08-01

    To develop parallel imaging techniques that simultaneously exploit coil sensitivity encoding, image phase prior information, similarities across multiple images, and complementary k-space sampling for highly accelerated data acquisition. We introduce joint virtual coil (JVC)-generalized autocalibrating partially parallel acquisitions (GRAPPA) to jointly reconstruct data acquired with different contrast preparations, and show its application in 2D, 3D, and simultaneous multi-slice (SMS) acquisitions. We extend the joint parallel imaging concept to exploit limited support and smooth phase constraints through Joint (J-) LORAKS formulation. J-LORAKS allows joint parallel imaging from limited autocalibration signal region, as well as permitting partial Fourier sampling and calibrationless reconstruction. We demonstrate highly accelerated 2D balanced steady-state free precession with phase cycling, SMS multi-echo spin echo, 3D multi-echo magnetization-prepared rapid gradient echo, and multi-echo gradient recalled echo acquisitions in vivo. Compared to conventional GRAPPA, proposed joint acquisition/reconstruction techniques provide more than 2-fold reduction in reconstruction error. JVC-GRAPPA takes advantage of additional spatial encoding from phase information and image similarity, and employs different sampling patterns across acquisitions. J-LORAKS achieves a more parsimonious low-rank representation of local k-space by considering multiple images as additional coils. Both approaches provide dramatic improvement in artifact and noise mitigation over conventional single-contrast parallel imaging reconstruction. Magn Reson Med 80:619-632, 2018. © 2018 International Society for Magnetic Resonance in Medicine.

  20. Parallel transmission techniques in magnetic resonance imaging: experimental realization, applications and perspectives

    International Nuclear Information System (INIS)

    Ullmann, P.

    2007-06-01

    The primary objective of this work was the first experimental realization of parallel RF transmission for accelerating spatially selective excitation in magnetic resonance imaging. Furthermore, basic aspects regarding the performance of this technique were investigated, potential risks regarding the specific absorption rate (SAR) were considered and feasibility studies under application-oriented conditions as first steps towards a practical utilisation of this technique were undertaken. At first, based on the RF electronics platform of the Bruker Avance MRI systems, the technical foundations were laid to perform simultaneous transmission of individual RF waveforms on different RF channels. Another essential requirement for the realization of Parallel Excitation (PEX) was the design and construction of suitable RF transmit arrays with elements driven by separate transmit channels. In order to image the PEX results two imaging methods were implemented based on a spin-echo and a gradient-echo sequence, in which a parallel spatially selective pulse was included as an excitation pulse. In the course of this work PEX experiments were successfully performed on three different MRI systems, a 4.7 T and a 9.4 T animal system and a 3 T human scanner, using 5 different RF coil setups in total. In the last part of this work investigations regarding possible applications of Parallel Excitation were performed. A first study comprised experiments of slice-selective B1 inhomogeneity correction by using 3D-selective Parallel Excitation. The investigations were performed in a phantom as well as in a rat fixed in paraformaldehyde solution. In conjunction with these experiments a novel method of calculating RF pulses for spatially selective excitation based on a so-called Direct Calibration approach was developed, which is particularly suitable for this type of experiments. In the context of these experiments it was demonstrated how to combine the advantages of parallel transmission

  21. Synchronization Techniques in Parallel Discrete Event Simulation

    OpenAIRE

    Lindén, Jonatan

    2018-01-01

    Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...
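
    A minimal sketch of the causality constraint at the heart of conservative synchronization, one family of the techniques this field uses: a logical process may only execute events with timestamps no later than its neighbours' promised time plus a fixed lookahead. Class names and values below are illustrative, not taken from the thesis.

      import heapq

      LOOKAHEAD = 1.0

      class LP:
          def __init__(self, name):
              self.name, self.clock, self.queue = name, 0.0, []
              self.neighbour_promise = 0.0  # latest (null or real) message time seen

          def safe_until(self):
              return self.neighbour_promise + LOOKAHEAD

          def run(self):
              executed = []
              while self.queue and self.queue[0][0] <= self.safe_until():
                  ts, ev = heapq.heappop(self.queue)
                  self.clock = ts
                  executed.append((ts, ev))
              return executed

      a = LP("A")
      for ts, ev in [(0.5, "x"), (1.2, "y"), (3.0, "z")]:
          heapq.heappush(a.queue, (ts, ev))
      a.neighbour_promise = 1.5   # neighbour promises nothing earlier than t = 1.5
      print(a.run())              # executes x and y; z must wait for a new promise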

  22. Parallel PIC plasma simulation through particle decomposition techniques

    International Nuclear Information System (INIS)

    Briguglio, S.; Vlad, G.; Di Martino, B.; Naples, Univ. 'Federico II'

    1998-02-01

    Particle-in-cell (PIC) codes are among the major candidates to yield a satisfactory description of the detail of kinetic effects, such as the resonant wave-particle interaction, relevant in determining the transport mechanism in magnetically confined plasmas. A significant improvement of the simulation performance of such codes can be expected from parallelization, e.g., by distributing the particle population among several parallel processors. Parallelization of a hybrid magnetohydrodynamic-gyrokinetic code has been accomplished within the High Performance Fortran (HPF) framework, and tested on the IBM SP2 parallel system, using a 'particle decomposition' technique. The adopted technique requires a moderate effort in porting the code to parallel form and results in intrinsic load balancing and modest inter-processor communication. The performance tests confirm the hypothesis of the high effectiveness of the strategy when targeted towards moderately parallel architectures. Optimal use of resources is also discussed with reference to a specific physics problem
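
    The particle-decomposition strategy can be illustrated in a few lines: each processor owns an equal share of the particles (not of the spatial domain), deposits charge on its own copy of the grid, and the copies are summed, which is the inter-processor reduction step. The loop below is a serial stand-in for that parallel reduction.

      import numpy as np

      rng = np.random.default_rng(2)
      n_particles, n_grid, n_proc = 100_000, 256, 4
      x = rng.uniform(0, 1, n_particles)           # particle positions in [0, 1)

      def deposit(xs):
          grid = np.zeros(n_grid)
          idx = (xs * n_grid).astype(int)          # nearest-grid-point deposition
          np.add.at(grid, idx, 1.0)
          return grid

      # Each "processor" deposits its slice of the particle population:
      partials = [deposit(chunk) for chunk in np.array_split(x, n_proc)]
      rho = np.sum(partials, axis=0)               # inter-processor reduction
      assert rho.sum() == n_particles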

  23. High temporal resolution magnetic resonance imaging: development of a parallel three dimensional acquisition method for functional neuroimaging

    International Nuclear Information System (INIS)

    Rabrait, C.

    2007-11-01

    Echo Planar Imaging is widely used to perform data acquisition in functional neuroimaging. This sequence allows the acquisition of a set of about 30 slices, covering the whole brain, at a spatial resolution ranging from 2 to 4 mm, and a temporal resolution ranging from 1 to 2 s. It is thus well adapted to the mapping of activated brain areas but does not allow precise study of the brain dynamics. Moreover, temporal interpolation is needed in order to correct for inter-slice delays and 2-dimensional acquisition is subject to vascular inflow artifacts. To improve the estimation of the hemodynamic response functions associated with activation, this thesis aimed at developing a 3-dimensional high temporal resolution acquisition method. To do so, Echo Volume Imaging was combined with reduced field-of-view acquisition and parallel imaging. Indeed, E.V.I. allows the acquisition of a whole volume in Fourier space following a single excitation, but it requires very long echo trains. Parallel imaging and field-of-view reduction are used to reduce the echo train durations by a factor of 4, which allows the acquisition of a 3-dimensional brain volume with limited susceptibility-induced distortions and signal losses, in 200 ms. All imaging parameters have been optimized in order to reduce echo train durations and to maximize S.N.R., so that cerebral activation can be detected with a high level of confidence. Robust detection of brain activation was demonstrated with both visual and auditory paradigms. High temporal resolution hemodynamic response functions could be estimated through selective averaging of the response to the different trials of the stimulation. To further improve S.N.R., the matrix inversions required in parallel reconstruction were regularized, and the impact of the level of regularization on activation detection was investigated. Eventually, potential applications of parallel E.V.I. such as the study of non-stationary effects in the B.O.L.D. response

  24. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.
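
    For flavour, a skeleton of the CG SENSE approach mentioned above: conjugate gradients applied to the normal equations E^H E x = E^H y of the encoding operator. A small dense random matrix stands in for the real operator here, which would combine coil sensitivities with a non-uniform FFT, so this is the solver structure only.

      import numpy as np

      rng = np.random.default_rng(3)
      n_meas, n_vox = 400, 256
      E = rng.standard_normal((n_meas, n_vox)) + 1j * rng.standard_normal((n_meas, n_vox))
      x_true = rng.standard_normal(n_vox)
      y = E @ x_true                               # simulated measurements

      def cg(apply_A, b, iters=50):
          # Conjugate gradients for the (Hermitian positive definite) system A x = b.
          x = np.zeros_like(b)
          r = b - apply_A(x)
          p, rs = r.copy(), np.vdot(r, r)
          for _ in range(iters):
              Ap = apply_A(p)
              alpha = rs / np.vdot(p, Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = np.vdot(r, r)
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      x_rec = cg(lambda v: E.conj().T @ (E @ v), E.conj().T @ y)
      print(np.abs(x_rec - x_true).max())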

  25. Parallel transmission techniques in magnetic resonance imaging: experimental realization, applications and perspectives; Parallele Sendetechniken in der Magnetresonanztomographie: experimentelle Realisierung, Anwendungen und Perspektiven

    Energy Technology Data Exchange (ETDEWEB)

    Ullmann, P.

    2007-06-15

    The primary objective of this work was the first experimental realization of parallel RF transmission for accelerating spatially selective excitation in magnetic resonance imaging. Furthermore, basic aspects regarding the performance of this technique were investigated, potential risks regarding the specific absorption rate (SAR) were considered and feasibility studies under application-oriented conditions as first steps towards a practical utilisation of this technique were undertaken. At first, based on the RF electronics platform of the Bruker Avance MRI systems, the technical foundations were laid to perform simultaneous transmission of individual RF waveforms on different RF channels. Another essential requirement for the realization of Parallel Excitation (PEX) was the design and construction of suitable RF transmit arrays with elements driven by separate transmit channels. In order to image the PEX results two imaging methods were implemented based on a spin-echo and a gradient-echo sequence, in which a parallel spatially selective pulse was included as an excitation pulse. In the course of this work PEX experiments were successfully performed on three different MRI systems, a 4.7 T and a 9.4 T animal system and a 3 T human scanner, using 5 different RF coil setups in total. In the last part of this work investigations regarding possible applications of Parallel Excitation were performed. A first study comprised experiments of slice-selective B1 inhomogeneity correction by using 3D-selective Parallel Excitation. The investigations were performed in a phantom as well as in a rat fixed in paraformaldehyde solution. In conjunction with these experiments a novel method of calculating RF pulses for spatially selective excitation based on a so-called Direct Calibration approach was developed, which is particularly suitable for this type of experiments. In the context of these experiments it was demonstrated how to combine the advantages of parallel transmission

  26. Single breath-hold real-time cine MR imaging: improved temporal resolution using generalized autocalibrating partially parallel acquisition (GRAPPA) algorithm

    International Nuclear Information System (INIS)

    Wintersperger, Bernd J.; Nikolaou, Konstantin; Dietrich, Olaf; Reiser, Maximilian F.; Schoenberg, Stefan O.; Rieber, Johannes; Nittka, Matthias

    2003-01-01

    The purpose of this study was to test parallel imaging techniques for improving temporal resolution in multislice single breath-hold real-time cine steady-state free precession (SSFP) imaging, in comparison with standard segmented single-slice SSFP techniques. Eighteen subjects were examined on a 1.5-T scanner with a multislice real-time cine SSFP technique using the GRAPPA algorithm. Global left ventricular parameters (EDV, ESV, SV, EF) were evaluated and the results compared with a standard segmented single-slice SSFP technique. Results for EDV (r=0.93), ESV (r=0.99), SV (r=0.83), and EF (r=0.99) of real-time multislice SSFP imaging showed a high correlation with the results of segmented SSFP acquisitions. Systematic differences between the two techniques were statistically non-significant. Single breath-hold multislice techniques using GRAPPA allow for improvement of temporal resolution and accurate assessment of global left ventricular functional parameters. (orig.)

  27. High temporal resolution magnetic resonance imaging: development of a parallel three dimensional acquisition method for functional neuroimaging; Imagerie par resonance magnetique a haute resolution temporelle: developpement d'une methode d'acquisition parallele tridimensionnelle pour l'imagerie fonctionnelle cerebrale

    Energy Technology Data Exchange (ETDEWEB)

    Rabrait, C

    2007-11-15

    Echo Planar Imaging is widely used to perform data acquisition in functional neuroimaging. This sequence allows the acquisition of a set of about 30 slices, covering the whole brain, at a spatial resolution ranging from 2 to 4 mm, and a temporal resolution ranging from 1 to 2 s. It is thus well adapted to the mapping of activated brain areas but does not allow precise study of the brain dynamics. Moreover, temporal interpolation is needed in order to correct for inter-slice delays and 2-dimensional acquisition is subject to vascular inflow artifacts. To improve the estimation of the hemodynamic response functions associated with activation, this thesis aimed at developing a 3-dimensional high temporal resolution acquisition method. To do so, Echo Volume Imaging was combined with reduced field-of-view acquisition and parallel imaging. Indeed, E.V.I. allows the acquisition of a whole volume in Fourier space following a single excitation, but it requires very long echo trains. Parallel imaging and field-of-view reduction are used to reduce the echo train durations by a factor of 4, which allows the acquisition of a 3-dimensional brain volume with limited susceptibility-induced distortions and signal losses, in 200 ms. All imaging parameters have been optimized in order to reduce echo train durations and to maximize S.N.R., so that cerebral activation can be detected with a high level of confidence. Robust detection of brain activation was demonstrated with both visual and auditory paradigms. High temporal resolution hemodynamic response functions could be estimated through selective averaging of the response to the different trials of the stimulation. To further improve S.N.R., the matrix inversions required in parallel reconstruction were regularized, and the impact of the level of regularization on activation detection was investigated. Eventually, potential applications of parallel E.V.I. such as the study of non-stationary effects in the B.O.L.D. response

  28. Parallel imaging: is GRAPPA a useful acquisition tool for MR imaging intended for volumetric brain analysis?

    Directory of Open Access Journals (Sweden)

    Frank Anders

    2009-08-01

    Abstract. Background: The work presented here investigates parallel imaging applied to T1-weighted high-resolution imaging for use in longitudinal volumetric clinical studies involving Alzheimer's disease (AD) and mild cognitive impairment (MCI) patients, in an effort to shorten acquisition times and so minimise the risk of motion artefacts caused by patient discomfort and disorientation. The principal question is: "Can parallel imaging be used to acquire images at 1.5 T of sufficient quality to allow volumetric analysis of patient brains?" Methods: Optimisation studies were performed on a young healthy volunteer, and the selected protocol (including the use of two different parallel imaging acceleration factors) was then tested on a cohort of 15 elderly volunteers including MCI and AD patients. In addition to automatic brain segmentation, hippocampus volumes were manually outlined and measured in all patients. The 15 patients were scanned on a second occasion approximately one week later using the same protocol and evaluated in the same manner, to test the repeatability of measurements made on images acquired with the GRAPPA parallel imaging technique applied to the MPRAGE sequence. Results: Intraclass correlation tests show almost perfect agreement between repeated measurements of both segmented brain parenchyma fraction and regional measurements of the hippocampi. The protocol is suitable for both global and regional volumetric measurement in dementia patients. Conclusion: These results indicate that parallel imaging can be used without detrimental effect on brain tissue segmentation and volumetric measurement, and should be considered for both clinical and research studies where longitudinal measurements of brain tissue volumes are of interest.

  29. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x²-y² shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.
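
    The k-space spreading that quadratic phase produces, on which the method relies, is easy to demonstrate numerically. The 1D example below compares the effective k-space support of a box object with and without a quadratic phase term; the phase coefficient is an invented demo value, whereas the paper applies the phase physically via the x²-y² shim offset.

      import numpy as np

      n = 256
      x = np.arange(n) - n // 2
      obj = (np.abs(x) < 40).astype(float)         # box object
      alpha = 2e-3                                 # quadratic-phase coefficient (rad)
      scrambled = obj * np.exp(1j * alpha * x**2)

      def support_fraction(sig, frac=0.99):
          # fraction of k-space bins holding 99% of the signal energy
          e = np.abs(np.fft.fft(sig)) ** 2
          e_sorted = np.sort(e)[::-1]
          return np.searchsorted(np.cumsum(e_sorted), frac * e.sum()) / n

      print("k-space support, plain:    ", support_fraction(obj))
      print("k-space support, scrambled:", support_fraction(scrambled))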

  30. Application of parallel preprocessors in data acquisition

    International Nuclear Information System (INIS)

    Butler, H.S.; Cooper, M.D.; Williams, R.A.; Hughes, E.B.; Rolfe, J.R.; Wilson, S.L.; Zeman, H.D.

    1981-01-01

    A data-acquisition system is being developed for a large-scale experiment at LAMPF. It will make use of four microprocessors running in parallel to acquire and preprocess data from 432 photomultiplier tubes (PMT) attached to 396 NaI crystals. The microprocessors are LSI-11/23s operating through CAMAC Auxiliary Crate Controllers (ACC). Data acquired by the microprocessors will be collected through a programmable Branch Driver (MBD) which also will read data from 52 scintillators (88 PMTs) and 728 wires comprising a drift chamber. The MBD will transfer data from each event into a PDP-11/44 for further processing and taping. The microprocessors will perform the secondary function of monitoring the calibration of the NaI PMTs. A special trigger circuit allows the system to stack data from a second event while the first is still being processed. Major components of the system were tested in April 1981. Timing measurements from this test are reported

  31. Modeling, realization and evaluation of a parallel architecture for the data acquisition in multidetectors

    International Nuclear Information System (INIS)

    Guirande, Ph.; Aleonard, M-M.; Dien, Q-T.; Pedroza, J-L.

    1997-01-01

    The efficiency increase in 4π detectors (EUROGAM, EUROBALL, DIAMANT) is achieved by an increase in granularity, and hence in the event counting rate in the acquisition system. Consequently, an evolution of the architecture of the readout systems, the coding and the software is necessary. To achieve the required evaluation we have implemented a parallel architecture to check the quality of the events. The first application of this architecture was an improved data acquisition system for the DIAMANT multidetector. The data acquisition system of DIAMANT is based on an ensemble of VME cards which must manage the event readout, their storage on magnetic media and histogram construction. The ensemble consists of processors distributed in a network, a workstation to control the experiment and a display system for spectra and arrays. In such an architecture the VME bus quickly becomes a performance bottleneck, not only for data transfer but also for the coordination of the different processors. The parallel architecture used relieves the VME bus. It is based on three C40 DSPs (Digital Signal Processors) implemented on a commercial (LSI) VME card, provided with an external bus used to read the raw data from an interface card (ROCVI) between the 32-bit ECL bus and the real-time VME-based encoders. The tests performed revealed blocking after data exchanges between the processors using two communication lines. Analysis of this problem indicated the necessity of dynamically reallocating tasks to avoid this blocking. Intrinsic evaluation (i.e. without transfer on the VME bus) has been carried out for two parallel topologies (processor farm and tree). Simulation software permitted the generation of event packets. The rates obtained are essentially equivalent (6 MB/s), independent of topology. The farm topology was chosen because it is simple to implement. The load evaluation reduced the rate in 'simplex' communication mode to 5.3 MB/s and

  32. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    Science.gov (United States)

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  13. A proposed scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC detectors

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; Vanberg, R.

    1990-01-01

    A new era of high-energy physics research is beginning requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, both orders of magnitude higher data rates from the detector and online processing power, well beyond the capabilities of current high energy physics data acquisition systems, are required. This paper describes a proposed new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of Gigabytes per second from the detector and into an array of online processors (i.e., processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the proposed Scalable Parallel Open Architecture data acquisition system are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported and high-level language programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build a prototype of the proposed data acquisition system architecture is given in the paper. The major component of the system, a self-routing parallel event builder, is described in detail
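    The phrase "self-routing parallel event builder" admits a compact illustration: every data source applies the same local rule (here, event number modulo the number of builder nodes) to decide where its fragment goes, so no central switch controller is needed. The sketch below is a hedged toy model of the routing idea only, not the Fermilab hardware; all names and sizes are invented.

```python
from collections import defaultdict

N_BUILDERS = 4
N_SOURCES = 3  # detector subsystems

builders = [defaultdict(list) for _ in range(N_BUILDERS)]

def route(event_id, source_id, payload):
    """Each source applies the same routing rule locally (self-routing)."""
    node = event_id % N_BUILDERS
    builders[node][event_id].append((source_id, payload))

# simulate fragments arriving in arbitrary order from independent sources
for event_id in range(8):
    for source_id in range(N_SOURCES):
        route(event_id, source_id, f"data-{event_id}-{source_id}")

# a builder declares an event complete once all sources have reported
for node, table in enumerate(builders):
    complete = sorted(e for e, frags in table.items() if len(frags) == N_SOURCES)
    print(f"builder {node}: complete events {complete}")
```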

  14. Microprocessor event analysis in parallel with Camac data acquisition

    International Nuclear Information System (INIS)

    Cords, D.; Eichler, R.; Riege, H.

    1981-01-01

    The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a Camac System (GEC-ELLIOTT System Crate) and shares the Camac access with a Nord-10S computer. Interfaces have been designed and tested for execution of Camac cycles, communication with the Nord-10S computer and DMA-transfer from Camac to the MIPROC-16 memory. The system is used in the JADE data-acquisition-system at PETRA where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the result of various checks is appended to the event. In case of spurious triggers or clear beam gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (orig.)

  15. Parallel preconditioning techniques for sparse CG solvers

    Energy Technology Data Exchange (ETDEWEB)

    Basermann, A.; Reichel, B.; Schelthoff, C. [Central Institute for Applied Mathematics, Juelich (Germany)

    1996-12-31

    Conjugate gradient (CG) methods to solve sparse systems of linear equations play an important role in numerical methods for solving discretized partial differential equations. The large size and the condition of many technical or physical applications in this area result in the need for efficient parallelization and preconditioning techniques of the CG method. In particular for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy of CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iteration count of simple diagonally scaled CG and are shown to be well suited for massively parallel machines.
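    As a baseline for what the paper improves on, the hedged sketch below compares plain CG with diagonally scaled (Jacobi) CG on a small, synthetic, ill-conditioned SPD system using SciPy; the matrix and iteration counts are only indicative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
# tridiagonal SPD test system with a strongly varying diagonal (ill-conditioned)
diag = np.linspace(1.0, 1e4, n)
A = sp.diags([diag, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csr")
b = np.ones(n)

# Jacobi preconditioner: apply D^{-1} to the residual
M = spla.LinearOperator((n, n), matvec=lambda r: r / A.diagonal())

def solve(label, **kwargs):
    iters = [0]
    cb = lambda xk: iters.__setitem__(0, iters[0] + 1)
    x, info = spla.cg(A, b, callback=cb, **kwargs)
    print(f"{label}: {iters[0]} iterations, converged={info == 0}")

solve("plain CG")
solve("Jacobi-preconditioned CG", M=M)
```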

  16. Parallel computing techniques for rotorcraft aerodynamics

    Science.gov (United States)

    Ekici, Kivanc

    The modification of unsteady three-dimensional Navier-Stokes codes for application on massively parallel and distributed computing environments is investigated. The Euler/Navier-Stokes code TURNS (Transonic Unsteady Rotor Navier-Stokes) was chosen as a test bed because of its wide use by universities and industry. For the efficient implementation of TURNS on parallel computing systems, two algorithmic changes are developed. First, the main modifications to the implicit operator, Lower-Upper Symmetric Gauss-Seidel (LU-SGS), originally used in TURNS, are performed. Second, application of an inexact Newton method, coupled with a Krylov subspace iterative method (Newton-Krylov method), is carried out. Both techniques have been tried previously for the Euler equations mode of the code. In this work, we have extended the methods to the Navier-Stokes mode. Several new implicit operators were tried because of convergence problems of traditional operators with the high cell aspect ratio (CAR) grids needed for viscous calculations on structured grids. Promising results for both Euler and Navier-Stokes cases are presented for these operators. For the efficient implementation of Newton-Krylov methods in the Navier-Stokes mode of TURNS, efficient preconditioners must be used. The parallel implicit operators used in the previous step are employed as preconditioners and the results are compared. The Message Passing Interface (MPI) protocol has been used because of its portability to various parallel architectures. It should be noted that the proposed methodology is general and can be applied to several other CFD codes (e.g. OVERFLOW).
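    The inexact Newton-Krylov idea is easy to demonstrate outside CFD. The hedged sketch below applies SciPy's `newton_krylov` (LGMRES inner iterations; the Jacobian is never formed, and matrix-vector products come from finite differences) to a toy 1D nonlinear boundary-value problem. This is not the TURNS solver, and the preconditioning discussed above would enter through the `inner_M` argument.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 100
h = 1.0 / (n + 1)

def residual(u):
    # toy problem: F(u) = -u'' + u**3 - 1 = 0, homogeneous Dirichlet boundaries
    upad = np.concatenate(([0.0], u, [0.0]))
    lap = (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h**2
    return -lap + u**3 - 1.0

u0 = np.zeros(n)
sol = newton_krylov(residual, u0, method="lgmres", f_tol=1e-10)
print("max residual:", np.abs(residual(sol)).max())
```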

  17. Microprocessor event analysis in parallel with CAMAC data acquisition

    CERN Document Server

    Cords, D; Riege, H

    1981-01-01

    The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a CAMAC System (GEC-ELLIOTT System Crate) and shares the CAMAC access with a Nord-10S computer. Interfaces have been designed and tested for execution of CAMAC cycles, communication with the Nord-10S computer and DMA-transfer from CAMAC to the MIPROC-16 memory. The system is used in the JADE data-acquisition-system at PETRA where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the results of various checks is appended to the event. In case of spurious triggers or clear beam gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (5 refs).

  18. Transient data acquisition techniques under EDS

    International Nuclear Information System (INIS)

    Telford, S.

    1985-06-01

    This paper is the first of a series which describes the Enrichment Diagnostic System (EDS) developed for the MARS project at Lawrence Livermore National Laboratory. Although EDS was developed for use on AVLIS, the functional requirements, overall design, and specific techniques are applicable to any experimental data acquisition system involving large quantities of transient data. In particular this paper discusses the techniques and equipment used for the data acquisition: which types of hardware are used and how that hardware (CAMAC, digital oscilloscopes) is interfaced to the HP computers. In this discussion the author addresses the problems encountered and the solutions used, as well as the performance of the instrument/computer interfaces. The second topic is how the acquired data are connected to the graphics and analysis portions of EDS through efficient real-time databases, including how the acquired data are folded into the overall structure of EDS, giving the user immediate access to raw and analyzed data. An example shows how easily a new diagnostic can be added to the EDS structure without modifying the other parts of the system. 8 figs

  19. Impacts of Vocabulary Acquisition Techniques Instruction on Students' Learning

    Science.gov (United States)

    Orawiwatnakul, Wiwat

    2011-01-01

    The objectives of this study were to determine how the selected vocabulary acquisition techniques affected the vocabulary ability of 35 students who took EN 111 and investigate their attitudes towards the techniques instruction. The research study was one-group pretest and post-test design. The instruments employed were in-class exercises…

  20. In search of the best technique for vocabulary acquisition

    Directory of Open Access Journals (Sweden)

    Mohammad Mohseni-Far

    2008-05-01

    Report of an Act of Plagiarism (6 May 2012): the article 'In Search of the Best Technique for Vocabulary Acquisition' by Mohammad Mohseni-Far, published in the EAAL yearbook (ERÜ aastaraamat) 4 (2008), pp. 121-138, is a self-plagiarism. The author published the same article TWICE more in closely similar wording and under closely similar titles: 'A Cognitively-oriented Encapsulation of Strategies Utilized for Lexical Development: In search of a flexible and highly interactive curriculum', Porta Linguarum 9 (2008), 35-42, and 'Techniques and Strategies Utilized for Vocabulary Acquisition: the necessity to design a multifaceted framework with an instructionally wise equilibrium', Porta Linguarum 8 (2007), 137-152. Since the author deliberately reworded the text, this constitutes conscious plagiarism, verified using Check for Plagiarism On the Web. (Editors of the EAAL yearbook) *** The present study is intended to critically examine vocabulary learning/acquisition techniques within the second/foreign language context. Accordingly, the purpose of this survey is to concentrate particularly on the variables connected with lexical knowledge and to establish a fairly all-inclusive framework which comprises and expounds on the most significant strategies and relevant factors within the vocabulary acquisition context. At the outset, the study introduces four salient variables; learner, task and strategy serve as a general structure of inquiry (Flavell's cognitive model, 1992). Besides, the variable of context

  1. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. Ways to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)

  2. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
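    The SENSE unfolding step described here reduces, for a rate-2 Cartesian acquisition, to a small least-squares solve per pair of aliased pixels, which sit half a field of view apart. The sketch below works through a synthetic 1D example with invented coil sensitivities; in practice the sensitivities come from a calibration scan.

```python
import numpy as np

ny, nc = 64, 4                      # image rows, coils (1D example)
y = np.linspace(0, 1, ny)
truth = np.exp(-((y - 0.35) / 0.1) ** 2)          # 1D "image"
# invented smooth coil sensitivity profiles
sens = np.stack([np.exp(-((y - c / (nc - 1)) / 0.5) ** 2) for c in range(nc)])

# R = 2 undersampling: each aliased coil pixel is a weighted sum of two pixels
half = ny // 2
aliased = sens[:, :half] * truth[:half] + sens[:, half:] * truth[half:]

recon = np.zeros(ny)
for p in range(half):
    S = sens[:, [p, p + half]]       # nc x 2 encoding matrix for this pair
    x, *_ = np.linalg.lstsq(S, aliased[:, p], rcond=None)
    recon[[p, p + half]] = x         # SENSE-type unfolding

print("max reconstruction error:", np.abs(recon - truth).max())
```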

  3. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.
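    The decomposition pattern at the heart of the method can be sketched compactly: the image is cut into sub-images with a small halo so neighbouring tiles see a consistent border, the tiles are processed by independent workers, and the core rows are stitched back together. In the hedged sketch below, a trivial threshold stands in for Region Competition, and Python's multiprocessing stands in for a distributed-memory cluster.

```python
import numpy as np
from multiprocessing import Pool

HALO = 1  # one-pixel overlap so tiles see their neighbours' borders

def segment_tile(args):
    tile, off, nrows, r0 = args
    seg = (tile > 0.5).astype(np.uint8)   # placeholder "segmentation"
    return r0, seg[off:off + nrows]       # return only the core rows

def make_jobs(img, n_tiles):
    bounds = np.linspace(0, img.shape[0], n_tiles + 1, dtype=int)
    for r0, r1 in zip(bounds[:-1], bounds[1:]):
        lo, hi = max(0, r0 - HALO), min(img.shape[0], r1 + HALO)
        yield img[lo:hi], r0 - lo, r1 - r0, r0

if __name__ == "__main__":
    image = np.random.default_rng(0).random((1024, 1024))
    out = np.empty(image.shape, np.uint8)
    with Pool(4) as pool:
        for r0, core in pool.map(segment_tile, make_jobs(image, 8)):
            out[r0:r0 + core.shape[0]] = core
    assert np.array_equal(out, (image > 0.5).astype(np.uint8))
    print("segmented", out.shape, "image in 8 distributed tiles")
```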

  4. Real-time data acquisition and parallel data processing solution for TJ-II Bolometer arrays diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, E. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain)]. E-mail: eduardo.barrera@upm.es; Ruiz, M. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Lopez, S. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Machon, D. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain); Ochando, M. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain)

    2006-07-15

    Maps of local plasma emissivity of TJ-II plasmas are determined using three-array cameras of silicon photodiodes (AXUV type from IRD). They are assigned to the top and side ports of the same sector of the vacuum vessel. Each array consists of 20 unfiltered detectors. The signals from each of these detectors are the inputs to an iterative algorithm of tomographic reconstruction. Currently, these signals are acquired by a PXI standard system at approximately 50 kS/s, with 12 bits of resolution, and are stored for off-line processing. A 0.5 s discharge generates 3 Mbytes of raw data. The algorithm's load exceeds the CPU capacity of the PXI system's controller in a continuous mode, making it unfeasible to process the samples in parallel with their acquisition in a standard PXI system. A new architecture model has been developed, making it possible to add one or several processing cards to a standard PXI system. With this model, it is possible to define how to distribute, in real time, the data from all acquired signals in the system among the processing cards and the PXI controller. This way, by distributing the data processing among the system controller and two processing cards, the data processing can be done in parallel with the acquisition. Hence, this system configuration is able to measure even in long-pulse devices.

  5. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    Science.gov (United States)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm which takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices from which the Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
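    The arrowhead/Schur-complement structure mentioned above can be made concrete in a few lines. In the hedged sketch below (synthetic matrices, direct solves in place of the preconditioned CG used in the thesis), the block-diagonal part D holds the per-body equations, each solvable independently and hence in parallel, while a small Schur system couples the constraint unknowns.

```python
import numpy as np

rng = np.random.default_rng(1)
nb, bs, m = 4, 3, 2                     # bodies, block size, constraints
Dblocks = [np.eye(bs) * 5 + rng.random((bs, bs)) for _ in range(nb)]
Dblocks = [0.5 * (Dk + Dk.T) for Dk in Dblocks]   # keep blocks symmetric
B = rng.random((nb * bs, m))
C = np.eye(m) * 10
b1, b2 = rng.random(nb * bs), rng.random(m)

def Dsolve(v):
    # each block solve is independent -> trivially parallelizable
    return np.concatenate([np.linalg.solve(Dk, v[k*bs:(k+1)*bs])
                           for k, Dk in enumerate(Dblocks)])

S = C - B.T @ np.column_stack([Dsolve(B[:, j]) for j in range(m)])
x2 = np.linalg.solve(S, b2 - B.T @ Dsolve(b1))    # small Schur system
x1 = Dsolve(b1 - B @ x2)                          # back-substitution per body

# check against the assembled monolithic arrowhead system
D = np.zeros((nb * bs, nb * bs))
for k, Dk in enumerate(Dblocks):
    D[k*bs:(k+1)*bs, k*bs:(k+1)*bs] = Dk
A = np.block([[D, B], [B.T, C]])
print(np.allclose(A @ np.concatenate([x1, x2]), np.concatenate([b1, b2])))
```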

  6. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144

  7. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Directory of Open Access Journals (Sweden)

    Yaser Afshar

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  8. Improvements in image quality with pseudo-parallel imaging in the phase-scrambling fourier transform technique

    International Nuclear Information System (INIS)

    Ito, Satoshi; Kawawa, Yasuhiro; Yamada, Yoshifumi

    2010-01-01

    The signal obtained in the phase-scrambling Fourier transform (PSFT) imaging technique can be transformed to the signal described by the Fresnel transform of the objects, in which the amplitude of the PSFT presents some kind of blurred image of the objects. Therefore, the signal can be considered to exist in the object domain as well as the Fourier domain of the object. This notable feature makes it possible to assign weights to the reconstructed images by applying a weighting function to the PSFT signal after data acquisition, and as a result, pseudo-parallel image reconstruction using these aliased image data with different weights on the images is feasible. In this study, the improvements in image quality with such pseudo-parallel imaging were examined and demonstrated. The weighting function of the PSFT signal that provides a given weight on the image is estimated using the obtained image data and is iteratively updated after sensitivity encoding (SENSE)-based image reconstruction. Simulation studies showed that reconstruction errors were dramatically reduced and that the spatial resolution was also improved in almost all image spaces. The proposed method was applied to signals synthesized from MR image data with phase variations to verify its effectiveness. It was found that the image quality was improved and that images almost entirely free of aliasing artifacts could be obtained. (author)

  9. A study on evaluating validity of SNR calculation using a conventional two region method in MR images applied a multichannel coil and parallel imaging technique

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Kwan Woo; Son, Soon Yong [Dept. of Radiology, Asan Medical Center, Seoul (Korea, Republic of); Min, Jung Whan [Dept. of Radiological Technology, Shingu University, Sungnam (Korea, Republic of); Kwon, Kyung Tae [Dept. of Radiological Technology, Dongnam Health University, Suwon (Korea, Republic of); Yoo, Beong Gyu; Lee, Jong Seok [Dept. of Radiotechnology, Wonkwang Health Science University, Iksan (Korea, Republic of)

    2015-12-15

    The purpose of this study was to investigate the problems of signal-to-noise ratio (SNR) measurement using the conventional two-region method when a multichannel coil and a parallel imaging technique are used. As a research method, we first established a reference SNR using a single-channel head coil, which satisfies the three preconditions of the two-region method. We then compared this with SNR values computed by the two-region method under a multichannel coil and parallel imaging, where the method is problematic because the preconditions, and the procedures recommended by authoritative organizations, no longer hold. We found that the two-region method with a multichannel coil and parallel imaging shows the highest relative standard deviation, and thus the lowest precision. In addition, the difference in SNR with ROI location was very large, showing that the spatial noise distribution was not uniform. Moreover, the 95% confidence interval in the Bland-Altman plot was the widest, indicating poor agreement with the two-region measurements obtained with the standard single-channel head coil. By directly comparing, under identical acquisition conditions, the AAPM method (which serves as a standard for performance evaluation of magnetic resonance imaging devices), the NEMA method (which can accurately determine the noise level in a signal region) and the methods recommended by manufacturers of magnetic resonance imaging devices, we quantitatively verified the inaccuracy of two-region SNR measurements with a multichannel coil and parallel imaging, where the preconditions that researchers may overlook are not satisfied.
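    For reference, this is the conventional two-region estimate under scrutiny, in a hedged Python sketch on synthetic data: mean signal in a tissue ROI over the standard deviation of an air ROI, with the ~0.655 Rayleigh correction valid for single-channel magnitude images. That correction, and the assumption of spatially uniform noise, is exactly what fails for multichannel coils with parallel imaging.

```python
import numpy as np

def two_region_snr(image, signal_roi, noise_roi, rayleigh_correction=0.655):
    # classic two-region estimate: valid only for single-channel magnitude
    # images with spatially uniform, Rayleigh-distributed background noise
    sig = image[signal_roi].mean()
    noise_sd = image[noise_roi].std(ddof=1)
    return rayleigh_correction * sig / noise_sd

rng = np.random.default_rng(0)
# synthetic magnitude image: object of intensity 100 plus complex noise, sigma=5
noise = rng.normal(0, 5, (128, 128)) + 1j * rng.normal(0, 5, (128, 128))
obj = np.zeros((128, 128)); obj[32:96, 32:96] = 100.0
img = np.abs(obj + noise)

snr = two_region_snr(img, (slice(48, 80), slice(48, 80)),
                     (slice(0, 16), slice(0, 16)))
print(f"two-region SNR estimate: {snr:.1f} (true SNR = 100/5 = 20)")
```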

  10. Processing optimization with parallel computing for the J-PET scanner

    Directory of Open Access Journals (Sweden)

    Krzemień Wojciech

    2015-12-01

    The Jagiellonian Positron Emission Tomograph (J-PET) collaboration is developing a prototype time-of-flight (TOF) positron emission tomography (PET) detector based on long polymer scintillators. This novel approach exploits the excellent time properties of the plastic scintillators, which permit very precise time measurements. The very fast field programmable gate array (FPGA)-based front-end electronics and the data acquisition system, as well as low- and high-level reconstruction algorithms, were specially developed to be used with the J-PET scanner. The TOF-PET data processing and reconstruction are time- and resource-demanding operations, especially in the case of a large acceptance detector that works in triggerless data acquisition mode. In this article, we discuss the parallel computing methods applied to optimize the data processing for the J-PET detector. We begin with general concepts of parallel computing and then we discuss several applications of those techniques in the J-PET data processing.

  11. Parallel imaging enhanced MR colonography using a phantom model.

    LENUS (Irish Health Repository)

    Morrin, Martina M

    2008-09-01

    To compare various Array Spatial and Sensitivity Encoding Technique (ASSET)-enhanced T2W SSFSE (single shot fast spin echo) and T1-weighted (T1W) 3D SPGR (spoiled gradient recalled echo) sequences for polyp detection and image quality at MR colonography (MRC) in a phantom model. Limitations of MRC using standard 3D SPGR T1W imaging include the long breath-hold required to cover the entire colon within one acquisition and the relatively low spatial resolution due to the long acquisition time. Parallel imaging using ASSET-enhanced T2W SSFSE and 3D T1W SPGR imaging results in much shorter imaging times, which allows for increased spatial resolution.

  12. Self-calibrated multiple-echo acquisition with radial trajectories using the conjugate gradient method (SMART-CG).

    Science.gov (United States)

    Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F

    2011-04-01

    To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self-correction was developed. The information also allows simultaneous support for parallel imaging for multiple-coil acquisitions. Without a separate field map acquisition, a phase estimate from each echo in a multiple-echo train was generated. When using a multiple-channel coil, magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes and, in the case of multiple-coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single-channel, multiple-channel, and undersampled data. Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve the image quality of multiecho radial imaging, an important technique for fast three-dimensional MRI data acquisition. Copyright © 2011 Wiley-Liss, Inc.

  13. Reply to "Comments on Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes"

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    This is a reply to the comments by Gunnam et al., "Comments on 'Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes'", EURASIP Journal on Embedded Systems, vol. 2009, Article ID 704174, on our recent work "Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes", EURASIP Journal on Embedded Systems, vol. 2009, Article ID 723465.

  14. Parallel halftoning technique using dot diffusion optimization

    Science.gov (United States)

    Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara

    2017-05-01

    In this paper, a novel approach for halftone images is proposed and implemented for images obtained by the Dot Diffusion (DD) method. The designed technique is based on an optimization of the so-called class matrix used in the DD algorithm: new versions of the class matrix, containing no baron and near-baron positions, are generated in order to minimize inconsistencies during the distribution of the error. The proposed class matrices have different properties, each designed for one of two applications: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (AMD FX(tm)-6300 Six-Core Processor and Intel Core i5-4200U), using CUDA and OpenCV on a Linux PC. Experimental results have shown that the novel framework generates good-quality halftone images and inverse-halftone images. The simulation results using parallel architectures have demonstrated the efficiency of the novel technique when implemented for real-time processing.

  15. Ultrasound Vector Flow Imaging: Part II: Parallel Systems

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Nikolov, Svetoslav Ivanov; Yu, Alfred C. H.

    2016-01-01

    The paper gives a review of the current state-of-the-art in ultrasound parallel acquisition systems for flow imaging using spherical and plane wave emissions. The imaging methods are explained along with the advantages of using these very fast and sensitive velocity estimators. These experimental... ultrasound imaging for studying brain function in animals. The paper explains the underlying acquisition and estimation methods for fast 2-D and 3-D velocity imaging and gives a number of examples. Future challenges and the potentials of parallel acquisition systems for flow imaging are also discussed.

  16. The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging

    Science.gov (United States)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-06-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and have sensitivity to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In the parallel focusing a high intensity ultrasonic beam is formed in the specimen at the focal point. However, in sequential focusing only low intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images and use this to characterise the nonlinearity of small closed fatigue cracks. In particular we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.
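    The principle (identical parallel and sequential images under linearity, with a residual only where the response is amplitude-dependent) can be captured in a toy calculation. In the hedged sketch below, the numbers are invented and a tanh saturation stands in for crack-contact nonlinearity; it compares the focal response when all elements fire together against the post-hoc sum of single-element firings.

```python
import numpy as np

n_el = 16
# after the focal delay law, all element contributions arrive in phase,
# so the incident amplitude at the focus is the sum of unit amplitudes
a = np.ones(n_el)

def response(x, nonlinear):
    # tanh = toy amplitude-dependent (contact-like) scatterer response
    return np.tanh(x) if nonlinear else x

for nl in (False, True):
    parallel = response(a.sum(), nl)          # all elements fire together
    sequential = response(a, nl).sum()        # one at a time, summed afterwards
    print(("nonlinear" if nl else "linear   "),
          "difference:", abs(parallel - sequential))
```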

  17. Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    The layered decoding algorithm has recently been proposed as an efficient means for the decoding of low-density parity-check (LDPC) codes, thanks to the remarkable improvement (2x) in the convergence speed of the decoding process. However, pipelined semi-parallel decoders suffer from violations or "hazards" between consecutive updates, which not only violate the layered principle but also enforce the loops in the code, thus spoiling the error correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of "idle" cycles, to prevent the hazards of the pipeline mechanism. Also, different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n) as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of the logic synthesis on a 65 nm low-power CMOS technology are shown.
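    The idle-insertion reschedule admits a small sketch: a layer may not be issued while an earlier layer whose variable nodes it shares is still in the pipeline, so the scheduler stalls until the conflicting update has landed. The layer connectivity and pipeline depth below are invented, not an IEEE 802.11n code.

```python
PIPE = 3   # pipeline depth: an update lands PIPE cycles after issue

layers = [  # variable nodes each layer touches (toy parity-check structure)
    {0, 1, 2}, {2, 3, 4}, {5, 6}, {0, 4, 6}, {1, 3, 5},
]

schedule, in_flight = [], []   # in_flight: (landing_time, vars) per issued layer
t = 0
for k, vars_k in enumerate(layers):
    # stall while any pending update overlaps the variables this layer reads
    while any(t < done and vars_k & v for done, v in in_flight):
        schedule.append("idle")
        t += 1
    schedule.append(f"layer {k}")
    in_flight.append((t + PIPE, vars_k))
    t += 1

print(schedule)   # shows where "idle" cycles were inserted
```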

  18. Acquisition and visualization techniques for narrow spectral color imaging.

    Science.gov (United States)

    Neumann, László; García, Rafael; Basa, János; Hegedüs, Ramón

    2013-06-01

    This paper introduces a new approach in narrow-band imaging (NBI). Existing NBI techniques generate images by selecting discrete bands over the full visible spectrum or an even wider spectral range. In contrast, here we perform the sampling with filters covering a tight spectral window. This image acquisition method, named narrow spectral imaging, can be particularly useful when optical information is only available within a narrow spectral window, such as in the case of deep-water transmittance, which constitutes the principal motivation of this work. In this study we demonstrate the potential of the proposed photographic technique on non-underwater scenes recorded under controlled conditions. To this end three multilayer narrow bandpass filters were employed, which transmit at the bluish wavelengths of 440, 456, and 470 nm, respectively. Since the differences among the images captured in such a narrow spectral window can be extremely small, both image acquisition and visualization require a novel approach. First, high-bit-depth images were acquired with multilayer narrow-band filters either placed in front of the illumination or mounted on the camera lens. Second, a color-mapping method is proposed, using which the input data can be transformed onto the entire display color gamut with a continuous and perceptually nearly uniform mapping, while ensuring optimally high information content for human perception.

  19. Development and application of efficient strategies for parallel magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Breuer, F.

    2006-07-01

    Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required, which are technically challenging to provide. In addition to significant hardware costs, peripheral neuro-stimulations and the surpassing of admissible acoustic noise levels may occur. Today's whole body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990's. Within the last years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image

  20. Development and application of efficient strategies for parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Breuer, F.

    2006-01-01

    Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required, which are technically challenging to provide. In addition to significant hardware costs, peripheral neuro-stimulations and the surpassing of admissible acoustic noise levels may occur. Today's whole body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990's. Within the last years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image artifacts.

  1. Detector techniques and data acquisition for LHC experiments

    CERN Document Server

    AUTHOR|(CDS)2071367; Cittolin, Sergio; CERN. Geneva

    1996-01-01

    An overview of the technologies for LHC tracking detectors, particle identification and calorimeters will be given. In addition, the requirements of the front-end readout electronics for each type of detector will be addressed. The latest results from the R&D studies in each of the technologies will be presented. The data handling techniques needed to read out the LHC detectors and the multi-level trigger systems used to select the events of interest will be described. An overview of the LHC experiments data acquisition architectures and their current state of developments will be presented.

  2. A scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC [Superconducting Super Collider] detectors

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; VanBerg, R.

    1989-12-01

    A new era of high-energy physics research is beginning requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, both orders of magnitude higher data rates from the detector and online processing power, well beyond the capabilities of current high energy physics data acquisition systems, are required. This paper describes a new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of Gigabytes per second from the detector and into an array of online processors (i.e., processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the system architecture are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported and high-level language programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build the self-routing parallel event builder will also be given in the paper. 3 figs., 1 tab

  3. Data acquisition techniques

    International Nuclear Information System (INIS)

    Dougherty, R.C.

    1976-01-01

    Testing neutron generators and major subassemblies has undergone a transition in the past few years. Digital information is now used for storage and analysis. The key to the change is the availability of a high-speed digitizer system. The status of the Sandia Laboratory data acquisition and handling system as applied to this area is surveyed. 1 figure

  4. High temporal resolution functional MRI using parallel echo volumar imaging

    International Nuclear Information System (INIS)

    Rabrait, C.; Ciuciu, P.; Ribes, A.; Poupon, C.; Dehaine-Lambertz, G.; LeBihan, D.; Lethimonnier, F.; Le Roux, P.

    2008-01-01

    Purpose: To combine parallel imaging with 3D single-shot acquisition (echo volumar imaging, EVI) in order to acquire high temporal resolution volumar functional MRI (fMRI) data. Materials and Methods: An improved EVI sequence was associated with parallel acquisition and field-of-view reduction in order to acquire a large brain volume in 200 msec. Temporal stability and functional sensitivity were increased through optimization of all imaging parameters and Tikhonov regularization of the parallel reconstruction. Two human volunteers were scanned with parallel EVI in a 1.5 T whole-body MR system while being presented with a slow event-related auditory paradigm. Results: Thanks to parallel acquisition, the EVI volumes display a low level of geometric distortions and signal losses. After removal of low-frequency drifts and physiological artifacts, activations were detected in the temporal lobes of both volunteers and voxel-wise hemodynamic response functions (HRF) could be computed. On these HRFs, different habituation behaviors in response to sentence repetition could be identified. Conclusion: This work demonstrates the feasibility of high temporal resolution 3D fMRI with parallel EVI. Combined with advanced estimation tools, this acquisition method should prove useful to measure neural activity timing differences or to study the nonlinearities and non-stationarities of the BOLD response. (authors)

  5. Run control techniques for the Fermilab DART data acquisition system

    International Nuclear Information System (INIS)

    Oleynik, G.; Engelfried, J.; Mengel, L.; Moore, C.; Pordes, R.; Udumula, L.; Votava, M.; Drunen, E. van; Zioulas, G.

    1996-01-01

    DART is the high speed, Unix based data acquisition system being developed by the Fermilab Computing Division in collaboration with eight High Energy Physics Experiments. This paper describes DART run-control, which implements flexible, distributed, extensible and portable paradigms for the control and monitoring of data acquisition systems. We discuss the unique and interesting aspects of the run-control - why we chose the concepts we did, the benefits we have seen from the choices we made, as well as our experiences in deploying and supporting it for experiments during their commissioning and sub-system testing phases. We emphasize the software and techniques we believe are extensible to future use, and potential future modifications and extensions for those we feel are not. (author)

  6. Run control techniques for the Fermilab DART data acquisition system

    International Nuclear Information System (INIS)

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1995-10-01

    DART is the high speed, Unix based data acquisition system being developed by the Fermilab Computing Division in collaboration with eight High Energy Physics Experiments. This paper describes DART run-control which implements flexible, distributed, extensible and portable paradigms for the control and monitoring of data acquisition systems. We discuss the unique and interesting aspects of the run-control - why we chose the concepts we did, the benefits we have seen from the choices we made, as well as our experiences in deploying and supporting it for experiments during their commissioning and sub-system testing phases. We emphasize the software and techniques we believe are extensible to future use, and potential future modifications and extensions for those we feel are not

  7. Selection and integration of a network of parallel processors in the real time acquisition system of the 4π DIAMANT multidetector: modeling, realization and evaluation of the software installed on this network; Choix et intégration d'un réseau de processeurs parallèles dans le système d'acquisition temps réel du multidétecteur 4π DIAMANT: modélisation, réalisation et évaluation du logiciel implanté sur ce réseau

    Energy Technology Data Exchange (ETDEWEB)

    Guirande, F. [Ecole Doctorale des Sciences Physiques et de l'Ingénieur, Bordeaux-1 Univ., 33 (France)

    1997-07-11

    The increase in sensitivity of 4π arrays such as EUROBALL or DIAMANT has led to an increase in the data flow rate into the data acquisition system. While at the electronics level the data flow has been distributed over several acquisition buses, the processing power of the data processing system must be increased accordingly. This work concerns the modelling and implementation of the software allocated to an architecture of parallel processors. Object analysis and formal methods were used; benchmarks and the future evolution of this architecture are presented. The thesis consists of two parts. Part A, devoted to 'Nuclear spectroscopy with 4π multidetectors', contains a first chapter entitled 'The physics of 4π multidetectors' and a second chapter entitled 'Integral architecture of 4π multidetectors'. Part B, devoted to 'The parallel acquisition system of DIAMANT', contains three chapters entitled 'Hardware architecture', 'Software architecture' and 'Validation and performances'. Four appendices and a term glossary close this work. (author) 58 refs.

  8. 10-channel fiber array fabrication technique for parallel optical coherence tomography system

    Science.gov (United States)

    Arauz, Lina J.; Luo, Yuan; Castillo, Jose E.; Kostuk, Raymond K.; Barton, Jennifer

    2007-02-01

    Optical Coherence Tomography (OCT) shows great promise for minimally intrusive biomedical imaging applications. A parallel OCT system is a novel technique that replaces mechanical transverse scanning with electronic scanning, reducing the time required to acquire image data. In this system an array of small-diameter fibers is required to obtain an image in the transverse direction. Each fiber in the array is configured in an interferometer and is used to image one pixel in the transverse direction. In this paper we describe a technique to package 15 μm diameter fibers on a silicon-silica substrate to be used in a 2 mm endoscopic probe tip. Single-mode fibers are etched to reduce the cladding diameter from 125 μm to 15 μm. Etched fibers are placed into a 4 mm by 150 μm trench in a silicon-silica substrate and secured with UV glue. Active alignment was used to simplify the layout of the fibers and minimize unwanted horizontal displacement of the fibers. A 10-channel fiber array was built, tested and later incorporated into a parallel optical coherence system. This paper describes the packaging, testing, and operation of the array in a parallel OCT system.

  9. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    Science.gov (United States)

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
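    A hedged sketch of the pseudo multiple replica recipe: synthetic noise drawn with the "prescan" channel covariance is added to the acquired k-space many times, each replica is pushed through the same reconstruction, and the pixelwise standard deviation of the stack yields the noise (and hence SNR) map. The toy below uses invented data and a root-sum-of-squares recon standing in for SENSE or GRAPPA.

```python
import numpy as np

rng = np.random.default_rng(0)
nc, n, n_rep = 4, 64, 200
kspace = rng.normal(size=(nc, n, n)) + 1j * rng.normal(size=(nc, n, n))
Psi = np.eye(nc) * 2.0 + 0.5                 # toy "prescan" noise covariance
L = np.linalg.cholesky(Psi)

def recon(k):
    # stand-in reconstruction: root-sum-of-squares over coil images
    coil_imgs = np.fft.ifft2(k, axes=(-2, -1))
    return np.sqrt((np.abs(coil_imgs) ** 2).sum(axis=0))

stack = []
for _ in range(n_rep):
    eta = rng.normal(size=(nc, n * n)) + 1j * rng.normal(size=(nc, n * n))
    stack.append(recon(kspace + (L @ eta).reshape(nc, n, n)))  # correlated noise

noise_map = np.std(stack, axis=0)            # pixelwise noise amplitude
snr_map = recon(kspace) / noise_map
print("median pixel SNR:", round(float(np.median(snr_map)), 2))
```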

  10. Parallel preprocessing in a nuclear data acquisition system

    International Nuclear Information System (INIS)

    Pichot, G.; Auriol, E.; Lemarchand, G.; Millaud, J.

    1977-01-01

    The appearance of microprocessors and large memory chips has somewhat modified the spectrum of tools usable by the data acquisition system designer. This is particularly true in the nuclear research field, where the data flow has been continuously growing as a consequence of the increasing capabilities of new detectors. This paper deals with the insertion, between a data acquisition system and a computer, of a preprocessing structure based on microprocessors and large-capacity high-speed memories. The results show a significant improvement in several aspects of the system's operation, with returns paying back the investment in 18 months.

  11. 3D Hyperpolarized C-13 EPI with Calibrationless Parallel Imaging

    DEFF Research Database (Denmark)

    Gordon, Jeremy W.; Hansen, Rie Beck; Shin, Peter J.

    2018-01-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and tem...... strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism....

  12. An overview of data acquisition, signal coding and data analysis techniques for MST radars

    Science.gov (United States)

    Rastogi, P. K.

    1986-01-01

    An overview is given of the data acquisition, signal processing, and data analysis techniques that are currently in use with high power MST/ST (mesosphere stratosphere troposphere/stratosphere troposphere) radars. This review supplements the works of Rastogi (1983) and Farley (1984) presented at previous MAP workshops. A general description is given of data acquisition and signal processing operations, characterized on the basis of their disparate time scales. Signal coding is then discussed, with a brief description of frequently used codes and their limitations. Finally, several aspects of statistical data processing are discussed, such as signal statistics, power spectrum and autocovariance analysis, and outlier removal techniques.
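    One of the autocovariance-analysis steps mentioned above, the pulse-pair Doppler velocity estimate, fits in a few lines. The radar parameters and noise level below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n, ipp, wavelength = 256, 1e-3, 6.0       # samples, inter-pulse period (s), m
v_true = 7.5                              # radial velocity, m/s
f_d = 2 * v_true / wavelength             # Doppler frequency, Hz
t = np.arange(n) * ipp
sig = (np.exp(2j * np.pi * f_d * t)
       + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n)))

R1 = np.mean(sig[1:] * np.conj(sig[:-1]))          # lag-1 autocovariance
v_est = np.angle(R1) * wavelength / (4 * np.pi * ipp)  # pulse-pair estimate
print(f"estimated velocity: {v_est:.2f} m/s (true {v_true} m/s)")
```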

  13. Evaluation of Parallel and Fan-Beam Data Acquisition Geometries and Strategies for Myocardial SPECT Imaging

    Science.gov (United States)

    Qi, Yujin; Tsui, B. M. W.; Gilland, K. L.; Frey, E. C.; Gullberg, G. T.

    2004-06-01

    This study evaluates myocardial SPECT images obtained from parallel-hole (PH) and fan-beam (FB) collimator geometries using both circular-orbit (CO) and noncircular-orbit (NCO) acquisitions. A newly developed 4-D NURBS-based cardiac-torso (NCAT) phantom was used to simulate the 99mTc-sestamibi uptake in a human torso with myocardial defects in the left ventricular (LV) wall. Two phantoms were generated to simulate patients with thick and thin body builds. Projection data including the effects of attenuation, collimator-detector response and scatter were generated using SIMSET Monte Carlo simulations. A large number of photon histories were generated such that the projection data were close to noise free. Poisson noise fluctuations were then added to simulate the count densities found in clinical data. Noise-free and noisy projection data were reconstructed using the iterative OS-EM reconstruction algorithm with attenuation compensation. The reconstructed images from noisy projection data show that the noise levels are lower for the FB as compared to the PH collimator due to the increase in detected counts. The NCO acquisition method provides slightly better resolution and a small improvement in defect contrast as compared to the CO acquisition method in noise-free reconstructed images. Despite lower projection counts, the NCO shows the same noise level as the CO in the attenuation-corrected reconstruction images. The results from the channelized Hotelling observer (CHO) study show that the FB collimator is superior to the PH collimator in myocardial defect detection, but the NCO shows no statistically significant difference from the CO for either PH or FB collimator. In conclusion, our results indicate that data acquisition using NCO makes a very small improvement in resolution over CO for myocardial SPECT imaging. This small improvement does not make a significant difference in myocardial defect detection. However, an FB collimator provides better defect detection than a

  14. DAPHNE: a parallel multiprocessor data acquisition system for nuclear physics. [Data Acquisition by Parallel Histogramming and NEtworking

    Energy Technology Data Exchange (ETDEWEB)

    Welch, L.C.

    1984-01-01

    This paper describes a project to meet the data acquisition needs of a new accelerator, ATLAS, being built at Argonne National Laboratory. ATLAS is a heavy-ion linear superconducting accelerator providing beam energies up to 25 MeV/A with a relative spread in beam energy as good as 0.0001 and a time spread of less than 100 psec. Details about the hardware front end, command language, data structure, and the flow of event treatment are covered.

  15. A Note on Using Partitioning Techniques for Solving Unconstrained Optimization Problems on Parallel Systems

    Directory of Open Access Journals (Sweden)

    Mehiddin Al-Baali

    2015-12-01

    We deal with the design of parallel algorithms by using variable partitioning techniques to solve nonlinear optimization problems. We propose an iterative solution method that is very efficient for separable functions, our scope being to discuss its performance for general functions. Experimental results on an illustrative example have suggested some useful modifications that, even though they improve the efficiency of our parallel method, leave some questions open for further investigation.
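
    As a rough illustration of the variable-partitioning idea (a sketch, not the authors' algorithm), a block-coordinate descent in which the blocks of a separable function decouple and could be updated in parallel; the test function is hypothetical:

    ```python
    import numpy as np

    def block_descent(f_grad, x0, blocks, lr=0.1, iters=200):
        """Minimize f by updating disjoint variable blocks. For a separable f
        the blocks are independent, so each inner update is a parallel work unit."""
        x = x0.copy()
        for _ in range(iters):
            g = f_grad(x)
            for b in blocks:                  # independent -> parallelizable
                x[b] -= lr * g[b]
        return x

    # Separable test function: f(x) = sum_i (x_i - i)^2
    f_grad = lambda x: 2.0 * (x - np.arange(len(x)))
    x = block_descent(f_grad, np.zeros(8), blocks=[np.arange(0, 4), np.arange(4, 8)])
    print(np.round(x, 3))                     # approaches [0, 1, ..., 7]
    ```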

  16. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction

    Science.gov (United States)

    Hollingsworth, Kieren Grant

    2015-11-01

    MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution is chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criteria. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review concludes by signposting other imaging acceleration techniques under present development before concluding with a consideration of the potential impact and obstacles to bringing compressed sensing into routine use in clinical MRI.
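
    A toy sketch of the idea reviewed here: alternate data consistency on the sampled k-space lines with a sparsity-promoting shrinkage step. The image-domain soft-threshold stands in for a proper sparsifying transform such as wavelets, and the phantom and sampling mask are hypothetical:

    ```python
    import numpy as np

    def cs_recon(kspace, mask, lam=0.01, iters=100):
        """Crude compressed-sensing reconstruction for a Cartesian 2D scan."""
        img = np.fft.ifft2(np.where(mask, kspace, 0))    # zero-filled start
        for _ in range(iters):
            mag = np.abs(img)                             # soft-threshold (sparsity)
            img = np.where(mag > 0, img / np.maximum(mag, 1e-12), 0) * np.maximum(mag - lam, 0)
            k = np.fft.fft2(img)
            k[mask] = kspace[mask]                        # data consistency
            img = np.fft.ifft2(k)
        return img

    # Sparse phantom, ~4x undersampled with random phase-encode lines
    rng = np.random.default_rng(0)
    phantom = np.zeros((128, 128)); phantom[40:44, 60:64] = 1.0
    mask = np.zeros((128, 128), bool); mask[rng.choice(128, 32, replace=False), :] = True
    recon = cs_recon(np.fft.fft2(phantom) * mask, mask)
    ```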

  17. Applying of USB interface technique in nuclear spectrum acquisition system

    International Nuclear Information System (INIS)

    Zhou Jianbin; Huang Jinhua

    2004-01-01

    This paper introduces the application of the USB interface technique in constructing a nuclear spectrum acquisition system connected via the PC's USB port. The authors chose the USB100 module and the W77E58 microcontroller for the key work. The USB interface technique is easy to apply when the USB100 module is used: the module can be treated as a common I/O component by the microcontroller, and as a communication (COM) interface when connected to the PC's USB port. Only small modifications to the PC's program are needed for the new system with the USB100 module, allowing a smooth migration from the ISA and RS232 buses to the USB bus. (authors)

  18. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume is aimed at a wide range of readers and researchers in the area of Big Data, presenting recent advances in the field of Big Data Analysis as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data Analysis and recent techniques and environments for Big Data Analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting Parallel, Grid, and Cloud computing environments.

  19. Logical inference techniques for loop parallelization

    KAUST Repository

    Oancea, Cosmin E.; Rauchwerger, Lawrence

    2012-01-01

    This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECTCLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.
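
    A much-simplified run-time analogue of the independence condition S = Ø (the paper's USR and predicate machinery is far richer than this sketch):

    ```python
    import numpy as np

    def iterations_independent(writes, reads):
        """For a loop 'for i: a[w[i]] = a[r[i]] + 1', parallel execution is safe
        if no two iterations write the same element (no output dependence) and
        no written element is read by any iteration (no flow/anti dependence)."""
        writes, reads = np.asarray(writes), np.asarray(reads)
        no_write_write = np.unique(writes).size == writes.size
        no_read_write = np.intersect1d(writes, reads).size == 0
        return no_write_write and no_read_write

    r = np.array([0, 1, 2, 3])
    print(iterations_independent(np.array([4, 5, 6, 7]), r))   # True  -> parallelize
    print(iterations_independent(np.array([1, 2, 3, 4]), r))   # False -> dependence
    ```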

  1. Improved parallel solution techniques for the integral transport matrix method

    Energy Technology Data Exchange (ETDEWEB)

    Zerr, R. Joseph, E-mail: rjz116@psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA (United States); Azmy, Yousry Y., E-mail: yyazmy@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Burlington Engineering Laboratories, Raleigh, NC (United States)

    2011-07-01

    Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner. (author)
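
    A serial sketch of the red-black Gauss-Seidel idea on a 2D Poisson model problem; in the paper the color sets span subdomains distributed across processors, which this illustration does not attempt:

    ```python
    import numpy as np

    def red_black_gauss_seidel(u, f, h, sweeps=100):
        """Red-black Gauss-Seidel for -laplace(u) = f on a uniform grid.
        Every 'red' point depends only on 'black' neighbours and vice versa,
        so all points of one color can be updated concurrently."""
        for _ in range(sweeps):
            for color in (0, 1):
                for i in range(1, u.shape[0] - 1):
                    for j in range(1, u.shape[1] - 1):
                        if (i + j) % 2 == color:
                            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] +
                                              u[i, j-1] + u[i, j+1] + h*h*f[i, j])
        return u

    n = 33; h = 1.0 / (n - 1)
    u = red_black_gauss_seidel(np.zeros((n, n)), np.ones((n, n)), h)
    ```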

  2. Improved parallel solution techniques for the integral transport matrix method

    International Nuclear Information System (INIS)

    Zerr, R. Joseph; Azmy, Yousry Y.

    2011-01-01

    Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner. (author)

  3. Improvement of the repeatability of parallel transmission at 7T using interleaved acquisition in the calibration scan.

    Science.gov (United States)

    Kameda, Hiroyuki; Kudo, Kohsuke; Matsuda, Tsuyoshi; Harada, Taisuke; Iwadate, Yuji; Uwano, Ikuko; Yamashita, Fumio; Yoshioka, Kunihiro; Sasaki, Makoto; Shirato, Hiroki

    2017-12-04

    Respiration-induced phase shift affects B0/B1+ mapping repeatability in parallel transmission (pTx) calibration for 7T brain MRI, but is improved by breath-holding (BH). However, BH cannot be applied during long scans. To examine whether interleaved acquisition during calibration scanning could improve pTx repeatability and image homogeneity. Prospective. Nine healthy subjects. 7T MRI with a two-channel RF transmission system was used. Calibration scanning for B0/B1+ mapping was performed under sequential acquisition/free-breathing (Seq-FB), Seq-BH, and interleaved acquisition/FB (Int-FB) conditions. The B0 map was calculated with two echo times, and the B1+ map was obtained using the Bloch-Siegert method. Actual flip-angle imaging (AFI) and gradient echo (GRE) imaging were performed using pTx and quadrature-Tx (qTx). All scans were acquired in five sessions. Repeatability was evaluated using intersession standard deviation (SD) or coefficient of variance (CV), and in-plane homogeneity was evaluated using in-plane CV. A paired t-test with Bonferroni correction for multiple comparisons was used. The intersession CV/SDs for the B0/B1+ maps were significantly smaller in Int-FB than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The intersession CVs of the AFI and GRE images were significantly smaller in Int-FB, Seq-BH, and qTx than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The in-plane CVs of the AFI and GRE images in Seq-FB, Int-FB, and Seq-BH were significantly smaller than in qTx (Bonferroni-corrected P < 0.01 for all). Using interleaved acquisition during calibration scans of pTx for 7T brain MRI improved the repeatability of B0/B1+ mapping, AFI, and GRE images, without BH. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  4. The design and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    International Nuclear Information System (INIS)

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1987-05-01

    The ever increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of "events," is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator, and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, CPU power equivalent to several VAXs is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 11/750 on which an extensive user-friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed

  5. Logical inference techniques for loop parallelization

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2012-01-01

    the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S={}, where S is a set expression representing array indexes. Using...... of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECT-CLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers....

  6. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Parallelization is therefore needed to speed up a calculation process that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication with matrices of arbitrary size, because graph partitioning assumes square, symmetric matrices. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and implemented on the GPU (graphics processing unit).
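
    An illustrative row-partitioned matrix-vector product for a non-square matrix; a real hypergraph partitioner (the subject of this paper) would choose the partition to minimize communication volume for sparse matrices, whereas this sketch simply splits the rows evenly:

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def partitioned_matvec(A, x, n_parts=4):
        """y = A @ x computed block-row by block-row in parallel."""
        blocks = np.array_split(np.arange(A.shape[0]), n_parts)
        with ThreadPoolExecutor(max_workers=n_parts) as pool:
            parts = pool.map(lambda rows: A[rows] @ x, blocks)
        return np.concatenate(list(parts))

    A = np.random.rand(1000, 700)        # arbitrary (non-square) matrix
    x = np.random.rand(700)
    assert np.allclose(partitioned_matvec(A, x), A @ x)
    ```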

  7. Image acquisition and planimetry systems to develop wounding techniques in 3D wound model

    Directory of Open Access Journals (Sweden)

    Kiefer Ann-Kathrin

    2017-09-01

    Wound healing represents a complex biological repair process. Established 2D monolayers and wounding techniques investigate cell migration, but do not represent coordinated multi-cellular systems. We aim to use wound surface area measurements obtained from image acquisition and planimetry systems to establish our wounding technique and in vitro organotypic tissue. These systems will be used in our future wound healing treatment studies to assess the rate of wound closure in response to wound healing treatment with light therapy (photobiomodulation). The image acquisition and planimetry systems were developed, calibrated, and verified to measure wound surface area in vitro. The system consists of a recording system (Sony DSC HX60, 20.4 MPixel, 1/2.3″ CMOS sensor) and was calibrated with 1-mm scale paper. Macro photography with an optical zoom magnification of 2:1 achieves sufficient resolution to evaluate the 3-mm wound size and healing growth. The camera system was leveled with an aluminum construction to ensure constant distance and orientation of the images. The JPG-format images were processed with a planimetry system in MATLAB. Edge detection enables definition of the wounded area, and the wound area can be calculated with surface integrals. To separate the wounded area from the background, the image was filtered in several steps. Agar models, injured by several test persons with different levels of experience, were used as pilot data to test the planimetry software. These image acquisition and planimetry systems support the development of our wound healing research. The reproducibility of our wounding technique can be assessed by the variability in initial wound surface area. Also, wound healing treatment effects can be assessed by the change in rate of wound closure. These techniques represent the foundations of our wound model, wounding technique, and analysis systems in our ongoing studies in wound healing and therapy.
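
    A minimal planimetry sketch in Python (the study used edge detection in MATLAB; the threshold rule and all numbers below are illustrative only):

    ```python
    import numpy as np

    def wound_area_mm2(image, threshold, mm_per_px):
        """Segment the wound by intensity threshold, count pixels, and convert
        to mm^2 using a scale calibrated against 1-mm grid paper."""
        wound_mask = image < threshold            # assume the wound is darker
        return wound_mask.sum() * mm_per_px**2

    # Synthetic 3-mm-diameter circular wound imaged at 0.05 mm/pixel
    mm_per_px = 0.05
    yy, xx = np.mgrid[:200, :200]
    img = np.ones((200, 200))
    img[(yy - 100)**2 + (xx - 100)**2 < (1.5 / mm_per_px)**2] = 0.2
    print(wound_area_mm2(img, 0.5, mm_per_px))    # ~7.07 mm^2 (pi * 1.5^2)
    ```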

  8. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore systems and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  9. Marketing practitioner’s tacit knowledge acquisition using Repertory Grid Technique (RGT)

    Science.gov (United States)

    Azmi, Afdhal; Adriman, Ramzi

    2018-05-01

    The tacit knowledge of marketing practitioner experts is an excellent and priceless resource. It comprises their experience, skills, ideas, belief systems, insights and speculation brought into management decision-making. This expertise consists of individual intuitive judgments and personal shortcuts for completing work efficiently. The tacit knowledge of marketing practitioner experts is one of the best sources of solutions in marketing strategy, environmental analysis, product management and partner relationships. This paper proposes a method for acquiring the tacit knowledge of marketing practitioners using the Repertory Grid Technique (RGT). The RGT is a software-supported technique for tacit knowledge acquisition that provides a systematic approach to capturing and eliciting constructs from an individual. The results show that an understanding of RGT enables the TKE and MPE to obtain good results in capturing and acquiring the tacit knowledge of marketing practitioner experts.

  10. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from intensive computational requirement for detailed modeling investigations of real-world reservoirs. This paper presents the application of a massive parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  11. Parallel Reservoir Simulations with Sparse Grid Techniques and Applications to Wormhole Propagation

    KAUST Repository

    Wu, Yuanqing

    2015-09-08

    In this work, two topics of reservoir simulations are discussed. The first topic is the two-phase compositional flow simulation in hydrocarbon reservoir. The major obstacle that impedes the applicability of the simulation code is the long run time of the simulation procedure, and thus speeding up the simulation code is necessary. Two means are demonstrated to address the problem: parallelism in physical space and the application of sparse grids in parameter space. The parallel code can gain satisfactory scalability, and the sparse grids can remove the bottleneck of flash calculations. Instead of carrying out the flash calculation in each time step of the simulation, a sparse grid approximation of all possible results of the flash calculation is generated before the simulation. Then the constructed surrogate model is evaluated to approximate the flash calculation results during the simulation. The second topic is the wormhole propagation simulation in carbonate reservoir. In this work, different from the traditional simulation technique relying on the Darcy framework, we propose a new framework called Darcy-Brinkman-Forchheimer framework to simulate wormhole propagation. Furthermore, to process the large quantity of cells in the simulation grid and shorten the long simulation time of the traditional serial code, standard domain-based parallelism is employed, using the Hypre multigrid library. In addition to that, a new technique called “experimenting field approach” to set coefficients in the model equations is introduced. In the 2D dissolution experiments, different configurations of wormholes and a series of properties simulated by both frameworks are compared. We conclude that the numerical results of the DBF framework are more like wormholes and more stable than the Darcy framework, which is a demonstration of the advantages of the DBF framework. The scalability of the parallel code is also evaluated, and good scalability can be achieved. Finally, a mixed

  12. Fast MR image reconstruction for partially parallel imaging with arbitrary k-space trajectories.

    Science.gov (United States)

    Ye, Xiaojing; Chen, Yunmei; Lin, Wei; Huang, Feng

    2011-03-01

    Both acquisition and reconstruction speed are crucial for magnetic resonance (MR) imaging in clinical applications. In this paper, we present a fast reconstruction algorithm for SENSE in partially parallel MR imaging with arbitrary k-space trajectories. The proposed method is a combination of variable splitting, the classical penalty technique and the optimal gradient method. Variable splitting and the penalty technique reformulate the SENSE model with sparsity regularization as an unconstrained minimization problem, which can be solved by alternating two simple minimizations: One is the total variation and wavelet based denoising that can be quickly solved by several recent numerical methods, whereas the other one involves a linear inversion which is solved by the optimal first order gradient method in our algorithm to significantly improve the performance. Comparisons with several recent parallel imaging algorithms indicate that the proposed method significantly improves the computation efficiency and achieves state-of-the-art reconstruction quality.
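
    A sketch of the variable-splitting-plus-penalty structure described in the abstract, simplified to an identity sparsifying transform and a plain gradient step for the linear-inversion subproblem (the paper uses TV/wavelet denoising and the optimal gradient method); coils, mask and object are toy data:

    ```python
    import numpy as np

    def splitting_recon(d, mask, sens, lam=1e-3, beta=1.0, iters=100, lr=0.4):
        """Alternate (1) a closed-form shrinkage on the auxiliary variable z
        and (2) a gradient step on sum_c ||M F(S_c u) - d_c||^2 + beta||u - z||^2."""
        u = np.zeros(sens.shape[1:], dtype=complex)
        for _ in range(iters):
            mag = np.abs(u)
            z = np.where(mag > 0, u / np.maximum(mag, 1e-12), 0) \
                * np.maximum(mag - lam / beta, 0)
            grad = beta * (u - z)
            for c in range(sens.shape[0]):
                resid = mask * np.fft.fft2(sens[c] * u) - d[c]
                grad += np.conj(sens[c]) * np.fft.ifft2(mask * resid)
            u = u - lr * grad
        return u

    # Toy two-coil Cartesian example with roughly 2x undersampling
    ny = nx = 64
    obj = np.zeros((ny, nx)); obj[24:40, 20:44] = 1.0
    sens = np.stack([np.tile(np.linspace(1.0, 0.2, nx), (ny, 1)),
                     np.tile(np.linspace(0.2, 1.0, nx), (ny, 1))])
    mask = np.zeros((ny, nx), bool); mask[::2, :] = True; mask[28:36, :] = True
    d = np.stack([mask * np.fft.fft2(s * obj) for s in sens])
    recon = splitting_recon(d, mask, sens)
    ```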

  13. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  14. High-Resolution DCE-MRI of the Pituitary Gland Using Radial k-Space Acquisition with Compressed Sensing Reconstruction.

    Science.gov (United States)

    Rossi Espagnet, M C; Bangiyev, L; Haber, M; Block, K T; Babb, J; Ruggiero, V; Boada, F; Gonen, O; Fatterpekar, G M

    2015-08-01

    The pituitary gland is located outside of the blood-brain barrier. Dynamic T1-weighted contrast-enhanced imaging is considered the gold standard for evaluating this region. However, it does not allow assessment of the intrinsic permeability properties of the gland. Our aim was to demonstrate the utility of radial volumetric interpolated brain examination with the golden-angle radial sparse parallel technique to evaluate the permeability characteristics of the individual components of the pituitary gland (anterior and posterior gland and the median eminence) and areas of differential enhancement, and to optimize the study acquisition time. A retrospective study was performed in 52 patients (group 1, 25 patients with normal pituitary glands; and group 2, 27 patients with a known diagnosis of microadenoma). Radial volumetric interpolated brain examination sequences with the golden-angle radial sparse parallel technique were evaluated with an ROI-based method to obtain signal-time curves and permeability measures of individual normal structures within the pituitary gland and areas of differential enhancement. Statistical analyses were performed to assess differences in the permeability parameters of these individual regions and to optimize the study acquisition time. Signal-time curves from the posterior pituitary gland and median eminence demonstrated a faster wash-in and time of maximum enhancement with a lower peak of enhancement compared with the anterior pituitary gland (P < 0.05), and an acquisition time of 120 seconds was found to be adequate for dynamic pituitary gland evaluation. In the absence of a clinical history, differences in the signal-time curves allow easy distinction between a simple cyst and a microadenoma. This retrospective study confirms the ability of the golden-angle radial sparse parallel technique to evaluate the permeability characteristics of the pituitary gland and establishes 120 seconds as the ideal acquisition time for dynamic pituitary gland imaging. © 2015 by American Journal of Neuroradiology.

  15. Selection and integration of a network of parallel processors in the real time acquisition system of the 4π DIAMANT multidetector: modeling, realization and evaluation of the software installed on this network

    International Nuclear Information System (INIS)

    Guirande, F.

    1997-01-01

    The increase in sensitivity of 4π arrays such as EUROBALL or DIAMANT has led to an increase in the data flow rate into the data acquisition system. While at the electronics level the data flow has been distributed over several data acquisition buses, the processing power of the data processing system must be increased accordingly. This work concerns the modelling and implementation of the software allocated onto an architecture of parallel processors. Object analysis and formal methods were used; benchmarks and the future evolution of this architecture are presented. The thesis consists of two parts. Part A, devoted to 'Nuclear Spectroscopy with 4π multidetectors', contains a first chapter entitled 'The Physics of 4π multidetectors' and a second chapter entitled 'Integral architecture of 4π multidetectors'. Part B, devoted to 'Parallel acquisition system of DIAMANT', contains three chapters entitled 'Material architecture', 'Software architecture' and 'Validation and Performances'. Four appendices and a term glossary close this work. (author)

  16. Evaluation of an innovative radiographic technique - parallel profile radiography - to determine the dimensions of dentogingival unit

    Directory of Open Access Journals (Sweden)

    Sushama R Galgali

    2011-01-01

    Background: Maintenance of gingival health is a key factor in the longevity of the teeth as well as of restorations. The physiologic dentogingival unit (DGU), which is composed of the epithelial and connective tissue attachments of the gingiva, functions as a barrier against microbial entry into the periodontium. Invasion of this space triggers inflammation and causes periodontal destruction. Despite the clinical relevance of determining the length and width of the DGU, there is no standardized technique. The length of the DGU can be determined either from histologic preparations or by transgingival probing. Although the width can also be assessed by transgingival probing or with an ultrasound device, these methods are either invasive or expensive. Aims: This study sought to evaluate an innovative radiographic exploration technique, parallel profile radiography, for measuring the dimensions of the DGU on the labial surfaces of anterior teeth. Materials and Methods: Two radiographs were made using the long-cone parallel technique in ten individuals, one in frontal projection, while the second radiograph was a parallel profile radiograph obtained from a lateral position. The length and width of the DGU were measured using computer software. Transgingival (trans-sulcular) probing was performed on the same patients and the length of the DGU was measured. The values obtained by the two methods were compared. The Pearson product correlation coefficient was calculated to examine the agreement between the values obtained by PPRx and transgingival probing. Results: The mean biologic width by the parallel profile radiography (PPRx) technique was 1.72 mm (range 0.94-2.11 mm), while the mean thickness of the gingiva was 1.38 mm (range 0.92-1.77 mm). The mean biologic width by transgingival probing was 1.6 mm (range 0.8-2.2 mm). The Pearson product correlation coefficient (r) for the above values was 0.914; thus, a high degree of agreement exists between the PPRx and TGP techniques.

  17. The design, creation, and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    International Nuclear Information System (INIS)

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1986-01-01

    The ever increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of "events," is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator, and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, multi-VAX CPU power is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 750 on which an extensive user-friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed. 5 refs., 2 figs

  18. Techniques applied in design optimization of parallel manipulators

    CSIR Research Space (South Africa)

    Modungwa, D

    2011-11-01

  19. Suitability of helical multislice acquisition technique for routine unenhanced brain CT: an image quality study using a 16-row detector configuration

    Energy Technology Data Exchange (ETDEWEB)

    Hernalsteen, Danielle; Cosnard, Guy; Grandin, Cecile; Duprez, Thierry [Universite Catholique de Louvain, Cliniques Universitaires Saint-Luc, Department of Radiology and Medical Imaging, Brussels (Belgium); Robert, Annie [Public Health School, Universite Catholique de Louvain, Department of Epidemiologics and Medical Statistics, Brussels (Belgium); Vlassenbroek, Alain [CT Clinical Science, Philips Medical Systems, Cleveland, OH (United States)

    2007-04-15

    Subjective and objective image quality (IQ) criteria, radiation doses, and acquisition times were compared using incremental monoslice, incremental multislice, and helical multislice acquisition techniques for routine unenhanced brain computed tomography (CT). Twenty-four patients were examined by two techniques in the same imaging session using a 16-row CT system equipped with 0.75-mm-width detectors. Contiguous "native" 3-mm-thick slices were reconstructed for all acquisitions from four detectors for each slice (4 x 0.75 mm), with one channel available per detector. Two protocols were tailored to compare: (1) one-slice vs four-slice incremental images; (2) incremental vs helical four-slice images. Two trained observers independently scored 12 subjective items of IQ. Preference for a technique was assessed by a one-tailed t test and interobserver variation by a two-tailed t test. The two observers gave very close IQ scores for the three techniques, without significant interobserver variations. Measured IQ parameters failed to reveal any difference between techniques, and a radiation dose reduction of approximately one half was obtained by using the full 16-row configuration. Acquisition times were cumulatively shortened by using the multislice and the helical modality. (orig.)

  20. Suitability of helical multislice acquisition technique for routine unenhanced brain CT: an image quality study using a 16-row detector configuration

    International Nuclear Information System (INIS)

    Hernalsteen, Danielle; Cosnard, Guy; Grandin, Cecile; Duprez, Thierry; Robert, Annie; Vlassenbroek, Alain

    2007-01-01

    Subjective and objective image quality (IQ) criteria, radiation doses, and acquisition times were compared using incremental monoslice, incremental multislice, and helical multislice acquisition techniques for routine unenhanced brain computed tomography (CT). Twenty-four patients were examined by two techniques in the same imaging session using a 16-row CT system equipped with 0.75-mm-width detectors. Contiguous "native" 3-mm-thick slices were reconstructed for all acquisitions from four detectors for each slice (4 x 0.75 mm), with one channel available per detector. Two protocols were tailored to compare: (1) one-slice vs four-slice incremental images; (2) incremental vs helical four-slice images. Two trained observers independently scored 12 subjective items of IQ. Preference for a technique was assessed by a one-tailed t test and interobserver variation by a two-tailed t test. The two observers gave very close IQ scores for the three techniques, without significant interobserver variations. Measured IQ parameters failed to reveal any difference between techniques, and a radiation dose reduction of approximately one half was obtained by using the full 16-row configuration. Acquisition times were cumulatively shortened by using the multislice and the helical modality. (orig.)

  1. Automatic Parallelization An Overview of Fundamental Compiler Techniques

    CERN Document Server

    Midkiff, Samuel P

    2012-01-01

    Compiling for parallelism is a longstanding topic of compiler research. This book describes the fundamental principles of compiling "regular" numerical programs for parallelism. We begin with an explanation of analyses that allow a compiler to understand the interaction of data reads and writes in different statements and loop iterations during program execution. These analyses include dependence analysis, use-def analysis and pointer analysis. Next, we describe how the results of these analyses are used to enable transformations that make loops more amenable to parallelization, and

  2. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2001-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  3. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2002-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  4. High-energy physics software parallelization using database techniques

    International Nuclear Information System (INIS)

    Argante, E.; Van der Stok, P.D.V.; Willers, I.

    1997-01-01

    A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is for a large part transparent to the programmer, resulting in a higher level of abstraction than the native message passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with the performance of native PVM and MPI. (orig.)

  5. The Chateau de Cristal data acquisition system

    International Nuclear Information System (INIS)

    Villard, M.M.

    1987-05-01

    This data acquisition system is built on several dedicated data transfer busses: ADC data readout through the FERA bus, and parallel data processing in two VME crates. High data rates and selectivities are achieved via this acquisition structure and newly developed processing units. The system modularity allows various experiments with additional detectors.

  6. Spacing Techniques in Second Language Vocabulary Acquisition: Short-Term Gains vs. Long-Term Memory

    Science.gov (United States)

    Schuetze, Ulf

    2015-01-01

    This article reports the results of two experiments using the spacing technique (Leitner, 1972; Landauer & Bjork, 1978) in second language vocabulary acquisition. In the past, studies in this area have produced mixed results attempting to differentiate between massed, uniform and expanded intervals of spacing (Balota, Duchek, & Logan,…

  7. SU-F-J-220: Micro-CT Based Quantification of Mouse Brain Vasculature: The Effects of Acquisition Technique and Contrast Material

    International Nuclear Information System (INIS)

    Tipton, C; Lamba, M; Qi, Z; LaSance, K; Tipton, C

    2016-01-01

    Purpose: Cognitive impairment from radiation therapy to the brain may be linked to the loss of total blood volume in the brain. To account for brain injury, it is crucial to develop an understanding of blood volume loss as a result of radiation therapy. This study investigates µCT-based quantification of mouse brain vasculature, focusing on the effects of acquisition technique and contrast material. Methods: Four mice were scanned on a µCT scanner (Siemens Inveon). The reconstructed voxel size was 18 µm³ and all protocols were Hounsfield unit (HU) calibrated. The mice were injected with 40 mg of gold nanoparticles (MediLumine) or 100 µl of Exitron 12000 (Miltenyi Biotec). Two acquisition techniques were also performed. A single-kVp technique scanned the mouse once using an 80-kVp x-ray beam, and segmentation was based on a threshold of HU values. The dual-kVp technique scanned the mouse twice, at 50 kVp and 80 kVp, and segmentation was based on the ratio of the HU values of the two kVps. After image reconstruction and segmentation, the brain blood volume was determined as a percentage of the total brain volume. Results: For the single-kVp acquisition at 80 kVp, the brain blood volume averaged 3.5% for gold and 4.0% for Exitron 12000. Also at 80 kVp, the contrast-noise ratio was significantly better for images acquired with the gold nanoparticles (2.0) than for those acquired with the Exitron 12000 (1.4). The dual-kVp acquisition shows improved separation of skull from vasculature, but increased image noise. Conclusion: In summary, the effects of acquisition technique and contrast material for quantification of mouse brain vasculature showed that gold nanoparticles produced more consistent segmentation of brain vasculature than Exitron 12000. Also, dual-kVp acquisition may improve the accuracy of brain vasculature quantification, although the effect of noise amplification warrants further study.
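
    A hedged sketch of the two segmentation rules described above; the HU threshold and ratio bounds are placeholders, not values from the study:

    ```python
    import numpy as np

    def single_kvp_segmentation(hu80, threshold=500.0):
        """Single-kVp rule: voxels above an HU threshold are called vessel."""
        return hu80 > threshold

    def dual_kvp_segmentation(hu50, hu80, ratio_low=1.2, ratio_high=3.0):
        """Dual-kVp rule: classify voxels by the 50-kVp/80-kVp HU ratio, which
        differs between contrast agent and bone (bounds are illustrative)."""
        ratio = hu50 / np.maximum(hu80, 1.0)
        return (ratio > ratio_low) & (ratio < ratio_high)

    def blood_volume_percent(vessel_mask, brain_mask):
        """Brain blood volume as a percentage of the total brain volume."""
        return 100.0 * (vessel_mask & brain_mask).sum() / brain_mask.sum()

    hu80 = np.random.normal(40, 10, (64, 64, 64)); hu80[:8] = 900.0  # fake vessels
    brain = np.ones(hu80.shape, bool)
    print(blood_volume_percent(single_kvp_segmentation(hu80), brain))  # 12.5
    ```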

  8. SU-F-J-220: Micro-CT Based Quantification of Mouse Brain Vasculature: The Effects of Acquisition Technique and Contrast Material

    Energy Technology Data Exchange (ETDEWEB)

    Tipton, C; Lamba, M; Qi, Z; LaSance, K; Tipton, C [University of Cincinnati College of Medicine, Cincinnati, OH (United States)

    2016-06-15

    Purpose: Cognitive impairment from radiation therapy to the brain may be linked to the loss of total blood volume in the brain. To account for brain injury, it is crucial to develop an understanding of blood volume loss as a result of radiation therapy. This study investigates µCT-based quantification of mouse brain vasculature, focusing on the effects of acquisition technique and contrast material. Methods: Four mice were scanned on a µCT scanner (Siemens Inveon). The reconstructed voxel size was 18 µm³ and all protocols were Hounsfield unit (HU) calibrated. The mice were injected with 40 mg of gold nanoparticles (MediLumine) or 100 µl of Exitron 12000 (Miltenyi Biotec). Two acquisition techniques were also performed. A single-kVp technique scanned the mouse once using an 80-kVp x-ray beam, and segmentation was based on a threshold of HU values. The dual-kVp technique scanned the mouse twice, at 50 kVp and 80 kVp, and segmentation was based on the ratio of the HU values of the two kVps. After image reconstruction and segmentation, the brain blood volume was determined as a percentage of the total brain volume. Results: For the single-kVp acquisition at 80 kVp, the brain blood volume averaged 3.5% for gold and 4.0% for Exitron 12000. Also at 80 kVp, the contrast-noise ratio was significantly better for images acquired with the gold nanoparticles (2.0) than for those acquired with the Exitron 12000 (1.4). The dual-kVp acquisition shows improved separation of skull from vasculature, but increased image noise. Conclusion: In summary, the effects of acquisition technique and contrast material for quantification of mouse brain vasculature showed that gold nanoparticles produced more consistent segmentation of brain vasculature than Exitron 12000. Also, dual-kVp acquisition may improve the accuracy of brain vasculature quantification, although the effect of noise amplification warrants further study.

  9. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Kuojun, E-mail: kuojunyang@gmail.com; Guo, Lianping [School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu (China); School of Electrical and Electronic Engineering, Nanyang Technological University (Singapore); Tian, Shulin; Zeng, Hao [School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu (China); Qiu, Lei [School of Electrical and Electronic Engineering, Nanyang Technological University (Singapore)

    2014-04-15

    In traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and oscilloscope is blind to the input signal. Thus, this duration is called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in shorter time, dead time in traditional DSO that causes the loss of measured signal needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, three-dimensional waveform mapping (TWM) technique, which converts sampled data to displayed waveform, is proposed. With this technique, not only the process speed is improved, but also the probability information of waveform is displayed with different brightness. Thus, a three-dimensional waveform is shown to the user. To reduce processing time further, parallel TWM which processes several sampled points simultaneously, and dual-port random access memory based pipelining technique which can process one sampling point in one clock period are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) are used for storing sampled data alternately, thus the acquisition can continue during data processing. Therefore, the dead time of DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope and a combined pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experiment results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest value in all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing the seamless acquisition.
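
    A numpy sketch of the waveform-mapping idea: many trigger-aligned segments are scattered into a (voltage level, sample time) hit map whose per-cell counts form the third dimension, displayed as brightness; sizes and the test signal are made up:

    ```python
    import numpy as np

    def waveform_map(segments, n_levels=256, v_min=-1.5, v_max=1.5):
        """Accumulate trigger-aligned waveform segments into a hit map."""
        n_seg, n_pts = segments.shape
        hist = np.zeros((n_levels, n_pts), dtype=np.uint32)
        levels = np.clip(((segments - v_min) / (v_max - v_min)
                          * (n_levels - 1)).astype(int), 0, n_levels - 1)
        cols = np.broadcast_to(np.arange(n_pts), (n_seg, n_pts))
        np.add.at(hist, (levels, cols), 1)   # scatter-accumulate; parallelizable
        return hist

    # 10 000 noisy sine acquisitions; an infrequent runt would show up faintly
    t = np.linspace(0, 2 * np.pi, 500)
    segs = np.sin(t)[None, :] + 0.05 * np.random.randn(10000, 500)
    hmap = waveform_map(segs)
    ```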

  10. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    Science.gov (United States)

    Yang, Kuojun; Tian, Shulin; Zeng, Hao; Qiu, Lei; Guo, Lianping

    2014-04-01

    In traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and oscilloscope is blind to the input signal. Thus, this duration is called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in shorter time, dead time in traditional DSO that causes the loss of measured signal needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, three-dimensional waveform mapping (TWM) technique, which converts sampled data to displayed waveform, is proposed. With this technique, not only the process speed is improved, but also the probability information of waveform is displayed with different brightness. Thus, a three-dimensional waveform is shown to the user. To reduce processing time further, parallel TWM which processes several sampled points simultaneously, and dual-port random access memory based pipelining technique which can process one sampling point in one clock period are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) are used for storing sampled data alternately, thus the acquisition can continue during data processing. Therefore, the dead time of DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope and a combined pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experiment results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest value in all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing the seamless acquisition.

  11. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    International Nuclear Information System (INIS)

    Yang, Kuojun; Guo, Lianping; Tian, Shulin; Zeng, Hao; Qiu, Lei

    2014-01-01

    In traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and oscilloscope is blind to the input signal. Thus, this duration is called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in shorter time, dead time in traditional DSO that causes the loss of measured signal needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, three-dimensional waveform mapping (TWM) technique, which converts sampled data to displayed waveform, is proposed. With this technique, not only the process speed is improved, but also the probability information of waveform is displayed with different brightness. Thus, a three-dimensional waveform is shown to the user. To reduce processing time further, parallel TWM which processes several sampled points simultaneously, and dual-port random access memory based pipelining technique which can process one sampling point in one clock period are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) are used for storing sampled data alternately, thus the acquisition can continue during data processing. Therefore, the dead time of DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope and a combined pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experiment results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest value in all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing the seamless acquisition

  12. Advanced quadrature sets and acceleration and preconditioning techniques for the discrete ordinates method in parallel computing environments

    Science.gov (United States)

    Longoni, Gianluca

    In the nuclear science and engineering field, radiation transport calculations play a key role in the design and optimization of nuclear devices. The linear Boltzmann equation describes the angular, energy and spatial variations of the particle or radiation distribution. The discrete ordinates method (SN) is the most widely used technique for solving the linear Boltzmann equation. However, for realistic problems, the memory and computing time require the use of supercomputers. This research is devoted to the development of new formulations for the SN method, especially for highly angular dependent problems, in parallel environments. The present research work addresses two main issues affecting the accuracy and performance of SN transport theory methods: quadrature sets and acceleration techniques. New advanced quadrature techniques which allow for large numbers of angles with a capability for local angular refinement have been developed. These techniques have been integrated into the 3-D SN PENTRAN (Parallel Environment Neutral-particle TRANsport) code and applied to highly angular dependent problems, such as CT-Scan devices, which are widely used to obtain detailed 3-D images for industrial/medical applications. In addition, the accurate simulation of core physics and shielding problems with strong heterogeneities and transport effects requires the numerical solution of the transport equation. In general, the convergence rate of the solution methods for the transport equation is reduced for large problems with optically thick regions and scattering ratios approaching unity. To remedy this situation, new acceleration algorithms based on the Even-Parity Simplified SN (EP-SSN) method have been developed. A new stand-alone code system, PENSSn (Parallel Environment Neutral-particle Simplified SN), has been developed based on the EP-SSN method. The code is designed for parallel computing environments with spatial, angular and hybrid (spatial/angular) domain

  13. Diffusion MRI of the neonate brain: acquisition, processing and analysis techniques

    Energy Technology Data Exchange (ETDEWEB)

    Pannek, Kerstin [University of Queensland, Centre for Clinical Research, Brisbane (Australia); University of Queensland, School of Medicine, Brisbane (Australia); University of Queensland, Centre for Advanced Imaging, Brisbane (Australia); Guzzetta, Andrea [IRCCS Stella Maris, Department of Developmental Neuroscience, Calambrone Pisa (Italy); Colditz, Paul B. [University of Queensland, Centre for Clinical Research, Brisbane (Australia); University of Queensland, Perinatal Research Centre, Brisbane (Australia); Rose, Stephen E. [University of Queensland, Centre for Clinical Research, Brisbane (Australia); University of Queensland, Centre for Advanced Imaging, Brisbane (Australia); University of Queensland Centre for Clinical Research, Royal Brisbane and Women' s Hospital, Brisbane (Australia)

    2012-10-15

    Diffusion MRI (dMRI) is a popular noninvasive imaging modality for the investigation of the neonate brain. It enables the assessment of white matter integrity, and is particularly suited for studying white matter maturation in the preterm and term neonate brain. Diffusion tractography allows the delineation of white matter pathways and assessment of connectivity in vivo. In this review, we address the challenges of performing and analysing neonate dMRI. Of particular importance in dMRI analysis is adequate data preprocessing to reduce image distortions inherent to the acquisition technique, as well as artefacts caused by head movement. We present a summary of techniques that should be used in the preprocessing of neonate dMRI data, and demonstrate the effect of these important correction steps. Furthermore, we give an overview of available analysis techniques, ranging from voxel-based analysis of anisotropy metrics including tract-based spatial statistics (TBSS) to recently developed methods of statistical analysis addressing issues of resolving complex white matter architecture. We highlight the importance of resolving crossing fibres for tractography and outline several tractography-based techniques, including connectivity-based segmentation, the connectome and tractography mapping. These techniques provide powerful tools for the investigation of brain development and maturation. (orig.)

  14. Data acquisition and real-time bolometer tomography using LabVIEW RT

    International Nuclear Information System (INIS)

    Giannone, L.; Eich, T.; Fuchs, J.C.; Ravindran, M.; Ruan, Q.; Wenzel, L.; Cerna, M.; Concezzi, S.

    2011-01-01

    The currently available multi-core PCI Express systems running LabVIEW RT (real-time), equipped with FPGA cards for data acquisition and real-time parallel signal processing, greatly shorten the design and implementation cycles of large-scale, real-time data acquisition and control systems. This paper details a data acquisition and real-time tomography system using LabVIEW RT for the bolometer diagnostic on the ASDEX Upgrade tokamak (Max Planck Institute for Plasma Physics, Garching, Germany). The transformation matrix for tomography is pre-computed based on the geometry of distributed radiation sources and sensors. A parallelized iterative algorithm is adapted to solve a constrained linear system for the reconstruction of the radiated power density. Real-time bolometer tomography is performed with LabVIEW RT; with multi-core machines executing the parallelized algorithm, a cycle time well below 1 ms is achieved.
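
    The constrained linear system described above can be solved by many parallelizable iterative schemes. As a minimal sketch (assumed names and sizes, not the authors' LabVIEW implementation), a projected Landweber iteration reconstructs a non-negative emissivity x from channel signals y given a precomputed geometry matrix G; both matrix-vector products parallelize naturally across rows.

      import numpy as np

      rng = np.random.default_rng(0)
      n_chan, n_pix = 64, 256                     # bolometer channels, image pixels
      G = rng.random((n_chan, n_pix)) / n_pix     # stand-in geometry matrix
      y = G @ rng.random(n_pix)                   # simulated channel signals

      step = 1.0 / np.linalg.norm(G, 2) ** 2      # step size ensuring convergence
      x = np.zeros(n_pix)
      for _ in range(2000):
          x += step * (G.T @ (y - G @ x))         # Landweber gradient step
          np.maximum(x, 0.0, out=x)               # project onto the constraint x >= 0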

  15. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues concerning architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environments, parallel algorithms, and performance enhancement methods are examined and recommended solutions are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load-balancing methods, the mapping of parallel algorithms, operating system functions, application libraries, and multidiscipline interactions are investigated to ensure high performance. Finally, the authors assess the potential of optical and neural technologies for developing future supercomputers.

  16. Reducing contrast contamination in radial turbo-spin-echo acquisitions by combining a narrow-band KWIC filter with parallel imaging.

    Science.gov (United States)

    Neumann, Daniel; Breuer, Felix A; Völker, Michael; Brandt, Tobias; Griswold, Mark A; Jakob, Peter M; Blaimer, Martin

    2014-12-01

    Cartesian turbo spin-echo (TSE) and radial TSE images are usually reconstructed by assembling data containing different contrast information into a single k-space. This approach results in mixed contrast contributions in the images, which may reduce their diagnostic value. The goal of this work is to improve the image contrast from radial TSE acquisitions by reducing the contribution of signals with undesired contrast information. Radial TSE acquisitions allow the reconstruction of multiple images with different T2 contrasts using the k-space weighted image contrast (KWIC) filter. In this work, the image contrast is improved by reducing the bandwidth of the KWIC filter. Data for the reconstruction of a single image are selected from within a small temporal range around the desired echo time. The resulting dataset is undersampled and, therefore, an iterative parallel imaging algorithm is applied to remove aliasing artifacts. Radial TSE images of the human brain reconstructed with the proposed method show an improved contrast when compared with Cartesian TSE images or radial TSE images with conventional KWIC reconstructions. The proposed method provides multi-contrast images from radial TSE data with contrasts similar to multi spin-echo images. Contaminations from unwanted contrast weightings are strongly reduced. © 2014 Wiley Periodicals, Inc.

  17. A lab-made interface for acquisition of instrumental analog signals at the parallel port of a microcomputer

    Directory of Open Access Journals (Sweden)

    Edvaldo da Nóbrega Gaião

    2004-10-01

    A lab-made interface for the acquisition of instrumental analog signals between 0 and 5 V, at frequencies up to 670 kHz, through the parallel port of a microcomputer is described. Since it uses few and small components, it was built into the connector of a printer parallel cable. Its performance was evaluated by monitoring the signals of four different instruments; similar analytical curves were obtained with the interface and from readings of the instruments' displays. Because the components are cheap (~US$35.00) and easy to obtain, the proposed interface is a simple and economical alternative for data acquisition in small laboratories for routine work, research and teaching.

  18. Partial fourier and parallel MR image reconstruction with integrated gradient nonlinearity correction.

    Science.gov (United States)

    Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A

    2016-06-01

    To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and to demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by postreconstruction image-domain-based GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional postreconstruction correction. Phantom and in vivo results demonstrate that integrated GNL correction reduces the image blurring introduced by conventional GNL correction, while still correcting GNL-induced coarse-scale geometrical distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional postreconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016. © 2015 Wiley Periodicals, Inc.

  19. Instrument Variables for Reducing Noise in Parallel MRI Reconstruction

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2017-01-01

    Generalized autocalibrating partially parallel acquisition (GRAPPA) has been a widely used parallel MRI technique. However, noise deteriorates the reconstructed image as the reduction factor increases, or even at low reduction factors for some noisy datasets. Noise originating in the scanner propagates through the fitting and interpolation procedures of GRAPPA and degrades the quality of the final reconstructed image. The basic idea we propose to improve GRAPPA is to remove this noise from a system-identification perspective. In this paper, we first analyze the GRAPPA noise problem from a noisy input-output system perspective; then, a new framework based on the errors-in-variables (EIV) model is developed for analyzing the noise generation mechanism in GRAPPA and for designing a concrete method, instrument variables (IV) GRAPPA, to remove the noise. The proposed EIV framework opens the possibility that noiseless GRAPPA reconstruction could be achieved by existing methods that solve the EIV problem other than the IV method. Experimental results show that the proposed reconstruction algorithm removes noise better than conventional GRAPPA, as validated with both phantom and in vivo brain data.
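
    For reference, the fitting step whose noise propagation the paper analyses can be sketched as a least-squares calibration of GRAPPA weights from autocalibration (ACS) data; the toy shapes and data below are assumptions, and the noise in the source matrix A is precisely what turns this fit into the errors-in-variables problem the authors address.

      import numpy as np

      rng = np.random.default_rng(1)
      nc, ny, nx = 4, 32, 32                      # coils, ACS phase / frequency encodes
      acs = rng.standard_normal((nc, ny, nx)) + 1j * rng.standard_normal((nc, ny, nx))

      # For R = 2, predict each odd k-space line from the even lines directly
      # above and below it, jointly over all coils (2x1 kernel).
      src, tgt = [], []
      for y in range(1, ny - 1, 2):
          for x in range(nx):
              src.append(acs[:, [y - 1, y + 1], x].ravel())   # source neighbourhood
              tgt.append(acs[:, y, x])                        # target samples
      A, b = np.array(src), np.array(tgt)
      weights, *_ = np.linalg.lstsq(A, b, rcond=None)         # (2*nc, nc) GRAPPA kernel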

  20. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of new results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  1. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms cluster multidimensional data by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVIDIA's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than was possible before.
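
    The seeding step being parallelized is compact enough to state in full; the following serial Python sketch (an illustration, not the released C++ code) implements the D²-weighted sampling of k-means++. The distance computation inside the loop is the part the record distributes across GPU threads, OpenMP threads, or XMT hardware threads.

      import numpy as np

      def kmeans_pp_seeds(X, k, rng):
          """Return k initial centers chosen by k-means++ D^2 sampling."""
          centers = [X[rng.integers(len(X))]]      # first seed: uniform choice
          for _ in range(k - 1):
              # squared distance from every point to its nearest existing seed
              d2 = ((X[:, None, :] - np.asarray(centers)[None]) ** 2).sum(-1).min(axis=1)
              centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
          return np.asarray(centers)

      rng = np.random.default_rng(0)
      seeds = kmeans_pp_seeds(rng.random((1000, 3)), 8, rng)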

  2. Parallel sparse direct solver for integrated circuit simulation

    CERN Document Server

    Chen, Xiaoming; Yang, Huazhong

    2017-01-01

    This book describes algorithmic methods and parallelization techniques to design a parallel sparse direct solver which is specifically targeted at integrated circuit simulation problems. The authors describe a complete flow and detailed parallel algorithms of the sparse direct solver. They also show how to improve the performance by simple but effective numerical techniques. The sparse direct solver techniques described can be applied to any SPICE-like integrated circuit simulator and have been proven to be high-performance in actual circuit simulation. Readers will benefit from the state-of-the-art parallel integrated circuit simulation techniques described in this book, especially the latest parallel sparse matrix solution techniques.
    · Introduces complicated algorithms of sparse linear solvers, using concise principles and simple examples, without complex theory or lengthy derivations;
    · Describes a parallel sparse direct solver that can be adopted to accelerate any SPICE-like integrated circuit simulator.
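
    As context for readers (a generic SciPy sketch, not the book's solver), a sparse direct solve factorizes the circuit matrix once and then re-solves cheaply for each new right-hand side, which is the usage pattern that dominates SPICE-like transient simulation; the tridiagonal test matrix below is an assumption.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 1000
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
      lu = spla.splu(A)             # one-time sparse LU factorization (SuperLU)
      x = lu.solve(np.ones(n))      # cheap repeated solves, e.g. per time step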

  3. Improving image quality of parallel phase-shifting digital holography

    International Nuclear Information System (INIS)

    Awatsuji, Yasuhiro; Tahara, Tatsuki; Kaneko, Atsushi; Koyama, Takamasa; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2008-01-01

    The authors propose parallel two-step phase-shifting digital holography to improve the image quality of parallel phase-shifting digital holography. The proposed technique doubles the effective number of hologram pixels in comparison to the conventional parallel four-step technique. This increase in the number of pixels makes it possible to improve the quality of the reconstructed image in parallel phase-shifting digital holography. Numerical simulation and a preliminary experiment were conducted, and the effectiveness of the technique was confirmed. The proposed technique is more practical than the conventional parallel phase-shifting digital holography, because the digital holographic system based on the proposed technique has a simpler composition.
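
    For orientation (a sketch of the conventional four-step baseline, not the authors' two-step method), the parallel four-step technique combines the four demosaicked holograms, one per reference phase shift of 0, 90, 180 and 270 degrees, into a complex object wave pixel by pixel:

      import numpy as np

      def four_step_object_wave(I0, I90, I180, I270):
          # complex object wave, up to a constant reference amplitude
          return (I0 - I180) + 1j * (I90 - I270)

    The proposed two-step variant needs only two phase-shifted values per super-pixel instead of four, which is why it doubles the effective pixel count of the hologram.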

  4. A wavelet-based regularized reconstruction algorithm for SENSE parallel MRI with applications to neuroimaging

    International Nuclear Information System (INIS)

    Chaari, L.; Pesquet, J.Ch.; Chaari, L.; Ciuciu, Ph.; Benazza-Benyahia, A.

    2011-01-01

    To reduce scanning time and/or improve spatial/temporal resolution in some Magnetic Resonance Imaging (MRI) applications, parallel MRI acquisition techniques with multiple-coil acquisition have emerged since the early 1990s as powerful imaging methods that allow a faster acquisition process. In these techniques, the full-FOV image has to be reconstructed from the resulting acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used Sensitivity Encoding (SENSE) method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity profiles. In this paper, we aim at achieving accurate image reconstruction under degraded experimental conditions (low magnetic field and high reduction factor), in which neither the SENSE method nor Tikhonov regularization in the image domain gives convincing results. To this end, we present a novel method for SENSE-based reconstruction which proceeds with regularization in the complex wavelet domain by promoting sparsity. The proposed approach relies on a fast algorithm that enables the minimization of regularized non-differentiable criteria including more general penalties than a classical l1 term. To further enhance the reconstructed image quality, local convex constraints are added to the regularization process. In vivo human brain experiments carried out on Gradient-Echo (GRE) anatomical and Echo Planar Imaging (EPI) functional MRI data at 1.5 T indicate that our algorithm provides reconstructed images with reduced artifacts for high reduction factors. (authors)
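
    For context, the unregularized SENSE baseline that the wavelet-regularized method improves upon solves, at every folded pixel, a small least-squares system built from the coil sensitivities. A minimal sketch for reduction factor R = 2 follows; the array shapes and the aliasing model are assumptions, not the paper's algorithm.

      import numpy as np

      def sense_unfold(folded, sens):
          """folded: (nc, ny//2, nx) aliased coil images for R = 2;
             sens:   (nc, ny, nx) coil sensitivity maps."""
          nc, ny2, nx = folded.shape
          out = np.zeros((2 * ny2, nx), complex)
          for y in range(ny2):
              for x in range(nx):
                  S = sens[:, [y, y + ny2], x]    # (nc, 2) encoding matrix
                  rho, *_ = np.linalg.lstsq(S, folded[:, y, x], rcond=None)
                  out[y, x], out[y + ny2, x] = rho
          return out

    When the sensitivities are noisy or the system is ill-conditioned, this per-pixel inversion amplifies noise, which is what motivates the regularized formulation of the paper.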

  5. Usefulness of 3D-CE renal artery MRA using parallel imaging with array spatial sensitivity encoding technique (ASSET)

    International Nuclear Information System (INIS)

    Shibasaki, Toshiro; Seno, Masafumi; Takoi, Kunihiro; Sato, Hirofumi; Hino, Tsuyoshi

    2003-01-01

    In this study of 3D contrast-enhanced MR angiography of the renal artery using the array spatial sensitivity encoding technique (ASSET), the acquisition time per phase was considerably shortened. Using the technique of spectral inversion at lipids (SPECIAL) together with ASSET, image quality was improved through enhanced contrast. The timing of acquisition was determined by a test injection: we started acquiring the MR angiography 2 seconds after the arrival of the maximum enhancement of the test injection at the upper abdominal aorta near the renal artery. As a result, parenchymal enhancement was not visible and depiction of the segmental artery was possible in 14 (82%) of 17 patients. At the present time we consider it better not to use a fractional number of excitations (NEX) together with ASSET, as it may cause various artifacts. (author)

  6. Accelerated cardiovascular magnetic resonance of the mouse heart using self-gated parallel imaging strategies does not compromise accuracy of structural and functional measures

    Directory of Open Access Journals (Sweden)

    Dörries Carola

    2010-07-01

    Background: Self-gated dynamic cardiovascular magnetic resonance (CMR) enables non-invasive visualization of the heart and accurate assessment of cardiac function in mouse models of human disease. However, self-gated CMR requires the acquisition of large datasets to ensure accurate and artifact-free reconstruction of cardiac cines and is therefore hampered by long acquisition times, putting high demands on the physiological stability of the animal. For this reason, we evaluated the feasibility of accelerating the data collection using the parallel imaging technique SENSE with respect to both anatomical definition and cardiac function quantification. Results: Findings obtained from accelerated datasets were compared to fully sampled reference data. Our results revealed only minor differences in image quality of short- and long-axis cardiac cines: small anatomical structures (papillary muscles and the aortic valve) and left-ventricular (LV) remodeling after myocardial infarction (MI) were accurately detected even for 3-fold accelerated data acquisition using a four-element phased-array coil. Quantitative analysis of LV cardiac function (end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF)) and LV mass in healthy and infarcted animals revealed no substantial deviations from reference (fully sampled) data for all investigated acceleration factors, with deviations ranging from 2% to 6% in healthy animals and from 2% to 8% in infarcted mice for the highest acceleration factor of 3.0. CNR calculations performed between the LV myocardial wall and the LV cavity revealed a maximum CNR decrease of 50% for 3-fold accelerated data acquisition compared with the fully sampled acquisition. Conclusions: We have demonstrated the feasibility of accelerated self-gated retrospective CMR in mice using the parallel imaging technique SENSE. The proposed method led to considerably reduced acquisition times while preserving high image quality and accurate quantification of cardiac function.

  7. A Linguistic Technique for Marking and Analyzing Syntactic Parallelism.

    Science.gov (United States)

    Sackler, Jessie Brome

    Sentences in rhetoric texts were used in this study to determine a way in which rhetorical syntactic parallelism can be analyzed. A tagmemic analysis determined tagmas which were parallel or identical or similar to one another. These were distinguished from tagmas which were identical because of the syntactic constraints of the language…

  8. Model-based Sensor Data Acquisition and Management

    OpenAIRE

    Aggarwal, Charu C.; Sathe, Saket; Papaioannou, Thanasis G.; Jeung, Ho Young; Aberer, Karl

    2012-01-01

    In recent years, due to the proliferation of sensor networks, there has been a genuine need to research techniques for sensor data acquisition and management. To this end, a large number of techniques have emerged that advocate model-based sensor data acquisition and management. These techniques use mathematical models for performing various day-to-day tasks involved in managing sensor data. In this chapter, we survey the state-of-the-art techniques for model-based sensor data acquisition and management.
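
    One recurring idea in this literature can be sketched in a few lines (an illustration under assumed parameters, not a specific method from the survey): a sensor transmits a reading only when a shared linear model mispredicts it by more than a tolerance eps, so most samples never need to be communicated.

      def model_based_acquire(values, eps=0.5):
          """Yield the (time, value) pairs a linear model cannot predict."""
          it = iter(values)
          last_t, last_v, slope = 0, next(it), 0.0
          yield last_t, last_v                        # always report the first sample
          for t, v in enumerate(it, start=1):
              if abs(v - (last_v + slope * (t - last_t))) > eps:
                  slope = (v - last_v) / (t - last_t) # refit model, transmit sample
                  last_t, last_v = t, v
                  yield t, v

      kept = list(model_based_acquire([0.0, 0.1, 0.2, 3.0, 3.1, 3.2]))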

  9. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    Science.gov (United States)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

    The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera involves big-data processing and is often time-consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique uses interrogation-window projections instead of the two-dimensional field of luminous intensity. This simplification accelerates ZNCC computation by up to 28.8 times compared to ZNCC calculated directly, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases (a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow) are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity-field calculation, with further correction using more accurate techniques.
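
    The core simplification, replacing the 2-D window by its 1-D projections before correlating, can be sketched as follows (an illustration with assumed inputs, not the authors' implementation):

      import numpy as np

      def zncc(a, b):
          a = a - a.mean()
          b = b - b.mean()
          return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

      def projection_zncc(win_a, win_b):
          """Correlate row and column projections instead of full 2-D windows."""
          row_score = zncc(win_a.sum(axis=1), win_b.sum(axis=1))
          col_score = zncc(win_a.sum(axis=0), win_b.sum(axis=0))
          return 0.5 * (row_score + col_score)

    Each projection reduces an N x N correlation to two length-N correlations, which is the source of the reported speed-up.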

  10. Comments on “Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes”

    Directory of Open Access Journals (Sweden)

    Mark B. Yeary

    2009-01-01

    This is a comment article on the publication “Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes”, Rovini et al. (2009). We note that similar work has been reported in the literature before, and that this previous work has not been cited correctly, for example Gunnam et al. (2006, 2007). This brief note serves to clarify these issues.

  11. Magnetic resonance imaging acquisition techniques intended to decrease movement artefact in paediatric brain imaging: a systematic review

    International Nuclear Information System (INIS)

    Woodfield, Julie; Kealey, Susan

    2015-01-01

    Attaining paediatric brain images of diagnostic quality can be difficult because of young age or neurological impairment. The use of anaesthesia to reduce movement in MRI increases clinical risk and cost, while CT, though faster, exposes children to potentially harmful ionising radiation. MRI acquisition techniques that aim to decrease movement artefact may allow diagnostic paediatric brain imaging without sedation or anaesthesia. We conducted a systematic review to establish the evidence base for ultra-fast sequences and sequences using oversampling of k-space in paediatric brain MR imaging. Techniques were assessed for imaging time, occurrence of movement artefact, the need for sedation, and either image quality or diagnostic accuracy. We identified 24 relevant studies. We found that ultra-fast techniques had shorter imaging acquisition times compared to standard MRI. Techniques using oversampling of k-space required equal or longer imaging times than standard MRI. Both ultra-fast sequences and those using oversampling of k-space reduced movement artefact compared with standard MRI in unsedated children. Assessment of overall diagnostic accuracy was difficult because of the heterogeneous patient populations, imaging indications, and reporting methods of the studies. In children with shunt-treated hydrocephalus there is evidence that ultra-fast MRI is sufficient for the assessment of ventricular size. (orig.)

  12. Magnetic resonance imaging acquisition techniques intended to decrease movement artefact in paediatric brain imaging: a systematic review

    Energy Technology Data Exchange (ETDEWEB)

    Woodfield, Julie [University of Edinburgh, Child Life and Health, Edinburgh (United Kingdom); Kealey, Susan [Western General Hospital, Department of Neuroradiology, Edinburgh (United Kingdom)

    2015-08-15

    Attaining paediatric brain images of diagnostic quality can be difficult because of young age or neurological impairment. The use of anaesthesia to reduce movement in MRI increases clinical risk and cost, while CT, though faster, exposes children to potentially harmful ionising radiation. MRI acquisition techniques that aim to decrease movement artefact may allow diagnostic paediatric brain imaging without sedation or anaesthesia. We conducted a systematic review to establish the evidence base for ultra-fast sequences and sequences using oversampling of k-space in paediatric brain MR imaging. Techniques were assessed for imaging time, occurrence of movement artefact, the need for sedation, and either image quality or diagnostic accuracy. We identified 24 relevant studies. We found that ultra-fast techniques had shorter imaging acquisition times compared to standard MRI. Techniques using oversampling of k-space required equal or longer imaging times than standard MRI. Both ultra-fast sequences and those using oversampling of k-space reduced movement artefact compared with standard MRI in unsedated children. Assessment of overall diagnostic accuracy was difficult because of the heterogeneous patient populations, imaging indications, and reporting methods of the studies. In children with shunt-treated hydrocephalus there is evidence that ultra-fast MRI is sufficient for the assessment of ventricular size. (orig.)

  13. Automated, parallel mass spectrometry imaging and structural identification of lipids

    DEFF Research Database (Denmark)

    Ellis, Shane R.; Paine, Martin R.L.; Eijkel, Gert B.

    2018-01-01

    We report a method that enables automated data-dependent acquisition of lipid tandem mass spectrometry data in parallel with a high-resolution mass spectrometry imaging experiment. The method does not increase the total image acquisition time and is combined with automatic structural assignments. This lipidome-per-pixel approach automatically identified and validated 104 unique molecular lipids and their spatial locations from rat cerebellar tissue.

  14. A SPECT reconstruction method for extending parallel to non-parallel geometries

    International Nuclear Information System (INIS)

    Wen Junhai; Liang Zhengrong

    2010-01-01

    Due to its simplicity, parallel-beam geometry is usually assumed for the development of image reconstruction algorithms. The established reconstruction methodologies are then extended to fan-beam, cone-beam and other non-parallel geometries for practical application. This situation occurs for quantitative SPECT (single photon emission computed tomography) imaging in inverting the attenuated Radon transform. Novikov reported an explicit parallel-beam formula for the inversion of the attenuated Radon transform in 2000. Thereafter, a formula for fan-beam geometry was reported by Bukhgeim and Kazantsev (2002, Preprint No. 99, Sobolev Institute of Mathematics). At the same time, we presented a formula for varying focal-length fan-beam geometry. Sometimes the inversion is so implicit that an explicit reconstruction formula cannot be obtained for the non-parallel geometries. In this work, we propose a unified reconstruction framework for extending parallel-beam geometry to any non-parallel geometry using ray-driven techniques. Studies by computer simulations demonstrated the accuracy of the presented unified reconstruction framework for extending parallel-beam to non-parallel geometries in inverting the attenuated Radon transform.

  15. Parallel Sn iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (Sn) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional Sn transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial Sn algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and the payoff appears substantial.

  16. Simultaneous Multislice Echo Planar Imaging With Blipped Controlled Aliasing in Parallel Imaging Results in Higher Acceleration: A Promising Technique for Accelerated Diffusion Tensor Imaging of Skeletal Muscle.

    Science.gov (United States)

    Filli, Lukas; Piccirelli, Marco; Kenkel, David; Guggenberger, Roman; Andreisek, Gustav; Beck, Thomas; Runge, Val M; Boss, Andreas

    2015-07-01

    The aim of this study was to investigate the feasibility of accelerated diffusion tensor imaging (DTI) of skeletal muscle using echo planar imaging (EPI) applying simultaneous multislice excitation with a blipped controlled aliasing in parallel imaging results in higher acceleration unaliasing technique. After federal ethics board approval, the lower leg muscles of 8 healthy volunteers (mean [SD] age, 29.4 [2.9] years) were examined in a clinical 3-T magnetic resonance scanner using a 15-channel knee coil. The EPI was performed at a b value of 500 s/mm2 without slice acceleration (conventional DTI) as well as with 2-fold and 3-fold acceleration. Fractional anisotropy (FA) and mean diffusivity (MD) were measured in all 3 acquisitions. Fiber tracking performance was compared between the acquisitions regarding the number of tracks, average track length, and anatomical precision using multivariate analysis of variance and Mann-Whitney U tests. Acquisition time was 7:24 minutes for conventional DTI, 3:53 minutes for 2-fold acceleration, and 2:38 minutes for 3-fold acceleration. Overall FA and MD values ranged from 0.220 to 0.378 and from 1.595 to 1.829 × 10-3 mm2/s, respectively. Two-fold acceleration yielded similar FA and MD values (P ≥ 0.901) and similar fiber tracking performance compared with conventional DTI. Three-fold acceleration resulted in comparable MD (P = 0.199) but higher FA values (P = 0.006) and significantly impaired fiber tracking in the soleus and tibialis anterior muscles (reduced number of tracks). Two-fold accelerated simultaneous multislice EPI thus allows DTI of skeletal muscle with similar image quality and quantification accuracy of diffusion parameters. This may increase the clinical applicability of muscle anisotropy measurements.

  17. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed-memory highly parallel computer. The parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique was applied that maps histories to processors dynamically and maps the control process to a dedicated processor. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)
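
    The dynamic history-to-processor mapping can be illustrated with a present-day analogue (a Python sketch with a toy tally, not the AP1000 implementation): a worker pool pulls batches of histories as processors become free, which is what balances the load when histories have unequal cost.

      import numpy as np
      from multiprocessing import Pool

      def run_batch(args):
          seed, n = args                            # independent RNG stream per batch
          rng = np.random.default_rng(seed)
          # toy history: exponential path lengths; tally penetrations beyond 5 mfp
          return int(np.sum(rng.exponential(1.0, n) > 5.0))

      if __name__ == "__main__":
          batches = [(seed, 10_000) for seed in range(64)]
          with Pool() as pool:                      # dynamic mapping of batches
              total = sum(pool.imap_unordered(run_batch, batches))
          print(total / (64 * 10_000))              # deep-penetration estimate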

  18. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    Energy Technology Data Exchange (ETDEWEB)

    Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.

  19. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    International Nuclear Information System (INIS)

    Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER

  20. Evaluation of Medium Spatial Resolution BRDF-Adjustment Techniques Using Multi-Angular SPOT4 (Take5) Acquisitions

    Directory of Open Access Journals (Sweden)

    Martin Claverie

    2015-09-01

    High-resolution sensor Surface Reflectance (SR) data are affected by surface anisotropy but are difficult to adjust because of the low temporal frequency of the acquisitions and the low angular sampling. This paper evaluates five high-spatial-resolution Bidirectional Reflectance Distribution Function (BRDF) adjustment techniques. The evaluation is based on the noise level of the SR Time Series (TS) corrected to a normalized geometry (nadir view, 45° sun zenith angle) extracted from the multi-angular acquisitions of SPOT4 over three study areas (one in Arizona, two in France) during the five-month SPOT4 (Take5) experiment. Two uniform techniques (Cst, for Constant, and Av, for Average), relying on the Vermote–Justice–Bréon (VJB) BRDF method, assume no variation in space of the BRDF shape. Two methods (VI-dis, for NDVI-based disaggregation, and LC-dis, for Land-Cover-based disaggregation) are based on disaggregation of the MODIS-derived VJB BRDF parameters using a vegetation index and land cover, respectively. The last technique (LUM, for Look-Up Map) relies on the MCD43 MODIS BRDF products and a crop-type data layer. The VI-dis technique produced the lowest level of noise, corresponding to the most effective adjustment: noise reductions from directional to normalized SR TS of 40% and 50% on average for the red and near-infrared bands, respectively. The uniform techniques displayed very good results, suggesting that a simple and uniform BRDF-shape assumption is good enough to adjust the BRDF in such geometric configurations (the view zenith angle varies from nadir to 25°). The most complex techniques relying on land cover (LC-dis and LUM) displayed contrasting results depending on the land cover.

  1. Parallel SOR methods with a parabolic-diffusion acceleration technique for solving an unstructured-grid Poisson equation on 3D arbitrary geometries

    Science.gov (United States)

    Zapata, M. A. Uh; Van Bang, D. Pham; Nguyen, K. D.

    2016-05-01

    This paper presents a parallel algorithm for the finite-volume discretisation of the Poisson equation on three-dimensional arbitrary geometries. The proposed method is formulated using a 2D horizontal block domain decomposition and interprocessor data communication techniques with the Message Passing Interface (MPI). The horizontal unstructured-grid cells are reordered according to the neighbouring relations and decomposed into blocks using a load-balanced distribution to give all processors an equal number of elements. In this algorithm, two parallel successive over-relaxation (SOR) methods are presented: a multi-colour ordering technique for unstructured grids based on distributed memory, and a block method using a reordering index following similar ideas to the partitioning for structured grids. In all cases, the parallel algorithms are combined with an iterative acceleration solver, based on a parabolic-diffusion equation introduced to obtain faster solutions of the linear systems arising from the discretisation. Numerical results are given to evaluate the performance of the methods, showing speedups better than linear.
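
    The multi-colour idea is easiest to see on a structured grid (a red-black Python sketch under assumed zero boundary data, not the paper's unstructured solver): points of one colour have no neighbours of the same colour, so all of their SOR updates are independent and can proceed in parallel.

      import numpy as np

      n, h, omega = 64, 1.0 / 65, 1.8             # interior points, spacing, relaxation
      u = np.zeros((n + 2, n + 2))                # zero Dirichlet boundary
      f = np.ones((n + 2, n + 2))                 # right-hand side of -laplace(u) = f
      for sweep in range(500):
          for colour in (0, 1):                   # red points, then black points
              for i in range(1, n + 1):
                  for j in range(1, n + 1):
                      if (i + j) % 2 == colour:
                          gs = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                       u[i, j - 1] + u[i, j + 1] + h * h * f[i, j])
                          u[i, j] += omega * (gs - u[i, j])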

  2. Evaluation of alias-less reconstruction by pseudo-parallel imaging in a phase-scrambling fourier transform technique

    International Nuclear Information System (INIS)

    Ito, Satoshi; Kawawa, Yasuhiro; Yamada, Yoshifumi

    2010-01-01

    We propose an image reconstruction technique in which parallel image reconstruction is performed based on the sensitivity encoding (SENSE) algorithm using only a single set of signals. The signal obtained in the phase-scrambling Fourier transform (PSFT) imaging technique can be transformed to a signal described by the Fresnel transform of the object, which is known as the diffracted wave-front equation of the object in acoustics or optics. Since the Fresnel transform is a convolution integral on the object space, the space where the PSFT signal exists can be considered to lie in both the Fourier domain and the object domain. This notable feature means that weighting functions corresponding to the sensitivities of radiofrequency (RF) coils can be approximately imposed in the PSFT signal space. Therefore, two folded images with different weighting functions can be obtained from a single set of signals, and image reconstruction based on the SENSE parallel imaging algorithm is possible using this series of folded images. Simulation and experimental studies showed that almost alias-free images can be synthesized from a single signal that does not satisfy the sampling theorem. (author)

  3. MRI of degenerative lumbar spine disease: comparison of non-accelerated and parallel imaging

    International Nuclear Information System (INIS)

    Noelte, Ingo; Gerigk, Lars; Brockmann, Marc A.; Kemmling, Andre; Groden, Christoph

    2008-01-01

    Parallel imaging techniques such as GRAPPA have been introduced to optimize image quality and acquisition time. For spinal imaging in a clinical setting no data exist on the equivalency of conventional and parallel imaging techniques. The purpose of this study was to determine whether T1- and T2-weighted GRAPPA sequences are equivalent to conventional sequences for the evaluation of degenerative lumbar spine disease in terms of image quality and artefacts. In patients with clinically suspected degenerative lumbar spine disease two neuroradiologists independently compared sagittal GRAPPA (acceleration factor 2, time reduction approximately 50%) and non-GRAPPA images (25 patients) and transverse GRAPPA (acceleration factor 2, time reduction approximately 50%) and non-GRAPPA images (23 lumbar segments in six patients). Comparative analyses included the minimal diameter of the spinal canal, disc abnormalities, foraminal stenosis, facet joint degeneration, lateral recess, nerve root compression and osteochondrotic vertebral and endplate changes. Image inhomogeneity was evaluated by comparing the nonuniformity in the two techniques. Image quality was assessed by grading the delineation of pathoanatomical structures. Motion and aliasing artefacts were classified from grade 1 (severe) to grade 5 (absent). There was no significant difference between GRAPPA and non-accelerated MRI in the evaluation of degenerative lumbar spine disease (P > 0.05), and there was no difference in the delineation of pathoanatomical structures. For inhomogeneity there was a trend in favour of the conventional sequences. No significant artefacts were observed with either technique. The GRAPPA technique can be used effectively to reduce scanning time in patients with degenerative lumbar spine disease while preserving image quality. (orig.)

  4. The FINUDA data acquisition system

    International Nuclear Information System (INIS)

    Cerello, P.; Marcello, S.; Filippini, V.; Fiore, L.; Gianotti, P.; Raimondo, A.

    1996-07-01

    A parallel scalable Data Acquisition System, based on VME, has been developed for use in the FINUDA experiment, scheduled to run at the DAPHNE machine at Frascati starting from 1997. The acquisition software runs on embedded RTPC 8067 processors using the LynxOS operating system. The readout of event fragments is coordinated by a suitable trigger Supervisor. Data read by different controllers are transported via a dedicated bus to a Global Event Builder running on a UNIX machine. Commands from and to VME processors are sent via socket-based network protocols. The network hardware is presently Ethernet, but it can easily be changed to optical fiber.

  5. Dual isotope, single acquisition parathyroid imaging

    International Nuclear Information System (INIS)

    Triantafillou, M.; McDonald, H.J.

    1998-01-01

    Nuclear Medicine parathyroid imaging using Thallium-201 (Tl) and Technetium-99m (Tc) is an often used imaging modality for the detection of parathyroid adenomas and hyperparathyroidism. The conventional Tl/Tc subtraction technique requires 2 separate injections and acquisitions, which are then normalised and subtracted from each other. This lengthy technique is uncomfortable for patients and can result in false-positive scan results due to patient movement between and during the acquisition process. We propose a simplified injection and single-acquisition technique that reduces the chance of movement and thus reduces the chance of false-positive scan results. The technique involves the injection of Tc followed by the Tl injection 10 minutes later. After a further 10 min wait, imaging is performed using a dual isotope acquisition, with window 1 (W1) set on 140 keV, 20% width, 5% off peak, and W2 peaked for 70 keV, 20% width, acquired for 10 minutes. We have imaged 27 patients with this technique; 15 had positive parathyroid imaging. Of the 15, 11 had positive ultrasound correlation. Of the remaining 4, 2 have had positive surgical findings for adenomas; the other 2 are awaiting follow-up. Of the 12 patients with negative parathyroid imaging, 2 have been shown to be false-negative at surgery. In conclusion, the single-acquisition technique suggested by us is a valid method of imaging parathyroids that reduces the chance of false-positive results due to movement

  6. Time-resolved 3D pulmonary perfusion MRI: comparison of different k-space acquisition strategies at 1.5 and 3 T.

    Science.gov (United States)

    Attenberger, Ulrike I; Ingrisch, Michael; Dietrich, Olaf; Herrmann, Karin; Nikolaou, Konstantin; Reiser, Maximilian F; Schönberg, Stefan O; Fink, Christian

    2009-09-01

    Time-resolved pulmonary perfusion MRI requires both high temporal and spatial resolution, which can be achieved by using several nonconventional k-space acquisition techniques. The aim of this study is to compare the image quality of time-resolved 3D pulmonary perfusion MRI with different k-space acquisition techniques in healthy volunteers at 1.5 and 3 T. Ten healthy volunteers underwent contrast-enhanced time-resolved 3D pulmonary MRI at 1.5 and 3 T using the following k-space acquisition techniques: (a) generalized autocalibrating partially parallel acquisition (GRAPPA) with an internal acquisition of reference lines (IRS), (b) GRAPPA with a single "external" acquisition of reference lines (ERS) before the measurement, and (c) a combination of GRAPPA with an internal acquisition of reference lines and view sharing (VS). The spatial resolution was kept constant at both field strengths to exclusively evaluate the influence of the temporal resolution achieved with the different k-space sampling techniques on image quality. The temporal resolutions were 2.11 seconds IRS, 1.31 seconds ERS, and 1.07 seconds VS at 1.5 T, and 2.04 seconds IRS, 1.30 seconds ERS, and 1.19 seconds VS at 3 T. Image quality was rated by 2 independent radiologists with regard to signal intensity, perfusion homogeneity, artifacts (e.g., wrap-around, noise), and visualization of pulmonary vessels using a 3-point scale (1 = nondiagnostic, 2 = moderate, 3 = good). Furthermore, the signal-to-noise ratio in the lungs was assessed. At 1.5 T the lowest image quality (sum score: 154) was observed for the ERS technique and the highest quality for the VS technique (sum score: 201). In contrast, at 3 T images acquired with VS were hampered by strong artifacts and image quality was rated significantly inferior (sum score: 137) compared with IRS (sum score: 180) and ERS (sum score: 174). Comparing 1.5 and 3 T, in particular the overall rating of the IRS technique (sum score: 180) was very similar at both field strengths.

  7. Evaluation of onboard hyperspectral-image compression techniques for a parallel push-broom sensor

    Energy Technology Data Exchange (ETDEWEB)

    Briles, S.

    1996-04-01

    A single hyperspectral imaging sensor can produce frames with spatially continuous rows of differing, but adjacent, spectral wavelengths. If the frame sample rate of the sensor is such that subsequent hyperspectral frames are spatially shifted by one row, then the sensor can be thought of as a parallel (in wavelength) push-broom sensor. An examination of data compression techniques for such a sensor is presented. The compression techniques are intended to be implemented onboard a space-based platform and to have implementation speeds that match the data rate of the sensor. The data partitions examined extend from individually operating on a single hyperspectral frame to operating on a data cube comprising the two spatial axes and the spectral axis. The compression algorithms investigated utilize JPEG-based image compression, wavelet-based compression and differential pulse code modulation. Algorithm performance is quantitatively presented in terms of root-mean-squared error and root-mean-squared correlation coefficient error. Implementation issues are considered in algorithm development.
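
    Of the families examined, differential pulse code modulation is the simplest to sketch (an illustration with assumed integer frames, not the evaluated flight code): each push-broom frame is predicted by its predecessor, and only the residual is passed to the entropy coder.

      import numpy as np

      def dpcm_encode(frames):
          frames = np.asarray(frames, dtype=np.int32)
          return frames[0], np.diff(frames, axis=0)     # first frame + residuals

      def dpcm_decode(first, residuals):
          return np.concatenate([first[None],
                                 first + np.cumsum(residuals, axis=0)])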

  8. Parallel data grabbing card based on PCI bus RS422

    International Nuclear Information System (INIS)

    Zhang Zhenghui; Shen Ji; Wei Dongshan; Chen Ziyu

    2005-01-01

    This article briefly introduces the development of a parallel data-grabbing card based on RS422 and the PCI bus. It can be used to capture 14-bit parallel data at high speed from devices with an RS422 interface. The data acquisition method based on the PCI protocol, the functions and usage of the chips employed, and the design principles of the hardware and software are presented. (authors)

  9. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
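
    The decomposition strategy generalizes readily; as a small-scale analogue (a Python sketch, not the hypercube code), a segmented Sieve of Eratosthenes lets a pool of workers sieve disjoint ranges independently after a cheap serial pass up to sqrt(N).

      import numpy as np
      from multiprocessing import Pool

      N = 1_000_000
      base = np.arange(int(N ** 0.5) + 1)
      is_p = np.ones(len(base), bool)
      is_p[:2] = False
      for p in range(2, int(len(base) ** 0.5) + 1):
          if is_p[p]:
              is_p[p * p::p] = False
      small_primes = base[is_p]                     # serial pass: primes up to sqrt(N)

      def sieve_segment(bounds):
          lo, hi = bounds                           # sieve the range [lo, hi)
          seg = np.ones(hi - lo, bool)
          for p in small_primes:
              start = max(p * p, (lo + p - 1) // p * p)
              seg[start - lo::p] = False
          return int(seg.sum())

      if __name__ == "__main__":
          edges = np.linspace(int(N ** 0.5) + 1, N, 9, dtype=int)
          with Pool() as pool:                      # one independent segment per task
              counts = pool.map(sieve_segment, list(zip(edges[:-1], edges[1:])))
          print(len(small_primes) + sum(counts))    # prints pi(N) = 78498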

  10. Data Acquisition with GPUs: The DAQ for the Muon g-2 Experiment at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Gohn, W. [Kentucky U.

    2016-11-15

    Graphical Processing Units (GPUs) have recently become a valuable computing tool for the acquisition of data at high rates and for a relatively low cost. The devices work by parallelizing the code into thousands of threads, each executing a simple process, such as identifying pulses from a waveform digitizer. The CUDA programming library can be used to effectively write code to parallelize such tasks on NVIDIA GPUs, providing a significant upgrade in performance over CPU-based acquisition systems. The muon g-2 experiment at Fermilab is relying heavily on GPUs to process its data. The data acquisition system for this experiment must have the ability to create deadtime-free records from 700 μs muon spills at a raw data rate of 18 GB per second. Data will be collected using 1296 channels of μTCA-based 800 MSPS, 12-bit waveform digitizers and processed in a layered array of networked commodity processors with 24 GPUs working in parallel to perform a fast recording of the muon decays during the spill. The described data acquisition system is currently being constructed, and will be fully operational before the start of the experiment in 2017.

  11. Data acquisition systems at Fermilab

    International Nuclear Information System (INIS)

    Votava, M.

    1999-01-01

    Experiments at Fermilab require an ongoing program of development for high speed, distributed data acquisition systems. The physics program at the lab has recently started the operation of a Fixed Target run in which experiments are running the DART[1] data acquisition system. The CDF and D0 experiments are preparing for the start of the next Collider run in mid 2000. Each will read out on the order of 1 million detector channels. In parallel, future experiments such as BTeV R&D and MINOS have already started prototype and test beam work. BTeV in particular has challenging data acquisition system requirements, with an input rate of 1500 Gbytes/sec into Level 1 buffers and a logging rate of 200 Mbytes/sec. This paper will present a general overview of these data acquisition systems on three fronts: those currently in use, those to be deployed for the Collider Run in 2000, and those proposed for future experiments. It will primarily focus on the CDF and D0 architectures and tools.

  12. The HyperCP data acquisition system

    International Nuclear Information System (INIS)

    Kaplan, D.M.

    1997-06-01

    For the HyperCP experiment at Fermilab, we have assembled a data acquisition system that records on up to 45 Exabyte 8505 tape drives in parallel at up to 17 MB/s. During the beam spill, data are acquired from the front-end digitization systems at ∼60 MB/s via five parallel data paths. The front-end systems achieve a typical readout deadtime of ∼1 μs per event, allowing operation at a 75-kHz trigger rate with less than about 30% deadtime. Event building and tapewriting are handled by 15 Motorola MVME167 processors in 5 VME crates.

  13. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we use Landweber-Kaczmarz iteration, together with additional sparsity constraints to improve the overall results.

  14. Seismic data acquisition systems

    International Nuclear Information System (INIS)

    Kolvankar, V.G.; Nadre, V.N.; Rao, D.S.

    1989-01-01

    Details of seismic data acquisition systems developed at the Bhabha Atomic Research Centre, Bombay are reported. The seismic signals acquired belong to different signal bandwidths within the band from 0.02 Hz to 250 Hz. All these acquisition systems are built around a unique technique of recording multichannel data onto a single track of an audio tape in digital form. The techniques by which these signals in different frequency bands were acquired and recorded are described. The method of detecting seismic signals and its performance are also discussed. Seismic signals acquired in different set-ups are illustrated. Time-indexing systems for different set-ups and multichannel waveform display systems, which form an essential part of the data acquisition systems, are also discussed. (author). 13 refs., 6 figs., 1 tab

  15. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  16. Simulation of neutron transport equation using parallel Monte Carlo for deep penetration problems

    International Nuclear Information System (INIS)

    Bekar, K. K.; Tombakoglu, M.; Soekmen, C. N.

    2001-01-01

    Neutron transport equation is simulated using parallel Monte Carlo method for deep penetration neutron transport problem. Monte Carlo simulation is parallelized by using three different techniques; direct parallelization, domain decomposition and domain decomposition with load balancing, which are used with PVM (Parallel Virtual Machine) software on LAN (Local Area Network). The results of parallel simulation are given for various model problems. The performances of the parallelization techniques are compared with each other. Moreover, the effects of variance reduction techniques on parallelization are discussed

  17. ENHANCING THE INTERNATIONALIZATION OF THE GLOBAL INSURANCE MARKET: CHANGING DRIVERS OF MERGERS AND ACQUISITIONS

    Directory of Open Access Journals (Sweden)

    D. Rasshyvalov

    2014-03-01

    Full Text Available One-third of worldwide mergers and acquisitions involve firms from different countries, making M&A one of the key drivers of internationalization. Over the past five years, cross-border merger and acquisition activity in insurance has closely paralleled the global financial crisis.

  18. DAPHNE: a parallel multiprocessor data acquisition system for nuclear physics

    International Nuclear Information System (INIS)

    Welch, L.C.

    1984-01-01

    This paper describes a project to meet the data acquisition needs of a new accelerator, ATLAS, being built at Argonne National Laboratory. ATLAS is a heavy-ion superconducting linear accelerator providing beam energies up to 25 MeV/A with a relative spread in beam energy as good as 0.0001 and a time spread of less than 100 ps. Details about the hardware front end, the command language, the data structure, and the flow of event treatment are covered.

  19. The JET fast central acquisition and trigger system

    International Nuclear Information System (INIS)

    Blackler, K.; Edwards, A.W.

    1994-01-01

    This paper describes a new data acquisition system at JET which uses Texas Instruments TMS320C40 parallel digital signal processors and the HELIOS parallel operating system to reduce the large amounts of experimental data produced by fast diagnostics. This unified system features a two-level trigger scheme which performs real-time activity detection together with asynchronous event classification and selection, providing automated data reduction during an experiment. The system's application to future fusion machines with almost continuous operation is discussed.

  20. Technological Similarity, Post-acquisition R&D Reorganization, and Innovation Performance in Horizontal Acquisition

    DEFF Research Database (Denmark)

    Colombo, Massimo G.; Rabbiosi, Larissa

    2014-01-01

    This paper aims to disentangle the mechanisms through which technological similarity between acquiring and acquired firms influences innovation in horizontal acquisitions. We develop a theoretical model that links technological similarity to: (i) two key aspects of post-acquisition reorganization of acquired R&D operations – the rationalization of the R&D operations and the replacement of the R&D top manager, and (ii) two intermediate effects that are closely associated with the post-acquisition innovation performance of the combined firm – improvements in R&D productivity and disruptions in R&D personnel. We rely on PLS techniques to test our theoretical model using detailed information on 31 horizontal acquisitions in high- and medium-tech industries. Our results indicate that in horizontal acquisitions, technological similarity negatively affects post-acquisition innovation performance...

  1. Microwave tomography global optimization, parallelization and performance evaluation

    CERN Document Server

    Noghanian, Sima; Desell, Travis; Ashtari, Ali

    2014-01-01

    This book provides a detailed overview of the use of global optimization and parallel computing in microwave tomography. It focuses on techniques based on global optimization and electromagnetic numerical methods. The authors present parallelization techniques for homogeneous and heterogeneous computing architectures, covering both high-performance and general-purpose computers. The book also discusses the multi-level optimization technique, a hybrid genetic algorithm, and its application in breast cancer imaging.

  2. A review of snapshot multidimensional optical imaging: Measuring photon tags in parallel

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Liang, E-mail: gaol@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois at Urbana–Champaign, 306 N. Wright St., Urbana, IL 61801 (United States); Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana–Champaign, 405 North Mathews Avenue, Urbana, IL 61801 (United States); Wang, Lihong V., E-mail: lhwang@wustl.edu [Optical imaging laboratory, Department of Biomedical Engineering, Washington University in St. Louis, One Brookings Dr., MO, 63130 (United States)

    2016-02-29

    Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons' spatial coordinates, emittance angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or by parallel acquisition. Compared with scanning-based imagers, parallel acquisition, also dubbed snapshot imaging, has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally discuss their state-of-the-art implementations and applications.

  3. Decomposition based parallel processing technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2000-01-01

    In practical design studies, most designers solve multidisciplinary problems with a complex design structure. These problems involve hundreds of analyses and thousands of variables, and the sequence in which the processes are solved affects the speed of the total design cycle. It is therefore very important for the designer to reorder the original design processes to minimize total cost and time. This is accomplished by decomposing the large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems, using a genetic algorithm to raise design efficiency, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology.

  4. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1998-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast, large-scale nuclear data acquisition and control system is discussed in detail. The optimum composition mode of this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods, and the software composition under Windows 3.2 are also described. One-, two- and three-dimensional spectra measured by this system are demonstrated.

  5. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1997-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast, large-scale nuclear data acquisition and control system is discussed in detail. The optimum composition mode of this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods, and the software composition under Windows 3.2 are also described. One-, two- and three-dimensional spectra measured by this system are demonstrated.

  6. Post-acquisition data mining techniques for LC-MS/MS-acquired data in drug metabolite identification.

    Science.gov (United States)

    Dhurjad, Pooja Sukhdev; Marothu, Vamsi Krishna; Rathod, Rajeshwari

    2017-08-01

    Metabolite identification is a crucial part of the drug discovery process. LC-MS/MS-based metabolite identification has gained widespread use, but the data acquired by the LC-MS/MS instrument are complex, making their interpretation troublesome. Fortunately, advancements in data mining techniques have simplified the process of data interpretation with improved mass accuracy, and they provide a potentially selective, sensitive, accurate and comprehensive way to identify metabolites. In this review, we discuss the targeted (extracted ion chromatogram, mass defect filter, product ion filter, neutral loss filter and isotope pattern filter) and untargeted (control sample comparison, background subtraction and metabolomic approaches) post-acquisition data mining techniques that facilitate drug metabolite identification. We also discuss the importance of an integrated data mining strategy.

  7. LAMPF nuclear chemistry data acquisition system

    International Nuclear Information System (INIS)

    Giesler, G.C.

    1983-01-01

    The LAMPF Nuclear Chemistry Data Acquisition System (DAS) is designed to provide both real-time control of data acquisition and facilities for data processing for a large variety of users. It consists of a PDP-11/44 connected to a parallel CAMAC branch highway as well as to a large number of peripherals. The various types of radiation counters and spectrometers and their connections to the system will be described. Also discussed will be the various methods of connection considered and their advantages and disadvantages. The operation of the system from the standpoint of both hardware and software will be described as well as plans for the future

  8. Smart acquisition EELS

    International Nuclear Information System (INIS)

    Sader, Kasim; Schaffer, Bernhard; Vaughan, Gareth; Brydson, Rik; Brown, Andy; Bleloch, Andrew

    2010-01-01

    We have developed a novel acquisition methodology for the recording of electron energy loss spectra (EELS) using a scanning transmission electron microscope (STEM): 'Smart Acquisition'. Smart Acquisition allows the independent control of probe scanning procedures and the simultaneous acquisition of analytical signals such as EELS. The original motivation for this work arose from the need to control the electron dose experienced by beam-sensitive specimens whilst maintaining a sufficiently high signal-to-noise ratio in the EEL signal for the extraction of useful analytical information (such as energy loss near edge spectral features) from relatively undamaged areas. We have developed a flexible acquisition framework which separates beam position data input, beam positioning, and EELS acquisition. In this paper we demonstrate the effectiveness of this technique on beam-sensitive thin films of amorphous aluminium trifluoride. Smart Acquisition has been used to expose lines to the electron beam, followed by analysis of the structures created by line-integrating EELS acquisitions, and the results are compared to those derived from a standard EELS linescan. High angle annular dark-field images show clear reductions in damage for the Smart Acquisition areas compared to the conventional linescan, and the Smart Acquisition low loss EEL spectra are more representative of the undamaged material than those derived using a conventional linescan. Atomically resolved EELS of all four elements of CaNdTiO show the high resolution capabilities of Smart Acquisition.

  9. Smart acquisition EELS

    Energy Technology Data Exchange (ETDEWEB)

    Sader, Kasim, E-mail: k.sader@leeds.ac.uk [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Schaffer, Bernhard [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Department of Physics and Astronomy, University of Glasgow (United Kingdom); Vaughan, Gareth [Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Brydson, Rik [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Brown, Andy [Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Bleloch, Andrew [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Department of Engineering, University of Liverpool, Liverpool (United Kingdom)

    2010-07-15

    We have developed a novel acquisition methodology for the recording of electron energy loss spectra (EELS) using a scanning transmission electron microscope (STEM): 'Smart Acquisition'. Smart Acquisition allows the independent control of probe scanning procedures and the simultaneous acquisition of analytical signals such as EELS. The original motivation for this work arose from the need to control the electron dose experienced by beam-sensitive specimens whilst maintaining a sufficiently high signal-to-noise ratio in the EEL signal for the extraction of useful analytical information (such as energy loss near edge spectral features) from relatively undamaged areas. We have developed a flexible acquisition framework which separates beam position data input, beam positioning, and EELS acquisition. In this paper we demonstrate the effectiveness of this technique on beam-sensitive thin films of amorphous aluminium trifluoride. Smart Acquisition has been used to expose lines to the electron beam, followed by analysis of the structures created by line-integrating EELS acquisitions, and the results are compared to those derived from a standard EELS linescan. High angle annular dark-field images show clear reductions in damage for the Smart Acquisition areas compared to the conventional linescan, and the Smart Acquisition low loss EEL spectra are more representative of the undamaged material than those derived using a conventional linescan. Atomically resolved EELS of all four elements of CaNdTiO show the high resolution capabilities of Smart Acquisition.

  10. Dynamic MRI of the liver with parallel acquisition technique. Characterization of focal liver lesions and analysis of the hepatic vasculature in a single MRI session

    International Nuclear Information System (INIS)

    Heilmaier, C.; Sutter, R.; Lutz, A.M.; Willmann, J.K.; Seifert, B.

    2008-01-01

    Purpose: to retrospectively evaluate the performance of breath-hold contrast-enhanced 3D dynamic parallel gradient echo MRI (pMRT) for the characterization of focal liver lesions (standard of reference: histology) and for the analysis of the hepatic vasculature (standard of reference: contrast-enhanced 64-detector row computed tomography; MSCT) in a single MRI session. Materials and methods: two blinded readers independently analyzed preoperative pMRT data sets (1.5-T MRI) of 45 patients (23 men, 22 women; 28-77 years; average age, 48 years) with a total of 68 focal liver lesions with regard to the image quality of the hepatic arteries, portal and hepatic veins, the presence of variant anatomy of the hepatic vasculature, and the presence of portal vein thrombosis and hemodynamically significant arterial stenosis. In addition, both readers were asked to identify and characterize focal liver lesions. Imaging parameters of pMRT were TR/TE/matrix/slice thickness/acquisition time of 3.1 ms/1.4 ms/384 x 224/4 mm/15-17 s. MSCT was performed with a pitch of 1.2, an effective slice thickness of 1 mm and a matrix of 512 x 512. Results: based on histology, the 68 liver lesions comprised 42 hepatocellular carcinomas (HCC), 20 metastases, 3 cholangiocellular carcinomas (CCC), 1 dysplastic nodule, 1 focal nodular hyperplasia (FNH) and 1 atypical hemangioma. Overall, diagnostic accuracy was high for both readers (91-100%) in the characterization of these focal liver lesions, with excellent interobserver agreement (κ values of 0.89 [metastases], 0.97 [HCC] and 1 [CCC]). On average, the image quality of all vessels under consideration was rated good or excellent in 89% (reader 1) and 90% (reader 2) of cases. Anatomical variants of the hepatic arteries, hepatic veins and portal vein, as well as thrombosis of the portal vein, were reliably detected by pMRT. Significant arterial stenosis was found with a sensitivity between 86% and 100% and an excellent interobserver agreement (κ ...

  11. Front-end data processing the SLD data acquisition system

    International Nuclear Information System (INIS)

    Nielsen, B.S.

    1986-07-01

    The data acquisition system for the SLD detector will make extensive use of parallel processing at the front-end level. Fastbus acquisition modules are being built with powerful processing capabilities for calibration, data reduction and further pre-processing of the large amount of analog data handled by each module. This paper describes the read-out electronics chain and the data pre-processing system adapted for most of the detector channels, exemplified by the central drift chamber waveform digitization and processing system.

  12. A Survey of Model-based Sensor Data Acquisition and Management

    OpenAIRE

    Aggarwal, Charu C.; Sathe, Saket; Papaioannou, Thanasis; Jeung, Hoyoung; Aberer, Karl

    2013-01-01

    In recent years, due to the proliferation of sensor networks, there has been a genuine need for research into techniques for sensor data acquisition and management. To this end, a large number of techniques have emerged that advocate model-based sensor data acquisition and management. These techniques use mathematical models to perform the various day-to-day tasks involved in managing sensor data. In this chapter, we survey the state-of-the-art techniques for model-based sensor data acquisition...

  13. Parallel processing based decomposition technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2001-01-01

    In practical design studies, most designers solve multidisciplinary problems within large and complex design systems. These problems involve hundreds of analyses and thousands of variables, and the sequence in which the processes are solved affects the speed of the total design cycle. It is therefore very important for the designer to reorder the original design processes to minimize total computational cost. This is accomplished by decomposing the large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems, using a genetic algorithm to raise design efficiency, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology.

  14. Evaluation of Medium Spatial Resolution BRDF-Adjustment Techniques Using Multi-Angular SPOT4 (Take5) Acquisitions

    OpenAIRE

    Claverie, Martin; Vermote, Eric; Franch, Belen; He, Tao; Hagolle, Olivier; Kadiri, Mohamed; Masek, Jeff

    2015-01-01

    High-resolution sensor Surface Reflectance (SR) data are affected by surface anisotropy but are difficult to adjust because of the low temporal frequency of the acquisitions and the low angular sampling. This paper evaluates five high spatial resolution Bidirectional Reflectance Distribution Function (BRDF) adjustment techniques. The evaluation is based on the noise level of the SR Time Series (TS) corrected to a normalized geometry (nadir view, 45° sun zenith angle) extracted from the multi-...

  15. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from the discretization of scientific computing problems described by systems of partial differential equations. We show how to obtain a discrete finite-dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems encountered in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm, as well as computational speeds in excess of a billion floating-point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
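
    For reference, a minimal serial version of the conjugate gradient iteration that such codes parallelize; in a domain-decomposed setting, the matrix-vector product and the dot products are the kernels distributed across subdomains (the 1D Poisson-style test matrix is a placeholder):

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10):
          """Plain CG for symmetric positive-definite A; the A @ p product
          and the dot products are the parallelizable kernels."""
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs = r @ r
          for _ in range(len(b)):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      n = 50
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian
      x = conjugate_gradient(A, np.ones(n))
      print(np.linalg.norm(A @ x - np.ones(n)))              # ~0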

  16. Kalman Filter Tracking on Parallel Architectures

    International Nuclear Information System (INIS)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2016-01-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPUs, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.
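
    A minimal sketch of the Kalman filter predict/update step that such trackers apply at each detector layer (a generic 1D constant-velocity example; the matrices are placeholders, not an experiment's track model):

      import numpy as np

      def kalman_step(x, P, z, F, Q, H, R):
          """One predict/update cycle: propagate the state to the next
          measurement surface, then blend in the measurement z."""
          # Predict
          x = F @ x
          P = F @ P @ F.T + Q
          # Update
          S = H @ P @ H.T + R                    # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
          x = x + K @ (z - H @ x)
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P

      # Constant-velocity toy model: state = (position, velocity).
      F = np.array([[1.0, 1.0], [0.0, 1.0]])     # unit time step
      Q = 1e-4 * np.eye(2)                       # process noise
      H = np.array([[1.0, 0.0]])                 # measure position only
      R = np.array([[0.25]])                     # measurement noise
      x, P = np.zeros(2), np.eye(2)
      for z in [1.1, 1.9, 3.2, 4.0]:
          x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
      print(x)  # estimated (position, velocity)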

  17. Professional Parallel Programming with C# Master Parallel Extensions with NET 4

    CERN Document Server

    Hillar, Gastón

    2010-01-01

    Expert guidance for those programming today's multi-core PCs. As PC processors grow from one or two to as many as eight cores, there is an urgent need for programmers to master concurrent programming. This book dives deep into the latest technologies available to programmers for creating professional parallel applications using C#, .NET 4, and Visual Studio 2010. The book covers task-based programming, coordination data structures, PLINQ, thread pools, the asynchronous programming model, and more. It also teaches other parallel programming techniques, such as SIMD and vectorization.

  18. Dynamic motion analysis of fetuses with central nervous system disorders by cine magnetic resonance imaging using fast imaging employing steady-state acquisition and parallel imaging: a preliminary result.

    Science.gov (United States)

    Guo, Wan-Yuo; Ono, Shigeki; Oi, Shizuo; Shen, Shu-Huei; Wong, Tai-Tong; Chung, Hsiao-Wen; Hung, Jeng-Hsiu

    2006-08-01

    The authors present a novel cine magnetic resonance (MR) imaging technique: two-dimensional (2D) fast imaging employing steady-state acquisition (FIESTA) with parallel imaging. It achieves a temporal resolution of less than half a second as well as high-spatial-resolution cine imaging free of motion artifacts for evaluating the dynamic motion of fetuses in utero. The information obtained is used to predict postnatal outcome. Twenty-five fetuses with anomalies were studied. Ultrasonography demonstrated severe abnormalities in five of the fetuses; the other 20 fetuses constituted a control group. Cine fetal MR imaging demonstrated fetal head, neck, trunk, extremity, and finger motions as well as swallowing. Imaging findings were evaluated and compared between fetuses with major central nervous system (CNS) anomalies (five cases) and those with minor CNS, non-CNS, or no anomalies (20 cases). Normal motility was observed in the latter group. For fetuses in the former group, those with abnormal motility failed to survive after delivery, whereas those with normal motility survived with functioning preserved. The power deposition of radiofrequency, expressed as the specific absorption rate (SAR), was calculated; the SAR of FIESTA was approximately 13 times lower than that of conventional fetal MR imaging using single-shot fast spin echo sequences. The following conclusions are drawn: 1) fetal motion is no longer a limitation for prenatal imaging after the implementation of parallel imaging with 2D FIESTA; 2) cine MR imaging illustrates fetal motion in utero with high clinical reliability; 3) for cases involving major CNS anomalies, cine MR imaging provides information on extremity motility in fetuses and serves as a prognostic indicator of postnatal outcome; and 4) the cine MR technique used to observe fetal activity is technically 2D and conceptually three-dimensional, providing four-dimensional information for proper and timely obstetrical and/or postnatal management.

  19. Parallel computing in genomic research: advances and applications.

    Science.gov (United States)

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process so-called "biological big data", now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists in integrating computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article brings a systematic review of the literature, surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities.

  20. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, such as CT/MR image reconstruction or real-time subtraction in DSA. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such algorithms. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on how to find appropriate algorithms. Finally, some results concerning the computation time and the usefulness of median filtering in radiographic imaging are given.
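
    A minimal sketch of the 2D median filter as a rank-order operation over each k-pixel window (plain NumPy, serial; the paper's pipeline-parallel sort is not reproduced here):

      import numpy as np

      def median_filter(img, radius=1):
          """Rank-order filter: replace each pixel by the median of its
          (2*radius+1)^2 neighborhood (edges handled by reflection)."""
          pad = np.pad(img, radius, mode="reflect")
          k = 2 * radius + 1
          # Stack every window offset, then take the median (the middle
          # order statistic) along the window axis.
          windows = [pad[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(k) for j in range(k)]
          return np.median(np.stack(windows), axis=0)

      noisy = np.zeros((5, 5)); noisy[2, 2] = 100.0  # impulse "noise"
      print(median_filter(noisy))                    # impulse removed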

  1. Interdependencies of acquisition, detection, and reconstruction techniques on the accuracy of iodine quantification in varying patient sizes employing dual-energy CT

    Energy Technology Data Exchange (ETDEWEB)

    Marin, Daniele; Pratts-Emanuelli, Jose J.; Mileto, Achille; Bashir, Mustafa R.; Nelson, Rendon C.; Boll, Daniel T. [Duke University Medical Center, Department of Radiology, Durham, NC (United States); Husarik, Daniela B. [University Hospital Zurich, Diagnostic and Interventional Radiology, Zurich (Switzerland)

    2014-10-03

    To assess the impact of patient habitus, acquisition parameters, detector efficiencies, and reconstruction techniques on the accuracy of iodine quantification using dual-source dual-energy CT (DECT). Two phantoms simulating small and large patients contained 20 iodine solutions mimicking vascular and parenchymal enhancement from saline isodensity to 400 HU and 30 iodine solutions simulating enhancement of the urinary collecting system from 400 to 2,000 HU. DECT acquisition (80/140 kVp and 100/140 kVp) was performed using two DECT systems equipped with standard and integrated electronics detector technologies. DECT raw datasets were reconstructed using filtered backprojection (FBP) and iterative reconstruction (SAFIRE I/V). Accuracy of iodine quantification was significantly higher for the small compared to the large phantom (9.2% ± 7.5 vs. 24.3% ± 26.1, P = 0.0001), for the integrated compared to the conventional detectors (14.8% ± 20.6 vs. 18.8% ± 20.4, respectively; P = 0.006), and for SAFIRE V compared to SAFIRE I and FBP reconstructions (15.2% ± 18.1 vs. 16.1% ± 17.6 and 18.9% ± 20.4, respectively; P ≤ 0.003). A significant synergism was observed when the most effective detector and reconstruction techniques were combined with habitus-adapted dual-energy pairs. In a second-generation dual-source DECT system, the accuracy of iodine quantification can be substantially improved by an optimal choice and combination of acquisition parameters, detector, and reconstruction techniques. (orig.)

  2. Refinement of Parallel and Reactive Programs

    OpenAIRE

    Back, R. J. R.

    1992-01-01

    We show how to apply the refinement calculus to the stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement calculus.

  3. Microcomputer data acquisition and control.

    Science.gov (United States)

    East, T D

    1986-01-01

    In medicine and biology there are many tasks that involve routine, well-defined procedures. These tasks are ideal candidates for computerized data acquisition and control. As the performance of microcomputers rapidly increases and cost continues to go down, the temptation to automate the laboratory becomes great. To the novice computer user the choices of hardware and software are overwhelming, and sadly most computer salespeople are not at all familiar with real-time applications. If you want to bill your patients you have hundreds of packaged systems to choose from; however, if you want to do real-time data acquisition the choices are very limited and confusing. The purpose of this chapter is to provide the novice computer user with the basics needed to set up a real-time data acquisition system with common microcomputers. The chapter covers the following issues necessary to establish a real-time data acquisition and control system: analysis of the research problem (definition of the problem; description of data and sampling requirements; cost/benefit analysis); choice of microcomputer hardware and software (choice of microprocessor and bus structure; choice of operating system; choice of layered software); digital data acquisition (parallel data transmission; serial data transmission; hardware and software available); analog data acquisition (description of amplitude and frequency characteristics of the input signals; sampling theorem; specification of the analog-to-digital converter; hardware and software available; interface to the microcomputer); microcomputer control (analog output; digital output; closed-loop control); and microcomputer data acquisition and control in the 21st century, asking what is in the future (high-speed digital medical equipment networks; medical decision making and artificial intelligence).
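
    For the analog-acquisition sizing questions raised above (sampling theorem, ADC specification), a small illustrative calculation; the bandwidth, voltage range, and resolution are made-up example figures:

      # Illustrative sizing for one analog acquisition channel.
      # Assumed example figures: a 100 Hz physiological signal,
      # a -5 V .. +5 V input range, and a 12-bit ADC.
      bandwidth_hz = 100.0
      v_range = 10.0           # volts, full scale
      adc_bits = 12

      # Sampling theorem: sample faster than twice the highest frequency;
      # a practical margin of 4-10x is common.
      nyquist = 2 * bandwidth_hz
      print(f"minimum sampling rate: {nyquist:.0f} Hz (practical: ~1 kHz)")

      # ADC resolution: one least-significant bit in volts.
      lsb = v_range / 2**adc_bits
      print(f"quantization step: {lsb * 1e3:.2f} mV")  # ~2.44 mV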

  4. Three-dimensional SPECT [single photon emission computed tomography] reconstruction of combined cone beam and parallel beam data

    International Nuclear Information System (INIS)

    Jaszczak, R.J.; Jianying Li; Huili Wang; Coleman, R.E.

    1992-01-01

    Single photon emission computed tomography (SPECT) using cone beam (CB) collimation exhibits increased sensitivity compared with acquisition geometries using parallel (P) hole collimation. However, CB collimation has a smaller field-of-view which may result in truncated projections and image artifacts. A primary objective of this work is to investigate maximum likelihood-expectation maximization (ML-EM) methods to reconstruct simultaneously acquired parallel and cone beam (P and CB) SPECT data. Simultaneous P and CB acquisition can be performed with commercially available triple camera systems by using two cone-beam collimators and a single parallel-hole collimator. The loss in overall sensitivity (relative to the use of three CB collimators) is about 15 to 20%. The authors have developed three methods to combine P and CB data using modified ML-EM algorithms. (author)
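
    In the spirit of the modified ML-EM algorithms mentioned above, a toy sketch of one way to fuse the two geometries: stack the parallel-beam and cone-beam projections into a single system and run the standard ML-EM update (the random projection matrices are placeholders, not real collimator models):

      import numpy as np

      def ml_em(A, y, n_iter=100):
          """Standard ML-EM update: x <- x / (A^T 1) * A^T (y / (A x))."""
          x = np.ones(A.shape[1])
          sens = A.T @ np.ones(A.shape[0])       # sensitivity image
          for _ in range(n_iter):
              x *= (A.T @ (y / (A @ x))) / sens
          return x

      rng = np.random.default_rng(1)
      x_true = rng.uniform(0.5, 2.0, 16)
      A_parallel = rng.uniform(0, 1, (24, 16))   # placeholder P geometry
      A_cone = rng.uniform(0, 1, (24, 16))       # placeholder CB geometry
      # Combine both acquisitions into a single ML-EM problem.
      A = np.vstack([A_parallel, A_cone])
      y = A @ x_true                             # noiseless test data
      print(np.abs(ml_em(A, y) - x_true).max())  # approaches x_true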

  5. An embedded control and acquisition system for multichannel detectors

    International Nuclear Information System (INIS)

    Gori, L.; Tommasini, R.; Cautero, G.; Giuressi, D.; Barnaba, M.; Accardo, A.; Carrato, S.; Paolucci, G.

    1999-01-01

    We present a pulse-counting multichannel data acquisition system characterized by a high number of high-speed acquisition channels and by a modular, embedded system architecture. The former leads to very fast acquisitions and makes it possible to obtain sequences of snapshots for the study of time-dependent phenomena. The latter, thanks to the integration of a CPU into the system, provides high computational capability, so that interfacing with the user computer is simple and user-friendly; moreover, the user computer is freed from control and acquisition tasks. The system has been developed for one of the beamlines of the third-generation synchrotron radiation source ELETTRA and, because of its modular architecture, can be useful in various other kinds of experiments where parallel acquisition, high data rates, and user-friendliness are required. First experimental results on a double-pass hemispherical electron analyser equipped with a 96-channel detector confirm the validity of the approach. (author)

  6. Pediatric bowel MRI - accelerated parallel imaging in a single breathhold

    International Nuclear Information System (INIS)

    Hohl, C.; Honnef, D.; Krombach, G.; Muehlenbruch, G.; Guenther, R.W.; Niendorf, T.; Ocklenburg, C.; Wenzl, T.G.

    2008-01-01

    Purpose: to compare highly accelerated parallel MRI of the bowel with conventional balanced FFE sequences in children with inflammatory bowel disease (IBD). Materials and methods: 20 children with suspected or proven IBD underwent MRI using a 1.5 T scanner after oral administration of 700-1000 ml of a mannitol solution and an additional enema. The examination started with a 4-channel receiver coil and a conventional balanced FFE sequence in axial (2.5 s/slice) and coronal (4.7 s/slice) planes. Afterwards, highly accelerated (R = 5) balanced FFE sequences in axial (0.5 s/slice) and coronal (0.9 s/slice) planes were acquired using a 32-channel receiver coil and parallel imaging (SENSE). Both receiver coils achieved a resolution of 0.88 x 0.88 mm with a slice thickness of 5 mm (coronal) and 6 mm (axial), respectively. Using the conventional imaging technique, 4-8 breath-holds were needed to cover the whole abdomen, while parallel imaging shortened the acquisition time to a single breath-hold. Two blinded radiologists performed a consensus reading of the images with regard to pathological findings, image quality, susceptibility to artifacts, and bowel distension. The results for the two coil systems were compared using the kappa (κ) coefficient; differences in susceptibility to artifacts were checked with the Wilcoxon signed rank test. Statistical significance was assumed at p = 0.05. Results: 13 of the 20 children had inflammatory bowel wall changes at the time of the examination, which were correctly diagnosed with both coil systems in 12 of 13 cases (92%). The comparison of the two coil systems showed good agreement for pathological findings (κ = 0.74-1.0) and image quality, but significantly more artifacts were observed with parallel imaging (κ = 0.47).

  7. Comparison of continuous with step and shoot acquisition in SPECT scanning

    International Nuclear Information System (INIS)

    McCarthy, L.; Cotterill, T.; Chu, J.M.G.

    1998-01-01

    Full text: Following the recent advent of continuous acquisition for SPECT scanning, we compared the commonly used step-and-shoot acquisition mode with the new continuous mode. The aim of the study was to assess any difference in resolution in the images acquired with the two modes. Sequential series of studies were performed on a SPECT phantom using both acquisition modes. Separate sets of data were collected for both high-resolution parallel-hole and ultra-high-resolution fan-beam collimators. Clinical data were collected on patients undergoing routine gallium, 99mTc-MDP bone and 99mTc-HMPAO brain studies. Separate sequential acquisitions in both modes were collected for each patient, and the sequence of collection was alternated. Reconstruction was performed using the same parameters for each acquisition. The reconstructed data were assessed visually by blinded observers to detect differences in resolution and image quality. No significant difference was detected between the studies collected with the two acquisition modes. The time saved by continuous acquisition could be an advantage.

  8. Parallelizing the spectral transform method: A comparison of alternative parallel algorithms

    International Nuclear Information System (INIS)

    Foster, I.; Worley, P.H.

    1993-01-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on the sphere and is widely used in global climate modeling. In this paper, we outline different approaches to parallelizing the method and describe experiments that we are conducting to evaluate the efficiency of these approaches on parallel computers. The experiments are conducted using a testbed code that solves the nonlinear shallow water equations on a sphere, but are designed to permit evaluation in the context of a global model. They allow us to evaluate the relative merits of the approaches as a function of problem size and number of processors. The results of this study are guiding ongoing work on PCCM2, a parallel implementation of the Community Climate Model developed at the National Center for Atmospheric Research

  9. Non-Stationary Rician Noise Estimation in Parallel MRI Using a Single Image: A Variance-Stabilizing Approach.

    Science.gov (United States)

    Pieciak, Tomasz; Aja-Fernandez, Santiago; Vegas-Sanchez-Ferrero, Gonzalo

    2017-10-01

    Parallel magnetic resonance imaging (pMRI) techniques have recently gained great importance in both the research and clinical communities, since they considerably accelerate the image acquisition process. However, the image reconstruction algorithms needed to correct the subsampling artifacts affect the nature of the noise, i.e., it becomes non-stationary. Some methods have been proposed in the literature to deal with non-stationary noise in pMRI, but their performance depends on information not usually available, such as multiple acquisitions, receiver noise matrices, sensitivity coil profiles, reconstruction coefficients, or even biophysical models of the data. Besides, some methods show an undesirable granular pattern in the estimates as a side effect of local estimation. Finally, some methods make strong assumptions that hold only in the case of high signal-to-noise ratio (SNR), which limits their usability in real scenarios. We propose a new automatic noise estimation technique for non-stationary Rician noise that overcomes the aforementioned drawbacks. Its effectiveness is due to the derivation of a variance-stabilizing transformation designed to deal with any SNR. The method was compared to the main state-of-the-art methods in synthetic and real scenarios. Numerical results confirm the robustness of the method and its better performance across the whole range of SNRs.

  10. Bayer image parallel decoding based on GPU

    Science.gov (United States)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectric tracking systems, Bayer images are traditionally decoded on the CPU. However, this is too slow when the images become large, for example, 2K×2K×16 bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) supporting the CUDA architecture. The decoding procedure can be divided into three parts: a serial part, a task-parallel part, and a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce execution time, the task-parallel part is optimized with OpenMP techniques, while the data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared-memory access optimization, coalesced memory access optimization, and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as 1D parallel IDWTs. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed; experimental results show that it achieves a 3- to 5-fold speedup compared to the serial CPU method.
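
    The data-parallel rewrite of the IDWT rests on separability: each 1D inverse along a row (or column) is independent of every other, which is exactly what a GPU grid of threads can exploit. A minimal NumPy sketch with a Haar wavelet (chosen for brevity; the paper's wavelet and CUDA kernels are not reproduced here):

      import numpy as np

      def haar_1d(x):
          """Forward 1D orthonormal Haar step: averages and differences."""
          a = (x[0::2] + x[1::2]) / np.sqrt(2)
          d = (x[0::2] - x[1::2]) / np.sqrt(2)
          return a, d

      def ihaar_1d(a, d):
          """Inverse 1D Haar step (exact inverse of haar_1d)."""
          x = np.empty(2 * len(a))
          x[0::2] = (a + d) / np.sqrt(2)
          x[1::2] = (a - d) / np.sqrt(2)
          return x

      # Row-wise round trip: every row is an independent 1D transform,
      # so all rows can be processed in parallel (one GPU thread per row).
      rng = np.random.default_rng(0)
      img = rng.uniform(size=(8, 8))
      coeffs = [haar_1d(row) for row in img]
      rec = np.stack([ihaar_1d(a, d) for a, d in coeffs])
      print(np.abs(rec - img).max())  # ~1e-16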

  11. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of existing serial programs. Once the limitations are known, the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker: Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI), working on the auto-parallelizing compiler (KAP), and was involved in th...

  12. Parallel computing in genomic research: advances and applications

    Directory of Open Access Journals (Sweden)

    Ocaña K

    2015-11-01

    Full Text Available Kary Ocaña (National Laboratory of Scientific Computing, Petrópolis, Rio de Janeiro) and Daniel de Oliveira (Institute of Computing, Fluminense Federal University, Niterói, Brazil). Abstract: Today's genomic experiments have to process so-called "biological big data", now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists in integrating computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article brings a systematic review of the literature, surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. Keywords: high-performance computing, genomic research, cloud computing, grid computing, cluster computing, parallel computing

  13. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed-memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with explicit communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model, greatly easing parallel programming while achieving high communication efficiency. The core functions of the parallel C precompiler have been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming technique.

  14. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  15. Simple smoothing technique to reduce data scattering in physics experiments

    International Nuclear Information System (INIS)

    Levesque, L

    2008-01-01

    This paper describes an experiment involving motorized motion and a method to reduce scattering in the acquired data. Jitter, or minute instrumental vibration, adds noise to a detected signal, which often renders small modulations of a graph very difficult to interpret. Here we describe a method to reduce scattering amongst data points in the signal measured by a photodetector that is motorized and scanned in a direction parallel to the plane of a rectangular slit during a computer-controlled diffraction experiment. The smoothing technique is investigated using subsets of many data points from the data acquisition. A limit on the number of data points in a subset is determined from the trend of the small measured signal, to avoid severe changes in the shape of the signal caused by the averaging procedure. This simple smoothing method can be applied using any type of spreadsheet software.
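
    In its simplest form, the subset-averaging idea reduces to a moving average whose window must stay small relative to the features of interest. A generic sketch (window size and the fringe-like test data are illustrative):

      import numpy as np

      def smooth(y, window=5):
          """Average each point with its neighbors; keep the window small
          so genuine signal features are not flattened."""
          kernel = np.ones(window) / window
          return np.convolve(y, kernel, mode="same")

      # Noisy fringe-like signal (stand-in for the photodetector scan).
      x = np.linspace(0, 4 * np.pi, 400)
      rng = np.random.default_rng(0)
      y = np.sin(x) + 0.2 * rng.standard_normal(x.size)
      y_smooth = smooth(y, window=9)
      # Residual noise drops after smoothing:
      print(np.std(y - np.sin(x)), np.std(y_smooth - np.sin(x)))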

  16. Management of Transjugular Intrahepatic Portosystemic Shunt (TIPS)-associated Refractory Hepatic Encephalopathy by Shunt Reduction Using the Parallel Technique: Outcomes of a Retrospective Case Series

    International Nuclear Information System (INIS)

    Cookson, Daniel T.; Zaman, Zubayr; Gordon-Smith, James; Ireland, Hamish M.; Hayes, Peter C.

    2011-01-01

    Purpose: To investigate the reproducibility and technical and clinical success of the parallel technique of transjugular intrahepatic portosystemic shunt (TIPS) reduction in the management of refractory hepatic encephalopathy (HE). Materials and Methods: A 10-mm-diameter self-expanding stent graft and a 5–6-mm-diameter balloon-expandable stent were placed in parallel inside the existing TIPS in 8 patients via a dual unilateral transjugular approach. Changes in portosystemic pressure gradient and HE grade were used as primary end points. Results: TIPS reduction was technically successful in all patients. Mean ± standard deviation portosystemic pressure gradient before and after shunt reduction was 4.9 ± 3.6 mmHg (range, 0–12 mmHg) and 10.5 ± 3.9 mmHg (range, 6–18 mmHg). Duration of follow-up was 137 ± 117.8 days (range, 18–326 days). Clinical improvement of HE occurred in 5 patients (62.5%) with resolution of HE in 4 patients (50%). Single episodes of recurrent gastrointestinal hemorrhage occurred in 3 patients (37.5%). These were self-limiting in 2 cases and successfully managed in 1 case by correction of coagulopathy and blood transfusion. Two of these patients (25%) died, one each of renal failure and hepatorenal failure. Conclusion: The parallel technique of TIPS reduction is reproducible and has a high technical success rate. A dual unilateral transjugular approach is advantageous when performing this procedure. The parallel technique allows repeat bidirectional TIPS adjustment and may be of significant clinical benefit in the management of refractory HE.

  17. Parallelism at Cern: real-time and off-line applications in the GP-MIMD2 project

    International Nuclear Information System (INIS)

    Calafiura, P.

    1997-01-01

    A wide range of general-purpose high-energy physics applications, ranging from Monte Carlo simulation to data acquisition, and from interactive data analysis to on-line filtering, have been ported or developed and run in parallel on the IBM SP-2 and Meiko CS-2 large multi-processor machines at CERN. The ESPRIT project GP-MIMD2 has been a catalyst for the interest in parallel computing at CERN. The project provided the 128-processor Meiko CS-2 system that is now successfully integrated in the CERN computing environment. The CERN experiment NA48 has been involved in the GP-MIMD2 project from the beginning. NA48 physicists run, as part of their day-to-day work, simulation and analysis programs parallelized using the Message Passing Interface (MPI). The CS-2 is also a vital component of the experiment's data acquisition system and will be used to calibrate, in real time, the 13000-channel liquid-krypton calorimeter. (orig.)

  18. A progress report of the switch-based data acquisition system prototype project and the application of switches from industry to high-energy physics event building

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.

    1990-01-01

    A prototype of a data acquisition system based on a new scalable, highly parallel, open-system architecture is being developed at Fermilab. The major component of the new architecture, the parallel event builder, is based on a technique used in the implementation of telecommunications switching systems: a barrel-shift switch. The architecture is scalable both in the number of input channels and in the throughput of the system. Because of its scalability, the system is well suited for low- to high-rate experiments, test beams, and all SSC detectors. The architecture is open in that as new technologies are developed and made into commercial products (e.g., arrays of processors and workstations and standard data links), these new products can be easily integrated into the system with minimal system modifications and no modifications to the basic architecture. Scalability and openness should guarantee that the data acquisition system does not become obsolete during the lifetime of the experiment. The paper first describes the architecture and the prototype project, then details the prototype's software and hardware status, including some architecture simulation studies. Suggestions for future R&D work on the new data acquisition system architecture are then described. The paper concludes by examining interconnection networks from industry and their application to event building and to other areas of high-energy physics data acquisition systems.
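
    The barrel-shift idea can be sketched schematically (a toy model, not the Fermilab hardware): in time slot t the switch connects source i to destination (i + t) mod N, so every link carries a fragment in every slot, no two sources collide, and each destination assembles one complete event per full rotation.

      # Schematic barrel-shift event builder: N sources, N destinations.
      N = 4
      events = {d: [] for d in range(N)}   # fragments collected per builder

      for t in range(N):                   # one full barrel rotation
          for src in range(N):
              dst = (src + t) % N          # switch setting for this slot
              # For fixed t, src -> dst is a permutation: no collisions.
              events[dst].append(f"frag(evt={dst}, from src {src})")

      for dst, frags in events.items():
          assert len(frags) == N           # one fragment from each source
          print(f"builder {dst}: event {dst} complete, {len(frags)} fragments")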

  19. An inherently parallel method for solving discretized diffusion equations

    International Nuclear Information System (INIS)

    Eccleston, B.R.; Palmer, T.S.

    1999-01-01

    A Monte Carlo approach to solving linear systems of equations is being investigated in the context of the solution of discretized diffusion equations. While the technique was originally devised decades ago, changes in computer architectures (namely, massively parallel machines) have driven the authors to revisit it. There are a number of potential advantages to this approach: (1) analog Monte Carlo techniques are inherently parallel, which is not necessarily true of today's more advanced linear equation solvers (multigrid, conjugate gradient, etc.); (2) some forms of the technique are adaptive, in that they allow the user to specify locations in the problem where resolution is of particular importance and to concentrate the work at those locations; and (3) these techniques permit the solution of very large systems of equations, since matrix elements need not be stored; the user can trade calculation speed for storage by computing elements of the matrix on the fly. The goal of this study is to compare the parallel performance of Monte Carlo linear solvers to that of a more traditional parallelized linear solver. The authors observe the linear speedup that they expect from the Monte Carlo algorithm, given that there is no domain decomposition to cause significant communication overhead. Overall, PETSc outperforms the Monte Carlo solver for the test problem. The PETSc parallel performance improves with larger numbers of unknowns for a given number of processors, while the parallel performance of the Monte Carlo technique is independent of the size of the matrix and the number of processes. The authors are investigating modifications to the scheme to accommodate matrix problems with positive off-diagonal elements, and they are currently coding an on-the-fly version of the algorithm to investigate the solution of very large linear systems.
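
    A minimal sketch of the underlying estimator, assuming the system has been put into the fixed-point form x = Hx + b with a convergent H (the matrix and walk parameters are illustrative): each component is estimated by independent random walks, which is why the method is inherently parallel and needs no stored matrix if H's entries can be generated on the fly.

      import numpy as np

      def mc_solve_component(H, b, i, n_walks=20000, p_stop=0.3, seed=0):
          """Estimate x_i for x = Hx + b by sampling the Neumann series
          x_i = b_i + (Hb)_i + (H^2 b)_i + ... with random walks."""
          rng = np.random.default_rng(seed)
          n = len(b)
          total = 0.0
          for _ in range(n_walks):
              state, weight, est = i, 1.0, b[i]
              while rng.random() > p_stop:       # continue the walk
                  nxt = rng.integers(n)          # uniform transition
                  # Importance weight: H entry over (transition prob * survival).
                  weight *= H[state, nxt] / ((1.0 / n) * (1 - p_stop))
                  state = nxt
                  est += weight * b[state]
              total += est
          return total / n_walks

      rng = np.random.default_rng(1)
      H = rng.uniform(-0.05, 0.05, (6, 6))       # small norm -> convergent
      b = rng.uniform(size=6)
      x_exact = np.linalg.solve(np.eye(6) - H, b)
      print(mc_solve_component(H, b, 0), x_exact[0])  # agree to MC error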

  20. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad

    2017-01-01

    This report describes my work on the parallelization of TMVA machine learning algorithms integrated into the ROOT Data Analysis Framework, carried out during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing is applied, the parallelization techniques, and the resulting changes in execution time as the number of workers varies.

  1. [Three-dimensional parallel collagen scaffold promotes tendon extracellular matrix formation].

    Science.gov (United States)

    Zheng, Zefeng; Shen, Weiliang; Le, Huihui; Dai, Xuesong; Ouyang, Hongwei; Chen, Weishan

    2016-03-01

    To investigate the effects of a three-dimensional parallel collagen scaffold on the shape, arrangement and extracellular matrix formation of tendon stem cells. The parallel collagen scaffold was fabricated by a unidirectional freezing technique, while the random collagen scaffold was fabricated by a freeze-drying technique. The effects of the two scaffolds on cell shape and extracellular matrix formation were investigated in vitro by seeding tendon stem/progenitor cells and in vivo by ectopic implantation. Parallel and random collagen scaffolds were produced successfully. The parallel collagen scaffold was more akin to tendon than the random one. Tendon stem/progenitor cells were spindle-shaped and uniformly oriented in the parallel collagen scaffold, while cells on the random collagen scaffold had a disordered orientation. Two weeks after ectopic implantation, cells had nearly the same orientation as the surrounding collagen. In the parallel collagen scaffold, cells were arranged in parallel and more spindly cells were observed; by contrast, cells in the random collagen scaffold were disordered. The parallel collagen scaffold can induce cells into a spindly, parallel arrangement and promote parallel extracellular matrix formation, while the random collagen scaffold induces a random arrangement. The results indicate that the parallel collagen scaffold is an ideal structure for promoting tendon repair.

  2. Automatic data-acquisition and communications computer network for fusion experiments

    International Nuclear Information System (INIS)

    Kemper, C.O.

    1981-01-01

    A network of more than twenty computers serves the data acquisition, archiving, and analysis requirements of the ISX, EBT, and beam-line test facilities at the Fusion Division of Oak Ridge National Laboratory. The network includes PDP-8, PDP-12, PDP-11, PDP-10, and Interdata 8-32 processors, and is unified by a variety of high-speed serial and parallel communications channels. While some processors are dedicated to experimental data acquisition, and others are dedicated to later analysis and theoretical work, many processors perform a combination of acquisition, real-time analysis and display, and archiving and communications functions. A network software system has been developed which runs in each processor and automatically transports data files from the point of acquisition to the point or points of analysis, display, and storage, providing conversion and formatting functions as required

  3. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  4. Disentangling value creation mechanism in cross-border acquisitions

    DEFF Research Database (Denmark)

    Wang, Daojuan; Sørensen, Olav Jull; Moini, Hamid

    2016-01-01

    This study investigates the value creation mechanism in cross-border acquisitions (CBAs) by employing a structural equation modeling technique and surveying 103 CBAs performed by Nordic firms. The results reveal that resource possession, resource picking, and resource utilization are three important strategic dimensions for realizing synergy and creating value in CBAs. Furthermore, mediation analysis shows that the two acquisition-based dynamic capabilities (value identification and resource reconfiguration) act as important mediators in how the joining firms' resource base impacts acquisition … in this study, is an important step forward in merger and acquisition (M&A) research. Moreover, numerous research findings offer tactical implications for international acquirers.

  5. Data acquisition system for a proton imaging apparatus

    CERN Document Server

    Sipala, V; Bruzzi, M; Bucciolini, M; Candiano, G; Capineri, L; Cirrone, G A P; Civinini, C; Cuttone, G; Lo Presti, D; Marrazzo, L; Mazzaglia, E; Menichelli, D; Randazzo, N; Talamonti, C; Tesi, M; Valentini, S

    2009-01-01

    New developments in the proton-therapy field for cancer treatments led Italian physics researchers to build a proton imaging apparatus consisting of a silicon microstrip tracker, to reconstruct the proton trajectories, and a calorimeter, to measure their residual energy. For clinical requirements, the detectors used and the data acquisition system should be able to sustain a proton rate of about 1 MHz. The tracker read-out, using ASICs developed by the collaboration, acquires the detector signals and sends the data in parallel to an FPGA. The YAG:Ce calorimeter also generates the global trigger. The data acquisition system and the results obtained in the calibration phase are presented and discussed.

  6. Computed tomography: acquisition process, technology and current state

    Directory of Open Access Journals (Sweden)

    Óscar Javier Espitia Mendoza

    2016-02-01

    Computed tomography is a noninvasive scan technique widely applied in areas such as medicine, industry, and geology. This technique allows the three-dimensional reconstruction of the internal structure of an object which is illuminated by an X-ray source. The reconstruction is formed from two-dimensional cross-sectional images of the object. Each cross-section is obtained from measurements of physical phenomena, such as attenuation, dispersion, and diffraction of X-rays, resulting from their interaction with the object. In general, measurement acquisition is performed with methods based on one of these phenomena and according to various architectures classified in generations. Furthermore, in response to the need to simulate acquisition systems for CT, software dedicated to this task has been developed. The objective of this research is to determine the current state of CT techniques; to this end, a review of methods, the different architectures used for acquisition, and some of their applications is presented. Additionally, simulation results are presented. The main contributions of this work are the detailed description of acquisition methods and the presentation of possible trends of the technique.
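
    As a toy illustration of the acquisition process described above, a parallel-beam sinogram can be simulated as sums of an attenuation map along rotated directions; the phantom, grid size, and angle count below are invented, and real scanners follow the fan/cone geometries and generations the review covers.

```python
# Minimal parallel-beam CT acquisition sketch: each projection is the
# set of line integrals (column sums) of the attenuation map at one angle.
import numpy as np
from scipy.ndimage import rotate

phantom = np.zeros((64, 64))
phantom[20:44, 20:44] = 1.0   # a square "object" with attenuation 1
phantom[28:36, 28:36] = 2.0   # a denser inclusion

angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = np.stack([
    rotate(phantom, a, reshape=False, order=1).sum(axis=0)
    for a in angles
])
print(sinogram.shape)  # (60, 64): one row of detector readings per angle
```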

  7. MR angiography of the carotid arteries in 3D TOF technique with sagittal "double-slab" acquisition using a new head-neck coil

    International Nuclear Information System (INIS)

    Link, J.; Mueller-Huelsbeck, S.; Heller, M.

    1996-01-01

    Purpose: The aim of the study was to assess the value of MR angiography (MRA) in sagittal technique compared to DSA in the evaluation of carotid artery stenosis. Methods: 80 carotid arteries in 40 symptomatic patients were prospectively studied with DSA and MRA. MRA was carried out by means of a 3D time-of-flight technique with a FISP sequence (TE 6 ms/TR 80 ms, flip angle 25°, FOV 240x210 mm, matrix 157x256, in-plane resolution 1.34x0.94 mm, partition thickness 1.32 mm, slab thickness 45 mm, acquisition time 7 min) using a new head-neck coil. Data acquisition was performed in sagittal orientation with the 'double-slab' technique. Imaging quality of the extracranial carotid arteries was assessed, as was the correctness of stenosis quantification. Results: Imaging quality was good at the origin of the carotid arteries in 65%, in the bifurcation region in 98% and near the skull base in 81%. DSA and MRA agreed for 96% of the normal arteries (24/25), 90% of the severe stenoses (28/31) and 100% of the occluded arteries (9/9). Conclusion: MRA in sagittal 'double-slab' technique is a noninvasive technique that allows normal arteries and candidates for surgery to be identified with a high degree of certainty. (orig.) [de]

  8. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop Parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most of this book.
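
    In the spirit of the book (though not an excerpt from it), the standard-library multiprocessing module already covers the simple case the blurb describes: parallelizing an independent map across available cores. The work function and sizes are invented.

```python
from multiprocessing import Pool

def slow_square(x):
    total = 0
    for _ in range(100_000):  # stand-in for real per-item work
        total += x * x
    return total

if __name__ == "__main__":
    with Pool(processes=4) as pool:          # 4 worker processes
        results = pool.map(slow_square, range(16))
    print(results[:4])
```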

  9. Pattern recognition with parallel associative memory

    Science.gov (United States)

    Toth, Charles K.; Schenk, Toni

    1990-01-01

    An examination is conducted of the feasibility of searching for targets in aerial photographs by means of a parallel associative memory (PAM) based on the nearest-neighbor algorithm; the Hamming distance is used as the measure of closeness to discriminate patterns. Attention has been given to targets typically used for ground-control points. The method developed sorts out approximate target positions, where precise localizations are needed, in the course of the data-acquisition process. The majority of control points in different images were correctly identified.
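
    The matching step reduces to a Hamming-distance nearest-neighbour search; a minimal sketch with made-up bit patterns (not aerial-photo data) follows.

```python
import numpy as np

def hamming(a, b):
    """Number of positions where two binary patterns differ."""
    return int(np.count_nonzero(a != b))

template = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
candidates = {
    "patch_a": np.array([1, 0, 1, 1, 0, 1, 1, 0], dtype=np.uint8),
    "patch_b": np.array([0, 1, 0, 0, 1, 1, 0, 1], dtype=np.uint8),
}
best = min(candidates, key=lambda k: hamming(template, candidates[k]))
print(best, hamming(template, candidates[best]))  # patch_a 1
```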

  10. Hybrid parallel execution model for logic-based specification languages

    CERN Document Server

    Tsai, Jeffrey J P

    2001-01-01

    Parallel processing is a very important technique for improving the performance of various software development and maintenance activities. The purpose of this book is to introduce important techniques for the parallel execution of high-level specifications of software systems. These techniques are very useful for the construction, analysis, and transformation of reliable large-scale and complex software systems. Contents: Current Approaches; Overview of the New Approach; FRORL Requirements Specification Language and Its Decomposition; Rewriting and Data Dependency, Control Flow Analysis of a Lo

  11. Parallel 3-D method of characteristics in MPACT

    International Nuclear Information System (INIS)

    Kochunas, B.; Downar, T. J.; Liu, Z.

    2013-01-01

    A new parallel 3-D MOC kernel has been developed and implemented in MPACT which makes use of the modular ray tracing technique to reduce computational requirements and to facilitate parallel decomposition. The parallel model makes use of both distributed and shared memory parallelism, which are implemented with the MPI and OpenMP standards, respectively. The kernel is capable of parallel decomposition of problems in space, angle, and by characteristic rays on up to O(10⁴) processors. Initial verification of the parallel 3-D MOC kernel was performed using the Takeda 3-D transport benchmark problems. The eigenvalues computed by MPACT are within the statistical uncertainty of the benchmark reference and agree well with the averages of other participants. The MPACT k_eff differs from the benchmark results for the rodded and un-rodded cases by 11 and -40 pcm, respectively. The calculations were performed for various numbers of processors and parallel decompositions up to 15625 processors, all producing the same result at convergence. The parallel efficiency of the worst case was 60%, while very good efficiency (>95%) was observed for cases using 500 processors. The overall run time for the 500-processor case was 231 seconds and 19 seconds for the case with 15625 processors. Ongoing work is focused on developing theoretical performance models and the implementation of acceleration techniques to minimize the number of iterations to converge. (authors)

  12. Combining Compile-Time and Run-Time Parallelization

    Directory of Open Access Journals (Sweden)

    Sungdo Moon

    1999-01-01

    This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high-quality compile-time analysis with low-cost run-time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler's automatic parallelization system. We present results of measurements on programs from two benchmark suites - SPECFP95 and NAS sample benchmarks - which identify inherently parallel loops in these programs that are missed by the compiler. We characterize the remaining parallelization opportunities and find that most of the loops require run-time testing, analysis of control flow, or some combination of the two. We present a new compile-time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed not only to improve the results of compile-time parallelization, but also to produce low-cost, directed run-time tests that allow the system to defer binding of parallelization until run time when safety cannot be proven statically. We call this approach predicated array data-flow analysis. We augment array data-flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data-flow values. Predicated array data-flow analysis allows the compiler to derive "optimistic" data-flow values guarded by predicates; these predicates can be used to derive a run-time test guaranteeing the safety of parallelization.
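
    The deferred-binding idea can be sketched as a run-time predicate guarding a parallel loop. The safety test below (distinct write indices) is a simplified stand-in for the compiler-derived predicates, and all names are illustrative.

```python
from concurrent.futures import ProcessPoolExecutor

def body(args):
    i, x = args
    return i, x * x  # independent iteration work

def run_loop(indices, data):
    # Run-time safety predicate: no two iterations write the same element.
    safe = len(set(indices)) == len(indices)
    pairs = [(i, data[i]) for i in indices]
    if safe:
        with ProcessPoolExecutor() as ex:
            updates = list(ex.map(body, pairs))
    else:
        updates = [body(p) for p in pairs]  # serial fallback keeps order
    out = list(data)
    for i, v in updates:
        out[i] = v
    return out

if __name__ == "__main__":
    print(run_loop([0, 2, 3], [1, 2, 3, 4]))  # [1, 2, 9, 16]
```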

  13. An original approach to data acquisition CHADAC

    CERN Document Server

    CERN. Geneva

    1981-01-01

    Many labs try to boost existing data acquisition systems by inserting high performance intelligent devices in the important nodes of the system's structure. This strategy finds its limits in the system's architecture. The CHADAC project proposes a simple and efficient solution to this problem, using a multiprocessor modular architecture. CHADAC's main features are: parallel acquisition of data (CHADAC is fast: it dedicates one processor per branch, and each processor can read and store one 16-bit word in 800 ns); an original structure (each processor can work in its own private memory, in its own shared memory (double access) and in the shared memory of any other processor; simple and fast communications between processors are also provided by local DMAs); and flexibility (each processor is autonomous and may be used as an independent acquisition system for a branch by connecting local peripherals to it; the addition of fast trigger logic is possible). By its architecture and performance, CHADAC is designed to provide good support for local intelligent devices and transfer operators developed elsewhere, providing a way to implement systems well fitted to various types of data acquisition.

  14. Dynamic surface-pressure instrumentation for rods in parallel flow

    International Nuclear Information System (INIS)

    Mulcahy, T.M.; Lawrence, W.

    1979-01-01

    Methods employed and experience gained in measuring random fluid boundary layer pressures on the surface of a small diameter cylindrical rod subject to dense, nonhomogeneous, turbulent, parallel flow in a relatively noise-contaminated flow loop are described. Emphasis is placed on identification of instrumentation problems; description of transducer construction, mounting, and waterproofing; and the pretest calibration required to achieve instrumentation capable of reliable data acquisition

  15. Massively parallel whole genome amplification for single-cell sequencing using droplet microfluidics.

    Science.gov (United States)

    Hosokawa, Masahito; Nishikawa, Yohei; Kogawa, Masato; Takeyama, Haruko

    2017-07-12

    Massively parallel single-cell genome sequencing is required to further understand genetic diversities in complex biological systems. Whole genome amplification (WGA) is the first step for single-cell sequencing, but its throughput and accuracy are insufficient in conventional reaction platforms. Here, we introduce single droplet multiple displacement amplification (sd-MDA), a method that enables massively parallel amplification of single-cell genomes while maintaining sequence accuracy and specificity. Tens of thousands of single cells are compartmentalized in millions of picoliter droplets and then subjected to lysis and WGA by passive droplet fusion in microfluidic channels. Because single cells are isolated in compartments, their genomes are amplified to saturation without contamination. This enables the high-throughput acquisition of contamination-free and cell-specific sequence reads from single cells (21,000 single cells/h), enhancing the sequence data quality compared to conventional methods. This method allowed WGA of both single bacterial cells and human cancer cells. The obtained sequencing coverage rivals that of conventional techniques, with superior sequence quality. In addition, we also demonstrate de novo assembly of uncultured soil bacteria and obtain draft genomes from single-cell sequencing. sd-MDA is promising for flexible and scalable use in single-cell sequencing.

  16. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable, architecture-independent software for scientific computation based on our experience with the equational programming language (EPL). Our approach is based on program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  17. Expanded Understanding of IS/IT Related Challenges in Mergers and Acquisitions

    DEFF Research Database (Denmark)

    Toppenberg, Gustav

    2015-01-01

    Organizational Mergers and Acquisitions (M&As) occur at an increasingly frequent pace in today's business life. Paralleling this development, M&As have increasingly attracted attention from the Information Systems (IS) domain. This emerging line of research has started from an understanding…

  18. An original approach to data acquisition: CHADAC

    International Nuclear Information System (INIS)

    Huppert, M.; Nayman, P.; Rivoal, M.

    1981-01-01

    Many labs try to boost existing data acquisition systems by inserting high performance intelligent devices in the important nodes of the system's structure. This strategy finds its limits in the system's architecture. The CHADAC project proposes a simple and efficient solution to this problem, using a multiprocessor modular architecture. CHADAC main features are: a) Parallel acquisition of data: CHADAC is fast; it dedicates one processor per branch; each processor can read and store one 16-bit word in 800 ns. b) Original structure: each processor can work in its own private memory, in its own shared memory (double access) and in the shared memory of any other processor (this feature being particularly useful to avoid wasteful data transfers). Simple and fast communications between processors are also provided by local DMAs. c) Flexibility: each processor is autonomous and may be used as an independent acquisition system for a branch, by connecting local peripherals to it. Adjunction of fast trigger logic is possible. By its architecture and performance, CHADAC is designed to provide good support for local intelligent devices and transfer operators developed elsewhere, providing a way to implement systems well fitted to various types of data acquisition. (orig.)

  19. Programming massively parallel processors a hands-on approach

    CERN Document Server

    Kirk, David B

    2010-01-01

    Programming Massively Parallel Processors discusses basic concepts about parallel programming and GPU architecture. "Massively parallel" refers to the use of a large number of processors to perform a set of computations in a coordinated parallel way. The book details various techniques for constructing parallel programs. It also discusses the development process, performance level, floating-point format, parallel patterns, and dynamic parallelism. The book serves as a teaching guide where parallel programming is the main topic of the course. It builds on the basics of C programming for CUDA, a parallel programming environment that is supported on NVIDIA GPUs. Composed of 12 chapters, the book begins with basic information about the GPU as a parallel computer source. It also explains the main concepts of CUDA, data parallelism, and the importance of memory access efficiency using CUDA. The target audience of the book is graduate and undergraduate students from all science and engineering disciplines who ...

  20. Data acquisition for the D0 experiment

    International Nuclear Information System (INIS)

    Cutts, D.; Hoftun, J.S.; Johnson, C.R.; Zeller, R.T.; Trojak, T.; Van Berg, R.

    1985-01-01

    We describe the data acquisition system for the D0 experiment at Fermilab, focusing primarily on the second level, which is based on a large parallel array of MicroVAX-IIs. In this design, data flows from the detector readout crates at a maximum rate of 320 Mbytes/sec into dual-port memories associated with one selected processor, in which a VAXELN-based program performs the filter analysis of a complete event

  1. The UA1 VME data acquisition system

    International Nuclear Information System (INIS)

    Cittolin, S.

    1988-01-01

    The data acquisition system of a large-scale experiment such as UA1, running at the CERN proton-antiproton collider, has to cope with very high data rates and to perform sophisticated triggering and filtering in order to analyze interesting events. These functions are performed by a variety of programmable units organized in a parallel multiprocessor system whose central architecture is based on the industry-standard VME/VMXbus. (orig.)

  2. Application of particle image velocimetry measurement techniques to study turbulence characteristics of oscillatory flows around parallel-plate structures in thermoacoustic devices

    International Nuclear Information System (INIS)

    Mao, Xiaoan; Jaworski, Artur J

    2010-01-01

    This paper describes the development of the experimental setup and measurement methodologies to study the physics of oscillatory flows in the vicinity of parallel-plate stacks by using the particle image velocimetry (PIV) techniques. Parallel-plate configurations often appear as internal structures in thermoacoustic devices and are responsible for the hydrodynamic energy transfer processes. The flow around selected stack configurations is induced by a standing acoustic wave, whose amplitude can be varied. Depending on the direction of the flow within the acoustic cycle, relative to the stack, it can be treated as an entrance flow or a wake flow. The insight into the flow behaviour, its kinematics, dynamics and scales of turbulence, is obtained using the classical Reynolds decomposition to separate the instantaneous velocity fields into ensemble-averaged mean velocity fields and fluctuations in a set of predetermined phases within an oscillation cycle. The mean velocity field and the fluctuation intensity distributions are investigated over the acoustic oscillation cycle. The velocity fluctuation is further divided into large- and small-scale fluctuations by using fast Fourier transform (FFT) spatial filtering techniques
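
    The phase-resolved Reynolds decomposition described above amounts to an ensemble average over cycles at each predetermined phase, with the remainder taken as the fluctuation. A minimal numpy sketch with synthetic data (the array shape is an assumption) follows.

```python
import numpy as np

rng = np.random.default_rng(0)
# 50 acoustic cycles, 20 phase points per cycle, 32x32 PIV velocity grid.
u = rng.normal(size=(50, 20, 32, 32))

u_mean = u.mean(axis=0)                            # ensemble mean per phase
u_fluct = u - u_mean[None, ...]                    # instantaneous fluctuation
intensity = np.sqrt((u_fluct ** 2).mean(axis=0))   # rms fluctuation per phase

print(u_mean.shape, intensity.shape)               # (20, 32, 32) twice
```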

  3. User-friendly parallelization of GAUDI applications with Python

    International Nuclear Information System (INIS)

    Mato, Pere; Smith, Eoin

    2010-01-01

    GAUDI is a software framework in C++ used to build event data processing applications using a set of standard components with well-defined interfaces. Simulation, high-level trigger, reconstruction, and analysis programs used by several experiments are developed using GAUDI. These applications can be configured and driven by simple Python scripts. Given the fact that a considerable amount of existing software has been developed using a serial methodology, and has existed in some cases for many years, implementation of parallelisation techniques at the framework level may offer a way of exploiting current multi-core technologies to maximize performance and reduce latencies without re-writing thousands/millions of lines of code. In the solution we have developed, the parallelization techniques are introduced in the high-level Python scripts which configure and drive the applications, such that the core C++ application code requires no modification, and end users need make only minimal changes to their scripts. The developed solution leverages existing generic Python modules that support parallel processing. Naturally, the parallel version of a given program should produce results consistent with its serial execution. The evaluation of several prototypes incorporating various parallelization techniques is presented and discussed.

  4. User-friendly parallelization of GAUDI applications with Python

    Energy Technology Data Exchange (ETDEWEB)

    Mato, Pere; Smith, Eoin, E-mail: pere.mato@cern.c [PH Department, CERN, 1211 Geneva 23 (Switzerland)

    2010-04-01

    GAUDI is a software framework in C++ used to build event data processing applications using a set of standard components with well-defined interfaces. Simulation, high-level trigger, reconstruction, and analysis programs used by several experiments are developed using GAUDI. These applications can be configured and driven by simple Python scripts. Given the fact that a considerable amount of existing software has been developed using a serial methodology, and has existed in some cases for many years, implementation of parallelisation techniques at the framework level may offer a way of exploiting current multi-core technologies to maximize performance and reduce latencies without re-writing thousands/millions of lines of code. In the solution we have developed, the parallelization techniques are introduced in the high-level Python scripts which configure and drive the applications, such that the core C++ application code requires no modification, and end users need make only minimal changes to their scripts. The developed solution leverages existing generic Python modules that support parallel processing. Naturally, the parallel version of a given program should produce results consistent with its serial execution. The evaluation of several prototypes incorporating various parallelization techniques is presented and discussed.
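
    A schematic of the script-level pattern both records describe, using the generic multiprocessing module; process_events and the event counts are stand-ins, not the real GAUDI/GaudiPython API.

```python
from multiprocessing import Pool

def process_events(event_range):
    """Stand-in for 'configure the application and run over one slice'."""
    first, last = event_range
    return sum(e * e for e in range(first, last))  # fake per-slice result

if __name__ == "__main__":
    n_events, n_workers = 10_000, 4
    step = n_events // n_workers
    slices = [(i, i + step) for i in range(0, n_events, step)]
    with Pool(n_workers) as pool:
        partial = pool.map(process_events, slices)
    print(sum(partial))  # merged result, identical to a serial run
```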

  5. The Performance of an Object-Oriented, Parallel Operating System

    Directory of Open Access Journals (Sweden)

    David R. Kohr, Jr.

    1994-01-01

    The nascent and rapidly evolving state of parallel systems often leaves parallel application developers at the mercy of inefficient, inflexible operating system software. Given the relatively primitive state of parallel systems software, maximizing the performance of parallel applications not only requires judicious tuning of the application software, but occasionally the replacement of specific system software modules with others that can more readily respond to the imposed pattern of resource demands. To assess the feasibility of application and performance tuning via malleable system software, and to understand the performance penalties of detailed operating system performance data capture, we describe a set of performance instrumentation techniques for parallel, object-oriented operating systems and a set of performance experiments with Choices, an experimental, object-oriented operating system designed for use with parallel systems. These performance experiments show that (a) the performance overhead for operating system data capture is modest, (b) the penalty for malleable, object-oriented operating systems is negligible, but (c) techniques are needed to strictly enforce adherence of implementation to design if operating system modules are to be replaced.

  6. Estimation of organ-absorbed radiation doses during 64-detector CT coronary angiography using different acquisition techniques and heart rates: a phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Matsubara, Kosuke; Koshida, Kichiro; Kawashima, Hiroko (Dept. of Quantum Medical Technology, Faculty of Health Sciences, Kanazawa Univ., Kanazawa (Japan)), email: matsuk@mhs.mp.kanazawa-u.ac.jp; Noto, Kimiya; Takata, Tadanori; Yamamoto, Tomoyuki (Dept. of Radiological Technology, Kanazawa Univ. Hospital, Kanazawa (Japan)); Shimono, Tetsunori (Dept. of Radiology, Hoshigaoka Koseinenkin Hospital, Hirakata (Japan)); Matsui, Osamu (Dept. of Radiology, Faculty of Medicine, Kanazawa Univ., Kanazawa (Japan))

    2011-07-15

    Background: Though appropriate image acquisition parameters allow an effective dose below 1 mSv for CT coronary angiography (CTCA) performed with the latest dual-source CT scanners, a single-source 64-detector CT procedure results in a significant radiation dose due to its technical limitations. Therefore, estimating the radiation doses absorbed by organs during 64-detector CTCA is important. Purpose: To estimate the radiation doses absorbed by organs located in the chest region during 64-detector CTCA using different acquisition techniques and heart rates. Material and Methods: Absorbed doses for the breast, heart, lung, red bone marrow, thymus, and skin were evaluated using an anthropomorphic phantom and radiophotoluminescence glass dosimeters (RPLDs). Electrocardiogram (ECG)-gated helical and ECG-triggered non-helical acquisitions were performed by applying a simulated heart rate of 60 beats per minute (bpm), and ECG-gated helical acquisitions using ECG modulation (ECGM) of the tube current were performed by applying simulated heart rates of 40, 60, and 90 bpm, after placing RPLDs at the anatomic location of each organ. The absorbed dose for each organ was calculated by multiplying the calibrated mean dose values of the RPLDs with the mass energy coefficient ratio. Results: For all acquisitions, the highest absorbed dose was observed for the heart. When the helical and non-helical acquisitions were performed at a simulated heart rate of 60 bpm, the absorbed doses for the heart were 215.5, 202.2, and 66.8 mGy for helical, helical with ECGM, and non-helical acquisitions, respectively. When the helical acquisitions using ECGM were performed at simulated heart rates of 40, 60, and 90 bpm, the absorbed doses for the heart were 178.6, 139.1, and 159.3 mGy, respectively. Conclusion: ECG-triggered non-helical acquisition is recommended to reduce the radiation dose. Also, controlling the patient's heart rate appropriately during ECG-gated helical acquisition with

  7. Implementations of BLAST for parallel computers.

    Science.gov (United States)

    Jülich, A

    1995-02-01

    The BLAST sequence comparison programs have been ported to a variety of parallel computers: the shared-memory machine Cray Y-MP 8/864 and the distributed-memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited for parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799-residue protein query sequence and the protein database PIR were used.
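
    The moderate-processor-count parallelization maps naturally onto database partitioning: each worker scores the query against one partition and the best hits are merged. The sketch below uses a naive stand-in score and a fake database, not the BLAST statistics.

```python
from multiprocessing import Pool

QUERY = "MKTAYIAKQR"

def best_in_chunk(chunk):
    def score(q, s):  # crude similarity: count of matched positions
        return sum(a == b for a, b in zip(q, s))
    return max(((score(QUERY, seq), name) for name, seq in chunk),
               default=(0, None))

if __name__ == "__main__":
    db = [("seq%d" % i, "MKTAY" + "AAAAA" * (i % 3)) for i in range(1000)]
    chunks = [db[i::4] for i in range(4)]  # 4 roughly equal partitions
    with Pool(4) as pool:
        print(max(pool.map(best_in_chunk, chunks)))  # global best hit
```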

  8. Non-Cartesian Parallel Imaging Reconstruction of Undersampled IDEAL Spiral 13C CSI Data

    DEFF Research Database (Denmark)

    Hansen, Rie Beck; Hanson, Lars G.; Ardenkjær-Larsen, Jan Henrik

    … scan times based on spatial information inherent to each coil element. In this work, we explored the combination of non-Cartesian parallel imaging reconstruction and spatially undersampled IDEAL spiral CSI1 acquisition for efficient encoding of multiple chemical shifts within a large FOV with high…

  9. Multi spectral scaling data acquisition system

    International Nuclear Information System (INIS)

    Behere, Anita; Patil, R.D.; Ghodgaonkar, M.D.; Gopalakrishnan, K.R.

    1997-01-01

    In nuclear spectroscopy applications, it is often desired to acquire data at a high rate with high resolution. With the availability of low-cost computers, it is possible to build a powerful data acquisition system with minimal hardware and software development by designing a PC plug-in acquisition board. But when the PC processor is used for data acquisition, the PC cannot be used as a multitasking node. Keeping this in view, PC plug-in acquisition boards with an on-board processor find tremendous application. A transputer-based data acquisition board has been designed which can be configured as a high-count-rate pulse-height MCA or as a multi spectral scaler. Multi Spectral Scaling (MSS) is a new technique in which multiple spectra are acquired in small time frames and are then analyzed. This paper describes the details of this multi spectral scaling data acquisition system. 2 figs
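
    Multi spectral scaling can be pictured as time-framed histogramming of (timestamp, pulse-height) events; the frame length, channel count, and synthetic event stream below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1.0, 100_000))   # event times, seconds
ph = rng.integers(0, 1024, t.size)            # pulse-height channel per event

frame_len = 0.010                             # 10 ms time frames
frame = (t // frame_len).astype(int)
n_frames = int(frame.max()) + 1

spectra = np.zeros((n_frames, 1024), dtype=np.int64)
np.add.at(spectra, (frame, ph), 1)            # one spectrum per time frame
print(spectra.shape, spectra.sum())           # e.g. (100, 1024) 100000
```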

  10. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
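
    One standard symmetrization, shown here as a plausible reading of the transformation the dissertation describes (an assumption, not a quotation), uses form-factor reciprocity:

```latex
% Radiosity system and one standard symmetrization.
\[
  B_i \;=\; E_i + \rho_i \sum_j F_{ij} B_j
  \quad\Longrightarrow\quad
  \frac{A_i}{\rho_i}\, B_i \;-\; \sum_j A_i F_{ij}\, B_j
  \;=\; \frac{A_i}{\rho_i}\, E_i .
\]
% By reciprocity, A_i F_{ij} = A_j F_{ji}, so the coefficient matrix
% M_{ij} = (A_i/\rho_i)\,\delta_{ij} - A_i F_{ij} is symmetric, which
% enables solvers for symmetric systems (e.g. conjugate gradients).
```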

  11. A simple low cost speed log interface for oceanographic data acquisition system

    Digital Repository Service at National Institute of Oceanography (India)

    Khedekar, V.D.; Phadte, G.M.

    A speed log interface is designed with parallel Binary Coded Decimal output. This design was mainly required for the oceanographic data acquisition system as an interface between the speed log and the computer. However, this can also be used as a...

  12. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Science.gov (United States)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, in particular, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm consisting of the minimization of a cost function, achieved by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, our algorithm is a parallel Jacobi-class method with alternated minimizations. This strategy is known as the chessboard type, where red pixels can be updated in parallel in the same iteration since they are independent. Similarly, black pixels can be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different parallel multicore architectures such as CPU multicore, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, we obtain superior performance of our parallel algorithm when compared with the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
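
    The chessboard scheme can be sketched as alternating parity sweeps; the quadratic toy cost below (a Poisson-like smoothing problem) stands in for the accumulation-of-residual-maps cost function, which the abstract does not spell out.

```python
import numpy as np

def redblack_sweep(phi, rhs, parity):
    """Jacobi-style update of one pixel colour; same-colour pixels are
    mutually independent, so this whole step could run in parallel."""
    ii, jj = np.indices(phi.shape)
    mask = (ii + jj) % 2 == parity
    mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False  # fixed edges
    nb = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
          + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    new = phi.copy()
    new[mask] = 0.25 * (nb[mask] - rhs[mask])  # local minimizer per pixel
    return new

phi = np.zeros((64, 64))
rhs = np.random.default_rng(2).normal(size=(64, 64))
for it in range(100):
    phi = redblack_sweep(phi, rhs, it % 2)  # red sweep, then black, alternating
print(float(np.abs(phi).mean()))
```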

  13. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to the implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at the implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for the implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after the raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  14. Continued Data Acquisition Development

    Energy Technology Data Exchange (ETDEWEB)

    Schwellenbach, David [National Security Technologies, LLC. (NSTec), Mercury, NV (United States)

    2017-11-27

    This task focused on improving techniques for integrating data acquisition of secondary particles correlated in time with detected cosmic-ray muons. Scintillation detectors with Pulse Shape Discrimination (PSD) capability show the most promise as a detector technology, based on work in FY13. Typically, PSD parameters are determined prior to an experiment and the results are based on these parameters. By saving data in list mode, including the fully digitized waveform, any experiment can effectively be replayed to adjust PSD and other parameters for the best data capture. List mode requires time synchronization of two independent data acquisition (DAQ) systems: the muon tracker and the particle detector system. Techniques to synchronize these systems were studied. Two basic techniques were identified: real-time mode and sequential mode. Real-time mode is the preferred approach but has proven to be a significant challenge, since two FPGA systems with different clocking parameters must be synchronized. Sequential processing is expected to work with virtually any DAQ but requires more post-processing to extract the data.
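
    Offline, sequential-mode synchronization reduces to aligning two timestamped streams and pairing records within a coincidence window; the stream contents, clock offset, and window below are all invented values.

```python
import bisect

muons = [(10_000, "trk0"), (25_400, "trk1"), (51_200, "trk2")]  # (ns, track id)
hits = [(10_150, 0.8), (25_950, 0.3), (80_000, 1.1)]            # (ns, PSD value)

clock_offset = -100   # measured lead of the detector clock, ns
window = 1_000        # coincidence window, ns

hit_times = [t + clock_offset for t, _ in hits]
for t_mu, trk in muons:
    k = bisect.bisect_left(hit_times, t_mu)   # nearest hits by timestamp
    for j in (k - 1, k):
        if 0 <= j < len(hits) and abs(hit_times[j] - t_mu) <= window:
            print(trk, "<->", hits[j])        # correlated muon/hit pair
```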

  15. High speed data acquisition

    International Nuclear Information System (INIS)

    Cooper, P.S.

    1997-07-01

    A general introduction to high speed data acquisition system techniques in modern particle physics experiments is given. Examples are drawn from the SELEX (E781) high statistics charmed baryon production and decay experiment now taking data at Fermilab

  16. Screening crops for efficient phosphorus acquisition in a low phosphorus soil using radiotracer technique

    International Nuclear Information System (INIS)

    Meena, S.; Malarvizhi, P.; Rajeswari, R.

    2017-01-01

    Deficiency of phosphorus (P) is the major limitation to agricultural production. Identification of cultivars with a greater capacity to grow in soils having low P availability (phosphorus efficiency) will help in managing P in a sustainable way. A greenhouse experiment with maize (CO 6) and cotton (MCU 13) as test crops at four levels of phosphorus (0, 3.75, 7.50 and 15 mg P kg⁻¹ soil) was conducted in a P-deficient soil (7.2 kg ha⁻¹) to study the phosphorus acquisition characteristics and to select the efficient crop using the 32P radiotracer technique. Carrier-free 32P, obtained as orthophosphoric acid in dilute hydrochloric acid medium from the Board of Radiation and Isotope Technology, Mumbai, was used for labeling the soil at 3200 kBq pot⁻¹. After 60 days the crops were harvested and the radioactivity was measured in the plant samples using a liquid scintillation counter (PerkinElmer Tri-Carb 2810 TR). Different values of specific radioactivity and isotopically exchangeable phosphorus for maize and cotton indicated that chemically different pools of soil P were utilized, with maize accessing a larger pool than cotton. Since maize recorded high Phosphorus Use Efficiency and Phosphorus Efficiency values and a low Phosphorus Stress Factor, it is the better choice for P-deficient soils. The higher Phosphorus Acquisition Efficiency of maize (59%) than cotton (48%) can be related to the ability of maize to take up P from insoluble inorganic P forms. (author)

  17. Effects of the frame acquisition rate on the sensitivity of gastro-oesophageal reflux scintigraphy

    Science.gov (United States)

    Codreanu, I; Chamroonrat, W; Edwards, K

    2013-01-01

    Objective: To compare the sensitivity of gastro-oesophageal reflux (GOR) scintigraphy at 5-s and 60-s frame acquisition rates. Methods: GOR scintigraphy studies of 50 subjects (1 month to 20 years old; mean, 42 months) were analysed concurrently using 5-s and 60-s acquisition frames. Reflux episodes were graded as low if activity was detected in the distal half of the oesophagus and high if activity was detected in its upper half or in the oral cavity. For comparison purposes, detected GOR in any number of 5-s frames corresponding to one 60-s frame was counted as one episode. Results: A total of 679 episodes of GOR to the upper oesophagus were counted using the 5-s acquisition technique. Only 183 of these episodes were detected on 60-s acquisition images. In the lower oesophagus, a total of 1749 GOR episodes were detected using the 5-s acquisition technique and only 1045 episodes using 60-s acquisition frames (the latter also included high-level GOR on 5-s frames counted as low level on 60-s acquisition frames). Ten patients had high-level GOR episodes that were detected only with the 5-s acquisition technique, leading to a different diagnosis in these patients. No correlation between the number of reflux episodes and the gastric emptying rates was noted. Conclusion: The 5-s frame acquisition technique is more sensitive than the 60-s technique for detecting both high- and low-level GOR. Advances in knowledge: Brief GOR episodes with a relatively low number of radioactive counts are frequently indistinguishable from intense background activity on 60-s acquisition frames. PMID:23520226

  18. Extended data acquisition support at GSI

    International Nuclear Information System (INIS)

    Marinescu, D.C.; Busch, F.; Hultzsch, H.; Lowsky, J.; Richter, M.

    1984-01-01

    The Experiment Data Acquisition and Analysis System (EDAS) of GSI, designed to support the data processing associated with nuclear physics experiments, provides three modes of operation: real-time, interactive replay and batch replay. The real-time mode is used for data acquisition and data analysis during an experiment performed at the heavy ion accelerator at GSI. An experiment may be performed either in Stand Alone Mode, using only the Experiment Computers, or in Extended Mode using all computing resources available. The Extended Mode combines the advantages of the real-time response of a dedicated minicomputer with the availability of computing resources in a large computing environment. This paper first gives an overview of EDAS and presents the GSI High Speed Data Acquisition Network. Data Acquisition Modes and the Extended Mode are then introduced. The structure of the system components, their implementation and the functions pertinent to the Extended Mode are presented. The control functions of the Experiment Computer sub-system are discussed in detail. Two aspects of the design of the sub-system running on the mainframe are stressed, namely the use of a multi-user installation for real-time processing and the use of a high level programming language, PL/I, as an implementation language for a system which uses parallel processing. The experience accumulated is summarized in a number of conclusions

  19. Using Motivational Interviewing Techniques to Address Parallel Process in Supervision

    Science.gov (United States)

    Giordano, Amanda; Clarke, Philip; Borders, L. DiAnne

    2013-01-01

    Supervision offers a distinct opportunity to experience the interconnection of counselor-client and counselor-supervisor interactions. One product of this network of interactions is parallel process, a phenomenon by which counselors unconsciously identify with their clients and subsequently present to their supervisors in a similar fashion…

  20. An environment for parallel structuring of Fortran programs

    International Nuclear Information System (INIS)

    Sridharan, K.; McShea, M.; Denton, C.; Eventoff, B.; Browne, J.C.; Newton, P.; Ellis, M.; Grossbard, D.; Wise, T.; Clemmer, D.

    1990-01-01

    The paper describes and illustrates an environment for interactive support of the detection and implementation of macro-level parallelism in Fortran programs. The approach couples algorithms for dependence analysis with both innovative techniques for complexity management and capabilities for the measurement and analysis of the parallel computation structures generated through use of the environment. The resulting environment is complementary to the more common approach of seeking local parallelism by loop unrolling, either by an automatic compiler or manually. (orig.)

  1. Development of imaging and reconstructions algorithms on parallel processing architectures for applications in non-destructive testing

    International Nuclear Information System (INIS)

    Pedron, Antoine

    2013-01-01

    This thesis work lies between the scientific domain of ultrasound non-destructive testing and algorithm–architecture matching. Ultrasound non-destructive testing comprises a group of analysis techniques used in science and industry to evaluate the properties of a material, component, or system without causing damage. In order to characterise possible defects, determining their position, size and shape, imaging and reconstruction tools have been developed at CEA-LIST within the CIVA software platform. The evolution of acquisition sensors implies a continuous growth of datasets, and consequently more and more computing power is needed to maintain interactive reconstructions. General-purpose processors (GPP) evolving towards parallelism and emerging architectures such as GPUs offer large acceleration possibilities that can be applied to these algorithms. The main goal of the thesis is to evaluate the acceleration that can be obtained for two reconstruction algorithms on these architectures. The two algorithms differ in their parallelization scheme. The first one can be properly parallelized on GPP, whereas on GPU an intensive use of atomic instructions is required. Within the second algorithm, parallelism is easier to express, but loop ordering on GPP, as well as thread scheduling and a good use of shared memory on GPU, are necessary in order to obtain efficient results. Different APIs and libraries, such as OpenMP, CUDA and OpenCL, are evaluated through chosen benchmarks. An integration of both algorithms in the CIVA software platform is proposed, and different issues related to code maintenance and durability are discussed. (author) [fr]

  2. Calibrationless Parallel Magnetic Resonance Imaging: A Joint Sparsity Model

    Directory of Open Access Journals (Sweden)

    Angshul Majumdar

    2013-12-01

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity maps for SENSE and SMASH, and the interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we have proposed a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis- and synthesis-prior joint-sparsity problems. This work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain image and an eight-channel Shepp-Logan phantom. Two sampling methods were used: variable-density random sampling and non-Cartesian radial sampling. For the brain data an acceleration factor of 4 was used, and for the other an acceleration factor of 6. The reconstruction results were quantitatively evaluated based on the Normalised Mean Squared Error (NMSE) between the reconstructed image and the original. The qualitative evaluation was based on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.
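
    For reference, the evaluation metric under one common normalization convention (an assumption; the paper may normalize differently): NMSE = ||x_rec - x_ref||² / ||x_ref||².

```python
import numpy as np

def nmse(x_rec, x_ref):
    """Normalised mean squared error between reconstruction and reference."""
    x_rec, x_ref = np.asarray(x_rec), np.asarray(x_ref)
    return float(np.sum(np.abs(x_rec - x_ref) ** 2)
                 / np.sum(np.abs(x_ref) ** 2))

ref = np.ones((8, 8))
print(nmse(ref + 0.1, ref))  # 0.01
```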

  3. Parallel fuzzy connected image segmentation on GPU

    OpenAIRE

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm impleme...

  4. Development of a parallel zoomed EVI sequence for high temporal resolution analysis of the BOLD response

    International Nuclear Information System (INIS)

    Rabrait, C.

    2006-01-01

    The hemodynamic impulse response to any short stimulus typically lasts around 20 seconds. Thus, detection of the Blood Oxygenation Level Dependent (BOLD) effect is usually performed using a 2D Echo Planar Imaging (EPI) sequence, with repetition times on the order of 1 or 2 seconds. This temporal resolution is generally enough for detection purposes. Nevertheless, when trying to accurately estimate hemodynamic response functions (HRF), higher scanning rates represent a real advantage. Thus, in order to reach a temporal resolution around 200 ms, we developed a new acquisition method based on Echo Volumar Imaging and 2D parallel acquisition (1). Echo Volumar Imaging (EVI) was proposed in 1977 by Mansfield (2). As a 3D single-shot acquisition method, EVI intrinsically possesses many advantages for functional neuroimaging. Nevertheless, to date, only a few applications have been reported (3, 4). In fact, very restrictive hardware requirements make EVI difficult to perform in satisfactory experimental conditions, even today. The critical point in EVI is the echo train duration, which is longer than in EPI due to 3D acquisition. Indeed, at equal field of view and spatial resolution, the EVI echo train duration must be approximately equal to the EPI echo train duration multiplied by the number of slices acquired in EPI. Consequently, EVI is much more sensitive than EPI to geometric distortions, which are related to phase errors, and also to signal losses, which are due to long echo times (TE). Thus, a first improvement was brought by 'zoomed' or 'localized' EVI (5), which makes it possible to focus on a small volume of interest and thus limit echo train durations compared to full-FOV acquisitions. To reduce echo train durations, we chose to apply parallel acquisition. Moreover, since EVI is a 3D acquisition method, we are able to perform parallel acquisition and SENSE reconstruction along the two phase directions (6). The R = 4 under-sampling consists in the

  5. Integrative Dynamic Reconfiguration in a Parallel Stream Processing Engine

    DEFF Research Database (Denmark)

    Madsen, Kasper Grud Skat; Zhou, Yongluan; Cao, Jianneng

    2017-01-01

    Load balancing, operator instance collocation and horizontal scaling are critical issues in Parallel Stream Processing Engines for achieving low data processing latency, optimized cluster utilization and minimized communication cost, respectively. In previous work, these issues are typically tackled ... solution called ALBIC, which supports general jobs. We implement the proposed techniques on top of Apache Storm, an open-source Parallel Stream Processing Engine. The extensive experimental results over both synthetic and real datasets show that our techniques clearly outperform existing approaches.

  6. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are computationally inherently serial. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10⁴ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.
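
    The walker/weight-update cycle can be sketched (in Python rather than CUDA, and with a toy one-dimensional "energy" random walk in place of the Ising model, so this is a strong simplification of the authors' scheme): independent walkers sample with shared weights, their merged histogram drives the next weight update.

```python
from multiprocessing import Pool
import random

N = 20  # discrete energy levels 0..N

def walker(args):
    seed, weights, n_steps = args
    rng = random.Random(seed)
    e = N // 2
    hist = [0] * (N + 1)
    for _ in range(n_steps):
        e_new = min(max(e + rng.choice((-1, 1)), 0), N)
        # Metropolis step in the weighted (multicanonical) ensemble.
        if rng.random() < weights[e_new] / weights[e]:
            e = e_new
        hist[e] += 1
    return hist

if __name__ == "__main__":
    weights = [1.0] * (N + 1)
    with Pool(4) as pool:
        for it in range(10):                       # weight-update iterations
            hists = pool.map(walker, [(it * 4 + w, weights, 5000)
                                      for w in range(4)])
            merged = [sum(h) for h in zip(*hists)]  # combine all walkers
            weights = [w / max(h, 1) for w, h in zip(weights, merged)]
            s = sum(weights)
            weights = [w / s for w in weights]      # keep weights normalized
    print([round(w, 4) for w in weights[:5]])
```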

  7. Simultaneous acquisition of three NMR spectra in a single ...

    Indian Academy of Sciences (India)

    Simultaneous acquisition of three NMR spectra in a single experiment ... set, which is based on a combination of different fast data acquisition techniques such as G-matrix ..... The sign and intensity of the CHn resonance depends on the delay.

  8. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H, E-mail: tpeterka@mcs.anl.gov [Sandia National Laboratories, California, Livermore, CA 94551 (United States)

    2009-07-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using techniques similar to those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  9. Parallel optoelectronic trinary signed-digit division

    Science.gov (United States)

    Alam, Mohammad S.

    1999-03-01

    The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of operands of arbitrary length in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator-based architecture is suggested for implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of the space-bandwidth product of the spatial light modulators used in the optoelectronic implementation.

  10. Parallel visualization on leadership computing resources

    International Nuclear Information System (INIS)

    Peterka, T; Ross, R B; Shen, H-W; Ma, K-L; Kendall, W; Yu, H

    2009-01-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using techniques similar to those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  11. Language constructs for modular parallel programs

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.

    1996-03-01

    We describe programming language constructs that facilitate the application of modular design techniques in parallel programming. These constructs allow us to isolate resource management and processor scheduling decisions from the specification of individual modules, which can themselves encapsulate design decisions concerned with concurrency, communication, process mapping, and data distribution. This approach permits development of libraries of reusable parallel program components and the reuse of these components in different contexts. In particular, alternative mapping strategies can be explored without modifying other aspects of program logic. We describe how these constructs are incorporated in two practical parallel programming languages, PCN and Fortran M. Compilers have been developed for both languages, allowing experimentation in substantial applications.

  12. Factors affecting the Long-Term Post-Acquisition Performance of BRICS Firms Engaging in Cross-Border Mergers and Acquisitions

    Directory of Open Access Journals (Sweden)

    Damilola Oyetade

    2017-04-01

    Full Text Available The purpose of the paper is to examine factors that affect the long-term performance of listed firms from Brazil, Russia, India, China and South Africa (BRICS) that engage in cross-border mergers and acquisitions. This paper adds to the existing literature on the performance of mergers and acquisitions from emerging economies by examining the effect of merger and acquisition activity on acquirers from individual BRICS countries and by examining whether intra-BRICS acquisitions are more beneficial than non-BRICS acquisitions. The system generalised method of moments estimation technique was employed in order to control for unobservable heterogeneity and potential endogeneity problems, using accounting data and merger deal information collected from the Bloomberg online database for the period January 2000 to December 2012. The results obtained indicate that there is persistence in the profits, suggesting that BRICS acquirers continue to profit as they engage in mergers and acquisitions, and that firm size significantly impacts the profits of acquirers.

  13. War-gaming application for future space systems acquisition: MATLAB implementation of war-gaming acquisition models and simulation results

    Science.gov (United States)

    Vienhage, Paul; Barcomb, Heather; Marshall, Karel; Black, William A.; Coons, Amanda; Tran, Hien T.; Nguyen, Tien M.; Guillen, Andy T.; Yoh, James; Kizer, Justin; Rogers, Blake A.

    2017-05-01

    The paper describes the MATLAB (MathWorks) programs that were developed during the REU workshop [1] to implement The Aerospace Corporation's Unified Game-based Acquisition Framework and Advanced Game-based Mathematical Framework (UGAF-AGMF) and its associated War-Gaming Engine (WGE) models. Each game can be played from the perspective of the Department of Defense Acquisition Authority (DAA) or of an individual contractor (KTR). The programs also implement Aerospace's optimum "Program and Technical Baseline (PTB) and associated acquisition" strategy that combines low Total Ownership Cost (TOC) with innovative designs while still meeting warfighter needs. The paper also describes the Bayesian Acquisition War-Gaming approach using Monte Carlo simulations (a numerical analysis technique for accounting for uncertainty in decision making) that simulate the PTB development and acquisition processes, and details the implementation procedure and the interactions between the games.

  14. Distributed parallel computing in stochastic modeling of groundwater systems.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
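
    The batch pattern the authors describe (a distributed framework farming out independent MODFLOW-type runs) has a compact single-machine analogue in Python's multiprocessing. In the sketch below, generate_conductivity_field and run_flow_model are hypothetical placeholders for the stochastic model generation and flow simulation steps; the Java Parallel Processing Framework used in the paper plays the role of the process pool across cluster nodes.

```python
from multiprocessing import Pool

def run_realization(seed):
    # hypothetical helpers standing in for user-supplied model code
    field = generate_conductivity_field(seed)   # one stochastic realization
    return run_flow_model(field)                # one MODFLOW-style simulation

if __name__ == "__main__":
    # 500 independent realizations spread over worker processes,
    # analogous to the paper's 50 threads on a 10-node cluster
    with Pool(processes=10) as pool:
        capture_zones = pool.map(run_realization, range(500))
```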

  15. Compiling Scientific Programs for Scalable Parallel Systems

    National Research Council Canada - National Science Library

    Kennedy, Ken

    2001-01-01

    ...). The research performed in this project included new techniques for recognizing implicit parallelism in sequential programs, a powerful and precise set-based framework for analysis and transformation...

  16. Z-buffer image assembly processing in high parallel visualization processing

    International Nuclear Information System (INIS)

    Kaneko, Isamu; Muramatsu, Kazuhiro

    2000-03-01

    On the platform of a parallel computer with many processors, the domain decomposition method is a popular means of parallel processing. As simulation scales grow and run times lengthen, visualization processing performed simultaneously with the actual computation becomes increasingly necessary, and especially for real-time visualization the domain decomposition technique is indispensable. In parallel rendering, the rendered results must be gathered to one processor to compose the integrated picture in the last stage. This integration is usually conducted using Z-buffer values. This process, however, introduces two serious problems, much slower processing and local memory shortage, when the number of processors exceeds several tens. In this report, two new solutions are proposed. The first is the adoption of a special Reduce operator in the parallelization process; the second is buffer compression achieved by deleting the background information. This report includes performance results investigating the effect of these new techniques on the parallel computer Paragon. (author)
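
    The Reduce-operator idea relies on Z-buffer merging being an associative, pixel-wise operation, so p partial images can be combined in ceil(log2 p) pairwise steps instead of gathering everything on one node. A minimal NumPy sketch of the binary merge, assuming per-processor RGB and depth arrays:

```python
import numpy as np

def z_composite(rgb_a, z_a, rgb_b, z_b):
    """Binary Z-buffer merge: keep, per pixel, the fragment nearer the viewer."""
    nearer = z_a <= z_b
    return (np.where(nearer[..., None], rgb_a, rgb_b),   # (H, W, 3) colors
            np.where(nearer, z_a, z_b))                  # (H, W) depths
```

    Background compression then amounts to transmitting only those pixels whose depth differs from the background value, which shrinks both the messages and the local buffers.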

  17. Implementation of the neutron noise technique for subcritical reactors using a new data acquisition system

    International Nuclear Information System (INIS)

    Bellino, Pablo A.; Gomez, Angel

    2009-01-01

    A new data acquisition system was designed and programmed for nuclear kinetics parameter estimation in subcritical reactors. The system allows the use of any of the neutron noise techniques, since it stores the whole information available in the neutron detection system. The Rossi-α, Feynman-α and spectral analysis methods were applied in order to estimate the prompt neutron decay constant (and hence the reactivity). The measurements were done in the nuclear research reactor RA-1, where, by inserting the control rods, different reactivity levels were reached (down to -7 dollars). With the three methods used, agreement was found between the estimates and the reference reactivities at each level, even when the detector efficiency was low. All the measurements were performed in a high gamma flux, yet the results were satisfactory. (author)
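
    As a concrete illustration of one of the methods named above, the Feynman-α ("variance-to-mean") statistic can be computed directly from the stored detection timestamps. The sketch below assumes timestamps in seconds and a set of gate widths; the prompt neutron decay constant α is then obtained by fitting Y(T) = Y_inf * (1 - (1 - exp(-αT)) / (αT)) to the resulting curve.

```python
import numpy as np

def feynman_y(timestamps, gate_widths):
    """Feynman-alpha statistic: Y(T) = Var[c]/Mean[c] - 1 for gate counts c."""
    t0, t1 = np.min(timestamps), np.max(timestamps)
    ys = []
    for T in gate_widths:
        edges = np.arange(t0, t1, T)               # consecutive gates of width T
        c, _ = np.histogram(timestamps, bins=edges)
        ys.append(c.var() / c.mean() - 1.0)        # excess variance over Poisson
    return np.array(ys)
```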

  18. Experience from Tore Supra acquisition system and evolutions

    International Nuclear Information System (INIS)

    Guillerminet, B.; Buravand, Y.; Chatelier, E.; Leroux, F.

    2004-01-01

    The Tore Supra tokamak has been upgraded to explore long-duration plasma discharges of up to 1000 s. Since summer 2001, the acquisition system has operated in continuous mode, apart from the data processing, which is still done after the pulse. In the first part, we explore a few solutions for processing the data continuously during the pulse, based on parallel processing on a Linux farm and then on a CONDOR system. The second part is devoted to the Web service exposing Tore Supra operation. In the last part, the VME acquisition system has been redesigned to keep up with the high data rates required by a few diagnostics; the workflow is now distributed among several computers. Finally, we give the current status of the implementation and the future planning

  19. Simulation and modeling of data acquisition systems for future high energy physics experiments

    International Nuclear Information System (INIS)

    Booth, A.; Black, D.; Walsh, D.; Bowden, M.; Barsotti, E.

    1991-01-01

    With the ever-increasing complexity of detectors and their associated data acquisition (DAQ) systems, it is important to bring together a set of tools to enable system designers, both hardware and software, to understand the behavioral aspects of the system as a whole, as well as the interaction between different functional units within the system. For complex systems, human intuition is inadequate since there are simply too many variables for system designers to begin to predict how varying any subset of them affects the total system. On the other hand, exact analysis, even to the extent of investing in disposable hardware prototypes, is much too time-consuming and costly. Simulation bridges the gap between physical intuition and exact analysis by providing a learning vehicle in which the effects of varying many parameters can be analyzed and understood. Simulation techniques are being used in the development of the Scalable Parallel Open Architecture Data Acquisition System at Fermilab, in which several sophisticated tools have been brought together to provide an integrated systems engineering environment specifically aimed at designing DAQ systems. Also presented are results of simulation experiments in which the effects of varying trigger rates, event sizes and event distribution over processors are clearly seen in terms of throughput and buffer usage in an event-building switch

  20. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network-based MIMD processors. The parallelism is implemented using standard UNIX tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a ''nearly realistic'' lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs

  1. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
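
    The core of a Newton-Krylov solver is that the inner Krylov iteration only ever needs Jacobian-vector products, which can be approximated by differencing the true residual. A minimal sketch of one such step follows (it assumes a recent SciPy for the gmres rtol keyword, and it leaves out the Schwarz part, which in the project supplies the domain-decomposed preconditioner for the inner solve):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def nks_step(residual, u, gmres_rtol=1e-3, fd_eps=1e-7):
    """One Newton step with a Jacobian-free Krylov (GMRES) inner solve."""
    r = residual(u)
    def jv(v):
        # Jacobian-vector product by finite differences of the residual
        return (residual(u + fd_eps * v) - r) / fd_eps
    J = LinearOperator((u.size, u.size), matvec=jv)
    du, _ = gmres(J, -r, rtol=gmres_rtol)   # loose inner tolerance is typical
    return u + du
```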

  2. Detection and compensation of organ/lesion motion using 4D-PET/CT respiratory gated acquisition techniques

    International Nuclear Information System (INIS)

    Bettinardi, Valentino; Picchio, Maria; Di Muzio, Nadia; Gianolli, Luigi; Gilardi, Maria Carla; Messa, Cristina

    2010-01-01

    Purpose: To describe the degradation effects produced by respiratory organ and lesion motion on PET/CT images and to define the role of respiratory-gated (RG) 4D-PET/CT techniques in compensating for such effects. Methods: Based on the literature and on our own experience, technical recommendations and clinical indications for the use of RG 4D-PET/CT have been outlined. Results: RG 4D-PET/CT techniques require a state-of-the-art PET/CT scanner, a respiratory monitoring system and dedicated acquisition and processing protocols. Patient training is particularly important to obtain a regular breathing pattern. An adequate number of phases has to be selected to balance motion compensation and statistical noise. RG 4D-PET/CT motion-free images may be clinically useful for tumour tissue characterization, monitoring patient treatment and target definition in radiation therapy planning. Conclusions: RG 4D-PET/CT is a valuable tool to improve image quality and quantitative accuracy and to assess and measure organ and lesion motion for radiotherapy planning.
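
    A minimal sketch of the binning step at the heart of RG acquisition: each event is tagged with the respiratory amplitude measured by the monitoring system and assigned to one of n phase bins. Amplitude-based gating is assumed here (phase-based gating would bin on cycle phase instead), and the percentile clipping is an illustrative choice for handling breathing outliers. Choosing n trades residual motion blur against per-bin counting statistics, exactly the balance the abstract describes.

```python
import numpy as np

def gate_events(event_t, resp_t, resp_amp, n_phases=8):
    """Assign each event an index 0..n_phases-1 (amplitude-based gating).

    resp_t must be increasing; resp_amp is the respiratory trace.
    """
    a = np.interp(event_t, resp_t, resp_amp)      # amplitude at each event time
    lo, hi = np.percentile(resp_amp, [1, 99])     # clip extreme excursions
    edges = np.linspace(lo, hi, n_phases + 1)
    return np.clip(np.digitize(a, edges) - 1, 0, n_phases - 1)
```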

  3. A fast data acquisition system for PHA and MCS measurements

    International Nuclear Information System (INIS)

    Eijk, P.J.A. van; Keyser, C.J.; Rigterink, B.J.; Hasper, H.

    1985-01-01

    A microprocessor-controlled data acquisition system for pulse height analysis and multichannel scaling is described. A 4K x 24 bit static memory is used to obtain a fast data acquisition rate. The system can store 12-bit ADC or TDC data within 150 ns. Operating commands can be entered via a small keyboard or via an RS-232-C interface. An oscilloscope is used to display a spectrum. The display of a spectrum or the transmission of spectrum data to an external computer causes only a short interruption of a measurement in progress and is accomplished by using a DMA circuit. The program is written in Modular Pascal and is divided into 15 modules. These implement 9 parallel processes which are synchronized using semaphores. Hardware interrupts from the data acquisition, DMA, keyboard and RS-232-C circuits are used to signal these processes. (orig.)

  4. Fast-Acquisition/Weak-Signal-Tracking GPS Receiver for HEO

    Science.gov (United States)

    Wintemitz, Luke; Boegner, Greg; Sirotzky, Steve

    2004-01-01

    A report discusses the technical background and design of the Navigator Global Positioning System (GPS) receiver, a radiation-hardened receiver intended for use aboard spacecraft. Navigator is capable of weak-signal acquisition and tracking as well as much faster acquisition of strong or weak signals with no a priori knowledge or external aiding. Weak-signal acquisition and tracking enables GPS use in high Earth orbits (HEO), and fast acquisition allows the receiver to remain without power until needed in any orbit. Signal acquisition and signal tracking are, respectively, the processes of finding and demodulating a signal. Acquisition is the more computationally difficult process. Previous GPS receivers employ the method of sequentially searching the two-dimensional signal parameter space (code phase and Doppler). Navigator exploits properties of the Fourier transform in a massively parallel search for the GPS signal. This method results in far faster acquisition times [in the lab, 12 GPS satellites have been acquired with no a priori knowledge in a Low-Earth-Orbit (LEO) scenario in less than one second]. Modeling has shown that Navigator will be capable of acquiring signals down to 25 dB-Hz, appropriate for HEO missions. Navigator is built using the radiation-hardened ColdFire microprocessor, with the most computationally intense functions housed in dedicated field-programmable gate arrays. The high performance of the algorithm and of the receiver as a whole is made possible by optimizing computational efficiency and carefully weighing tradeoffs among the sampling rate, data format, and data-path bit width.
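
    The Fourier-transform trick mentioned above evaluates the correlation over all code phases at once, so the two-dimensional search collapses to one FFT-based circular correlation per Doppler bin. A simplified baseband sketch is below; it assumes the C/A code replica is already resampled to the same n samples as the signal, and a real receiver would add noncoherent integration and finer Doppler handling.

```python
import numpy as np

def acquire(signal, ca_code, fs, doppler_bins):
    """Parallel code-phase search: one FFT correlation per Doppler bin."""
    n = len(signal)
    t = np.arange(n) / fs
    code_fft = np.conj(np.fft.fft(ca_code, n))   # replica spectrum, conjugated
    best = (0.0, None, None)
    for fd in doppler_bins:
        baseband = signal * np.exp(-2j * np.pi * fd * t)   # wipe carrier
        corr = np.abs(np.fft.ifft(np.fft.fft(baseband) * code_fft))
        if corr.max() > best[0]:
            best = (corr.max(), fd, int(corr.argmax()))
    return best   # (peak magnitude, Doppler in Hz, code phase in samples)
```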

  5. On the Automatic Parallelization of Sparse and Irregular Fortran Programs

    Directory of Open Access Journals (Sweden)

    Yuan Lin

    1999-01-01

    Full Text Available Automatic parallelization is usually believed to be less effective at exploiting implicit parallelism in sparse/irregular programs than in their dense/regular counterparts. However, not much is really known because there have been few research reports on this topic. In this work, we have studied the possibility of using an automatic parallelizing compiler to detect the parallelism in sparse/irregular programs. The study with a collection of sparse/irregular programs led us to some common loop patterns. Based on these patterns new techniques were derived that produced good speedups when manually applied to our benchmark codes. More importantly, these parallelization methods can be implemented in a parallelizing compiler and can be applied automatically.

  6. Performance assessment of the SIMFAP parallel cluster at IFIN-HH Bucharest

    International Nuclear Information System (INIS)

    Adam, Gh.; Adam, S.; Ayriyan, A.; Dushanov, E.; Hayryan, E.; Korenkov, V.; Lutsenko, A.; Mitsyn, V.; Sapozhnikova, T.; Sapozhnikov, A.; Streltsova, O.; Buzatu, F.; Dulea, M.; Vasile, I.; Sima, A.; Visan, C.; Busa, J.; Pokorny, I.

    2008-01-01

    Performance assessment and case study outputs of the parallel SIMFAP cluster at IFIN-HH Bucharest point to its effective and reliable operation. A comparison with results on the supercomputing system at LIT-JINR Dubna adds insight on resource allocation for problem solving by parallel computing. The solution of models asking for very large numbers of knots in the discretization mesh needs the migration to high performance computing based on parallel cluster architectures. The acquisition of ready-to-use parallel computing facilities being beyond limited budgetary resources, the solution at IFIN-HH was to buy the hardware and the inter-processor network, and to implement by own efforts the open software concerning both the operating system and the parallel computing standard. The present paper provides a report demonstrating the successful solution of these tasks. The implementation of the well-known HPL (High Performance LINPACK) Benchmark points to the effective and reliable operation of the cluster. The comparison of HPL outputs obtained on parallel clusters of different magnitudes shows that there is an optimum range of the order N of the linear algebraic system over which a given parallel cluster provides optimum parallel solutions. For the SIMFAP cluster, this range can be inferred to correspond to about 1 to 2 x 10^4 linear algebraic equations. For an algorithm of polynomial complexity N^α, the task sharing among p processors within a parallel solution mainly follows an (N/p)^α behaviour under peak performance achievement. Thus, while the problem complexity remains the same, a substantial decrease of the coefficient of the leading order of the polynomial complexity is achieved. (authors)

  7. Automatic parallelization of while-Loops using speculative execution

    International Nuclear Information System (INIS)

    Collard, J.F.

    1995-01-01

    Automatic parallelization of imperative sequential programs has focused on nests of for-loops. The most recent techniques consist in finding an affine mapping with respect to the loop indices to simultaneously capture the temporal and spatial properties of the parallelized program. Such a mapping is usually called a "space-time transformation." This work describes an extension of these techniques to while-loops using speculative execution. We show that space-time transformations are a good framework for summing up previous restructuring techniques for while-loops, such as pipelining. Moreover, we show that these transformations can be derived and applied automatically

  8. Improving quality of arterial spin labeling MR imaging at 3 Tesla with a 32-channel coil and parallel imaging.

    Science.gov (United States)

    Ferré, Jean-Christophe; Petr, Jan; Bannier, Elise; Barillot, Christian; Gauvrit, Jean-Yves

    2012-05-01

    To compare 12-channel and 32-channel phased-array coils and to determine the optimal parallel imaging (PI) technique and factor for brain perfusion imaging using Pulsed Arterial Spin Labeling (PASL) at 3 Tesla (T). Twenty-seven healthy volunteers underwent 10 different PASL perfusion PICORE Q2TIPS scans at 3T using 12-channel and 32-channel coils, without PI and with GRAPPA or mSENSE at factor 2. PI factors 3 and 4 were used only with the 32-channel coil. Visual quality was assessed using four parameters. Quantitative analyses were performed using temporal noise, contrast-to-noise and signal-to-noise ratios (CNR, SNR). Compared with 12-channel acquisition, the scores for 32-channel acquisition were significantly higher for overall visual quality, lower for noise and higher for SNR and CNR. With the 32-channel coil, the best artifact scores were achieved with PI factor 2. Noise increased, and SNR and CNR decreased, with increasing PI factor; however, mSENSE 2 scores were not always significantly different from acquisition without PI. For PASL at 3T, the 32-channel coil provided better quality than the 12-channel coil. With the 32-channel coil, mSENSE 2 seemed to offer the best compromise for decreasing artifacts without significantly reducing SNR and CNR. Copyright © 2012 Wiley Periodicals, Inc.

  9. Data acquisition system for SLD

    International Nuclear Information System (INIS)

    Sherden, D.J.

    1985-05-01

    This paper describes the data acquisition system planned for the SLD detector, which is being constructed for use with the SLAC Linear Collider (SLC). An exclusively FASTBUS front-end system is used together with a VAX-based host system. While the volume of data transferred does not challenge the bandwidth capabilities of FASTBUS, extensive use is made of the parallel processing capabilities allowed by FASTBUS to reduce the data to a size which can be handled by the host system. The low repetition rate of the SLC allows a relatively simple software-based trigger. The principal components and overall architecture of the hardware and software are described

  10. Reliability of contemporary data-acquisition techniques for LEED analysis

    International Nuclear Information System (INIS)

    Noonan, J.R.; Davis, H.L.

    1980-10-01

    It is becoming clear that one of the principal limitations in LEED structure analysis is the quality of the experimental I-V profiles. This limitation is discussed, and data acquisition procedures are described which, for simple systems, seem to enhance the quality of agreement between the results of theoretical model calculations and experimental LEED spectra. By employing such procedures to obtain data from Cu(100), excellent agreement between computed and measured profiles has been achieved. 7 figures

  11. Physics Structure Analysis of Parallel Waves Concept of Physics Teacher Candidate

    International Nuclear Information System (INIS)

    Sarwi, S; Linuwih, S; Supardi, K I

    2017-01-01

    The aim of this research was to find the parallel structure of wave physics concepts and the factors that influence the formation of parallel conceptions among physics teacher candidates. The method used was qualitative research of the cross-sectional design type. The subjects were five third-semester basic physics students and six fifth-semester wave course students. Data were collected using think-aloud protocols and written tests. Quantitative data were analysed with a descriptive percentage technique, while belief and awareness of answers were analysed with an explanatory analysis. Results of the research include: 1) the structure of the concept can be displayed through the illustration of a map containing the theoretical core, supplements to the theory, and phenomena that occur daily; 2) a trend toward parallel conceptions of wave physics was identified for stationary waves, resonance of sound, and the propagation of transverse electromagnetic waves; 3) the parallel conceptions were influenced by reading textbooks that are less than comprehensive and by partial understanding forming the structure of the theory. (paper)

  12. Keldysh formalism for multiple parallel worlds

    International Nuclear Information System (INIS)

    Ansari, M.; Nazarov, Y. V.

    2016-01-01

    We present a compact and self-contained review of the recently developed Keldysh formalism for multiple parallel worlds. The formalism has been applied to consistent quantum evaluation of the flows of informational quantities, in particular, to the evaluation of Renyi and Shannon entropy flows. We start with the formulation of the standard and extended Keldysh techniques in a single world in a form convenient for our presentation. We explain the use of Keldysh contours encompassing multiple parallel worlds. In the end, we briefly summarize the concrete results obtained with the method.

  13. Keldysh formalism for multiple parallel worlds

    Science.gov (United States)

    Ansari, M.; Nazarov, Y. V.

    2016-03-01

    We present a compact and self-contained review of the recently developed Keldysh formalism for multiple parallel worlds. The formalism has been applied to consistent quantum evaluation of the flows of informational quantities, in particular, to the evaluation of Renyi and Shannon entropy flows. We start with the formulation of the standard and extended Keldysh techniques in a single world in a form convenient for our presentation. We explain the use of Keldysh contours encompassing multiple parallel worlds. In the end, we briefly summarize the concrete results obtained with the method.

  14. Keldysh formalism for multiple parallel worlds

    Energy Technology Data Exchange (ETDEWEB)

    Ansari, M.; Nazarov, Y. V., E-mail: y.v.nazarov@tudelft.nl [Delft University of Technology, Kavli Institute of Nanoscience (Netherlands)

    2016-03-15

    We present a compact and self-contained review of the recently developed Keldysh formalism for multiple parallel worlds. The formalism has been applied to consistent quantum evaluation of the flows of informational quantities, in particular, to the evaluation of Renyi and Shannon entropy flows. We start with the formulation of the standard and extended Keldysh techniques in a single world in a form convenient for our presentation. We explain the use of Keldysh contours encompassing multiple parallel worlds. In the end, we briefly summarize the concrete results obtained with the method.

  15. Parallel computation of nondeterministic algorithms in VLSI

    Energy Technology Data Exchange (ETDEWEB)

    Hortensius, P D

    1987-01-01

    This work examines parallel VLSI implementations of nondeterministic algorithms. It is demonstrated that conventional pseudorandom number generators are unsuitable for highly parallel applications. Efficient parallel pseudorandom sequence generation can be accomplished using certain classes of elementary one-dimensional cellular automata. The pseudorandom numbers appear in parallel on each clock cycle. Extensive study of the properties of these new pseudorandom number generators is made using standard empirical random number tests, cycle length tests, and implementation considerations. Furthermore, it is shown that these particular cellular automata can form the basis of efficient VLSI architectures for computations involved in the Monte Carlo simulation of both the percolation and Ising models from statistical mechanics. Finally, a variation on a Built-In Self-Test technique based upon cellular automata is presented. These Cellular Automata-Logic-Block-Observation (CALBO) circuits improve upon conventional design-for-testability circuitry.
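
    As an illustration of the kind of generator studied here, the sketch below iterates an elementary rule-30 cellular automaton and emits one pseudorandom bit per cell per clock cycle. Periodic boundaries and a single uniform rule are simplifying assumptions; the thesis's generators use hybrid rules and site/time spacing to decorrelate neighboring cells.

```python
import numpy as np

def rule30_stream(width=64, steps=1000, seed=None):
    """Rule-30 CA pseudorandom bit stream: one bit per cell per clock."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, width, dtype=np.uint8)
    out = np.empty((steps, width), dtype=np.uint8)
    for i in range(steps):
        left = np.roll(state, 1)                  # periodic boundary
        right = np.roll(state, -1)
        state = left ^ (state | right)            # rule 30: l XOR (c OR r)
        out[i] = state                            # all cells update in parallel
    return out
```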

  16. The Design of Wireless Data Acquisition and Remote Transmission Interface in Micro-seismic Signals

    Directory of Open Access Journals (Sweden)

    Huan-Huan BIAN

    2014-02-01

    Full Text Available Micro-seismic signal acquisition and transmission is a key part of geological prospecting. This paper describes a brand-new solution for micro-seismic signal acquisition and remote transmission using the Zigbee technique and wireless data transmission techniques. The hardware, including the front-end data acquisition interface built on Zigbee wireless networking, the remote data transmission solution based on the general packet radio service (GPRS), and the interface between Zigbee and GPRS, is designed in detail, and the corresponding system software is presented. The solution addresses the numerous practical problems posed by the complex and harsh environments encountered in micro-seismic prospecting. The experimental results demonstrate that the method, combining Zigbee wireless network communication with GPRS wireless packet switching, is efficient, reliable and flexible.

  17. Data Acquisition and Flux Calculations

    DEFF Research Database (Denmark)

    Rebmann, C.; Kolle, O; Heinesch, B

    2012-01-01

    In this chapter, the basic theory and the procedures used to obtain turbulent fluxes of energy, mass, and momentum with the eddy covariance technique will be detailed. This includes a description of data acquisition, pretreatment of high-frequency data and flux calculation....
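
    At its core, the eddy covariance flux of this chapter is the covariance of vertical wind and scalar fluctuations over an averaging block. The fragment below shows only that kernel; despiking, detrending, coordinate rotation and density corrections, which the chapter's procedures include, are omitted.

```python
import numpy as np

def eddy_flux(w, c):
    """Eddy covariance flux = mean(w'c'), the covariance of fluctuations."""
    return np.mean((w - w.mean()) * (c - c.mean()))

# e.g. one 30-minute block sampled at 20 Hz:
# w, c are arrays of length 20 * 60 * 30 (vertical wind and scalar traces)
```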

  18. Parallel processing method for high-speed real time digital pulse processing for gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Fernandes, A.M.; Pereira, R.C.; Sousa, J.; Neto, A.; Carvalho, P.; Batista, A.J.N.; Carvalho, B.B.; Varandas, C.A.F.; Tardocchi, M.; Gorini, G.

    2010-01-01

    A new data acquisition (DAQ) system was developed to fulfil the requirements of the gamma-ray spectrometer (GRS) JET-EP2 (Joint European Torus enhancement project 2), providing high-resolution spectroscopy at very high count rates (up to a few MHz). The system is based on the Advanced Telecommunications Computing Architecture (ATCA) and includes a transient record (TR) module with 8 channels of 14-bit resolution at a 400 MSamples/s (MSPS) sampling rate, 4 GB of local memory, and 2 field-programmable gate arrays (FPGAs) able to perform real-time algorithms for data reduction and digital pulse processing. Although at 400 MSPS only fast programmable devices such as FPGAs can be used for both data processing and data transfer, FPGA resources also present speed limitations at some specific tasks, leading to unavoidable data loss when demanding algorithms are applied. To overcome this problem, and foreseeing an increase in algorithm complexity, a new digital parallel filter was developed, aiming to perform real-time pulse processing in the FPGAs of the TR module at the stated sampling rate. The filter is based on the conventional digital time-invariant trapezoidal shaper operating on parallelized data while performing pulse height analysis (PHA) and pile-up rejection (PUR). The incoming sampled data are successively parallelized and fed into the processing algorithm block at one fourth of the sampling rate; the subsequent data processing and data transfer are also performed at one fourth of the sampling rate. The algorithm based on the data parallelization technique was implemented and tested at JET facilities, where a spectrum was obtained. Based on the observed results, the PHA algorithm will be improved by implementing pulse pile-up discrimination.
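
    The underlying shaper is the classical time-invariant trapezoidal filter. In serial, vectorized form it reduces to a pole-zero cancellation followed by convolution with a trapezoidal kernel, as in the NumPy sketch below; the FPGA implementation evaluates the equivalent recursion on four parallelized sample streams at a quarter of the 400 MSPS rate, and the PHA pulse heights are then the local maxima of the output. The parameters are illustrative.

```python
import numpy as np

def trapezoid(v, k, m, tau):
    """Trapezoidal shaper: rise k samples, flat top m, pulse decay constant tau."""
    d = v.astype(float)
    d[1:] -= np.exp(-1.0 / tau) * v[:-1]     # pole-zero: exponential -> impulse
    # boxcar(k) * boxcar(k+m) gives a trapezoid with rise k, flat m, fall k
    kern = np.convolve(np.ones(k), np.ones(k + m)) / k
    return np.convolve(d, kern)[: len(v)]    # peak height tracks pulse amplitude
```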

  19. Acquisition of Dental Skills in Preclinical Technique Courses: Influence of Spatial and Manual Abilities

    Science.gov (United States)

    Schwibbe, Anja; Kothe, Christian; Hampe, Wolfgang; Konradt, Udo

    2016-01-01

    Sixty years of research have not added up to a concordant evaluation of the influence of spatial and manual abilities on dental skill acquisition. We used Ackerman's theory of ability determinants of skill acquisition to explain the influence of spatial visualization and manual dexterity on the task performance of dental students in two…

  20. Research in Parallel Algorithms and Software for Computational Aerosciences

    Science.gov (United States)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed-memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, the SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message-passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  1. D0 experiment: its trigger, data acquisition, and computers

    International Nuclear Information System (INIS)

    Cutts, D.; Zeller, R.; Schamberger, D.; Van Berg, R.

    1984-05-01

    The new collider facility to be built at Fermilab's Tevatron-I D0 region is described. The data acquisition requirements are discussed, as well as the hardware and software triggers designed to meet these needs. An array of MicroVAX computers running VAXELN will filter in parallel (a complete event in each microcomputer) and transmit accepted events via Ethernet to a host. This system, together with its subsequent offline needs, is briefly presented

  2. MINARET: Towards a time-dependent neutron transport parallel solver

    International Nuclear Information System (INIS)

    Baudron, A.M.; Lautard, J.J.; Maday, Y.; Mula, O.

    2013-01-01

    We present the newly developed time-dependent 3D multigroup discrete ordinates neutron transport solver that has recently been implemented in the MINARET code. The solver is the support for a study of computing acceleration techniques that involve parallel architectures. In this work, we focus on the parallelization of two of the variables involved in our equation: the angular directions and the time. The time variable has been parallelized by a (time) domain decomposition method called the parareal-in-time algorithm. (authors)
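
    For reference, the parareal iteration used for the time variable combines a cheap serial coarse propagator G with expensive fine propagations F that are independent across time slices: u_{k+1}[n+1] = G(u_{k+1}[n]) + F(u_k[n]) - G(u_k[n]). A minimal scalar sketch follows; the fine-propagation list comprehension is the part that would be distributed across processors.

```python
import numpy as np

def parareal(fine, coarse, u0, ts, iters=4):
    """Parareal-in-time; fine(u, t0, t1) is accurate/costly, coarse is cheap."""
    n = len(ts) - 1
    u = np.empty(n + 1); u[0] = u0
    for i in range(n):                                       # serial coarse sweep
        u[i + 1] = coarse(u[i], ts[i], ts[i + 1])
    for _ in range(iters):
        f = [fine(u[i], ts[i], ts[i + 1]) for i in range(n)]   # parallel in time
        g = [coarse(u[i], ts[i], ts[i + 1]) for i in range(n)]
        for i in range(n):                                   # serial correction
            u[i + 1] = coarse(u[i], ts[i], ts[i + 1]) + f[i] - g[i]
    return u

# e.g. for du/dt = -u: coarse = lambda u, t0, t1: u * (1 - (t1 - t0))  # one Euler step
#                      fine   = lambda u, t0, t1: u * np.exp(-(t1 - t0))
```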

  3. Comparison of two single-breath-held 3-D acquisitions with multi-breath-held 2-D cine steady-state free precession MRI acquisition in children with single ventricles

    Energy Technology Data Exchange (ETDEWEB)

    Atweh, Lamya A.; Dodd, Nicholas A.; Krishnamurthy, Ramkumar; Chu, Zili D. [Texas Children's Hospital, EB Singleton Department of Pediatric Radiology, Cardiovascular Imaging, Houston, TX (United States); Pednekar, Amol [Philips Healthcare, Houston, TX (United States); Krishnamurthy, Rajesh [Texas Children's Hospital, EB Singleton Department of Pediatric Radiology, Cardiovascular Imaging, Houston, TX (United States); Baylor College of Medicine, Department of Radiology, Houston, TX (United States); Baylor College of Medicine, Department of Pediatrics, Houston, TX (United States)

    2016-05-15

    Breath-held two-dimensional balanced steady-state free precession cine acquisition (2-D breath-held SSFP), accelerated with parallel imaging, is the method of choice for evaluating ventricular function due to its superior blood-to-myocardial contrast, edge definition and high intrinsic signal-to-noise ratio throughout the cardiac cycle. The purpose of this study is to qualitatively and quantitatively compare two different single-breath-hold 3-D cine SSFP acquisitions, using 1) multidirectional sensitivity encoding (SENSE) acceleration factors (3-D multiple SENSE SSFP) and 2) the k-t broad-use linear acceleration speed-up technique (3-D k-t SSFP), with conventional 2-D breath-held SSFP in non-sedated asymptomatic volunteers and children with single ventricle congenital heart disease. Our prospective study was performed on 30 non-sedated subjects (9 healthy volunteers and 21 functional single ventricle patients), ages 12.5 +/- 2.8 years. Two-dimensional breath-held SSFP with a SENSE acceleration factor of 2, eight-fold accelerated 3-D k-t SSFP, and 3-D multiple SENSE SSFP with a total parallel imaging factor of 4 were performed to evaluate ventricular volumes and mass in the short-axis orientation. Image quality scores (blood-to-myocardial contrast, edge definition and interslice alignment) and volumetric analysis (end-systolic volume, end-diastolic volume and ejection fraction) were performed on the data sets by experienced users. A paired t-test was performed to compare each of the 3-D k-t SSFP and 3-D multiple SENSE SSFP clinical scores against 2-D breath-held SSFP. Bland-Altman analysis was performed on left ventricle (LV) and single ventricle volumetry. Interobserver and intraobserver variability in volumetric measurements were determined using intraclass coefficients. The clinical scores were highest for the 2-D breath-held SSFP images. Between the two 3-D sequences, 3-D multiple SENSE SSFP performed better than 3-D k-t SSFP. Bland-Altman analysis for volumes ...

  4. Machine-assisted verification of latent fingerprints: first results for nondestructive contact-less optical acquisition techniques with a CWL sensor

    Science.gov (United States)

    Hildebrandt, Mario; Kiltz, Stefan; Krapyvskyy, Dmytro; Dittmann, Jana; Vielhauer, Claus; Leich, Marcus

    2011-11-01

    A machine-assisted analysis of traces from crime scenes might be possible with the advent of new high-resolution non-destructive contact-less acquisition techniques for latent fingerprints. This requires reliable techniques for the automatic extraction of fingerprint features from latent and exemplar fingerprints for matching purposes using pattern recognition approaches. Therefore, we evaluate the NIST Biometric Image Software for the feature extraction and verification of contact-lessly acquired latent fingerprints to determine potential error rates. Our exemplary test setup includes 30 latent fingerprints from 5 people, in two test sets that are acquired from different surfaces using a chromatic white light sensor. The first test set includes 20 fingerprints on two different surfaces and is used to determine the feature extraction performance. The second test set includes one latent fingerprint on 10 different surfaces and an exemplar fingerprint to determine the verification performance. The sensing technique utilized here does not require a physical or chemical visibility enhancement of the fingerprint residue, so the original trace remains unaltered for further investigations. No particular feature extraction and verification techniques have yet been applied to such data. Hence, we see the need for appropriate algorithms that are suitable to support forensic investigations.

  5. Construction of a FASTBUS data-acquisition system for the ELAN experiment

    International Nuclear Information System (INIS)

    Noel, A.

    1992-06-01

    To use the FASTBUS data acquisition system for the ELAN experiment at the electron stretcher accelerator ELSA, a new software tool has been developed. This tool manages the parallel readout of CAMAC with a VME front-end processor and of FASTBUS with the special FASTBUS processor segment AEB. Both processors are connected by a 32-bit high-speed VSB data bus. (orig.)

  6. Cross-border Mergers and Acquisitions

    DEFF Research Database (Denmark)

    Wang, Daojuan

    This paper focuses on three topics in the cross-border mergers and acquisitions (CBM&As) field: motivations for CBM&As, valuation techniques and CBM&A performance (assessment and the determinants). By taking an overview of what has been found so far in the academic field and investigating ...

  7. An Implementation and Parallelization of the Scale Space Meshing Algorithm

    Directory of Open Access Journals (Sweden)

    Julie Digne

    2015-11-01

    Full Text Available Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However, in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts, or because the goal is to explore visually the data exactly as they were acquired without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high-precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity-free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.

  8. Compensation of spatial system response in SPECT with conjugate gradient reconstruction technique

    International Nuclear Information System (INIS)

    Formiconi, A.R.; Pupi, A.; Passeri, A.

    1989-01-01

    A procedure for determination of the system matrix in single photon emission tomography (SPECT) is described which uses a conjugate gradient reconstruction technique to take into account the variable system resolution of a camera equipped with parallel-hole collimators. The procedure involves acquisition of system line spread functions (LSF) in the region occupied by the object studied. Those data are used to generate a set of weighting factors, based on the assumption that the LSFs of the collimated camera are of Gaussian shape with full width at half maximum (FWHM) linearly dependent on source depth over the span of image space. The factors are stored in a disc file for subsequent use in reconstruction. Reconstruction is then performed using the conjugate gradient method, with the system matrix modified by incorporation of these precalculated factors to take into account the variable geometrical system response. The set of weighting factors is regenerated whenever the acquisition conditions are changed (collimator, radius of rotation); with an ultra-high-resolution (UHR) collimator, 2000 weighting factors need to be calculated. (author)
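
    The precalculated factors have a simple structure: for each source depth, a row of Gaussian LSF samples whose FWHM grows linearly with depth, normalized and later folded into the system matrix used by the conjugate gradient iterations. A sketch follows; the numeric defaults (fwhm0, slope, pixel size) are illustrative stand-ins for the measured calibration values described in the abstract.

```python
import numpy as np

def depth_weights(depths_mm, fwhm0=4.0, slope=0.04, pixel=3.0, half=5):
    """One row of Gaussian weights per depth; FWHM = fwhm0 + slope * depth (mm)."""
    offsets = np.arange(-half, half + 1) * pixel          # transaxial offsets, mm
    sigma = (fwhm0 + slope * np.asarray(depths_mm)[:, None]) / 2.355
    w = np.exp(-0.5 * (offsets / sigma) ** 2)
    return w / w.sum(axis=1, keepdims=True)               # normalized LSF samples
```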

  9. Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor.

    Science.gov (United States)

    Zhao, Yanzhi; Jiao, Leihao; Weng, Dacheng; Zhang, Dan; Zheng, Rencheng

    2016-09-15

    In the development of multi-dimensional force sensors, dimension coupling is the ubiquitous factor restricting the improvement of measurement accuracy. To effectively reduce the influence of dimension coupling on the parallel multi-dimensional force sensor, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, and the influence of friction on dimension coupling is effectively reduced by replacing sliding friction with rolling friction. In this paper, the mathematical model is established in combination with the structural model of the parallel three-dimensional force sensor, and the modeling and analysis of mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the calculation results show that the mechanically decoupled parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. A load calibration and data acquisition experiment system was built, and calibration experiments were done. According to the calibration experiments, the measurement error is less than 2.86% and the coupling error is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for applied research on parallel multi-dimensional force sensors.

  10. Out-of-order parallel discrete event simulation for electronic system-level design

    CERN Document Server

    Chen, Weiwei

    2014-01-01

    This book offers readers a set of new approaches, tools, and techniques for facing the challenges of parallelization in embedded system design. It provides an advanced parallel simulation infrastructure for efficient and effective system-level model validation and development so as to build better products in less time. Since parallel discrete event simulation (PDES) has the potential to exploit the underlying parallel computational capability in today's multi-core simulation hosts, the author begins by reviewing the parallelization of discrete event simulation, identifying ...

  11. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.; He, Z.; Xiao, M.; Zhang, Z.

    2014-01-01

    ... is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI) ...

  12. Step by step parallel programming method for molecular dynamics code

    International Nuclear Information System (INIS)

    Orii, Shigeo; Ohta, Toshio

    1996-07-01

    Parallel programming for a numerical simulation program of molecular dynamics is carried out with a step-by-step programming technique using the two-phase method. As a result, within a certain range of computing parameters, parallel performance is obtained by using do-loop-level parallel programming, which distributes the calculation across processors according to the indices of do-loops, on the vector-parallel computer VPP500 and the scalar-parallel computer Paragon. It is also found that VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts which cannot be reduced by do-loop-level parallel programming can be reduced to a negligible level by vectorization; the time-consuming parts of the program are then concentrated in fewer sections that can be accelerated by do-loop-level parallel programming. This report shows the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on VPP500 and Paragon. (author)

  13. Acoustic window planning for ultrasound acquisition.

    Science.gov (United States)

    Göbl, Rüdiger; Virga, Salvatore; Rackerseder, Julia; Frisch, Benjamin; Navab, Nassir; Hennersperger, Christoph

    2017-06-01

    Autonomous robotic ultrasound has recently gained considerable interest, especially for collaborative applications. Existing methods for acquisition trajectory planning are based solely on geometrical considerations, such as the pose of the transducer with respect to the patient surface. This work aims at establishing acoustic window planning to enable autonomous ultrasound acquisitions of anatomies with restricted acoustic windows, such as the liver or the heart. We propose a fully automatic approach for the planning of acquisition trajectories which only requires information about the target region as well as existing tomographic imaging data, such as X-ray computed tomography. The framework integrates both geometrical and physics-based constraints to estimate the best ultrasound acquisition trajectories with respect to the available acoustic windows. We evaluate the developed method using virtual planning scenarios based on real patient data as well as real robotic ultrasound acquisitions on a tissue-mimicking phantom. The proposed method yields superior image quality in comparison with a naive planning approach, while maintaining the necessary coverage of the target. We demonstrate that, by taking image formation properties into account, acquisition planning methods can outperform naive planning approaches. Furthermore, we show the need for such planning techniques, since naive approaches are not sufficient as they do not take the expected image quality into account.

  14. Algorithms for computational fluid dynamics on parallel processors

    International Nuclear Information System (INIS)

    Van de Velde, E.F.

    1986-01-01

    A study of parallel algorithms for the numerical solution of partial differential equations arising in computational fluid dynamics is presented. The actual implementation on parallel processors of shared and nonshared memory design is discussed. The performance of these algorithms is analyzed in terms of machine efficiency, communication time, bottlenecks and software development costs. For elliptic equations, a parallel preconditioned conjugate gradient method is described, which has been used to solve pressure equations discretized with high-order finite elements on irregular grids. A parallel full multigrid method and a parallel fast Poisson solver are also presented. Hyperbolic conservation laws were discretized with parallel versions of finite difference methods like the Lax-Wendroff scheme and with the Random Choice method. Techniques are developed for comparing the behavior of an algorithm on different architectures as a function of problem size and local computational effort. Effective use of these advanced-architecture machines requires machine-dependent programming. It is shown that the portability problems can be minimized by introducing high-level operations on vectors and matrices structured into program libraries.

  15. Parallel multispot smFRET analysis using an 8-pixel SPAD array

    Science.gov (United States)

    Ingargiola, A.; Colyer, R. A.; Kim, D.; Panzeri, F.; Lin, R.; Gulinatti, A.; Rech, I.; Ghioni, M.; Weiss, S.; Michalet, X.

    2012-02-01

    Single-molecule Förster resonance energy transfer (smFRET) is a powerful tool for extracting distance information between two fluorophores (a donor and an acceptor dye) on a nanometer scale. This method is commonly used to monitor binding interactions or intra- and intermolecular conformations in biomolecules freely diffusing through a focal volume or immobilized on a surface. The diffusing geometry has the advantage of not interfering with the molecules and of giving access to fast time scales. However, separating photon bursts from individual molecules requires low sample concentrations. This results in long acquisition times (several minutes to an hour) to obtain sufficient statistics. It also prevents studying dynamic phenomena happening on time scales larger than the burst duration and smaller than the acquisition time. Parallelization of acquisition overcomes this limit by increasing the acquisition rate using the same low concentrations required for individual molecule burst identification. In this work we present a new two-color smFRET approach using multispot excitation and detection. The donor excitation pattern is composed of 4 spots arranged in a linear pattern. The fluorescence emission of donor and acceptor dyes is then collected and refocused on two separate areas of a custom 8-pixel SPAD array. We report smFRET measurements performed on DNA samples synthesized with various distances between the donor and acceptor fluorophores. We demonstrate that our approach provides FRET efficiency values identical to those of a conventional single-spot acquisition approach, but with a reduced acquisition time. Our work thus opens the way to high-throughput smFRET analysis on freely diffusing molecules.

  16. Limited angle tomographic breast imaging: A comparison of parallel beam and pinhole collimation

    International Nuclear Information System (INIS)

    Wessell, D.E.; Kadrmas, D.J.; Frey, E.C.

    1996-01-01

    Results from clinical trials have suggested no improvement in lesion detection with parallel hole SPECT scintimammography (SM) with Tc-99m over parallel hole planar SM. In this initial investigation, we have elucidated some of the unique requirements of SPECT SM. With these requirements in mind, we have begun to develop practical data acquisition and reconstruction strategies that can reduce image artifacts and improve image quality. In this paper we investigate limited angle orbits for both parallel hole and pinhole SPECT SM. Singular Value Decomposition (SVD) is used to analyze the artifacts associated with the limited angle orbits. Maximum likelihood expectation maximization (MLEM) reconstructions are then used to examine the effects of attenuation compensation on the quality of the reconstructed image. All simulations are performed using the 3D-MCAT breast phantom. The results of these simulation studies demonstrate that limited angle SPECT SM is feasible, that attenuation correction is needed for accurate reconstructions, and that pinhole SPECT SM may have an advantage over parallel hole SPECT SM in terms of improved image quality and reduced image artifacts.
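
    For reference, the MLEM reconstruction used above has a compact multiplicative update, lambda <- lambda * A^T(y / (A lambda)) / (A^T 1). The toy sketch below, with an arbitrary 3 x 2 system matrix, is our illustration and not the authors' code; attenuation compensation enters through the modeled system matrix A.

        import numpy as np

        def mlem(A, y, n_iter=200):
            lam = np.ones(A.shape[1])             # uniform initial image
            sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
            for _ in range(n_iter):
                proj = A @ lam                    # forward projection
                lam *= (A.T @ (y / proj)) / sens  # back-project the ratio
            return lam

        A = np.array([[0.8, 0.2],
                      [0.5, 0.5],
                      [0.1, 0.9]])
        truth = np.array([4.0, 1.0])
        y = A @ truth                             # noise-free measurements
        print(np.round(mlem(A, y), 3))            # approaches [4. 1.]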

  17. Monitoring and Acquisition Real-time System (MARS)

    Science.gov (United States)

    Holland, Corbin

    2013-01-01

    MARS is a graphical user interface (GUI) written in MATLAB and Java, allowing the user to configure and control the Scalable Parallel Architecture for Real-Time Acquisition and Analysis (SPARTAA) data acquisition system. SPARTAA not only acquires data, but also allows complex algorithms to be applied to the acquired data in real time. The MARS client allows the user to set up and configure all settings regarding the data channels attached to the system, as well as have complete control over starting and stopping data acquisition. It provides a unique "Test" programming environment, allowing the user to create tests consisting of a series of alarms, each of which contains any number of data channels. Each alarm is configured with a particular algorithm, determining the type of processing that will be applied to each data channel and tested against a defined threshold. Tests can be uploaded to SPARTAA, thereby teaching it how to process the data. The uniqueness of MARS lies in how easily it can be adapted to many test configurations. MARS sends and receives protocols via TCP/IP, which allows for quick integration into almost any test environment. The use of MATLAB and Java as the programming languages allows developers to integrate the software across multiple operating platforms.

  18. High-Speed Data Acquisition and Digital Signal Processing System for PET Imaging Techniques Applied to Mammography

    Science.gov (United States)

    Martinez, J. D.; Benlloch, J. M.; Cerda, J.; Lerche, Ch. W.; Pavon, N.; Sebastia, A.

    2004-06-01

    This paper is framed into the Positron Emission Mammography (PEM) project, whose aim is to develop an innovative gamma ray sensor for early breast cancer diagnosis. Currently, breast cancer is detected using low-energy X-ray screening. However, functional imaging techniques such as PET/FDG could be employed to detect breast cancer and track disease changes with greater sensitivity. Furthermore, a small and less expensive PET camera can be utilized, minimizing the main problems of whole-body PET. To accomplish these objectives, we are developing a new gamma ray sensor based on a newly released photodetector. However, a dedicated PEM detector requires an adequate data acquisition (DAQ) and processing system. The characterization of gamma events needs a free-running analog-to-digital converter (ADC) with sampling rates of more than 50 MS/s and must achieve event count rates up to 10 MHz. Moreover, comprehensive data processing must be carried out to obtain the event parameters necessary for performing the image reconstruction. A new-generation digital signal processor (DSP) has been used to comply with these requirements. This device enables us to manage the DAQ system at up to 80 MS/s and to execute intensive calculations on the detector signals. This paper describes our DAQ and processing architecture, whose main features are: very high-speed data conversion, multichannel synchronized acquisition with zero dead time, a digital triggering scheme, and high data throughput with an extensive optimization of the signal processing algorithms.

  19. Parallelization of a spherical S_N transport theory algorithm

    International Nuclear Information System (INIS)

    Haghighat, A.

    1989-01-01

    The work described in this paper derives a parallel algorithm for an R-dependent spherical S_N transport theory algorithm and studies its performance by testing different sample problems. The S_N transport method is one of the most accurate techniques used to solve the linear Boltzmann equation. Several studies have been done on the vectorization of S_N algorithms; however, very few studies have been performed on the parallelization of this algorithm. Wienke and Hiromoto have looked at the parallel processing of the different energy groups, and Azmy recently studied the parallel processing of the inner iterations of an X-Y S_N nodal transport theory method. Both studies have reported very encouraging results, which have prompted us to look at the parallel processing of an R-dependent S_N spherical geometry algorithm. This geometry was chosen because, in spite of its simplicity, it contains the complications of the curvilinear geometries (i.e., redistribution of neutrons over the discretized angular bins).

  20. Soudan 2 data acquisition and trigger electronics

    International Nuclear Information System (INIS)

    Dawson, J.; Laird, R.; May, E.; Mondal, N.; Schlereth, J.; Solomey, N.; Thron, J.; Heppelmann, S.

    1985-01-01

    The 1.1 kton Soudan 2 detector is read out by 16K anode wires and 32K cathode strips. Preamps from each wire or strip are bussed together in groups of 8 to reduce the number of ADC channels. The resulting 6144 channels of ionization signal are flash-digitized every 150 ns and stored in RAM. The raw data hit patterns are continually compared with programmable trigger multiplicity and adjacency conditions. The data acquisition process is managed by a system of 24 parallel crates, each containing an Intel 8086 microprocessor which supervises a pipe-lined data compactor and allows transfer of the compacted data via CAMAC to the host computer. The 8086's also manage the local trigger conditions and can perform some parallel processing of the data. Due to the scale of the system and the multiplicity of identical channels, semi-custom gate array chips are used for much of the logic, utilizing 2.5 micron CMOS technology.

  1. Soudan 2 data acquisition and trigger electronics

    International Nuclear Information System (INIS)

    Dawson, J.; Haberichter, W.; Laird, R.

    1985-01-01

    The 1.1 kton Soudan 2 calorimetric drift-chamber detector is read out by 16K anode wires and 32K cathode strips. Preamps from each wire or strip are bussed together in groups of 8 to reduce the number of ADC channels. The resulting 6144 channels of ionization signal are flash-digitized every 200 ns and stored in RAM. The raw data hit patterns are continually compared with programmable trigger multiplicity and adjacency conditions. The data acquisition process is managed by a system of 24 parallel crates, each containing an Intel 80C86 microprocessor which supervises a pipe-lined data compactor and allows transfer of the compacted data via CAMAC to the host computer. The 80C86's also manage the local trigger conditions and can perform some parallel processing of the data. Due to the scale of the system and the multiplicity of identical channels, semi-custom gate array chips are used for much of the logic, utilizing 2.5 micron CMOS technology.

  2. Soudan 2 data acquisition and trigger electronics

    International Nuclear Information System (INIS)

    Dawson, J.; Heppelmann, S.; Laird, R.; May, E.; Mondal, N.; Schlereth, J.; Solomey, N.; Thron, J.

    1985-01-01

    The 1.1 kton Soudan 2 detector is read out by 16K anode wires and 32K cathode strips. Preamps from each wire or strip are bussed together in groups of 8 to reduce the number of ADC channels. The resulting 6144 channels of ionization signal are flash-digitized every 150 ns and stored in RAM. The raw data hit patterns are continually compared with programmable trigger multiplicity and adjacency conditions. The data acquisition process is managed by a system of 24 parallel crates, each containing an Intel 8086 microprocessor which supervises a pipe-lined data compactor and allows transfer of the compacted data via CAMAC to the host computer. The 8086's also manage the local trigger conditions and can perform some parallel processing of the data. Due to the scale of the system and the multiplicity of identical channels, semi-custom gate array chips are used for much of the logic, utilizing 2.5 micron CMOS technology.

  3. A multi-parameter acquisition system positron annihilation lifetime spectrometer

    International Nuclear Information System (INIS)

    Sharshar, T.

    2004-01-01

    A positron annihilation lifetime spectrometer employing a multi-parameter acquisition system has been prepared for various purposes, such as the investigation and characterization of solid-state materials. The fast-fast coincidence technique was used in the present spectrometer with a pair of plastic scintillation detectors. The acquisition system is based on the Kmax software and on CAMAC modules, and the data are acquired in event-by-event list mode. The time spectrum for the desired energy windows can then be obtained by off-line data sorting and analysis. Event-by-event data acquisition is also an important step toward constructing a positron age-momentum correlation (AMOC) spectrometer; the AMOC technique is especially suited for observing positron transitions between different states during their lifetime. The system performance was tested, and the results are presented and discussed.

  4. Non-contrast-enhanced hepatic MR angiography: Do two-dimensional parallel imaging and short tau inversion recovery methods shorten acquisition time without image quality deterioration?

    Energy Technology Data Exchange (ETDEWEB)

    Shimada, Kotaro, E-mail: kotaro@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Isoda, Hiroyoshi, E-mail: sayuki@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Okada, Tomohisa, E-mail: tomokada@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Kamae, Toshikazu, E-mail: toshi13@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Arizono, Shigeki, E-mail: arizono@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Hirokawa, Yuusuke, E-mail: yuusuke@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Shibata, Toshiya, E-mail: ksj@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Togashi, Kaori, E-mail: ktogashi@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan)

    2011-01-15

    Objective: To study whether shortening the acquisition time for selective hepatic artery visualization is feasible without image quality deterioration by adopting two-dimensional (2D) parallel imaging (PI) and short tau inversion recovery (STIR) methods. Materials and methods: Twenty-four healthy volunteers were enrolled. 3D true steady-state free-precession imaging with a time spatial labeling inversion pulse was conducted using 1D or 2D-PI and fat suppression by chemical shift selective (CHESS) or STIR methods. Three groups of different scan conditions were assigned and compared: group A (1D-PI factor 2 and CHESS), group B (2D-PI factor 2 x 2 and CHESS), and group C (2D-PI factor 2 x 2 and STIR). The artery-to-liver contrast was quantified, and the quality of artery visualization and overall image quality were scored. Results: The mean scan time was 9.5 ± 1.0 min (mean ± standard deviation), 5.9 ± 0.8 min, and 5.8 ± 0.5 min in groups A, B, and C, respectively, and was significantly shorter in groups B and C than in group A (P < 0.01). The artery-to-liver contrast was significantly better in group C than in groups A and B (P < 0.01). The scores for artery visualization and overall image quality were worse in group B than in groups A and C. The differences were statistically significant (P < 0.05) regarding the arterial branches of segments 4 and 8. Between group A and group C, which had similar scores, there were no statistically significant differences. Conclusion: Shortening the acquisition time for selective hepatic artery visualization was feasible without deterioration of the image quality by the combination of 2D-PI and STIR methods. It will facilitate using non-contrast-enhanced MRA in clinical practice.

  5. Non-contrast-enhanced hepatic MR angiography: Do two-dimensional parallel imaging and short tau inversion recovery methods shorten acquisition time without image quality deterioration?

    International Nuclear Information System (INIS)

    Shimada, Kotaro; Isoda, Hiroyoshi; Okada, Tomohisa; Kamae, Toshikazu; Arizono, Shigeki; Hirokawa, Yuusuke; Shibata, Toshiya; Togashi, Kaori

    2011-01-01

    Objective: To study whether shortening the acquisition time for selective hepatic artery visualization is feasible without image quality deterioration by adopting two-dimensional (2D) parallel imaging (PI) and short tau inversion recovery (STIR) methods. Materials and methods: Twenty-four healthy volunteers were enrolled. 3D true steady-state free-precession imaging with a time spatial labeling inversion pulse was conducted using 1D or 2D-PI and fat suppression by chemical shift selective (CHESS) or STIR methods. Three groups of different scan conditions were assigned and compared: group A (1D-PI factor 2 and CHESS), group B (2D-PI factor 2 x 2 and CHESS), and group C (2D-PI factor 2 x 2 and STIR). The artery-to-liver contrast was quantified, and the quality of artery visualization and overall image quality were scored. Results: The mean scan time was 9.5 ± 1.0 min (mean ± standard deviation), 5.9 ± 0.8 min, and 5.8 ± 0.5 min in groups A, B, and C, respectively, and was significantly shorter in groups B and C than in group A (P < 0.01). The artery-to-liver contrast was significantly better in group C than in groups A and B (P < 0.01). The scores for artery visualization and overall image quality were worse in group B than in groups A and C. The differences were statistically significant (P < 0.05) regarding the arterial branches of segments 4 and 8. Between group A and group C, which had similar scores, there were no statistically significant differences. Conclusion: Shortening the acquisition time for selective hepatic artery visualization was feasible without deterioration of the image quality by the combination of 2D-PI and STIR methods. It will facilitate using non-contrast-enhanced MRA in clinical practice.

  6. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel and at higher throughput, and it allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials, including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  7. Simulation and modeling of data acquisition systems for future high energy physics experiments

    International Nuclear Information System (INIS)

    Booth, A.; Black, D.; Walsh, D.; Bowden, M.; Barsotti, E.

    1990-01-01

    With the ever-increasing complexity of detectors and their associated data acquisition (DAQ) systems, it is important to bring together a set of tools that enable system designers, both hardware and software, to understand the behavioral aspects of the system as a whole, as well as the interaction between different functional units within the system. For complex systems, human intuition is inadequate, since there are simply too many variables for system designers to begin to predict how varying any subset of them affects the total system. On the other hand, exact analysis, even to the extent of investing in disposable hardware prototypes, is much too time consuming and costly. Simulation bridges the gap between physical intuition and exact analysis by providing a learning vehicle in which the effects of varying many parameters can be analyzed and understood. Simulation techniques are being used in the development of the Scalable Parallel Open Architecture Data Acquisition System at Fermilab. This paper describes the work undertaken at Fermilab in which several sophisticated tools have been brought together to provide an integrated systems engineering environment specifically aimed at designing DAQ systems. Also presented are results of simulation experiments in which the effects of varying trigger rates, event sizes, and event distribution over processors are clearly seen in terms of throughput and buffer usage in an event-building switch.
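
    The kind of question such simulations answer can be illustrated with a toy discrete-event model of one event-building buffer: events arrive at the trigger rate with random sizes and are drained at a fixed link rate, and the quantity of interest is the peak buffer occupancy. This is a deliberately simplified sketch with made-up parameters, not Fermilab's tool chain.

        import random

        random.seed(1)
        TRIGGER_RATE = 1e4         # events/s arriving at the switch
        DRAIN_RATE = 120e6         # bytes/s drained toward the processors
        MEAN_EVENT = 10_000        # mean event fragment size in bytes

        t, arrivals = 0.0, []
        for _ in range(20_000):    # Poisson arrival times
            t += random.expovariate(TRIGGER_RATE)
            arrivals.append(t)

        buffered, last, peak = 0.0, 0.0, 0.0
        for arrival in arrivals:
            # drain during the inter-arrival gap, then add the new event
            buffered = max(0.0, buffered - DRAIN_RATE * (arrival - last))
            buffered += random.expovariate(1.0 / MEAN_EVENT)
            peak = max(peak, buffered)
            last = arrival
        print(f"peak buffer occupancy ~ {peak / 1e6:.2f} MB")

    Rerunning with a higher trigger rate or larger events immediately shows where buffering becomes the bottleneck, which is the type of trade-off the paper's integrated environment explores at full system scale.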

  8. Diagnostic accuracy of dynamic contrast-enhanced MR imaging of renal masses with rapid-acquisition spin-echo technique

    International Nuclear Information System (INIS)

    Eilenberg, S.S.; Lee, J.K.T.; Brown, J.J.; Heiken, J.P.; Mirowitz, S.A.

    1990-01-01

    This paper compares the diagnostic accuracy of Gd-DTPA-enhanced rapid-acquisition spin-echo (RASE) imaging with standard spin-echo techniques for detecting renal cysts and solid renal neoplasms. RASE imaging combines a short TR (275 msec)/short TE (10 msec), single-excitation pulse sequence with half-Fourier data sampling. Eighteen patients with CT evidence of renal masses were first evaluated with standard T1- and T2-weighted SE sequences. Pre- and serial postcontrast (Gd-DTPA, 0.1 mmol/kg) RASE sequences were then performed during suspended respiration. A final set of postcontrast images was obtained with the standard T1-weighted SE sequence. Each set of MR images was first reviewed separately (i.e., T1, T2, pre- and postcontrast RASE, etc.)

  9. A novel sorting algorithm and its application to a gamma-ray telescope asynchronous data acquisition system

    International Nuclear Information System (INIS)

    Colavita, A.; Capello, G.

    1997-01-01

    In this paper we present a novel parallel sorting algorithm, which works through a cascade of elementary sorting units and leads to a scalable architecture. The algorithm's complexity is analyzed and compared with a classical parallel algorithm. It turns out that, although it may be less efficient than classical approaches, the proposed algorithm is highly suited to VLSI implementation because of its simplicity and scalability. The paper describes the application of such a device to asynchronous data acquisition for a gamma-ray telescope. (orig.)
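
    The paper's elementary sorting units are custom hardware, but the underlying idea, a cascade of identical compare-exchange cells with purely local connections, can be sketched in software as an odd-even transposition network. This analogue is ours, not the authors' algorithm; the point is that every unit within a phase is independent of the others, which is what makes the structure scalable and VLSI-friendly.

        def compare_exchange(a, i, j):
            if a[i] > a[j]:
                a[i], a[j] = a[j], a[i]

        def odd_even_transposition_sort(a):
            n = len(a)
            for phase in range(n):                 # n phases always suffice
                start = phase % 2                  # alternate odd/even pairs
                for i in range(start, n - 1, 2):   # units in a phase are independent
                    compare_exchange(a, i, i + 1)
            return a

        print(odd_even_transposition_sort([7, 3, 9, 1, 4, 8, 2]))
        # [1, 2, 3, 4, 7, 8, 9]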

  10. Parallelization and checkpointing of GPU applications through program transformation

    Energy Technology Data Exchange (ETDEWEB)

    Solano-Quinde, Lizandro Damian [Iowa State Univ., Ames, IA (United States)

    2012-01-01

    GPUs have emerged as a powerful tool for accelerating general-purpose applications, and the availability of programming languages that make writing such applications tractable has consolidated their role. Among the areas that have benefited from GPU acceleration are signal and image processing, computational fluid dynamics, quantum chemistry, and, in general, the High Performance Computing (HPC) industry. In order to continue to exploit higher levels of parallelism with GPUs, multi-GPU systems are gaining popularity. In this context, single-GPU applications are parallelized for running on multi-GPU systems; furthermore, multi-GPU systems help to overcome the GPU memory limitation for applications with a large memory footprint. Parallelizing single-GPU applications has been approached with libraries that distribute the workload at runtime; however, these impose execution overhead and are not portable. On traditional CPU systems, by contrast, parallelization has been approached through application transformation at pre-compile time, which enhances the application to distribute the workload at the application level and does not have the issues of library-based approaches. Hence, a parallelization scheme for GPU systems based on application transformation is needed. Like any computing engine of today, reliability is also a concern in GPUs, which are vulnerable to transient and permanent failures, and current checkpoint/restart techniques are not suitable for systems with GPUs. Checkpointing for GPU systems presents new and interesting challenges, primarily due to the natural differences imposed by the hardware design, the memory subsystem architecture, the massive number of threads, and the limited amount of synchronization among threads. Therefore, a checkpoint/restart technique suitable for GPU systems is needed. The goal of this work is to exploit higher levels of parallelism and to provide reliability, by parallelizing and checkpointing GPU applications through program transformation.

  11. A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.

    Science.gov (United States)

    Catarinucci, Luca; Tarricone, Luciano

    2009-01-01

    The finite difference time domain method (FDTD) is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution is adopted, so that strong memory and central processing unit power requirements have to be satisfied. To better handle the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.

  12. Parallel finite elements with domain decomposition and its pre-processing

    International Nuclear Information System (INIS)

    Yoshida, A.; Yagawa, G.; Hamada, S.

    1993-01-01

    This paper describes a parallel finite element analysis using a domain decomposition method, and the pre-processing for the parallel calculation. Computer simulations are about to replace experiments in various fields, and the scale of the models to be simulated tends to be extremely large. On the other hand, the computational environment has changed drastically in recent years. In particular, parallel processing on massively parallel computers or computer networks is considered a promising technique. In order to achieve high efficiency in such a parallel computation environment, large task granularity and a well-balanced workload distribution are key issues. It is also important to reduce the cost of pre-processing in such parallel FEM. From this point of view, the authors developed a domain decomposition FEM with an automatic and dynamic task-allocation mechanism and an automatic mesh generation/domain subdivision system. (author)

  13. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    Science.gov (United States)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  14. The acknowledge project: toward improved efficiency in the knowledge acquisition process

    International Nuclear Information System (INIS)

    Marty, J.C.; Ramparany, F.

    1990-01-01

    This paper presents a general overview of the ACKnowledge project (Acquisition of Knowledge). Knowledge acquisition is a critical and time-consuming phase in the development of expert systems. The ACKnowledge project aims at improving the efficiency of knowledge acquisition by analyzing and evaluating knowledge acquisition techniques, and by developing a Knowledge Engineering Workbench that supports the knowledge engineer from the early stages of knowledge acquisition up to the implementation of the knowledge base in large and complex application domains such as the diagnosis of dynamic computer networks.

  15. On the Performance of the Python Programming Language for Serial and Parallel Scientific Computations

    Directory of Open Access Journals (Sweden)

    Xing Cai

    2005-01-01

    This article addresses the performance of scientific applications that use the Python programming language. First, we investigate several techniques for improving the computational efficiency of serial Python codes. Then, we discuss the basic programming techniques in Python for parallelizing serial scientific applications. It is shown that an efficient implementation of the array-related operations is essential for achieving good parallel performance, as for the serial case. Once the array-related operations are efficiently implemented, probably using a mixed-language implementation, good serial and parallel performance become achievable. This is confirmed by a set of numerical experiments. Python is also shown to be well suited for writing high-level parallel programs.
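
    The article's central point about array operations is easy to reproduce. The self-contained comparison below times the same element-wise expression as a pure-Python loop and as a vectorized NumPy expression; the vectorized form is both much faster serially and the natural unit to distribute in a parallel run. Exact timings will vary by machine.

        import time
        import numpy as np

        n = 2_000_000
        x = np.random.rand(n)

        t0 = time.perf_counter()
        y_loop = [2.0 * v + 1.0 for v in x]    # pure-Python element loop
        t1 = time.perf_counter()
        y_vec = 2.0 * x + 1.0                  # vectorized array expression
        t2 = time.perf_counter()

        print(f"loop: {t1 - t0:.3f} s   vectorized: {t2 - t1:.3f} s")
        print(np.allclose(y_loop, y_vec))      # True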

  16. Analysis of the SIAM Infrared Acquisition System

    Energy Technology Data Exchange (ETDEWEB)

    Varnado, S.G.

    1974-02-01

    This report describes and presents the results of an analysis of the performance of the infrared acquisition system for a Self-Initiated Antiaircraft Missile (SIAM). A description of the optical system is included, and models of target radiant intensity, atmospheric transmission, and background radiance are given. Acquisition probabilities are expressed in terms of the system signal-to-noise ratio. System performance against aircraft and helicopter targets is analyzed, and background discrimination techniques are discussed. 17 refs., 22 figs., 6 tabs.

  17. Five channel data acquisition system for tracer studies

    International Nuclear Information System (INIS)

    Narender Reddy, J.; Dhananjay Reddy, Y.; Dheeraj Reddy, J.

    2001-01-01

    Radioactive tracers are being used by many modern industries for troubleshooting, process control/quality control, and optimization in process plants. A five-channel data acquisition system, with five independent scintillation-detector-based channels, has been developed and made available. This system can be used for tracer studies involving mean residence time, residence time distribution, and other similar parameters of tracer movement. The system can acquire data with dwell times ranging from 10 ms to 100 s into each channel and has a capacity to acquire data into 10K channels. Each channel's electronics has a 1 x 1 NaI scintillation detector probe, HV supply, amplifier and SCA, and a microcontroller-based data acquisition card with an independent dot-matrix LCD display for visualization. Extensive use of serial-bus-compatible devices (I2C, Microwire) has been incorporated in the design. Data acquisition is initiated simultaneously in all channels. The system design permits selectively delayed or prompt data acquisition. A dual-counter switching technique has been employed to achieve faster dwell times for data acquisition. (author)

  18. Improved detection and mapping of deepwater hydrocarbon seeps: optimizing multibeam echosounder seafloor backscatter acquisition and processing techniques

    Science.gov (United States)

    Mitchell, Garrett A.; Orange, Daniel L.; Gharib, Jamshid J.; Kennedy, Paul

    2018-02-01

    Marine seep hunting surveys are a current focus of hydrocarbon exploration due to recent advances in offshore geophysical surveying, geochemical sampling, and analytical technologies. Hydrocarbon seeps are ephemeral, small, discrete, and therefore difficult to sample on the deep seafloor. Multibeam echosounders are an efficient seafloor exploration tool to remotely locate and map seep features. Geophysical signatures from hydrocarbon seeps are acoustically evident in bathymetric, seafloor backscatter, and midwater backscatter datasets. Interpretation of these signatures in backscatter datasets is a fundamental component of commercial seep hunting campaigns. Degradation of backscatter datasets resulting from environmental, geometric, and system noise can interfere with the detection and delineation of seeps. We present a relative backscatter intensity normalization method and an oversampling acquisition technique that can improve the geological resolvability of hydrocarbon seeps. We use Green Canyon (GC) Block 600 in the Northern Gulf of Mexico as a seep calibration site for a Kongsberg EM302 30 kHz MBES prior to the start of the Gigante seep hunting program to analyze these techniques. At GC600, we evaluate the results of a backscatter intensity normalization, assess the effectiveness of 2X seafloor coverage in resolving seep-related features in backscatter data, and determine the off-nadir detection limits of bubble plumes using the EM302. Incorporating these techniques into seep hunting surveys can improve the detectability and sampling of seafloor seeps.
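
    The abstract does not spell out the normalization, but the general idea of a relative angular-response correction can be sketched as follows: estimate the mean backscatter level per incidence-angle bin and subtract it, so that residual anomalies such as seep-related features stand out. All numbers below are synthetic, and this is our generic illustration rather than the authors' exact method.

        import numpy as np

        rng = np.random.default_rng(0)
        angles = rng.uniform(-60, 60, 5000)           # beam incidence angles, deg
        bs = -25.0 - 0.05 * np.abs(angles) + rng.normal(0, 1, 5000)  # dB
        bs[:50] += 12.0                               # a strong seep-like anomaly

        # subtract the mean response in 2-degree angle bins
        bins = np.digitize(angles, np.arange(-60, 61, 2))
        mean_per_bin = {b: bs[bins == b].mean() for b in np.unique(bins)}
        normalized = bs - np.array([mean_per_bin[b] for b in bins])

        # the anomaly survives normalization, the angular falloff does not
        print(normalized[:50].mean() > 5 * normalized[50:].std())  # True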

  19. Improved detection and mapping of deepwater hydrocarbon seeps: optimizing multibeam echosounder seafloor backscatter acquisition and processing techniques

    Science.gov (United States)

    Mitchell, Garrett A.; Orange, Daniel L.; Gharib, Jamshid J.; Kennedy, Paul

    2018-06-01

    Marine seep hunting surveys are a current focus of hydrocarbon exploration due to recent advances in offshore geophysical surveying, geochemical sampling, and analytical technologies. Hydrocarbon seeps are ephemeral, small, discrete, and therefore difficult to sample on the deep seafloor. Multibeam echosounders are an efficient seafloor exploration tool to remotely locate and map seep features. Geophysical signatures from hydrocarbon seeps are acoustically evident in bathymetric, seafloor backscatter, and midwater backscatter datasets. Interpretation of these signatures in backscatter datasets is a fundamental component of commercial seep hunting campaigns. Degradation of backscatter datasets resulting from environmental, geometric, and system noise can interfere with the detection and delineation of seeps. We present a relative backscatter intensity normalization method and an oversampling acquisition technique that can improve the geological resolvability of hydrocarbon seeps. We use Green Canyon (GC) Block 600 in the Northern Gulf of Mexico as a seep calibration site for a Kongsberg EM302 30 kHz MBES prior to the start of the Gigante seep hunting program to analyze these techniques. At GC600, we evaluate the results of a backscatter intensity normalization, assess the effectiveness of 2X seafloor coverage in resolving seep-related features in backscatter data, and determine the off-nadir detection limits of bubble plumes using the EM302. Incorporating these techniques into seep hunting surveys can improve the detectability and sampling of seafloor seeps.

  20. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

    In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphic processor units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment by receiving images from multiple acquisition devices at video frequency. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must satisfy a set of requirements: time constraints, high availability, robustness, high processing speed, and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  1. High speed, locally controlled data acquisition system for TFTR

    International Nuclear Information System (INIS)

    Feng, H.K.; Bradish, G.J.

    1983-01-01

    A high speed, locally controlled, data acquisition and transmission system has been developed by the CICADA (Central Instrumentation Control and Data Acquisition) Group for extracting certain time-critical data during a TFTR pulse and passing it to the control room, 1000 feet distant, to satisfy real-time requirements of frequently sampled variables. The system is designed to utilize any or all of the standard CAMAC (Computer Automated Measurement and Control) modules now employed on the CAMAC links for retrieval of the main body of data, but to operate them in a much faster manner than in a standard CAMAC system. To do this, a pre-programmable ROM sequencer is employed as a controller to transmit commands to the modules at intervals down to one microsecond, replacing the usual CAMAC dedicated computer and increasing the command rate by an order of magnitude over what could be sent down a Branch Highway. Data coming from any number of channels originating within a single CAMAC "crate" is then time-multiplexed and transmitted over a single conductor pair in bi-phase at a 2.5 MHz bit rate using Manchester coding techniques. Benefits gained from this approach include: reduction in the number of conductors required, elimination of line-to-line skew found in parallel transmission systems, and the capability of being transformer coupled or transmitted over a fiber optic cable to avoid safety hazards and ground loops. The main application for this system so far has been as the feedback path in the closed-loop control of currents through the tokamak's field coils. The paper will treat the system's various applications.
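
    As background for the transmission scheme above, Manchester (bi-phase) coding maps every bit to a mid-bit transition, so the line signal carries its own clock and has no DC component, which is what permits transformer or fiber-optic coupling. A minimal sketch, using the IEEE 802.3 convention (0 as a high-to-low half-bit pair, 1 as low-to-high):

        def manchester_encode(bits):
            """Encode bits as pairs of half-bit line levels (IEEE 802.3)."""
            out = []
            for b in bits:
                out.extend([1, 0] if b == 0 else [0, 1])
            return out

        print(manchester_encode([1, 0, 1, 1]))  # [0, 1, 1, 0, 0, 1, 0, 1]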

  2. 48 CFR 852.273-71 - Alternative negotiation techniques.

    Science.gov (United States)

    2010-10-01

    48 CFR 852.273-71 (2010 ed.): Alternative negotiation techniques. Federal Acquisition Regulations System, Department of Veterans Affairs, Clauses and Forms, Solicitation Provisions and Contract Clauses, Texts of Provisions and Clauses.

  3. Feed-forward volume rendering algorithm for moderately parallel MIMD machines

    Science.gov (United States)

    Yagel, Roni

    1993-01-01

    Algorithms for direct volume rendering on parallel and vector processors are investigated. Volumes are transformed efficiently on parallel processors by dividing the data into slices and beams of voxels. Equal sized sets of slices along one axis are distributed to processors. Parallelism is achieved at two levels. Because each slice can be transformed independently of others, processors transform their assigned slices with no communication, thus providing maximum possible parallelism at the first level. Within each slice, consecutive beams are incrementally transformed using coherency in the transformation computation. Also, coherency across slices can be exploited to further enhance performance. This coherency yields the second level of parallelism through the use of the vector processing or pipelining. Other ongoing efforts include investigations into image reconstruction techniques, load balancing strategies, and improving performance.

  4. Parallel processing approach to transform-based image coding

    Science.gov (United States)

    Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.

    1991-06-01

    This paper describes a flexible parallel processing architecture designed for use in real-time video processing. The system consists of floating-point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple-bus architecture in combination with a dual-ported memory allows communication with a host control processor. The system has been applied to prototyping of video compression and decompression algorithms. The decomposition of transform-based algorithms for decompression into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed, and results from the application of one such modification are described.

  5. A language for data-parallel and task parallel programming dedicated to multi-SIMD computers. Contributions to hydrodynamic simulation with lattice gases

    International Nuclear Information System (INIS)

    Pic, Marc Michel

    1995-01-01

    Parallel programming covers task-parallelism and data-parallelism, and many problems need both. Multi-SIMD computers allow a hierarchical approach to these parallelisms. The T++ language, based on C++, is dedicated to exploiting Multi-SIMD computers using a programming paradigm which extends array programming to task management. Our language introduces arrays of independent tasks executed separately (MIMD) on subsets of processors with identical behaviour (SIMD), in order to express the hierarchical inclusion of data-parallelism in task-parallelism. To manipulate tasks and data symmetrically, we propose meta-operations which behave identically on task arrays and on data arrays. We explain how to implement this language on our parallel computer SYMPHONIE in order to profit from the locally shared memory, the hardware virtualization, and the multiple communication networks. We also analyse a typical application of such an architecture. Finite element schemes for fluid mechanics need powerful parallel computers and extensive floating-point capability. Lattice gases are an alternative to such simulations. Boolean lattice gases are simple, stable, and modular, and need no floating-point computation, but suffer from numerical noise. Boltzmann lattice gases offer high numerical precision, but need floating-point computation and are only locally stable. We propose a new scheme, called multi-bit, which keeps the advantages of each Boolean model to which it is applied, with high numerical precision and reduced noise. Experiments on viscosity, physical behaviour, noise reduction, and spurious invariants are shown, and implementation techniques for parallel Multi-SIMD computers are detailed. (author)

  6. On Shaft Data Acquisition System (OSDAS)

    Science.gov (United States)

    Pedings, Marc; DeHart, Shawn; Formby, Jason; Naumann, Charles

    2012-01-01

    On Shaft Data Acquisition System (OSDAS) is a rugged, compact, multiple-channel data acquisition computer system that is designed to record data from instrumentation while operating under extreme rotational centrifugal or gravitational acceleration forces. This system, which was developed for the Heritage Fuel Air Turbine Test (HFATT) program, addresses the problem of recording multiple channels of high-sample-rate data on almost any rotating test article by mounting the entire acquisition computer onboard with the turbine test article. With the limited availability of slip ring wires for power and communication, OSDAS utilizes its own resources to provide independent power and amplification for each instrument. Since OSDAS utilizes standard PC technology as well as shared code interfaces with the next-generation, real-time health monitoring system (SPARTAA, Scalable Parallel Architecture for Real Time Analysis and Acquisition), this system could be expanded beyond its current capabilities, such as providing advanced health monitoring capabilities for the test article. High-conductor-count slip rings are expensive to purchase and maintain, yet only provide a limited number of conductors for routing instrumentation off the article and to a stationary data acquisition system. In addition to being limited to a small number of instruments, slip rings are prone to wear quickly, and introduce noise and other undesirable characteristics to the signal data. This led to the development of a system capable of recording high-density instrumentation, at high sample rates, on the test article itself, all while under extreme rotational stress. OSDAS is a fully functional PC-based system with 48 phase-synchronized, 24-bit, high-sample-rate input channels and an onboard storage capacity of over 1/2-terabyte of solid-state storage. This recording system takes a novel approach to the problem of recording multiple channels of instrumentation, integrated with the test article.

  7. Amplitudes, acquisition and imaging

    Energy Technology Data Exchange (ETDEWEB)

    Bloor, Robert

    1998-12-31

    Accurate seismic amplitude information is important for the successful evaluation of many prospects, and the importance of such amplitude information is increasing with the advent of time-lapse seismic techniques. It is now widely accepted that the proper treatment of amplitudes requires seismic imaging in the form of either time or depth migration. A key factor in seismic imaging is the spatial sampling of the data and its relationship to the imaging algorithms. This presentation demonstrates that acquisition-caused spatial sampling irregularity can affect the seismic imaging and perturb amplitudes. Equalization helps to balance the amplitudes, and the dealiasing strategy improves the imaging further when there are azimuth variations. Equalization and dealiasing can also help with the acquisition irregularities caused by shot and receiver dislocation or missing traces. 2 refs., 2 figs.

  8. Applications of parallel computer architectures to the real-time simulation of nuclear power systems

    International Nuclear Information System (INIS)

    Doster, J.M.; Sills, E.D.

    1988-01-01

    In this paper the authors report on efforts to utilize parallel computer architectures for the thermal-hydraulic simulation of nuclear power systems, and on current research toward the development of advanced reactor operator aids and control systems based on this new technology. Many aspects of reactor thermal-hydraulic calculations are inherently parallel, and the computationally intensive portions of these calculations can be effectively implemented on modern computers. Timing studies indicate that faster-than-real-time, high-fidelity physics models can be developed when the computational algorithms are designed to take advantage of the computer's architecture. These capabilities allow for the development of novel control systems and advanced reactor operator aids. Coupled with an integral real-time data acquisition system, evolving parallel computer architectures can provide operators and control room designers improved control and protection capabilities. Research efforts are currently under way in this area.

  9. 8-Channel acquisition system for Time-Correlated Single-Photon Counting.

    Science.gov (United States)

    Antonioli, S; Miari, L; Cuccato, A; Crotti, M; Rech, I; Ghioni, M

    2013-06-01

    Nowadays, an increasing number of applications require high-performance analytical instruments capable of detecting the temporal trend of weak and fast light signals with picosecond time resolution. The Time-Correlated Single-Photon Counting (TCSPC) technique is currently one of the preferred solutions when such critical optical signals have to be analyzed, and it is fully exploited in biomedical and chemical research fields, as well as in security and space applications. Recent progress in the field of single-photon detector arrays is pushing research towards the development of high-performance multichannel TCSPC systems, opening the way to modern time-resolved multi-dimensional optical analysis. In this paper we describe a new 8-channel high-performance TCSPC acquisition system, designed to be compact and versatile, to be used in modern TCSPC measurement setups. We designed a novel integrated circuit including a multichannel Time-to-Amplitude Converter with variable full-scale range, a D/A converter, and a parallel adder stage. The latter is used to adapt each converter output to the input dynamic range of a commercial 8-channel Analog-to-Digital Converter, while the integrated DAC implements the dithering technique with the smallest possible area occupation. The use of this monolithic circuit made the design of a scalable system of very small dimensions (95 × 40 mm) and low power consumption (6 W) possible. Data acquired from the TCSPC measurement are digitally processed and stored inside an FPGA (Field-Programmable Gate Array), while a USB transceiver allows real-time transmission of up to eight TCSPC histograms to a remote PC. Finally, the experimental results demonstrate that the acquisition system performs TCSPC measurements with high conversion rate (up to 5 MHz/channel), extremely low differential nonlinearity (<0.04 peak-to-peak of the time bin width), high time resolution (down to 20 ps Full-Width Half-Maximum), and very low crosstalk between channels.
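
    The dithering mentioned above can be demonstrated in a few lines: a known, pseudo-random offset is added before quantization and subtracted digitally afterwards, so each time value is spread over many converter codes and per-code width errors average out. The simulation below is our generic illustration of the principle, with made-up converter nonuniformity, not the published circuit.

        import numpy as np

        rng = np.random.default_rng(42)
        N_CODES = 256
        widths = 1.0 + 0.3 * rng.uniform(-1, 1, N_CODES)  # nonuniform code widths
        edges = np.concatenate([[0.0], np.cumsum(widths)])
        full_scale = edges[-1]

        def adc(v):
            """Quantize with the deliberately nonuniform edges above."""
            return np.clip(np.digitize(v, edges) - 1, 0, N_CODES - 1)

        events = rng.uniform(0.25, 0.75, 400_000) * full_scale  # flat input

        plain = np.bincount(adc(events), minlength=N_CODES)

        dither = rng.integers(0, 32, events.size)               # known DAC steps
        coded = adc(events + dither * (full_scale / N_CODES))   # add before ADC
        corrected = np.clip(coded - dither, 0, N_CODES - 1)     # subtract after
        dithered = np.bincount(corrected.astype(int), minlength=N_CODES)

        mid = slice(80, 176)  # compare away from the range edges
        print(plain[mid].std() / plain[mid].mean())        # ~0.17: strong DNL
        print(dithered[mid].std() / dithered[mid].mean())  # much flatter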

  10. Online measurement for geometrical parameters of wheel set based on structure light and CUDA parallel processing

    Science.gov (United States)

    Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie

    2018-01-01

    The wearing degree of the wheel set tread is one of the main factors that influence the safety and stability of a running train. The geometrical parameters of interest are mainly flange thickness and flange height. Line-structured laser light is projected onto the wheel tread surface, and the geometrical parameters are deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit, with image acquisition handled in hardware-interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA is proposed: the algorithm first divides the image into smaller squares and then extracts the squares belonging to the target by fusing the k-means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms, a considerable acceleration ratio compared with serial CPU calculation, which greatly improves the real-time image processing capacity. When a wheel set passes at limited speed, the system, installed alongside the railway line, measures the geometrical parameters automatically. The maximum measuring speed is 120 km/h.

  11. 1H Spectroscopic Imaging of Human Brain at 3T: Comparison of Fast 3D-MRSI Techniques

    Science.gov (United States)

    Zierhut, Matthew L.; Ozturk-Isik, Esin; Chen, Albert P.; Park, Ilwoo; Vigneron, Daniel B.; Nelson, Sarah J.

    2011-01-01

    Purpose: To investigate the signal-to-noise ratio (SNR) and data quality of time-reduced 1H 3D-MRSI techniques in the human brain at 3T. Materials and Methods: Techniques that were investigated included ellipsoidal k-space sampling, parallel imaging, and EPSI. The SNR values for NAA, Cho, Cre, and lactate or lipid peaks were compared after correcting for effective spatial resolution and acquisition time in a phantom and in the brains of human volunteers. Other factors considered were linewidths, metabolite ratios, partial volume effects, and subcutaneous lipid contamination. Results: In volunteers, the median normalized SNR for parallel imaging data decreased by 34–42%, but could be significantly improved using regularization. The normalized signal-to-noise loss in flyback EPSI data was 11–18%. The effective spatial resolutions of the traditional, ellipsoidal, SENSE, and EPSI data were 1.02, 2.43, 1.03, and 1.01 cm3, respectively. As expected, lipid contamination was variable between subjects but was highest for the SENSE data. Patient data obtained using the flyback EPSI method were of excellent quality. Conclusions: Data from all 1H 3D-MRSI techniques were qualitatively acceptable, based upon SNR, linewidths, and metabolite ratios. The larger FOV obtained with the EPSI methods showed negligible lipid aliasing with acceptable SNR values in less than 9.5 minutes without compromising the PSF. PMID:19711396

  12. Parallel processing for nonlinear dynamics simulations of structures including rotating bladed-disk assemblies

    Science.gov (United States)

    Hsieh, Shang-Hsien

    1993-01-01

    The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.

  13. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    KAUST Repository

    Fidel, Adam; Jacobs, Sam Ade; Sharma, Shishir; Amato, Nancy M.; Rauchwerger, Lawrence

    2014-01-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the subproblems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.
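
    The first of the two techniques, adaptive work stealing, can be sketched generically: each worker owns a double-ended queue, takes work from one end, and steals from the other end of a nonempty victim when its own queue runs dry. The coarse single lock below keeps the sketch short; it is our illustration of the idea, not the paper's runtime, which would use fine-grained or lock-free deques and real planning work per task.

        import collections, random, threading

        NUM_WORKERS = 4
        # all tasks start on worker 0's deque: a deliberate imbalance
        deques = [collections.deque(range(1000))] + \
                 [collections.deque() for _ in range(NUM_WORKERS - 1)]
        lock = threading.Lock()
        done = [0] * NUM_WORKERS

        def worker(wid):
            while True:
                with lock:
                    if deques[wid]:
                        deques[wid].pop()          # LIFO from own deque
                    else:
                        victims = [v for v in range(NUM_WORKERS) if deques[v]]
                        if not victims:
                            return                 # no work left anywhere
                        deques[random.choice(victims)].popleft()  # steal FIFO end
                done[wid] += 1                     # "process" the task here

        threads = [threading.Thread(target=worker, args=(w,))
                   for w in range(NUM_WORKERS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(sum(done))                           # 1000: all tasks completed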

  14. Parallel computing in experimental mechanics and optical measurement: A review (II)

    Science.gov (United States)

    Wang, Tianyi; Kemao, Qian

    2018-05-01

    With advantages such as non-destructiveness, high sensitivity, and high accuracy, optical techniques have been successfully applied to the measurement of various important physical quantities in experimental mechanics (EM) and optical measurement (OM). However, in pursuit of higher image resolutions for higher accuracy, the computation burden of optical techniques has become much heavier. Therefore, in recent years, heterogeneous platforms composed of hardware such as CPUs and GPUs have been widely employed to accelerate these techniques due to their cost-effectiveness, short development cycle, easy portability, and high scalability. In this paper, we analyze various works by first illustrating their different architectures, followed by introducing their various parallel patterns for high-speed computation. Next, we review the effects of CPU and GPU parallel computing specifically in EM & OM applications in a broad scope, including digital image/volume correlation, fringe pattern analysis, tomography, hyperspectral imaging, computer-generated holograms, and integral imaging. In our survey, we have found that high parallelism can always be exploited in such applications for the development of high-performance systems.

  15. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    KAUST Repository

    Fidel, Adam

    2014-05-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the subproblems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.

  16. Design strategies for irregularly adapting parallel applications

    International Nuclear Information System (INIS)

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Singh, Jaswinder Pal

    2000-01-01

    Achieving scalable performance for dynamic irregular applications is eminently challenging. Traditional message-passing approaches have been making steady progress towards this goal; however, they suffer from complex implementation requirements. The use of a global address space greatly simplifies the programming task, but can degrade the performance of dynamically adapting computations. In this work, we examine two major classes of adaptive applications, under five competing programming methodologies and four leading parallel architectures. Results indicate that it is possible to achieve message-passing performance using shared-memory programming techniques by carefully following the same high-level strategies. Adaptive applications have computational workloads and communication patterns which change unpredictably at runtime, requiring dynamic load balancing to achieve scalable performance on parallel machines. Efficient parallel implementation of such adaptive applications is therefore a challenging task. This work examines the implementation of two typical adaptive applications, Dynamic Remeshing and N-Body, across various programming paradigms and architectural platforms. We compare several critical factors of parallel code development, including performance, programmability, scalability, algorithmic development, and portability.

  17. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics, offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review is given of the parallel programming package CPS (Cooperative Processes Software), developed and used at Fermilab for offline reconstruction of terabytes of data requiring the delivery of hundreds of VAX-years per experiment. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing, starting with the ACP (Advanced Computer Project) farms in 1986. The Fermilab UNIX farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for managing, controlling, and monitoring these large systems will be described. Possible future directions for parallel computing in High Energy Physics will be given.

  18. 48 CFR 15.404-1 - Proposal analysis techniques.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Proposal analysis techniques. 15.404-1 Section 15.404-1 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION... assistance of other experts to ensure that an appropriate analysis is performed. (6) Recommendations or...

  19. Parallel analysis tools and new visualization techniques for ultra-large climate data set

    Energy Technology Data Exchange (ETDEWEB)

    Middleton, Don [National Center for Atmospheric Research, Boulder, CO (United States); Haley, Mary [National Center for Atmospheric Research, Boulder, CO (United States)

    2014-12-10

    ParVis was a project funded under LAB 10-05: “Earth System Modeling: Advanced Scientific Visualization of Ultra-Large Climate Data Sets”. Argonne was the lead lab with partners at PNNL, SNL, NCAR and UC-Davis. This report covers progress from January 1st, 2013 through Dec 1st, 2014. Two previous reports covered the period from Summer, 2010, through September 2011 and October 2011 through December 2012, respectively. While the project was originally planned to end on April 30, 2013, personnel and priority changes allowed many of the institutions to continue work through FY14 using existing funds. A primary focus of ParVis was introducing parallelism to climate model analysis to greatly reduce the time-to-visualization for ultra-large climate data sets. Work in the first two years was conducted on two tracks with different time horizons: one track to provide immediate help to climate scientists already struggling to apply their analysis to existing large data sets and another focused on building a new data-parallel library and tool for climate analysis and visualization that will give the field a platform for performing analysis and visualization on ultra-large datasets for the foreseeable future. In the final 2 years of the project, we focused mostly on the new data-parallel library and associated tools for climate analysis and visualization.

  20. Parallel discrete ordinates algorithms on distributed and common memory systems

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.; Brickner, R.G.

    1987-01-01

    The S_n algorithm employs iterative techniques in solving the linear Boltzmann equation. These methods, both ordered and chaotic, were compared on both the Denelcor HEP and the Intel hypercube. Strategies are linked to the organization and accessibility of memory (common memory versus distributed memory architectures), with common concern for acquisition of global information. Apart from this, the inherent parallelism of the algorithm maps directly onto the two architectures. Results comparing execution times, speedup, and efficiency are based on a representative 16-group (full upscatter and downscatter) sample problem. Calculations were performed on both the Los Alamos National Laboratory (LANL) Denelcor HEP and the LANL Intel hypercube. The Denelcor HEP is a 64-bit multiple-instruction, multiple-data (MIMD) machine consisting of up to 16 process execution modules (PEMs), each capable of executing 64 processes concurrently. Each PEM can cooperate on a job, or run several unrelated jobs, and share a common global memory through a crossbar switch. The Intel hypercube, on the other hand, is a distributed memory system composed of 128 processing elements, each with its own local memory. Processing elements are connected in a nearest-neighbor hypercube configuration, and sharing of data among processors requires execution of explicit message-passing constructs.

  1. A 32-channel photon counting module with embedded auto/cross-correlators for real-time parallel fluorescence correlation spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Gong, S.; Labanca, I.; Rech, I.; Ghioni, M. [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2014-10-15

    Fluorescence correlation spectroscopy (FCS) is a well-established technique to study binding interactions or the diffusion of fluorescently labeled biomolecules in vitro and in vivo. Fast FCS experiments require parallel data acquisition and analysis which can be achieved by exploiting a multi-channel Single Photon Avalanche Diode (SPAD) array and a corresponding multi-input correlator. This paper reports a 32-channel FPGA based correlator able to perform 32 auto/cross-correlations simultaneously over a lag-time ranging from 10 ns up to 150 ms. The correlator is included in a 32 × 1 SPAD array module, providing a compact and flexible instrument for high throughput FCS experiments. However, some inherent features of SPAD arrays, namely afterpulsing and optical crosstalk effects, may introduce distortions in the measurement of auto- and cross-correlation functions. We investigated these limitations to assess their impact on the module and evaluate possible workarounds.

  2. A 32-channel photon counting module with embedded auto/cross-correlators for real-time parallel fluorescence correlation spectroscopy

    International Nuclear Information System (INIS)

    Gong, S.; Labanca, I.; Rech, I.; Ghioni, M.

    2014-01-01

    Fluorescence correlation spectroscopy (FCS) is a well-established technique to study binding interactions or the diffusion of fluorescently labeled biomolecules in vitro and in vivo. Fast FCS experiments require parallel data acquisition and analysis which can be achieved by exploiting a multi-channel Single Photon Avalanche Diode (SPAD) array and a corresponding multi-input correlator. This paper reports a 32-channel FPGA based correlator able to perform 32 auto/cross-correlations simultaneously over a lag-time ranging from 10 ns up to 150 ms. The correlator is included in a 32 × 1 SPAD array module, providing a compact and flexible instrument for high throughput FCS experiments. However, some inherent features of SPAD arrays, namely afterpulsing and optical crosstalk effects, may introduce distortions in the measurement of auto- and cross-correlation functions. We investigated these limitations to assess their impact on the module and evaluate possible workarounds
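
    The quantity such correlators compute is the normalized intensity autocorrelation g(τ) = ⟨δI(t)δI(t+τ)⟩ / ⟨I⟩². A direct linear-lag software estimator is shown below as our illustration of the definition; the FPGA correlator in the paper uses a multi-tau scheme to span lags from 10 ns up to 150 ms, which this sketch does not reproduce:

```python
import numpy as np

def autocorrelation(intensity, max_lag):
    """Direct estimator of the normalized fluorescence autocorrelation
    g(tau) = <dI(t) dI(t+tau)> / <I>^2 for integer lags 1..max_lag.

    Hardware correlators use a multi-tau scheme to cover ns..ms lags;
    this linear-lag version just shows the quantity being computed.
    """
    x = np.asarray(intensity, dtype=float)
    mean = x.mean()
    d = x - mean
    n = len(x)
    return np.array([np.dot(d[:n - k], d[k:]) / ((n - k) * mean ** 2)
                     for k in range(1, max_lag + 1)])

# Poisson photon counts with a slowly varying rate show positive correlation.
rng = np.random.default_rng(0)
rate = 50 + 10 * np.sin(np.linspace(0, 40 * np.pi, 100_000))
counts = rng.poisson(rate)
g = autocorrelation(counts, 10)
print(g[:3])   # positive values, decaying with lag
```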

  3. Trimble LaserAce 1000 Accuracy Evaluation for Indoor Data Acquisition

    DEFF Research Database (Denmark)

    Jamali, Ali; Antón Castro, Francesc/François; Boguslawski, Pawel

    2014-01-01

    Surveying can be done using several sciences and techniques for outdoor and indoor data acquisition like photogrammetry, land surveying, remote sensing, Global Positioning System (GPS) and laser scanning. Electronic Distance Measurement (EDM) is a reliable and frequently used technique. Laser sca...

  4. Conceptual design of the GEM data acquisition system

    International Nuclear Information System (INIS)

    Bowden, M.; Dorenbosch, J.; Kapoor, V.

    1993-06-01

    The design of a large scale data acquisition system for the GEM detector at the Superconducting Super Collider (SSC) is presented. This architecture supports high-bandwidth data transfer using parallel point-to-point links and a scalable switching network. Substantial buffering enables the use of high latency, selective triggering based on either hardware or software implementations. The system throughput can be expanded to greater than 40 Gbps at trigger rates of 100 kHz
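
    A quick sanity check of the quoted figures, assuming the stated throughput is an aggregate bit rate (the garbled original units make this an assumption), gives the per-event size budget:

```python
# Back-of-the-envelope event-size budget for the quoted figures
# (assumption: "40 Gbps" means an aggregate 40e9 bits per second).
bandwidth_bps = 40e9
trigger_rate_hz = 100e3
event_budget_bytes = bandwidth_bps / 8 / trigger_rate_hz
print(f"{event_budget_bytes / 1e3:.0f} kB per event")   # 50 kB
```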

  5. Image acquisition, transmission and assignment in 60Co container inspection system

    International Nuclear Information System (INIS)

    Wu Zhifang; Zhou Liye; Liu Ximing; Wang Liqiang

    1999-01-01

    The author describes the data acquisition mode and image reconstruction method of the 60Co container inspection system, analyzes the relationship between line pick period and geometry distortion, and clarifies the required data transmission rate. Several data communication methods are discussed, and a network plan is drawn up that realizes automatic routing and reasonable assignment of data in the system, together with multi-computer cooperation and parallel processing, thus greatly improving the system's inspection efficiency

  6. [Parallel virtual reality visualization of extremely large medical datasets].

    Science.gov (United States)

    Tang, Min

    2010-04-01

    On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the Intranet and commonly configured computers of hospitals. Several kernel techniques are introduced, including the hardware structure, software framework, load balancing, and virtual reality visualization. The Maximum Intensity Projection algorithm is parallelized using a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated and cut interactively and conveniently through the control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising real-time results, playing the role of a good assistant in making clinical diagnoses.
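
    The Maximum Intensity Projection parallelization mentioned above is naturally expressed as a slab decomposition: each node projects its slab along the viewing axis, and the partial projections are combined by another elementwise maximum. A minimal sketch with a process pool standing in for the PC cluster (our illustration, not the paper's code):

```python
import numpy as np
from multiprocessing import Pool

def slab_mip(slab):
    """Partial MIP of one slab: maximum along the viewing (z) axis."""
    return slab.max(axis=0)

def parallel_mip(volume, n_workers=4):
    """Split the volume into z-slabs, project each in parallel, then
    combine the partial projections with an elementwise maximum."""
    slabs = np.array_split(volume, n_workers, axis=0)
    with Pool(n_workers) as pool:
        partials = pool.map(slab_mip, slabs)
    return np.maximum.reduce(partials)

if __name__ == "__main__":
    vol = np.random.rand(128, 256, 256)            # synthetic CT-like volume
    assert np.array_equal(parallel_mip(vol), vol.max(axis=0))
    print("parallel MIP matches serial projection")
```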

  7. Development of an operator's mental model acquisition system. 1. Estimation of a physical mental model acquisition system

    International Nuclear Information System (INIS)

    Ikeda, Mitsuru; Mizoguchi, Riichirou; Yoshikawa, Shinji; Ozawa, Kenji

    1997-03-01

    This report describes a technical survey of methods for acquiring an operator's understanding of the functions and structures of the target nuclear plant. Such a method is to play a key role in an information-processing framework that supports operators in training as they form their knowledge of the nuclear plants. This kind of technical framework aims at enhancing the human operator's ability to cope with anomalous plant situations which are difficult to anticipate from preceding experience or engineering surveillance. In these cases, cause identification and the selection of responding operations should be made not only empirically but also based on reasoning about the phenomena that may take place within the nuclear plant. This report focuses on a particular element technique, defined as 'explanation-based knowledge acquisition', as the candidate technique that could potentially be extended to meet the requirement above, and discusses its applicability to the learning support system and the necessary improvements, in order to identify future technical developments. (author)

  8. Scheduling Parallel Jobs Using Migration and Consolidation in the Cloud

    Directory of Open Access Journals (Sweden)

    Xiaocheng Liu

    2012-01-01

    Full Text Available An increasing number of high-performance computing parallel applications leverage the power of the cloud for parallel processing. How to schedule the parallel applications to improve the quality of service is key to successfully hosting parallel applications in the cloud. The large scale of the cloud makes parallel job scheduling more complicated, as even the simple parallel job scheduling problem is NP-complete. In this paper, we propose a parallel job scheduling algorithm named MEASY. MEASY adopts migration and consolidation to enhance the most popular EASY scheduling algorithm. Our extensive experiments on well-known workloads show that our algorithm takes very good care of the quality of service. For two common parallel job scheduling objectives, our algorithm produces up to a 41.1% and an average of a 23.1% improvement in the average response time, and up to an 82.9% and an average of a 69.3% improvement in the average slowdown. Our algorithm is also robust in that it tolerates inaccurate CPU usage estimation and high migration costs. Our approach involves only trivial modification of EASY and requires no additional techniques; it is practical and effective in the cloud environment.
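
    EASY, the baseline that MEASY enhances, is backfilling with a single reservation: the blocked head-of-queue job receives a start-time reservation, and a later job may jump the queue only if it fits on the currently idle processors without delaying that reservation. A stripped-down sketch of the decision rule (current time is taken as zero and the extra-processor rule is omitted; MEASY's migration and consolidation are not modeled):

```python
from dataclasses import dataclass

@dataclass
class Job:
    procs: int        # processors requested
    runtime: float    # user runtime estimate

def easy_backfill(queue, free_procs, running):
    """One simplified EASY scheduling step.

    queue:   waiting jobs in FIFO order
    running: list of (finish_time, procs) for executing jobs
    Returns the jobs started now. The head job reserves the earliest
    time enough processors free up; a later job backfills only if it
    fits now AND its estimated end stays within that reservation.
    """
    started = []
    while queue and queue[0].procs <= free_procs:     # start head jobs that fit
        job = queue.pop(0)
        free_procs -= job.procs
        started.append(job)
    if not queue:
        return started
    head = queue[0]                                   # blocked head job
    avail, shadow = free_procs, 0.0
    for finish, procs in sorted(running):
        avail += procs
        if avail >= head.procs:
            shadow = finish                           # head's reserved start
            break
    for job in list(queue[1:]):                       # backfill candidates
        if job.procs <= free_procs and job.runtime <= shadow:
            queue.remove(job)
            free_procs -= job.procs
            started.append(job)
    return started

q = [Job(64, 3600), Job(8, 600), Job(4, 100)]
# 16 idle processors; a 64-processor job finishes at t = 1800.
print(easy_backfill(q, 16, running=[(1800.0, 64)]))   # the two small jobs start
```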

  9. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms, being parallel in nature, can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark-gluon plasma, which has been theorized to have existed in very early stages of the evolution of the universe, by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and also simulate them at rates similar to the data collection ones imposes enormous computation demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine and coarse grain approaches have been studied and evaluated. Depending on the application, the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single instruction and multiple instruction computers is also made, and possible applications of single instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  10. The blood-pool technique of radionuclide ventriculography: Data acquisition and evaluation

    International Nuclear Information System (INIS)

    Mueller-Schauenburg, W.

    1986-01-01

    For gated heart studies, in-vitro labelling of erythrocytes is commonly used. Rest and exercise studies are acquired in the LAO view; complementary studies may use different views. Besides the most common direct frame-mode acquisition, there are the more flexible list mode and a hybrid mode. Concerning evaluation, the ejection fraction is the leading parameter of global ventricular analysis. In local analysis, pixelwise evaluation generates functional images of phase and amplitude (the Fourier approach developed by the Ulm group) or Noelep's trend images. Special attention has to be paid to the varying cycle length when a sine or cosine fit (Fourier) is used for curve smoothing or for phase and amplitude images. There are two opposed problems: if there are undetected QRS complexes, the end of the representative cycle will contain early phases of subsequent cycles, which must be cut off. In the case of truly varying cycle length, the last images of the representative cycle must be corrected for the acquisition time per frame. The total count curve may help to discriminate between the two cases and supplies suitable correction factors in the latter case. (orig.)
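
    The pixelwise Fourier approach mentioned above reduces each pixel's time-activity curve over the representative cycle to its first harmonic: the amplitude image carries stroke-volume-like contrast, and the phase image carries contraction timing. A minimal sketch on synthetic data (our illustration; a real study must first handle the varying cycle length discussed in the abstract):

```python
import numpy as np

def phase_amplitude_images(frames):
    """First-harmonic (Fourier) analysis of a gated blood-pool study.

    frames: array (n_frames, h, w) covering one representative cycle.
    Returns per-pixel phase (contraction timing) and amplitude
    (stroke-volume-like contrast) from the first Fourier harmonic.
    """
    spectrum = np.fft.fft(frames, axis=0)
    first = spectrum[1]                        # first harmonic per pixel
    amplitude = 2 * np.abs(first) / frames.shape[0]
    phase = np.angle(first)
    return phase, amplitude

# Synthetic cycle: every pixel varies sinusoidally with a known delay.
n, h, w = 16, 8, 8
t = np.arange(n)[:, None, None]
delay = np.linspace(0, np.pi, h * w).reshape(1, h, w)
frames = 1000 + 100 * np.cos(2 * np.pi * t / n - delay)
phase, amp = phase_amplitude_images(frames)
print(np.allclose(amp, 100))                                    # True
print(np.allclose(np.exp(1j * phase), np.exp(-1j * delay[0])))  # True
```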

  11. Software aspects of designing an online data acquisition system

    International Nuclear Information System (INIS)

    Bandyopadhyay, A.

    1989-01-01

    The design aspects of data acquisition system software for experimental nuclear physics applications are discussed. The features of a good data acquisition system and the techniques used to meet the requirements are also discussed. The suitability of different programming languages for different applications is outlined. The operating system requirements and the difficulties encountered by the programmer in non-ideal operating system environments are also highlighted. (author)

  12. Mammogram synthesis using a 3D simulation. I. Breast tissue model and image acquisition simulation

    International Nuclear Information System (INIS)

    Bakic, Predrag R.; Albert, Michael; Brzakovic, Dragana; Maidment, Andrew D. A.

    2002-01-01

    A method is proposed for generating synthetic mammograms based upon simulations of breast tissue and the mammographic imaging process. A computer breast model has been designed with a realistic distribution of large and medium scale tissue structures. Parameters controlling the size and placement of simulated structures (adipose compartments and ducts) provide a method for consistently modeling images of the same simulated breast with modified position or acquisition parameters. The mammographic imaging process is simulated using a compression model and a model of the x-ray image acquisition process. The compression model estimates breast deformation using tissue elasticity parameters found in the literature and clinical force values. The synthetic mammograms were generated by a mammogram acquisition model using a monoenergetic parallel beam approximation applied to the synthetically compressed breast phantom
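
    Under the monoenergetic parallel-beam approximation named in the abstract, image formation reduces to Beer-Lambert attenuation summed along rays, which for an axis-aligned beam is a single sum over the compressed phantom. A hedged sketch (the attenuation values below are illustrative placeholders, not the authors' tissue model):

```python
import numpy as np

def parallel_beam_projection(mu_volume, voxel_mm, i0=1.0):
    """Monoenergetic parallel-beam image of a voxelized phantom.

    mu_volume: linear attenuation coefficients (1/mm), shape (z, y, x),
               with the beam traveling along z.
    Applies Beer-Lambert: I = I0 * exp(-sum(mu * dz)) per detector pixel.
    """
    path_integral = mu_volume.sum(axis=0) * voxel_mm
    return i0 * np.exp(-path_integral)

# Toy phantom: uniform background with a denser spherical inclusion
# (attenuation values are illustrative only).
z, y, x = np.mgrid[0:64, 0:64, 0:64]
mu = np.full((64, 64, 64), 0.02)                        # background, 1/mm
mu[(z - 32)**2 + (y - 32)**2 + (x - 32)**2 < 10**2] = 0.05
image = parallel_beam_projection(mu, voxel_mm=0.5)
print(image.min(), image.max())    # the inclusion darkens the projection
```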

  13. Parallel Algorithms for Graph Optimization using Tree Decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL; Groer, Christopher S [ORNL

    2012-06-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
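
    The flavor of the underlying dynamic programming is easiest to see in the treewidth-1 special case, where the tree decomposition is the tree itself and each vertex's table holds just two entries (vertex excluded or included). A serial sketch for maximum weighted independent set on a tree follows; the paper's general-treewidth tables and OpenMP/MPI machinery are not modeled:

```python
def mwis_tree(n, edges, weight):
    """Maximum weighted independent set (MWIS) on a tree.

    Treewidth-1 case of the tree-decomposition DP: every vertex keeps a
    two-entry table -- the best value with the vertex excluded/included.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = [-1] * n
    visited = [False] * n
    order, stack = [], [0]
    while stack:                       # iterative DFS for a bottom-up order
        u = stack.pop()
        if visited[u]:
            continue
        visited[u] = True
        order.append(u)
        for v in adj[u]:
            if not visited[v]:
                parent[v] = u
                stack.append(v)
    excl = [0.0] * n
    incl = [0.0] * n
    for u in reversed(order):          # combine children tables bottom-up
        incl[u] = weight[u]
        for v in adj[u]:
            if parent[v] == u:
                excl[u] += max(excl[v], incl[v])
                incl[u] += excl[v]
    return max(excl[0], incl[0])

# Path a-b-c with weights 3, 5, 3: the endpoints (3 + 3) beat the middle (5).
print(mwis_tree(3, [(0, 1), (1, 2)], [3.0, 5.0, 3.0]))   # 6.0
```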

  14. 77 FR 58817 - Information Collection Requirement; Defense Federal Acquisition Regulation Supplement (DFARS...

    Science.gov (United States)

    2012-09-24

    ... automated collection techniques or other forms of information technology. The Office of Management and... 252.232-7002, Progress Payments for Foreign Military Sales Acquisitions; OMB Control Number 0704-0321.... The clause at 252.232- 7002, Progress Payments for Foreign Military Sales Acquisitions, requires each...

  15. Parallel workflow tools to facilitate human brain MRI post-processing

    Directory of Open Access Journals (Sweden)

    Zaixu eCui

    2015-05-01

    Full Text Available Multi-modal magnetic resonance imaging (MRI techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues.
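
    The subject-level parallelism these workflow tools exploit is simple to express: the post-processing pipeline for each subject is independent, so subjects can be dispatched to a process pool. A schematic sketch (the pipeline body and the data layout are hypothetical placeholders, not any specific tool reviewed):

```python
from multiprocessing import Pool
from pathlib import Path

def process_subject(subject_dir):
    """Placeholder for a per-subject MRI post-processing pipeline
    (e.g., skull stripping -> registration -> segmentation).
    Subjects are independent, which makes the workflow
    embarrassingly parallel at this level."""
    # ... run the real pipeline steps here ...
    return f"{subject_dir.name}: done"

if __name__ == "__main__":
    subjects = sorted(Path("/data/study").glob("sub-*"))   # hypothetical layout
    with Pool(processes=8) as pool:
        for result in pool.imap_unordered(process_subject, subjects):
            print(result)
```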

  16. Small file aggregation in a parallel computing system

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
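
    The scheme described is essentially concatenation plus an index: each small file is appended to a single aggregated file, and the metadata records an (offset, length) pair per member so that any file can be unpacked later. A minimal sketch of that idea (JSON metadata is our choice for illustration, not necessarily the patented format):

```python
import json

def aggregate(files, out_path, meta_path):
    """Concatenate many small files into one aggregated file and write
    metadata mapping each name to its (offset, length)."""
    metadata = {}
    offset = 0
    with open(out_path, "wb") as out:
        for name, payload in files.items():
            out.write(payload)
            metadata[name] = {"offset": offset, "length": len(payload)}
            offset += len(payload)
    with open(meta_path, "w") as m:
        json.dump(metadata, m)

def unpack(name, out_path, meta_path):
    """Recover one member file using the stored offset and length."""
    with open(meta_path) as m:
        entry = json.load(m)[name]
    with open(out_path, "rb") as out:
        out.seek(entry["offset"])
        return out.read(entry["length"])

files = {"rank0.log": b"alpha", "rank1.log": b"beta", "rank2.log": b"gamma"}
aggregate(files, "agg.bin", "agg.meta.json")
print(unpack("rank1.log", "agg.bin", "agg.meta.json"))   # b'beta'
```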

  17. Architecture of an acquisition system-multiprocessors

    International Nuclear Information System (INIS)

    Postec, H.

    1987-07-01

    To keep up with the huge increase in the number of parameters in nuclear detection systems, acquisition systems are becoming larger and must offer very high speed. At GANIL, four detection systems have been set up in the Nautilus reaction chamber, leading to experimental configurations with 700 parameters to process. Given the limitations of the present acquisition system, a device better suited to reading out a large number of channels became necessary. Functionalities already operating in other systems and hardware already in use were chosen; specific technical solutions were also developed to exploit the most recent techniques and to take into account the four-detection-system structure of the device

  18. Design of parallel dual-energy X-ray beam and its performance for security radiography

    International Nuclear Information System (INIS)

    Kim, Kwang Hyun; Myoung, Sung Min; Chung, Yong Hyun

    2011-01-01

    A new concept for dual-energy X-ray beam generation and the acquisition of dual-energy security radiographs is proposed. Erbium (Er) and rhodium (Rh) with a copper filter were positioned in front of the X-ray tube to generate low- and high-energy X-ray spectra. The low- and high-energy X-rays were guided to enter separately into two parallel detectors. The Monte Carlo code MCNPX was used to derive the optimum thickness of each filter for improved dual X-ray image quality. The aim was to provide the ability to separate organic and inorganic materials under the 140 kVp/0.8 mA conditions used in security applications. The acquired dual-energy X-ray beams were evaluated by a dual-energy Z-map, yielding enhanced performance compared with a commercial dual-energy detector. A collimator for the parallel dual-energy X-ray beam was designed to minimize interference between the low- and high-energy parallel beams for a 500 mm source-to-detector distance.

  19. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    International Nuclear Information System (INIS)

    Toshimitsu, Fujisawa; Genki, Yagawa

    2003-01-01

    In this paper, a FEM-based (finite element method) mesh-free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. Local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of finite element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates good-natured points for nodes of finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for such kinds of analyses, can effectively be performed on parallel processors by using the proposed method. (authors)

  20. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    Energy Technology Data Exchange (ETDEWEB)

    Toshimitsu, Fujisawa [Tokyo Univ., Collaborative Research Center of Frontier Simulation Software for Industrial Science, Institute of Industrial Science (Japan); Genki, Yagawa [Tokyo Univ., Department of Quantum Engineering and Systems Science (Japan)

    2003-07-01

    In this paper, a FEM-based (finite element method) mesh-free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. Local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of finite element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates good-natured points for nodes of finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for such kinds of analyses, can effectively be performed on parallel processors by using the proposed method. (authors)

  1. Development of an Integrated Data Acquisition System for a Small Flight Probe

    Science.gov (United States)

    Swanson, Gregory T.; Empey, Daniel M.; Skokova, Kristina A.; Venkatapathy, Ethiraj

    2012-01-01

    In support of the SPRITE concept, an integrated data acquisition system has been developed and fabricated for preliminary testing. The data acquisition system has been designed to condition traditional thermal protection system sensors, store their data to an on-board memory card, and, in parallel, telemeter it to an external system. In the fall of 2010, this system was integrated into a 14 in. diameter, 45-degree sphere-cone probe instrumented with thermal protection system sensors. This system was then tested at the NASA Ames Research Center Aerodynamic Heating Facility's arc jet at approximately 170 W/sq. cm. The first test in December 2010 highlighted hardware design issues; the hardware was redesigned and reimplemented, leading to a successful test in February 2011.

  2. Word-final stops in Brazilian Portuguese English: acquisition and pronunciation instruction

    Directory of Open Access Journals (Sweden)

    Walcir Cardoso

    2010-11-01

    Full Text Available This paper presents current research on the second language acquisition of English phonology and its implications for (and applications to) pronunciation instruction in the language classroom. More specifically, the paper follows the development of English word-final consonants by Brazilian Portuguese speakers learning English as a foreign language. The findings of two parallel studies reveal that the acquisition of these constituents is motivated by both extralinguistic (proficiency, style) and linguistic (word size, place of articulation) factors, and that the process is mediated by an intermediate stage characterized by consonant lengthening or aspiration (Onset-Nucleus sharing). Based on these results, I propose that the segments and environments that seem to delay coda production (i.e., monosyllabic words, labial and dorsal consonants) should be given priority in pronunciation instruction. Along the lines of Dickerson (1975), this paper proposes what we believe is a more effective and socially realistic pedagogy for the teaching of English pronunciation, within an approach that recognizes that "variability is the norm rather than the exception" in second language acquisition.

  3. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    Science.gov (United States)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  4. Parallel computers and three-dimensional computational electromagnetics

    International Nuclear Information System (INIS)

    Madsen, N.K.

    1994-01-01

    The authors have continued to enhance their ability to use new massively parallel processing computers to solve time-domain electromagnetic problems. New vectorization techniques have improved the performance of their code DSI3D by factors of 5 to 15, depending on the computer used. New radiation boundary conditions and far-field transformations now allow the computation of radar cross-section values for complex objects. A new parallel-data extraction code has been developed that allows the extraction of data subsets from large problems, which have been run on parallel computers, for subsequent post-processing on workstations with enhanced graphics capabilities. A new charged-particle-pushing version of DSI3D is under development. Finally, DSI3D has become a focal point for several new Cooperative Research and Development Agreement activities with industrial companies such as Lockheed Advanced Development Company, Varian, Hughes Electron Dynamics Division, General Atomic, and Cray

  5. Data driven parallelism in experimental high energy physics applications

    International Nuclear Information System (INIS)

    Pohl, M.

    1987-01-01

    I present global design principles for the implementation of high energy physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of high energy physics tasks is identified, with granularity varying from a few times 10^8 instructions all the way down to a few times 10^4 instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this - yet preserving the necessary portability of the code - I propose a computational model with purely data driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme allows one to map the internal layout of the data structure closely onto the layout of local and shared memory in a parallel architecture. It thus allows one to optimize the application with respect to synchronization as well as data transport overheads. I present a coarse top-level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms). (orig.)

  6. Data driven parallelism in experimental high energy physics applications

    Science.gov (United States)

    Pohl, Martin

    1987-08-01

    I present global design principles for the implementation of High Energy Physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of High Energy Physics tasks is identified, with granularity varying from a few times 10^8 instructions all the way down to a few times 10^4 instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this - yet preserving the necessary portability of the code - I propose a computational model with purely data driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme allows one to map the internal layout of the data structure closely onto the layout of local and shared memory in a parallel architecture. It thus allows one to optimize the application with respect to synchronization as well as data transport overheads. I present a coarse top-level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms).
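
    A toy rendering of the proposed model: identical workers (SPMD) pull work units from a shared, partitioned data structure and coordinate only through simple locks on its parts, so load balancing happens naturally. (Our illustration; the event contents and the "reconstruction" are placeholders.)

```python
import threading

class EventStore:
    """Shared data structure partitioned into banks, one lock per bank,
    echoing the abstract's 'simple lock constructs on parts of the
    data structure'."""
    def __init__(self, events, n_banks=4):
        self.banks = [events[i::n_banks] for i in range(n_banks)]
        self.locks = [threading.Lock() for _ in range(n_banks)]

    def take(self):
        for bank, lock in zip(self.banks, self.locks):
            with lock:                        # lock only the part touched
                if bank:
                    return bank.pop()
        return None

def worker(store, results, results_lock):
    # Same program on every worker (SPMD); concurrency is purely data
    # driven: whichever worker is free takes the next event.
    while (event := store.take()) is not None:
        reconstructed = sum(event)            # stand-in for reconstruction
        with results_lock:
            results.append(reconstructed)

store = EventStore([[i, i + 1] for i in range(1000)])
results, rlock = [], threading.Lock()
threads = [threading.Thread(target=worker, args=(store, results, rlock))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(len(results))   # 1000
```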

  7. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  8. Triple Arterial Phase MR Imaging with Gadoxetic Acid Using a Combination of Contrast Enhanced Time Robust Angiography, Keyhole, and Viewsharing Techniques and Two-Dimensional Parallel Imaging in Comparison with Conventional Single Arterial Phase

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jeong Hee [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Lee, Jeong Min [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03087 (Korea, Republic of); Yu, Mi Hye [Department of Radiology, Konkuk University Medical Center, Seoul 05030 (Korea, Republic of); Kim, Eun Ju [Philips Healthcare Korea, Seoul 04342 (Korea, Republic of); Han, Joon Koo [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03087 (Korea, Republic of)

    2016-11-01

    To determine whether triple arterial phase acquisition via a combination of Contrast Enhanced Time Robust Angiography, keyhole, temporal viewsharing and parallel imaging can improve arterial phase acquisition with higher spatial resolution than single arterial phase gadoxetic-acid enhanced magnetic resonance imaging (MRI). Informed consent was waived for this retrospective study by our Institutional Review Board. In 752 consecutive patients who underwent gadoxetic acid-enhanced liver MRI, either single (n = 587) or triple (n = 165) arterial phases were obtained in a single breath-hold under MR fluoroscopy guidance. Arterial phase timing was assessed, and the degree of motion was rated on a four-point scale. The percentage of patients achieving the late arterial phase without significant motion was compared between the two methods using the χ² test. The late arterial phase was captured at least once in 96.4% (159/165) of the triple arterial phase group and in 84.2% (494/587) of the single arterial phase group (p < 0.001). Significant motion artifacts (score ≤ 2) were observed in 13.3% (22/165), 1.2% (2/165), and 4.8% (8/165) on the 1st, 2nd, and 3rd scans of triple arterial phase acquisitions, and in 6.0% (35/587) of single phase acquisitions. Thus, the late arterial phase without significant motion artifacts was captured in 96.4% (159/165) of the triple arterial phase group and in 79.9% (469/587) of the single arterial phase group (p < 0.001). Triple arterial phase imaging may reliably provide adequate arterial phase imaging for gadoxetic acid-enhanced liver MRI.

  9. Triple arterial phase MR imaging with gadoxetic acid using a combination of contrast enhanced time robust angiography, keyhole, and viewsharing techniques and two-dimensional parallel imaging in comparison with conventional single arterial phase

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jeong Hee; Lee, Jeong Min; Han, Joon Koo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of); Yu, Mi Hye [Dept. of Radiology, Konkuk University Medical Center, Seoul (Korea, Republic of); Kim, Eun Ju [Philips Healthcare Korea, Seoul (Korea, Republic of)

    2016-07-15

    To determine whether triple arterial phase acquisition via a combination of Contrast Enhanced Time Robust Angiography, keyhole, temporal viewsharing and parallel imaging can improve arterial phase acquisition with higher spatial resolution than single arterial phase gadoxetic-acid enhanced magnetic resonance imaging (MRI). Informed consent was waived for this retrospective study by our Institutional Review Board. In 752 consecutive patients who underwent gadoxetic acid-enhanced liver MRI, either single (n = 587) or triple (n = 165) arterial phases were obtained in a single breath-hold under MR fluoroscopy guidance. Arterial phase timing was assessed, and the degree of motion was rated on a four-point scale. The percentage of patients achieving the late arterial phase without significant motion was compared between the two methods using the χ² test. The late arterial phase was captured at least once in 96.4% (159/165) of the triple arterial phase group and in 84.2% (494/587) of the single arterial phase group (p < 0.001). Significant motion artifacts (score ≤ 2) were observed in 13.3% (22/165), 1.2% (2/165), and 4.8% (8/165) on the 1st, 2nd, and 3rd scans of triple arterial phase acquisitions, and in 6.0% (35/587) of single phase acquisitions. Thus, the late arterial phase without significant motion artifacts was captured in 96.4% (159/165) of the triple arterial phase group and in 79.9% (469/587) of the single arterial phase group (p < 0.001). Triple arterial phase imaging may reliably provide adequate arterial phase imaging for gadoxetic acid-enhanced liver MRI.
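
    The group comparison reported above can be reproduced from the published counts with a standard χ² test of independence; a quick check (using SciPy, which is our tooling choice, not necessarily the authors'):

```python
from scipy.stats import chi2_contingency

# Late arterial phase without significant motion: captured vs missed.
table = [[159, 165 - 159],    # triple arterial phase group
         [469, 587 - 469]]    # single arterial phase group
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")   # p well below 0.001
```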

  10. Advancements on Radar Polarization Information Acquisition and Processing

    Directory of Open Access Journals (Sweden)

    Dai Dahai

    2016-04-01

    Full Text Available The study of radar polarization information acquisition and processing has become an important part of radar technology. The development of polarization theory is first briefly reviewed. Subsequently, some key techniques, including polarization measurement, polarization anti-jamming, polarization recognition, imaging, and parameter inversion using radar polarimetry, are analyzed in detail in this paper. The basic theories, present state and development trends of these key techniques are presented, and some meaningful conclusions are derived.

  11. Feasibility and Diagnostic Accuracy of Whole Heart Coronary MR Angiography Using Free-Breathing 3D Balanced Turbo-Field-Echo with SENSE and the Half-Fourier Acquisition Technique

    International Nuclear Information System (INIS)

    Kim, Young Jin; Seo, Jae Seung; Choi, Byoung Wook; Choe, Kyu Ok; Jang, Yang Soo; Ko, Young Guk

    2006-01-01

    We wanted to assess the feasibility and diagnostic accuracy of whole heart coronary magnetic resonance angiography (MRA) with using 3D balanced turbo-field-echo (b-TFE) with SENSE and the half-Fourier acquisition technique for identifying stenoses of the coronary artery. Twenty-one patients who underwent both whole heart coronary MRA examinations and conventional catheter coronary angiography examinations were enrolled in the study. The whole heart coronary MRA images were acquired using a navigator gated 3D b-TFE sequence with SENSE and the half-Fourier acquisition technique to reduce the acquisition time. The imaging slab covered the whole heart (80 contiguous slices with a reconstructed slice thickness of 1.5 mm) along the transverse axis. The quality of the images was evaluated by using a 5-point scale (0 - uninterpretable, 1 - poor, 2 - fair, 3 - good, 4 - excellent). Ten coronary segments of the heart were evaluated in each case; the left main coronary artery (LM), and the proximal, middle and distal segments of the left anterior descending (LAD), the left circumflex (LCX) and the right coronary artery (RCA). The diagnostic accuracy of whole heart coronary MRA for detecting significant coronary artery stenosis was determined on a segment-by-segment basis, and it was compared with the results obtained by conventional catheter angiography, which is the gold standard. The mean image quality was 3.7 in the LM, 3.2 in the LAD, 2.5 in the LCX, and 3.3 in the RCA, respectively (the overall image quality was 3.0 ± 0.1). 168 (84%) of the 201 segments had an acceptable image quality (≥ grade 2). The sensitivity, specificity, accuracy, negative predictive value and positive predictive value of the whole heart coronary MRA images for detecting significant stenosis were 81.3%, 92.1%, 91.1%, 97.9%, and 52.0%, respectively. The mean coronary MRA acquisition time was 9 min 22 sec (± 125 sec). Whole heart coronary MRA is a feasible technique, and it has good potential to
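
    All of the reported accuracy figures derive from the standard 2 × 2 confusion counts against the catheter-angiography gold standard. The helper below makes the definitions explicit; the counts are reverse-engineered to be consistent with the reported percentages and the 168 interpretable segments, not raw data quoted from the paper:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard segment-level accuracy measures for a binary test
    against a gold standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts chosen to match the reported percentages (hypothetical reconstruction):
# 13/16 = 81.3% sensitivity, 140/152 = 92.1% specificity,
# 153/168 = 91.1% accuracy, 140/143 = 97.9% NPV, 13/25 = 52.0% PPV.
print(diagnostic_metrics(tp=13, fp=12, fn=3, tn=140))
```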

  12. Heuristic framework for parallel sorting computations | Nwanze ...

    African Journals Online (AJOL)

    Parallel sorting techniques have become of practical interest with the advent of new multiprocessor architectures. The decreasing cost of these processors will probably make the solutions derived from them more appealing in the future. Efficient algorithms for sorting schemes that are encountered in a number of ...
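
    Although the truncated abstract does not reveal the article's specific scheme, the canonical parallel sorting pattern such work builds on is divide, sort concurrently, then merge. A minimal sketch of that pattern (our illustration only):

```python
import heapq
from multiprocessing import Pool

def parallel_sort(data, n_workers=4):
    """Sort chunks in parallel, then k-way merge the sorted runs.
    This divide-sort-merge pattern underlies many parallel sorting
    schemes on multiprocessors."""
    chunk = (len(data) + n_workers - 1) // n_workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with Pool(n_workers) as pool:
        runs = pool.map(sorted, parts)
    return list(heapq.merge(*runs))

if __name__ == "__main__":
    import random
    xs = [random.random() for _ in range(100_000)]
    assert parallel_sort(xs) == sorted(xs)
    print("parallel sort verified")
```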

  13. Solving the Stokes problem on a massively parallel computer

    DEFF Research Database (Denmark)

    Axelsson, Owe; Barker, Vincent A.; Neytcheva, Maya

    2001-01-01

    boundary value problem for each velocity component, are solved by the conjugate gradient method with a preconditioning based on the algebraic multi‐level iteration (AMLI) technique. The velocity is found from the computed pressure. The method is optimal in the sense that the computational work...... is proportional to the number of unknowns. Further, it is designed to exploit a massively parallel computer with distributed memory architecture. Numerical experiments on a Cray T3E computer illustrate the parallel performance of the method....
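
    The inner solver named above is the preconditioned conjugate gradient method. A compact reference implementation follows, with a simple Jacobi (diagonal) preconditioner standing in for the far more elaborate AMLI multilevel preconditioner used in the paper:

```python
import numpy as np

def pcg(A, b, precond, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for SPD systems A x = b.
    `precond` applies M^{-1}; here a Jacobi preconditioner stands in
    for the AMLI preconditioner described in the paper."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test problem: 1-D Laplacian (a scalar, pressure-like operator).
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
jacobi = lambda r: r / np.diag(A)
x = pcg(A, b, jacobi)
print(np.linalg.norm(A @ x - b))   # ~1e-10
```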

  14. Fourth Data Challenge for the ALICE data acquisition system

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    The ALICE experiment will study quark-gluon plasma using beams of heavy ions, such as those of lead. The particles in the beams will collide thousands of times per second in the detector and each collision will generate an event containing thousands of charged particles. Every second, the characteristics of tens of thousands of particles will have to be recorded. Thus, to be effective, the data acquisition system (DAQ) must meet extremely strict performance criteria. To this end, the ALICE Data Challenges entail step-by-step testing of the DAQ with existing equipment that is sufficiently close to the final equipment to provide a reliable indication of performance. During the fourth challenge, in 2002, a data acquisition rate of 1800 megabytes per second was achieved by using some thirty parallel-linked PCs running the specially developed DATE software. During the final week of tests in December 2002, the team also tested the Storage Tek linear magnetic tape drives. Their bandwidth is 30 megabytes per second a...

  15. A New Tool for Intelligent Parallel Processing of Radar/SAR Remotely Sensed Imagery

    Directory of Open Access Journals (Sweden)

    A. Castillo Atoche

    2013-01-01

    Full Text Available A novel parallel tool for large-scale image enhancement/reconstruction and postprocessing of radar/SAR sensor systems is presented. The proposed parallel tool performs the following intelligent processing steps: image formation, with the application of different system-level effects of image degradation for a particular remote sensing (RS) system and simulation of random noising effects; enhancement/reconstruction, employing nonparametric robust high-resolution techniques; and image postprocessing using the fuzzy anisotropic diffusion technique, which incorporates a better edge-preserving noise removal effect and a faster diffusion process. This innovative tool allows the processing of high-resolution images provided by different radar/SAR sensor systems, as required by RS end-users for environmental monitoring, risk prevention, and resource management. To verify the performance of the proposed parallel framework, the processing steps were developed and specifically tested on graphics processing units (GPUs), achieving considerable speedups compared with the serial version of the same techniques implemented in the C language.
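
    The edge-preserving postprocessing step is a variant of anisotropic diffusion. The classical Perona-Malik scheme below conveys the core idea; the paper's fuzzy extension, which adapts the conduction coefficient with fuzzy logic, is not modeled:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    """Classical Perona-Malik diffusion: smooth noise while preserving
    edges by reducing conduction where the gradient is large."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences to the four neighbors
        # (periodic borders via np.roll keep the sketch short).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Conduction coefficient g = exp(-(|grad|/kappa)^2) per direction.
        u += dt * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u

rng = np.random.default_rng(1)
scene = np.zeros((64, 64)); scene[:, 32:] = 100.0       # a sharp edge
noisy = scene + rng.normal(0, 10, scene.shape)
smoothed = anisotropic_diffusion(noisy)
print(noisy.std(), smoothed.std())   # noise reduced, edge largely kept
```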

  16. Development of an operator's mental model acquisition system. 1. Estimation of a physical mental model acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Ikeda, Mitsuru; Mizoguchi, Riichirou [Inst. of Scientific and Industrial Research, Osaka Univ., Ibaraki (Japan); Yoshikawa, Shinji; Ozawa, Kenji

    1997-03-01

    This report describes a technical survey of methods for acquiring an operator's understanding of the functions and structures of the target nuclear plant. Such a method is to play a key role in an information-processing framework that supports operators in training as they form their knowledge of the nuclear plants. This kind of technical framework aims at enhancing the human operator's ability to cope with anomalous plant situations which are difficult to anticipate from preceding experience or engineering surveillance. In these cases, cause identification and the selection of responding operations should be made not only empirically but also based on reasoning about the phenomena that may take place within the nuclear plant. This report focuses on a particular element technique, defined as 'explanation-based knowledge acquisition', as the candidate technique that could potentially be extended to meet the requirement above, and discusses its applicability to the learning support system and the necessary improvements, in order to identify future technical developments. (author)

  17. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.

    2014-01-01

    A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Nearly 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles. © 2014 Kun Zhou et al.
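
    The stochastic part of the algorithm, the Marcus-Lushnikov process, simulates coagulation by merging randomly chosen particle pairs at rates set by a coagulation kernel. A serial toy version for a constant kernel follows (our illustration; the paper's MPI parallelization and the deterministic nucleation/growth steps are omitted):

```python
import random

def marcus_lushnikov(volumes, t_end, kernel_const=1.0, seed=0):
    """Marcus-Lushnikov coagulation with a constant kernel K(i,j) = K0.

    With a constant kernel every particle pair is equally likely to
    coagulate, so each event picks a uniform random pair, merges it,
    and advances time by an exponential waiting time with total rate
    K0 * n * (n - 1) / 2.
    """
    rng = random.Random(seed)
    v = list(volumes)
    t = 0.0
    while len(v) > 1:
        n = len(v)
        t += rng.expovariate(kernel_const * n * (n - 1) / 2.0)
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)
        v[i] += v[j]            # coalesce the pair...
        v.pop(j)                # ...and remove the absorbed particle
    return v

sizes = marcus_lushnikov([1.0] * 500, t_end=0.004)
print(len(sizes), sum(sizes))   # fewer particles; total volume conserved
```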

  18. COMPUTER-AIDED DATA ACQUISITION FOR COMBUSTION EXPERIMENTS

    Science.gov (United States)

    The article describes the use of computer-aided data acquisition techniques to aid the research program of the Combustion Research Branch (CRB) of the U.S. EPA's Air and Energy Engineering Research Laboratory (AEERL) in Research Triangle Park, NC, in particular on CRB's bench-sca...

  19. Embedded systems design for high-speed data acquisition and control

    CERN Document Server

    Di Paolo Emilio, Maurizio

    2015-01-01

    This book serves as a practical guide for practicing engineers who need to design embedded systems for high-speed data acquisition and control systems. A minimum amount of theory is presented, along with a review of analog and digital electronics, followed by detailed explanations of essential topics in hardware design and software development. The discussion of hardware focuses on microcontroller design (ARM microcontrollers and FPGAs), techniques of embedded design, high-speed data acquisition (DAQ) and control systems. Coverage of software development includes main programming techniques, culminating in the study of real-time operating systems. All concepts are introduced in a manner that is highly accessible to practicing engineers and lead to the practical implementation of an embedded board that can be used in various industrial fields as a control system and high-speed data acquisition system.   • Describes fundamentals of embedded systems design in an accessible manner; • Takes a problem-solving ...

  20. Xyce parallel electronic simulator : users' guide.

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is

  1. Mergers and acquisitions: new arrangements in health care. Part 1.

    Science.gov (United States)

    Grant, E A

    1988-02-01

    Mergers and acquisitions are assuming a more important role in the healthcare industry today. These transactions require various issues be considered, such as valuation, capital planning, and so forth. In this article, the first in a five-part series on mergers and acquisitions, the fundamental methods and techniques of valuation are discussed. Some of these valuation methods, including comparative market transactions and free cash flow, are explained and examples are used to help potential purchasers and sellers to determine an organization's true value. Other articles in this series will include legal issues, tax implications, purchase investigations, and capital planning for mergers and acquisitions.
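
    Of the valuation methods named, free cash flow is the most mechanical: forecast cash flows are discounted at the acquirer's required rate of return, and a terminal value is added. A hedged numeric sketch (all figures invented, not from the article):

```python
def dcf_value(cash_flows, discount_rate, terminal_growth):
    """Discounted-free-cash-flow valuation.

    Present value of forecast cash flows plus a Gordon-growth terminal
    value on the final year's flow. All inputs are illustrative.
    """
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    terminal = (cash_flows[-1] * (1 + terminal_growth)
                / (discount_rate - terminal_growth))
    pv += terminal / (1 + discount_rate) ** len(cash_flows)
    return pv

# Five-year forecast for a hypothetical hospital, $ millions.
print(round(dcf_value([10, 11, 12, 13, 14], 0.12, 0.03), 1))
```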

  2. The Effects of Acquisitions on Firm Value, Evidence from Turkey

    Directory of Open Access Journals (Sweden)

    Fatma Büşra GÜNAY BENDAŞ

    2015-01-01

    Full Text Available Acquisitions are assumed to create value for both the target and the acquiring firm. This paper analyzes the sources of value creation in acquisitions and examines the domestic acquisitions that took place in Turkey in 2013. By taking the overall market considerations into account, I measure the degree of value creation over different periods of time. I use the standard market value technique to calculate abnormal returns in stock prices of the acquiring firms and find that the increase in firm value is statistically significant in the long run but not in the short run.

  3. Static and dynamic load-balancing strategies for parallel reservoir simulation

    International Nuclear Information System (INIS)

    Anguille, L.; Killough, J.E.; Li, T.M.C.; Toepfer, J.L.

    1995-01-01

    Accurate simulation of the complex phenomena that occur in flow through porous media can tax even the most powerful serial computers. The emergence of new parallel computer architectures as a future efficient tool in reservoir simulation may overcome this difficulty. Unfortunately, major problems remain to be solved before parallel computers can be used commercially: production serial programs must be rewritten to be efficient in parallel environments, and load-balancing methods must be explored to distribute the workload evenly across the processors during the simulation. This study implements both a static load-balancing algorithm and a receiver-initiated dynamic load-sharing algorithm to achieve high parallel efficiencies on both the IBM SP2 and Intel iPSC/860 parallel computers. Significant speedup improvement was recorded for both methods. Further optimization of these algorithms yielded a technique with efficiencies as high as 90% and 70% on 8 and 32 nodes, respectively. The increased performance was the result of the minimization of message-passing overhead.

  4. PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture

    Directory of Open Access Journals (Sweden)

    Kanokmon Rujirakul

    2014-01-01

    Full Text Available Principal component analysis or PCA has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring only a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of those stages' complexity. To improve the computational time, a novel parallel architecture was employed to exploit the benefits of parallelizing the matrix computations during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times relative to PCA and parallel PCA, respectively.

  5. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    Science.gov (United States)

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis or PCA has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring only a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of those stages' complexity. To improve the computational time, a novel parallel architecture was employed to exploit the benefits of parallelizing the matrix computations during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times relative to PCA and parallel PCA, respectively.
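
    Both records describe replacing the covariance and eigendecomposition stages with an Expectation-Maximization iteration. The NumPy sketch below implements the generic EM algorithm for PCA (in the style of Roweis) serially; the parallel preprocessing and classification stages of PEM-PCA are not reproduced, and the data dimensions are illustrative.

        import numpy as np

        def em_pca(X, k, iters=100):
            """X: (d, n) centered data; returns a (d, k) orthonormal basis
            of the leading principal subspace, found without forming the
            d-by-d covariance matrix or its eigendecomposition."""
            d, n = X.shape
            W = np.random.default_rng(0).normal(size=(d, k))
            for _ in range(iters):
                Z = np.linalg.solve(W.T @ W, W.T @ X)   # E-step: projections
                W = X @ Z.T @ np.linalg.inv(Z @ Z.T)    # M-step: new basis
            Q, _ = np.linalg.qr(W)                      # orthonormalize the result
            return Q

        X = np.random.default_rng(1).normal(size=(50, 1000))
        X -= X.mean(axis=1, keepdims=True)
        print(em_pca(X, 3).shape)                       # (50, 3)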

  6. PARALLEL ADAPTIVE MULTILEVEL SAMPLING ALGORITHMS FOR THE BAYESIAN ANALYSIS OF MATHEMATICAL MODELS

    KAUST Repository

    Prudencio, Ernesto; Cheung, Sai Hung

    2012-01-01

    In recent years, Bayesian model updating techniques based on measured data have been applied to many engineering and applied science problems. At the same time, parallel computational platforms are becoming increasingly more powerful and are being used more frequently by the engineering and scientific communities. Bayesian techniques usually require the evaluation of multi-dimensional integrals related to the posterior probability density function (PDF) of uncertain model parameters. The fact that such integrals cannot be computed analytically motivates the research of stochastic simulation methods for sampling posterior PDFs. One such algorithm is the adaptive multilevel stochastic simulation algorithm (AMSSA). In this paper we discuss the parallelization of AMSSA, formulating the necessary load balancing step as a binary integer programming problem. We present a variety of results showing the effectiveness of load balancing on the overall performance of AMSSA in a parallel computational environment.

  7. Leveraging Non-Uniform Resources for Parallel Query Processing

    DEFF Research Database (Denmark)

    Mayr, Tobias; Bonnet, Philippe; Gehrke, Johannes

    2003-01-01

    Modular clusters are now composed of non-uniform nodes with different CPUs, disks or network cards so that customers can adapt the cluster configuration to the changing technologies and to their changing needs. This challenges dataflow parallelism as the primary load balancing technique of exist...

  8. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  9. Frames of reference in spatial language acquisition.

    Science.gov (United States)

    Shusterman, Anna; Li, Peggy

    2016-08-01

    Languages differ in how they encode spatial frames of reference. It is unknown how children acquire the particular frame-of-reference terms in their language (e.g., left/right, north/south). The present paper uses a word-learning paradigm to investigate 4-year-old English-speaking children's acquisition of such terms. In Part I, with five experiments, we contrasted children's acquisition of novel word pairs meaning left-right and north-south to examine their initial hypotheses and the relative ease of learning the meanings of these terms. Children interpreted ambiguous spatial terms as having environment-based meanings akin to north and south, and they readily learned and generalized north-south meanings. These studies provide the first direct evidence that children invoke geocentric representations in spatial language acquisition. However, the studies leave unanswered how children ultimately acquire "left" and "right." In Part II, with three more experiments, we investigated why children struggle to master body-based frame-of-reference words. Children successfully learned "left" and "right" when the novel words were systematically introduced on their own bodies and extended these words to novel (intrinsic and relative) uses; however, they had difficulty learning to talk about the left and right sides of a doll. This difficulty was paralleled in identifying the left and right sides of the doll in a non-linguistic memory task. In contrast, children had no difficulties learning to label the front and back sides of a doll. These studies begin to paint a detailed account of the acquisition of spatial terms in English, and provide insights into the origins of diverse spatial reference frames in the world's languages. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  11. Parallel processing algorithms for hydrocodes on a computer with MIMD architecture (DENELCOR's HEP)

    International Nuclear Information System (INIS)

    Hicks, D.L.

    1983-11-01

    In the real-time simulation and prediction of complex systems such as water-cooled nuclear reactors, if reactor operators had fast simulator/predictors with which to check the consequences of their operations before implementing them, events such as the incident at Three Mile Island might be avoided. However, existing simulator/predictors such as RELAP run slower than real time on serial computers. It appears that the only way to overcome this barrier to higher computing rates is to use computers with architectures that allow concurrent computations, or parallel processing. The computer architecture with the greatest degree of parallelism is labeled Multiple Instruction Stream, Multiple Data Stream (MIMD). An example of a machine of this type is the HEP computer by DENELCOR. Hydrocodes appear to be very well suited for parallelization on the HEP. It is a straightforward exercise to parallelize explicit, one-dimensional Lagrangian hydrocodes zone by zone. Similarly, implicit schemes can be parallelized in a zone-by-zone fashion via an a priori, symbolic inversion of the tridiagonal matrix that arises in an implicit scheme. These techniques are extended to Eulerian hydrocodes by using Harlow's rezone technique. The extension from single-phase Eulerian to two-phase Eulerian is straightforward. This step-by-step extension leads to hydrocodes with zone-by-zone parallelization that are capable of two-phase flow simulation. Extensions to two and three spatial dimensions can be achieved by operator splitting. It appears that a zone-by-zone parallelization is the best way to utilize the capabilities of an MIMD machine. 40 references

  12. Real-time control and data-acquisition system for high-energy neutral-beam injectors

    International Nuclear Information System (INIS)

    Glad, A.S.; Jacobson, V.

    1981-12-01

    The need for a real-time control system and a data acquisition, processing and archiving system operating in parallel on the same computer became a requirement on General Atomic's Doublet III fusion energy project with the addition of high-energy neutral beam injectors. The data acquisition, processing and archiving system is driven by external events and is sequenced through each experimental shot utilizing ModComp's intertask message service. This system processes, archives and displays on operator console CRTs all physics diagnostic data related to the neutral beam injectors, such as temperature, beam alignment, etc. The real-time control system is database driven and provides periodic monitoring and control of the numerous dynamic subsystems of the neutral beam injectors, such as power supplies, timing, water cooling, etc.

  13. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  14. Impact analysis on a massively parallel computer

    International Nuclear Information System (INIS)

    Zacharia, T.; Aramayo, G.A.

    1994-01-01

    Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper

  15. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    Science.gov (United States)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  16. A new parallel molecular dynamics algorithm for organic systems

    International Nuclear Information System (INIS)

    Plimpton, S.; Hendrickson, B.; Heffelfinger, G.

    1993-01-01

    A new parallel algorithm for simulating bonded molecular systems such as polymers and proteins by molecular dynamics (MD) is presented. In contrast to methods that extract parallelism by breaking the spatial domain into sub-pieces, the new method does not require regular geometries or uniform particle densities to achieve high parallel efficiency. For very large, regular systems spatial methods are often the best choice, but in practice the new method is faster for systems with tens-of-thousands of atoms simulated on large numbers of processors. It is also several times faster than the techniques commonly used for parallelizing bonded MD that assign a subset of atoms to each processor and require all-to-all communication. Implementation of the algorithm in a CHARMm-like MD model with many body forces and constraint dynamics is discussed and timings on the Intel Delta and Paragon machines are given. Example calculations using the algorithm in simulations of polymers and liquid-crystal molecules will also be briefly discussed

  17. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the
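
    As a toy illustration of the book's subject, the sketch below sorts slices of the input concurrently with a process pool and then k-way merges the sorted runs; it is a generic Python example, not one of the twenty algorithms presented in the text.

        import heapq
        import random
        from multiprocessing import Pool

        def parallel_sort(items, workers=4):
            step = -(-len(items) // workers)            # ceiling division
            chunks = [items[i:i + step] for i in range(0, len(items), step)]
            with Pool(workers) as pool:
                runs = pool.map(sorted, chunks)         # sort slices in parallel
            return list(heapq.merge(*runs))             # k-way merge of sorted runs

        if __name__ == "__main__":
            data = [random.randrange(10**6) for _ in range(100_000)]
            assert parallel_sort(data) == sorted(data)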

  18. A FIFO based neutron arrival time collection technique for assay of plutonium

    International Nuclear Information System (INIS)

    Parthasarathy, R.; Saisubalakshmi, D.; Venkatasubramani, C.R.

    2004-01-01

    The system assays plutonium by counting the time-correlated neutrons emitted by spontaneous fissions of the even-even Pu isotopes in the presence of a random neutron background originating principally from (α,n) reactions in the material. The correlation technique discussed in this paper utilizes twofold neutron coincidence counting, but the system is proposed to be enhanced for neutron multiplicity counting. A microcontroller-based data acquisition system has been developed using a couple of fast FIFO 2k x 9-bit memory ICs and a 16-bit counter for identifying time-correlated neutrons. Since the neutron pulses arrive at a rapid rate, the incoming pulses are buffered in the FIFO and then transferred to the PC by the microcontroller through the parallel port. The correlation analysis based on this arrival-time information is done off-line in the PC. (author)
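
    A minimal sketch of the off-line twofold coincidence analysis such a system performs on the recorded arrival times: events separated by less than a coincidence gate are counted as correlated pairs. The gate width and timestamps are illustrative, not instrument parameters.

        def twofold_coincidences(arrival_times, gate):
            """arrival_times: sorted timestamps (same units as gate).
            Counts ordered pairs separated by less than the gate width."""
            count, j = 0, 0
            for i, t in enumerate(arrival_times):
                while arrival_times[j] < t - gate:      # drop events outside the gate
                    j += 1
                count += i - j                          # all events in (t - gate, t)
            return count

        times = [0.0, 1.2, 1.3, 5.0, 5.1, 5.15, 9.0]    # e.g., microseconds
        print(twofold_coincidences(times, gate=0.5))    # -> 4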

  19. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages, (up to 1000 words) or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
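
    For concreteness, the following NumPy sketch shows a serial version of the V-cycle being modeled (weighted-Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation) on a 1D Poisson problem; the paper's communication-cost models and block-structured 2D/3D grids are not reproduced, and the grid size here is illustrative.

        import numpy as np

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
            return r

        def smooth(u, f, h, sweeps=3, w=2/3):
            for _ in range(sweeps):                     # weighted-Jacobi sweeps
                u[1:-1] += 0.5 * w * (h**2 * f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]))
            return u

        def v_cycle(u, f, h):
            u = smooth(u, f, h)                         # pre-smoothing
            if u.size <= 3:                             # coarsest grid: solve well
                return smooth(u, f, h, sweeps=50)
            r = residual(u, f, h)
            rc = r[::2].copy()                          # full-weighting restriction
            rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
            ec = v_cycle(np.zeros_like(rc), rc, 2 * h)  # coarse-grid correction
            e = np.zeros_like(u)                        # linear prolongation
            e[::2] = ec
            e[1::2] = 0.5 * (ec[:-1] + ec[1:])
            return smooth(u + e, f, h)                  # post-smoothing

        n = 129                                         # 2**7 + 1 grid points
        x = np.linspace(0.0, 1.0, n)
        h = x[1] - x[0]
        f = np.pi**2 * np.sin(np.pi * x)                # -u'' = f  =>  u = sin(pi x)
        u = np.zeros(n)
        for _ in range(10):
            u = v_cycle(u, f, h)
        print(np.abs(u - np.sin(np.pi * x)).max())      # small; discretization-limited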

  20. Development of fast parallel multi-technique scanning X-ray imaging at Synchrotron Soleil

    Science.gov (United States)

    Medjoubi, K.; Leclercq, N.; Langlois, F.; Buteau, A.; Lé, S.; Poirier, S.; Mercère, P.; Kewish, C. M.; Somogyi, A.

    2013-10-01

    A fast multimodal scanning X-ray imaging scheme is prototyped at Soleil Synchrotron. It permits the simultaneous acquisition of complementary information on the sample structure, composition and chemistry by measuring transmission, differential phase contrast, small-angle scattering, and X-ray fluorescence by dedicated detectors with ms dwell time per pixel. The results of the proof of principle experiments are presented in this paper.

  1. High spatial resolution CT image reconstruction using parallel computing

    International Nuclear Information System (INIS)

    Yin Yin; Liu Li; Sun Gongxing

    2003-01-01

    Using a PC cluster system with 16 dual-CPU nodes, we accelerate the FBP and OR-OSEM reconstruction of high-spatial-resolution images (2048 x 2048). Based on the number of projections, we rewrite the reconstruction algorithms in parallel form and dispatch the tasks to each CPU. With parallel computing, the speedup factor is roughly equal to the number of CPUs, up to about 25 times when 25 CPUs are used. This technique is very suitable for real-time high-spatial-resolution CT image reconstruction. (authors)
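
    A hedged sketch of the "split by projections" scheme described above: each worker backprojects a subset of the projection angles and the partial images are summed. The phantom data and geometry are placeholders, and the FBP ramp filter is omitted for brevity.

        # Angle-partitioned backprojection sketch (ramp filtering omitted).
        import numpy as np
        from multiprocessing import Pool

        N = 256                                          # image is N x N
        angles = np.linspace(0.0, np.pi, 180, endpoint=False)

        def backproject(args):
            sino_part, ang_part = args
            xs = np.arange(N) - N / 2
            X, Y = np.meshgrid(xs, xs)
            img = np.zeros((N, N))
            for row, a in zip(sino_part, ang_part):
                t = X * np.cos(a) + Y * np.sin(a) + N / 2   # detector coordinate
                idx = np.clip(t.astype(int), 0, N - 1)
                img += row[idx]                              # nearest-neighbour smear
            return img

        if __name__ == "__main__":
            sino = np.random.rand(len(angles), N)            # stand-in sinogram
            parts = np.array_split(np.arange(len(angles)), 4)
            with Pool(4) as pool:
                partials = pool.map(backproject,
                                    [(sino[p], angles[p]) for p in parts])
            recon = sum(partials) * np.pi / len(angles)
            print(recon.shape)                               # (256, 256)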

  2. Eigenvalues calculation algorithms for λ-modes determination. Parallelization approach

    Energy Technology Data Exchange (ETDEWEB)

    Vidal, V. [Universidad Politecnica de Valencia (Spain). Departamento de Sistemas Informaticos y Computacion; Verdu, G.; Munoz-Cobo, J.L. [Universidad Politecnica de Valencia (Spain). Departamento de Ingenieria Quimica y Nuclear; Ginestart, D. [Universidad Politecnica de Valencia (Spain). Departamento de Matematica Aplicada

    1997-03-01

    In this paper, we review two methods to obtain the λ-modes of a nuclear reactor, the Subspace Iteration method and Arnoldi's method, which are popular methods for solving the partial eigenvalue problem of a given matrix. In the application developed for the neutron diffusion equation we include improved acceleration techniques for both methods. We also propose two parallelization approaches for these methods, a coarse-grain parallelization and a fine-grain one. We have tested the developed algorithms on two realistic problems, focusing on the efficiency of the methods in terms of CPU times. (author).
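
    The simplest relative of the subspace-iteration method reviewed here is power iteration for a single dominant eigenpair; subspace iteration applies the same idea to a block of vectors with periodic re-orthonormalization. A minimal sketch follows, with a random nonnegative matrix standing in for the discretized diffusion operator:

        import numpy as np

        def power_iteration(A, tol=1e-10, max_iter=10000):
            """Dominant eigenvalue and eigenvector of A by repeated
            multiplication and normalization."""
            x = np.ones(A.shape[0])
            lam = 0.0
            for _ in range(max_iter):
                y = A @ x
                lam_new = np.linalg.norm(y)
                x = y / lam_new
                if abs(lam_new - lam) < tol:
                    break
                lam = lam_new
            return lam, x

        M = np.random.default_rng(0).random((100, 100))  # nonnegative stand-in
        lam, x = power_iteration(M)
        print(lam, np.abs(M @ x - lam * x).max())        # residual near zero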

  3. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with auditory processing disorder can be grouped into environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  4. AdiosStMan: Parallelizing Casacore Table Data System using Adaptive IO System

    Science.gov (United States)

    Wang, R.; Harris, C.; Wicenec, A.

    2016-07-01

    In this paper, we investigate the Casacore Table Data System (CTDS) used in the casacore and CASA libraries, and methods to parallelize it. CTDS provides a storage manager plugin mechanism for third-party developers to design and implement their own CTDS storage managers. With this in mind, we looked into various storage backend techniques that could enable parallel I/O for CTDS by implementing new storage managers. After carrying out benchmarks showing the excellent parallel I/O throughput of the Adaptive IO System (ADIOS), we implemented an ADIOS-based parallel CTDS storage manager. We then applied the CASA MSTransform frequency-split task to verify the ADIOS storage manager. We also ran a series of performance tests to examine the I/O throughput in a massively parallel scenario.

  5. Future data acquisition at ISIS

    International Nuclear Information System (INIS)

    Pulford, W.C.A.; Quinton, S.P.H.; Johnson, M.W.; Norris, J.

    1989-01-01

    Over the past year ISIS beam intensity has increased steadily to 100 microamps during periods of good running. With the instrument users finding it comparatively easy to set up data-collection runs, we are facing an ever-increasing volume of incoming data. Greatly improved detector technology, mainly involving large areas of zinc sulfide phosphor, is expected to contribute much to the capacity of new diffractometers as well as provide an enhancement path for many of the existing ones. It is clear that we are fast reaching the point where, if we continue to use our current data-collection techniques, our computer systems will no longer be able to migrate the data to long-term storage, let alone enable their analysis at a speed compatible with continuous use of the ISIS instruments. The most effective method to improve this situation is to reduce the volume of data flowing between the data acquisition electronics and the front-end minicomputers, and to provide facilities to monitor data acquisition within the data acquisition electronics. Processing power must be incorporated closer to the point of data collection. Ways of doing this are discussed and evaluated. (author)

  6. Inductive acquisition of expert knowledge

    Energy Technology Data Exchange (ETDEWEB)

    Muggleton, S.H.

    1986-01-01

    Expert systems divide neatly into two categories: those in which (1) the expert decisions result in changes to some external environment (control systems), and (2) the expert decisions merely seek to describe the environment (classification systems). Both the explanation of computer-based reasoning and the bottleneck (Feigenbaum, 1979) of knowledge acquisition are major issues in expert-systems research. The author contributed to these areas of research in two ways: 1. He implemented an expert-system shell, the Mugol environment, which facilitates knowledge acquisition by inductive inference and provides automatic explanation of run-time reasoning on demand. RuleMaster, a commercial version of this environment, was used to advantage industrially in the construction and testing of two large classification systems. 2. He investigated a new technique called 'sequence induction' that can be used in the construction of control systems. Sequence induction is based on theoretical work in grammatical learning. He improved existing grammatical learning algorithms as well as suggesting and theoretically characterizing new ones. These algorithms were successfully applied to the acquisition of knowledge for a diverse set of control systems, including the inductive construction of robot plans and chess end-game strategies.

  7. Dual-volume excitation and parallel reconstruction for J-difference-edited MR spectroscopy

    DEFF Research Database (Denmark)

    Oeltzschner, Georg; Puts, Nicolaas A J; Chan, Kimberly L

    2017-01-01

    PURPOSE: To develop J-difference editing with parallel reconstruction in accelerated multivoxel (PRIAM) for simultaneous measurement in two separate brain regions of γ-aminobutyric acid (GABA) or glutathione. METHODS: PRIAM separates signals from two simultaneously excited voxels using receiver… …successfully reconstructed with a mean in vivo g-factor of 1.025 (typical voxel-center separation: 7-8 cm). MEGA-PRIAM experiments showed a higher signal-to-noise ratio than sequential single-voxel experiments of the same total duration (mean improvement 1.38 ± 0.24). CONCLUSIONS: Simultaneous acquisition of J…

  8. Speeding Up the String Comparison of the IDS Snort using Parallel Programming: A Systematic Literature Review on the Parallelized Aho-Corasick Algorithm

    Directory of Open Access Journals (Sweden)

    SILVA JUNIOR,J. B.

    2016-12-01

    Full Text Available The Intrusion Detection System (IDS) needs to compare the contents of all packets arriving at the network interface with a set of signatures indicating possible attacks, a task that consumes much CPU processing time. In order to alleviate this problem, some researchers have tried to parallelize the IDS's comparison engine, transferring execution from the CPU to the GPU. This paper identifies and maps the parallelization features of the Aho-Corasick algorithm, which is used in Snort to compare patterns, in order to show this algorithm's implementation and execution issues, as well as optimization techniques for the Aho-Corasick machine. We found 147 papers in important computer science publication databases and mapped them; we selected 22 and analyzed them in order to reach our results. Our analysis showed, among other results, that parallelization of the AC algorithm is a new task and that authors have focused on the State Transition Table as the most common way to implement the algorithm on the GPU. Furthermore, we found that techniques that speed up the algorithm and reduce the required storage space are widely used, such as running the automaton out of the fastest memories and mechanisms for reducing the number of nodes and bit mapping.
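
    To make the mapped algorithm concrete, here is a minimal serial Aho-Corasick implementation in Python using dictionary transitions and BFS-constructed failure links; the GPU ports surveyed above instead flatten this structure into a State Transition Table. This is a generic sketch, not code from any of the reviewed papers.

        from collections import deque

        def build_automaton(patterns):
            goto, fail, out = [{}], [0], [set()]
            for p in patterns:                         # phase 1: build the trie
                s = 0
                for ch in p:
                    if ch not in goto[s]:
                        goto.append({}); fail.append(0); out.append(set())
                        goto[s][ch] = len(goto) - 1
                    s = goto[s][ch]
                out[s].add(p)
            queue = deque(goto[0].values())            # phase 2: BFS failure links
            while queue:
                s = queue.popleft()
                for ch, t in goto[s].items():
                    queue.append(t)
                    f = fail[s]
                    while f and ch not in goto[f]:
                        f = fail[f]
                    fail[t] = goto[f].get(ch, 0)
                    out[t] |= out[fail[t]]             # inherit matches via suffixes
            return goto, fail, out

        def search(text, patterns):
            goto, fail, out = build_automaton(patterns)
            s, hits = 0, []
            for i, ch in enumerate(text):
                while s and ch not in goto[s]:         # follow failure links
                    s = fail[s]
                s = goto[s].get(ch, 0)
                hits += [(i - len(p) + 1, p) for p in out[s]]
            return sorted(hits)

        print(search("ushers", ["he", "she", "his", "hers"]))
        # [(1, 'she'), (2, 'he'), (2, 'hers')]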

  9. PERFORMANCE ANALYSIS BETWEEN EXPLICIT SCHEDULING AND IMPLICIT SCHEDULING OF PARALLEL ARRAY-BASED DOMAIN DECOMPOSITION USING OPENMP

    Directory of Open Access Journals (Sweden)

    MOHAMMED FAIZ ABOALMAALY

    2014-10-01

    Full Text Available With the continuing revolution in multicore architectures, several parallel programming platforms have been introduced to pave the way for fast and efficient development of parallel algorithms. Broadly, parallel computing takes two forms: Data-Level Parallelism (DLP) or Task-Level Parallelism (TLP). The former distributes data among the available processing elements, while the latter executes independent tasks concurrently. Most parallel programming platforms have built-in techniques to distribute the data among processors; these techniques are technically known as automatic distribution (scheduling). However, due to the wide range of purposes, variation in data types, amount of distributed data, possible extra computational overhead and other hardware-dependent factors, manual distribution can achieve better performance than automatic distribution. In this paper, this assumption is investigated by comparing automatic distribution against our newly proposed manual distribution of data among threads. Empirical results for matrix addition and matrix multiplication show a considerable performance gain when manual distribution is applied instead of automatic distribution.

  10. Word-final stops in Brazilian Portuguese English: acquisition and pronunciation instruction

    Directory of Open Access Journals (Sweden)

    Walcir Cardoso

    2008-01-01

    Full Text Available http://dx.doi.org/10.5007/2175-8026.2008n55p153 This paper presents current research on the second language acquisition of English phonology and its implications for (and applications to) pronunciation instruction in the language classroom. More specifically, the paper follows the development of English word-final consonants by Brazilian Portuguese speakers learning English as a foreign language. The findings of two parallel studies reveal that the acquisition of these constituents is motivated by both extralinguistic (proficiency, style) and linguistic (word size, place of articulation) factors, and that the process is mediated by an intermediate stage characterized by consonant lengthening or aspiration (Onset-Nucleus sharing). Based on these results, I propose that the segments and environments that seem to delay coda production (i.e., monosyllabic words, labial and dorsal consonants) should be given priority in pronunciation instruction. Along the lines of Dickerson (1975), this paper proposes what we believe is a more effective and socially realistic pedagogy for the teaching of English pronunciation, within an approach that recognizes that "variability is the norm rather than the exception" in second language acquisition.

  11. PC based 8-parameter data acquisition system

    International Nuclear Information System (INIS)

    Gupta, J.D.; Naik, K.V.; Jain, S.K.; Pathak, R.V.; Suman, B.

    1989-01-01

    Multiparameter data acquisition (MPA) systems, which analyse nuclear events with respect to more than one property of the event, are essential tools for the study of complex nuclear phenomena requiring analysis of time-coincident spectra. For better throughput and accuracy, each parameter is digitized by its own ADC. A stand-alone, low-cost IBM PC based 8-parameter data acquisition system developed by the authors makes use of an address-recording technique for acquiring data from eight 12-bit ADCs into the PC memory. Two memory buffers in the PC memory are used in ping-pong fashion so that data acquisition into one bank and dumping of data from the other bank onto the PC disk can proceed simultaneously. Data are acquired into the PC memory in DMA mode to realise high throughput, and a hardware interrupt is used for switching banks during data acquisition. A comprehensive software package developed in Turbo Pascal offers a set of menu-driven interactive commands for setting up system parameters and controlling the system. The system is to be used with a pelletron accelerator. (author). 5 figs
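
    A toy software analog of the ping-pong buffering scheme described above, with one thread filling the active buffer while another flushes completed buffers to disk; the buffer size, event source, and file name are placeholders, not details of the instrument.

        import threading
        import queue
        import random

        BUF_SIZE = 1024
        full_buffers = queue.Queue()

        def acquire(n_events):
            buf = []
            for _ in range(n_events):
                buf.append(random.randrange(4096))   # stand-in for a 12-bit ADC word
                if len(buf) == BUF_SIZE:
                    full_buffers.put(buf)            # hand over the full bank
                    buf = []                         # switch to the other bank
            if buf:
                full_buffers.put(buf)
            full_buffers.put(None)                   # end-of-run marker

        def flush(path):
            with open(path, "w") as fh:
                while (buf := full_buffers.get()) is not None:
                    fh.write("\n".join(map(str, buf)) + "\n")

        writer = threading.Thread(target=flush, args=("run.dat",))
        writer.start()
        acquire(10000)
        writer.join()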

  12. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: in the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...

  13. Data acquisition and test system software

    International Nuclear Information System (INIS)

    Bourgeois, N.A. Jr.

    1979-03-01

    Sandia Laboratories has been assigned the task by the Base and Installation Security Systems (BISS) Program Office to develop various aspects of perimeter security systems. One part of this effort involves the development of advanced signal processing techniques to reduce the false and nuisance alarms from sensor systems while improving the probability of intrusion detection. The need existed for both data acquisition hardware and software. The hardware is also used to implement and test the signal processing algorithms in real time. The hardware developed for this signal processing task is the Data Acquisition and Test System (DATS). The programs developed for use on DATS are described; the descriptions are taken directly from the documentation included within the source programs themselves.

  14. Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo

    Science.gov (United States)

    Khosravi, Ebrahim

    1998-12-01

    This dissertation solves a fundamental problem, the isolation of the real roots of nonlinear systems of equations by Monte Carlo, published by Bush Jones. The algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics and other fields of science. The algorithm, however, is so computationally intensive that it is limited to a very small number of variables, making it infeasible for large systems of equations. A computational technique was also needed to investigate a methodology for preventing the algorithm from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm was corrected, and a parallel algorithm is presented. The parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of the parallel program in comparison with sequential processing are discussed. The message-passing model was used for the parallel processing, and it is presented and implemented on the Intel/860 MIMD architecture. The parallel processing proposed in this research has been implemented in an ongoing high-energy physics experiment: the algorithm has been used to track neutrinos in a Super-K detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.

  15. Speed in Acquisitions

    DEFF Research Database (Denmark)

    Meglio, Olimpia; King, David R.; Risberg, Annette

    2017-01-01

    The advantage of speed is often invoked by academics and practitioners as an essential condition during post-acquisition integration, frequently without consideration of the impact earlier decisions have on acquisition speed. In this article, we examine the role speed plays across the acquisition process, using research organized around characteristics that display complexity with respect to acquisition speed. We incorporate existing research with a process perspective of acquisitions in order to present trade-offs, and consider the influence of both stakeholders and the pre-deal-completion context on acquisition speed, as well as the organization's capabilities for facilitating that speed. Observed trade-offs suggest both that acquisition speed often requires longer planning time before an acquisition and that associated decisions require managerial judgement. A framework for improving…

  16. Parallel k-means++ for Multiple Shared-Memory Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Mackey, Patrick S.; Lewis, Robert R.

    2016-09-22

    In recent years k-means++ has become a popular initialization technique for improved k-means clustering. To date, most of the work done to improve its performance has involved parallelizing algorithms that are only approximations of k-means++. In this paper we present a parallelization of the exact k-means++ algorithm, with a proof of its correctness. We develop implementations for three distinct shared-memory architectures: multicore CPU, high performance GPU, and the massively multithreaded Cray XMT platform. We demonstrate the scalability of the algorithm on each platform. In addition we present a visual approach for showing which platform performed k-means++ the fastest for varying data sizes.
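
    For reference, the D²-weighted seeding step that the paper parallelizes looks as follows in serial NumPy form; the distance update on the last line is the part spread across cores or threads in the paper's implementations. Data and parameters here are illustrative.

        import numpy as np

        def kmeans_pp_init(X, k, seed=0):
            """X: (n, d) points; returns (k, d) initial centers chosen by
            the exact k-means++ D^2-weighted sampling rule."""
            rng = np.random.default_rng(seed)
            n = X.shape[0]
            centers = [X[rng.integers(n)]]
            d2 = np.sum((X - centers[0])**2, axis=1)        # squared distances
            for _ in range(k - 1):
                probs = d2 / d2.sum()                       # D^2 weighting
                centers.append(X[rng.choice(n, p=probs)])
                d2 = np.minimum(d2, np.sum((X - centers[-1])**2, axis=1))
            return np.array(centers)

        X = np.random.default_rng(1).normal(size=(10000, 2))
        print(kmeans_pp_init(X, 5))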

  17. Parallel Harmony Search Based Distributed Energy Resource Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ceylan, Oguzhan [ORNL; Liu, Guodong [ORNL; Tomsovic, Kevin [University of Tennessee, Knoxville (UTK)

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize the active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout the day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that, by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution system operation.
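
    A minimal continuous harmony-search sketch showing the mechanics the paper builds on (harmony memory, memory consideration, pitch adjustment, replacement of the worst harmony); the distribution-system objective is replaced by a simple test function, and the parameter names follow common usage rather than the paper.

        import random

        def harmony_search(obj, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                           iters=5000):
            rand_vec = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
            memory = sorted((rand_vec() for _ in range(hms)), key=obj)
            for _ in range(iters):
                new = []
                for j, (lo, hi) in enumerate(bounds):
                    if random.random() < hmcr:              # memory consideration
                        v = random.choice(memory)[j]
                        if random.random() < par:           # pitch adjustment
                            v += random.uniform(-bw, bw) * (hi - lo)
                    else:                                   # random selection
                        v = random.uniform(lo, hi)
                    new.append(min(max(v, lo), hi))
                if obj(new) < obj(memory[-1]):              # replace the worst
                    memory[-1] = new
                    memory.sort(key=obj)
            return memory[0]

        sphere = lambda x: sum(v * v for v in x)
        print(harmony_search(sphere, [(-5, 5)] * 3))        # near [0, 0, 0]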

  18. Parallel computing for homogeneous diffusion and transport equations in neutronics

    Energy Technology Data Exchange (ETDEWEB)

    Pinchedez, K

    1999-06-01

    Parallel computing meets the ever-increasing requirements for neutronic computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with the PN simplified transport equations by a mixed finite element method. Several parallel algorithms have been developed for distributed memory machines. The performances of the parallel algorithms have been studied experimentally by implementation on a Cray T3D and theoretically by complexity models. A comparison of the various parallel algorithms has confirmed the chosen implementations. We next applied a domain sub-division technique to the two-group diffusion eigenproblem. In the modal synthesis-based method, the global spectrum is determined from the partial spectra associated with the sub-domains. The eigenproblem is then expanded on a family composed, on the one hand, of eigenfunctions associated with the sub-domains and, on the other hand, of functions corresponding to the contribution from the interfaces between the sub-domains. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)

  19. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  20. Operating system design of parallel computer for on-line management of nuclear pressurised water reactor cores

    International Nuclear Information System (INIS)

    Gougam, F.

    1991-04-01

    This study is part of the PHAETON project, which aims at increasing the knowledge of the safety parameters of PWR cores and reducing operating margins during the reactor cycle. The on-line system couples a simulator process, which computes the three-dimensional flux distribution, with a process that acquires reactor core parameters from the central instrumentation. The 3D flux calculation is the most time-consuming part; for cost and safety reasons, the PHAETON project therefore proposes to parallelize the 3D diffusion calculation and to use a computer based on a parallel-processor architecture. This paper presents the design of the operating system on which the application is executed. The proposed routine interface includes the main operations necessary for programming a real-time, parallel application. The primitives include task management, data transfer, and synchronisation by event signalling and by rendezvous mechanisms. The proposed primitives build on standard software such as a real-time kernel and the UNIX operating system. [fr]

  1. Parallel GPU implementation of iterative PCA algorithms.

    Science.gov (United States)

    Andrecut, M

    2009-11-01

    Principal component analysis (PCA) is a key statistical technique for multivariate data analysis. For large data sets, the common approach to PCA computation is based on the standard NIPALS-PCA algorithm, which unfortunately suffers from loss of orthogonality, and therefore its applicability is usually limited to the estimation of the first few components. Here we present an algorithm based on Gram-Schmidt orthogonalization (called GS-PCA), which eliminates this shortcoming of NIPALS-PCA. Also, we discuss the GPU (Graphics Processing Unit) parallel implementation of both NIPALS-PCA and GS-PCA algorithms. The numerical results show that the GPU parallel optimized versions, based on CUBLAS (NVIDIA), are substantially faster (up to 12 times) than the CPU optimized versions based on CBLAS (GNU Scientific Library).
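
    A hedged NumPy sketch of the idea behind GS-PCA: NIPALS-style power iterations with deflation, plus an explicit Gram-Schmidt re-orthogonalization of each new loading against the previous ones. The paper's CUBLAS/CBLAS implementations are not reproduced, and the data are synthetic.

        import numpy as np

        def nipals_gs_pca(X, k, iters=500, tol=1e-9):
            """X: (n, d) centered data; returns scores T (n, k) and
            loadings P (d, k) for the first k components."""
            X = X.copy()
            T = np.zeros((X.shape[0], k))
            P = np.zeros((X.shape[1], k))
            for j in range(k):
                t = X[:, 0].copy()                     # start from a data column
                for _ in range(iters):
                    p = X.T @ t / (t @ t)
                    p -= P[:, :j] @ (P[:, :j].T @ p)   # Gram-Schmidt vs. earlier loadings
                    p /= np.linalg.norm(p)
                    t_new = X @ p
                    done = np.linalg.norm(t_new - t) < tol * np.linalg.norm(t_new)
                    t = t_new
                    if done:
                        break
                T[:, j], P[:, j] = t, p
                X -= np.outer(t, p)                    # deflate the fitted component
            return T, P

        X = np.random.default_rng(0).normal(size=(200, 30))
        X -= X.mean(axis=0)
        T, P = nipals_gs_pca(X, 5)
        print(np.allclose(P.T @ P, np.eye(5), atol=1e-6))   # orthonormal loadings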

  2. Examination of Speed Contribution of Parallelization for Several Fingerprint Pre-Processing Algorithms

    Directory of Open Access Journals (Sweden)

    GORGUNOGLU, S.

    2014-05-01

    Full Text Available In the analysis of minutiae-based fingerprint systems, fingerprints need to be pre-processed. The pre-processing is carried out to enhance the quality of the fingerprint and to obtain more accurate minutiae points. Reducing the pre-processing time is important for identification and verification in real-time systems, especially for databases holding large amounts of fingerprint information. Parallel processing on a multi-core CPU distributes the work over the available cores using parallel programming techniques; reducing the execution time is the main objective. In this study, the pre-processing stage of a minutiae-based fingerprint system is implemented with parallel processing on multi-core computers using OpenMP, and on a graphics processor using CUDA, to improve the execution time. The execution times and speedup ratios are compared with those of a single-core processor. The results show that parallel processing substantially improves the execution time. The improvement ratios obtained for the different pre-processing algorithms allowed us to make suggestions on the more suitable approaches for parallelization.

  3. Visual Analysis of North Atlantic Hurricane Trends Using Parallel Coordinates and Statistical Techniques

    National Research Council Canada - National Science Library

    Steed, Chad A; Fitzpatrick, Patrick J; Jankun-Kelly, T. J; Swan II, J. E

    2008-01-01

    ... for a particular dependent variable. These capabilities are combined into a unique visualization system that is demonstrated via a North Atlantic hurricane climate study using a systematic workflow. This research corroborates the notion that enhanced parallel coordinates coupled with statistical analysis can be used for more effective knowledge discovery and confirmation in complex, real-world data sets.

  4. Arthroscopically assisted stabilization of acute high-grade acromioclavicular joint separations in a coracoclavicular Double-TightRope technique: V-shaped versus parallel drill hole orientation.

    Science.gov (United States)

    Kraus, Natascha; Haas, Norbert P; Scheibel, Markus; Gerhardt, Christian

    2013-10-01

    The arthroscopically assisted Double-TightRope technique has recently been reported to yield good to excellent clinical results in the treatment of acute, high-grade acromioclavicular dislocation. However, the orientation of the transclavicular-transcoracoidal drill holes remains a matter of debate. A V-shaped drill hole orientation leads to better clinical and radiologic results and provides a higher vertical and horizontal stability compared to parallel drill hole placement. This was a cohort study; level of evidence, 2b. Two groups of patients with acute high-grade acromioclavicular joint instability (Rockwood type V) were included in this prospective, non-randomized cohort study. 15 patients (1 female/14 male) with a mean age of 37.7 (18-66) years were treated with a Double-TightRope technique using a V-shaped orientation of the drill holes (group 1). 13 patients (1 female/12 male) with a mean age of 40.9 (21-59) years were treated with a Double-TightRope technique with a parallel drill hole placement (group 2). After 2 years, the final evaluation consisted of a complete physical examination of both shoulders, evaluation of the Subjective Shoulder Value (SSV), Constant Score (CS), Taft Score (TF) and Acromioclavicular Joint Instability Score (ACJI) as well as a radiologic examination including bilateral anteroposterior stress views and bilateral Alexander views. After a mean follow-up of 2 years, all patients were free of shoulder pain at rest and during daily activities. Range of motion did not differ significantly between both groups (p > 0.05). Patients in group 1 reached on average 92.4 points in the CS, 96.2 % in the SSV, 10.5 points in the TF and 75.9 points in the ACJI. Patients in group 2 scored 90.5 points in the CS, 93.9 % in the SSV, 10.5 points in the TF and 84.5 points in the ACJI (p > 0.05). Radiographically, the coracoclavicular distance was found to be 13.9 mm (group 1) and 13.4 mm (group 2) on the affected side and 9.3 mm (group 1

  5. Robot-assisted ultrasound imaging: overview and development of a parallel telerobotic system.

    Science.gov (United States)

    Monfaredi, Reza; Wilson, Emmanuel; Azizi Koutenaei, Bamshad; Labrecque, Brendan; Leroy, Kristen; Goldie, James; Louis, Eric; Swerdlow, Daniel; Cleary, Kevin

    2015-02-01

    Ultrasound imaging is frequently used in medicine. The quality of ultrasound images is often dependent on the skill of the sonographer. Several researchers have proposed robotic systems to aid in ultrasound image acquisition. In this paper we first provide a short overview of robot-assisted ultrasound (US) imaging. We categorize robot-assisted US imaging systems into three approaches: autonomous US imaging, teleoperated US imaging, and human-robot cooperation. For each approach several systems are introduced and briefly discussed. We then describe a compact six-degree-of-freedom parallel-mechanism telerobotic system for ultrasound imaging developed by our research team. The long-term goal of this work is to enable remote ultrasound scanning through teleoperation. The parallel mechanism allows for both translation and rotation of an ultrasound probe mounted on the top plate, along with force control. Our experimental results confirmed good mechanical system performance, with a positioning error of < 1 mm. Phantom experiments by a radiologist showed promising results with good image quality.

  6. What is adaptive about adaptive decision making? A parallel constraint satisfaction account.

    Science.gov (United States)

    Glöckner, Andreas; Hilbig, Benjamin E; Jekel, Marc

    2014-12-01

    There is broad consensus that human cognition is adaptive. However, the vital question of how exactly this adaptivity is achieved has remained largely open. Herein, we contrast two frameworks which account for adaptive decision making, namely broad and general single-mechanism accounts vs. multi-strategy accounts. We propose and fully specify a single-mechanism model for decision making based on parallel constraint satisfaction processes (PCS-DM) and contrast it theoretically and empirically against a multi-strategy account. To achieve sufficiently sensitive tests, we rely on a multiple-measure methodology including choice, reaction time, and confidence data as well as eye-tracking. Results show that manipulating the environmental structure produces clear adaptive shifts in choice patterns - as both frameworks would predict. However, results on the process level (reaction time, confidence), in information acquisition (eye-tracking), and from cross-predicting choice consistently corroborate single-mechanisms accounts in general, and the proposed parallel constraint satisfaction model for decision making in particular. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Rubus: A compiler for seamless and extensible parallelism

    Science.gov (United States)

    Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open-source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without requiring a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. For a matrix multiplication benchmark, the average execution speedup of 84 times has been

  8. Rubus: A compiler for seamless and extensible parallelism.

    Directory of Open Access Journals (Sweden)

    Muhammad Adnan

    Full Text Available Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open-source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without requiring a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. For a matrix multiplication benchmark, the average execution speedup of 84

  9. EX6AFS: A data acquisition system for high-speed dispersive EXAFS measurements implemented using object-oriented programming techniques

    International Nuclear Information System (INIS)

    Jennings, G.; Lee, P.L.

    1995-01-01

    In this paper we describe the design and implementation of a computerized data-acquisition system for high-speed energy-dispersive EXAFS experiments on the X6A beamline at the National Synchrotron Light Source. The acquisition system drives the stepper motors used to move the components of the experimental setup and controls the readout of the EXAFS spectra. The system runs on a Macintosh IIfx computer and is written entirely in the object-oriented language C++. Large segments of the system are implemented by means of commercial class libraries, specifically the MacApp application framework from Apple, the Rogue Wave class library, and the Hierarchical Data Format datafile library from the National Center for Supercomputing Applications. This reduces the amount of code that must be written and enhances reliability. The system makes use of several advanced features of C++: multiple inheritance allows the code to be decomposed into independent software components, and exception handling allows the system to be much more reliable in the event of unexpected errors. Object-oriented techniques allow the program to be extended easily as new requirements develop. All sections of the program related to a particular concept are located in a small set of source files. The program will also be used as a prototype for future software development plans for the Basic Energy Science Synchrotron Radiation Center Collaborative Access Team beamlines being designed and built at the Advanced Photon Source.

  10. Estimating liver perfusion from free-breathing continuously acquired dynamic gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid-enhanced acquisition with compressed sensing reconstruction.

    Science.gov (United States)

    Chandarana, Hersh; Block, Tobias Kai; Ream, Justin; Mikheev, Artem; Sigal, Samuel H; Otazo, Ricardo; Rusinek, Henry

    2015-02-01

    The purpose of this study was to estimate perfusion metrics in healthy and cirrhotic liver with pharmacokinetic modeling of high-temporal resolution reconstructions of a continuously acquired free-breathing gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid-enhanced acquisition in patients undergoing clinically indicated liver magnetic resonance imaging. In this Health Insurance Portability and Accountability Act-compliant prospective study, 9 cirrhotic and 10 noncirrhotic patients underwent clinical magnetic resonance imaging, which included a continuously acquired radial stack-of-stars 3-dimensional gradient recalled echo sequence with a golden-angle ordering scheme in free breathing during contrast injection. A total of 1904 radial spokes were acquired continuously in 318 to 340 seconds. High-temporal resolution data sets were formed by grouping 13 spokes per frame for a temporal resolution of 2.2 to 2.4 seconds, and were reconstructed using the golden-angle radial sparse parallel technique that combines compressed sensing and parallel imaging. High-temporal resolution reconstructions were evaluated by a board-certified radiologist to generate gadolinium concentration-time curves in the aorta (arterial input function), portal vein (venous input function), and liver, which were fitted to a dual-input dual-compartment model to estimate liver perfusion metrics that were compared between cirrhotic and noncirrhotic livers. The cirrhotic livers had significantly lower total plasma flow (70.1 ± 10.1 versus 103.1 ± 24.3 mL/min per 100 mL), a higher mean transit time (24.4 ± 4.7 versus 15.7 ± 3.4 seconds), and a lower hepatocellular uptake rate (3.03 ± 2.1 versus 6.53 ± 2.4 100/min) (all P < 0.05). Liver perfusion metrics can be estimated from a free-breathing dynamic acquisition performed for every clinical examination without additional contrast injection or time. This is a novel paradigm for dynamic liver imaging.
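
    The frame timing quoted above follows directly from the acquisition parameters: 1904 spokes acquired over 318-340 seconds, grouped 13 spokes per frame. A minimal arithmetic check (illustrative only; no claim about the authors' actual reconstruction code):

        n_spokes, spokes_per_frame = 1904, 13
        n_frames = n_spokes // spokes_per_frame              # about 146 frames
        for total_s in (318.0, 340.0):
            frame_dt = spokes_per_frame * total_s / n_spokes
            print(f"{total_s:.0f} s scan -> {frame_dt:.2f} s per frame")
        # prints 2.17 s and 2.32 s, consistent with the stated 2.2-2.4 s resolution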

  11. Self-organizing map models of language acquisition

    Science.gov (United States)

    Li, Ping; Zhao, Xiaowei

    2013-01-01

    Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic parallel distributed processing architecture, recent models of language have explored various other types of architectures, including self-organizing models of language acquisition. In this paper, we aim to provide a review of the latter type of model, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development. We suggest future directions in which these models can be extended, to better connect with behavioral and neural data, and to make clear predictions in testing relevant psycholinguistic theories. PMID:24312061
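
    For readers unfamiliar with the mechanics, a self-organizing map adapts a grid of weight vectors toward its inputs: each input is assigned to its best-matching unit, and that unit and its grid neighbours are pulled toward the input. The sketch below is a generic SOM update in numpy, not the specific architectures reviewed in the paper; the map size, learning rate and neighbourhood width are arbitrary placeholders.

        import numpy as np

        def som_step(weights, x, lr=0.1, sigma=2.0):
            # find the best-matching unit (BMU) for input x
            rows, cols, dim = weights.shape
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), (rows, cols))
            # Gaussian neighbourhood on the map grid, centred on the BMU
            r, c = np.indices((rows, cols))
            h = np.exp(-((r - bmu[0])**2 + (c - bmu[1])**2) / (2 * sigma**2))
            # pull every node toward x, weighted by its neighbourhood value
            weights += lr * h[:, :, None] * (x - weights)
            return weights

        rng = np.random.default_rng(0)
        w = rng.random((10, 10, 3))              # 10x10 map of 3-d feature vectors
        for _ in range(1000):
            w = som_step(w, rng.random(3))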

  12. A High-Performance Parallel FDTD Method Enhanced by Using SSE Instruction Set

    Directory of Open Access Journals (Sweden)

    Dau-Chyrh Chang

    2012-01-01

    Full Text Available We introduce a hardware acceleration technique for the parallel finite difference time domain (FDTD) method using the SSE (Streaming SIMD (single instruction, multiple data) Extensions) instruction set. Applying the SSE instruction set to the parallel FDTD method achieves a significant improvement in simulation performance. Benchmarks of the SSE acceleration on both a multi-CPU workstation and a computer cluster demonstrate the advantages of vector arithmetic logic unit (VALU) acceleration over GPU acceleration. Several engineering applications are employed to demonstrate the performance of the parallel FDTD method enhanced by the SSE instruction set.
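
    The SSE gain comes from updating several field components per instruction. The following sketch expresses a 1-D free-space FDTD update as whole-array operations, with numpy standing in for the SSE/VALU intrinsics that such codes implement in C; the grid size, step count and Courant number are placeholder values.

        import numpy as np

        nz, steps, c = 400, 1000, 0.5            # grid cells, time steps, Courant number
        ez, hy = np.zeros(nz), np.zeros(nz - 1)
        for t in range(steps):
            hy += c * np.diff(ez)                # update all H nodes in one vector op
            ez[1:-1] += c * np.diff(hy)          # update all interior E nodes likewise
            ez[nz // 2] += np.exp(-((t - 30) / 10) ** 2)   # soft Gaussian source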

  13. Parallel real-time visualization system for large-scale simulation. Application to WSPEEDI

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Kitabata, Hideyuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    2000-01-01

    The real-time visualization system, PATRAS (PArallel TRAcking Steering system), has been developed on parallel computing servers. The system performs almost all of the visualization tasks on a parallel computing server and uses an image data compression technique for efficient communication between the server and the client terminal. Therefore, the system realizes high-performance concurrent visualization in an internet computing environment. The experience of applying PATRAS to WSPEEDI (Worldwide version of System for Prediction of Environmental Emergency Dose Information) is reported. The application of PATRAS to WSPEEDI enables users to understand the behaviour of radioactive tracers from different release points easily and quickly. (author)

  14. Advances in non-Cartesian parallel magnetic resonance imaging using the GRAPPA operator

    International Nuclear Information System (INIS)

    Seiberlich, Nicole

    2008-01-01

    This thesis has presented several new non-Cartesian parallel imaging methods which simplify both gridding and the reconstruction of images from undersampled data. A novel approach which uses the concepts of parallel imaging to grid data sampled along a non-Cartesian trajectory called GRAPPA Operator Gridding (GROG) is described. GROG shifts any acquired k-space data point to its nearest Cartesian location, thereby converting non-Cartesian to Cartesian data. The only requirements for GROG are a multi-channel acquisition and a calibration dataset for the determination of the GROG weights. Then an extension of GRAPPA Operator Gridding, namely Self-Calibrating GRAPPA Operator Gridding (SC-GROG) is discussed. SC-GROG is a method by which non-Cartesian data can be gridded using spatial information from a multi-channel coil array without the need for an additional calibration dataset, as required in standard GROG. Although GROG can be used to grid undersampled datasets, it is important to note that this method uses parallel imaging only for gridding, and not to reconstruct artifact-free images from undersampled data. Thereafter a simple, novel method for performing modified Cartesian GRAPPA reconstructions on undersampled non-Cartesian k-space data gridded using GROG to arrive at a non-aliased image is introduced. Because the undersampled non-Cartesian data cannot be reconstructed using a single GRAPPA kernel, several Cartesian patterns are selected for the reconstruction. Finally a novel method of using GROG to mimic the bunched phase encoding acquisition (BPE) scheme is discussed. In MRI, it is generally assumed that an artifact-free image can be reconstructed only from sampled points which fulfill the Nyquist criterion. However, the BPE reconstruction is based on the Generalized Sampling Theorem of Papoulis, which states that a continuous signal can be reconstructed from sampled points as long as the points are on average sampled at the Nyquist frequency. A novel

  15. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  16. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
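
    Of the three molecular dynamics strategies named above, the spatial decomposition is the easiest to picture: each processor owns a region of the simulation box and the atoms currently inside it. A minimal sketch of the assignment step only (slabs along x; the box size and atom count are arbitrary):

        import numpy as np

        def spatial_decomposition(positions, box, n_domains):
            # assign each atom to a slab along x; each processor computes forces
            # for its own slab and communicates only with neighbouring slabs
            slab = np.floor(positions[:, 0] / box * n_domains).astype(int)
            slab = np.clip(slab, 0, n_domains - 1)
            return [np.where(slab == d)[0] for d in range(n_domains)]

        rng = np.random.default_rng(1)
        pos = rng.random((10_000, 3)) * 10.0     # 10k atoms in a 10x10x10 box
        domains = spatial_decomposition(pos, box=10.0, n_domains=8)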

  17. A review of theoretical perspectives on language learning and acquisition

    Directory of Open Access Journals (Sweden)

    Norbahira Mohamad Nor

    2018-01-01

    Full Text Available This paper reviews three main theoretical perspectives on language learning and acquisition in an attempt to elucidate how people acquire their first language (L1) and learn their second language (L2). The behaviorist, innatist and interactionist perspectives offer different accounts of language learning and acquisition which influence the acceptance of how an L2 should be taught and learned. This paper also explicates the relationship between L1 and L2, and elaborates on the similarities and differences between the two. This paper concludes that there is no single linguistic theory which can provide the ultimate explanation of L1 acquisition and L2 learning, as there are many interrelated factors that influence the success of language acquisition or language learning. The implication is that teachers should base their classroom management practices and pedagogical techniques on several theories rather than a single theory, as learners learn and acquire language differently. It is hoped that this paper provides useful insights into the complex process involved in language acquisition and learning, and contributes to increased awareness of the process among stakeholders in the field of language education. Keywords: behaviorist, innatist, interactionist, language acquisition, second language learning

  18. Fuzzy Controlled Parallel AC-DC Converter for PFC

    Directory of Open Access Journals (Sweden)

    M Subba Rao

    2011-01-01

    Full Text Available Paralleling of converter modules is a well-known technique that is often used in medium-power applications to achieve the desired output power using smaller high-frequency transformers and inductors. In this paper, a parallel-connected single-phase PFC topology using flyback and forward converters is proposed to improve output voltage regulation with simultaneous input power factor correction (PFC) and control. The goal of the control is to stabilize the output voltage of the converter against load variations. The paper presents the derivation of fuzzy control rules for the dc/dc converter circuit and a control algorithm for regulating the dc/dc converter. The paper presents a design example and circuit analysis for a 200 W power supply. The proposed approach offers a cost-effective, compact and efficient AC/DC converter through the use of parallel power processing. MATLAB/SIMULINK is used for implementation, and simulation results show the performance improvement.

  19. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    Science.gov (United States)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and a partial separability (PS) model. In data acquisition, k-space data is moderately randomly undersampled at the central k-space navigator locations, but highly undersampled in the outer k-space for each temporal frame. In reconstruction, the navigator data is reconstructed from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data is estimated, the partially separable model is used to obtain partial k-t data. Then the parallel imaging method is used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.
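
    The partial separability assumption says that the space-time (Casorati) matrix of the image series has low rank, so a rank-L factorization captures the dynamics. The sketch below shows only that core idea on a random stand-in matrix; the actual method estimates the temporal basis from navigator data rather than from a fully known matrix.

        import numpy as np

        rng = np.random.default_rng(0)
        n_vox, n_frames, L = 4096, 200, 16
        casorati = rng.standard_normal((n_vox, n_frames))   # stand-in for k-t data
        U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
        low_rank = (U[:, :L] * s[:L]) @ Vt[:L]   # rank-L partially separable model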

  20. Treatment planning in radiosurgery: parallel Monte Carlo simulation software

    Energy Technology Data Exchange (ETDEWEB)

    Scielzo, G [Galliera Hospitals, Genova (Italy). Dept. of Hospital Physics; Grillo Ruggieri, F [Galliera Hospitals, Genova (Italy) Dept. for Radiation Therapy; Modesti, M; Felici, R [Electronic Data System, Rome (Italy); Surridge, M [University of Southampton (United Kingdom). Parallel Application Centre

    1995-12-01

    The main objective of this research was to evaluate the possibility of direct Monte Carlo simulation for accurate dosimetry with short computation time. We made use of a graphics workstation, a linear accelerator, and water, PMMA and anthropomorphic phantoms for validation purposes; ionometric, film and thermoluminescent techniques for dosimetry; and a treatment planning system for comparison. Benchmarking results suggest that short computing times can be obtained with the parallel version of EGS4 that was developed. Parallelism was obtained by assigning the simulated incident photons to separate processors, and the development of a parallel random number generator was necessary. Validation consisted of phantom irradiation and comparison of predicted and measured values, which showed good agreement in PDD and dose profiles. Experiments on anthropomorphic phantoms (with inhomogeneities) were carried out, and these values are being compared with results obtained with the conventional treatment planning system.
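
    The parallelization described here (incident photons distributed over processors, each with its own random number stream) can be sketched in a few lines. The toy "history" below is a placeholder, not the EGS4 physics, and the seed and worker counts are arbitrary; numpy's SeedSequence.spawn provides the independent streams that the authors had to develop by hand.

        import numpy as np
        from multiprocessing import Pool

        def simulate_photons(args):
            seed_seq, n = args
            rng = np.random.default_rng(seed_seq)        # independent stream per worker
            paths = rng.exponential(scale=0.5, size=n)   # toy free-path "history"
            return np.count_nonzero(paths > 1.0)

        if __name__ == "__main__":
            n_workers, n_photons = 4, 1_000_000
            streams = np.random.SeedSequence(42).spawn(n_workers)  # parallel-safe RNG
            chunks = [(s, n_photons // n_workers) for s in streams]
            with Pool(n_workers) as pool:
                counts = pool.map(simulate_photons, chunks)
            print("tally:", sum(counts))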

  1. Neuroimaging and Research into Second Language Acquisition

    Science.gov (United States)

    Sabourin, Laura

    2009-01-01

    Neuroimaging techniques are becoming not only more sophisticated but also increasingly accessible to researchers. One should take note of the potential of neuroimaging research within second language acquisition (SLA) to contribute to issues pertaining to the plasticity of the adult brain and to general…

  2. VALU, AVX and GPU acceleration techniques for parallel FDTD methods

    CERN Document Server

    Yu, Wenhua

    2013-01-01

    This book introduces a general hardware acceleration technique that can significantly speed up FDTD simulations and their applications to engineering problems without requiring any additional hardware devices. This acceleration of complex problems can save both time and money, and once learned, these techniques can be used repeatedly.

  3. A DSP controlled data acquisition system for CELSIUS

    International Nuclear Information System (INIS)

    Bengtsson, M.; Lofnes, T.; Ziemann, V.

    2000-01-01

    We describe a data acquisition system based on two 10 MHz A/D-converters, a SHARC Digital Signal Processor (DSP), and a digital synthesizer used for triggering the A/D-converters. The temporal macrostructure of the data acquisition can be determined by external triggers or by timer interrupts from the DSP. In this way up to two million samples can be stored in DSP external memory. The samples are analyzed by directly fast Fourier transforming blocks of samples. In another mode we use software-based downmixing and filtering techniques to increase the resolution and zoom in on a small frequency band. Spectra of up to 5 MHz can be manipulated and displayed as waterfall plots or spectral maps on the host computer directly. Moreover, signals of up to 70 MHz can be analyzed by undersampling techniques. We use this system to analyze Schottky spectra from electron-cooled ion beams in CELSIUS and report drag rate measurements and observations of instabilities.
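
    The software downmixing-and-zoom mode is a standard digital signal processing recipe: multiply by a complex exponential at the band centre, low-pass, then decimate. A minimal numpy sketch of that idea (the moving-average filter is a crude placeholder for a proper low-pass design, and the rates are illustrative, not the CELSIUS settings):

        import numpy as np

        def zoom_band(samples, fs, f_center, decim):
            t = np.arange(len(samples)) / fs
            baseband = samples * np.exp(-2j * np.pi * f_center * t)  # mix band to 0 Hz
            kernel = np.ones(decim) / decim          # crude low-pass before decimation
            filtered = np.convolve(baseband, kernel, mode="same")
            return filtered[::decim]                 # effective rate fs / decim

        fs = 10e6                                    # 10 MHz sample rate
        t = np.arange(200_000) / fs
        sig = np.sin(2 * np.pi * 1.001e6 * t) + 0.1 * np.random.randn(t.size)
        zoomed = zoom_band(sig, fs, f_center=1.0e6, decim=100)
        spectrum = np.abs(np.fft.fft(zoomed))        # 100 kHz span around 1 MHz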

  4. A DSP controlled data acquisition system for CELSIUS

    CERN Document Server

    Bengtsson, M; Ziemann, Volker

    2000-01-01

    We describe a data acquisition system based on two 10 MHz A/D-converters, a SHARC Digital Signal Processor (DSP), and a digital synthesizer used for triggering the A/D-converters. The temporal macrostructure of the data acquisition can be determined by external triggers or by timer interrupts from the DSP. In this way up to two million samples can be stored in DSP external memory. The samples are analyzed by directly fast Fourier transforming blocks of samples. In another mode we use software-based downmixing and filtering techniques to increase the resolution and zoom in on a small frequency band. Spectra of up to 5 MHz can be manipulated and displayed as waterfall plots or spectral maps on the host computer directly. Moreover, signals of up to 70 MHz can be analyzed by undersampling techniques. We use this system to analyze Schottky spectra from electron-cooled ion beams in CELSIUS and report drag rate measurements and observations of instabilities.

  5. Performance-Based Service Acquisition (PBSA) Study and Graduate Level Course Material

    National Research Council Canada - National Science Library

    Kennedy, Penny S; McClure, Joe T

    2005-01-01

    .... It is important to understand that the PBSA contract form involves acquisition strategies, methods, and techniques that define and communicate measurable performance expectations in terms of outcomes...

  6. Optimal performance of data acquisition and processing for bone SPECT using Tc-99m

    International Nuclear Information System (INIS)

    Tantawy, F.A.; Ziada, G.A.; Talaat, T.; Hassan, A.A.

    1995-01-01

    The present work deals with the physical factors that can affect image quality in the bone SPECT technique. The factors included different acquisition and processing variables such as matrix size, acquisition time, preprocessing filter and reconstruction back-projection filter. Our results revealed that the best matrix size was 64x64. The acquisition time was tested from 20 s/step to 40 s/step, and the optimal acquisition time was found to be 20 s/step. Concerning the preprocessing filter, 9-Bw (8-0.3) and F-Bw (8-0.3) were the best. Back-projection was tested with the Ramp, Shepp and Logan, Medium and Chesler filters, and the best reconstruction back-projection filter was found to be the Ramp filter. From the above results, the 64x64 matrix size, 20 s/step acquisition time, preprocessing filters 9-Bw (8-0.3) and F-Bw (8-0.3), and the Ramp reconstruction back-projection filter were selected as the optimum parameters to be taken into consideration in the bone SPECT technique. Tc-99m was used as the radioactive isotope. 9 figs

  7. Cooperative storage of shared files in a parallel computing system with dynamic block size

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
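
    The block-size rule stated in the abstract (the total amount of data to be stored divided by the number of parallel processes) is simple enough to write down directly. The sketch below only illustrates that arithmetic and the resulting per-process offsets; it is not the PLFS implementation.

        import math

        def dynamic_block_size(total_bytes, n_procs):
            # total amount of data to be stored divided by the number of
            # parallel processes, rounded up so every byte is covered
            return math.ceil(total_bytes / n_procs)

        n_procs = 16
        total = 3 * 2**30 + 17                    # an uneven ~3 GiB shared object
        bs = dynamic_block_size(total, n_procs)
        offsets = [rank * bs for rank in range(n_procs)]  # one block per process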

  8. Traditional Tracking with Kalman Filter on Parallel Architectures

    Science.gov (United States)

    Cerati, Giuseppe; Elmer, Peter; Lantz, Steven; MacNeill, Ian; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2015-05-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The most common track finding techniques in use today are, however, those based on the Kalman filter. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. We report the results of our investigations into the potential and limitations of these algorithms on the new parallel hardware.
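
    The data-parallel opportunity in Kalman-filter tracking is that thousands of track candidates undergo the same small predict/update algebra, which maps naturally onto vector units. Below is a toy sketch of one constant-velocity predict-and-update step batched over N one-dimensional tracks, with numpy broadcasting standing in for Xeon Phi/GPU vectorization; all noise parameters are placeholders, and this is not the authors' code.

        import numpy as np

        def kf_step(x, P, z, dt=1.0, q=1e-3, r=0.1):
            # x: (N,2) [position, velocity], P: (N,2,2), z: (N,) position measurements
            F = np.array([[1.0, dt], [0.0, 1.0]])
            Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
            x = x @ F.T                           # predict all N states at once
            P = F @ P @ F.T + Q                   # broadcasts over the track axis
            y = z - x[:, 0]                       # innovation (position is measured)
            S = P[:, 0, 0] + r                    # innovation variance
            K = P[:, :, 0] / S[:, None]           # Kalman gain, shape (N,2)
            x = x + K * y[:, None]
            P = P - K[:, :, None] * P[:, None, 0, :]
            return x, P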

  9. Massively Parallel Single-Molecule Manipulation Using Centrifugal Force

    Science.gov (United States)

    Wong, Wesley; Halvorsen, Ken

    2011-03-01

    Precise manipulation of single molecules has led to remarkable insights in physics, chemistry, biology, and medicine. However, two issues that have impeded the widespread adoption of these techniques are equipment cost and the laborious nature of making measurements one molecule at a time. To meet these challenges, we have developed an approach that enables massively parallel single-molecule force measurements using centrifugal force. This approach is realized in the centrifuge force microscope, an instrument in which objects in an orbiting sample are subjected to a calibration-free, macroscopically uniform force-field while their micro-to-nanoscopic motions are observed. We demonstrate high-throughput single-molecule force spectroscopy with this technique by performing thousands of rupture experiments in parallel, characterizing force-dependent unbinding kinetics of an antibody-antigen pair in minutes rather than days. Currently, we are taking steps to integrate high-resolution detection, fluorescence, temperature control and a greater dynamic range in force. With significant benefits in efficiency, cost, simplicity, and versatility, single-molecule centrifugation has the potential to expand single-molecule experimentation to a wider range of researchers and experimental systems.

  10. Data acquisition electronics for positron emission mammography (PEM) detectors

    International Nuclear Information System (INIS)

    Martinez, J.D.; Sebastia, A.; Cerda, J.; Esteve, R.; Mora, F.J.; Toledo, J.F.; Benlloch, J.M.; Gimenez, N.; Gimenez, M.; Lerche, Ch. W.; Pavon, N.; Sanchez, F.

    2005-01-01

    Positron emission mammography (PEM) is an innovative technique to increase sensitivity and overcome the main drawbacks of conventional X-ray screening. However, dedicated PET imaging systems demand specific hardware solutions for data acquisition and processing that can take advantage of the reduction in the number of channels. Data acquisition issues can affect PEM scanner performance, and they should be addressed exhaustively in order to exploit the increase in the event count rate. This is crucial in order to reduce both the scanning time and the total injected dose. This paper presents the electronics for our PEM camera prototype, which enable us to achieve very high count rates and perform comprehensive online processing. Acquisition results for our detector in a typical clinical setup are studied using Monte Carlo simulations of hot-lesion phantoms.

  11. Fluorous Parallel Synthesis of A Hydantoin/Thiohydantoin Library

    Science.gov (United States)

    Lu, Yimin; Zhang, Wei

    2007-01-01

    A fluorous tagging strategy is applied to the solution-phase parallel synthesis of a library containing hydantoin and thiohydantoin analogs. Two perfluoroalkyl (Rf)-tagged α-amino esters each react with 6 aromatic aldehydes under reductive amination conditions. Twelve amino esters then each react with 10 isocyanates and isothiocyanates in parallel. The resulting 120 ureas and thioureas undergo spontaneous cyclization to form the corresponding hydantoins and thiohydantoins. The intermediate and final product purifications are performed with solid-phase extraction (SPE) over FluoroFlash™ cartridges; no chromatography is required. Using standard instruments and a straightforward SPE technique, one chemist accomplished the 120-member library synthesis in less than 5 working days, including starting material synthesis and product analysis. PMID:15789556

  12. 3-D acquisition geometry analysis : Incorporating information from multiples

    NARCIS (Netherlands)

    Kumar, A.; Blacquiere, G.; Verschuur, D.J.

    2014-01-01

    Recent advances in survey design have led to conventional common-midpoint-based analysis being replaced by subsurface-based seismic acquisition analysis and design, with the emphasis on advanced techniques of illumination analysis. Amongst them are wave-equation-based seismic illumination

  13. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
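
    Both algorithms belong to the distribution-sort family that the abstract compares against bucket sort: keys are first scattered into disjoint ranges, which can then be sorted independently. A minimal process-parallel bucket sort in that spirit (the bucket count and key range are arbitrary; this is not the CM-2 or iPSC/860 code):

        import numpy as np
        from multiprocessing import Pool

        def sort_bucket(bucket):
            return np.sort(bucket)                # buckets are independent

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            keys = rng.integers(0, 2**16, size=1_000_000)
            n_buckets = 8
            edges = np.linspace(0, 2**16, n_buckets + 1)
            buckets = [keys[(keys >= lo) & (keys < hi)]
                       for lo, hi in zip(edges[:-1], edges[1:])]
            with Pool(n_buckets) as pool:
                parts = pool.map(sort_bucket, buckets)
            result = np.concatenate(parts)        # globally sorted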

  14. Current status of the ParInt package for parallel multivariate integration

    International Nuclear Information System (INIS)

    Doncker, E. de; Kaugars, K.; Cucos, L.; Zanny, R.

    2002-01-01

    The ParInt project focuses on the development of parallel algorithms and software for the computation of multi-variate integrals. We will give an overview of the contents and capabilities of the package. Our objective has been to provide the end-user with state of the art problem solving power. This has required work in a number of areas, including the fundamental numerical techniques, strategies for parallelization, user interfaces for general use and specific applications, and visualization of computations to analyze the mutual influences of problem characteristics and algorithm behavior. Furthermore, the integration of all the above into a versatile set of tools is aimed toward an efficient use of the available parallel or distributed computer resources. (author)

  15. The Wage Premium of Globalization: Evidence from European Mergers and Acquisitions

    OpenAIRE

    Oberhofer, Harald; Stöckl, Matthias; Winner, Hannes

    2012-01-01

    We provide evidence on the impact of globalization on labor market outcomes by analyzing pay differences between foreign-acquired and domestically-owned firms. For this purpose, we use firm-level data from 16 European countries over the period 1999 to 2006. Applying propensity score matching techniques, we estimate positive wage premia of cross-border mergers and acquisitions (M&As), suggesting that foreign-acquired firms exhibit higher short-run (post-acquisition) wages than their domestic ...

  16. High spatial and temporal resolution retrospective cine cardiovascular magnetic resonance from shortened free breathing real-time acquisitions.

    Science.gov (United States)

    Xue, Hui; Kellman, Peter; Larocca, Gina; Arai, Andrew E; Hansen, Michael S

    2013-11-14

    Cine cardiovascular magnetic resonance (CMR) is challenging in patients who cannot perform repeated breath holds. Real-time, free-breathing acquisition is an alternative, but image quality is typically inferior. There is a clinical need for techniques that achieve image quality similar to segmented cine using a free-breathing acquisition. Previously, high-quality retrospectively gated cine images have been reconstructed from real-time acquisitions using parallel imaging and motion correction. These methods had limited clinical applicability due to lengthy acquisitions, and volumetric measurements obtained with such methods have not previously been evaluated systematically. This study introduces a new retrospective reconstruction scheme for real-time cine imaging which aims to shorten the required acquisition. A real-time acquisition of 16-20 s per slice was input into a retrospective cine reconstruction algorithm, which employed non-rigid registration to remove respiratory motion and SPIRiT non-linear reconstruction with temporal regularization to fill in missing data. The algorithm was used to reconstruct cine loops with high spatial (1.3-1.8 × 1.8-2.1 mm²) and temporal resolution (retrospectively gated, 30 cardiac phases, temporal resolution 34.3 ± 9.1 ms). Validation was performed in 15 healthy volunteers using two different acquisition resolutions (256 × 144/192 × 128 matrix sizes). For each subject, 9 to 12 short-axis and 3 long-axis slices were imaged with both segmented and real-time acquisitions. The retrospectively reconstructed real-time cine images were compared to a traditional segmented breath-held acquisition in terms of image quality scores. Image quality scoring was performed by two experts using a scale between 1 and 5 (poor to good). For every subject, LAX and three SAX slices were selected and reviewed in random order. The reviewers were blinded to the reconstruction approach and acquisition protocols and

  17. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with these efforts. This paper aims to make a small contribution to these efforts. We provide an overview of parallel programming, parallel execution and collaborative systems.

  18. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  19. Registration of global cardiac function with real-time trueFISP in one respiratory cycle

    International Nuclear Information System (INIS)

    Wintersperger, B.J.; Nikolaou, K.; Huber, A.; Dietrich, O.; Reiser, M.F.; Schoenberg, S.O.; Muehling, O.; Nittka, M.; Kiefer, B.

    2004-01-01

    Real-time multislice cine techniques lead to inaccurate ventricular volume results due to limited temporal resolution. The purpose of the study is to evaluate a real-time cine technique with parallel imaging algorithms in comparison to standard segmented techniques. Twelve patients underwent cardiac cine MRI using real-time multislice cine trueFISP. Temporal resolution was improved using parallel acquisition techniques (iPAT), and data acquisition was performed in a single breath-hold along the patients' short axis. Evaluation of EDV, ESV, EF and myocardial mass was performed and the results compared to a standard segmented single-slice cine trueFISP. The combination of real-time cine trueFISP and iPAT provided a temporal resolution of 48 ms. Results of the multislice approach showed an excellent correlation to standard single-slice trueFISP for EDV (0.94, p

  20. Hardware system of parallel processing for fast CT image reconstruction based on circular shifting float memory architecture

    International Nuclear Information System (INIS)

    Wang Shi; Kang Kejun; Wang Jingjin

    1995-01-01

    Computerized Tomography (CT) is expected to become an indispensable diagnostic technique in the future. However, the long time required to reconstruct an image has been one of the major drawbacks associated with this technique. Parallel processing is one of the best ways to solve this problem. This paper gives the architecture and hardware design of PIRS-4 (4-processor Parallel Image Reconstruction System), a parallel processing system for fast 3D-CT image reconstruction based on a circular shifting float memory architecture. It covers the structure and components of the system, the design of the crossbar switch, and details of the control model. The test results are described.

  1. Data acquisition system for the Large Scintillating Neutrino Detector at Los Alamos

    International Nuclear Information System (INIS)

    Anderson, G.; Cohen, I.; Homann, B.; Smith, D.; Strossman, W.; VanDalen, G.J.; Weaver, L.S.; Evans, D.; Vernon, W.; Band, A.; Burman, R.; Chang, T.; Federspiel, F.; Foreman, W.; Gomulka, S.; Hart, G.; Kozlowski, T.; Louis, W.C.; Margulies, J.; Nuanes, A.; Sandberg, V.; Thompson, T.N.; White, D.H.; Whitehouse, D.

    1992-01-01

    The data acquisition system for the Large Scintillating Neutrino Detector (LSND) is described. The system collects time and charge information in real time from 1600 photomultiplier tubes and passes the data in intelligent-trigger selected time windows to analysis computers, where events are reconstructed and analyzed as candidates for a variety of neutrino-related physics processes. The system is composed of fourteen VME crates linked to a Silicon Graphics, Inc. "4D/480" multiprocessor computer through multiple, parallel Ethernets, and a collection of contemporary high-performance workstations.

  2. Development of the transverse tensile and fracture toughness test techniques for spent fuel cladding

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, S. B.; Hong, K. P.; Jung, Y. H.; Seo, H. S.; Oh, W. H.; Yoo, B. O.; Kim, D. S.; Seo, K. S

    2001-12-01

    To determine the cause of cladding damage, which can take place during nuclear power plant operation and storage, through the degradation of mechanical characteristics, transverse tensile and fracture toughness tests were developed in the hot cell at IMEF (Irradiated Material Experiment Facility). The following hot cell techniques were developed: 1. the development of a jig and a specimen for the transverse tensile test; 2. the acquisition of a technique for manufacturing the transverse tensile specimen in the hot cell; 3. the acquisition of testing procedures and an analysis technique for the transverse tensile test; 4. the dimensional determination of an optimized fracture toughness specimen; 5. the acquisition of a technique for manufacturing the fracture toughness test specimen in the hot cell; 6. the acquisition of testing procedures and analysis techniques for the fracture toughness test (multiple specimen method, DCPD method, load ratio method).

  3. Parallel computing for homogeneous diffusion and transport equations in neutronics

    International Nuclear Information System (INIS)

    Pinchedez, K.

    1999-06-01

    Parallel computing meets the ever-increasing requirements for neutronic computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS, developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with PN simplified transport equations by a mixed finite element method. Several parallel algorithms have been developed on distributed memory machines. The performance of the parallel algorithms has been studied experimentally by implementation on a Cray T3D and theoretically by complexity models. A comparison of various parallel algorithms has confirmed the chosen implementations. We next applied a domain sub-division technique to the two-group diffusion eigenproblem. In the modal synthesis-based method, the global spectrum is determined from the partial spectra associated with the sub-domains. The eigenproblem is then expanded on a family composed, on the one hand, of eigenfunctions associated with the sub-domains and, on the other hand, of functions corresponding to the contribution from the interface between the sub-domains. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)
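
    The dominant-eigenvalue computation at the heart of such codes is typically an outer power iteration around a matrix-vector product, and it is that product which gets distributed over subdomains. A serial sketch of the outer iteration (a generic power method on a stand-in matrix, not the CRONOS mixed finite element operator):

        import numpy as np

        def power_iteration(A, tol=1e-10, max_iter=10_000):
            # in a parallel code the product A @ x becomes a local block
            # product per subdomain plus an exchange of interface values
            x = np.ones(A.shape[0])
            lam = 0.0
            for _ in range(max_iter):
                y = A @ x
                lam_new = np.linalg.norm(y)
                x = y / lam_new
                if abs(lam_new - lam) < tol:
                    return lam_new, x
                lam = lam_new
            return lam, x

        rng = np.random.default_rng(0)
        M = rng.random((200, 200))
        dominant, mode = power_iteration(M)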

  4. SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation

    Science.gov (United States)

    Steinman, Jeff S.

    1992-01-01

    Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.

  5. Parallelization of applications for networks with homogeneous and heterogeneous processors

    International Nuclear Information System (INIS)

    Colombet, L.

    1994-01-01

    The aim of this thesis is to study and develop efficient methods for the parallelization of scientific applications on parallel computers with distributed memory. The first part presents two libraries of communication tools, PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). They allow the implementation of programs on most parallel machines, but also on heterogeneous computer networks. This chapter illustrates the problems faced when trying to evaluate the performance of networks with heterogeneous processors. To evaluate such performance, the concepts of speed-up and efficiency have been modified and adapted to account for heterogeneity. The second part deals with a study of parallel application libraries such as ScaLAPACK and with the development of communication masking techniques. The general concept is based on communication anticipation, in particular by pipelining message sending operations. Experimental results on Cray T3D and IBM SP1 machines validate the theoretical studies performed on basic algorithms of the libraries discussed above. Two examples of scientific applications are given: the first is a model of young stars for astrophysics and the other is a model of photon trajectories in the Compton effect. (J.S.). 83 refs., 65 figs., 24 tabs

  6. Simultaneous multislice echo planar imaging with blipped controlled aliasing in parallel imaging results in higher acceleration: a promising technique for accelerated diffusion tensor imaging of skeletal muscle

    OpenAIRE

    Filli, Lukas; Piccirelli, Marco; Kenkel, David; Guggenberger, Roman; Andreisek, Gustav; Beck, Thomas; Runge, Val M; Boss, Andreas

    2015-01-01

    PURPOSE The aim of this study was to investigate the feasibility of accelerated diffusion tensor imaging (DTI) of skeletal muscle using echo planar imaging (EPI) applying simultaneous multislice excitation with a blipped controlled aliasing in parallel imaging results in higher acceleration unaliasing technique. MATERIALS AND METHODS After federal ethics board approval, the lower leg muscles of 8 healthy volunteers (mean [SD] age, 29.4 [2.9] years) were examined in a clinical 3-T magnetic ...

  7. Depth-Averaged Non-Hydrostatic Hydrodynamic Model Using a New Multithreading Parallel Computing Method

    Directory of Open Access Journals (Sweden)

    Ling Kang

    2017-03-01

    Full Text Available Compared to the hydrostatic hydrodynamic model, the non-hydrostatic hydrodynamic model can accurately simulate flows that feature vertical accelerations. However, the model's low computational efficiency severely restricts its wider application. This paper proposes a non-hydrostatic hydrodynamic model based on a multithreading parallel computing method. The horizontal momentum equation is obtained by integrating the Navier–Stokes equations from the bottom to the free surface. The vertical momentum equation is approximated by the Keller-box scheme. A two-step method is used to solve the model equations. A parallel strategy based on block decomposition computation is utilized: the original computational domain is subdivided into two subdomains that are physically connected via a virtual boundary technique, and two sub-threads are created and tasked with the computation of the two subdomains. The producer–consumer model and the thread lock technique are used to achieve synchronous communication between the sub-threads. The validity of the model was verified by solitary wave propagation experiments over a flat bottom and a slope, followed by two sinusoidal wave propagation experiments over a submerged breakwater. The parallel computing method proposed here was found to effectively enhance computational efficiency, saving 20%–40% of the computation time compared to serial computing. The parallel acceleration rate and acceleration efficiency are approximately 1.45 and 72%, respectively. The parallel computing method makes a contribution to the popularization of non-hydrostatic models.
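
    The producer-consumer exchange across the virtual boundary can be reproduced with two threads and a pair of one-slot queues: each step, a thread publishes its edge value, waits for the neighbour's, then advances its subdomain. Below is a toy 1-D diffusion analogue, with Python threading standing in for the paper's implementation; the problem, grid and step count are placeholders.

        import threading, queue
        import numpy as np

        def worker(u, out_q, in_q, side, steps, results):
            for _ in range(steps):
                out_q.put(u[-1] if side == "left" else u[0])  # publish my edge value
                ghost = in_q.get()                            # wait for neighbour's edge
                if side == "left":
                    ext = np.concatenate(([0.0], u, [ghost])) # 0.0 = fixed outer wall
                else:
                    ext = np.concatenate(([ghost], u, [0.0]))
                u = u + 0.25 * (ext[2:] - 2.0 * ext[1:-1] + ext[:-2])
            results[side] = u

        q_lr, q_rl = queue.Queue(maxsize=1), queue.Queue(maxsize=1)
        u0 = np.zeros(100); u0[45:55] = 1.0
        results = {}
        t1 = threading.Thread(target=worker, args=(u0[:50], q_lr, q_rl, "left", 200, results))
        t2 = threading.Thread(target=worker, args=(u0[50:], q_rl, q_lr, "right", 200, results))
        t1.start(); t2.start(); t1.join(); t2.join()
        solution = np.concatenate((results["left"], results["right"]))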

  8. Unconstrained Iris Acquisition and Recognition Using COTS PTZ Camera

    Directory of Open Access Journals (Sweden)

    Venugopalan Shreyas

    2010-01-01

    Full Text Available Uniqueness of iris patterns among individuals has resulted in the ubiquity of iris recognition systems in virtual and physical spaces, at high-security facilities around the globe. Traditional methods of acquiring iris patterns in commercial systems scan the iris when an individual is at a predetermined location in front of the scanner. Most state-of-the-art techniques for unconstrained iris acquisition in the literature use expensive custom equipment and are composed of a multicamera setup, which is bulky, expensive, and requires calibration. This paper investigates a method of unconstrained iris acquisition and recognition using a single commercial off-the-shelf (COTS) pan-tilt-zoom (PTZ) camera, which is compact and reduces the cost of the final system compared to other proposed hierarchical multicomponent systems. We employ state-of-the-art techniques for face detection and a robust eye detection scheme using active shape models for accurate landmark localization. Additionally, our system alleviates the need for any calibration stage prior to its use. We present results using a database of iris images captured with our system, operating in an unconstrained acquisition mode at 1.5 m standoff and yielding an iris diameter in the 150–200 pixel range.

  9. Learning the „Look-at-you-go” Moment in Corporate Governance Negotiation Techniques

    Directory of Open Access Journals (Sweden)

    Clara VOLINTIRU

    2015-06-01

    Full Text Available This article explores in an interdisciplinary manner the way concepts are learned or internalized, depending on the varying means of transmission, as well as on the sequencing in which the information is transmitted. In this sense, we build on the constructivist methodological framework in assessing concept acquisition in academic disciplines at an advanced level. We also present the evolution of certain negotiation techniques, from traditional settings to less predictable ones. This assessment is compared to a specific pop culture case study in which we find an expressive representation of negotiation techniques. Our methodology employs both focus groups and experimental design to test the relative positioning of theoretical concept acquisition (TCA) as opposed to expressive concept acquisition (ECA). Our findings suggest that while expressive concept acquisition (ECA) via popular culture representations enhances the students' understanding of negotiation techniques, this can only happen in circumstances in which theoretical concept acquisition (TCA) is pre-existent.

  10. PARALLEL IMPLEMENTATION OF MORPHOLOGICAL PROFILE BASED SPECTRAL-SPATIAL CLASSIFICATION SCHEME FOR HYPERSPECTRAL IMAGERY

    Directory of Open Access Journals (Sweden)

    B. Kumar

    2016-06-01

    Full Text Available The extended morphological profile (EMP) is a good technique for extracting spectral-spatial information from images, but the large size of hyperspectral images is an important concern when creating EMPs. However, with the availability of modern multi-core processors and commodity parallel processing systems like graphics processing units (GPUs) at the desktop level, parallel computing provides a viable option to significantly accelerate the execution of such computations. In this paper, a parallel implementation of an EMP-based spectral-spatial classification method for hyperspectral imagery is presented. The parallel implementation is done both on a multi-core CPU and on a GPU. The impact of parallelization on speed-up and classification accuracy is analyzed. For the GPU, the implementation is done in compute unified device architecture (CUDA) C. The experiments are carried out on two well-known hyperspectral images. It is observed from the experimental results that the GPU implementation provides a speed-up of about 7 times, while the parallel implementation on the multi-core CPU results in a speed-up of about 3 times. It is also observed that the parallel implementation has no adverse impact on classification accuracy.

  11. An application specific integrated circuit and data acquisition system for digital X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Beuville, E.; Cederstroem, B.; Danielsson, M.; Luo, L.; Nygren, D.; Oltman, E.; Vestlund, J. [Lawrence Berkeley National Lab., CA (United States)

    1998-04-01

    We have developed an application specific integrated circuit (ASIC) and data acquisition system for digital X-ray imaging. The chip consists of 16 parallel channels, each containing a preamplifier, shaper, comparator and a 16-bit counter. We have demonstrated noiseless single-photon counting over a threshold of 7.2 keV using silicon detectors and are presently capable of maximum counting rates of 2 MHz per channel. The ASIC is controlled by a personal computer through a commercial PCI card, which is also used for data acquisition. The contents of the 16-bit counters are loaded into a shift register and transferred to the PC at any time at a rate of 20 MHz. The system is uncomplicated, low-cost and high-performance, and is optimised for digital X-ray imaging applications. (orig.). 11 refs.

  12. An application specific integrated circuit and data acquisition system for digital X-ray imaging

    International Nuclear Information System (INIS)

    Beuville, E.; Cederstroem, B.; Danielsson, M.; Luo, L.; Nygren, D.; Oltman, E.; Vestlund, J.

    1998-01-01

    We have developed an application specific integrated circuit (ASIC) and data acquisition system for digital X-ray imaging. The chip consists of 16 parallel channels, each containing a preamplifier, shaper, comparator and a 16-bit counter. We have demonstrated noiseless single-photon counting over a threshold of 7.2 keV using silicon detectors and are presently capable of maximum counting rates of 2 MHz per channel. The ASIC is controlled by a personal computer through a commercial PCI card, which is also used for data acquisition. The contents of the 16-bit counters are loaded into a shift register and transferred to the PC at any time at a rate of 20 MHz. The system is uncomplicated, low-cost and high-performance, and is optimised for digital X-ray imaging applications. (orig.)

  13. A higher level language data acquisition system (III) - the user data acquisition program

    International Nuclear Information System (INIS)

    Finn, J.M.; Gulbranson, R.L.; Huang, T.L.

    1983-01-01

    The nuclear physics group at the University of Illinois has implemented a data acquisition system using modified versions of the Concurrent Pascal and Sequential Pascal languages. The user, a physicist, develops a data acquisition "operating system", written in these higher level languages, which is tailored to the planned experiment. The user must include only those system functions which are essential to the task, thus improving efficiency. The user program is constructed from simple modules, mainly consisting of Concurrent Pascal PROCESSes, MONITORs, and CLASSes together with appropriate data type definitions. Entire programs can be put together using "cut and paste" techniques. Planned enhancements include the automating of this process. Systems written for the Perkin-Elmer 3220 using this approach can easily exceed 2 kHz data rates for event-by-event handling; 20 kHz data rates have been achieved by the addition of buffers in the interrupt handling software. These rates have been achieved without the use of special-purpose hardware such as micro-programmed branch drivers. With the addition of such devices even higher data rates should be possible.

  14. A two-level parallel direct search implementation for arbitrarily sized objective functions

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, S.A.; Shadid, N.; Moffat, H.K. [Sandia National Labs., Albuquerque, NM (United States)] [and others]

    1994-12-31

    In the past, many optimization schemes for massively parallel computers have attempted to achieve parallel efficiency using one of two methods. In the case of large and expensive objective function calculations, the optimization itself may be run in serial and the objective function calculations parallelized. In contrast, if the objective function calculations are relatively inexpensive and can be performed on a single processor, then the actual optimization routine itself may be parallelized. In this paper, a scheme based upon the Parallel Direct Search (PDS) technique is presented which allows the objective function calculations to be done on an arbitrarily large number (p2) of processors. If p, the number of processors available, is greater than or equal to 2p2, then the optimization may be parallelized as well. This allows for efficient use of computational resources, since the objective function calculations can be performed on the number of processors that allows peak parallel efficiency, and further speedup may then be achieved by parallelizing the optimization. Results are presented for an optimization problem which involves the solution of a PDE using a finite-element algorithm as part of the objective function calculation. The optimum number of processors for the finite-element calculations is less than p/2. Thus, the PDS method is also parallelized. Performance comparisons are given for an nCUBE 2 implementation.

  15. Spectral analysis of parallel incomplete factorizations with implicit pseudo­-overlap

    NARCIS (Netherlands)

    Magolu monga Made, Mardochée; Vorst, H.A. van der

    2000-01-01

    Two general parallel incomplete factorization strategies are investigated. The techniques may be interpreted as generalized domain decomposition methods. In contrast to classical domain decomposition methods, adjacent subdomains exchange data during the construction of the incomplete

  16. Language Acquisition without an Acquisition Device

    Science.gov (United States)

    O'Grady, William

    2012-01-01

    Most explanatory work on first and second language learning assumes the primacy of the acquisition phenomenon itself, and a good deal of work has been devoted to the search for an "acquisition device" that is specific to humans, and perhaps even to language. I will consider the possibility that this strategy is misguided and that language…

  17. Parallelization of one image compression method. Wavelet, Transform, Vector Quantization and Huffman Coding

    International Nuclear Information System (INIS)

    Moravie, Philippe

    1997-01-01

    Today, in the domain of digitized satellite imagery, the need for large image dimensions is increasing considerably. To transmit or store such images (more than 6000 by 6000 pixels), their data volume must be reduced, which calls for real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favor of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropic Coding (EC). First, we studied and implemented the parallelism of each algorithm, in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures from among the 3x3x2 combinations available. Because, for technological reasons, real-time performance is not reached in every case (for all the compression parameter combinations), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging entropic coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine able to compress digitized satellite images in real time. The question of the best-suited architecture for real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded, and might be used for other applications on board. (author)

  18. ICNTS. Benchmarking of momentum correction techniques

    International Nuclear Information System (INIS)

    Beidler, Craig D.; Isaev, Maxim Yu.; Kasilov, Sergei V.

    2008-01-01

    In the traditional neoclassical ordering, mono-energetic transport coefficients are evaluated using the simplified Lorentz form of the pitch-angle collision operator which violates momentum conservation. In this paper, the parallel momentum balance with radial parallel momentum transport and viscosity terms is analysed, in particular with respect to the radial electric field. Next, the impact of momentum conservation in the stellarator lmfp-regime is estimated for the radial transport and the parallel electric conductivity. Finally, momentum correction techniques are described based on mono-energetic transport coefficients calculated e.g. by the DKES code, and preliminary results for the parallel electric conductivity and the bootstrap current are presented. (author)

  19. A method of verifying period signals based on a data acquisition card

    International Nuclear Information System (INIS)

    Zeng Shaoli

    2005-01-01

    This paper introduces a method for verifying the index voltage of a period signal generator using a data acquisition card; its error is less than 0.5%. A corresponding Win32 program has been developed for the Windows platform; it uses an in-house-developed VxD driver to control the data acquisition card through direct I/O, and multithreading to obtain the best time-scale precision. The program collects index voltage data in real time and automatically measures the period. (authors)
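
    The period-measurement step can be sketched in a few lines. The sketch below is not the paper's Win32/VxD implementation: it simply estimates a signal's period from sampled voltage data by timing rising zero crossings; the sampling rate and test tone are assumed values.

```python
# Minimal sketch: measure a sampled signal's period via rising zero
# crossings. Hardware I/O (the acquisition card and VxD driver) is out of
# scope; a synthetic sine wave stands in for the acquired data.
import numpy as np

def measure_period(samples, sample_rate_hz):
    s = np.asarray(samples, dtype=float)
    # Indices where the signal crosses zero going upward.
    rising = np.where((s[:-1] < 0.0) & (s[1:] >= 0.0))[0]
    if len(rising) < 2:
        return None
    # Average spacing between crossings gives the period.
    return np.mean(np.diff(rising)) / sample_rate_hz

rate = 10_000.0                                  # 10 kHz sampling (assumed)
t = np.arange(0, 0.1, 1.0 / rate)
period = measure_period(np.sin(2 * np.pi * 50 * t), rate)
print(period)  # ~0.02 s for a 50 Hz test tone
```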

  20. Post-Acquisition IT Integration

    DEFF Research Database (Denmark)

    Henningsson, Stefan; Yetton, Philip

    2013-01-01

    The extant research on post-acquisition IT integration analyzes how acquirers realize IT-based value in individual acquisitions. However, serial acquirers make 60% of acquisitions. These acquisitions are not isolated events, but are components in growth-by-acquisition programs. To explain how serial acquirers realize IT-based value, we develop three propositions on the sequential effects on post-acquisition IT integration in acquisition programs. Their combined explanation is that serial acquirers must have a growth-by-acquisition strategy that includes the capability to improve IT integration capabilities, to sustain high alignment across acquisitions and to maintain a scalable IT infrastructure with a flat or decreasing cost structure. We begin the process of validating the three propositions by investigating a longitudinal case study of a growth-by-acquisition program.

  1. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
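
    The key idea of dividing one task's collective operation among several endpoints can be illustrated conceptually. The sketch below is emphatically not the PAMI API: threads stand in for endpoints, and a reduction's data is split among them before the partial results are combined.

```python
# Conceptual sketch only: one task owning several "endpoints" (threads
# here) and dividing a collective operation's data among them. This
# illustrates the principle, not IBM's PAMI interfaces.
from concurrent.futures import ThreadPoolExecutor

def endpoint_reduce(chunk):
    # Each endpoint handles its slice of the collective's data.
    return sum(chunk)

data = list(range(1_000))
n_endpoints = 4                                   # endpoints for one task
chunks = [data[i::n_endpoints] for i in range(n_endpoints)]
with ThreadPoolExecutor(max_workers=n_endpoints) as pool:
    partials = list(pool.map(endpoint_reduce, chunks))
print(sum(partials))  # combined result of the divided reduction
```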

  2. On program restructuring, scheduling, and communication for parallel processor systems

    Energy Technology Data Exchange (ETDEWEB)

    Polychronopoulos, Constantine D. [Univ. of Illinois, Urbana, IL (United States)

    1986-08-01

    This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed with a single goal: to maximize program speedup or, equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler, was used to transform programs into parallel form and to conduct experiments. Two new program restructuring techniques are presented: loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm, and its performance is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication, for idealized program models and for real Fortran programs, are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup is examined and experimental results are presented.
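
    Of the two restructuring techniques named, loop coalescing is easy to illustrate. The sketch below is a generic illustration of the technique, not Parafrase output: a doubly nested loop is flattened into a single loop whose iterations form one pool of independently schedulable work.

```python
# Loop coalescing: flatten a nested loop into a single loop so iterations
# can be scheduled over processors as one pool of work. Toy sizes only.
N, M = 4, 6

# Original doubly nested loop.
out1 = [[i * M + j for j in range(M)] for i in range(N)]

# Coalesced form: one loop of N*M iterations; the original indices are
# recovered with divmod, so each iteration is independently schedulable.
out2 = [[0] * M for _ in range(N)]
for k in range(N * M):
    i, j = divmod(k, M)
    out2[i][j] = i * M + j

assert out1 == out2
```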

  3. Distributed mass data acquisition system based on PCs and windows NT for LHD fusion plasma experiment

    International Nuclear Information System (INIS)

    Nakanishi, H.; Kojima, M.; Ohsuna, M.; Komada, S.; Emoto, M.; Sugisaki, H.; Sudo, S.

    2000-12-01

    A new data acquisition and management system has been developed for the LHD experiment. It has the capability to process 100 MB - 1 GB of raw data within a few tens of seconds after every plasma discharge. It employs a wholly distributed and loosely coupled parallel tasking structure over a fast network, so that the cluster of distributed database servers appears as a single virtual macro-machine. A PC/Windows NT computer is installed for each of the roughly 30 kinds of diagnostic data acquisition, and it controls CAMAC digitizers through optical SCSI extenders. The diagnostic timing system consists of several kinds of VME modules that are installed to remotely control the diagnostic devices in real time. As a whole system, they can distribute the synchronous sampling clocks and programmable triggers for the measurement digitizers. The data retrieval terminals access the database as application service clients and are functionally separated from the data acquisition servers by switched Ethernet. (author)

  4. New Parallel Algorithms for Landscape Evolution Model

    Science.gov (United States)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

    Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of drainage area for each node, which requires a huge amount of communication when run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEM with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute the drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massively parallel computing, and numerical experiments show that they are both adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
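
    The drainage-area computation that drives the communication cost can be sketched compactly. The toy example below (illustrative grid and flow routing, not the authors' deal.II code) accumulates drainage area down a stream net by visiting cells from high to low elevation, which is the serial dependency that the two parallel algorithms must break up.

```python
# Minimal sketch of the serial drainage-area kernel. Each cell drains to
# one receiver; processing cells from high to low elevation propagates
# area downstream.
cell_area = 1.0
elevation = {"a": 4.0, "b": 3.0, "c": 2.0, "d": 1.0}
receiver = {"a": "c", "b": "c", "c": "d", "d": "d"}   # d is the outlet

drainage = {node: cell_area for node in elevation}
for node in sorted(elevation, key=elevation.get, reverse=True):
    if receiver[node] != node:                        # not the outlet
        drainage[receiver[node]] += drainage[node]
print(drainage)  # {'a': 1.0, 'b': 1.0, 'c': 3.0, 'd': 4.0}
```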

  5. Design and DSP implementation of star image acquisition and star point fast acquiring and tracking

    Science.gov (United States)

    Zhou, Guohui; Wang, Xiaodong; Hao, Zhihang

    2006-02-01

    The star sensor is a special high-accuracy photoelectric sensor. Attitude acquisition time is an important performance index of a star sensor. In this paper, the design target is a dynamic performance of 10 samples per second. On the basis of analyzing the CCD signal timing and the star image processing, a new design and a special parallel architecture for improving star image processing are presented. In the design, the operation of moving the data in expanded windows containing stars to the DSP's on-chip memory is scheduled during the invalid period of the CCD frame signal. While the CCD saves the star image to memory, the DSP processes the data already in its on-chip memory. This parallelism greatly improves the efficiency of processing, and the scheme results in enormous savings of the memory normally required. In the scheme, DSP HOLD mode and CPLD technology are used to implement a memory shared between the CCD and the DSP. The efficiency of processing is demonstrated in numerical tests: in the star acquisition stage, the five brightest stars are acquired in only 3.5 ms; in the star tracking stage, the data in five expanded windows containing stars are moved into the internal memory of the DSP in 43 µs, and the five star coordinates are computed in 1.6 ms.
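
    The overlap of transfer and processing amounts to double buffering. The sketch below mimics the idea in software only; queues stand in for the shared memory and the DSP HOLD handshake, which are hardware-specific, and the "window data" and "centroid" computations are placeholders.

```python
# Conceptual double-buffering sketch: while one frame's window data is
# being processed, the next frame is transferred, so transfer and
# processing overlap (as in the CCD/DSP scheme described above).
import threading, queue

frames = queue.Queue(maxsize=1)   # one buffer in flight at a time

def acquisition(n_frames):
    for f in range(n_frames):
        window = [f] * 5          # stand-in for star-window pixel data
        frames.put(window)        # "transfer during the invalid period"
    frames.put(None)              # end of sequence

def processing():
    while (window := frames.get()) is not None:
        centroid = sum(window) / len(window)   # stand-in for star extraction
        print("centroid:", centroid)

t = threading.Thread(target=acquisition, args=(3,))
t.start()
processing()
t.join()
```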

  6. An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.

    Science.gov (United States)

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-03-08

    Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
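
    The priority rule can be sketched briefly. In outcome, keeping each candidate that beats all of its conflicting neighbours in priority is equivalent to greedily accepting candidates in random-priority order, which is what the sequential plane-domain sketch below does; the paper itself works in parallel and with intrinsic on-surface distances rather than planar ones.

```python
# Sketch of priority-based dart throwing, shown serially in the plane.
# Every candidate gets a random, unique priority; a candidate is kept if
# it conflicts with no higher-priority accepted dart.
import random
from math import dist

def poisson_disk(candidates, radius):
    candidates = candidates[:]
    random.shuffle(candidates)          # random, unique priorities
    accepted = []
    for c in candidates:                # highest priority first
        if all(dist(c, a) >= radius for a in accepted):
            accepted.append(c)          # no conflict with accepted darts
    return accepted

pts = [(random.random(), random.random()) for _ in range(2000)]
samples = poisson_disk(pts, radius=0.08)
print(len(samples), "samples, pairwise spacing >= 0.08")
```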

  7. High performance parallelism pearls 2 multicore and many-core programming approaches

    CERN Document Server

    Jeffers, Jim

    2015-01-01

    High Performance Parallelism Pearls Volume 2 offers another set of examples that demonstrate how to leverage parallelism. Similar to Volume 1, the techniques included here explain how to use processors and coprocessors with the same programming - illustrating the most effective ways to combine Xeon Phi coprocessors with Xeon and other multicore processors. The book includes examples of successful programming efforts, drawn from across industries and domains such as biomed, genetics, finance, manufacturing, imaging, and more. Each chapter in this edited work includes detailed explanations of t

  8. Parallel discrete event simulation: A shared memory approach

    Science.gov (United States)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1987-01-01

    With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  9. Optimizing monoscopic kV fluoro acquisition for prostate intrafraction motion evaluation

    International Nuclear Information System (INIS)

    Adamson, Justus; Wu Qiuwen

    2009-01-01

    Monoscopic kV imaging during radiotherapy has recently been implemented for prostate intrafraction motion evaluation. However, the accuracy of 3D localization techniques from monoscopic imaging of the prostate and the effect of acquisition parameters on the 3D accuracy have not been studied in detail, and imaging dose remains a concern. In this paper, we investigate methods to optimize the kV acquisition parameters and imaging protocol to achieve improved 3D localization and 2D image registration accuracy for minimal imaging dose. Prostate motion during radiotherapy was simulated using existing cine-MRI measurements, and was used to investigate the accuracy of various 3D localization techniques and the effect of the kV acquisition protocol. We also investigated the relationship between mAs and the accuracy of the 2D image registration for localization of fiducial markers, and we measured imaging dose for a 30 cm diameter phantom to evaluate the dose necessary to achieve acceptable image registration accuracy. Simulations showed that the error from assuming the shortest path to localize the prostate in 3D using monoscopic imaging during a typical IMRT fraction will be less than ∼1.5 mm for 95% of localizations, and will also depend on the prostate motion distribution, treatment duration, and the image acquisition and treatment protocol. Most of the uncertainty cannot be reduced by imaging more frequently or by acquiring during gantry rotation between beams. The measured maximum surface dose to the cylindrical phantom from monoscopic kV intrafraction acquisitions varied between 0.4 and 5.5 mGy, depending on the acquisition protocol, and was lower than the dose required for CBCT (21.1 mGy). Imaging dose can be lowered by ∼15-40% when mAs is optimized with acquisition angle. Images acquired during MV beam delivery require increased mAs to obtain the same level of registration accuracy, with mAs/registration increasing roughly linearly with field size and dose rate.
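
    The shortest-path assumption has a simple geometric reading: the target is placed at the point on the source-to-marker projection ray that is closest to a reference position such as the planned isocenter. A minimal sketch of that geometry, with made-up coordinates:

```python
# Geometry sketch of "shortest path" monoscopic 3-D localization: the
# closest point on the imaging ray to a reference position. Coordinates
# are illustrative, not clinical data.
import numpy as np

def shortest_path_localization(source, detector_point, reference):
    d = detector_point - source
    d = d / np.linalg.norm(d)               # unit ray direction
    t = np.dot(reference - source, d)       # projection of reference onto ray
    return source + t * d                   # closest point on the ray

src = np.array([0.0, -1000.0, 0.0])         # kV source position (mm)
det = np.array([5.0, 500.0, 3.0])           # marker position on detector
ref = np.array([0.0, 0.0, 0.0])             # planned isocenter
print(shortest_path_localization(src, det, ref))
```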

  10. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    Science.gov (United States)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix is derived in the form of the Schur complement as M^-1 = C - B^* A^-1 B, wherein the matrices C, A, and B are block tridiagonal. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^-1. For closed-chain systems, similar factorizations and O(n) algorithms for computation of the operational space mass matrix Λ and its inverse Λ^-1 are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
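
    In standard notation, the quoted factorization reads as follows; the block-tridiagonal structure of the factors is what allows the recursive O(n) evaluation and its O(log n) parallelization.

```latex
% The mass-matrix factorization restored to standard notation
% (A, B, C block tridiagonal):
\[
  M^{-1} = C - B^{*} A^{-1} B .
\]
% The O(n) algorithm is a recursive evaluation of this Schur-complement
% form; for closed chains, analogous factorizations hold for the
% operational-space mass matrix $\Lambda$ and its inverse $\Lambda^{-1}$.
```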

  11. Visual Analysis of North Atlantic Hurricane Trends Using Parallel Coordinates and Statistical Techniques

    Science.gov (United States)

    2008-07-07

    A system for visual analysis of North Atlantic hurricane trends that combines parallel coordinates with statistical techniques for analyzing multivariate data sets. The system was developed using the Java Development Kit (JDK) version 1.5 and yields interactive performance; it drives a MATLAB script and captures output from MATLAB's "regress" and "stepwisefit" utilities, which perform simple and stepwise regression, respectively.

  12. Automatic Dictionary Expansion Using Non-parallel Corpora

    Science.gov (United States)

    Rapp, Reinhard; Zock, Michael

    Automatically generating bilingual dictionaries from parallel, manually translated texts is a well established technique that works well in practice. However, parallel texts are a scarce resource. Therefore, it is desirable also to be able to generate dictionaries from pairs of comparable monolingual corpora. For most languages, such corpora are much easier to acquire, and often in considerably larger quantities. In this paper we present the implementation of an algorithm which exploits such corpora with good success. Based on the assumption that the co-occurrence patterns between different languages are related, it expands a small base lexicon. For improved performance, it also realizes a novel interlingua approach. That is, if corpora of more than two languages are available, the translations from one language to another can be determined not only directly, but also indirectly via a pivot language.
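
    The core of the approach can be sketched in a few lines: build co-occurrence vectors in each language, map a source word's vector through the small seed lexicon into the target space, and rank target words by similarity. The corpora, lexicon, and window size below are toy stand-ins, not the paper's data.

```python
# Toy sketch of dictionary expansion from comparable corpora via
# co-occurrence vectors mapped through a seed lexicon.
from math import sqrt

def cooc_vector(word, corpus, vocab, window=2):
    vec = {v: 0 for v in vocab}
    for sent in corpus:
        for i, w in enumerate(sent):
            if w == word:
                for n in sent[max(0, i - window):i + window + 1]:
                    if n in vec and n != word:
                        vec[n] += 1
    return vec

def cosine(u, v):
    num = sum(u[k] * v[k] for k in u)
    den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

seed = {"dog": "hund", "eats": "frisst"}            # small base lexicon
src_corpus = [["the", "dog", "eats", "meat"], ["dog", "eats", "fresh", "meat"]]
tgt_corpus = [["der", "hund", "frisst", "fleisch"],
              ["hund", "frisst", "frisches", "fleisch"]]

# Translate the source vector's dimensions via the seed lexicon ...
src_vec = cooc_vector("meat", src_corpus, seed.keys())
mapped = {seed[k]: v for k, v in src_vec.items()}
# ... and rank target words by similarity in the shared space.
for cand in ["fleisch", "der"]:
    tgt_vec = cooc_vector(cand, tgt_corpus, seed.values())
    print(cand, round(cosine(mapped, tgt_vec), 3))   # "fleisch" ranks highest
```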

  13. Bioinformatics algorithm based on a parallel implementation of a machine learning approach using transducers

    International Nuclear Information System (INIS)

    Roche-Lima, Abiel; Thulasiram, Ruppa K

    2012-01-01

    Finite automata in which each transition is augmented with an output label, in addition to the familiar input label, are called finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics: weighted finite-state transducers have been proposed for pairwise alignment of DNA and protein sequences, as well as for developing kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on computing conditional probabilities, using techniques such as pair-database creation, normalization (Maximum-Likelihood normalization) and parameter optimization (Expectation-Maximization, EM). These techniques are intrinsically expensive to compute, all the more so when applied to bioinformatics, where database sizes are large. In this work, we describe a parallel implementation of an algorithm that learns conditional transducers using these techniques. The algorithm is oriented towards bioinformatics applications such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were carried out with the parallel and sequential algorithms on WestGrid (specifically, on the Breeze cluster). The results show that the parallel algorithm is scalable: execution times are reduced considerably, relative to the sequential algorithm, as the data size increases. In a further experiment varying the precision parameter, the parallel algorithm again achieved smaller execution times. Finally, the number of threads used to execute the parallel algorithm on the Breeze cluster was varied; speedup increased considerably with more threads, converging for 16 or more threads.

  14. An improved data acquisition system for isotopic ratio mass spectrometers

    International Nuclear Information System (INIS)

    Saha, T.K.; Reddy, B.; Nazare, C.K.; Handu, V.K.

    1999-01-01

    Isotopic ratio mass spectrometers are designed and fabricated to measure isotopic ratios with a precision of better than 0.05%. In order to achieve this precision, the measurement system, consisting of ion-signal-to-voltage converters, analog-to-digital converters, and data acquisition electronics, should be at least one order of magnitude better than the overall precision of measurement. Using state-of-the-art components and techniques, a data acquisition system, an improved version of the earlier system, has been designed and developed for use with multi-collector isotopic ratio mass spectrometers

  15. Parallel computing simulation of fluid flow in the unsaturated zone of Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Bodvarsson, G.S.

    2001-01-01

    This paper presents the application of parallel computing techniques to large-scale modeling of fluid flow in the unsaturated zone (UZ) at Yucca Mountain, Nevada. In this study, parallel computing techniques, as implemented in the TOUGH2 code, are applied in large-scale numerical simulations on a distributed-memory parallel computer. The modeling study has been conducted using a three-dimensional numerical model of over one million cells, which incorporates a wide variety of field data for the highly heterogeneous fractured formation at Yucca Mountain. The objective of this study is to analyze the impact of various surface infiltration scenarios (under current and possible future climates) on flow through the UZ system, using various hydrogeological conceptual models with refined grids. The results indicate that the one-million-cell models produce higher-resolution results and reveal flow patterns that cannot be obtained using coarse-grid models.

  16. Development of three-dimensional neoclassical transport simulation code with high performance Fortran on a vector-parallel computer

    International Nuclear Information System (INIS)

    Satake, Shinsuke; Okamoto, Masao; Nakajima, Noriyoshi; Takamaru, Hisanori

    2005-11-01

    A neoclassical transport simulation code (FORTEC-3D) applicable to three-dimensional configurations has been developed using High Performance Fortran (HPF). Adopting parallelization techniques and a hybrid simulation model for the δf Monte-Carlo transport simulation, including non-local transport effects in three-dimensional configurations, makes it possible to simulate the dynamics of global, non-local transport phenomena with a self-consistent radial electric field within a reasonable computation time. In this paper, the development of the transport code using HPF is reported. Optimization techniques for achieving both high vectorization and parallelization efficiency, the adoption of a parallel random number generator, and benchmark results are shown. (author)

  17. Hybrid parallel computing architecture for multiview phase shifting

    Science.gov (United States)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

    The multiview phase-shifting method shows its powerful capability in achieving high-resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability comes at very high computation cost, so the 3-D computations have to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit cooperates with the graphics processing unit (GPU) to achieve hybrid parallel computing. The high-computation-cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented on the GPU, and a three-layer kernel function model is designed to realize coarse-grained and fine-grained parallel computing simultaneously. Experimental results verify that the developed system can perform real-time 3-D measurement at 50 fps (frames per second) with 260 K 3-D points per frame. A speedup of up to 180 times is obtained using an NVIDIA GT560Ti graphics card, compared with a sequential C implementation on a 3.4 GHz Intel Core i7 3770.
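
    The per-pixel phase computation at the heart of such a pipeline is a natural fit for data-parallel hardware. The sketch below shows the standard four-step phase-shifting formula, phi = atan2(I4 - I2, I1 - I3), with NumPy vectorization standing in for the paper's GPU kernels; the synthetic fringe images are illustrative.

```python
# Per-pixel wrapped-phase computation for four-step phase shifting.
# With I_k = A + B*cos(phi + k*pi/2), k = 0..3, it follows that
# I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi).
import numpy as np

h, w = 480, 640
x = np.linspace(0, 8 * np.pi, w)
true_phase = np.tile(x, (h, 1))

# Four synthetic fringe images with 90-degree phase shifts.
I = [100 + 50 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]

wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])   # per-pixel, data-parallel
print(wrapped.shape, wrapped.min(), wrapped.max())
```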

  18. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, the Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed-memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms that exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction is suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; the implementation of PPM also includes a lightweight runtime library that runs on top of an existing network communication software layer (e.g., MPI). The design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  19. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing, together with the increasing trend from single-processor to parallel computer architectures, has driven the adoption of parallel computing. To benefit from parallel computing power, parallel algorithms are usually defined that can be mapped and executed

  20. A Simple Method for Static Load Balancing of Parallel FDTD Codes

    DEFF Research Database (Denmark)

    Franek, Ondrej

    2016-01-01

    A static method for balancing computational loads in parallel implementations of the finite-difference time-domain method is presented. The procedure is fairly straightforward and computationally inexpensive, thus providing an attractive alternative to optimization techniques. The method is descri...
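
    As a generic illustration of static load balancing (the paper's actual procedure is not detailed in this excerpt), the sketch below partitions a 1-D domain in proportion to assumed per-node throughputs, so that all ranks would finish a time step at roughly the same time. The throughput figures are made up.

```python
# Generic static load balancing: split a 1-D FDTD-style domain into slabs
# whose widths are proportional to each node's measured throughput.
def partition(total_cells, throughputs):
    total = sum(throughputs)
    cuts, assigned = [], 0
    for i, s in enumerate(throughputs):
        if i < len(throughputs) - 1:
            width = round(total_cells * s / total)
        else:
            width = total_cells - assigned   # remainder goes to the last rank
        cuts.append(width)
        assigned += width
    return cuts

print(partition(1000, [1.0, 1.0, 2.0, 0.5]))  # -> [222, 222, 444, 112]
```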