WorldWideScience

Sample records for parallel acquisition technique

  1. New partially parallel acquisition technique in cerebral imaging: preliminary findings

    International Nuclear Information System (INIS)

    Tintera, Jaroslav; Gawehn, Joachim; Bauermann, Thomas; Vucurevic, Goran; Stoeter, Peter

    2004-01-01

In MRI applications where short acquisition times are necessary, the increase in acquisition speed often comes at the expense of image resolution and SNR. In such cases, the newly developed parallel acquisition techniques can provide images without these limitations and in a reasonably shortened measurement time. A newly designed eight-channel head coil array (i-PAT coil) allowing parallel acquisition of independently reconstructed images (GRAPPA mode) was tested for its applicability in neuroradiology. Image homogeneity was tested in a standard phantom and in healthy volunteers. BOLD signal changes were studied in a group of six volunteers using finger-tapping stimulation. Phantom studies revealed an important drop of signal in the center of the image, even after the use of a normalization filter, and an important increase of artifact power with reduction of measurement time, strongly depending on the combination of acceleration parameters. The additional application of a parallel acquisition technique such as GRAPPA decreases measurement time by about 30%, but further reduction is often possible only at the expense of SNR. This technique performs best in conditions in which imaging speed is important, such as CE MRA, but its time resolution still does not allow the acquisition of angiograms separating the arterial and venous phases. Significantly larger areas of BOLD activation were found using the i-PAT coil compared to the standard head coil. Because the i-PAT coil is an eight-channel surface coil array, peripheral cortical structures profit from its high SNR, which benefits high-resolution imaging of small cortical dysplasias and BOLD-contrast functional imaging of cortical areas. In BOLD contrast imaging, susceptibility artifacts are reduced, but only if an appropriate combination of acceleration parameters is used. (orig.)
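
    As a back-of-the-envelope illustration of the trade-off reported above, the following sketch applies the textbook parallel-imaging relation SNR_PAT = SNR_full/(g*sqrt(R)), where R is the acceleration factor and g >= 1 the coil geometry factor; all numbers are hypothetical and not taken from the study.

```python
# Minimal sketch of the parallel-imaging time/SNR trade-off, assuming the
# standard relation SNR_pat = SNR_full / (g * sqrt(R)) for acceleration
# factor R and coil geometry factor g (both values hypothetical here).
def accelerated(scan_time_s, snr, R, g):
    """Return (scan time, SNR) after undersampling k-space by factor R."""
    return scan_time_s / R, snr / (g * R ** 0.5)

# Halving the scan (R = 2) with a typical g of 1.2 costs ~40% of the SNR:
print(accelerated(180.0, 100.0, R=2, g=1.2))   # -> (90.0, 58.92...)
```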

  2. New partially parallel acquisition technique in cerebral imaging: preliminary findings

    Energy Technology Data Exchange (ETDEWEB)

    Tintera, Jaroslav [Institute for Clinical and Experimental Medicine, Prague (Czech Republic); Gawehn, Joachim; Bauermann, Thomas; Vucurevic, Goran; Stoeter, Peter [University Clinic Mainz, Institute of Neuroradiology, Mainz (Germany)

    2004-12-01

In MRI applications where short acquisition times are necessary, the increase in acquisition speed often comes at the expense of image resolution and SNR. In such cases, the newly developed parallel acquisition techniques can provide images without these limitations and in a reasonably shortened measurement time. A newly designed eight-channel head coil array (i-PAT coil) allowing parallel acquisition of independently reconstructed images (GRAPPA mode) was tested for its applicability in neuroradiology. Image homogeneity was tested in a standard phantom and in healthy volunteers. BOLD signal changes were studied in a group of six volunteers using finger-tapping stimulation. Phantom studies revealed an important drop of signal in the center of the image, even after the use of a normalization filter, and an important increase of artifact power with reduction of measurement time, strongly depending on the combination of acceleration parameters. The additional application of a parallel acquisition technique such as GRAPPA decreases measurement time by about 30%, but further reduction is often possible only at the expense of SNR. This technique performs best in conditions in which imaging speed is important, such as CE MRA, but its time resolution still does not allow the acquisition of angiograms separating the arterial and venous phases. Significantly larger areas of BOLD activation were found using the i-PAT coil compared to the standard head coil. Because the i-PAT coil is an eight-channel surface coil array, peripheral cortical structures profit from its high SNR, which benefits high-resolution imaging of small cortical dysplasias and BOLD-contrast functional imaging of cortical areas. In BOLD contrast imaging, susceptibility artifacts are reduced, but only if an appropriate combination of acceleration parameters is used. (orig.)

  3. Rapid musculoskeletal magnetic resonance imaging using integrated parallel acquisition techniques (IPAT) - Initial experiences

    International Nuclear Information System (INIS)

    Romaneehsen, B.; Oberholzer, K.; Kreitner, K.-F.; Mueller, L.P.

    2003-01-01

Purpose: To investigate the feasibility of using multiple receiver coil elements for time-saving integrated parallel imaging techniques (iPAT) in traumatic musculoskeletal disorders. Material and methods: 6 patients with traumatic derangements of the knee, ankle and hip underwent MR imaging at 1.5 T. For signal detection at the knee and ankle, we used a 6-channel body array coil placed around the joint; for hip imaging, two 4-channel body array coils and two elements of the spine array coil were combined for signal detection. All patients were investigated with a standard imaging protocol that mainly consisted of different turbo spin-echo sequences (PD- and T2-weighted TSE with and without fat suppression, STIR). All sequences were repeated with an integrated parallel acquisition technique (iPAT) using a modified sensitivity encoding (mSENSE) technique with an acceleration factor of 2. Overall image quality, as well as the ability to detect pathologic findings, was subjectively assessed using a five-point scale. Results: Regarding overall image quality, there were no significant differences between standard imaging and imaging using mSENSE. All pathologies (occult fracture, meniscal tear, torn and interpositioned Hoffa's cleft, cartilage damage) were detected by both techniques. iPAT led to a 48% reduction of acquisition time compared with the standard technique. Additionally, the time savings with iPAT led to a decrease in pain-induced motion artifacts in two cases. Conclusion: In times of increasing cost pressure, iPAT using multiple coil elements seems to be an efficient and economic tool for fast musculoskeletal imaging, with diagnostic performance comparable to conventional techniques. (orig.) [de]

  4. Rapid musculoskeletal magnetic resonance imaging using integrated parallel acquisition techniques (IPAT) - Initial experiences

    Energy Technology Data Exchange (ETDEWEB)

    Romaneehsen, B.; Oberholzer, K.; Kreitner, K.-F. [Johannes Gutenberg-Univ. Mainz (Germany). Klinik und Poliklinik fuer Radiologie; Mueller, L.P. [Johannes Gutenberg-Univ. Mainz (Germany). Klinik und Poliklinik fuer Unfallchirurgie

    2003-09-01

Purpose: To investigate the feasibility of using multiple receiver coil elements for time-saving integrated parallel imaging techniques (iPAT) in traumatic musculoskeletal disorders. Material and methods: 6 patients with traumatic derangements of the knee, ankle and hip underwent MR imaging at 1.5 T. For signal detection at the knee and ankle, we used a 6-channel body array coil placed around the joint; for hip imaging, two 4-channel body array coils and two elements of the spine array coil were combined for signal detection. All patients were investigated with a standard imaging protocol that mainly consisted of different turbo spin-echo sequences (PD- and T2-weighted TSE with and without fat suppression, STIR). All sequences were repeated with an integrated parallel acquisition technique (iPAT) using a modified sensitivity encoding (mSENSE) technique with an acceleration factor of 2. Overall image quality, as well as the ability to detect pathologic findings, was subjectively assessed using a five-point scale. Results: Regarding overall image quality, there were no significant differences between standard imaging and imaging using mSENSE. All pathologies (occult fracture, meniscal tear, torn and interpositioned Hoffa's cleft, cartilage damage) were detected by both techniques. iPAT led to a 48% reduction of acquisition time compared with the standard technique. Additionally, the time savings with iPAT led to a decrease in pain-induced motion artifacts in two cases. Conclusion: In times of increasing cost pressure, iPAT using multiple coil elements seems to be an efficient and economic tool for fast musculoskeletal imaging, with diagnostic performance comparable to conventional techniques. (orig.) [German] Purpose: To use integrated parallel acquisition techniques (iPAT) to shorten the examination time in musculoskeletal injuries. Material and methods: 6 patients with knee, ankle or hip trauma were examined at 1.5 T

  5. Fast magnetic resonance imaging of the knee using a parallel acquisition technique (mSENSE): a prospective performance evaluation

    International Nuclear Information System (INIS)

    Kreitner, K.F.; Romaneehsen, Bernd; Oberholzer, Katja; Dueber, Christoph; Krummenauer, Frank; Mueller, L.P.

    2006-01-01

The performance of a magnetic resonance (MR) imaging strategy that uses multiple receiver coil elements and integrated parallel imaging techniques (iPAT) in traumatic and degenerative disorders of the knee was evaluated and compared with a standard MR imaging protocol. Ninety patients with suspected internal derangements of the knee joint prospectively underwent MR imaging at 1.5 T. For signal detection, a 6-channel array coil was used. All patients were investigated with a standard imaging protocol consisting of different turbo spin-echo sequences (proton density (PD) and T2-weighted turbo spin echo (TSE), with and without fat suppression) in three imaging planes. All sequences were repeated with an integrated parallel acquisition technique (iPAT) using the modified sensitivity encoding (mSENSE) algorithm with an acceleration factor of 2. Two radiologists independently evaluated and scored all images with regard to overall image quality, artefacts and pathologic findings. Agreement of the parallel ratings between readers and between imaging techniques was evaluated by means of pairwise kappa coefficients stratified for the area of evaluation. Agreement between the parallel readers for both the iPAT imaging and the conventional technique, as well as between imaging techniques, was encouraging, with inter-observer kappa values ranging between 0.78 and 0.98 for both imaging techniques, and inter-method kappa values ranging between 0.88 and 1.00 for both clinical readers. All pathological findings (e.g. occult fractures, meniscal and cruciate ligament tears, torn and interpositioned Hoffa's cleft, cartilage damage) were detected by both techniques with comparable performance. The use of iPAT led to a 48% reduction of acquisition time compared with the standard technique. Parallel imaging using mSENSE proved to be an efficient and economic tool for fast musculoskeletal MR imaging of the knee joint with comparable
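
    For readers unfamiliar with the agreement statistic used here, the sketch below computes an unweighted Cohen's kappa for two readers' five-point scores. The scores are hypothetical, and the study's exact variant (pairwise kappa stratified by area of evaluation) is not reproduced.

```python
import numpy as np

def cohen_kappa(r1, r2, n_categories):
    """Chance-corrected agreement between two readers' ordinal scores."""
    cm = np.zeros((n_categories, n_categories))
    np.add.at(cm, (np.asarray(r1), np.asarray(r2)), 1)   # confusion matrix
    cm /= cm.sum()
    po = np.trace(cm)                      # observed agreement
    pe = cm.sum(axis=0) @ cm.sum(axis=1)   # agreement expected by chance
    return (po - pe) / (1 - pe)

# hypothetical 1-5 image-quality ratings from two readers (six cases)
print(cohen_kappa([4, 4, 3, 5, 4, 2], [4, 4, 3, 5, 3, 2], n_categories=6))
```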

  6. VIBE with parallel acquisition technique - a novel approach to dynamic contrast-enhanced MR imaging of the liver

    International Nuclear Information System (INIS)

    Dobritz, M.; Radkow, T.; Bautz, W.; Fellner, F.A.; Nittka, M.

    2002-01-01

Purpose: The VIBE (volume interpolated breath-hold examination) sequence in combination with a parallel acquisition technique (iPAT: integrated parallel acquisition technique) allows dynamic contrast-enhanced MRI of the liver with high temporal and spatial resolution. The aim of this study was to obtain first clinical experience with this technique for the detection and characterization of focal liver lesions. Materials and methods: We examined 10 consecutive patients using a 1.5 T MR system (gradient field strength 30 mT/m) with a phased-array coil combination. The following sequences were acquired: T2-weighted TSE and plain T1-weighted FLASH; after administration of gadolinium, 6 VIBE sequences with iPAT (TR/TE/matrix/partition thickness/acquisition time: 6.2 ms/3.2 ms/256 x 192/4 mm/13 s), as well as T1-weighted FLASH with fat saturation. Two observers evaluated the different sequences with regard to the number of lesions and their character (benign or malignant). The following lesions were found: hepatocellular carcinoma (5 patients), hemangioma (2), metastasis (1), cyst (1), adenoma (1). Results: The VIBE sequences were superior for the detection of lesions with arterial hyperperfusion, with a total of 33 focal lesions detected; 21 lesions were found with T2-weighted TSE and 20 with plain T1-weighted FLASH. Diagnostic accuracy increased with the VIBE sequence in comparison to the other sequences. Conclusion: VIBE with iPAT allows MR imaging of the liver with high spatial and temporal resolution, providing dynamic contrast-enhanced information about the whole liver. This may lead to improved detection of liver lesions, especially hepatocellular carcinoma. (orig.) [de]

  7. MR sialography: evaluation of an ultra-fast sequence in consideration of a parallel acquisition technique and different functional conditions in patients with salivary gland diseases

    International Nuclear Information System (INIS)

    Petridis, C.; Ries, T.; Cramer, M.C.; Graessner, J.; Petersen, K.U.; Reitmeier, F.; Jaehne, M.; Weiss, F.; Adam, G.; Habermann, C.R.

    2007-01-01

Purpose: To evaluate an ultra-fast sequence for MR sialography requiring no post-processing, and to compare the acquisition technique with regard to the effect of oral stimulation and of a parallel acquisition technique in patients with salivary gland diseases. Materials and methods: 128 patients with salivary gland disease were prospectively examined using a 1.5-T superconducting system with a 30 mT/m maximum gradient capability and a maximum slew rate of 125 mT/m/sec. A single-shot turbo-spin-echo sequence (ss-TSE) with an acquisition time of 2.8 sec was used in transverse and oblique sagittal orientation. All images were obtained with and without a parallel imaging technique. The evaluation of the ductal system of the parotid and submandibular gland was performed using a visual scale of 1-5 for each side. The images were assessed by two independent experienced radiologists. An ANOVA with post-hoc comparisons and an overall two-tailed significance level of p=0.05 was used for the statistical evaluation. An intraclass correlation was computed to evaluate interobserver variability, with a correlation of >0.8 taken to indicate high correlation. Results: Depending on the diagnosed disease and in the absence of duct abruption, all parts of the excretory ducts could be visualized in all patients using the developed technique, with an overall rating for all ducts of 2.70 (SD±0.89). A high correlation was achieved between the two observers, with an intraclass correlation of 0.73. Oral application of a sialogogue improved the visibility of the excretory ducts significantly (p<0.001). In contrast, the use of a parallel imaging technique led to a significant decrease in image quality (p=0.011). (orig.)

  8. MR-sialography: optimisation and evaluation of an ultra-fast sequence in parallel acquisition technique and different functional conditions of salivary glands

    International Nuclear Information System (INIS)

    Habermann, C.R.; Cramer, M.C.; Aldefeld, D.; Weiss, F.; Kaul, M.G.; Adam, G.; Graessner, J.; Reitmeier, F.; Jaehne, M.; Petersen, K.U.

    2005-01-01

Purpose: To optimise a fast sequence for MR sialography and to compare parallel and non-parallel acquisition techniques. Additionally, the effect of oral stimulation on image quality was evaluated. Material and methods: All examinations were performed using a 1.5-T superconducting system. After developing a sufficient sequence for MR sialography, a single-shot turbo-spin-echo sequence (ss-TSE) with an acquisition time of 2.8 sec was used in transverse and oblique sagittal orientation in 27 healthy volunteers. All images were acquired with and without a parallel imaging technique. The assessment of the ductal system of the submandibular and parotid gland was performed using a 1 to 5 visual scale for each side separately. Images were evaluated by four independent experienced radiologists. For statistical evaluation, an ANOVA with post-hoc comparisons was used with an overall two-tailed significance level of p=0.05. For evaluation of interobserver variability, an intraclass correlation was computed, with a correlation >0.8 taken to indicate high correlation. Results: All parts of the salivary excretory ducts could be visualised in all volunteers, with an overall rating for all ducts of 2.26 (SD±1.09). A high correlation was obtained between the four observers, with an intraclass correlation of 0.9475. No significant influence of slice angulation was found (p=0.74). In all healthy volunteers the visibility of the excretory ducts improved significantly after oral application of a sialogogue (p<0.001; η²=0.049). The use of a parallel imaging technique did not lead to an improvement of visualisation, showing a significant loss of image quality compared to an acquisition technique without parallel imaging (p<0.001; η²=0.013). Conclusion: The optimised ss-TSE MR sialography seems to be a fast and sufficient technique for visualisation of the excretory ducts of the main salivary glands, with no elaborate post-processing needed. To improve results of MR
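
    The η² effect sizes quoted in this record (and its duplicate below) are the between-group share of total variance from the ANOVA. A minimal sketch with hypothetical visibility scores:

```python
import numpy as np

def one_way_anova(groups):
    """Return (F statistic, eta squared) for a one-way ANOVA."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    df_b, df_w = len(groups) - 1, len(all_x) - len(groups)
    F = (ss_between / df_b) / (ss_within / df_w)
    eta_sq = ss_between / (ss_between + ss_within)   # effect size as reported above
    return F, eta_sq

# hypothetical duct-visibility scores without / with oral stimulation
print(one_way_anova([[2, 3, 2, 3, 2], [3, 4, 3, 4, 4]]))
```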

  9. MR-sialography: optimisation and evaluation of an ultra-fast sequence in parallel acquisition technique and different functional conditions of salivary glands; MR-Sialographie: Optimierung und Bewertung ultraschneller Sequenzen mit paralleler Bildgebung und oraler Stimulation

    Energy Technology Data Exchange (ETDEWEB)

    Habermann, C.R.; Cramer, M.C.; Aldefeld, D.; Weiss, F.; Kaul, M.G.; Adam, G. [Radiologisches Zentrum, Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie, Universitaetsklinikum Hamburg-Eppendorf (Germany); Graessner, J. [Siemens Medical Systems, Hamburg (Germany); Reitmeier, F.; Jaehne, M. [Kopf- und Hautzentrum, Klinik und Poliklinik fuer Hals-, Nasen- und Ohrenheilkunde, Universitaetsklinikum Hamburg-Eppendorf (Germany); Petersen, K.U. [Zentrum fuer Psychosoziale Medizin, Klinik und Poliklinik fuer Psychiatrie und Psychotherapie, Universitaetsklinikum Hamburg-Eppendorf (Germany)

    2005-04-01

Purpose: To optimise a fast sequence for MR sialography and to compare parallel and non-parallel acquisition techniques. Additionally, the effect of oral stimulation on image quality was evaluated. Material and methods: All examinations were performed using a 1.5-T superconducting system. After developing a sufficient sequence for MR sialography, a single-shot turbo-spin-echo sequence (ss-TSE) with an acquisition time of 2.8 sec was used in transverse and oblique sagittal orientation in 27 healthy volunteers. All images were acquired with and without a parallel imaging technique. The assessment of the ductal system of the submandibular and parotid gland was performed using a 1 to 5 visual scale for each side separately. Images were evaluated by four independent experienced radiologists. For statistical evaluation, an ANOVA with post-hoc comparisons was used with an overall two-tailed significance level of p=0.05. For evaluation of interobserver variability, an intraclass correlation was computed, with a correlation >0.8 taken to indicate high correlation. Results: All parts of the salivary excretory ducts could be visualised in all volunteers, with an overall rating for all ducts of 2.26 (SD±1.09). A high correlation was obtained between the four observers, with an intraclass correlation of 0.9475. No significant influence of slice angulation was found (p=0.74). In all healthy volunteers the visibility of the excretory ducts improved significantly after oral application of a sialogogue (p<0.001; η²=0.049). The use of a parallel imaging technique did not lead to an improvement of visualisation, showing a significant loss of image quality compared to an acquisition technique without parallel imaging (p<0.001; η²=0.013). Conclusion: The optimised ss-TSE MR sialography seems to be a fast and sufficient technique for visualisation of the excretory ducts of the main salivary glands, with no elaborate post-processing needed. To improve results of MR

  10. VIBE with parallel acquisition technique - a novel approach to dynamic contrast-enhanced MR imaging of the liver; VIBE mit paralleler Akquisitionstechnik - eine neue Moeglichkeit der dynamischen kontrastverstaerkten MRT der Leber

    Energy Technology Data Exchange (ETDEWEB)

    Dobritz, M.; Radkow, T.; Bautz, W.; Fellner, F.A. [Inst. fuer Diagnostische Radiologie, Friedrich-Alexander-Univ. Erlangen-Nuernberg (Germany); Nittka, M. [Siemens Medical Solutions, Erlangen (Germany)

    2002-06-01

Purpose: The VIBE (volume interpolated breath-hold examination) sequence in combination with a parallel acquisition technique (iPAT: integrated parallel acquisition technique) allows dynamic contrast-enhanced MRI of the liver with high temporal and spatial resolution. The aim of this study was to obtain first clinical experience with this technique for the detection and characterization of focal liver lesions. Materials and methods: We examined 10 consecutive patients using a 1.5 T MR system (gradient field strength 30 mT/m) with a phased-array coil combination. The following sequences were acquired: T2-weighted TSE and plain T1-weighted FLASH; after administration of gadolinium, 6 VIBE sequences with iPAT (TR/TE/matrix/partition thickness/acquisition time: 6.2 ms/3.2 ms/256 x 192/4 mm/13 s), as well as T1-weighted FLASH with fat saturation. Two observers evaluated the different sequences with regard to the number of lesions and their character (benign or malignant). The following lesions were found: hepatocellular carcinoma (5 patients), hemangioma (2), metastasis (1), cyst (1), adenoma (1). Results: The VIBE sequences were superior for the detection of lesions with arterial hyperperfusion, with a total of 33 focal lesions detected; 21 lesions were found with T2-weighted TSE and 20 with plain T1-weighted FLASH. Diagnostic accuracy increased with the VIBE sequence in comparison to the other sequences. Conclusion: VIBE with iPAT allows MR imaging of the liver with high spatial and temporal resolution, providing dynamic contrast-enhanced information about the whole liver. This may lead to improved detection of liver lesions, especially hepatocellular carcinoma. (orig.) [German] Purpose: The VIBE sequence (Volume Interpolated Breath-hold Examination) in combination with parallel imaging (iPAT) enables a dynamic contrast-enhanced examination of the liver at high temporal and spatial resolution. The aim was to gain first clinical experience with this technique in the detection of focal

  11. Data acquisition techniques

    International Nuclear Information System (INIS)

    Dougherty, R.C.

    1976-01-01

    Testing neutron generators and major subassemblies has undergone a transition in the past few years. Digital information is now used for storage and analysis. The key to the change is the availability of a high-speed digitizer system. The status of the Sandia Laboratory data acquisition and handling system as applied to this area is surveyed. 1 figure

  12. Application of parallel preprocessors in data acquisition

    International Nuclear Information System (INIS)

    Butler, H.S.; Cooper, M.D.; Williams, R.A.; Hughes, E.B.; Rolfe, J.R.; Wilson, S.L.; Zeman, H.D.

    1981-01-01

    A data-acquisition system is being developed for a large-scale experiment at LAMPF. It will make use of four microprocessors running in parallel to acquire and preprocess data from 432 photomultiplier tubes (PMT) attached to 396 NaI crystals. The microprocessors are LSI-11/23s operating through CAMAC Auxiliary Crate Controllers (ACC). Data acquired by the microprocessors will be collected through a programmable Branch Driver (MBD) which also will read data from 52 scintillators (88 PMTs) and 728 wires comprising a drift chamber. The MBD will transfer data from each event into a PDP-11/44 for further processing and taping. The microprocessors will perform the secondary function of monitoring the calibration of the NaI PMTs. A special trigger circuit allows the system to stack data from a second event while the first is still being processed. Major components of the system were tested in April 1981. Timing measurements from this test are reported
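
    The "stack data from a second event while the first is still being processed" scheme is essentially double buffering between the trigger and the preprocessing stage. A minimal threaded sketch (all names and numbers hypothetical, not the LAMPF hardware):

```python
import queue, random, threading, time

buf = queue.Queue(maxsize=2)   # two-deep buffering: one event processed, one stacked

def trigger(n_events):
    # front end: digitize events and stack them without waiting for processing
    for evt in range(n_events):
        data = [random.random() for _ in range(8)]  # hypothetical PMT pulse heights
        buf.put((evt, data))                        # blocks only if both slots are full
    buf.put(None)                                   # end-of-run marker

def preprocess():
    # microprocessor side: per-event pedestal subtraction / calibration check
    while (item := buf.get()) is not None:
        evt, data = item
        calibrated = [x - 0.05 for x in data]       # hypothetical pedestal value
        time.sleep(0.001)                           # stands in for processing time

t = threading.Thread(target=trigger, args=(100,)); t.start()
preprocess(); t.join()
```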

  13. Data acquisition techniques using PC

    CERN Document Server

    Austerlitz, Howard

    1991-01-01

Data Acquisition Techniques Using Personal Computers contains all the information required by a technical professional (engineer, scientist, technician) to implement a PC-based acquisition system. Covering both basic tutorial information and some advanced topics, this work is suitable as a reference book for engineers or as a supplemental text for engineering students. It gives the reader enough understanding of the topics to implement a data acquisition system based on commercial products. A reader can alternatively learn how to custom-build hardware or write his or her own software.

  14. Dynamic MRI of the liver with parallel acquisition technique. Characterization of focal liver lesions and analysis of the hepatic vasculature in a single MRI session

    International Nuclear Information System (INIS)

    Heilmaier, C.; Sutter, R.; Lutz, A.M.; Willmann, J.K.; Seifert, B.

    2008-01-01

Purpose: To retrospectively evaluate the performance of breath-hold contrast-enhanced 3D dynamic parallel gradient echo MRI (pMRT) for the characterization of focal liver lesions (standard of reference: histology) and for the analysis of the hepatic vasculature (standard of reference: contrast-enhanced 64-detector row computed tomography; MSCT) in a single MRI session. Materials and methods: Two blinded readers independently analyzed preoperative pMRT data sets (1.5 T) of 45 patients (23 men, 22 women; 28-77 years, average age 48 years) with a total of 68 focal liver lesions with regard to the image quality of the hepatic arteries, portal and hepatic veins, the presence of variant anatomy of the hepatic vasculature, and the presence of portal vein thrombosis and hemodynamically significant arterial stenosis. In addition, both readers were asked to identify and characterize focal liver lesions. Imaging parameters of pMRT were TR/TE/matrix/slice thickness/acquisition time: 3.1 ms/1.4 ms/384 x 224/4 mm/15-17 s. MSCT was performed with a pitch of 1.2, an effective slice thickness of 1 mm and a matrix of 512 x 512. Results: Based on histology, the 68 liver lesions comprised 42 hepatocellular carcinomas (HCC), 20 metastases, 3 cholangiocellular carcinomas (CCC), 1 dysplastic nodule, 1 focal nodular hyperplasia (FNH) and 1 atypical hemangioma. Overall, diagnostic accuracy was high for both readers (91-100%) in the characterization of these focal liver lesions, with excellent interobserver agreement (κ-values of 0.89 [metastases], 0.97 [HCC] and 1 [CCC]). On average, the image quality of all vessels under consideration was rated good or excellent in 89% (reader 1) and 90% (reader 2). Anatomical variants of the hepatic arteries, hepatic veins and portal vein, as well as thrombosis of the portal vein, were reliably detected by pMRT. Significant arterial stenosis was found with a sensitivity between 86% and 100% and excellent interobserver agreement (κ

  15. Synchronization Techniques in Parallel Discrete Event Simulation

    OpenAIRE

    Lindén, Jonatan

    2018-01-01

    Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...
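
    A common conservative answer to the causality challenge mentioned here is window-based synchronization: every event earlier than the global minimum timestamp plus the model's lookahead is safe to process concurrently. A minimal sketch with two logical processes and a toy forwarding rule (all names and values illustrative):

```python
import heapq

LOOKAHEAD = 1.0   # minimum delay of any newly scheduled event (a model property)

class LP:
    """A logical process with its own event queue and local clock."""
    def __init__(self, name, events):
        self.name, self.queue, self.clock = name, list(events), 0.0
        heapq.heapify(self.queue)
    def next_ts(self):
        return self.queue[0][0] if self.queue else float("inf")
    def process_window(self, bound, lps):
        # every event with timestamp < bound is causally safe to process now
        while self.queue and self.queue[0][0] < bound:
            ts, hops = heapq.heappop(self.queue)
            self.clock = ts
            if hops > 0:   # toy rule: forward to the next LP after LOOKAHEAD
                nxt = lps[(lps.index(self) + 1) % len(lps)]
                heapq.heappush(nxt.queue, (ts + LOOKAHEAD, hops - 1))

lps = [LP("A", [(0.5, 3)]), LP("B", [(0.7, 2)])]
while any(lp.queue for lp in lps):
    bound = min(lp.next_ts() for lp in lps) + LOOKAHEAD
    for lp in lps:                 # each call could run on its own processor
        lp.process_window(bound, lps)
print({lp.name: lp.clock for lp in lps})
```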

  16. A tomograph VMEbus parallel processing data acquisition system

    International Nuclear Information System (INIS)

    Wilkinson, N.A.; Rogers, J.G.; Atkins, M.S.

    1989-01-01

This paper describes a VME based data acquisition system suitable for the development of Positron Volume Imaging tomographs which use 3-D data for improved image resolution over slice-oriented tomographs. The data acquisition must be flexible enough to accommodate several 3-D reconstruction algorithms; hence, a software-based system is most suitable. Furthermore, because of the increased dimensions and resolution of volume imaging tomographs, the raw data event rate is greater than that of slice-oriented machines. These dual requirements are met by our data acquisition system. Flexibility is achieved through an array of processors connected over a VMEbus, operating asynchronously and in parallel. High raw data throughput is achieved using a dedicated high speed data transfer device available for the VMEbus. The device can attain a raw data rate of 2.5 million coincidence events per second for raw events which are 64 bits wide.

  17. Parallel preconditioning techniques for sparse CG solvers

    Energy Technology Data Exchange (ETDEWEB)

    Basermann, A.; Reichel, B.; Schelthoff, C. [Central Institute for Applied Mathematics, Juelich (Germany)

    1996-12-31

Conjugate gradient (CG) methods to solve sparse systems of linear equations play an important role in numerical methods for solving discretized partial differential equations. The large size and the condition of many technical or physical applications in this area result in the need for efficient parallelization and preconditioning techniques of the CG method. In particular for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy of CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iteration count of the simple diagonally scaled CG and are shown to be well suited for massively parallel machines.
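
    For reference, diagonal (Jacobi) scaling, the baseline these authors improve upon, drops into the standard preconditioned CG iteration as follows. This is a generic textbook sketch, not the authors' polynomial or incomplete Cholesky variants:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=500):
    """Conjugate gradients with diagonal (Jacobi) preconditioning."""
    d_inv = 1.0 / np.diag(A)               # the preconditioner: M^{-1} = D^{-1}
    x = np.zeros_like(b)
    r = b - A @ x
    z = d_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = d_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# small SPD test system
A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 5.]])
b = np.array([1., 2., 3.])
print(jacobi_pcg(A, b), np.linalg.solve(A, b))
```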

  18. A tomograph VMEbus parallel processing data acquisition system

    International Nuclear Information System (INIS)

    Atkins, M.S.; Wilkinson, N.A.; Rogers, J.G.

    1988-11-01

This paper describes a VME based data acquisition system suitable for the development of Positron Volume Imaging tomographs which use 3-D data for improved image resolution over slice-oriented tomographs. The data acquisition must be flexible enough to accommodate several 3-D reconstruction algorithms; hence, a software-based system is most suitable. Furthermore, because of the increased dimensions and resolution of volume imaging tomographs, the raw data event rate is greater than that of slice-oriented machines. These dual requirements are met by our data acquisition system. Flexibility is achieved through an array of processors connected over a VMEbus, operating asynchronously and in parallel. High raw data throughput is achieved using a dedicated high speed data transfer device available for the VMEbus. The device can attain a raw data rate of 2.5 million coincidence events per second for raw events which are 64 bits wide. Real-time data acquisition and pre-processing requirements can be met by about forty 20 MHz Motorola 68020/68881 processors.

  19. Parallel computing techniques for rotorcraft aerodynamics

    Science.gov (United States)

    Ekici, Kivanc

The modification of unsteady three-dimensional Navier-Stokes codes for application on massively parallel and distributed computing environments is investigated. The Euler/Navier-Stokes code TURNS (Transonic Unsteady Rotor Navier-Stokes) was chosen as a test bed because of its wide use by universities and industry. For the efficient implementation of TURNS on parallel computing systems, two algorithmic changes are developed. First, the implicit operator originally used in TURNS, Lower-Upper Symmetric Gauss-Seidel (LU-SGS), is substantially modified. Second, an inexact Newton method coupled with a Krylov subspace iterative method (Newton-Krylov method) is applied. Both techniques had been tried previously for the Euler-equations mode of the code; in this work, we extend the methods to the Navier-Stokes mode. Several new implicit operators were tried because of convergence problems of traditional operators with the high cell aspect ratio (CAR) grids needed for viscous calculations on structured grids. Promising results for both Euler and Navier-Stokes cases are presented for these operators. For the efficient application of Newton-Krylov methods to the Navier-Stokes mode of TURNS, efficient preconditioners must be used. The parallel implicit operators used in the previous step are employed as preconditioners and the results are compared. The Message Passing Interface (MPI) protocol has been used because of its portability to various parallel architectures. It should be noted that the proposed methodology is general and can be applied to several other CFD codes (e.g. OVERFLOW).
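
    The Newton-Krylov building block described here, an inexact Newton iteration whose linear solves use a preconditioned Krylov method, can be illustrated with SciPy on a toy nonlinear problem. The 1D equation and the diagonal preconditioner below are stand-ins, not the TURNS operators:

```python
import numpy as np
from scipy.optimize import newton_krylov
from scipy.sparse.linalg import LinearOperator

n = 64
h = 1.0 / (n + 1)

def residual(u):
    """Discretized 1D nonlinear problem -u'' + u^3 = 1 with u(0)=u(1)=0."""
    up = np.concatenate(([0.0], u, [0.0]))
    return -(up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2 + u**3 - 1.0

# diagonal preconditioner built from the dominant linear part (2/h^2 on the diagonal)
M = LinearOperator((n, n), matvec=lambda v: v * h**2 / 2.0)

u = newton_krylov(residual, np.zeros(n), method="lgmres", inner_M=M)
print("max |residual| at solution:", np.abs(residual(u)).max())
```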

  20. Microprocessor event analysis in parallel with Camac data acquisition

    International Nuclear Information System (INIS)

    Cords, D.; Eichler, R.; Riege, H.

    1981-01-01

The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a CAMAC system (GEC-ELLIOTT System Crate) and shares the CAMAC access with a Nord-10S computer. Interfaces have been designed and tested for execution of CAMAC cycles, communication with the Nord-10S computer and DMA transfer from CAMAC to the MIPROC-16 memory. The system is used in the JADE data-acquisition system at PETRA, where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the results of various checks are appended to the event. In case of spurious triggers or clear beam-gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (orig.)

  1. Microprocessor event analysis in parallel with CAMAC data acquisition

    CERN Document Server

    Cords, D; Riege, H

    1981-01-01

The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a CAMAC System (GEC-ELLIOTT System Crate) and shares the CAMAC access with a Nord-10S computer. Interfaces have been designed and tested for execution of CAMAC cycles, communication with the Nord-10S computer and DMA-transfer from CAMAC to the MIPROC-16 memory. The system is used in the JADE data-acquisition-system at PETRA where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the results of various checks are appended to the event. In case of spurious triggers or clear beam gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (5 refs).

  2. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    International Nuclear Information System (INIS)

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors, and transfers data via Ethernet. This technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remotely controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG sensor based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control for data acquisition rate optimization. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased by several times without affecting the system operating performance on both local data acquisition and remote process control
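
    The self-adaptive sampling idea, continuously adjusting the local acquisition rate from observed buffer occupancy so that local acquisition and remote readout stay balanced, can be sketched with a simulated buffer. All constants and the control rule below are hypothetical, not the authors' controller:

```python
import random

# Simulated plant: the local DAQ fills a buffer at rate_hz samples/s; the
# remote link drains it at a fluctuating rate. The controller adapts rate_hz
# to hold buffer occupancy near 50% (one simulated second per step).
capacity, fill = 10_000, 0
rate_hz = 2_000.0
for step in range(50):
    drain = random.uniform(1_500, 2_500)          # remote readout rate, samples/s
    fill = min(capacity, max(0, fill + rate_hz - drain))
    occupancy = fill / capacity
    # proportional self-adaptive rule: slow down when the buffer backs up
    rate_hz *= 1.0 + 0.5 * (0.5 - occupancy)
print(f"final rate {rate_hz:.0f} Hz, occupancy {occupancy:.2f}")
```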

  3. Parallel halftoning technique using dot diffusion optimization

    Science.gov (United States)

    Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara

    2017-05-01

In this paper, a novel approach to halftoning is proposed and implemented for images obtained by the dot diffusion (DD) method. The designed technique is based on an optimization of the so-called class matrix used in the DD algorithm; it consists of generating new versions of the class matrix containing no baron and near-baron entries, in order to minimize inconsistencies during the distribution of the error. The proposed class matrices have different properties, each designed for a different application: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (an AMD FX(tm)-6300 six-core processor and an Intel Core i5-4200U), using CUDA and OpenCV on a PC running Linux. Experimental results have shown that the novel framework generates good quality halftone images and inverse halftone images. The simulation results using parallel architectures have demonstrated the efficiency of the novel technique when implemented in real-time processing.
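
    For context, classic dot diffusion processes pixels in the order given by a class matrix and diffuses the quantization error only to neighbours with higher class numbers (weight 2 for orthogonal, 1 for diagonal neighbours). The sketch below uses a small hypothetical class matrix, not the optimized baron-free matrices proposed in the paper:

```python
import numpy as np

# Hypothetical 4x4 class matrix (the paper optimizes larger matrices to remove
# "baron" and "near-baron" entries; this small one is only for illustration).
CLS = np.array([[ 0,  8,  2, 10],
                [12,  4, 14,  6],
                [ 3, 11,  1,  9],
                [15,  7, 13,  5]])

# error-diffusion weights: 2 to orthogonal neighbours, 1 to diagonal ones
NEIGH = [(-1, -1, 1.0), (-1, 0, 2.0), (-1, 1, 1.0), (0, -1, 2.0),
         (0, 1, 2.0), (1, -1, 1.0), (1, 0, 2.0), (1, 1, 1.0)]

def dot_diffusion(img):
    f = img.astype(float).copy()
    out = np.zeros_like(f)
    h, w = f.shape
    ch, cw = CLS.shape
    for c in range(ch * cw):                 # visit pixels in class-number order
        ci, cj = map(int, np.argwhere(CLS == c)[0])
        for i in range(ci, h, ch):
            for j in range(cj, w, cw):
                out[i, j] = 255.0 if f[i, j] >= 128.0 else 0.0
                err = f[i, j] - out[i, j]
                # diffuse the error only to neighbours processed later
                nb = [(i + di, j + dj, wt) for di, dj, wt in NEIGH
                      if 0 <= i + di < h and 0 <= j + dj < w
                      and CLS[(i + di) % ch, (j + dj) % cw] > c]
                total = sum(wt for _, _, wt in nb)
                for ni, nj, wt in nb:
                    f[ni, nj] += err * wt / total
    return out.astype(np.uint8)

halftone = dot_diffusion(np.tile(np.linspace(0, 255, 64), (64, 1)))
```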

  4. Logical inference techniques for loop parallelization

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2012-01-01

Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S={}, where S is a set expression representing array indexes. Using... of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECT-CLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers.

  5. Transient data acquisition techniques under EDS

    International Nuclear Information System (INIS)

    Telford, S.

    1985-06-01

    This paper is the first of a series which describes the Enrichment Diagnostic System (EDS) developed for the MARS project at Lawrence Livermore National Laboratory. Although EDS was developed for use on AVLIS, the functional requirements, overall design, and specific techniques are applicable to any experimental data acquisition system involving large quantities of transient data. In particular this paper will discuss the techniques and equipment used to do the data acquisition. Included are what types of hardware are used and how that hardware (CAMAC, digital oscilloscopes) is interfaced to the HP computers. In this discussion the author will address the problems encountered and the solutions used, as well as the performance of the instrument/computer interfaces. The second topic the author will discuss is how the acquired data is associated to graphics and analysis portions of EDS through efficient real time data bases. This discussion will include how the acquired data is folded into the overall structure of EDS providing the user immediate access to raw and analyzed data. By example you will see how easily a new diagnostic can be added to the EDS structure without modifying the other parts of the system. 8 figs

  6. Logical inference techniques for loop parallelization

    KAUST Repository

    Oancea, Cosmin E.; Rauchwerger, Lawrence

    2012-01-01

    This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECTCLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.

  7. Logical inference techniques for loop parallelization

    KAUST Repository

    Oancea, Cosmin E.

    2012-01-01

This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECTCLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.
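
    The runtime half of such a hybrid scheme boils down to collecting the loop's read and write index sets and executing in parallel only when they are provably disjoint (S = Ø). A much-simplified Python stand-in for that test (no USR language or predicate factorization, all names illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def try_parallel_loop(a, read_idx, write_idx, body):
    """Run a[w] = body(a[r]) for all iterations in parallel only if the
    runtime-collected read and write index sets prove independence."""
    W, R = set(write_idx), set(read_idx)
    independent = not (W & R) and len(W) == len(write_idx)  # no R/W or W/W overlap
    if independent:
        with ThreadPoolExecutor() as ex:      # safe: disjoint memory references
            list(ex.map(lambda i: a.__setitem__(write_idx[i], body(a[read_idx[i]])),
                        range(len(write_idx))))
    else:                                     # fall back to the sequential loop
        for i in range(len(write_idx)):
            a[write_idx[i]] = body(a[read_idx[i]])
    return independent

a = list(range(10))
print(try_parallel_loop(a, read_idx=[0, 1, 2, 3], write_idx=[4, 5, 6, 7],
                        body=lambda x: x + 1))
```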

  8. Parallel pic plasma simulation through particle decomposition techniques

    International Nuclear Information System (INIS)

    Briguglio, S.; Vlad, G.; Di Martino, B.; Naples, Univ. 'Federico II'

    1998-02-01

Particle-in-cell (PIC) codes are among the major candidates to yield a satisfactory description of the detail of kinetic effects, such as the resonant wave-particle interaction, relevant in determining the transport mechanism in magnetically confined plasmas. A significant improvement of the simulation performance of such codes can be expected from parallelization, e.g., by distributing the particle population among several parallel processors. Parallelization of a hybrid magnetohydrodynamic-gyrokinetic code has been accomplished within the High Performance Fortran (HPF) framework, and tested on the IBM SP2 parallel system, using a 'particle decomposition' technique. The adopted technique requires a moderate effort in porting the code to parallel form and results in intrinsic load balancing and modest inter-processor communication. The performance tests confirm the hypothesis of high effectiveness of the strategy if targeted towards moderately parallel architectures. Optimal use of resources is also discussed with reference to a specific physics problem. [it]
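
    A minimal numpy/multiprocessing sketch of particle decomposition: the particle population is split across workers regardless of position (which is what yields the intrinsic load balancing mentioned above), each worker deposits to a private grid copy, and the private grids are summed. Nearest-grid-point weighting and all sizes are illustrative:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

NG, L = 64, 1.0                 # grid cells and domain length (illustrative)

def deposit(xs):
    # each worker deposits its own particles on a private grid (NGP weighting)
    grid = np.zeros(NG)
    idx = (xs / L * NG).astype(int) % NG
    np.add.at(grid, idx, 1.0)
    return grid

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = rng.random(1_000_000) * L
    chunks = np.array_split(particles, 4)      # split by particle, not by space
    with ProcessPoolExecutor(4) as ex:
        density = sum(ex.map(deposit, chunks)) # reduction: sum the private grids
    assert density.sum() == particles.size
```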

  9. Parallel imaging: is GRAPPA a useful acquisition tool for MR imaging intended for volumetric brain analysis?

    Directory of Open Access Journals (Sweden)

    Frank Anders

    2009-08-01

Background: The work presented here investigates parallel imaging applied to T1-weighted high-resolution imaging for use in longitudinal volumetric clinical studies involving Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI) patients, in an effort to shorten acquisition times and minimise the risk of motion artefacts caused by patient discomfort and disorientation. The principal question is, "Can parallel imaging be used to acquire images at 1.5 T of sufficient quality to allow volumetric analysis of patient brains?" Methods: Optimisation studies were performed on a young healthy volunteer, and the selected protocol (including the use of two different parallel imaging acceleration factors) was then tested on a cohort of 15 elderly volunteers including MCI and AD patients. In addition to automatic brain segmentation, hippocampus volumes were manually outlined and measured in all patients. The 15 patients were scanned on a second occasion approximately one week later using the same protocol and evaluated in the same manner, to test the repeatability of measurements made on images acquired with the GRAPPA parallel imaging technique applied to the MPRAGE sequence. Results: Intraclass correlation tests show almost perfect agreement between repeated measurements of both segmented brain parenchyma fraction and regional measurements of the hippocampi. The protocol is suitable for both global and regional volumetric measurement in dementia patients. Conclusion: In summary, these results indicate that parallel imaging can be used without detrimental effect on brain tissue segmentation and volumetric measurement, and should be considered for both clinical and research studies where longitudinal measurements of brain tissue volumes are of interest.
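
    The test-retest statistic used in this study can be reproduced with the standard two-way random-effects, absolute-agreement ICC(2,1) of Shrout and Fleiss; the volumes below are hypothetical, not the study's data:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1) for an (n subjects x k sessions) matrix of measurements."""
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(1) - grand) ** 2).sum() / (n - 1)     # subjects
    ms_c = n * ((x.mean(0) - grand) ** 2).sum() / (k - 1)     # sessions
    ss_e = ((x - x.mean(1, keepdims=True) - x.mean(0) + grand) ** 2).sum()
    ms_e = ss_e / ((n - 1) * (k - 1))                         # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# hypothetical repeated hippocampus volumes (ml), scan vs. one-week rescan
vols = np.array([[3.1, 3.0], [2.7, 2.8], [3.4, 3.4], [2.2, 2.3], [2.9, 2.9]])
print(icc_2_1(vols))
```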

  10. Parallel preprocessing in a nuclear data acquisition system

    International Nuclear Information System (INIS)

    Pichot, G.; Auriol, E.; Lemarchand, G.; Millaud, J.

    1977-01-01

The appearance of microprocessors and large memory chips has somewhat modified the spectrum of tools available to the data acquisition system designer. This is particularly true in the nuclear research field, where the data flow has been growing continuously as a consequence of the increasing capabilities of new detectors. This paper deals with the insertion, between a data acquisition system and a computer, of a preprocessing structure based on microprocessors and large-capacity high-speed memories. The results show a significant improvement in several aspects of the operation of the system, with returns paying back the investment in 18 months.

  11. DAPHNE: a parallel multiprocessor data acquisition system for nuclear physics

    International Nuclear Information System (INIS)

    Welch, L.C.

    1984-01-01

This paper describes a project to meet the data acquisition needs of a new accelerator, ATLAS, being built at Argonne National Laboratory. ATLAS is a heavy-ion linear superconducting accelerator providing beam energies up to 25 MeV/A with a relative spread in beam energy as good as 0.0001 and a time spread of less than 100 psec. Details about the hardware front end, command language, data structure, and the flow of event treatment are covered

  12. DAPHNE: a parallel multiprocessor data acquisition system for nuclear physics. [Data Acquisition by Parallel Histogramming and NEtworking]

    Energy Technology Data Exchange (ETDEWEB)

    Welch, L.C.

    1984-01-01

This paper describes a project to meet the data acquisition needs of a new accelerator, ATLAS, being built at Argonne National Laboratory. ATLAS is a heavy-ion linear superconducting accelerator providing beam energies up to 25 MeV/A with a relative spread in beam energy as good as 0.0001 and a time spread of less than 100 psec. Details about the hardware front end, command language, data structure, and the flow of event treatment are covered.

  13. Fast implementations of 3D PET reconstruction using vector and parallel programming techniques

    International Nuclear Information System (INIS)

    Guerrero, T.M.; Cherry, S.R.; Dahlbom, M.; Ricci, A.R.; Hoffman, E.J.

    1993-01-01

    Computationally intensive techniques that offer potential clinical use have arisen in nuclear medicine. Examples include iterative reconstruction, 3D PET data acquisition and reconstruction, and 3D image volume manipulation including image registration. One obstacle in achieving clinical acceptance of these techniques is the computational time required. This study focuses on methods to reduce the computation time for 3D PET reconstruction through the use of fast computer hardware, vector and parallel programming techniques, and algorithm optimization. The strengths and weaknesses of i860 microprocessor based workstation accelerator boards are investigated in implementations of 3D PET reconstruction

  14. Impacts of Vocabulary Acquisition Techniques Instruction on Students' Learning

    Science.gov (United States)

    Orawiwatnakul, Wiwat

    2011-01-01

    The objectives of this study were to determine how the selected vocabulary acquisition techniques affected the vocabulary ability of 35 students who took EN 111 and investigate their attitudes towards the techniques instruction. The research study was one-group pretest and post-test design. The instruments employed were in-class exercises…

  15. Parallel transmission techniques in magnetic resonance imaging: experimental realization, applications and perspectives

    International Nuclear Information System (INIS)

    Ullmann, P.

    2007-06-01

The primary objective of this work was the first experimental realization of parallel RF transmission for accelerating spatially selective excitation in magnetic resonance imaging. Furthermore, basic aspects regarding the performance of this technique were investigated, potential risks regarding the specific absorption rate (SAR) were considered, and feasibility studies under application-oriented conditions were undertaken as first steps towards practical utilisation of this technique. First, based on the RF electronics platform of the Bruker Avance MRI systems, the technical foundations were laid for the simultaneous transmission of individual RF waveforms on different RF channels. Another essential requirement for the realization of Parallel Excitation (PEX) was the design and construction of suitable RF transmit arrays with elements driven by separate transmit channels. To image the PEX results, two imaging methods were implemented based on a spin-echo and a gradient-echo sequence, in which a parallel spatially selective pulse was included as the excitation pulse. In the course of this work, PEX experiments were successfully performed on three different MRI systems, a 4.7 T and a 9.4 T animal system and a 3 T human scanner, using 5 different RF coil setups in total. In the last part of this work, investigations regarding possible applications of Parallel Excitation were performed. A first study comprised experiments on slice-selective B1 inhomogeneity correction using 3D-selective Parallel Excitation. The investigations were performed in a phantom as well as in a rat fixed in paraformaldehyde solution. In conjunction with these experiments, a novel method of calculating RF pulses for spatially selective excitation, based on a so-called Direct Calibration approach, was developed; it is particularly suitable for this type of experiment. In the context of these experiments it was demonstrated how to combine the advantages of parallel transmission
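
    The simplest relative of the B1-inhomogeneity correction described here is static RF shimming: choosing one complex weight per transmit channel so that the superposed field is as flat as possible over a region of interest. A least-squares sketch with random stand-in transmit maps (the work above goes further, designing fully 3D-selective parallel pulses):

```python
import numpy as np

# B1: per-pixel complex transmit sensitivities of n_ch channels (random
# stand-ins here, not measured maps); target: flat excitation over the ROI.
rng = np.random.default_rng(1)
n_pix, n_ch = 500, 8
B1 = rng.normal(size=(n_pix, n_ch)) + 1j * rng.normal(size=(n_pix, n_ch))
target = np.ones(n_pix)

w, *_ = np.linalg.lstsq(B1, target, rcond=None)   # per-channel shim weights
achieved = B1 @ w
print("residual inhomogeneity:",
      np.std(np.abs(achieved)) / np.mean(np.abs(achieved)))
```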

  16. Characterization of Harmonic Signal Acquisition with Parallel Dipole and Multipole Detectors

    Science.gov (United States)

    Park, Sung-Gun; Anderson, Gordon A.; Bruce, James E.

    2018-04-01

Fourier transform ion cyclotron resonance mass spectrometry (FTICR-MS) is a powerful instrument for the study of complex biological samples due to its high resolution and mass measurement accuracy. However, the relatively long signal acquisition periods needed to achieve high resolution can serve to limit applications of FTICR-MS. The use of multiple pairs of detector electrodes enables detection of harmonic frequencies present at integer multiples of the fundamental cyclotron frequency, and the obtained resolving power for a given acquisition period increases linearly with the order of harmonic signal. However, harmonic signal detection also increases spectral complexity and presents challenges for interpretation. In the present work, ICR cells with independent dipole and harmonic detection electrodes and preamplifiers are demonstrated. A benefit of this approach is the ability to independently acquire fundamental and multiple harmonic signals in parallel using the same ions under identical conditions, enabling direct comparison of achieved performance as parameters are varied. Spectra from harmonic signals showed generally higher resolving power than spectra acquired with fundamental signals and equal signal duration. In addition, the maximum observed signal to noise (S/N) ratio from harmonic signals exceeded that of fundamental signals by 50 to 100%. Finally, parallel detection of fundamental and harmonic signals enables deconvolution of overlapping harmonic signals since observed fundamental frequencies can be used to unambiguously calculate all possible harmonic frequencies. Thus, the present application of parallel fundamental and harmonic signal acquisition offers a general approach to improve utilization of harmonic signals to yield high-resolution spectra with decreased acquisition time.
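
    The linear gain in resolving power can be demonstrated with a toy FFT experiment: two ions whose frequencies differ by less than the spectral line width at the fundamental become separable when detected at the third harmonic, with the same acquisition time. All frequencies below are arbitrary:

```python
import numpy as np

fs, T = 200_000.0, 0.05                 # sample rate (Hz) and acquisition time (s)
t = np.arange(int(fs * T)) / fs
f1, f2 = 20_000.0, 20_018.0             # two close "cyclotron" frequencies (18 Hz apart)

def n_peaks(order):
    """Count well-separated spectral peaks when detecting the order-th harmonic."""
    s = np.cos(2 * np.pi * order * f1 * t) + np.cos(2 * np.pi * order * f2 * t)
    spec = np.abs(np.fft.rfft(s * np.hanning(len(t))))
    half = spec.max() / 2
    local_max = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]) & (spec[1:-1] > half)
    return int(local_max.sum())

# FFT bin spacing is 1/T = 20 Hz: the 18 Hz spacing is unresolved at the
# fundamental, but the 3rd harmonic triples the separation (54 Hz) while the
# line width set by the acquisition time stays the same.
print(n_peaks(1), n_peaks(3))           # expect 1 (merged) and 2 (resolved)
```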

  17. Improved parallel solution techniques for the integral transport matrix method

    Energy Technology Data Exchange (ETDEWEB)

    Zerr, R. Joseph, E-mail: rjz116@psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA (United States); Azmy, Yousry Y., E-mail: yyazmy@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Burlington Engineering Laboratories, Raleigh, NC (United States)

    2011-07-01

Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner. (author)

  18. Improved parallel solution techniques for the integral transport matrix method

    International Nuclear Information System (INIS)

    Zerr, R. Joseph; Azmy, Yousry Y.

    2011-01-01

    Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method (ITMM) operators have been designed and tested. The most straightforward improvement on the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce the work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple subdomains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight subdomains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because the subdomains decouple, yielding faster convergence. Further tests revealed that 64 subdomains per processor was the best-performing level of subdomain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver, but could be improved by an as yet undeveloped, more efficient preconditioner. (author)

  19. Modeling, realization and evaluation of a parallel architecture for the data acquisition in multidetectors

    International Nuclear Information System (INIS)

    Guirande, Ph.; Aleonard, M-M.; Dien, Q-T.; Pedroza, J-L.

    1997-01-01

    The efficiency increase in 4π multidetectors (EUROGAM, EUROBALL, DIAMANT) is achieved by an increase in granularity, hence in the event counting rate in the acquisition system. Consequently, the architecture of the readout systems, the coding, and the software must evolve. To achieve the required throughput, we have implemented a parallel architecture to check the quality of the events. The first application of this architecture was an improved data acquisition system for the DIAMANT multidetector. The DIAMANT data acquisition system is based on a set of VME cards which must manage event readout, storage on magnetic media, and histogram construction. The ensemble consists of processors distributed over a network, a workstation to control the experiment, and a display system for spectra and arrays. In such an architecture the VME bus quickly becomes a performance bottleneck, not only for data transfer but also for the coordination of the different processors. The parallel architecture used relieves the VME bus. It is based on three C40 DSPs (Digital Signal Processors) implanted on a commercial (LSI) VME card, provided with an external bus used to read the raw data from an interface card (ROCVI) between the 32-bit ECL readout bus and the real-time VME-based encoders. The tests performed revealed jamming after data exchanges between the processors when two communication lines were used. Analysis of this problem indicated the necessity of dynamic task reallocation to avoid this blocking. Intrinsic evaluation (i.e., without transfer on the VME bus) has been carried out for two parallel topologies (processor farm and tree). Simulation software permitted the generation of event packets. The rates obtained are essentially equivalent (6 MB/s) independent of topology. The farm topology was chosen because it is simpler to implement. The load evaluation reduced the rate in 'simplex' communication mode to 5.3 MB/s and

  20. Parallel transmission techniques in magnetic resonance imaging: experimental realization, applications and perspectives

    Energy Technology Data Exchange (ETDEWEB)

    Ullmann, P.

    2007-06-15

    The primary objective of this work was the first experimental realization of parallel RF transmission for accelerating spatially selective excitation in magnetic resonance imaging. Furthermore, basic aspects regarding the performance of this technique were investigated, potential risks regarding the specific absorption rate (SAR) were considered, and feasibility studies under application-oriented conditions were undertaken as first steps towards practical utilisation of the technique. First, based on the RF electronics platform of the Bruker Avance MRI systems, the technical foundations were laid for simultaneous transmission of individual RF waveforms on different RF channels. Another essential requirement for the realization of parallel excitation (PEX) was the design and construction of suitable RF transmit arrays with elements driven by separate transmit channels. To image the PEX results, two imaging methods were implemented, based on a spin-echo and a gradient-echo sequence, in which a parallel spatially selective pulse served as the excitation pulse. In the course of this work PEX experiments were successfully performed on three different MRI systems, a 4.7 T and a 9.4 T animal system and a 3 T human scanner, using five different RF coil setups in total. In the last part of this work, possible applications of parallel excitation were investigated. A first study comprised experiments on slice-selective B1 inhomogeneity correction using 3D-selective parallel excitation. The investigations were performed in a phantom as well as in a rat fixed in paraformaldehyde solution. In conjunction with these experiments, a novel method of calculating RF pulses for spatially selective excitation, based on a so-called Direct Calibration approach, was developed, which is particularly suitable for this type of experiment. In the context of these experiments it was demonstrated how to combine the advantages of parallel transmission

  1. In search of the best technique for vocabulary acquisition

    Directory of Open Access Journals (Sweden)

    Mohammad Mohseni-Far

    2008-05-01

    Report of an Act of Plagiarism (6 May 2012): We are sorry to inform readers that Mohammad Mohseni-Far, the author of 'In Search of the Best Technique for Vocabulary Acquisition' published in the EAAL yearbook (ERÜ aastaraamat), Vol. 4 (2008), pp. 121–138, has published essentially the same article TWICE more in near-identical wording and under near-identical titles; since the author deliberately altered the wording, this constitutes conscious self-plagiarism. The plagiarism was verified using Check for Plagiarism On the Web. The other versions are: 'A Cognitively-oriented Encapsulation of Strategies Utilized for Lexical Development: In search of a flexible and highly interactive curriculum', Porta Linguarum 9 (2008), 35–42; and 'Techniques and Strategies Utilized for Vocabulary Acquisition: the necessity to design a multifaceted framework with an instructionally wise equilibrium', Porta Linguarum 8 (2007), 137–152. Signed: Editors of the EAAL yearbook. *** The present study is intended to critically examine vocabulary learning/acquisition techniques in the second/foreign language context. Accordingly, the purpose of this survey is to concentrate particularly on the variables connected with lexical knowledge and to establish a fairly all-inclusive framework which comprises and expounds on the most significant strategies and relevant factors within the vocabulary acquisition context. At the outset, the study introduces four salient variables; learner, task and strategy serve as a general structure of inquiry (Flavell's cognitive model, 1992). Besides, the variable of context

  2. Evaluation of Parallel and Fan-Beam Data Acquisition Geometries and Strategies for Myocardial SPECT Imaging

    Science.gov (United States)

    Qi, Yujin; Tsui, B. M. W.; Gilland, K. L.; Frey, E. C.; Gullberg, G. T.

    2004-06-01

    This study evaluates myocardial SPECT images obtained from parallel-hole (PH) and fan-beam (FB) collimator geometries using both circular-orbit (CO) and noncircular-orbit (NCO) acquisitions. A newly developed 4-D NURBS-based cardiac-torso (NCAT) phantom was used to simulate 99mTc-sestamibi uptake in the human torso with myocardial defects in the left ventricular (LV) wall. Two phantoms were generated to simulate patients with thick and thin body builds. Projection data including the effects of attenuation, collimator-detector response, and scatter were generated using SIMSET Monte Carlo simulations. A large number of photon histories were generated such that the projection data were close to noise-free. Poisson noise fluctuations were then added to simulate the count densities found in clinical data. Noise-free and noisy projection data were reconstructed using the iterative OS-EM reconstruction algorithm with attenuation compensation. The reconstructed images from noisy projection data show that the noise levels are lower for the FB than for the PH collimator, due to the increase in detected counts. The NCO acquisition method provides slightly better resolution and a small improvement in defect contrast compared to the CO acquisition method in noise-free reconstructed images. Despite lower projection counts, the NCO shows the same noise level as the CO in the attenuation-corrected reconstructed images. The results from the channelized Hotelling observer (CHO) study show that the FB collimator is superior to the PH collimator in myocardial defect detection, but the NCO shows no statistically significant difference from the CO for either the PH or the FB collimator. In conclusion, our results indicate that data acquisition using NCO makes a very small improvement in resolution over CO for myocardial SPECT imaging. This small improvement does not make a significant difference in myocardial defect detection. However, an FB collimator provides better defect detection than a
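    The noise model described above can be reproduced in a few lines. This is a minimal sketch, assuming the near-noise-free projections are scaled to a target total count before Poisson sampling; the array shapes and count level are illustrative, not the study's:

        import numpy as np

        def add_clinical_noise(noise_free_proj, total_counts, rng=None):
            """Scale near-noise-free Monte Carlo projections to a clinical
            count density, then draw Poisson fluctuations around them."""
            rng = np.random.default_rng() if rng is None else rng
            scale = total_counts / noise_free_proj.sum()
            return rng.poisson(noise_free_proj * scale).astype(np.float64)

        # Hypothetical usage: 64 views of 64x64 bins, ~8e6 total detected counts
        proj = np.random.rand(64, 64, 64) + 1.0   # stand-in for SIMSET output
        noisy = add_clinical_noise(proj, total_counts=8_000_000)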

  3. Run control techniques for the Fermilab DART data acquisition system

    International Nuclear Information System (INIS)

    Oleynik, G.; Engelfried, J.; Mengel, L.; Moore, C.; Pordes, R.; Udumula, L.; Votava, M.; Drunen, E. van; Zioulas, G.

    1996-01-01

    DART is the high speed, Unix based data acquisition system being developed by the Fermilab Computing Division in collaboration with eight High Energy Physics experiments. This paper describes DART run-control, which implements flexible, distributed, extensible and portable paradigms for the control and monitoring of data acquisition systems. We discuss the unique and interesting aspects of the run-control - why we chose the concepts we did, the benefits we have seen from the choices we made, as well as our experiences in deploying and supporting it for experiments during their commissioning and sub-system testing phases. We emphasize the software and techniques we believe are extensible to future use, and potential future modifications and extensions for those we feel are not. (author)

  4. Run control techniques for the Fermilab DART data acquisition system

    International Nuclear Information System (INIS)

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1995-10-01

    DART is the high speed, Unix based data acquisition system being developed by the Fermilab Computing Division in collaboration with eight High Energy Physics Experiments. This paper describes DART run-control which implements flexible, distributed, extensible and portable paradigms for the control and monitoring of data acquisition systems. We discuss the unique and interesting aspects of the run-control - why we chose the concepts we did, the benefits we have seen from the choices we made, as well as our experiences in deploying and supporting it for experiments during their commissioning and sub-system testing phases. We emphasize the software and techniques we believe are extensible to future use, and potential future modifications and extensions for those we feel are not

  5. The design and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    International Nuclear Information System (INIS)

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1987-05-01

    The ever increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of ''events,'' is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator, and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, CPU power equivalent to several VAXs is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 11/750 on which an extensive user-friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed.

  6. The design, creation, and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    International Nuclear Information System (INIS)

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1986-01-01

    The ever increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of ''events'', is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator, and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, multi-VAX CPU power is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 750 on which an extensive user-friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed. 5 refs., 2 figs

  7. A Linguistic Technique for Marking and Analyzing Syntactic Parallelism.

    Science.gov (United States)

    Sackler, Jessie Brome

    Sentences in rhetoric texts were used in this study to determine a way in which rhetorical syntactic parallelism can be analyzed. A tagmemic analysis determined tagmas which were parallel, identical, or similar to one another. These were distinguished from tagmas which were identical because of the syntactic constraints of the language…

  8. Detector techniques and data acquisition for LHC experiments

    CERN Document Server

    Cittolin, Sergio; CERN, Geneva

    1996-01-01

    An overview of the technologies for LHC tracking detectors, particle identification and calorimeters will be given. In addition, the requirements of the front-end readout electronics for each type of detector will be addressed. The latest results from the R&D studies in each of the technologies will be presented. The data handling techniques needed to read out the LHC detectors and the multi-level trigger systems used to select the events of interest will be described. An overview of the LHC experiments' data acquisition architectures and their current state of development will be presented.

  9. Techniques applied in design optimization of parallel manipulators

    CSIR Research Space (South Africa)

    Modungwa, D

    2011-11-01

    [Abstract not indexed; the record text consists of reference-list fragments from the full text, including: '...the desired dexterous workspace', Robot. Comput.-Integrated Manuf., vol. 23, pp. 38-46, 2007; A.P. Murray, F. Pierrot, P. Dauchez and J.M. McCarthy, 'A planar quaternion approach to the kinematic synthesis of a parallel manipulator', Robotica; '...design of a three translational DoFs parallel manipulator', Robotica, vol. 24, pp. 239, 2005; J. Angeles, 'The robust design of parallel manipulators', 1st Int. Colloquium, Collaborative Research Centre 562, 2002; S. Bhattacharya, H...]

  10. High-energy physics software parallelization using database techniques

    International Nuclear Information System (INIS)

    Argante, E.; Van der Stok, P.D.V.; Willers, I.

    1997-01-01

    A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is largely transparent to the programmer, resulting in a higher level of abstraction than native message-passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with that of native PVM and MPI. (orig.)

  11. Automatic Parallelization An Overview of Fundamental Compiler Techniques

    CERN Document Server

    Midkiff, Samuel P

    2012-01-01

    Compiling for parallelism is a longstanding topic of compiler research. This book describes the fundamental principles of compiling "regular" numerical programs for parallelism. We begin with an explanation of analyses that allow a compiler to understand the interaction of data reads and writes in different statements and loop iterations during program execution. These analyses include dependence analysis, use-def analysis and pointer analysis. Next, we describe how the results of these analyses are used to enable transformations that make loops more amenable to parallelization, and
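    A minimal example of the distinction these dependence analyses draw is sketched below (Python; illustrative only): the first loop carries a dependence across iterations and cannot be parallelized as written, while the second has independent iterations and collapses to a single data-parallel statement:

        import numpy as np

        n = 10_000
        a = np.arange(n, dtype=np.float64)
        b = np.ones(n)

        # Loop-carried dependence: iteration i reads a[i-1], which iteration
        # i-1 just wrote, so the iterations cannot safely run in parallel as
        # written; this recurrence is what dependence analysis detects.
        for i in range(1, n):
            a[i] = a[i - 1] + b[i]

        # Independent iterations: each i touches only its own elements, so a
        # parallelizing compiler may distribute them; the loop is equivalent
        # to one data-parallel (vectorized) statement.
        c = a + 2.0 * b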

  12. Single breath-hold real-time cine MR imaging: improved temporal resolution using generalized autocalibrating partially parallel acquisition (GRAPPA) algorithm

    International Nuclear Information System (INIS)

    Wintersperger, Bernd J.; Nikolaou, Konstantin; Dietrich, Olaf; Reiser, Maximilian F.; Schoenberg, Stefan O.; Rieber, Johannes; Nittka, Matthias

    2003-01-01

    The purpose of this study was to test parallel imaging techniques for improving temporal resolution in multislice single breath-hold real-time cine steady-state free precession (SSFP) imaging, in comparison with standard segmented single-slice SSFP techniques. Eighteen subjects were examined on a 1.5-T scanner using a multislice real-time cine SSFP technique with the GRAPPA algorithm. Global left ventricular parameters (EDV, ESV, SV, EF) were evaluated and the results compared with a standard segmented single-slice SSFP technique. Results for EDV (r=0.93), ESV (r=0.99), SV (r=0.83), and EF (r=0.99) of real-time multislice SSFP imaging showed a high correlation with the results of segmented SSFP acquisitions. Systematic differences between the two techniques were statistically non-significant. Single breath-hold multislice techniques using GRAPPA allow for improved temporal resolution and accurate assessment of global left ventricular functional parameters. (orig.)
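    The global parameters compared above follow from two standard definitions, SV = EDV − ESV and EF = SV/EDV; a one-function sketch with illustrative volumes:

        def lv_parameters(edv_ml, esv_ml):
            """Stroke volume SV = EDV - ESV and ejection fraction
            EF = SV / EDV (standard definitions)."""
            sv = edv_ml - esv_ml
            ef = 100.0 * sv / edv_ml
            return sv, ef

        sv, ef = lv_parameters(edv_ml=120.0, esv_ml=50.0)
        print(f"SV = {sv:.0f} ml, EF = {ef:.1f} %")   # SV = 70 ml, EF = 58.3 %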

  13. Applying of USB interface technique in nuclear spectrum acquisition system

    International Nuclear Information System (INIS)

    Zhou Jianbin; Huang Jinhua

    2004-01-01

    This paper introduces the application of USB interface techniques in the construction of a nuclear spectrum acquisition system connected through a PC's USB port. The authors chose the USB100 module and the W77E58 microcontroller for the key work. The USB interface technique is easy to apply when the USB100 module is used: the module can be treated as a common I/O component by the microcontroller, and as a communication (COM) interface when connected to the PC's USB port. The PC software is also easy to modify for the new system, so one can migrate smoothly from the ISA and RS232 buses to the USB bus. (authors)

  14. Acquisition and visualization techniques for narrow spectral color imaging.

    Science.gov (United States)

    Neumann, László; García, Rafael; Basa, János; Hegedüs, Ramón

    2013-06-01

    This paper introduces a new approach in narrow-band imaging (NBI). Existing NBI techniques generate images by selecting discrete bands over the full visible spectrum or an even wider spectral range. In contrast, here we perform the sampling with filters covering a tight spectral window. This image acquisition method, named narrow spectral imaging, can be particularly useful when optical information is only available within a narrow spectral window, such as in the case of deep-water transmittance, which constitutes the principal motivation of this work. In this study we demonstrate the potential of the proposed photographic technique on non-underwater scenes recorded under controlled conditions. To this end, three multilayer narrow bandpass filters were employed, which transmit at the bluish wavelengths of 440, 456, and 470 nm, respectively. Since the differences among images captured in such a narrow spectral window can be extremely small, both image acquisition and visualization require a novel approach. First, high-bit-depth images were acquired with the multilayer narrow-band filters either placed in front of the illumination or mounted on the camera lens. Second, a color-mapping method is proposed, by which the input data can be transformed onto the entire display color gamut with a continuous and perceptually nearly uniform mapping, while ensuring optimally high information content for human perception.

  15. High temporal resolution magnetic resonance imaging: development of a parallel three dimensional acquisition method for functional neuroimaging

    International Nuclear Information System (INIS)

    Rabrait, C.

    2007-11-01

    Echo planar imaging is widely used for data acquisition in functional neuroimaging. This sequence allows the acquisition of a set of about 30 slices covering the whole brain, at a spatial resolution of 2 to 4 mm and a temporal resolution of 1 to 2 s. It is thus well adapted to the mapping of activated brain areas but does not allow precise study of brain dynamics. Moreover, temporal interpolation is needed to correct for inter-slice delays, and 2-dimensional acquisition is subject to vascular inflow artifacts. To improve the estimation of the hemodynamic response functions associated with activation, this thesis aimed at developing a 3-dimensional, high temporal resolution acquisition method. To do so, echo volume imaging (EVI) was combined with reduced field-of-view acquisition and parallel imaging. EVI allows the acquisition of a whole volume in Fourier space following a single excitation, but it requires very long echo trains. Parallel imaging and field-of-view reduction are used to shorten the echo trains by a factor of 4, which allows the acquisition of a 3-dimensional brain volume with limited susceptibility-induced distortions and signal losses in 200 ms. All imaging parameters were optimized to reduce echo train durations and maximize SNR, so that cerebral activation can be detected with a high level of confidence. Robust detection of brain activation was demonstrated with both visual and auditory paradigms. High temporal resolution hemodynamic response functions could be estimated through selective averaging of the responses to the different trials of the stimulation. To further improve SNR, the matrix inversions required in parallel reconstruction were regularized, and the impact of the level of regularization on activation detection was investigated. Eventually, potential applications of parallel EVI, such as the study of non-stationary effects in the BOLD response

  16. Decomposition based parallel processing technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2000-01-01

    In practical design studies, most designers solve multidisciplinary problems with a complex design structure. These multidisciplinary problems have hundreds of analyses and thousands of variables. The sequence of processes used to solve these problems affects the speed of the total design cycle. Thus it is very important for the designer to reorder the original design processes to minimize total cost and time. This is accomplished by decomposing the large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems, using a genetic algorithm to raise design efficiency, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology.
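    A toy version of such GA-based decomposition might look like the following sketch (Python; the coupling matrix, GA settings, and cost function are hypothetical stand-ins for the paper's formulation). It searches for an assignment of analyses to subsystems that minimizes cross-subsystem coupling, which is what allows the subsystems to be processed in parallel:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical coupling matrix: c[i, j] = strength of the data flow
        # between analyses i and j (symmetric, zero diagonal).
        n_analyses, n_subsystems = 12, 3
        c = rng.integers(0, 5, size=(n_analyses, n_analyses))
        c = np.triu(c, 1) + np.triu(c, 1).T

        def cut_cost(assign):
            # Coupling that crosses subsystem boundaries: minimizing it lets
            # the subsystems run in parallel with little communication.
            cross = assign[:, None] != assign[None, :]
            return int((c * cross).sum() // 2)

        def evolve(pop_size=60, generations=200, p_mut=0.1):
            pop = rng.integers(0, n_subsystems, size=(pop_size, n_analyses))
            for _ in range(generations):
                pop = pop[np.argsort([cut_cost(ind) for ind in pop])]
                elite = pop[: pop_size // 2]                          # selection
                pa = elite[rng.integers(0, len(elite), pop_size - len(elite))]
                pb = elite[rng.integers(0, len(elite), pop_size - len(elite))]
                child = np.where(rng.random(pa.shape) < 0.5, pa, pb)  # crossover
                mut = rng.random(child.shape) < p_mut                 # mutation
                child[mut] = rng.integers(0, n_subsystems, mut.sum())
                pop = np.vstack([elite, child])
            best = min(pop, key=cut_cost)
            return best, cut_cost(best)

        best, cost = evolve()
        print("subsystem of each analysis:", best, "| cross coupling:", cost)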

  17. Using Motivational Interviewing Techniques to Address Parallel Process in Supervision

    Science.gov (United States)

    Giordano, Amanda; Clarke, Philip; Borders, L. DiAnne

    2013-01-01

    Supervision offers a distinct opportunity to experience the interconnection of counselor-client and counselor-supervisor interactions. One product of this network of interactions is parallel process, a phenomenon by which counselors unconsciously identify with their clients and subsequently present to their supervisors in a similar fashion…

  18. Parallel processing based decomposition technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2001-01-01

    In practical design studies, most designers solve multidisciplinary problems with large and complex design systems. These multidisciplinary problems have hundreds of analyses and thousands of variables. The sequence of processes used to solve these problems affects the speed of the total design cycle. Thus it is very important for the designer to reorder the original design processes to minimize the total computational cost. This is accomplished by decomposing the large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems, using a genetic algorithm to raise design efficiency, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology.

  19. VALU, AVX and GPU acceleration techniques for parallel FDTD methods

    CERN Document Server

    Yu, Wenhua

    2013-01-01

    This book introduces general hardware acceleration techniques that can significantly speed up FDTD simulations and their applications to engineering problems without requiring any additional hardware devices. Accelerating complex problems in this way can save both time and money, and once learned these techniques can be used repeatedly.
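    The essence of vector-unit (VALU/AVX-style) acceleration is replacing a scalar loop over cells with whole-array operations. A minimal 1D FDTD sketch in that spirit (NumPy's array arithmetic standing in for SIMD; grid size, Courant number, and source are illustrative):

        import numpy as np

        nz, nt = 2000, 1000
        c = 0.5                      # Courant number (stable in 1D for c <= 1)
        ez = np.zeros(nz)            # E field at integer grid points
        hy = np.zeros(nz - 1)        # H field between E nodes

        for t in range(nt):
            # Whole-array updates: every cell advances in one vector operation,
            # which is exactly what VALU/AVX units do with adjacent samples.
            hy += c * (ez[1:] - ez[:-1])
            ez[1:-1] += c * (hy[1:] - hy[:-1])
            ez[nz // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source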

  20. Effects of various event building techniques on data acquisition system architectures

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.

    1990-04-01

    The preliminary specifications for various new detectors throughout the world, including those at the Superconducting Super Collider (SSC), already make it clear that existing event building techniques will be inadequate for the high trigger and data rates anticipated for these detectors. In the world of high-energy physics many approaches have been taken to solving the problem of reading out data from a whole detector and presenting a complete event to the physicist, while simultaneously keeping deadtime to a minimum. This paper includes a review of multiprocessor and telecommunications interconnection networks and how these networks relate to event building in general, illustrating the advantages of the various approaches. It presents a more detailed study of recent research into new event building techniques which incorporate much greater parallelism to better accommodate high data rates. The future in areas such as front-end electronics architectures, high-speed data links, event building and online processor arrays is also examined. Finally, details of a scalable parallel data acquisition system architecture being developed at Fermilab are given. 35 refs., 31 figs., 1 tab

  1. A Note on Using Partitioning Techniques for Solving Unconstrained Optimization Problems on Parallel Systems

    Directory of Open Access Journals (Sweden)

    Mehiddin Al-Baali

    2015-12-01

    We deal with the design of parallel algorithms that use variable partitioning techniques to solve nonlinear optimization problems. We propose an iterative solution method that is very efficient for separable functions, our scope being to discuss its performance for general functions. Experimental results on an illustrative example suggested some useful modifications that, even though they improve the efficiency of our parallel method, leave some questions open for further investigation.
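    For the separable case mentioned above, f(x) = Σ_i f_i(x_i), the variable blocks can be minimized independently, which is what makes the partitioned method parallel. A minimal sketch (Python; the per-block objective is a hypothetical stand-in):

        import numpy as np
        from multiprocessing import Pool
        from scipy.optimize import minimize

        def minimize_block(args):
            """Minimize one block f_i(x_i) on its own; because f is separable,
            the blocks do not interact and may run on separate processors."""
            i, x0 = args
            fi = lambda x: (x[0] - i) ** 2 + 0.1 * x[0] ** 4   # hypothetical f_i
            return minimize(fi, x0).x

        if __name__ == "__main__":
            blocks = [(i, np.array([0.0])) for i in range(8)]
            with Pool() as pool:
                solution = np.concatenate(pool.map(minimize_block, blocks))
            print(solution)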

  2. Reliability of contemporary data-acquisition techniques for LEED analysis

    International Nuclear Information System (INIS)

    Noonan, J.R.; Davis, H.L.

    1980-10-01

    It is becoming clear that one of the principal limitations in LEED structure analysis is the quality of the experimental I-V profiles. This limitation is discussed, and data acquisition procedures are described which, for simple systems, seem to enhance the quality of agreement between the results of theoretical model calculations and experimental LEED spectra. By employing such procedures to obtain data from Cu(100), excellent agreement between computed and measured profiles has been achieved. 7 figures

  3. Real-time data acquisition and parallel data processing solution for TJ-II Bolometer arrays diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, E. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain)]. E-mail: eduardo.barrera@upm.es; Ruiz, M. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Lopez, S. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Machon, D. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain); Ochando, M. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain)

    2006-07-15

    Maps of local plasma emissivity of TJ-II plasmas are determined using three-array cameras of silicon photodiodes (AXUV type from IRD). They are assigned to the top and side ports of the same sector of the vacuum vessel. Each array consists of 20 unfiltered detectors. The signals from each of these detectors are the inputs to an iterative tomographic reconstruction algorithm. Currently, these signals are acquired by a standard PXI system at approximately 50 kS/s, with 12 bits of resolution, and are stored for off-line processing. A 0.5 s discharge generates 3 Mbytes of raw data. The algorithm's load exceeds the CPU capacity of the PXI system's controller in continuous mode, making it unfeasible to process the samples in parallel with their acquisition in a standard PXI system. A new architecture model has been developed that makes it possible to add one or more processing cards to a standard PXI system. With this model, it is possible to define how to distribute, in real time, the data from all acquired signals among the processing cards and the PXI controller. This way, by distributing the data processing among the system controller and two processing cards, the processing can be done in parallel with the acquisition. Hence, this system configuration would be able to measure even in long-pulse devices.
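    The quoted 3 Mbytes per discharge is consistent with the stated acquisition parameters if each 12-bit sample occupies a 2-byte word (an assumption, not stated in the record); a quick check:

        # Back-of-envelope check of the raw data volume per discharge.
        channels = 3 * 20                      # three 20-detector arrays
        samples_per_channel = 50_000 * 0.5     # ~50 kS/s over a 0.5 s discharge
        bytes_per_sample = 2                   # 12-bit sample stored in a 2-byte word
        raw_bytes = channels * samples_per_channel * bytes_per_sample
        print(raw_bytes / 1e6, "MB")           # 3.0 MB, matching the abstract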

  4. Parallel image-acquisition in continuous-wave electron paramagnetic resonance imaging with a surface coil array: Proof-of-concept experiments

    Science.gov (United States)

    Enomoto, Ayano; Hirata, Hiroshi

    2014-02-01

    This article describes a feasibility study of parallel image-acquisition using a two-channel surface coil array in continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Parallel EPR imaging was performed by multiplexing of EPR detection in the frequency domain. The parallel acquisition system consists of two surface coil resonators and radiofrequency (RF) bridges for EPR detection. To demonstrate the feasibility of this method of parallel image-acquisition with a surface coil array, three-dimensional EPR imaging was carried out using a tube phantom. Technical issues in the multiplexing method of EPR detection were also clarified. We found that degradation in the signal-to-noise ratio due to the interference of RF carriers is a key problem to be solved.

  5. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Directory of Open Access Journals (Sweden)

    Yaser Afshar

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in the computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  6. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in the computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  7. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in the computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
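    A schematic of the decomposition strategy shared by these three records, with simple thresholding standing in for Discrete Region Competition and local processes standing in for networked computers (a real implementation also needs halo exchange so that objects crossing tile borders are resolved consistently):

        import numpy as np
        from multiprocessing import Pool

        def segment_tile(args):
            """Stand-in per-tile segmentation (thresholding here; the paper
            uses Discrete Region Competition). Each worker sees only its
            sub-image, so the whole image never has to fit in one memory."""
            tile, threshold = args
            return (tile > threshold).astype(np.uint8)

        def distributed_segment(image, n_tiles=4, threshold=0.5):
            tiles = np.array_split(image, n_tiles, axis=0)   # decompose input
            with Pool(processes=n_tiles) as pool:
                labels = pool.map(segment_tile, [(t, threshold) for t in tiles])
            return np.concatenate(labels, axis=0)            # reassemble result

        if __name__ == "__main__":
            img = np.random.rand(1024, 1024)
            seg = distributed_segment(img)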

  8. Improvement of the repeatability of parallel transmission at 7T using interleaved acquisition in the calibration scan.

    Science.gov (United States)

    Kameda, Hiroyuki; Kudo, Kohsuke; Matsuda, Tsuyoshi; Harada, Taisuke; Iwadate, Yuji; Uwano, Ikuko; Yamashita, Fumio; Yoshioka, Kunihiro; Sasaki, Makoto; Shirato, Hiroki

    2017-12-04

    Respiration-induced phase shifts affect B0/B1+ mapping repeatability in parallel transmission (pTx) calibration for 7T brain MRI; this is improved by breath-holding (BH), but BH cannot be applied during long scans. To examine whether interleaved acquisition during the calibration scan could improve pTx repeatability and image homogeneity. Prospective. Nine healthy subjects. 7T MRI with a two-channel RF transmission system was used. Calibration scanning for B0/B1+ mapping was performed under sequential acquisition/free-breathing (Seq-FB), Seq-BH, and interleaved acquisition/FB (Int-FB) conditions. The B0 map was calculated from two echo times, and the B1+ map was obtained using the Bloch-Siegert method. Actual flip-angle imaging (AFI) and gradient-echo (GRE) imaging were performed using pTx and quadrature transmission (qTx). All scans were acquired in five sessions. Repeatability was evaluated using the intersession standard deviation (SD) or coefficient of variation (CV), and in-plane homogeneity was evaluated using the in-plane CV. A paired t-test with Bonferroni correction for multiple comparisons was used. The intersession CVs/SDs of the B0/B1+ maps were significantly smaller in Int-FB than in Seq-FB (Bonferroni-corrected P < 0.05). The intersession CVs of the AFI and GRE images were significantly smaller in Int-FB, Seq-BH, and qTx than in Seq-FB (Bonferroni-corrected P < 0.05). The in-plane CVs of the AFI and GRE images in Seq-FB, Int-FB, and Seq-BH were significantly smaller than in qTx (Bonferroni-corrected P < 0.01 for all). Using interleaved acquisition during calibration scans of pTx for 7T brain MRI improved the repeatability of B0/B1+ mapping, AFI, and GRE images without BH. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  9. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    Science.gov (United States)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for the kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed, which uses a two-stage staggered central difference scheme to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained with an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, a parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out: the DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which the Schur complement form was derived. By fully exploiting sparse-matrix structural analysis techniques, a parallel preconditioned conjugate gradient algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.

  10. Improvements in image quality with pseudo-parallel imaging in the phase-scrambling fourier transform technique

    International Nuclear Information System (INIS)

    Ito, Satoshi; Kawawa, Yasuhiro; Yamada, Yoshifumi

    2010-01-01

    The signal obtained in the phase-scrambling Fourier transform (PSFT) imaging technique can be transformed into a signal described by the Fresnel transform of the object, in which the amplitude of the PSFT presents a kind of blurred image of the object. The signal can therefore be considered to exist in the object domain as well as in the Fourier domain of the object. This notable feature makes it possible to assign weights to the reconstructed images by applying a weighting function to the PSFT signal after data acquisition; as a result, pseudo-parallel image reconstruction using these aliased image data with different weights on the images is feasible. In this study, the improvements in image quality achieved with such pseudo-parallel imaging were examined and demonstrated. The weighting function of the PSFT signal that provides a given weight on the image is estimated using the obtained image data and is iteratively updated after sensitivity encoding (SENSE)-based image reconstruction. Simulation studies showed that reconstruction errors were dramatically reduced and that the spatial resolution was also improved in almost all of the image space. The proposed method was applied to signals synthesized from MR image data with phase variations to verify its effectiveness. It was found that the image quality was improved and that images almost entirely free of aliasing artifacts could be obtained. (author)

  11. Parallel search engine optimisation and pay-per-click campaigns: A comparison of cost per acquisition

    Directory of Open Access Journals (Sweden)

    Wouter T. Kritzinger

    2017-07-01

    Background: It is imperative that commercial websites rank highly in search engine result pages, because these provide the main entry point for paying customers. There are two main methods to achieve high rankings: search engine optimisation (SEO) and pay-per-click (PPC) systems. Both require a financial investment – SEO mainly at the beginning, and PPC spread over time in regular amounts. If marketing budgets are applied in the wrong area, this could lead to losses and possibly financial ruin. Objectives: The objective of this research was to investigate, using three real-world case studies, the actual expenditure on and income from both SEO and PPC systems. These figures were then compared; specifically, the cost per acquisition (CPA) was used to decide which system yielded the best results. Methodology: Three diverse websites were chosen, and analytics data for all three were compared over a 3-month period. Calculations were performed to reduce the figures to single ratios, to make comparisons between them possible. Results: Some of the resultant ratios varied widely between websites. However, the CPA was shown to be, on average, 52.1 times lower for SEO than for PPC systems. Conclusion: It was concluded that SEO should be the marketing system of preference for e-commerce-based websites. However, there are cases where PPC would yield better results – when instant traffic is required, and when a large initial expenditure is not possible.
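    The comparison metric here is simple: CPA is the total spend divided by the number of acquisitions that spend produced. A sketch with made-up figures (not the study's data) shows how the per-channel ratio is formed:

        def cost_per_acquisition(total_spend, conversions):
            """CPA = marketing spend / number of paying customers acquired."""
            return total_spend / conversions

        # Hypothetical 3-month figures for one site (illustrative only):
        seo_cpa = cost_per_acquisition(total_spend=9_000, conversions=1_200)
        ppc_cpa = cost_per_acquisition(total_spend=15_000, conversions=40)
        print(f"SEO CPA = {seo_cpa:.2f}, PPC CPA = {ppc_cpa:.2f}, "
              f"ratio = {ppc_cpa / seo_cpa:.1f}x")   # here: 50.0x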

  12. Image acquisition system using on sensor compressed sampling technique

    Science.gov (United States)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system, and a simplified image-sensor pixel design to be used in that system, so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings, as it not only cuts the raw data rate but also reduces the transistor count per pixel; decreases pixel size; increases fill factor; simplifies the analog-to-digital converter, JPEG encoder, and JPEG decoder designs; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing power consumption and design complexity. We show that it has the potential to reduce power consumption by about 23% to 65%.
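    A toy end-to-end illustration of the CS principle the sensor exploits, recovering a sparse signal from far fewer random measurements than samples; orthogonal matching pursuit is used here as a generic reconstruction, not the paper's decoder, and all dimensions are illustrative:

        import numpy as np

        rng = np.random.default_rng(1)

        n, m, k = 256, 64, 5                 # signal length, measurements, sparsity
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
        y = phi @ x                                   # compressed readout, m << n

        def omp(phi, y, k):
            """Orthogonal matching pursuit: greedily pick the column most
            correlated with the residual, then re-fit on the chosen support."""
            support, residual = [], y.copy()
            for _ in range(k):
                support.append(int(np.argmax(np.abs(phi.T @ residual))))
                coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
                residual = y - phi[:, support] @ coef
            x_hat = np.zeros(phi.shape[1])
            x_hat[support] = coef
            return x_hat

        print("max reconstruction error:", np.abs(omp(phi, y, k) - x).max())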

  13. Evaluation of an innovative radiographic technique - parallel profile radiography - to determine the dimensions of dentogingival unit

    Directory of Open Access Journals (Sweden)

    Sushama R Galgali

    2011-01-01

    Background: Maintenance of gingival health is a key factor in the longevity of teeth as well as of restorations. The physiologic dentogingival unit (DGU), composed of the epithelial and connective tissue attachments of the gingiva, functions as a barrier against microbial entry into the periodontium. Invasion of this space triggers inflammation and causes periodontal destruction. Despite the clinical relevance of determining the length and width of the DGU, there is no standardized technique. The length of the DGU can be determined either from histologic preparations or by transgingival probing. Although the width can also be assessed by transgingival probing or with an ultrasound device, these methods are either invasive or expensive. Aims: This study sought to evaluate an innovative radiographic exploration technique - parallel profile radiography - for measuring the dimensions of the DGU on the labial surfaces of anterior teeth. Materials and Methods: Two radiographs were made using the long-cone paralleling technique in ten individuals, one in frontal projection, while the second was a parallel profile radiograph obtained from a lateral position. The length and width of the DGU were measured using computer software. Transgingival (trans-sulcular) probing was performed for the same patients and the length of the DGU was measured. The values obtained by the two methods were compared, and the Pearson product-moment correlation coefficient was calculated to examine the agreement between the values obtained by PPRx and transgingival probing. Results: The mean biologic width by the parallel profile radiography (PPRx) technique was 1.72 mm (range 0.94-2.11 mm), while the mean thickness of the gingiva was 1.38 mm (range 0.92-1.77 mm). The mean biologic width by transgingival probing was 1.6 mm (range 0.8-2.2 mm). The Pearson correlation coefficient (r) for the above values was 0.914; thus, a high degree of agreement exists between the PPRx and TGP techniques.

  14. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume is aimed at a wide range of readers and researchers in the area of Big Data, presenting the recent advances in the field of Big Data Analysis as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data Analysis and to recent techniques and environments for Big Data Analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting parallel, grid, and cloud computing environments.

  15. An overview of data acquisition, signal coding and data analysis techniques for MST radars

    Science.gov (United States)

    Rastogi, P. K.

    1986-01-01

    An overview is given of the data acquisition, signal processing, and data analysis techniques that are currently in use with high-power MST/ST (mesosphere-stratosphere-troposphere / stratosphere-troposphere) radars. This review supplements the works of Rastogi (1983) and Farley (1984) presented at previous MAP workshops. A general description is given of data acquisition and signal processing operations, which are characterized on the basis of their disparate time scales. Signal coding, a brief description of frequently used codes, and their limitations are then discussed. Finally, several aspects of statistical data processing, such as signal statistics, power spectrum and autocovariance analysis, and outlier removal techniques, are discussed.

  16. A proposed scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC detectors

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; Vanberg, R.

    1990-01-01

    A new era of high-energy physics research is beginning requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, both orders of magnitude higher data rates from the detector and online processing power, well beyond the capabilities of current high energy physics data acquisition systems, are required. This paper describes a proposed new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of Gigabytes per second from the detector and into an array of online processors (i.e., processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the proposed Scalable Parallel Open Architecture data acquisition system are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported and high-level language programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build a prototype of the proposed data acquisition system architecture is given in the paper. The major component of the system, a self-routing parallel event builder, is described in detail

  17. The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging

    Science.gov (United States)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-06-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and have sensitivity to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which the elements are fired independently. In parallel focusing, a high-intensity ultrasonic beam is formed in the specimen at the focal point. In sequential focusing, only low-intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, the parallel and sequential images are expected to be identical. Here we measure the difference between these images and use it to characterise the nonlinearity of small closed fatigue cracks. In particular, we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified later in fatigue life.

  18. 10-channel fiber array fabrication technique for parallel optical coherence tomography system

    Science.gov (United States)

    Arauz, Lina J.; Luo, Yuan; Castillo, Jose E.; Kostuk, Raymond K.; Barton, Jennifer

    2007-02-01

    Optical Coherence Tomography (OCT) shows great promise for low intrusive biomedical imaging applications. A parallel OCT system is a novel technique that replaces mechanical transverse scanning with electronic scanning. This will reduce the time required to acquire image data. In this system an array of small diameter fibers is required to obtain an image in the transverse direction. Each fiber in the array is configured in an interferometer and is used to image one pixel in the transverse direction. In this paper we describe a technique to package 15μm diameter fibers on a siliconsilica substrate to be used in a 2mm endoscopic probe tip. Single mode fibers are etched to reduce the cladding diameter from 125μm to 15μm. Etched fibers are placed into a 4mm by 150μm trench in a silicon-silica substrate and secured with UV glue. Active alignment was used to simplify the lay out of the fibers and minimize unwanted horizontal displacement of the fibers. A 10-channel fiber array was built, tested and later incorporated into a parallel optical coherence system. This paper describes the packaging, testing, and operation of the array in a parallel OCT system.

  19. Parallel Reservoir Simulations with Sparse Grid Techniques and Applications to Wormhole Propagation

    KAUST Repository

    Wu, Yuanqing

    2015-09-08

    In this work, two topics in reservoir simulation are discussed. The first topic is two-phase compositional flow simulation in hydrocarbon reservoirs. The major obstacle that impedes the applicability of the simulation code is the long run time of the simulation procedure, so speeding up the simulation code is necessary. Two means are demonstrated to address the problem: parallelism in physical space and the application of sparse grids in parameter space. The parallel code attains satisfactory scalability, and the sparse grids remove the bottleneck of flash calculations. Instead of carrying out the flash calculation in each time step of the simulation, a sparse-grid approximation of all possible results of the flash calculation is generated before the simulation. The constructed surrogate model is then evaluated to approximate the flash calculation results during the simulation. The second topic is wormhole propagation simulation in carbonate reservoirs. Here, departing from the traditional simulation technique relying on the Darcy framework, we propose a new framework, called the Darcy-Brinkman-Forchheimer (DBF) framework, to simulate wormhole propagation. Furthermore, to process the large number of cells in the simulation grid and shorten the long run time of the traditional serial code, standard domain-based parallelism is employed, using the Hypre multigrid library. In addition, a new technique called the 'experimenting field approach' is introduced to set the coefficients in the model equations. In the 2D dissolution experiments, different configurations of wormholes and a series of properties simulated by both frameworks are compared. We conclude that the numerical results of the DBF framework are more wormhole-like and more stable than those of the Darcy framework, demonstrating the advantages of the DBF framework. The scalability of the parallel code is also evaluated, and good scalability can be achieved. Finally, a mixed
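    The flash-calculation surrogate can be mimicked in a few lines: tabulate the expensive function over parameter space once, then interpolate during time stepping. The sketch below uses a small dense grid and a placeholder flash function for brevity, where the work described above uses sparse grids:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        def flash(p, t):                    # hypothetical stand-in for the
            return np.sin(p) * np.cos(t)    # expensive equilibrium computation

        p_axis = np.linspace(1.0, 100.0, 51)     # pressure, arbitrary units
        t_axis = np.linspace(300.0, 400.0, 51)   # temperature, K
        table = flash(p_axis[:, None], t_axis[None, :])
        surrogate = RegularGridInterpolator((p_axis, t_axis), table)

        # During the simulation, each flash call becomes a cheap interpolation:
        print(surrogate([[42.0, 350.0]]), flash(42.0, 350.0))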

  20. Spacing Techniques in Second Language Vocabulary Acquisition: Short-Term Gains vs. Long-Term Memory

    Science.gov (United States)

    Schuetze, Ulf

    2015-01-01

    This article reports the results of two experiments using the spacing technique (Leitner, 1972; Landauer & Bjork, 1978) in second language vocabulary acquisition. In the past, studies in this area have produced mixed results attempting to differentiate between massed, uniform and expanded intervals of spacing (Balota, Duchek, & Logan,…

  1. Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    The layered decoding algorithm has recently been proposed as an efficient means for the decoding of low-density parity-check (LDPC) codes, thanks to a roughly twofold improvement in the convergence speed of the decoding process. However, pipelined semi-parallel decoders suffer from violations or "hazards" between consecutive updates, which not only violate the layered principle but also activate the loops in the code, thus spoiling the error-correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of "idle" cycles, to prevent the hazards of the pipeline mechanism. Different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are also analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n) as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of logic synthesis on a 65 nm low-power CMOS technology are shown.
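
    The idle-cycle rescheduling can be sketched in a few lines. This is a hedged illustration, not the paper's architectures: layers are issued in order, and idle cycles are inserted whenever a layer would read a variable node whose update from an earlier layer is still in the pipeline (the latency model and scheduling rule are simplifying assumptions).

    ```python
    def schedule_with_idles(layers, latency):
        """layers: sets of variable-node indices in decoding order;
        latency: cycles before an issued update is written back."""
        schedule, cycle, last_write = [], 0, {}
        for li, layer in enumerate(layers):
            # earliest start: conflicting updates must have retired
            ready = max((last_write.get(v, 0) for v in layer), default=0)
            idles = max(0, ready - cycle)
            schedule += ["idle"] * idles + [f"layer {li}"]
            cycle += idles + 1
            for v in layer:
                last_write[v] = cycle + latency  # update lands later
        return schedule

    # layers 0 and 1 share node 3, which forces two idle cycles
    print(schedule_with_idles([{0, 1, 3}, {3, 4}, {5, 6}], latency=2))
    ```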

  2. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    Science.gov (United States)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

    The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera involves big data processing and is often time consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique uses the projections of the interrogation window instead of its two-dimensional field of luminous intensity. This simplification accelerates the ZNCC computation by up to 28.8 times compared with ZNCC calculated directly, depending on the sizes of the interrogation window and the region of interest. The results of three synthetic test cases, namely a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
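
    The projection idea can be conveyed with a minimal sketch (hedged: the exact projection and combination rules of the paper may differ): each window is reduced to its row and column sums, and two cheap 1-D ZNCCs replace the 2-D correlation.

    ```python
    import numpy as np

    def zncc_1d(a, b):
        a, b = a - a.mean(), b - b.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        return float(a @ b / denom) if denom > 0 else 0.0

    def projection_zncc(win, cand):
        # correlate x-projections (column sums) and y-projections (row sums)
        px = zncc_1d(win.sum(axis=0), cand.sum(axis=0))
        py = zncc_1d(win.sum(axis=1), cand.sum(axis=1))
        return 0.5 * (px + py)  # simple averaging; an assumption

    rng = np.random.default_rng(0)
    win = rng.random((32, 32))
    print(projection_zncc(win, np.roll(win, (2, 3), axis=(0, 1))),  # similar
          projection_zncc(win, rng.random((32, 32))))               # unrelated
    ```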

  3. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and the computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from the intensive computational requirements of detailed modeling investigations of real-world reservoirs. This paper presents the application of a massively parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance.

  4. Reply to "Comments on Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes"

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    This is a reply to the comments by Gunnam et al., "Comments on 'Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes'," EURASIP Journal on Embedded Systems, vol. 2009, Article ID 704174, on our recent work "Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes," EURASIP Journal on Embedded Systems, vol. 2009, Article ID 723465.

  5. A study on evaluating validity of SNR calculation using a conventional two region method in MR images applied a multichannel coil and parallel imaging technique

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Kwan Woo; Son, Soon Yong [Dept. of Radiology, Asan Medical Center, Seoul (Korea, Republic of); Min, Jung Whan [Dept. of Radiological Technology, Shingu University, Sungnam (Korea, Republic of); Kwon, Kyung Tae [Dept. of Radiological Technology, Dongnam Health University, Suwon (Korea, Republic of); Yoo, Beong Gyu; Lee, Jong Seok [Dept. of Radiotechnology, Wonkwang Health Science University, Iksan (Korea, Republic of)

    2015-12-15

    The purpose of this study was to investigate the problems of signal-to-noise ratio (SNR) measurement using the two-region method that is conventionally applied when a multi-channel coil and a parallel imaging technique are used. As a reference, the standard SNR was first calculated with a single-channel head coil, which satisfies the three preconditions of the two-region method. This was then compared with the SNR calculated by the two-region method under a multi-channel coil and a parallel imaging technique, a usage that disregards both the methods recommended by the relevant organizations and the preconditions of the technique. We found that the two-region method combined with a multi-channel coil and parallel imaging shows the highest relative standard deviation, and thus the lowest precision. In addition, the difference in SNR with ROI location was very large, indicating a non-uniform spatial noise distribution. The 95% confidence interval in the Bland-Altman plot was also the widest, showing poor agreement with the two-region measurement made with the standard single-channel head coil. By directly comparing, under the same image acquisition conditions, the AAPM method, which serves as a standard for performance evaluation of magnetic resonance imaging devices, the NEMA method, which can accurately determine the noise level in a signal region, and the methods recommended by scanner manufacturers, this study quantitatively verified the inaccuracy of two-region SNR measurements when a multi-channel coil and a parallel imaging technique are used without satisfying preconditions that researchers could easily overlook.
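
    For reference, the conventional two-region calculation under scrutiny can be sketched as follows (a hedged illustration with synthetic data; the 0.655 Rayleigh correction is valid only for single-channel magnitude images with spatially uniform noise, which is exactly the precondition that multi-channel coils with parallel imaging violate).

    ```python
    import numpy as np

    def two_region_snr(image, signal_roi, noise_roi):
        """Mean signal in an object ROI divided by the background noise
        standard deviation, with the Rayleigh correction for
        single-channel magnitude images (ROIs are (rowslice, colslice))."""
        signal = image[signal_roi].mean()
        noise_sd = image[noise_roi].std(ddof=1)
        return 0.655 * signal / noise_sd

    img = np.abs(100.0 + 5.0 * np.random.default_rng(1).standard_normal((128, 128)))
    print(two_region_snr(img, (slice(40, 80), slice(40, 80)),
                         (slice(0, 20), slice(0, 20))))
    ```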

  6. High temporal resolution magnetic resonance imaging: development of a parallel three-dimensional acquisition method for functional neuroimaging

    Energy Technology Data Exchange (ETDEWEB)

    Rabrait, C

    2007-11-15

    Echo Planar Imaging is widely used to perform data acquisition in functional neuroimaging. This sequence allows the acquisition of a set of about 30 slices, covering the whole brain, at a spatial resolution ranging from 2 to 4 mm and a temporal resolution ranging from 1 to 2 s. It is thus well adapted to the mapping of activated brain areas but does not allow precise study of brain dynamics. Moreover, temporal interpolation is needed in order to correct for inter-slice delays, and 2-dimensional acquisition is subject to vascular inflow artifacts. To improve the estimation of the hemodynamic response functions associated with activation, this thesis aimed at developing a 3-dimensional high temporal resolution acquisition method. To do so, Echo Volume Imaging (E.V.I.) was combined with reduced field-of-view acquisition and parallel imaging. Indeed, E.V.I. allows the acquisition of a whole volume in Fourier space following a single excitation, but it requires very long echo trains. Parallel imaging and field-of-view reduction are used to reduce the echo train durations by a factor of 4, which allows the acquisition of a 3-dimensional brain volume with limited susceptibility-induced distortions and signal losses in 200 ms. All imaging parameters have been optimized in order to reduce echo train durations and to maximize S.N.R., so that cerebral activation can be detected with a high level of confidence. Robust detection of brain activation was demonstrated with both visual and auditory paradigms. High temporal resolution hemodynamic response functions could be estimated through selective averaging of the responses to the different trials of the stimulation. To further improve S.N.R., the matrix inversions required in parallel reconstruction were regularized, and the impact of the level of regularization on activation detection was investigated. Eventually, potential applications of parallel E.V.I., such as the study of non-stationary effects in the B.O.L.D. response, are discussed.

  7. Image acquisition and planimetry systems to develop wounding techniques in 3D wound model

    Directory of Open Access Journals (Sweden)

    Kiefer Ann-Kathrin

    2017-09-01

    Wound healing represents a complex biological repair process. Established 2D monolayers and wounding techniques investigate cell migration, but do not represent coordinated multi-cellular systems. We aim to use wound surface area measurements obtained from image acquisition and planimetry systems to establish our wounding technique and in vitro organotypic tissue. These systems will be used in our future wound healing treatment studies to assess the rate of wound closure in response to wound healing treatment with light therapy (photobiomodulation). The image acquisition and planimetry systems were developed, calibrated, and verified to measure wound surface area in vitro. The system consists of a recording system (Sony DSC HX60, 20.4 MPixel, 1/2.3″ CMOS sensor) calibrated with 1mm scale paper. Macro photography with an optical zoom magnification of 2:1 achieves sufficient resolution to evaluate the 3mm wound size and healing growth. The camera system was leveled with an aluminum construction to ensure constant distance and orientation of the images. The JPG-format images were processed with a planimetry system in MATLAB. Edge detection enables definition of the wounded area, and the wound area can then be calculated with surface integrals. To separate the wounded area from the background, the image was filtered in several steps. Agar models, wounded by several test persons with different levels of experience, were used as pilot data to test the planimetry software. These image acquisition and planimetry systems support the development of our wound healing research. The reproducibility of our wounding technique can be assessed by the variability in initial wound surface area, and wound healing treatment effects can be assessed by the change in the rate of wound closure. These techniques represent the foundations of our wound model, wounding technique, and analysis systems in our ongoing studies of wound healing and therapy.
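
    The planimetry step can be illustrated with a short sketch (the MATLAB pipeline of the paper is approximated here in Python with thresholding and morphological cleanup; the calibration value and wound geometry are hypothetical).

    ```python
    import numpy as np
    from skimage import filters, morphology

    def wound_area_mm2(gray, mm_per_px):
        """Segment the wound and return its area: the surface integral
        over the segmented region reduces to a pixel count times the
        calibrated pixel area (calibration from the 1 mm scale paper)."""
        mask = gray < filters.threshold_otsu(gray)  # wound darker than agar (assumed)
        mask = morphology.remove_small_objects(mask, min_size=64)
        mask = morphology.binary_closing(mask, morphology.disk(3))
        return mask.sum() * mm_per_px ** 2

    # synthetic 3 mm circular wound imaged at an assumed 0.05 mm/px
    yy, xx = np.mgrid[:200, :200]
    img = np.where((yy - 100) ** 2 + (xx - 100) ** 2 < 30 ** 2, 0.2, 0.8)
    print(wound_area_mm2(img, mm_per_px=0.05))  # ~ pi * 1.5^2 = 7.1 mm^2
    ```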

  8. Evaluation of onboard hyperspectral-image compression techniques for a parallel push-broom sensor

    Energy Technology Data Exchange (ETDEWEB)

    Briles, S.

    1996-04-01

    A single hyperspectral imaging sensor can produce frames with spatially-continuous rows of differing, but adjacent, spectral wavelengths. If the frame sample rate of the sensor is such that subsequent hyperspectral frames are spatially shifted by one row, then the sensor can be thought of as a parallel (in wavelength) push-broom sensor. An examination of data compression techniques for such a sensor is presented. The compression techniques are intended to be implemented onboard a space-based platform and to have implementation speeds that match the data rate of the sensor. Data partitions examined extend from individually operating on a single hyperspectral frame to operating on a data cube comprising the two spatial axes and the spectral axis. Compression algorithms investigated utilize JPEG-based image compression, wavelet-based compression and differential pulse code modulation. Algorithm performance is quantitatively presented in terms of root-mean-squared error and root-mean-squared correlation coefficient error. Implementation issues are considered in algorithm development.
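
    One of the investigated families can be sketched as follows (a hedged illustration, not the study's implementation: a lossy DPCM along the spectral axis with an illustrative quantization step, followed by the two quality metrics named above).

    ```python
    import numpy as np

    def dpcm_spectral(cube, step=4.0):
        """Encode each band as the quantized difference from the
        previously reconstructed band; cube has shape (bands, rows, cols)."""
        recon, prev = np.empty_like(cube), np.zeros(cube.shape[1:])
        for b in range(cube.shape[0]):
            resid = np.round((cube[b] - prev) / step) * step
            recon[b] = prev + resid
            prev = recon[b]
        return recon

    cube = np.random.default_rng(2).random((16, 32, 32)) * 255
    rec = dpcm_spectral(cube)
    rmse = np.sqrt(np.mean((cube - rec) ** 2))
    corr = np.corrcoef(cube.ravel(), rec.ravel())[0, 1]
    print(f"RMS error = {rmse:.2f}, correlation coefficient error = {1 - corr:.5f}")
    ```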

  9. Marketing practitioner’s tacit knowledge acquisition using Repertory Grid Technique (RGT)

    Science.gov (United States)

    Azmi, Afdhal; Adriman, Ramzi

    2018-05-01

    The tacit knowledge of expert marketing practitioners is an excellent and priceless resource. It brings their experience, skills, ideas, belief systems, insights and speculation into management decision-making. This expertise consists of individual intuitive judgments and personal shortcuts for completing work efficiently, and it is one of the best sources of solutions in marketing strategy, environmental analysis, product management and partner relationships. This paper proposes a method for acquiring the tacit knowledge of marketing practitioners using the Repertory Grid Technique (RGT). RGT is implemented here as a software application for tacit knowledge acquisition, providing a systematic approach to capturing and acquiring the constructs of an individual. The results show that an understanding of RGT helps TKE and MPE achieve good results in capturing and acquiring the tacit knowledge of expert marketing practitioners.

  10. A scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC [Superconducting Super Collider] detectors

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; VanBerg, R.

    1989-12-01

    A new era of high-energy physics research is beginning, requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, data rates from the detector, and online processing power, orders of magnitude beyond the capabilities of current high-energy physics data acquisition systems are required. This paper describes a new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of gigabytes per second from the detector into an array of online processors (i.e., a processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the system architecture are standard interface ICs to detector subsystems wherever possible, fiber-optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported, high-level-language-programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build the self-routing parallel event builder is also given. 3 figs., 1 tab.
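
    The self-routing principle can be conveyed with a toy sketch (purely illustrative; the real event builder is a hardware switching fabric, and the modulo routing rule below is an assumption): each fragment carries its event number, which alone determines the farm node where the complete event is assembled.

    ```python
    from collections import defaultdict

    def build_events(fragments, n_processors):
        """fragments: (event_number, subsystem, payload) tuples; the
        'switch' routes each one to node event_number % n_processors,
        so all fragments of an event converge without a central controller."""
        farm = [defaultdict(list) for _ in range(n_processors)]
        for event, subsystem, payload in fragments:
            farm[event % n_processors][event].append((subsystem, payload))
        return farm

    frags = [(e, s, f"data{e}.{s}") for e in range(6) for s in ("calo", "tracker")]
    for node, events in enumerate(build_events(frags, 3)):
        print(f"node {node}: {dict(events)}")
    ```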

  11. Diffusion MRI of the neonate brain: acquisition, processing and analysis techniques

    Energy Technology Data Exchange (ETDEWEB)

    Pannek, Kerstin [University of Queensland, Centre for Clinical Research, Brisbane (Australia); University of Queensland, School of Medicine, Brisbane (Australia); University of Queensland, Centre for Advanced Imaging, Brisbane (Australia); Guzzetta, Andrea [IRCCS Stella Maris, Department of Developmental Neuroscience, Calambrone Pisa (Italy); Colditz, Paul B. [University of Queensland, Centre for Clinical Research, Brisbane (Australia); University of Queensland, Perinatal Research Centre, Brisbane (Australia); Rose, Stephen E. [University of Queensland, Centre for Clinical Research, Brisbane (Australia); University of Queensland, Centre for Advanced Imaging, Brisbane (Australia); University of Queensland Centre for Clinical Research, Royal Brisbane and Women' s Hospital, Brisbane (Australia)

    2012-10-15

    Diffusion MRI (dMRI) is a popular noninvasive imaging modality for the investigation of the neonate brain. It enables the assessment of white matter integrity, and is particularly suited for studying white matter maturation in the preterm and term neonate brain. Diffusion tractography allows the delineation of white matter pathways and assessment of connectivity in vivo. In this review, we address the challenges of performing and analysing neonate dMRI. Of particular importance in dMRI analysis is adequate data preprocessing to reduce image distortions inherent to the acquisition technique, as well as artefacts caused by head movement. We present a summary of techniques that should be used in the preprocessing of neonate dMRI data, and demonstrate the effect of these important correction steps. Furthermore, we give an overview of available analysis techniques, ranging from voxel-based analysis of anisotropy metrics including tract-based spatial statistics (TBSS) to recently developed methods of statistical analysis addressing issues of resolving complex white matter architecture. We highlight the importance of resolving crossing fibres for tractography and outline several tractography-based techniques, including connectivity-based segmentation, the connectome and tractography mapping. These techniques provide powerful tools for the investigation of brain development and maturation. (orig.)

  12. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    Energy Technology Data Exchange (ETDEWEB)

    Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.

  13. Load-balancing techniques for a parallel electromagnetic particle-in-cell code

    International Nuclear Information System (INIS)

    Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.

    2000-01-01

    QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.
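
    The underlying load-balancing principle can be sketched simply (illustrative only; QUICKSILVER's actual strategies are more general): because particles dominate the cost of a PIC step, processor boundaries are placed along cumulative particle weight rather than equal cell counts.

    ```python
    import numpy as np

    def balanced_slabs(particles_per_cell, n_procs):
        """Cut a 1-D row of grid cells into contiguous slabs with roughly
        equal particle counts, using the cumulative particle weight."""
        cum = np.cumsum(particles_per_cell)
        targets = cum[-1] * np.arange(1, n_procs) / n_procs
        return np.split(np.arange(len(particles_per_cell)),
                        np.searchsorted(cum, targets))

    density = np.random.default_rng(3).integers(0, 100, size=64)  # nonuniform load
    for rank, cells in enumerate(balanced_slabs(density, 4)):
        print(f"rank {rank}: cells {cells[0]}-{cells[-1]}, "
              f"particles {density[cells].sum()}")
    ```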

  14. Screening crops for efficient phosphorus acquisition in a low phosphorus soil using radiotracer technique

    International Nuclear Information System (INIS)

    Meena, S.; Malarvizhi, P.; Rajeswari, R.

    2017-01-01

    Deficiency of phosphorus (P) is a major limitation to agricultural production. Identification of cultivars with a greater capacity to grow in soils having low P availability (phosphorus efficiency) will help in managing P in a sustainable way. A greenhouse experiment with maize (CO 6) and cotton (MCU 13) as test crops and four levels of phosphorus (0, 3.75, 7.50 and 15 mg P kg⁻¹ soil) was conducted in a P-deficient soil (7.2 kg ha⁻¹) to study the phosphorus acquisition characteristics and to select the efficient crop using the 32P radiotracer technique. Carrier-free 32P, obtained as orthophosphoric acid in dilute hydrochloric acid medium from the Board of Radiation and Isotope Technology, Mumbai, was used for labeling the soil at 3200 kBq pot⁻¹. After 60 days the crops were harvested and the radioactivity was measured in the plant samples using a liquid scintillation counter (PerkinElmer Tri-Carb 2810 TR). Different values of specific radioactivity and isotopically exchangeable phosphorus for maize and cotton indicated that chemically different pools of soil P were utilized, with maize accessing a larger pool than cotton. Having recorded high phosphorus use efficiency and phosphorus efficiency and a low phosphorus stress factor, maize is the better choice for P-deficient soils. The higher phosphorus acquisition efficiency of maize (59%) than cotton (48%) can be related to the ability of maize to take up P from insoluble inorganic P forms. (author)

  15. Implementation of the neutron noise technique for subcritical reactors using a new data acquisition system

    International Nuclear Information System (INIS)

    Bellino, Pablo A.; Gomez, Angel

    2009-01-01

    A new data acquisition system was designed and programmed for estimating nuclear kinetics parameters in subcritical reactors. The system supports any of the neutron noise techniques, since it stores all the information available in the neutron detection system. The Rossi-α, Feynman-α and spectral analysis methods were applied in order to estimate the prompt neutron decay constant (and hence the reactivity). The measurements were performed in the nuclear research reactor RA-1, where different reactivity levels (down to -7 dollars) were reached by inserting the control rods. With the three methods used, agreement was found between the estimates and the reference reactivities at each level, even when the detector efficiency was low. All the measurements were performed in a high gamma flux, yet the results were satisfactory. (author)
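
    The Feynman-α branch of such a system can be sketched as follows (a hedged toy: the synthetic, weakly correlated pulse train is a crude stand-in for detector data, and all numbers are illustrative). The variance-to-mean ratio of gated counts is computed for several gate widths and the prompt decay constant α is fitted from the textbook gate-width dependence.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def feynman_y(timestamps, gate):
        # excess of the variance-to-mean ratio of gated counts over 1 (Poisson)
        counts, _ = np.histogram(timestamps, bins=np.arange(0.0, timestamps[-1], gate))
        return counts.var() / counts.mean() - 1.0

    def y_model(T, y_inf, alpha):
        # textbook Feynman-Y dependence on the gate width T
        return y_inf * (1.0 - (1.0 - np.exp(-alpha * T)) / (alpha * T))

    # synthetic train: each primary spawns a correlated partner with
    # probability 0.3, delayed with prompt decay constant 2000 1/s
    rng = np.random.default_rng(4)
    prim = np.cumsum(rng.exponential(1e-4, size=100_000))
    mask = rng.random(prim.size) < 0.3
    ts = np.sort(np.concatenate([prim, prim[mask] + rng.exponential(1 / 2000.0, mask.sum())]))

    gates = np.array([1e-4, 2e-4, 5e-4, 1e-3, 2e-3, 5e-3])
    y = np.array([feynman_y(ts, g) for g in gates])
    (y_inf, alpha), _ = curve_fit(y_model, gates, y, p0=[0.1, 1e3], bounds=(0, np.inf))
    print(f"fitted prompt decay constant: {alpha:.0f} 1/s")
    ```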

  16. Usefulness of 3D-CE renal artery MRA using parallel imaging with array spatial sensitivity encoding technique (ASSET)

    International Nuclear Information System (INIS)

    Shibasaki, Toshiro; Seno, Masafumi; Takoi, Kunihiro; Sato, Hirofumi; Hino, Tsuyoshi

    2003-01-01

    In this study of 3D contrast-enhanced MR angiography of the renal artery using the array spatial sensitivity encoding technique (ASSET), the acquisition time per phase was shortened considerably. Using the technique of spectral inversion at lipids (SPECIAL) together with ASSET improved image quality by emphasizing the contrast. The timing of acquisition was determined by a test injection: MR angiography acquisition started 2 seconds after the maximum enhancement of the test injection arrived at the upper abdominal aorta near the renal artery. As a result, parenchymal enhancement was not visible and depiction of the segmental artery was possible in 14 (82%) of 17 patients. At the present time we consider it better not to use a fractional number of excitations (NEX) together with ASSET, as it may cause various artifacts. (author)

  17. Simultaneous Multislice Echo Planar Imaging With Blipped Controlled Aliasing in Parallel Imaging Results in Higher Acceleration: A Promising Technique for Accelerated Diffusion Tensor Imaging of Skeletal Muscle.

    Science.gov (United States)

    Filli, Lukas; Piccirelli, Marco; Kenkel, David; Guggenberger, Roman; Andreisek, Gustav; Beck, Thomas; Runge, Val M; Boss, Andreas

    2015-07-01

    The aim of this study was to investigate the feasibility of accelerated diffusion tensor imaging (DTI) of skeletal muscle using echo planar imaging (EPI) applying simultaneous multislice excitation with a blipped "controlled aliasing in parallel imaging results in higher acceleration" unaliasing technique. After federal ethics board approval, the lower leg muscles of 8 healthy volunteers (mean [SD] age, 29.4 [2.9] years) were examined in a clinical 3-T magnetic resonance scanner using a 15-channel knee coil. The EPI was performed at a b value of 500 s/mm² without slice acceleration (conventional DTI) as well as with 2-fold and 3-fold acceleration. Fractional anisotropy (FA) and mean diffusivity (MD) were measured in all 3 acquisitions. Fiber tracking performance was compared between the acquisitions regarding the number of tracks, average track length, and anatomical precision using multivariate analysis of variance and Mann-Whitney U tests. Acquisition time was 7:24 minutes for conventional DTI, 3:53 minutes for 2-fold acceleration, and 2:38 minutes for 3-fold acceleration. Overall FA and MD values ranged from 0.220 to 0.378 and from 1.595 to 1.829 × 10⁻³ mm²/s, respectively. Two-fold acceleration yielded similar FA and MD values (P ≥ 0.901) and similar fiber tracking performance compared with conventional DTI. Three-fold acceleration resulted in comparable MD (P = 0.199) but higher FA values (P = 0.006) and significantly impaired fiber tracking in the soleus and tibialis anterior muscles. Simultaneous multislice EPI with 2-fold acceleration thus enables accelerated DTI of skeletal muscle with similar image quality and quantification accuracy of diffusion parameters. This may increase the clinical applicability of muscle anisotropy measurements.

  18. Visual Analysis of North Atlantic Hurricane Trends Using Parallel Coordinates and Statistical Techniques

    National Research Council Canada - National Science Library

    Steed, Chad A; Fitzpatrick, Patrick J; Jankun-Kelly, T. J; Swan II, J. E

    2008-01-01

    ... for a particular dependent variable. These capabilities are combined into a unique visualization system that is demonstrated via a North Atlantic hurricane climate study using a systematic workflow. This research corroborates the notion that enhanced parallel coordinates coupled with statistical analysis can be used for more effective knowledge discovery and confirmation in complex, real-world data sets.

  19. Comments on “Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes”

    Directory of Open Access Journals (Sweden)

    Mark B. Yeary

    2009-01-01

    This is a comment article on the publication "Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes" by Rovini et al. (2009). We note that similar work has been reported in the literature before, and that this previous work has not been cited correctly, for example Gunnam et al. (2006, 2007). This brief note serves to clarify these issues.

  20. Development of fast parallel multi-technique scanning X-ray imaging at Synchrotron Soleil

    Science.gov (United States)

    Medjoubi, K.; Leclercq, N.; Langlois, F.; Buteau, A.; Lé, S.; Poirier, S.; Mercère, P.; Kewish, C. M.; Somogyi, A.

    2013-10-01

    A fast multimodal scanning X-ray imaging scheme has been prototyped at Synchrotron Soleil. It permits the simultaneous acquisition of complementary information on the sample structure, composition and chemistry by measuring transmission, differential phase contrast, small-angle scattering, and X-ray fluorescence with dedicated detectors at millisecond dwell times per pixel. The results of proof-of-principle experiments are presented in this paper.

  1. Visual Analysis of North Atlantic Hurricane Trends Using Parallel Coordinates and Statistical Techniques

    Science.gov (United States)

    2008-07-07

    The system for analyzing multivariate data sets was developed using the Java Development Kit (JDK) version 1.5 and yields interactive performance. A script captures output from MATLAB's "regress" and "stepwisefit" utilities, which perform simple and stepwise regression, respectively.

  2. Parallel analysis tools and new visualization techniques for ultra-large climate data set

    Energy Technology Data Exchange (ETDEWEB)

    Middleton, Don [National Center for Atmospheric Research, Boulder, CO (United States); Haley, Mary [National Center for Atmospheric Research, Boulder, CO (United States)

    2014-12-10

    ParVis was a project funded under LAB 10-05: “Earth System Modeling: Advanced Scientific Visualization of Ultra-Large Climate Data Sets”. Argonne was the lead lab with partners at PNNL, SNL, NCAR and UC-Davis. This report covers progress from January 1st, 2013 through Dec 1st, 2014. Two previous reports covered the period from Summer, 2010, through September 2011 and October 2011 through December 2012, respectively. While the project was originally planned to end on April 30, 2013, personnel and priority changes allowed many of the institutions to continue work through FY14 using existing funds. A primary focus of ParVis was introducing parallelism to climate model analysis to greatly reduce the time-to-visualization for ultra-large climate data sets. Work in the first two years was conducted on two tracks with different time horizons: one track to provide immediate help to climate scientists already struggling to apply their analysis to existing large data sets and another focused on building a new data-parallel library and tool for climate analysis and visualization that will give the field a platform for performing analysis and visualization on ultra-large datasets for the foreseeable future. In the final 2 years of the project, we focused mostly on the new data-parallel library and associated tools for climate analysis and visualization.

  3. Reducing contrast contamination in radial turbo-spin-echo acquisitions by combining a narrow-band KWIC filter with parallel imaging.

    Science.gov (United States)

    Neumann, Daniel; Breuer, Felix A; Völker, Michael; Brandt, Tobias; Griswold, Mark A; Jakob, Peter M; Blaimer, Martin

    2014-12-01

    Cartesian turbo spin-echo (TSE) and radial TSE images are usually reconstructed by assembling data containing different contrast information into a single k-space. This approach results in mixed contrast contributions in the images, which may reduce their diagnostic value. The goal of this work is to improve the image contrast from radial TSE acquisitions by reducing the contribution of signals with undesired contrast information. Radial TSE acquisitions allow the reconstruction of multiple images with different T2 contrasts using the k-space weighted image contrast (KWIC) filter. In this work, the image contrast is improved by reducing the bandwidth of the KWIC filter: data for the reconstruction of a single image are selected from within a small temporal range around the desired echo time. The resulting dataset is undersampled and, therefore, an iterative parallel imaging algorithm is applied to remove aliasing artifacts. Radial TSE images of the human brain reconstructed with the proposed method show improved contrast when compared with Cartesian TSE images or radial TSE images with conventional KWIC reconstructions. The proposed method provides multi-contrast images from radial TSE data with contrasts similar to multi-spin-echo images. Contaminations from unwanted contrast weightings are strongly reduced. © 2014 Wiley Periodicals, Inc.
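
    The narrow-band selection can be sketched simply (a hedged illustration with invented timing values; the actual KWIC weighting is more elaborate): only spokes acquired close to the target echo time are kept, and the resulting angular undersampling is what the iterative parallel imaging reconstruction must then resolve.

    ```python
    import numpy as np

    def select_spokes(echo_times, target_te, band):
        """Keep only radial spokes acquired within +/- band of the
        desired echo time, instead of filling outer k-space with
        data from all echoes of the train."""
        echo_times = np.asarray(echo_times)
        return np.flatnonzero(np.abs(echo_times - target_te) <= band)

    # 16 echoes per train at 10 ms spacing, 8 trains (illustrative numbers)
    te = np.tile(np.arange(1, 17) * 10.0, 8)
    idx = select_spokes(te, target_te=80.0, band=15.0)
    print(f"{idx.size}/{te.size} spokes kept -> "
          f"undersampling factor {te.size / idx.size:.1f}")
    ```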

  4. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x²-y² shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.
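
    A one-dimensional toy demonstration of the underlying effect (illustrative values, not the authors' implementation): after the quadratic phase is applied, completing the square in the Fourier integral shows that the acquired data equal the object convolved with a chirp, so the magnitude of the raw phase-scrambled k-space data already resembles a blurred, alias-free image of the object.

    ```python
    import numpy as np

    N = 256
    x = np.arange(N) - N / 2
    rho = np.zeros(N); rho[96:160] = 1.0; rho[120:136] = 2.0  # toy object

    a = np.pi / N  # quadratic phase coefficient chosen so k/(2a) maps onto x
    scrambled = rho * np.exp(1j * a * x ** 2)
    kspace = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(scrambled)))

    # |kspace| behaves like a low-resolution image of the object
    blurred = np.convolve(rho, np.ones(9) / 9, mode="same")
    print("correlation with blurred object:",
          np.corrcoef(np.abs(kspace), blurred)[0, 1])
    ```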

  5. Advanced quadrature sets and acceleration and preconditioning techniques for the discrete ordinates method in parallel computing environments

    Science.gov (United States)

    Longoni, Gianluca

    In the nuclear science and engineering field, radiation transport calculations play a key role in the design and optimization of nuclear devices. The linear Boltzmann equation describes the angular, energy and spatial variations of the particle or radiation distribution. The discrete ordinates method (SN) is the most widely used technique for solving the linear Boltzmann equation. However, for realistic problems, the memory and computing-time requirements call for supercomputers. This research is devoted to the development of new formulations for the SN method, especially for highly angular-dependent problems, in parallel environments. The present work addresses two main issues affecting the accuracy and performance of SN transport theory methods: quadrature sets and acceleration techniques. New advanced quadrature techniques which allow for large numbers of angles with a capability for local angular refinement have been developed. These techniques have been integrated into the 3-D SN PENTRAN (Parallel Environment Neutral-particle TRANsport) code and applied to highly angular-dependent problems, such as CT-scan devices, which are widely used to obtain detailed 3-D images for industrial and medical applications. In addition, the accurate simulation of core physics and shielding problems with strong heterogeneities and transport effects requires the numerical solution of the transport equation. In general, the convergence rate of the solution methods for the transport equation is reduced for large problems with optically thick regions and scattering ratios approaching unity. To remedy this situation, new acceleration algorithms based on the Even-Parity Simplified SN (EP-SSN) method have been developed. A new stand-alone code system, PENSSn (Parallel Environment Neutral-particle Simplified SN), has been developed based on the EP-SSN method. The code is designed for parallel computing environments with spatial, angular and hybrid (spatial/angular) domain decomposition.

  6. A Full Parallel Event Driven Readout Technique for Area Array SPAD FLIM Image Sensors

    Directory of Open Access Journals (Sweden)

    Kaiming Nie

    2016-01-01

    This paper presents a full parallel event-driven readout method implemented in an area array single-photon avalanche diode (SPAD) image sensor for high-speed fluorescence lifetime imaging microscopy (FLIM). By adopting the full parallel event-driven readout method, the sensor records and reads out only the effective time and position information, which reduces the amount of data. The image sensor includes four 8 × 8 pixel arrays. In each array, four time-to-digital converters (TDCs) are used to quantize the arrival times of photons, and two address-record modules are used to record the column and row information. In this work, Monte Carlo simulations were performed in MATLAB to assess the pile-up effect induced by the readout method. The sensor's resolution is 16 × 16. The time resolution of the TDCs is 97.6 ps and the quantization range is 100 ns. The readout frame rate is 10 Mfps, and the maximum imaging frame rate is 100 fps. The chip's output bandwidth is 720 MHz with an average power of 15 mW. The lifetime resolvability range is 5-20 ns, and the average error of the estimated fluorescence lifetimes is below 1% when the centre-of-mass method (CMM) is employed to estimate lifetimes.
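
    The lifetime estimation can be sketched as follows (a hedged illustration; the on-chip estimator may differ in detail): for a single-exponential decay the centre of mass of the photon arrival times equals the lifetime, up to a correction for the finite measurement window T, since E[t | t < T] = tau - T/(exp(T/tau) - 1).

    ```python
    import numpy as np

    def cmm_lifetime(arrival_times, window):
        """Centre-of-mass lifetime with a finite-window correction,
        solved by fixed-point iteration of tau = m + T/(exp(T/tau)-1)."""
        m = tau = np.mean(arrival_times)
        for _ in range(50):
            tau = m + window / np.expm1(window / tau)
        return tau

    rng = np.random.default_rng(5)
    t = rng.exponential(10.0, size=50_000)  # true lifetime 10 ns (illustrative)
    t = t[t < 100.0]                        # photons outside the 100 ns range are lost
    print(cmm_lifetime(t, window=100.0))    # ~ 10 ns
    ```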

  7. Acquisition of Dental Skills in Preclinical Technique Courses: Influence of Spatial and Manual Abilities

    Science.gov (United States)

    Schwibbe, Anja; Kothe, Christian; Hampe, Wolfgang; Konradt, Udo

    2016-01-01

    Sixty years of research have not added up to a concordant evaluation of the influence of spatial and manual abilities on dental skill acquisition. We used Ackerman's theory of ability determinants of skill acquisition to explain the influence of spatial visualization and manual dexterity on the task performance of dental students in two…

  8. A Novel Technique for Design of Ultra High Tunable Electrostatic Parallel Plate RF MEMS Variable Capacitor

    Science.gov (United States)

    Baghelani, Masoud; Ghavifekr, Habib Badri

    2017-12-01

    This paper introduces a novel method for designing low-actuation-voltage, high-tuning-ratio electrostatic parallel-plate RF MEMS variable capacitors. The proposed method makes it feasible to achieve ultra-high tuning ratios well beyond the 1.5:1 barrier imposed by the pull-in effect. It is based on strengthening the springs of the structure just before the unstable region: dimples of precisely chosen height are embedded on the spring arms, and they shorten the effective spring length once they reach the substrate. With only four dimple sets, tuning ratios as high as 7.5:1 are attainable. The required actuation voltage for this high tuning ratio is 14.33 V, which is easily delivered on-chip by charge-pump circuits. The Brownian noise effect is also discussed, and the mechanical natural frequency of the structure is calculated.
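
    The classical limit that the dimples circumvent can be made concrete with a short sketch (the spring constant, gap and plate area below are assumptions, not the paper's values).

    ```python
    import numpy as np

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def pull_in_voltage(k, g0, area):
        """Classical parallel-plate electrostatic actuator: stable travel
        ends at one third of the gap, so the capacitance tuning ratio is
        capped at C(2*g0/3)/C(g0) = 1.5 unless the spring is stiffened."""
        return np.sqrt(8 * k * g0 ** 3 / (27 * EPS0 * area))

    k, g0, area = 10.0, 3e-6, (200e-6) ** 2  # N/m, m, m^2 (assumed)
    print(f"pull-in voltage = {pull_in_voltage(k, g0, area):.2f} V")
    print(f"tuning ratio limit without stiffening: {g0 / (g0 - g0 / 3):.1f}:1")
    # dimples that shorten the springs at preset deflections raise k in
    # steps, restarting the stable travel range and pushing the ratio higher
    ```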

  9. Parallel Reservoir Simulations with Sparse Grid Techniques and Applications to Wormhole Propagation

    KAUST Repository

    Wu, Yuanqing

    2015-01-01

    In this work, different from the traditional simulation technique relying on the Darcy framework, we propose a new framework called the Darcy-Brinkman-Forchheimer framework to simulate wormhole propagation. Furthermore, to process the large quantity of cells in the simulation grid and shorten the long simulation time of the traditional serial code, standard domain-based parallelism is employed, using the Hypre multigrid library.

  10. Non-contrast-enhanced hepatic MR angiography: Do two-dimensional parallel imaging and short tau inversion recovery methods shorten acquisition time without image quality deterioration?

    Energy Technology Data Exchange (ETDEWEB)

    Shimada, Kotaro, E-mail: kotaro@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Isoda, Hiroyoshi, E-mail: sayuki@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Okada, Tomohisa, E-mail: tomokada@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Kamae, Toshikazu, E-mail: toshi13@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Arizono, Shigeki, E-mail: arizono@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Hirokawa, Yuusuke, E-mail: yuusuke@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Shibata, Toshiya, E-mail: ksj@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Togashi, Kaori, E-mail: ktogashi@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan)

    2011-01-15

    Objective: To study whether shortening the acquisition time for selective hepatic artery visualization is feasible without image quality deterioration by adopting two-dimensional (2D) parallel imaging (PI) and short tau inversion recovery (STIR) methods. Materials and methods: Twenty-four healthy volunteers were enrolled. 3D true steady-state free-precession imaging with a time spatial labeling inversion pulse was conducted using 1D or 2D-PI and fat suppression by chemical shift selective (CHESS) or STIR methods. Three groups of different scan conditions were assigned and compared: group A (1D-PI factor 2 and CHESS), group B (2D-PI factor 2 × 2 and CHESS), and group C (2D-PI factor 2 × 2 and STIR). The artery-to-liver contrast was quantified, and the quality of artery visualization and overall image quality were scored. Results: The mean scan time was 9.5 ± 1.0 min (mean ± standard deviation), 5.9 ± 0.8 min, and 5.8 ± 0.5 min in groups A, B, and C, respectively, and was significantly shorter in groups B and C than in group A (P < 0.01). The artery-to-liver contrast was significantly better in group C than in groups A and B (P < 0.01). The scores for artery visualization and overall image quality were worse in group B than in groups A and C. The differences were statistically significant (P < 0.05) regarding the arterial branches of segments 4 and 8. Between group A and group C, which had similar scores, there were no statistically significant differences. Conclusion: Shortening the acquisition time for selective hepatic artery visualization was feasible without deterioration of the image quality by the combination of 2D-PI and STIR methods. It will facilitate using non-contrast-enhanced MRA in clinical practice.

  11. Non-contrast-enhanced hepatic MR angiography: Do two-dimensional parallel imaging and short tau inversion recovery methods shorten acquisition time without image quality deterioration?

    International Nuclear Information System (INIS)

    Shimada, Kotaro; Isoda, Hiroyoshi; Okada, Tomohisa; Kamae, Toshikazu; Arizono, Shigeki; Hirokawa, Yuusuke; Shibata, Toshiya; Togashi, Kaori

    2011-01-01

    Objective: To study whether shortening the acquisition time for selective hepatic artery visualization is feasible without image quality deterioration by adopting two-dimensional (2D) parallel imaging (PI) and short tau inversion recovery (STIR) methods. Materials and methods: Twenty-four healthy volunteers were enrolled. 3D true steady-state free-precession imaging with a time spatial labeling inversion pulse was conducted using 1D or 2D-PI and fat suppression by chemical shift selective (CHESS) or STIR methods. Three groups of different scan conditions were assigned and compared: group A (1D-PI factor 2 and CHESS), group B (2D-PI factor 2 × 2 and CHESS), and group C (2D-PI factor 2 × 2 and STIR). The artery-to-liver contrast was quantified, and the quality of artery visualization and overall image quality were scored. Results: The mean scan time was 9.5 ± 1.0 min (mean ± standard deviation), 5.9 ± 0.8 min, and 5.8 ± 0.5 min in groups A, B, and C, respectively, and was significantly shorter in groups B and C than in group A (P < 0.01). The artery-to-liver contrast was significantly better in group C than in groups A and B (P < 0.01). The scores for artery visualization and overall image quality were worse in group B than in groups A and C. The differences were statistically significant (P < 0.05) regarding the arterial branches of segments 4 and 8. Between group A and group C, which had similar scores, there were no statistically significant differences. Conclusion: Shortening the acquisition time for selective hepatic artery visualization was feasible without deterioration of the image quality by the combination of 2D-PI and STIR methods. It will facilitate using non-contrast-enhanced MRA in clinical practice.

  12. Optimal design of a spherical parallel manipulator based on kinetostatic performance using evolutionary techniques

    Energy Technology Data Exchange (ETDEWEB)

    Daneshmand, Morteza [University of Tartu, Tartu (Estonia); Saadatzi, Mohammad Hossein [Colorado School of Mines, Golden (United States); Kaloorazi, Mohammad Hadi [École de Technologie Supérieure, Montréal (Canada); Masouleh, Mehdi Tale [University of Tehran, Tehran (Iran, Islamic Republic of); Anbarjafari, Gholamreza [Hasan Kalyoncu University, Gaziantep (Turkey)

    2016-03-15

    This study aims to provide an optimal design for a spherical parallel manipulator (SPM), namely the Agile Eye. This aim is approached by investigating kinetostatic performance and workspace and searching for the most promising design. Previously recommended designs are examined to determine whether they provide acceptable kinetostatic performance and workspace. Optimal designs are provided according to different kinetostatic performance indices, especially kinematic sensitivity. The optimization process is built on genetic-algorithm concepts. A single-objective process is implemented following the guidelines of an evolutionary algorithm called differential evolution. A multi-objective procedure is then provided following the reasoning of the nondominated sorting genetic algorithm-II (NSGA-II). This process results in several sets of Pareto points for reconciling kinetostatic performance indices with workspace. The various kinetostatic performance indices and the results of the optimization algorithms are elaborated. The conclusions provide insight into the resulting set of designs and their ability to provide a well-conditioned workspace and acceptable kinetostatic performance for the SPM under study, and can be extended to other types of SPMs.
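
    The single-objective stage can be sketched with a minimal DE/rand/1/bin loop (a generic illustration: the toy objective stands in for the paper's kinetostatic indices, and the hyperparameters are assumptions).

    ```python
    import numpy as np

    def differential_evolution(obj, lo, hi, pop=30, gens=200, F=0.8, CR=0.9, seed=0):
        """Minimize obj over box bounds [lo, hi] with DE/rand/1/bin."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        X = rng.uniform(lo, hi, size=(pop, lo.size))
        f = np.apply_along_axis(obj, 1, X)
        for _ in range(gens):
            for i in range(pop):
                a, b, c = X[rng.choice(pop, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
                cross = rng.random(lo.size) < CR               # binomial crossover
                trial = np.where(cross, mutant, X[i])
                ft = obj(trial)
                if ft < f[i]:                                  # greedy selection
                    X[i], f[i] = trial, ft
        return X[f.argmin()], f.min()

    # toy stand-in for a kinematic-sensitivity objective (not the paper's)
    x_best, f_best = differential_evolution(lambda x: np.sum((x - 0.3) ** 2),
                                            lo=[0, 0, 0], hi=[1, 1, 1])
    print(x_best, f_best)
    ```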

  13. Selection and integration of a network of parallel processors in the real time acquisition system of the 4π DIAMANT multidetector: modeling, realization and evaluation of the software installed on this network

    International Nuclear Information System (INIS)

    Guirande, F.

    1997-01-01

    The increase in sensitivity of 4π arrays such as EUROBALL or DIAMANT has led to an increase in the data flow rate into the data acquisition system. Even though the data flow has been distributed onto several data acquisition buses at the electronic level, it is necessary to increase the processing power in the data processing system. This work concerns the modelling and implementation of the software deployed on an architecture of parallel processors. Object-oriented analysis and formal methods were used; benchmarks and the future evolution of this architecture are presented. The thesis consists of two parts. Part A, devoted to 'Nuclear spectroscopy with 4π multidetectors', contains a first chapter entitled 'The physics of 4π multidetectors' and a second chapter entitled 'Integral architecture of 4π multidetectors'. Part B, devoted to 'Parallel acquisition system of DIAMANT', contains three chapters entitled 'Material architecture', 'Software architecture' and 'Validation and performances'. Four appendices and a glossary of terms close this work. (author)

  14. Evaluation of alias-less reconstruction by pseudo-parallel imaging in a phase-scrambling fourier transform technique

    International Nuclear Information System (INIS)

    Ito, Satoshi; Kawawa, Yasuhiro; Yamada, Yoshifumi

    2010-01-01

    We propose an image reconstruction technique in which parallel image reconstruction is performed based on the sensitivity encoding (SENSE) algorithm using only a single set of signals. The signal obtained in the phase-scrambling Fourier transform (PSFT) imaging technique can be transformed to the signal described by the Fresnel transform of the object, which is known as the diffracted wave-front equation of the object in acoustics or optics. Since the Fresnel transform is a convolution integral on the object space, the space in which the PSFT signal exists can be considered to be both the Fourier domain and the object domain. This notable feature indicates that weighting functions corresponding to the sensitivities of radiofrequency (RF) coils can be approximately imposed in the PSFT signal space. Therefore, we can obtain two folded images from a single set of signals with different weighting functions, and image reconstruction based on the SENSE parallel imaging algorithm is possible using this series of folded images. Simulation and experimental studies showed that almost alias-free images can be synthesized from a single signal that does not satisfy the sampling theorem. (author)
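
    The final unfolding step is ordinary SENSE algebra, sketched below for an acceleration factor of 2 (a hedged illustration: the linear-ramp weighting functions stand in for the windowed PSFT weightings, and no Fresnel-domain processing is modelled). At each pixel of the folded field of view, a 2 × 2 system mixing the two aliased locations is solved.

    ```python
    import numpy as np

    def sense_unfold(folded, weights):
        """folded: (2, N/2) aliased images; weights: (2, N) weighting
        (sensitivity-like) functions; returns the unfolded N-pixel image."""
        n2 = folded.shape[1]
        rho = np.zeros(2 * n2, dtype=complex)
        for y in range(n2):
            E = np.array([[weights[c, y], weights[c, y + n2]] for c in range(2)])
            rho[[y, y + n2]] = np.linalg.solve(E, folded[:, y])
        return rho

    N = 128
    obj = np.sin(np.linspace(0, 3 * np.pi, N)) + 1.5
    w = np.stack([np.linspace(0.2, 1.0, N), np.linspace(1.0, 0.2, N)])  # assumed
    folded = np.stack([(w[c] * obj)[:N // 2] + (w[c] * obj)[N // 2:] for c in range(2)])
    print(np.allclose(sense_unfold(folded, w).real, obj))
    ```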

  15. A smart technique for attendance system to recognize faces through parallelism

    Science.gov (United States)

    Prabhavathi, B.; Tanuja, V.; Madhu Viswanatham, V.; Rajashekhara Babu, M.

    2017-11-01

    A major part of recognising a person is the face; with the help of image processing techniques we can exploit the physical features of a person. In the old approach used in schools and colleges, the professor calls each student's name and marks the attendance. In this paper we deviate from that old approach and adopt a new one based on image processing techniques, presenting spontaneous attendance marking for students in the classroom. First, an image of the classroom is captured and stored in the data record. To the stored images we apply a system algorithm that includes steps such as histogram classification, noise removal, face detection and face recognition. The detected faces are compared with the database, and attendance is marked automatically when the system recognizes the faces.
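
    The detection stage of such a pipeline can be sketched with OpenCV (a hedged illustration: the image file name is a placeholder, and the recognition and matching step against the enrolled-student database is left as a stub).

    ```python
    import cv2

    def detect_faces(image_path):
        """Grayscale conversion, contrast normalisation and Haar-cascade
        face detection; returns the cropped face regions."""
        img = cv2.imread(image_path)  # classroom photo
        if img is None:
            raise FileNotFoundError(image_path)
        gray = cv2.equalizeHist(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return [gray[y:y + h, x:x + w] for (x, y, w, h) in faces]

    # each crop would then be matched against stored student templates
    # and the corresponding student marked present
    print(f"{len(detect_faces('classroom.jpg'))} faces found")
    ```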

  16. Selection and integration of a network of parallel processors in the real time acquisition system of the 4π DIAMANT multidetector: modeling, realization and evaluation of the software installed on this network

    Energy Technology Data Exchange (ETDEWEB)

    Guirande, F. [Ecole Doctorale des Sciences Physiques et de l'Ingenieur, Bordeaux-1 Univ., 33 (France)]

    1997-07-11

    The increase in sensitivity of 4π arrays such as EUROBALL or DIAMANT has led to an increase in the data flow rate into the data acquisition system. Even though the data flow has been distributed onto several data acquisition buses at the electronic level, it is necessary to increase the processing power in the data processing system. This work concerns the modelling and implementation of the software deployed on an architecture of parallel processors. Object-oriented analysis and formal methods were used; benchmarks and the future evolution of this architecture are presented. The thesis consists of two parts. Part A, devoted to 'Nuclear spectroscopy with 4π multidetectors', contains a first chapter entitled 'The physics of 4π multidetectors' and a second chapter entitled 'Integral architecture of 4π multidetectors'. Part B, devoted to 'Parallel acquisition system of DIAMANT', contains three chapters entitled 'Material architecture', 'Software architecture' and 'Validation and performances'. Four appendices and a glossary of terms close this work. (author) 58 refs.

  17. Evaluation of Medium Spatial Resolution BRDF-Adjustment Techniques Using Multi-Angular SPOT4 (Take5) Acquisitions

    OpenAIRE

    Claverie, Martin; Vermote, Eric; Franch, Belen; He, Tao; Hagolle, Olivier; Kadiri, Mohamed; Masek, Jeff

    2015-01-01

    High-resolution sensor Surface Reflectance (SR) data are affected by surface anisotropy but are difficult to adjust because of the low temporal frequency of the acquisitions and the low angular sampling. This paper evaluates five high spatial resolution Bidirectional Reflectance Distribution Function (BRDF) adjustment techniques. The evaluation is based on the noise level of the SR time series (TS) corrected to a normalized geometry (nadir view, 45° sun zenith angle) extracted from the multi-angular SPOT4 (Take5) acquisitions.

  18. Combined Acquisition Technique (CAT) for Neuroimaging of Multiple Sclerosis at Low Specific Absorption Rates (SAR)

    Science.gov (United States)

    Biller, Armin; Choli, Morwan; Blaimer, Martin; Breuer, Felix A.; Jakob, Peter M.; Bartsch, Andreas J.

    2014-01-01

    Purpose: To compare a novel combined acquisition technique (CAT) of turbo-spin-echo (TSE) and echo-planar-imaging (EPI) with conventional TSE. CAT reduces the electromagnetic energy load transmitted for spin excitation. This radiofrequency (RF) burden is limited by the specific absorption rate (SAR) for patient safety. SAR limits restrict high-field MRI applications, in particular. Material and Methods: The study was approved by the local Medical Ethics Committee. Written informed consent was obtained from all participants. T2- and PD-weighted brain images of n = 40 Multiple Sclerosis (MS) patients were acquired by CAT and TSE at 3 Tesla. Lesions were recorded by two blinded, board-certificated neuroradiologists. Diagnostic equivalence of CAT and TSE to detect MS lesions was evaluated along with their SAR, sound pressure level (SPL) and sensations of acoustic noise, heating, vibration and peripheral nerve stimulation. Results: Every MS lesion revealed on TSE was detected by CAT according to both raters (Cohen's kappa of within-rater/across-CAT/TSE lesion detection κCAT = 1.00, at an inter-rater lesion detection agreement of κLES = 0.82). CAT reduced the SAR burden significantly compared with TSE (p < 0.001). Mean SAR differences between TSE and CAT were 29.0 (±5.7)% for the T2-contrast and 32.7 (±21.9)% for the PD-contrast (expressed as percentages of the effective SAR limit of 3.2 W/kg for head examinations). Average SPL of CAT was no louder than during TSE. Sensations of CAT- vs. TSE-induced heating, noise and scanning vibrations did not differ. Conclusion: T2-/PD-CAT is diagnostically equivalent to TSE for MS lesion detection yet substantially reduces the RF exposure. Such SAR reduction facilitates high-field MRI applications at 3 Tesla or above and corresponding protocol standardizations, but CAT can also be used to scan faster, at higher resolution or with more slices. According to our data, CAT is no more uncomfortable than TSE scanning. PMID:24608106

  19. Combined acquisition technique (CAT) for neuroimaging of multiple sclerosis at low specific absorption rates (SAR).

    Directory of Open Access Journals (Sweden)

    Armin Biller

    Full Text Available PURPOSE: To compare a novel combined acquisition technique (CAT) of turbo-spin-echo (TSE) and echo-planar-imaging (EPI) with conventional TSE. CAT reduces the electromagnetic energy load transmitted for spin excitation. This radiofrequency (RF) burden is limited by the specific absorption rate (SAR) for patient safety. SAR limits restrict high-field MRI applications, in particular. MATERIAL AND METHODS: The study was approved by the local Medical Ethics Committee. Written informed consent was obtained from all participants. T2- and PD-weighted brain images of n = 40 Multiple Sclerosis (MS) patients were acquired by CAT and TSE at 3 Tesla. Lesions were recorded by two blinded, board-certified neuroradiologists. Diagnostic equivalence of CAT and TSE to detect MS lesions was evaluated along with their SAR, sound pressure level (SPL) and sensations of acoustic noise, heating, vibration and peripheral nerve stimulation. RESULTS: Every MS lesion revealed on TSE was detected by CAT according to both raters (Cohen's kappa of within-rater/across-CAT/TSE lesion detection κCAT = 1.00, at an inter-rater lesion detection agreement of κLES = 0.82). CAT reduced the SAR burden significantly compared to TSE (p<0.001). Mean SAR differences between TSE and CAT were 29.0 (± 5.7)% for the T2-contrast and 32.7 (± 21.9)% for the PD-contrast (expressed as percentages of the effective SAR limit of 3.2 W/kg for head examinations). Average SPL of CAT was no louder than during TSE. Sensations of CAT- vs. TSE-induced heating, noise and scanning vibrations did not differ. CONCLUSION: T2-/PD-CAT is diagnostically equivalent to TSE for MS lesion detection yet substantially reduces the RF exposure. Such SAR reduction facilitates high-field MRI applications at 3 Tesla or above and corresponding protocol standardizations but CAT can also be used to scan faster, at higher resolution or with more slices. According to our data, CAT is no more uncomfortable than TSE scanning.
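    The inter-rater agreement quoted above is Cohen's kappa, which corrects the raw agreement rate for the agreement expected by chance from each rater's marginal frequencies. A minimal sketch of the computation, using made-up per-lesion detection calls rather than the study's data:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    n = len(rater_a)
    categories = sorted(set(rater_a) | set(rater_b))
    # observed agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from the marginal frequencies of each rater
    pe = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return (po - pe) / (1 - pe)

# hypothetical lesion calls (1 = detected, 0 = missed)
print(cohens_kappa([1, 1, 1, 0, 1, 1], [1, 1, 0, 0, 1, 1]))  # ~0.57
```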

  20. Dynamic motion analysis of fetuses with central nervous system disorders by cine magnetic resonance imaging using fast imaging employing steady-state acquisition and parallel imaging: a preliminary result.

    Science.gov (United States)

    Guo, Wan-Yuo; Ono, Shigeki; Oi, Shizuo; Shen, Shu-Huei; Wong, Tai-Tong; Chung, Hsiao-Wen; Hung, Jeng-Hsiu

    2006-08-01

    The authors present a novel cine magnetic resonance (MR) imaging technique: two-dimensional (2D) fast imaging employing steady-state acquisition (FIESTA) with parallel imaging. It achieves a temporal resolution of less than half a second as well as high-spatial-resolution cine imaging free of motion artifacts for evaluating the dynamic motion of fetuses in utero. The information obtained is used to predict postnatal outcome. Twenty-five fetuses with anomalies were studied. Ultrasonography demonstrated severe abnormalities in five of the fetuses; the other 20 fetuses constituted a control group. The cine fetal MR imaging demonstrated fetal head, neck, trunk, extremity, and finger motions as well as swallowing motions. Imaging findings were evaluated and compared between fetuses with major central nervous system (CNS) anomalies in five cases and minor CNS, non-CNS, or no anomalies in 20 cases. Normal motility was observed in the latter group. For fetuses in the former group, those with abnormal motility failed to survive after delivery, whereas those with normal motility survived with function preserved. The power deposition of radiofrequency, expressed as the specific absorption rate (SAR), was calculated. The SAR of FIESTA was approximately 13 times lower than that of conventional MR imaging of fetuses obtained using single-shot fast spin-echo sequences. The following conclusions are drawn: 1) fetal motion is no longer a limitation for prenatal imaging after the implementation of parallel imaging with 2D FIESTA; 2) cine MR imaging illustrates fetal motion in utero with high clinical reliability; 3) for cases involving major CNS anomalies, cine MR imaging provides information on extremity motility in fetuses and serves as a prognostic indicator of postnatal outcome; and 4) the cine MR used to observe fetal activity is technically 2D and conceptually three-dimensional. It provides four-dimensional information for making proper and timely obstetrical and/or postnatal management ...

  1. Magnetic resonance imaging acquisition techniques intended to decrease movement artefact in paediatric brain imaging: a systematic review

    International Nuclear Information System (INIS)

    Woodfield, Julie; Kealey, Susan

    2015-01-01

    Attaining paediatric brain images of diagnostic quality can be difficult because of young age or neurological impairment. The use of anaesthesia to reduce movement in MRI increases clinical risk and cost, while CT, though faster, exposes children to potentially harmful ionising radiation. MRI acquisition techniques that aim to decrease movement artefact may allow diagnostic paediatric brain imaging without sedation or anaesthesia. We conducted a systematic review to establish the evidence base for ultra-fast sequences and sequences using oversampling of k-space in paediatric brain MR imaging. Techniques were assessed for imaging time, occurrence of movement artefact, the need for sedation, and either image quality or diagnostic accuracy. We identified 24 relevant studies. We found that ultra-fast techniques had shorter imaging acquisition times compared to standard MRI. Techniques using oversampling of k-space required equal or longer imaging times than standard MRI. Both ultra-fast sequences and those using oversampling of k-space reduced movement artefact compared with standard MRI in unsedated children. Assessment of overall diagnostic accuracy was difficult because of the heterogeneous patient populations, imaging indications, and reporting methods of the studies. In children with shunt-treated hydrocephalus there is evidence that ultra-fast MRI is sufficient for the assessment of ventricular size. (orig.)

  2. Magnetic resonance imaging acquisition techniques intended to decrease movement artefact in paediatric brain imaging: a systematic review

    Energy Technology Data Exchange (ETDEWEB)

    Woodfield, Julie [University of Edinburgh, Child Life and Health, Edinburgh (United Kingdom); Kealey, Susan [Western General Hospital, Department of Neuroradiology, Edinburgh (United Kingdom)

    2015-08-15

    Attaining paediatric brain images of diagnostic quality can be difficult because of young age or neurological impairment. The use of anaesthesia to reduce movement in MRI increases clinical risk and cost, while CT, though faster, exposes children to potentially harmful ionising radiation. MRI acquisition techniques that aim to decrease movement artefact may allow diagnostic paediatric brain imaging without sedation or anaesthesia. We conducted a systematic review to establish the evidence base for ultra-fast sequences and sequences using oversampling of k-space in paediatric brain MR imaging. Techniques were assessed for imaging time, occurrence of movement artefact, the need for sedation, and either image quality or diagnostic accuracy. We identified 24 relevant studies. We found that ultra-fast techniques had shorter imaging acquisition times compared to standard MRI. Techniques using oversampling of k-space required equal or longer imaging times than standard MRI. Both ultra-fast sequences and those using oversampling of k-space reduced movement artefact compared with standard MRI in unsedated children. Assessment of overall diagnostic accuracy was difficult because of the heterogeneous patient populations, imaging indications, and reporting methods of the studies. In children with shunt-treated hydrocephalus there is evidence that ultra-fast MRI is sufficient for the assessment of ventricular size. (orig.)

  3. Method for signal conditioning and data acquisition system, based on variable amplification and feedback technique

    Energy Technology Data Exchange (ETDEWEB)

    Conti, Livio, E-mail: livio.conti@uninettunouniversity.net [Facoltà di Ingegneria, Università Telematica Internazionale Uninettuno, Corso Vittorio Emanuele II 39, 00186 Rome (Italy); INFN Sezione Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133 Rome (Italy)]; Sgrigna, Vittorio [Dipartimento di Matematica e Fisica, Università Roma Tre, 84 Via della Vasca Navale, I-00146 Rome (Italy)]; Zilpimiani, David [National Institute of Geophysics, Georgian Academy of Sciences, 1 M. Alexidze St., 009 Tbilisi (Georgia)]; Assante, Dario [Facoltà di Ingegneria, Università Telematica Internazionale Uninettuno, Corso Vittorio Emanuele II 39, 00186 Rome (Italy); INFN Sezione Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133 Rome (Italy)]

    2014-08-21

    An original method of signal conditioning and adaptive amplification is proposed for data acquisition systems of analog signals, conceived to obtain a high-resolution spectrum of any input signal. The procedure is based on a feedback scheme for the signal amplification, with the aim of maximizing the dynamic range and resolution of the data acquisition system. The paper describes the signal conditioning, digitization, and data processing procedures applied to an a priori unknown signal in order to extract its amplitude and frequency content for applications in different environments: on the ground, in space, or in the laboratory. An electronic board implementing the conditioning module has also been constructed and is described. The main fields of application and the advantages of the method over existing approaches are also discussed.
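    The feedback idea, choosing the amplifier gain from the block just digitized so that the ADC's dynamic range stays maximally used without clipping, can be sketched in a few lines. The gain steps, target headroom, and simulated input below are illustrative assumptions, not values from the paper:

```python
import numpy as np

FULL_SCALE = 1.0                     # normalized ADC input range (assumed)
TARGET = 0.8                         # keep the peak at ~80% of full scale
GAINS = [1, 2, 5, 10, 20, 50, 100]   # hypothetical programmable-gain steps

def choose_gain(block, current_gain):
    """Pick the largest gain that keeps the estimated input peak below target."""
    peak_in = np.max(np.abs(block)) / current_gain  # estimate of the raw input peak
    usable = [g for g in GAINS if peak_in * g < TARGET * FULL_SCALE]
    return max(usable) if usable else min(GAINS)

rng = np.random.default_rng(0)
signal = 0.002 * rng.standard_normal(16 * 1024)     # weak analog input (simulated)
gain = GAINS[0]
for i in range(0, signal.size, 1024):
    # amplifier followed by the ADC's hard clipping
    block = np.clip(signal[i:i + 1024] * gain, -FULL_SCALE, FULL_SCALE)
    gain = choose_gain(block, gain)                  # feedback update
print("settled gain:", gain)
```

    If a block clips, the peak estimate saturates and the next update lowers the gain, so the loop self-corrects.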

  4. Method for signal conditioning and data acquisition system, based on variable amplification and feedback technique

    International Nuclear Information System (INIS)

    Conti, Livio; Sgrigna, Vittorio; Zilpimiani, David; Assante, Dario

    2014-01-01

    An original method of signal conditioning and adaptive amplification is proposed for data acquisition systems of analog signals, conceived to obtain a high-resolution spectrum of any input signal. The procedure is based on a feedback scheme for the signal amplification, with the aim of maximizing the dynamic range and resolution of the data acquisition system. The paper describes the signal conditioning, digitization, and data processing procedures applied to an a priori unknown signal in order to extract its amplitude and frequency content for applications in different environments: on the ground, in space, or in the laboratory. An electronic board implementing the conditioning module has also been constructed and is described. The main fields of application and the advantages of the method over existing approaches are also discussed.

  5. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.

  6. Evaluation of Medium Spatial Resolution BRDF-Adjustment Techniques Using Multi-Angular SPOT4 (Take5) Acquisitions

    Directory of Open Access Journals (Sweden)

    Martin Claverie

    2015-09-01

    Full Text Available High-resolution sensor Surface Reflectance (SR) data are affected by surface anisotropy but are difficult to adjust because of the low temporal frequency of the acquisitions and the low angular sampling. This paper evaluates five high spatial resolution Bidirectional Reflectance Distribution Function (BRDF) adjustment techniques. The evaluation is based on the noise level of the SR Time Series (TS) corrected to a normalized geometry (nadir view, 45° sun zenith angle) extracted from the multi-angular acquisitions of SPOT4 over three study areas (one in Arizona, two in France) during the five-month SPOT4 (Take5) experiment. Two uniform techniques (Cst, for Constant, and Av, for Average), relying on the Vermote–Justice–Bréon (VJB) BRDF method, assume no variation in space of the BRDF shape. Two methods (VI-dis, for NDVI-based disaggregation, and LC-dis, for Land-Cover based disaggregation) are based on disaggregation of the MODIS-derived BRDF VJB parameters using vegetation index and land cover, respectively. The last technique (LUM, for Look-Up Map) relies on the MCD43 MODIS BRDF products and a crop type data layer. The VI-dis technique produced the lowest level of noise, corresponding to the most effective adjustment: reduction from directional to normalized SR TS noise by 40% and 50% on average for the red and near-infrared bands, respectively. The uniform techniques displayed very good results, suggesting that a simple and uniform BRDF-shape assumption is good enough to adjust the BRDF in such a geometric configuration (the view zenith angle varies from nadir to 25°). The most complex techniques, relying on land cover (LC-dis and LUM), displayed contrasting results depending on the land cover.
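    Whatever the source of the BRDF parameters, the correction step itself is the same in all five techniques: the observed reflectance is multiplied by the ratio of the modeled BRDF at the reference geometry (nadir view, 45° sun zenith) to the modeled BRDF at the observation geometry. A minimal sketch with a generic linear kernel model and made-up parameter and kernel values (the VJB retrieval and the kernel formulas themselves are not reproduced here):

```python
def brdf(k, f_vol, f_geo):
    # linear kernel-driven BRDF model: R = k_iso + k_vol * F_vol + k_geo * F_geo
    k_iso, k_vol, k_geo = k
    return k_iso + k_vol * f_vol + k_geo * f_geo

def normalize_sr(sr_obs, k, f_obs, f_ref):
    """c-factor normalization: SR_ref = SR_obs * R(ref geometry) / R(obs geometry)."""
    return sr_obs * brdf(k, *f_ref) / brdf(k, *f_obs)

k = (0.25, 0.12, 0.05)    # hypothetical (iso, vol, geo) parameters for one pixel/band
f_obs = (0.02, -0.90)     # kernel values at the observation geometry (assumed)
f_ref = (-0.05, -1.10)    # kernel values at nadir view / 45 deg sun zenith (assumed)
print(normalize_sr(0.23, k, f_obs, f_ref))
```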

  7. A Report on The Data Acquisition and On-line Instrument Technique Development for FTL

    International Nuclear Information System (INIS)

    Sim, B. S.; Chi, D. Y.; Lee, C. Y.; Park, S. K.; Lee, J. M.; Ahn, S. H.; Kim, Y. K.

    2009-01-01

    Documents produced during the design, procurement, manufacturing, and commissioning stages for instruments such as the SPND, thermocouple, and LVDT, which are installed in the in-pile section (IPS) of the fuel test loop (FTL), are gathered together. The values measured by the instruments are stored in the database of the data acquisition system (DAS) and displayed through both the DAS and a remote monitoring system installed in the users' office. The commissioning status, as well as the problems and items to be improved that were revealed during the commissioning stage, are described. The report will be used for the development and operation of the instruments in the near future.

  8. The blood-pool technique of radionuclide ventriculography: Data acquisition and evaluation

    International Nuclear Information System (INIS)

    Mueller-Schauenburg, W.

    1986-01-01

    For gated heart studies, in-vitro labelling of erythrocytes is commonly used. Rest and exercise studies are acquired in the LAO view; complementary studies may use different views. Besides the most common direct frame-mode acquisition, there are the more flexible list mode and a hybrid mode. Concerning evaluation, the ejection fraction is the leading parameter of global ventricular analysis. In local analysis, a pixelwise evaluation generates functional images of phases and amplitudes (the Fourier approach developed by the Ulm group) or Noelep's trend images. Special attention has to be paid to the varying cycle length when a sine or cosine fit (Fourier) is used for curve smoothing or for phase and amplitude images. There are two opposed problems: if there are undetected QRS complexes, the end of the representative cycle will contain early phases of subsequent cycles, which must be cut off. In the case of genuinely varying cycle length, the last images of the representative cycle must be corrected for the acquisition time per frame. The total count curve may help to discriminate between both cases and supplies suitable correction factors in the latter case. (orig.) [de]
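    The pixelwise Fourier evaluation mentioned above fits the first harmonic of each pixel's time-activity curve over the representative cycle; the magnitude of the harmonic gives the amplitude image and its argument the phase image. A minimal numpy sketch (the array layout is an assumption for illustration):

```python
import numpy as np

def first_harmonic_maps(frames):
    """frames: (T, ny, nx) gated count images over one representative cycle.
    Returns the amplitude and phase images of the first Fourier harmonic."""
    T = frames.shape[0]
    h1 = np.fft.fft(frames, axis=0)[1]   # first harmonic, pixel by pixel
    return 2.0 / T * np.abs(h1), np.angle(h1)

# synthetic check: a cosine of amplitude 20 and phase 0.5 rad is recovered
T = 16
t = np.arange(T)[:, None, None]
frames = 100 + 20 * np.cos(2 * np.pi * t / T + 0.5) * np.ones((T, 4, 4))
amp, ph = first_harmonic_maps(frames)
```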

  9. Post-acquisition data mining techniques for LC-MS/MS-acquired data in drug metabolite identification.

    Science.gov (United States)

    Dhurjad, Pooja Sukhdev; Marothu, Vamsi Krishna; Rathod, Rajeshwari

    2017-08-01

    Metabolite identification is a crucial part of the drug discovery process. LC-MS/MS-based metabolite identification has gained widespread use, but the data acquired by the LC-MS/MS instrument is complex, and thus the interpretation of data becomes troublesome. Fortunately, advancements in data mining techniques have simplified the process of data interpretation with improved mass accuracy and provide a potentially selective, sensitive, accurate and comprehensive way for metabolite identification. In this review, we have discussed the targeted (extracted ion chromatogram, mass defect filter, product ion filter, neutral loss filter and isotope pattern filter) and untargeted (control sample comparison, background subtraction and metabolomic approaches) post-acquisition data mining techniques, which facilitate the drug metabolite identification. We have also discussed the importance of integrated data mining strategy.
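    As a concrete example of the targeted filters, a mass defect filter keeps only peaks whose fractional mass lies near that of the parent drug, since common biotransformations shift the nominal mass far more than the mass defect. A toy sketch with a hypothetical centroided peak list:

```python
def mass_defect_filter(peaks, parent_mz, window_mda=50.0):
    """Retain (m/z, intensity) peaks whose mass defect lies within
    +/- window_mda (milli-Da) of the parent drug's mass defect."""
    parent_defect = parent_mz - int(parent_mz)
    kept = []
    for mz, intensity in peaks:
        defect = mz - int(mz)
        if abs(defect - parent_defect) * 1000.0 <= window_mda:
            kept.append((mz, intensity))
    return kept

# hypothetical peak list: the 413.2664 and 327.0034 peaks fall outside the window
peaks = [(285.1372, 9.1e4), (301.1321, 4.2e4), (413.2664, 1.1e4), (327.0034, 8.0e3)]
print(mass_defect_filter(peaks, parent_mz=285.1372))
```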

  10. Toward Bulk Synchronous Parallel-Based Machine Learning Techniques for Anomaly Detection in High-Speed Big Data Networks

    Directory of Open Access Journals (Sweden)

    Kamran Siddique

    2017-09-01

    Full Text Available Anomaly detection systems, also known as intrusion detection systems (IDSs), continuously monitor network traffic aiming to identify malicious actions. Extensive research has been conducted to build efficient IDSs emphasizing two essential characteristics. The first is concerned with finding optimal feature selection, while the other deals with employing robust classification schemes. However, the advent of big data concepts in the anomaly detection domain and the appearance of sophisticated network attacks in the modern era require some fundamental methodological revisions to develop IDSs. Therefore, we first identify two more significant characteristics in addition to the ones mentioned above. These refer to the need for employing specialized big data processing frameworks and utilizing appropriate datasets for validating a system's performance, which is largely overlooked in existing studies. Afterwards, we set out to develop an anomaly detection system that comprehensively follows these four identified characteristics, i.e., the proposed system (i) performs feature ranking and selection using information gain and automated branch-and-bound algorithms, respectively; (ii) employs logistic regression and extreme gradient boosting techniques for classification; (iii) introduces bulk synchronous parallel processing to cater to the computational requirements of high-speed big data networks; and (iv) uses the Information Security Centre of Excellence of the University of New Brunswick real-time contemporary dataset for performance evaluation. We present experimental results that verify the efficacy of the proposed system.
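    Steps (i) and (ii) can be prototyped on a single node with off-the-shelf tools; the sketch below substitutes scikit-learn's mutual information estimator for the information-gain ranking and uses logistic regression only, on synthetic stand-in data (the branch-and-bound selection, the XGBoost classifier, and the BSP distribution layer are omitted):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# synthetic stand-in for a flow-level feature matrix and attack labels
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 40))
y = (X[:, 3] + 0.5 * X[:, 17] + 0.1 * rng.standard_normal(5000) > 0).astype(int)

# (i) rank features by mutual information and keep the top ten
mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:10]

# (ii) classify with logistic regression on the selected features
Xtr, Xte, ytr, yte = train_test_split(X[:, top], y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```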

  11. Histogram-based normalization technique on human brain magnetic resonance images from different acquisitions.

    Science.gov (United States)

    Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng

    2015-07-28

    Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used for scanning different subjects or the same subject at different times, which may result in large intensity variations. This intensity variation will greatly undermine the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we proposed a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image will also lie between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. It was then possible to validate that our histogram normalization framework achieves better results in all of the experiments. It is also demonstrated that the brain template with normalization preprocessing is of higher quality than the template with no normalization processing. We have proposed ...
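    The two-step scheme (intensity scaling of the reference, then CDF-based matching of the input histogram) fits in a few lines of numpy. A sketch under the stated assumptions, with lir/hir standing in for the LIR and HIR values:

```python
import numpy as np

def match_histogram(low_q, ref, lir=0.0, hir=255.0):
    """Step 1 (IS): rescale the reference intensities into [lir, hir].
    Step 2 (HN): stretch the input histogram to match the reference's."""
    ref = (ref - ref.min()) / (ref.max() - ref.min()) * (hir - lir) + lir
    src, src_cnt = np.unique(low_q.ravel(), return_counts=True)
    tgt, tgt_cnt = np.unique(ref.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_cnt) / low_q.size
    tgt_cdf = np.cumsum(tgt_cnt) / ref.size
    mapped = np.interp(src_cdf, tgt_cdf, tgt)   # invert the target CDF
    return np.interp(low_q.ravel(), src, mapped).reshape(low_q.shape)
```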

  12. Parallel-scanning tomosynthesis using a slot scanning technique: Fixed-focus reconstruction and the resulting image quality

    International Nuclear Information System (INIS)

    Shibata, Koichi; Notohara, Daisuke; Sakai, Takihito

    2014-01-01

    Purpose: Parallel-scanning tomosynthesis (PS-TS) is a novel technique that fuses the slot scanning technique and the conventional tomosynthesis (TS) technique. This approach allows one to obtain long-view tomosynthesis images in addition to normally sized tomosynthesis images, even when using a system that has no linear tomographic scanning function. The reconstruction technique and an evaluation of the resulting image quality for PS-TS are described in this paper. Methods: The PS-TS image-reconstruction technique consists of several steps: (1) the projection images are divided into strips, (2) the strips are stitched together to construct images corresponding to the reconstruction plane, (3) the stitched images are filtered, and (4) the filtered stitched images are back-projected. In the case of PS-TS using the fixed-focus reconstruction method (PS-TS-F), one set of stitched images is used for the reconstruction planes at all heights, thus avoiding the necessity of repeating steps (1)–(3). A physical evaluation of the image quality of PS-TS-F compared with that of the conventional linear TS was performed using a R/F table (Sonialvision safire, Shimadzu Corp., Kyoto, Japan). The tomographic plane with the best theoretical spatial resolution (the in-focus plane, IFP) was set at a height of 100 mm from the table top by adjusting the reconstruction program. First, the spatial frequency response was evaluated at heights of −100, −50, 0, 50, 100, and 150 mm from the IFP using the edge of a 0.3-mm-thick copper plate. Second, the spatial resolution at each height was visually evaluated using an x-ray test pattern (Model No. 38, PTW Freiburg, Germany). Third, the slice sensitivity at each height was evaluated via the wire method using a 0.1-mm-diameter tungsten wire. Phantom studies using a knee phantom and a whole-body phantom were also performed. Results: The spatial frequency response of PS-TS-F yielded the best results at the IFP and degraded slightly as the distance from the IFP increased.

  13. Parallel-scanning tomosynthesis using a slot scanning technique: fixed-focus reconstruction and the resulting image quality.

    Science.gov (United States)

    Shibata, Koichi; Notohara, Daisuke; Sakai, Takihito

    2014-11-01

    Parallel-scanning tomosynthesis (PS-TS) is a novel technique that fuses the slot scanning technique and the conventional tomosynthesis (TS) technique. This approach allows one to obtain long-view tomosynthesis images in addition to normally sized tomosynthesis images, even when using a system that has no linear tomographic scanning function. The reconstruction technique and an evaluation of the resulting image quality for PS-TS are described in this paper. The PS-TS image-reconstruction technique consists of several steps: (1) the projection images are divided into strips, (2) the strips are stitched together to construct images corresponding to the reconstruction plane, (3) the stitched images are filtered, and (4) the filtered stitched images are back-projected. In the case of PS-TS using the fixed-focus reconstruction method (PS-TS-F), one set of stitched images is used for the reconstruction planes at all heights, thus avoiding the necessity of repeating steps (1)-(3). A physical evaluation of the image quality of PS-TS-F compared with that of the conventional linear TS was performed using a R/F table (Sonialvision safire, Shimadzu Corp., Kyoto, Japan). The tomographic plane with the best theoretical spatial resolution (the in-focus plane, IFP) was set at a height of 100 mm from the table top by adjusting the reconstruction program. First, the spatial frequency response was evaluated at heights of -100, -50, 0, 50, 100, and 150 mm from the IFP using the edge of a 0.3-mm-thick copper plate. Second, the spatial resolution at each height was visually evaluated using an x-ray test pattern (Model No. 38, PTW Freiburg, Germany). Third, the slice sensitivity at each height was evaluated via the wire method using a 0.1-mm-diameter tungsten wire. Phantom studies using a knee phantom and a whole-body phantom were also performed. The spatial frequency response of PS-TS-F yielded the best results at the IFP and degraded slightly as the distance from the IFP increased.
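    Setting the slot-scanning specifics aside, the back-projection step (4) of such reconstructions can be illustrated with a plain shift-and-add scheme: each projection is shifted by an amount determined by the height of the reconstruction plane and the tube position, then averaged. A toy numpy sketch, not the PS-TS-F pipeline (the strip division, stitching, and filtering steps are omitted, and np.roll wraps at the image edges):

```python
import numpy as np

def shift_and_add(projections, shifts_px):
    """Minimal tomosynthesis back-projection for one reconstruction plane.
    projections: (N, ny, nx) array; shifts_px: per-projection lateral shift
    in pixels, proportional to plane height and tube offset."""
    recon = np.zeros(projections[0].shape, dtype=float)
    for proj, s in zip(projections, shifts_px):
        recon += np.roll(proj, int(round(s)), axis=1)
    return recon / len(projections)
```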

  14. Parallel-scanning tomosynthesis using a slot scanning technique: Fixed-focus reconstruction and the resulting image quality

    Energy Technology Data Exchange (ETDEWEB)

    Shibata, Koichi, E-mail: shibatak@suzuka-u.ac.jp [Department of Radiological Technology, Faculty of Health Science, Suzuka University of Medical Science 1001-1, Kishioka-cho, Suzuka 510-0293 (Japan); Notohara, Daisuke; Sakai, Takihito [R and D Department, Medical Systems Division, Shimadzu Corporation 1, Nishinokyo-Kuwabara-cho, Nakagyo-ku, Kyoto 604-8511 (Japan)

    2014-11-01

    Purpose: Parallel-scanning tomosynthesis (PS-TS) is a novel technique that fuses the slot scanning technique and the conventional tomosynthesis (TS) technique. This approach allows one to obtain long-view tomosynthesis images in addition to normally sized tomosynthesis images, even when using a system that has no linear tomographic scanning function. The reconstruction technique and an evaluation of the resulting image quality for PS-TS are described in this paper. Methods: The PS-TS image-reconstruction technique consists of several steps: (1) the projection images are divided into strips, (2) the strips are stitched together to construct images corresponding to the reconstruction plane, (3) the stitched images are filtered, and (4) the filtered stitched images are back-projected. In the case of PS-TS using the fixed-focus reconstruction method (PS-TS-F), one set of stitched images is used for the reconstruction planes at all heights, thus avoiding the necessity of repeating steps (1)–(3). A physical evaluation of the image quality of PS-TS-F compared with that of the conventional linear TS was performed using a R/F table (Sonialvision safire, Shimadzu Corp., Kyoto, Japan). The tomographic plane with the best theoretical spatial resolution (the in-focus plane, IFP) was set at a height of 100 mm from the table top by adjusting the reconstruction program. First, the spatial frequency response was evaluated at heights of −100, −50, 0, 50, 100, and 150 mm from the IFP using the edge of a 0.3-mm-thick copper plate. Second, the spatial resolution at each height was visually evaluated using an x-ray test pattern (Model No. 38, PTW Freiburg, Germany). Third, the slice sensitivity at each height was evaluated via the wire method using a 0.1-mm-diameter tungsten wire. Phantom studies using a knee phantom and a whole-body phantom were also performed. Results: The spatial frequency response of PS-TS-F yielded the best results at the IFP and degraded slightly as the distance from the IFP increased.

  15. Parallel path nebulizer: Critical parameters for use with microseparation techniques combined with inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Yanes, Enrique G.; Miller-Ihli, Nancy J.

    2005-01-01

    Four different, low flow parallel path Mira Mist CE nebulizers were evaluated and compared in support of an ongoing project related to the use of microseparation techniques interfaced to inductively coupled plasma mass spectrometry for the quantification of cobalamin species (Vitamin B12). For the characterization of the different Mira Mist CE nebulizers, the nebulizer orientation as well as the effect of methanol on analytical response was the focus of the study. The position of the gas outlet on the nebulizer which consistently provided the maximum signal was when it was rotated to the 11 o'clock position when the nebulizer is viewed end-on. With this orientation the increased signal may be explained by the fact that the cone angle of the aerosol is such that the largest percentage of the aerosol is directed to the center of the spray chamber and consequently into the plasma. To characterize the nebulizer's performance, the signal response of a multielement solution containing elements with a variety of ionization potentials was used. The selection of elements with varying ionization energies and degrees of ionization was essential for a better understanding of observed increases in signal enhancement when methanol was used. Two different phenomena contribute to signal enhancement when using methanol: the first is improved transport efficiency and the second is the 'carbon enhancement effect'. The net result was that as much as a 30-fold increase in signal was observed for As and Mg when using a make-up solution of 20% methanol at a 15 μL/min flow rate which is equivalent to a net volume of 3 μL/min of pure methanol

  16. High-Speed Data Acquisition and Digital Signal Processing System for PET Imaging Techniques Applied to Mammography

    Science.gov (United States)

    Martinez, J. D.; Benlloch, J. M.; Cerda, J.; Lerche, Ch. W.; Pavon, N.; Sebastia, A.

    2004-06-01

    This paper is framed within the Positron Emission Mammography (PEM) project, whose aim is to develop an innovative gamma-ray sensor for early breast cancer diagnosis. Currently, breast cancer is detected using low-energy X-ray screening. However, functional imaging techniques such as PET/FDG could be employed to detect breast cancer and track disease changes with greater sensitivity. Furthermore, a small and less expensive PET camera can be utilized, minimizing the main problems of whole-body PET. To accomplish these objectives, we are developing a new gamma-ray sensor based on a newly released photodetector. However, a dedicated PEM detector requires an adequate data acquisition (DAQ) and processing system. The characterization of gamma events needs a free-running analog-to-digital converter (ADC) with sampling rates of more than 50 MS/s and must achieve event count rates up to 10 MHz. Moreover, comprehensive data processing must be carried out to obtain the event parameters necessary for performing the image reconstruction. A new-generation digital signal processor (DSP) has been used to comply with these requirements. This device enables us to manage the DAQ system at up to 80 MS/s and to execute intensive calculations on the detector signals. This paper describes our DAQ and processing architecture, whose main features are: very high-speed data conversion, multichannel synchronized acquisition with zero dead time, a digital triggering scheme, and high data throughput with extensive optimization of the signal processing algorithms.
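    A central element of such a DAQ chain is the digital triggering scheme: gamma pulses are detected in the free-running ADC stream as threshold crossings, with a holdoff window to avoid re-triggering on the same pulse. A minimal software sketch of a leading-edge trigger (illustrative only; the DSP firmware itself is not shown):

```python
import numpy as np

def leading_edge_trigger(samples, threshold, holdoff):
    """Indices where the waveform crosses threshold upward, ignoring
    re-triggers within a holdoff window (both in sample units)."""
    above = samples >= threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    events, last = [], -holdoff
    for idx in crossings:
        if idx - last >= holdoff:
            events.append(idx)
            last = idx
    return events
```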

  17. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  18. Earth Resources: A continuing bibliography with indexes, issue 2. [remote sensors and data acquisition techniques

    Science.gov (United States)

    1975-01-01

    Reports, articles, and other documents announced between April and June 1974 in Scientific and Technical Aerospace Reports (STAR) and International Aerospace Abstracts (IAA) are cited. Documents related to the identification and evaluation, by means of sensors in spacecraft and aircraft, of vegetation, minerals, and other natural resources, and the techniques and potentialities of surveying and keeping up-to-date inventories of such riches, are included, along with studies of such natural phenomena as earthquakes, volcanoes, ocean currents, and magnetic fields, and such cultural phenomena as cities, transportation networks, and irrigation systems. The components and use of remote sensing and geophysical instrumentation, their subsystems, observational procedures, signature analyses, and interpretive techniques for gathering data are described. All reports generated under NASA's Earth Resources Survey Program for the time period covered are included.

  19. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1998-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast and large scale nuclear data acquisition and control system was discussed in detail. The optimum composition mode of this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods and the software composition under the Windows 3.2 were also described. One, two and three dimensional spectra measured by this system were demonstrated

  20. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1997-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast and large scale nuclear data acquisition and control system was discussed in detail. The optimum composition mode of this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods and the software composition under the Windows 3.2 were also described. One, two and three dimensional spectra measured by this system were demonstrated

  1. Diagnostic accuracy of dynamic contrast-enhanced MR imaging of renal masses with rapid-acquisition spin-echo technique

    International Nuclear Information System (INIS)

    Eilenberg, S.S.; Lee, J.K.T.; Brown, J.J.; Heiken, J.P.; Mirowitz, S.A.

    1990-01-01

    This paper compares the diagnostic accuracy of Gd-DTPA-enhanced rapid-acquisition spin-echo (RASE) imaging with standard spin-echo techniques for detecting renal cysts and solid renal neoplasms. RASE imaging combines a short TR (275 msec)/short TE (10 msec), single-excitation pulse sequence with half-Fourier data sampling. Eighteen patients with CT evidence of renal masses were first evaluated with standard T1- and T2-weighted SE sequences. Pre- and serial postcontrast (Gd-DTPA, 0.1 mmol/kg) RASE sequences were then performed during suspended respiration. A final set of postcontrast images was obtained with the standard T1-weighted SE sequence. Each set of MR images was first reviewed separately (i.e., T1, T2, pre- and post-contrast RASE, etc.)

  2. Improved detection and mapping of deepwater hydrocarbon seeps: optimizing multibeam echosounder seafloor backscatter acquisition and processing techniques

    Science.gov (United States)

    Mitchell, Garrett A.; Orange, Daniel L.; Gharib, Jamshid J.; Kennedy, Paul

    2018-02-01

    Marine seep hunting surveys are a current focus of hydrocarbon exploration due to recent advances in offshore geophysical surveying, geochemical sampling, and analytical technologies. Hydrocarbon seeps are ephemeral, small, and discrete, and therefore difficult to sample on the deep seafloor. Multibeam echosounders are an efficient seafloor exploration tool for remotely locating and mapping seep features. Geophysical signatures from hydrocarbon seeps are acoustically evident in bathymetric, seafloor backscatter, and midwater backscatter datasets. Interpretation of these signatures in backscatter datasets is a fundamental component of commercial seep hunting campaigns. Degradation of backscatter datasets resulting from environmental, geometric, and system noise can interfere with the detection and delineation of seeps. We present a relative backscatter intensity normalization method and an oversampling acquisition technique that can improve the geological resolvability of hydrocarbon seeps. We use Green Canyon (GC) Block 600 in the Northern Gulf of Mexico as a seep calibration site for a Kongsberg EM302 30 kHz MBES prior to the start of the Gigante seep hunting program to analyze these techniques. At GC600, we evaluate the results of a backscatter intensity normalization, assess the effectiveness of 2X seafloor coverage in resolving seep-related features in backscatter data, and determine the off-nadir detection limits of bubble plumes using the EM302. Incorporating these techniques into seep hunting surveys can improve the detectability and sampling of seafloor seeps.

  3. Improved detection and mapping of deepwater hydrocarbon seeps: optimizing multibeam echosounder seafloor backscatter acquisition and processing techniques

    Science.gov (United States)

    Mitchell, Garrett A.; Orange, Daniel L.; Gharib, Jamshid J.; Kennedy, Paul

    2018-06-01

    Marine seep hunting surveys are a current focus of hydrocarbon exploration due to recent advances in offshore geophysical surveying, geochemical sampling, and analytical technologies. Hydrocarbon seeps are ephemeral, small, and discrete, and therefore difficult to sample on the deep seafloor. Multibeam echosounders are an efficient seafloor exploration tool for remotely locating and mapping seep features. Geophysical signatures from hydrocarbon seeps are acoustically evident in bathymetric, seafloor backscatter, and midwater backscatter datasets. Interpretation of these signatures in backscatter datasets is a fundamental component of commercial seep hunting campaigns. Degradation of backscatter datasets resulting from environmental, geometric, and system noise can interfere with the detection and delineation of seeps. We present a relative backscatter intensity normalization method and an oversampling acquisition technique that can improve the geological resolvability of hydrocarbon seeps. We use Green Canyon (GC) Block 600 in the Northern Gulf of Mexico as a seep calibration site for a Kongsberg EM302 30 kHz MBES prior to the start of the Gigante seep hunting program to analyze these techniques. At GC600, we evaluate the results of a backscatter intensity normalization, assess the effectiveness of 2X seafloor coverage in resolving seep-related features in backscatter data, and determine the off-nadir detection limits of bubble plumes using the EM302. Incorporating these techniques into seep hunting surveys can improve the detectability and sampling of seafloor seeps.
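    The relative normalization idea, removing the survey-average angular response so that intensities become comparable across the swath, can be sketched briefly; the per-degree binning and the 45° reference angle below are illustrative choices, not the authors' exact procedure:

```python
import numpy as np

def normalize_backscatter(bs_db, incidence_deg, bin_width=1.0, ref_angle=45.0):
    """Relative angular normalization: subtract the survey-average response
    per incidence-angle bin, then re-reference to the level at ref_angle."""
    bins = np.round(incidence_deg / bin_width) * bin_width
    means = {b: bs_db[bins == b].mean() for b in np.unique(bins)}
    ref = means.get(np.round(ref_angle / bin_width) * bin_width, float(np.mean(bs_db)))
    out = np.empty_like(bs_db)
    for b, m in means.items():
        out[bins == b] = bs_db[bins == b] - m + ref
    return out
```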

  4. Energy-dependent imaging in digital radiography: a review on acquisition, processing and display technique

    International Nuclear Information System (INIS)

    Coppini, G.; Maltinti, G.; Valli, G.; Baroni, M.; Buchignan, M.; Valli, G.

    1986-01-01

    The capabilities of energy-dependent imaging in digital radiography are analyzed, paying particular attention to digital video systems. The main techniques developed in recent years for selective energy imaging are reviewed following a unified approach. The advantages and limits of energy methods are discussed through a comparative analysis of computer-simulated data and experimental results obtained with standard x-ray equipment coupled to a digital video unit. Geometric phantoms are used as test objects, and images of a chest phantom are also produced. Since signal-to-noise ratio degradation is one of the major problems when dealing with selective imaging, a particular effort is made to investigate noise effects. In this perspective, an original colour-encoded display of energy sequences is presented. By mapping the various energy measurements onto different colour bands (typically those of an RGB TV monitor), increased image conspicuity is obtained without significant noise degradation: this is ensured by the energy dependence of the attenuation coefficients and by the integrating characteristics of the display device.

  5. Detection and compensation of organ/lesion motion using 4D-PET/CT respiratory gated acquisition techniques

    International Nuclear Information System (INIS)

    Bettinardi, Valentino; Picchio, Maria; Di Muzio, Nadia; Gianolli, Luigi; Gilardi, Maria Carla; Messa, Cristina

    2010-01-01

    Purpose: To describe the degradation effects produced by respiratory organ and lesion motion on PET/CT images and to define the role of respiratory-gated (RG) 4D-PET/CT techniques in compensating for such effects. Methods: Based on the literature and on our own experience, technical recommendations and clinical indications for the use of RG 4D PET/CT have been outlined. Results: RG 4D-PET/CT techniques require a state-of-the-art PET/CT scanner, a respiratory monitoring system and dedicated acquisition and processing protocols. Patient training is particularly important to obtain a regular breathing pattern. An adequate number of phases has to be selected to balance motion compensation and statistical noise. RG 4D PET/CT motion-free images may be clinically useful for tumour tissue characterization, monitoring patient treatment and target definition in radiation therapy planning. Conclusions: RG 4D PET/CT is a valuable tool to improve image quality and quantitative accuracy and to assess and measure organ and lesion motion for radiotherapy planning.
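    Phase-based gating of this kind amounts to sorting list-mode events by their position within the breathing cycle recorded by the respiratory monitoring system. A minimal binning sketch (the trigger-based cycle model and all names are illustrative assumptions, not a vendor toolkit):

```python
import numpy as np

def bin_by_respiratory_phase(event_times, trigger_times, n_phases=8):
    """Assign each list-mode event to a phase bin by its normalized
    position within the breathing cycle delimited by successive triggers."""
    event_times = np.asarray(event_times)
    trigger_times = np.asarray(trigger_times)
    cycle = np.searchsorted(trigger_times, event_times, side='right') - 1
    valid = (cycle >= 0) & (cycle < len(trigger_times) - 1)
    t0 = trigger_times[cycle[valid]]
    t1 = trigger_times[cycle[valid] + 1]
    frac = (event_times[valid] - t0) / (t1 - t0)
    return np.minimum((frac * n_phases).astype(int), n_phases - 1)
```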

  6. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
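    For a concrete sense of SENSE-type reconstruction: with acceleration R = 2, each pixel of the aliased coil images is the superposition of two true pixels half a field of view apart, and the coil sensitivities provide the linear system that separates them. A minimal numpy sketch under ideal conditions (known sensitivities, no regularization):

```python
import numpy as np

def sense_unfold_r2(aliased, sens):
    """SENSE unfolding for acceleration R = 2 along y.
    aliased: (nc, ny//2, nx) aliased coil images; sens: (nc, ny, nx)
    coil sensitivity maps. Each aliased pixel superimposes two true
    pixels half a field of view apart; the sensitivities separate them."""
    nc, ny2, nx = aliased.shape
    out = np.zeros((2 * ny2, nx), dtype=complex)
    for y in range(ny2):
        for x in range(nx):
            C = np.stack([sens[:, y, x], sens[:, y + ny2, x]], axis=1)  # (nc, 2)
            rho, *_ = np.linalg.lstsq(C, aliased[:, y, x], rcond=None)
            out[y, x], out[y + ny2, x] = rho
    return out
```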

  7. Suitability of helical multislice acquisition technique for routine unenhanced brain CT: an image quality study using a 16-row detector configuration

    Energy Technology Data Exchange (ETDEWEB)

    Hernalsteen, Danielle; Cosnard, Guy; Grandin, Cecile; Duprez, Thierry [Universite Catholique de Louvain, Cliniques Universitaires Saint-Luc, Department of Radiology and Medical Imaging, Brussels (Belgium); Robert, Annie [Public Health School, Universite Catholique de Louvain, Department of Epidemiologics and Medical Statistics, Brussels (Belgium); Vlassenbroek, Alain [CT Clinical Science, Philips Medical Systems, Cleveland, OH (United States)

    2007-04-15

    Subjective and objective image quality (IQ) criteria, radiation doses, and acquisition times were compared using incremental monoslice, incremental multislice, and helical multislice acquisition techniques for routine unenhanced brain computed tomography (CT). Twenty-four patients were examined by two techniques in the same imaging session using a 16-row CT system equipped with 0.75-mm-width detectors. Contiguous "native" 3-mm-thick slices were reconstructed for all acquisitions from four detectors for each slice (4 x 0.75 mm), with one channel available per detector. Two protocols were tailored to compare: (1) one-slice vs four-slice incremental images; (2) incremental vs helical four-slice images. Two trained observers independently scored 12 subjective items of IQ. Preference for the technique was assessed by a one-tailed t test and the interobserver variation by a two-tailed t test. The two observers gave very close IQ scores for the three techniques, without significant interobserver variations. Measured IQ parameters failed to reveal any difference between techniques, and a radiation dose reduction of approximately one half was obtained by using the full 16-row configuration. Acquisition times were cumulatively shortened by using the multislice and helical modalities. (orig.)

  8. Suitability of helical multislice acquisition technique for routine unenhanced brain CT: an image quality study using a 16-row detector configuration

    International Nuclear Information System (INIS)

    Hernalsteen, Danielle; Cosnard, Guy; Grandin, Cecile; Duprez, Thierry; Robert, Annie; Vlassenbroek, Alain

    2007-01-01

    Subjective and objective image quality (IQ) criteria, radiation doses, and acquisition times were compared using incremental monoslice, incremental multislice, and helical multislice acquisition techniques for routine unenhanced brain computed tomography (CT). Twenty-four patients were examined by two techniques in the same imaging session using a 16-row CT system equipped with 0.75-mm-width detectors. Contiguous "native" 3-mm-thick slices were reconstructed for all acquisitions from four detectors for each slice (4 x 0.75 mm), with one channel available per detector. Two protocols were tailored to compare: (1) one-slice vs four-slice incremental images; (2) incremental vs helical four-slice images. Two trained observers independently scored 12 subjective items of IQ. Preference for the technique was assessed by a one-tailed t test and the interobserver variation by a two-tailed t test. The two observers gave very close IQ scores for the three techniques, without significant interobserver variations. Measured IQ parameters failed to reveal any difference between techniques, and a radiation dose reduction of approximately one half was obtained by using the full 16-row configuration. Acquisition times were cumulatively shortened by using the multislice and helical modalities. (orig.)

  9. Simultaneous multislice echo planar imaging with blipped controlled aliasing in parallel imaging results in higher acceleration: a promising technique for accelerated diffusion tensor imaging of skeletal muscle

    OpenAIRE

    Filli, Lukas; Piccirelli, Marco; Kenkel, David; Guggenberger, Roman; Andreisek, Gustav; Beck, Thomas; Runge, Val M; Boss, Andreas

    2015-01-01

    PURPOSE The aim of this study was to investigate the feasibility of accelerated diffusion tensor imaging (DTI) of skeletal muscle using echo planar imaging (EPI) applying simultaneous multislice excitation with a blipped controlled aliasing in parallel imaging results in higher acceleration unaliasing technique. MATERIALS AND METHODS After federal ethics board approval, the lower leg muscles of 8 healthy volunteers (mean [SD] age, 29.4 [2.9] years) were examined in a clinical 3-T magnetic ...

  10. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    Science.gov (United States)

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  11. Management of Transjugular Intrahepatic Portosystemic Shunt (TIPS)-associated Refractory Hepatic Encephalopathy by Shunt Reduction Using the Parallel Technique: Outcomes of a Retrospective Case Series

    International Nuclear Information System (INIS)

    Cookson, Daniel T.; Zaman, Zubayr; Gordon-Smith, James; Ireland, Hamish M.; Hayes, Peter C.

    2011-01-01

    Purpose: To investigate the reproducibility and technical and clinical success of the parallel technique of transjugular intrahepatic portosystemic shunt (TIPS) reduction in the management of refractory hepatic encephalopathy (HE). Materials and Methods: A 10-mm-diameter self-expanding stent graft and a 5–6-mm-diameter balloon-expandable stent were placed in parallel inside the existing TIPS in 8 patients via a dual unilateral transjugular approach. Changes in portosystemic pressure gradient and HE grade were used as primary end points. Results: TIPS reduction was technically successful in all patients. Mean ± standard deviation portosystemic pressure gradient before and after shunt reduction was 4.9 ± 3.6 mmHg (range, 0–12 mmHg) and 10.5 ± 3.9 mmHg (range, 6–18 mmHg). Duration of follow-up was 137 ± 117.8 days (range, 18–326 days). Clinical improvement of HE occurred in 5 patients (62.5%) with resolution of HE in 4 patients (50%). Single episodes of recurrent gastrointestinal hemorrhage occurred in 3 patients (37.5%). These were self-limiting in 2 cases and successfully managed in 1 case by correction of coagulopathy and blood transfusion. Two of these patients (25%) died, one each of renal failure and hepatorenal failure. Conclusion: The parallel technique of TIPS reduction is reproducible and has a high technical success rate. A dual unilateral transjugular approach is advantageous when performing this procedure. The parallel technique allows repeat bidirectional TIPS adjustment and may be of significant clinical benefit in the management of refractory HE.

  12. Examination of SUV of regional activity concentration for simultaneous emission/transmission acquisition using the mask technique

    International Nuclear Information System (INIS)

    Abe, Shinji; Nishino, Masanari; Yamashita, Masato; Yamaguchi, Hiroshi

    2003-01-01

    To achieve quantitative accuracy in simultaneous emission/transmission (SET) acquisition using the mask technique, we determined the correction factor that derives the true transmission data from the measured transmission and emission data. We then evaluated the standardized uptake value (SUV) of the regional activity concentration derived from the SET scans and from conventional scans, respectively. First, to determine the attenuation factor for the transmission source when photons from the cylindrical phantom filled with 18F solution reached the emission memory, SET scans were performed with a dummy transmission source and under the blank status of the transmission source. Second, to evaluate the SUV, we used a hollow-sphere phantom filled with 18F solution whose activity concentrations were approximately 3 and 5 times that of the background. We then performed conventional and SET scans of the phantom from the higher to the lower solution concentration. All of the data were reconstructed with decay correction, and the SUV of each sphere was derived. The results demonstrated that, when the conventional factor was used, the SUV was underestimated with increasing activity concentration of the solution. However, when a new factor that took into account the attenuation of the transmission source was used, there was no significant difference in the SUV. We estimated that the SUV derived from the SET scans was accurate to within 3% for the large spheres and within 16% for the small spheres. (author)
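    For reference, the body-weight SUV used in such comparisons divides the measured activity concentration by the injected dose per gram of body weight, with the dose decay-corrected to scan time. A small helper (18F half-life assumed; function and variable names are illustrative):

```python
import math

def suv_bw(conc_bq_ml, injected_dose_bq, weight_kg, minutes_post_injection,
           half_life_min=109.77):  # 18F physical half-life in minutes
    """Body-weight SUV: activity concentration divided by the decay-corrected
    injected dose per gram of body weight."""
    decayed = injected_dose_bq * math.exp(
        -math.log(2) * minutes_post_injection / half_life_min)
    return conc_bq_ml / (decayed / (weight_kg * 1000.0))

# e.g. 5 kBq/mL in a sphere, 100 MBq injected, 60 kg, scanned 60 min p.i.
print(suv_bw(5e3, 100e6, 60.0, 60.0))  # ~4.4
```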

  13. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section (SENSE, SMASH, g-SMASH and GRAPPA), selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed, and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
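    The g-factor limit mentioned above has a compact form: for acceleration factor R and coil-geometry factor g (which depends on the coil arrangement and undersampling pattern), the reconstructed SNR falls as

```latex
\mathrm{SNR}_{\mathrm{PI}} \;=\; \frac{\mathrm{SNR}_{\mathrm{full}}}{g\,\sqrt{R}}, \qquad g \ge 1,
```

    so a well-designed coil array (g close to 1) is what keeps the acquisition speed-up affordable in SNR terms.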

  14. Improving parallel imaging by jointly reconstructing multi-contrast data.

    Science.gov (United States)

    Bilgic, Berkin; Kim, Tae Hyung; Liao, Congyu; Manhard, Mary Kate; Wald, Lawrence L; Haldar, Justin P; Setsompop, Kawin

    2018-08-01

    To develop parallel imaging techniques that simultaneously exploit coil sensitivity encoding, image phase prior information, similarities across multiple images, and complementary k-space sampling for highly accelerated data acquisition. We introduce joint virtual coil (JVC)-generalized autocalibrating partially parallel acquisitions (GRAPPA) to jointly reconstruct data acquired with different contrast preparations, and show its application in 2D, 3D, and simultaneous multi-slice (SMS) acquisitions. We extend the joint parallel imaging concept to exploit limited support and smooth phase constraints through Joint (J-) LORAKS formulation. J-LORAKS allows joint parallel imaging from limited autocalibration signal region, as well as permitting partial Fourier sampling and calibrationless reconstruction. We demonstrate highly accelerated 2D balanced steady-state free precession with phase cycling, SMS multi-echo spin echo, 3D multi-echo magnetization-prepared rapid gradient echo, and multi-echo gradient recalled echo acquisitions in vivo. Compared to conventional GRAPPA, proposed joint acquisition/reconstruction techniques provide more than 2-fold reduction in reconstruction error. JVC-GRAPPA takes advantage of additional spatial encoding from phase information and image similarity, and employs different sampling patterns across acquisitions. J-LORAKS achieves a more parsimonious low-rank representation of local k-space by considering multiple images as additional coils. Both approaches provide dramatic improvement in artifact and noise mitigation over conventional single-contrast parallel imaging reconstruction. Magn Reson Med 80:619-632, 2018. © 2018 International Society for Magnetic Resonance in Medicine.

  15. Parallel SOR methods with a parabolic-diffusion acceleration technique for solving an unstructured-grid Poisson equation on 3D arbitrary geometries

    Science.gov (United States)

    Zapata, M. A. Uh; Van Bang, D. Pham; Nguyen, K. D.

    2016-05-01

    This paper presents a parallel algorithm for the finite-volume discretisation of the Poisson equation on three-dimensional arbitrary geometries. The proposed method is formulated by using a 2D horizontal block domain decomposition and interprocessor data communication techniques with the message passing interface. The horizontal unstructured-grid cells are reordered according to the neighbouring relations and decomposed into blocks using a load-balanced distribution to give all processors an equal number of elements. In this algorithm, two parallel successive over-relaxation methods are presented: a multi-colour ordering technique for unstructured grids based on distributed memory, and a block method using a reordering index following similar ideas to the partitioning for structured grids. In all cases, the parallel algorithms are combined with an accelerating iterative solver. This solver is based on a parabolic-diffusion equation introduced to obtain faster solutions of the linear systems arising from the discretisation. Numerical results are given to evaluate the performance of the methods, showing speedups better than linear.
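
    The multi-colour idea can be illustrated on a structured grid, where two colours suffice: all unknowns of one colour depend only on unknowns of the other colour, so each colour sweep can be updated in parallel. A minimal red-black SOR sketch for the 2D Poisson equation follows (the paper's unstructured, MPI-based solver is considerably more involved):

      import numpy as np

      def sor_redblack(f, h, omega=1.8, tol=1e-8, max_iter=10000):
          """Red-black SOR for -laplace(u) = f on a square grid with
          homogeneous Dirichlet boundaries. Within one colour, every update
          is independent, so each sweep is trivially parallelizable."""
          u = np.zeros_like(f)
          iy, ix = np.indices(f.shape)
          interior = (iy > 0) & (iy < f.shape[0] - 1) & (ix > 0) & (ix < f.shape[1] - 1)
          for _ in range(max_iter):
              u_old = u.copy()
              for colour in (0, 1):                       # red sweep, then black sweep
                  mask = interior & ((iy + ix) % 2 == colour)
                  gs = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                               + np.roll(u, 1, 1) + np.roll(u, -1, 1) + h * h * f)
                  u[mask] = (1 - omega) * u[mask] + omega * gs[mask]
              if np.max(np.abs(u - u_old)) < tol:
                  break
          return u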

  16. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore processors and leverage their power in your programs. Sharing hands-on case studies and real-world examples, the

  17. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2001-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  18. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2002-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  19. Modelling and Experimental Evaluation of a Static Balancing Technique for a New Horizontally Mounted 3-UPU Parallel Mechanism

    Directory of Open Access Journals (Sweden)

    Maryam Banitalebi Dehkordi

    2012-11-01

    This paper presents the modelling and experimental evaluation of the gravity compensation of a horizontal 3-UPU parallel mechanism. The conventional Newton-Euler method for static analysis and balancing of mechanisms works for serial robots; however, it can become computationally expensive when applied to the analysis of parallel manipulators. To overcome this difficulty, in this paper we propose an approach, based on a Lagrangian method, that is more efficient in terms of computation time. The derivation of the gravity compensation model is based on the analytical computation of the total potential energy of the system at each position of the end-effector. In order to satisfy the gravity compensation condition, the total potential energy of the system should remain constant for all of the manipulator's configurations. Analytical and mechanical gravity compensation is taken into account, and the set of conditions and the system of springs are defined. Finally, employing a virtual reality environment, some experiments are carried out and the reliability and feasibility of the proposed model are evaluated in the presence and absence of the elastic components.
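
    The balancing condition invoked here can be stated compactly: with gravity terms and (zero-free-length) spring terms, the total potential energy must be configuration-independent. In generic notation (the symbols below are ours, not the paper's):

      V(\mathbf{q}) \;=\; \sum_i m_i\, g\, z_i(\mathbf{q}) \;+\; \sum_j \tfrac{1}{2} k_j\, \ell_j^2(\mathbf{q}) \;=\; \text{const}
      \quad\Longleftrightarrow\quad
      \frac{\partial V}{\partial \mathbf{q}} = \mathbf{0} \ \ \text{for all admissible } \mathbf{q}.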

  20. Estimation of organ-absorbed radiation doses during 64-detector CT coronary angiography using different acquisition techniques and heart rates: a phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Matsubara, Kosuke; Koshida, Kichiro; Kawashima, Hiroko (Dept. of Quantum Medical Technology, Faculty of Health Sciences, Kanazawa Univ., Kanazawa (Japan)), email: matsuk@mhs.mp.kanazawa-u.ac.jp; Noto, Kimiya; Takata, Tadanori; Yamamoto, Tomoyuki (Dept. of Radiological Technology, Kanazawa Univ. Hospital, Kanazawa (Japan)); Shimono, Tetsunori (Dept. of Radiology, Hoshigaoka Koseinenkin Hospital, Hirakata (Japan)); Matsui, Osamu (Dept. of Radiology, Faculty of Medicine, Kanazawa Univ., Kanazawa (Japan))

    2011-07-15

    Background: Though appropriate image acquisition parameters allow an effective dose below 1 mSv for CT coronary angiography (CTCA) performed with the latest dual-source CT scanners, a single-source 64-detector CT procedure results in a significant radiation dose due to its technical limitations. Therefore, estimating the radiation doses absorbed by an organ during 64-detector CTCA is important. Purpose: To estimate the radiation doses absorbed by organs located in the chest region during 64-detector CTCA using different acquisition techniques and heart rates. Material and Methods: Absorbed doses for breast, heart, lung, red bone marrow, thymus, and skin were evaluated using an anthropomorphic phantom and radiophotoluminescence glass dosimeters (RPLDs). Electrocardiogram (ECG)-gated helical and ECG-triggered non-helical acquisitions were performed by applying a simulated heart rate of 60 beats per minute (bpm), and ECG-gated helical acquisitions using ECG modulation (ECGM) of the tube current were performed by applying simulated heart rates of 40, 60, and 90 bpm, after placing RPLDs at the anatomic location of each organ. The absorbed dose for each organ was calculated by multiplying the calibrated mean dose values of the RPLDs by the mass energy coefficient ratio. Results: For all acquisitions, the highest absorbed dose was observed for the heart. When the helical and non-helical acquisitions were performed by applying a simulated heart rate of 60 bpm, the absorbed doses for the heart were 215.5, 202.2, and 66.8 mGy for helical, helical with ECGM, and non-helical acquisitions, respectively. When the helical acquisitions using ECGM were performed by applying simulated heart rates of 40, 60, and 90 bpm, the absorbed doses for the heart were 178.6, 139.1, and 159.3 mGy, respectively. Conclusion: ECG-triggered non-helical acquisition is recommended to reduce the radiation dose. Also, controlling the patients' heart rate appropriately during ECG-gated helical acquisition with
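
    The conversion described in the last step of the Methods has, in its usual formulation, the following form (notation ours; the abstract does not give the equation):

      D_{\text{organ}} \;=\; \bar{D}_{\text{RPLD}} \times \frac{(\mu_{en}/\rho)_{\text{tissue}}}{(\mu_{en}/\rho)_{\text{glass}}}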

  1. SU-F-J-220: Micro-CT Based Quantification of Mouse Brain Vasculature: The Effects of Acquisition Technique and Contrast Material

    International Nuclear Information System (INIS)

    Tipton, C; Lamba, M; Qi, Z; LaSance, K; Tipton, C

    2016-01-01

    Purpose: Cognitive impairment from radiation therapy to the brain may be linked to the loss of total blood volume in the brain. To account for brain injury, it is crucial to develop an understanding of blood volume loss as a result of radiation therapy. This study investigates µCT based quantification of mouse brain vasculature, focusing on the effect of acquisition technique and contrast material. Methods: Four mice were scanned on a µCT scanner (Siemens Inveon). The reconstructed voxel size was (18 µm)³ and all protocols were Hounsfield Unit (HU) calibrated. The mice were injected with 40 mg of gold nanoparticles (MediLumine) or 100 µl of Exitron 12000 (Miltenyi Biotec). Two acquisition techniques were also compared. The single kVp technique scanned the mouse once using an x-ray beam of 80 kVp, and segmentation was completed based on a threshold of HU values. The dual kVp technique scanned the mouse twice, using 50 kVp and 80 kVp; here segmentation was based on the ratio of the HU values at the two kVps. After image reconstruction and segmentation, the brain blood volume was determined as a percentage of the total brain volume. Results: For the single kVp acquisition at 80 kVp, the brain blood volume averaged 3.5% for gold and 4.0% for Exitron 12000. Also at 80 kVp, the contrast-to-noise ratio was significantly better for images acquired with the gold nanoparticles (2.0) than for those acquired with the Exitron 12000 (1.4). The dual kVp acquisition shows improved separation of skull from vasculature, but increased image noise. Conclusion: In summary, the effects of acquisition technique and contrast material for quantification of mouse brain vasculature showed that gold nanoparticles produced more consistent segmentation of brain vasculature than Exitron 12000. Also, dual kVp acquisition may improve the accuracy of brain vasculature quantification, although the effect of noise amplification warrants further study.
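
    The two segmentation strategies can be sketched as follows; the threshold and ratio window are illustrative placeholders, not the study's values:

      import numpy as np

      def segment_vessels(hu80, hu50=None, hu_thresh=500.0, ratio_range=(1.2, 2.0)):
          """Toy contrast-enhanced vessel segmentation for HU-calibrated µCT volumes.
          Single-kVp mode thresholds HU at 80 kVp; dual-kVp mode thresholds the
          50/80 attenuation ratio, which in principle separates contrast-agent
          enhancement from bone. All cutoffs here are illustrative assumptions."""
          if hu50 is None:                              # single-kVp technique
              return hu80 > hu_thresh
          ratio = (hu50 + 1000.0) / (hu80 + 1000.0)     # shift HU to an attenuation-like scale
          lo, hi = ratio_range
          return (hu80 > 0) & (ratio > lo) & (ratio < hi)

      def blood_volume_percent(vessel_mask, brain_mask):
          """Blood volume as a percentage of total brain volume, given a brain mask."""
          return 100.0 * np.count_nonzero(vessel_mask & brain_mask) / np.count_nonzero(brain_mask)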

  2. SU-F-J-220: Micro-CT Based Quantification of Mouse Brain Vasculature: The Effects of Acquisition Technique and Contrast Material

    Energy Technology Data Exchange (ETDEWEB)

    Tipton, C; Lamba, M; Qi, Z; LaSance, K; Tipton, C [University of Cincinnati College of Medicine, Cincinnati, OH (United States)

    2016-06-15

    Purpose: Cognitive impairment from radiation therapy to the brain may be linked to the loss of total blood volume in the brain. To account for brain injury, it is crucial to develop an understanding of blood volume loss as a result of radiation therapy. This study investigates µCT based quantification of mouse brain vasculature, focusing on the effect of acquisition technique and contrast material. Methods: Four mice were scanned on a µCT scanner (Siemens Inveon). The reconstructed voxel size was (18 µm)³ and all protocols were Hounsfield Unit (HU) calibrated. The mice were injected with 40 mg of gold nanoparticles (MediLumine) or 100 µl of Exitron 12000 (Miltenyi Biotec). Two acquisition techniques were also compared. The single kVp technique scanned the mouse once using an x-ray beam of 80 kVp, and segmentation was completed based on a threshold of HU values. The dual kVp technique scanned the mouse twice, using 50 kVp and 80 kVp; here segmentation was based on the ratio of the HU values at the two kVps. After image reconstruction and segmentation, the brain blood volume was determined as a percentage of the total brain volume. Results: For the single kVp acquisition at 80 kVp, the brain blood volume averaged 3.5% for gold and 4.0% for Exitron 12000. Also at 80 kVp, the contrast-to-noise ratio was significantly better for images acquired with the gold nanoparticles (2.0) than for those acquired with the Exitron 12000 (1.4). The dual kVp acquisition shows improved separation of skull from vasculature, but increased image noise. Conclusion: In summary, the effects of acquisition technique and contrast material for quantification of mouse brain vasculature showed that gold nanoparticles produced more consistent segmentation of brain vasculature than Exitron 12000. Also, dual kVp acquisition may improve the accuracy of brain vasculature quantification, although the effect of noise amplification warrants further study.

  3. Double random phase spread spectrum spread space technique for secure parallel optical multiplexing with individual encryption key

    Science.gov (United States)

    Hennelly, B. M.; Javidi, B.; Sheridan, J. T.

    2005-09-01

    A number of methods have recently been proposed in the literature for the encryption of 2-D information using linear optical systems. In particular the double random phase encoding system has received widespread attention. This system uses two Random Phase Keys (RPKs) positioned in the input spatial domain and the spatial frequency domain, and if these random phases are described by statistically independent white noises then the encrypted image can be shown to be a white noise. Decryption only requires knowledge of the RPK in the frequency domain. The RPKs may be implemented using Spatial Light Modulators (SLMs). In this paper we propose and investigate the use of SLMs for secure optical multiplexing. We show that in this case it is possible to encrypt multiple images in parallel and multiplex them for transmission or storage. The signal energy is effectively spread in the spatial frequency domain. As expected, the number of images that can be multiplexed together and recovered without loss is proportional to the ratio of the input image and SLM resolutions. Many more images may be multiplexed with some loss in recovery. Furthermore, each individual encryption is more robust than traditional double random phase encoding, since decryption requires knowledge of both the RPK and a lowpass filter in order to despread the spectrum and decrypt the image. Numerical simulations are presented and discussed.
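
    The underlying double random phase encoding scheme is compact enough to state in code. This numpy sketch (image size and keys are arbitrary stand-ins) shows encryption with the two phase keys and decryption from the frequency-domain key alone:

      import numpy as np

      def drpe_encrypt(img, n1, n2):
          """Double random phase encoding: n1, n2 are white-noise arrays in [0,1)
          acting as phase keys in the input and Fourier domains respectively."""
          return np.fft.ifft2(np.fft.fft2(img * np.exp(2j * np.pi * n1))
                              * np.exp(2j * np.pi * n2))

      def drpe_decrypt(cipher, n2):
          """Decryption needs only the Fourier-domain key: for a real,
          non-negative image the input-domain phase vanishes under abs()."""
          return np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.exp(-2j * np.pi * n2)))

      rng = np.random.default_rng(0)
      img = rng.random((64, 64))                       # stand-in for a real image
      n1, n2 = rng.random((64, 64)), rng.random((64, 64))
      cipher = drpe_encrypt(img, n1, n2)               # statistically white field
      assert np.allclose(drpe_decrypt(cipher, n2), img)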

  4. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks.

    Science.gov (United States)

    Naveros, Francisco; Garrido, Jesus A; Carrillo, Richard R; Ros, Eduardo; Luque, Niceto R

    2017-01-01

    Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that span incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity, together with better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under
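
    As a flavour of the time-driven family, the sketch below integrates a leaky integrate-and-fire neuron with two fixed step sizes, refining the step near threshold, loosely in the spirit of the bi-fixed-step idea. All parameter values are illustrative, not the paper's:

      def lif_bifixed(I, t_end=0.2, dt_big=1e-3, dt_small=1e-4,
                      tau=0.02, v_rest=-0.07, v_th=-0.05, v_reset=-0.07, R=1e7):
          """Toy time-driven LIF integration with two fixed step sizes: a coarse
          step far from threshold and a fine step when the membrane approaches it.
          I is a function returning the input current (A) at time t (s)."""
          v, t, spikes = v_rest, 0.0, []
          while t < t_end:
              dt = dt_small if v > v_th - 0.005 else dt_big   # refine near threshold
              v += dt * (-(v - v_rest) + R * I(t)) / tau      # forward Euler step
              t += dt
              if v >= v_th:                                   # threshold crossing
                  spikes.append(t)
                  v = v_reset
          return spikes

      spikes = lif_bifixed(lambda t: 2.5e-9)   # constant 2.5 nA drive produces tonic firing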

  5. Application of particle image velocimetry measurement techniques to study turbulence characteristics of oscillatory flows around parallel-plate structures in thermoacoustic devices

    International Nuclear Information System (INIS)

    Mao, Xiaoan; Jaworski, Artur J

    2010-01-01

    This paper describes the development of the experimental setup and measurement methodologies to study the physics of oscillatory flows in the vicinity of parallel-plate stacks by using particle image velocimetry (PIV) techniques. Parallel-plate configurations often appear as internal structures in thermoacoustic devices and are responsible for the hydrodynamic energy transfer processes. The flow around selected stack configurations is induced by a standing acoustic wave, whose amplitude can be varied. Depending on the direction of the flow within the acoustic cycle, relative to the stack, it can be treated as an entrance flow or a wake flow. Insight into the flow behaviour (its kinematics, dynamics and scales of turbulence) is obtained using the classical Reynolds decomposition to separate the instantaneous velocity fields into ensemble-averaged mean velocity fields and fluctuations at a set of predetermined phases within an oscillation cycle. The mean velocity field and the fluctuation intensity distributions are investigated over the acoustic oscillation cycle. The velocity fluctuation is further divided into large- and small-scale fluctuations by using fast Fourier transform (FFT) spatial filtering techniques.
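
    The decomposition pipeline can be summarized in a few lines of numpy; the cycle/phase array layout and the cutoff wavenumber below are our assumptions, not values from the paper:

      import numpy as np

      def phase_decompose(u):
          """Reynolds (phase/ensemble) decomposition over repeated cycles.
          u: (ncycle, ny, nx) velocity fields sampled at one phase of the cycle."""
          u_mean = u.mean(axis=0)
          return u_mean, u - u_mean

      def scale_split(u_fluct, cutoff=8):
          """Split a 2D fluctuation field into large and small scales with an
          FFT low-pass filter; 'cutoff' is an illustrative wavenumber index."""
          U = np.fft.fft2(u_fluct)
          ky = np.fft.fftfreq(u_fluct.shape[0]) * u_fluct.shape[0]
          kx = np.fft.fftfreq(u_fluct.shape[1]) * u_fluct.shape[1]
          mask = (np.abs(ky)[:, None] <= cutoff) & (np.abs(kx)[None, :] <= cutoff)
          large = np.fft.ifft2(U * mask).real
          return large, u_fluct - large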

  6. Left Ventricular Function Evaluation on a 3T MR Scanner with Parallel RF Transmission Technique: Prospective Comparison of Cine Sequences Acquired before and after Gadolinium Injection.

    Science.gov (United States)

    Caspar, Thibault; Schultz, Anthony; Schaeffer, Mickaël; Labani, Aïssam; Jeung, Mi-Young; Jurgens, Paul Thomas; El Ghannudi, Soraya; Roy, Catherine; Ohana, Mickaël

    To compare cine MR b-TFE sequences acquired before and after gadolinium injection on a 3T scanner with a parallel RF transmission technique, in order to potentially improve scanning time efficiency when evaluating LV function. 25 consecutive patients scheduled for a cardiac MRI were prospectively included and had their b-TFE cine sequences acquired before and right after gadobutrol injection. Images were assessed qualitatively (overall image quality, LV edge sharpness, artifacts and LV wall motion) and quantitatively with measurement of LVEF, LV mass, telediastolic volume and contrast-to-noise ratio (CNR) between the myocardium and the cardiac chamber. Statistical analysis was conducted using a Bayesian paradigm. No difference was found before or after injection for the LVEF, LV mass and telediastolic volume evaluations. Overall image quality and CNR were significantly lower after injection (estimated coefficients cine after > cine before gadolinium: -1.75, CI = [-3.78; -0.0305], prob(coef>0) = 0%, and -0.23, CI = [-0.49; 0.04], prob(coef>0) = 4%, respectively), but this decrease did not affect the visual assessment of LV wall motion (cine after > cine before gadolinium: -1.46, CI = [-4.72; 1.13], prob(coef>0) = 15%). In 3T cardiac MRI acquired with a parallel RF transmission technique, qualitative and quantitative assessment of LV function can reliably be performed with cine sequences acquired after gadolinium injection, despite a significant decrease in the CNR and the overall image quality.

  7. MR angiography of the carotid arteries in 3D TOF technique with sagittal 'double-slab' acquisition using a new head-neck coil

    International Nuclear Information System (INIS)

    Link, J.; Mueller-Huelsbeck, S.; Heller, M.

    1996-01-01

    Purpose: The aim of the study was to assess the value of MR angiography (MRA) in sagittal technique compared to DSA in the evaluation of carotid artery stenosis. Methods: 80 carotid arteries in 40 symptomatic patients were prospectively studied with DSA and MRA. MRA was carried out by means of a 3D time-of-flight technique with a FISP sequence (TE 6 ms/TR 80 ms, flip angle 25°, FOV 240x210 mm, matrix 157x256, in-plane resolution 1.34x0.94 mm, partition thickness 1.32 mm, slab thickness 45 mm, acquisition time 7 min) using a new head-neck coil. Data acquisition was performed in sagittal orientation with the 'double-slab' technique. Imaging quality of the extracranial carotid arteries and the accuracy of stenosis quantification were assessed. Results: Imaging quality was good at the origin of the carotid arteries in 65%, at the bifurcation region in 98% and near the skull base in 81%. DSA and MRA agreed in 96% of the normal arteries (24/25), 90% of the severe stenoses (28/31) and 100% of the occluded arteries (9/9). Conclusion: MRA in sagittal 'double-slab' technique is a noninvasive technique that allows normal arteries and candidates for surgery to be identified with a high degree of certainty. (orig.)

  8. Acquisition War-Gaming Technique for Acquiring Future Complex Systems: Modeling and Simulation Results for Cost Plus Incentive Fee Contract

    Directory of Open Access Journals (Sweden)

    Tien M. Nguyen

    2018-03-01

    This paper provides a high-level discussion and propositions of frameworks and models for acquisition strategy of complex systems. In particular, it presents an innovative system engineering approach to model the Department of Defense (DoD) acquisition process and offers several optimization modules including simulation models using game theory and war-gaming concepts. Our frameworks employ the Advanced Game-based Mathematical Framework (AGMF) and the Unified Game-based Acquisition Framework (UGAF), and related advanced simulation and mathematical models that include a set of War-Gaming Engines (WGEs) implemented in MATLAB statistical optimization models. WGEs are defined as a set of algorithms characterizing the Program and Technical Baseline (PTB), technology enablers, architectural solutions, contract type, contract parameters and associated incentives, and industry bidding position. As a proof of concept, Aerospace, in collaboration with North Carolina State University (NCSU) and the University of Hawaii (UH), successfully applied and extended the proposed frameworks and decision models to determine the optimum contract parameters and incentives for a Cost Plus Incentive Fee (CPIF) contract. As a result, we can suggest a set of acquisition strategies that ensure the optimization of the PTB.

  9. 3D Hyperpolarized C-13 EPI with Calibrationless Parallel Imaging

    DEFF Research Database (Denmark)

    Gordon, Jeremy W.; Hansen, Rie Beck; Shin, Peter J.

    2018-01-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and tem...... strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism....

  10. Late gadolinium enhancement cardiac imaging on a 3T scanner with parallel RF transmission technique: prospective comparison of 3D-PSIR and 3D-IR

    International Nuclear Information System (INIS)

    Schultz, Anthony; Caspar, Thibault; Schaeffer, Mickael; Labani, Aissam; Jeung, Mi-Young; El Ghannudi, Soraya; Roy, Catherine; Ohana, Mickael

    2016-01-01

    To qualitatively and quantitatively compare different late gadolinium enhancement (LGE) sequences acquired at 3T with a parallel RF transmission technique. One hundred and sixty prospectively enrolled participants underwent a 3T cardiac MRI with 3 different LGE sequences: 3D Phase-Sensitive Inversion-Recovery (3D-PSIR) acquired 5 minutes after injection, 3D Inversion-Recovery (3D-IR) at 9 minutes and 3D-PSIR at 13 minutes. All LGE-positive patients were qualitatively evaluated both independently and blindly by two radiologists using a 4-level scale, and quantitatively assessed with measurement of contrast-to-noise ratio and LGE maximal surface. Statistical analyses were performed under a Bayesian paradigm using MCMC methods. Fifty patients (70% men, mean age 56 ± 19 years) exhibited LGE (62% were post-ischemic, 30% related to cardiomyopathy and 8% post-myocarditis). Early and late 3D-PSIR were superior to 3D-IR sequences (global quality: estimated coefficient IR > early-PSIR: -2.37, CI = [-3.46; -1.38], prob(coef > 0) = 0%, and late-PSIR > IR: 3.12, CI = [0.62; 4.41], prob(coef > 0) = 100%; LGE surface: estimated coefficient IR > early-PSIR: -0.09, CI = [-1.11; -0.74], prob(coef > 0) = 0%, and late-PSIR > IR: 0.96, CI = [0.77; 1.15], prob(coef > 0) = 100%). Probabilities for late PSIR being superior to early PSIR concerning global quality and CNR were over 90%, regardless of the aetiological subgroup. In 3T cardiac MRI acquired with a parallel RF transmission technique, 3D-PSIR is qualitatively and quantitatively superior to 3D-IR. (orig.)

  11. Arthroscopically assisted stabilization of acute high-grade acromioclavicular joint separations in a coracoclavicular Double-TightRope technique: V-shaped versus parallel drill hole orientation.

    Science.gov (United States)

    Kraus, Natascha; Haas, Norbert P; Scheibel, Markus; Gerhardt, Christian

    2013-10-01

    The arthroscopically assisted Double-TightRope technique has recently been reported to yield good to excellent clinical results in the treatment of acute, high-grade acromioclavicular dislocation. However, the orientation of the transclavicular-transcoracoidal drill holes remains a matter of debate. A V-shaped drill hole orientation leads to better clinical and radiologic results and provides higher vertical and horizontal stability compared to parallel drill hole placement. This was a cohort study; level of evidence, 2b. Two groups of patients with acute high-grade acromioclavicular joint instability (Rockwood type V) were included in this prospective, non-randomized cohort study. Fifteen patients (1 female/14 male) with a mean age of 37.7 (18-66) years were treated with a Double-TightRope technique using a V-shaped orientation of the drill holes (group 1). Thirteen patients (1 female/12 male) with a mean age of 40.9 (21-59) years were treated with a Double-TightRope technique with parallel drill hole placement (group 2). After 2 years, the final evaluation consisted of a complete physical examination of both shoulders, evaluation of the Subjective Shoulder Value (SSV), Constant Score (CS), Taft Score (TF) and Acromioclavicular Joint Instability Score (ACJI), as well as a radiologic examination including bilateral anteroposterior stress views and bilateral Alexander views. After a mean follow-up of 2 years, all patients were free of shoulder pain at rest and during daily activities. Range of motion did not differ significantly between the groups (p > 0.05). Patients in group 1 reached on average 92.4 points in the CS, 96.2% in the SSV, 10.5 points in the TF and 75.9 points in the ACJI. Patients in group 2 scored 90.5 points in the CS, 93.9% in the SSV, 10.5 points in the TF and 84.5 points in the ACJI (p > 0.05). Radiographically, the coracoclavicular distance was found to be 13.9 mm (group 1) and 13.4 mm (group 2) on the affected side and 9.3 mm (group 1

  12. Interdependencies of acquisition, detection, and reconstruction techniques on the accuracy of iodine quantification in varying patient sizes employing dual-energy CT

    Energy Technology Data Exchange (ETDEWEB)

    Marin, Daniele; Pratts-Emanuelli, Jose J.; Mileto, Achille; Bashir, Mustafa R.; Nelson, Rendon C.; Boll, Daniel T. [Duke University Medical Center, Department of Radiology, Durham, NC (United States); Husarik, Daniela B. [University Hospital Zurich, Diagnostic and Interventional Radiology, Zurich (Switzerland)

    2014-10-03

    To assess the impact of patient habitus, acquisition parameters, detector efficiencies, and reconstruction techniques on the accuracy of iodine quantification using dual-source dual-energy CT (DECT). Two phantoms simulating small and large patients contained 20 iodine solutions mimicking vascular and parenchymal enhancement from saline isodensity to 400 HU and 30 iodine solutions simulating enhancement of the urinary collecting system from 400 to 2,000 HU. DECT acquisition (80/140 kVp and 100/140 kVp) was performed using two DECT systems equipped with standard and integrated electronics detector technologies. DECT raw datasets were reconstructed using filtered backprojection (FBP), and iterative reconstruction (SAFIRE I/V). Accuracy for iodine quantification was significantly higher for the small compared to the large phantoms (9.2 % ± 7.5 vs. 24.3 % ± 26.1, P = 0.0001), the integrated compared to the conventional detectors (14.8 % ± 20.6 vs. 18.8 % ± 20.4, respectively; P = 0.006), and SAFIRE V compared to SAFIRE I and FBP reconstructions (15.2 % ± 18.1 vs. 16.1 % ± 17.6 and 18.9 % ± 20.4, respectively; P ≤ 0.003). A significant synergism was observed when the most effective detector and reconstruction techniques were combined with habitus-adapted dual-energy pairs. In a second-generation dual-source DECT system, the accuracy of iodine quantification can be substantially improved by an optimal choice and combination of acquisition parameters, detector, and reconstruction techniques. (orig.)

  13. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  14. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  15. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    Science.gov (United States)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  16. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
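
    A block-decomposed parallel sieve is easy to sketch on shared-memory hardware. This Python version distributes contiguous segments to worker processes, a simplification of the scattered decomposition studied in the paper:

      import math
      from multiprocessing import Pool

      def base_primes(limit):
          """Serial Sieve of Eratosthenes up to 'limit' (inclusive)."""
          mark = bytearray([1]) * (limit + 1)
          mark[0:2] = b"\x00\x00"
          for p in range(2, math.isqrt(limit) + 1):
              if mark[p]:
                  mark[p*p::p] = bytearray(len(mark[p*p::p]))
          return [i for i, m in enumerate(mark) if m]

      def sieve_segment(args):
          """Mark composites in the half-open block [lo, hi) using base primes."""
          lo, hi, primes = args
          mark = bytearray([1]) * (hi - lo)
          for p in primes:
              start = max(p * p, (lo + p - 1) // p * p)   # first multiple of p in block
              mark[start - lo::p] = bytearray(len(mark[start - lo::p]))
          return [lo + i for i, m in enumerate(mark) if m]

      def parallel_sieve(n, nworkers=4):
          """Each worker sieves one contiguous segment with the shared base primes."""
          primes = base_primes(math.isqrt(n))
          step = (n - 1) // nworkers + 1
          blocks = [(lo, min(lo + step, n + 1), primes) for lo in range(2, n + 1, step)]
          with Pool(nworkers) as pool:
              return [p for seg in pool.map(sieve_segment, blocks) for p in seg]

      if __name__ == "__main__":
          assert parallel_sieve(100) == base_primes(100)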

  17. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  18. Machine-assisted verification of latent fingerprints: first results for nondestructive contact-less optical acquisition techniques with a CWL sensor

    Science.gov (United States)

    Hildebrandt, Mario; Kiltz, Stefan; Krapyvskyy, Dmytro; Dittmann, Jana; Vielhauer, Claus; Leich, Marcus

    2011-11-01

    A machine-assisted analysis of traces from crime scenes might be possible with the advent of new high-resolution non-destructive contact-less acquisition techniques for latent fingerprints. This requires reliable techniques for the automatic extraction of fingerprint features from latent and exemplar fingerprints for matching purposes using pattern recognition approaches. Therefore, we evaluate the NIST Biometric Image Software for the feature extraction and verification of contact-lessly acquired latent fingerprints to determine potential error rates. Our exemplary test setup includes 30 latent fingerprints from 5 people in two test sets that are acquired from different surfaces using a chromatic white light sensor. The first test set includes 20 fingerprints on two different surfaces. It is used to determine the feature extraction performance. The second test set includes one latent fingerprint on 10 different surfaces and an exemplar fingerprint to determine the verification performance. The utilized sensing technique does not require a physical or chemical visibility enhancement of the fingerprint residue, so the original trace remains unaltered for further investigations. No particular feature extraction and verification techniques have yet been applied to such data. Hence, we see the need for appropriate algorithms that are suitable to support forensic investigations.

  19. A lab-made interface for acquisition of instrumental analog signals at the parallel port of a microcomputer

    Directory of Open Access Journals (Sweden)

    Edvaldo da Nóbrega Gaião

    2004-10-01

    A lab-made interface for acquisition of instrumental analog signals between 0 and 5 V at a frequency up to 670 kHz at the parallel port of a microcomputer is described. Since it uses few and small components, it was built into the connector of a printer parallel cable. Its performance was evaluated by monitoring the signals of four different instruments, and similar analytical curves were obtained with the interface and from readings from the instruments' displays. Because the components are cheap (~US$35.00) and easy to obtain, the proposed interface is a simple and economical alternative for data acquisition in small laboratories for routine work, research and teaching.

  20. Problem Based Learning Technique and Its Effect on Acquisition of Linear Programming Skills by Secondary School Students in Kenya

    Science.gov (United States)

    Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice

    2015-01-01

    The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…

  1. Smart acquisition EELS

    International Nuclear Information System (INIS)

    Sader, Kasim; Schaffer, Bernhard; Vaughan, Gareth; Brydson, Rik; Brown, Andy; Bleloch, Andrew

    2010-01-01

    We have developed a novel acquisition methodology for the recording of electron energy loss spectra (EELS) using a scanning transmission electron microscope (STEM): 'Smart Acquisition'. Smart Acquisition allows the independent control of probe scanning procedures and the simultaneous acquisition of analytical signals such as EELS. The original motivation for this work arose from the need to control the electron dose experienced by beam-sensitive specimens whilst maintaining a sufficiently high signal-to-noise ratio in the EEL signal for the extraction of useful analytical information (such as energy loss near edge spectral features) from relatively undamaged areas. We have developed a flexible acquisition framework which separates beam position data input, beam positioning, and EELS acquisition. In this paper we demonstrate the effectiveness of this technique on beam-sensitive thin films of amorphous aluminium trifluoride. Smart Acquisition has been used to expose lines to the electron beam, followed by analysis of the structures created by line-integrating EELS acquisitions, and the results are compared to those derived from a standard EELS linescan. High angle annular dark-field images show clear reductions in damage for the Smart Acquisition areas compared to the conventional linescan, and the Smart Acquisition low loss EEL spectra are more representative of the undamaged material than those derived using a conventional linescan. Atomically resolved EELS of all four elements of CaNdTiO shows the high resolution capabilities of Smart Acquisition.

  2. Smart acquisition EELS

    Energy Technology Data Exchange (ETDEWEB)

    Sader, Kasim, E-mail: k.sader@leeds.ac.uk [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Schaffer, Bernhard [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Department of Physics and Astronomy, University of Glasgow (United Kingdom); Vaughan, Gareth [Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Brydson, Rik [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Brown, Andy [Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Bleloch, Andrew [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Department of Engineering, University of Liverpool, Liverpool (United Kingdom)

    2010-07-15

    We have developed a novel acquisition methodology for the recording of electron energy loss spectra (EELS) using a scanning transmission electron microscope (STEM): 'Smart Acquisition'. Smart Acquisition allows the independent control of probe scanning procedures and the simultaneous acquisition of analytical signals such as EELS. The original motivation for this work arose from the need to control the electron dose experienced by beam-sensitive specimens whilst maintaining a sufficiently high signal-to-noise ratio in the EEL signal for the extraction of useful analytical information (such as energy loss near edge spectral features) from relatively undamaged areas. We have developed a flexible acquisition framework which separates beam position data input, beam positioning, and EELS acquisition. In this paper we demonstrate the effectiveness of this technique on beam-sensitive thin films of amorphous aluminium trifluoride. Smart Acquisition has been used to expose lines to the electron beam, followed by analysis of the structures created by line-integrating EELS acquisitions, and the results are compared to those derived from a standard EELS linescan. High angle annular dark-field images show clear reductions in damage for the Smart Acquisition areas compared to the conventional linescan, and the Smart Acquisition low loss EEL spectra are more representative of the undamaged material than those derived using a conventional linescan. Atomically resolved EELS of all four elements of CaNdTiO shows the high resolution capabilities of Smart Acquisition.

  3. EX6AFS: A data acquisition system for high-speed dispersive EXAFS measurements implemented using object-oriented programming techniques

    International Nuclear Information System (INIS)

    Jennings, G.; Lee, P.L.

    1995-01-01

    In this paper we describe the design and implementation of a computerized data-acquisition system for high-speed energy-dispersive EXAFS experiments on the X6A beamline at the National Synchrotron Light Source. The acquisition system drives the stepper motors used to move the components of the experimental setup and controls the readout of the EXAFS spectra. The system runs on a Macintosh IIfx computer and is written entirely in the object-oriented language C++. Large segments of the system are implemented by means of commercial class libraries, specifically the MacApp application framework from Apple, the Rogue Wave class library, and the Hierarchical Data Format (HDF) file format library from the National Center for Supercomputing Applications. This reduces the amount of code that must be written and enhances reliability. The system makes use of several advanced features of C++: multiple inheritance allows the code to be decomposed into independent software components, and the use of exception handling allows the system to be much more reliable in the event of unexpected errors. Object-oriented techniques allow the program to be extended easily as new requirements develop. All sections of the program related to a particular concept are located in a small set of source files. The program will also be used as a prototype for future software development plans for the Basic Energy Science Synchrotron Radiation Center Collaborative Access Team beamlines being designed and built at the Advanced Photon Source.

  4. High speed data acquisition

    International Nuclear Information System (INIS)

    Cooper, P.S.

    1997-07-01

    A general introduction to high speed data acquisition system techniques in modern particle physics experiments is given. Examples are drawn from the SELEX (E781) high statistics charmed baryon production and decay experiment now taking data at Fermilab.

  5. Effect of Saline Water on Yield and Nitrogen Acquisition by Sugar Beet (Beta vulgaris L.) Using 15N Technique

    International Nuclear Information System (INIS)

    Gadalla, A. M.; Galal, Y. G. M.; Abdel Aziz, A.; Hamdy, A.

    2007-01-01

    Sugar beet growth response to the interactive effects of salinity and N-fertilization was investigated using the 15N tracer technique under greenhouse conditions. Data showed that the dry matter yields of sugar beet shoots and roots were frequently affected by N and water regime. Total N uptake by leaves increased under almost all water salinity treatments in spite of increasing salinity levels. In the case of the WI, NII treatment, N-uptake by roots decreased significantly as salinity levels rose from 4 to 8 dS/m. The portions of N derived from fertilizer (whole plant) showed that the trend was affected by the salinity level of the irrigation water and the fertilization treatments. The highest amount of N derived from fertilizer was obtained with the 4 dS/m level under NII with the two water regimes. The efficiency of fertilizer-N use was slightly but positively affected by raising the salinity levels of the irrigation water. Sugar percentage increased with increasing salinity levels of irrigation water under both NI and NII treatments, but it was higher in the case of NI than NII under the different salinity levels. Generally, irrigation with saline water, in combination with a water regime of 75-80% of field capacity and a split nitrogen application technique, is better for enhancing sugar beet production under such adverse conditions.
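
    The 'N derived from fertilizer' quantities reported here follow, in the standard 15N methodology, from the isotope-dilution relation (assumed; the abstract does not restate it):

      \%\,\mathrm{Ndff} \;=\; \frac{\text{atom}\,\%\ {}^{15}\mathrm{N}\ \text{excess}_{\text{plant}}}{\text{atom}\,\%\ {}^{15}\mathrm{N}\ \text{excess}_{\text{fertilizer}}} \times 100,
      \qquad
      \mathrm{N}_{\text{fertilizer}} \;=\; \mathrm{N}_{\text{total uptake}} \times \frac{\%\,\mathrm{Ndff}}{100}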

  6. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    Science.gov (United States)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur complement is derived as M^-1 = C - B^*A^-1B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^-1. For closed-chain systems, similar factorizations and O(n) algorithms for computation of the Operational Space Mass Matrix Λ and its inverse Λ^-1 are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
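
    The Schur-complement identity that such a factorization rests on is easy to verify numerically. The blocks below are random stand-ins, not the multibody matrices A, B, C of the paper:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 6
      A = rng.random((n, n)); A = A @ A.T + n * np.eye(n)   # SPD block
      B = rng.random((n, n))
      D = rng.random((n, n)); D = D @ D.T + n * np.eye(n)   # SPD block
      K = np.block([[A, B], [B.T, D]])

      S = D - B.T @ np.linalg.inv(A) @ B                    # Schur complement of A in K
      # The lower-right block of K^-1 equals S^-1: the kind of relation that
      # allows an inverse to be evaluated recursively when the blocks are
      # themselves block tridiagonal.
      assert np.allclose(np.linalg.inv(K)[n:, n:], np.linalg.inv(S))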

  7. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  8. Triple Arterial Phase MR Imaging with Gadoxetic Acid Using a Combination of Contrast Enhanced Time Robust Angiography, Keyhole, and Viewsharing Techniques and Two-Dimensional Parallel Imaging in Comparison with Conventional Single Arterial Phase

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jeong Hee [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Lee, Jeong Min [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03087 (Korea, Republic of); Yu, Mi Hye [Department of Radiology, Konkuk University Medical Center, Seoul 05030 (Korea, Republic of); Kim, Eun Ju [Philips Healthcare Korea, Seoul 04342 (Korea, Republic of); Han, Joon Koo [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03087 (Korea, Republic of)

    2016-11-01

    To determine whether triple arterial phase acquisition via a combination of Contrast Enhanced Time Robust Angiography, keyhole, temporal viewsharing and parallel imaging can improve arterial phase acquisition with higher spatial resolution than single arterial phase gadoxetic-acid enhanced magnetic resonance imaging (MRI). Informed consent was waived for this retrospective study by our Institutional Review Board. In 752 consecutive patients who underwent gadoxetic acid-enhanced liver MRI, either a single (n = 587) or a triple (n = 165) arterial phase acquisition was obtained in a single breath-hold under MR fluoroscopy guidance. Arterial phase timing was assessed, and the degree of motion was rated on a four-point scale. The percentage of patients achieving the late arterial phase without significant motion was compared between the two methods using the χ² test. The late arterial phase was captured at least once in 96.4% (159/165) of the triple arterial phase group and in 84.2% (494/587) of the single arterial phase group (p < 0.001). Significant motion artifacts (score ≤ 2) were observed in 13.3% (22/165), 1.2% (2/165), and 4.8% (8/165) on the 1st, 2nd, and 3rd scans of triple arterial phase acquisitions and in 6.0% (35/587) of single phase acquisitions. Thus, the late arterial phase without significant motion artifacts was captured in 96.4% (159/165) of the triple arterial phase group and in 79.9% (469/587) of the single arterial phase group (p < 0.001). Triple arterial phase imaging may reliably provide adequate arterial phase imaging for gadoxetic acid-enhanced liver MRI.

  9. Triple arterial phase MR imaging with gadoxetic acid using a combination of contrast enhanced time robust angiography, keyhole, and viewsharing techniques and two-dimensional parallel imaging in comparison with conventional single arterial phase

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jeong Hee; Lee, Jeong Min; Han, Joon Koo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of); Yu, Mi Hye [Dept. of Radiology, Konkuk University Medical Center, Seoul (Korea, Republic of); Kim, Eun Ju [Philips Healthcare Korea, Seoul (Korea, Republic of)

    2016-07-15

    To determine whether triple arterial phase acquisition via a combination of Contrast Enhanced Time Robust Angiography, keyhole, temporal viewsharing and parallel imaging can improve arterial phase acquisition with higher spatial resolution than single arterial phase gadoxetic-acid enhanced magnetic resonance imaging (MRI). Informed consent was waived for this retrospective study by our Institutional Review Board. In 752 consecutive patients who underwent gadoxetic acid-enhanced liver MRI, either a single (n = 587) or a triple (n = 165) arterial phase acquisition was obtained in a single breath-hold under MR fluoroscopy guidance. Arterial phase timing was assessed, and the degree of motion was rated on a four-point scale. The percentage of patients achieving the late arterial phase without significant motion was compared between the two methods using the χ² test. The late arterial phase was captured at least once in 96.4% (159/165) of the triple arterial phase group and in 84.2% (494/587) of the single arterial phase group (p < 0.001). Significant motion artifacts (score ≤ 2) were observed in 13.3% (22/165), 1.2% (2/165), and 4.8% (8/165) on the 1st, 2nd, and 3rd scans of triple arterial phase acquisitions and in 6.0% (35/587) of single phase acquisitions. Thus, the late arterial phase without significant motion artifacts was captured in 96.4% (159/165) of the triple arterial phase group and in 79.9% (469/587) of the single arterial phase group (p < 0.001). Triple arterial phase imaging may reliably provide adequate arterial phase imaging for gadoxetic acid-enhanced liver MRI.

  10. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    Research in the field of parallel processing has developed around several numerical Monte Carlo simulations related to current basic and applied problems of nuclear and particle physics. For applications utilizing the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field refers to simulations for nuclear medicine applications such as the development of biological probes and the evaluation and characterization of gamma cameras (collimators, crystal thickness), as well as methods for dosimetric calculations. In particular, these calculations are suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other work mentioned in the same field refers to simulation of electron channelling in crystals and simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment.

  11. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book also introduces slightly more advanced concepts and helps you implement these techniques in the real world. If you are an experienced Python programmer willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.
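
    As a flavor of the task-level parallelism the book teaches, here is a minimal standard-library example; the function and worker count are illustrative, not taken from the book:

```python
from multiprocessing import Pool

def slow_square(x):
    """Stand-in for any CPU-bound function you want to parallelize."""
    return x * x

if __name__ == "__main__":
    with Pool(processes=4) as pool:     # four worker processes
        results = pool.map(slow_square, range(10))
    print(results)                      # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```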

  12. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, however, they are inherently serial. It was demonstrated recently that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with on the order of 10⁴ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.
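
    A much-simplified sketch of the parallel-walker idea: several independent multicanonical walkers sample with the same weights, their histograms are merged, and the weights are updated from the merged histogram. The paper's GPU implementation is far more elaborate; the lattice size, walker count, and the naive weight update below are illustrative assumptions:

```python
import numpy as np
from multiprocessing import Pool

L = 8                       # linear lattice size (illustrative)
N_E = 2 * L * L + 1         # number of energy bins after mapping E -> index

def energy_index(s):
    """Map the Ising energy E = -sum s_i s_j (periodic) to a histogram index."""
    e = -np.sum(s * np.roll(s, 1, axis=0)) - np.sum(s * np.roll(s, 1, axis=1))
    return (e + 2 * L * L) // 2

def run_walker(args):
    seed, log_w, sweeps = args
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    idx = energy_index(s)
    hist = np.zeros(N_E)
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        new = idx + s[i, j] * nb        # flipping changes E by 2*s*nb, the index by s*nb
        if np.log(rng.random()) < log_w[new] - log_w[idx]:   # multicanonical acceptance
            s[i, j] = -s[i, j]
            idx = new
        hist[idx] += 1
    return hist

if __name__ == "__main__":
    log_w = np.zeros(N_E)                 # start from flat weights
    for it in range(10):                  # weight-update iterations
        with Pool(4) as pool:             # 4 independent walkers share the same weights
            hists = pool.map(run_walker, [(4 * it + k, log_w, 50) for k in range(4)])
        H = np.sum(hists, axis=0)         # merge histograms from all walkers
        log_w[H > 0] -= np.log(H[H > 0])  # naive update: W <- W / H
        log_w -= log_w.max()              # renormalize for numerical stability
```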

  13. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  14. Whole-organ perfusion of the pancreas using dynamic volume CT in patients with primary pancreas carcinoma: acquisition technique, post-processing and initial results

    International Nuclear Information System (INIS)

    Kandel, Sonja; Kloeters, Christian; Meyer, Henning; Hein, Patrick; Rogalla, Patrik; Hilbig, Andreas

    2009-01-01

    The purpose of this study was to evaluate a whole-organ perfusion protocol of the pancreas in patients with primary pancreas carcinoma and to analyse perfusion differences between normal and diseased pancreatic tissue. Thirty patients with primary pancreatic malignancy were imaged on a 320-slice CT unit. Twenty-nine cancers were histologically proven. CT data acquisition was started manually after contrast-material injection (8 ml/s, 350 mg iodine/ml) and dynamic density measurements in the right ventricle. After image registration, perfusion was determined with the gradient-relationship technique, and volume regions-of-interest were defined for perfusion measurements. Contrast time-density curves and perfusion maps were generated. Statistical analysis was performed using the Kolmogorov-Smirnov test for analysis of normal distribution and the Kruskal-Wallis test (nonparametric ANOVA) with Bonferroni correction for multiple stacked comparisons. In all 30 patients the entire pancreas was imaged, and registration could be completed in all cases. Perfusion of pancreatic carcinomas was significantly lower than that of normal pancreatic tissue (P < 0.001) and could be visualized on colored perfusion maps. The 320-slice CT unit allows complete dynamic visualization of the pancreas and enables calculation of whole-organ perfusion maps. Perfusion imaging carries the potential to improve detection of pancreatic cancers due to these perfusion differences. (orig.)
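
    Stripped to its essentials, the gradient-relationship (maximum-slope) technique estimates perfusion per region as the peak slope of the tissue time-density curve divided by the peak arterial (here right-ventricular) enhancement. The sketch below uses synthetic placeholder curves, not the study's data:

```python
import numpy as np

def max_slope_perfusion(t, tissue, arterial):
    """Maximum-slope estimate: peak tissue enhancement rate / peak arterial density."""
    return np.max(np.gradient(tissue, t)) / np.max(arterial)

# synthetic time-density curves (HU versus seconds); placeholders only
t = np.linspace(0.0, 40.0, 81)
arterial = 300.0 * np.exp(-0.5 * ((t - 12.0) / 4.0) ** 2)   # right-ventricular bolus
tissue = 30.0 / (1.0 + np.exp(-(t - 16.0)))                 # smooth tissue uptake
f = max_slope_perfusion(t, tissue, arterial)                # units: 1/s
print(f"perfusion ~ {6000.0 * f:.0f} ml/min/100 ml")        # x60 s/min, x100 per 100 ml
```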

  15. Sodium magnetic resonance imaging. Development of a 3D radial acquisition technique with optimized k-space sampling density and high SNR-efficiency

    International Nuclear Information System (INIS)

    Nagel, Armin Michael

    2009-01-01

    A 3D radial k-space acquisition technique with homogeneous distribution of the sampling density (DA-3D-RAD) is presented. This technique enables the short echo times (TE) that are essential for ²³Na-MRI and provides a high SNR efficiency. The gradients of the DA-3D-RAD sequence are designed such that the average sampling density in each spherical shell of k-space is constant. The DA-3D-RAD sequence provides 34% more SNR than a conventional 3D radial sequence (3D-RAD) if T₂*-decay is neglected. This SNR gain is enhanced if T₂*-decay is present, so a 1.5- to 1.8-fold higher SNR is measured in brain tissue with the DA-3D-RAD sequence. Simulations and experimental measurements show that the DA-3D-RAD sequence yields a better resolution in the presence of T₂*-decay and fewer image artefacts when B₀ inhomogeneities exist. Using the developed sequence, T₁-, T₂*- and inversion-recovery ²³Na image contrasts were acquired for several organs and ²³Na relaxation times were measured (brain tissue: T₁ = 29.0 ± 0.3 ms; T₂s* ∼ 4 ms; T₂l* ∼ 31 ms; cerebrospinal fluid: T₁ = 58.1 ± 0.6 ms; T₂* = 55 ± 3 ms at B₀ = 3 T). T₁- and T₂*-relaxation times of cerebrospinal fluid are independent of the selected magnetic field strength (B₀ = 3 T/7 T), whereas the relaxation times of brain tissue increase with field strength. Furthermore, ²³Na signals of oedemata were suppressed in patients and thus signals from different tissue compartments were selectively measured. (orig.)

  16. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and a multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow one to cluster multidimensional data by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVIDIA's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
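
    For reference, the D²-weighted seeding step that those three implementations parallelize looks like this in serial NumPy form; the per-point distance update is the dominant cost that the GPU/OpenMP/XMT codes distribute:

```python
import numpy as np

def kmeans_pp_seeds(X, k, seed=None):
    """Select k initial centers by D^2 weighting (Arthur & Vassilvitskii, 2007)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]                 # first seed: uniform at random
    d2 = np.sum((X - centers[0]) ** 2, axis=1)     # squared distance to nearest center
    for _ in range(1, k):
        centers.append(X[rng.choice(n, p=d2 / d2.sum())])   # sample prop. to D^2
        d2 = np.minimum(d2, np.sum((X - centers[-1]) ** 2, axis=1))
    return np.array(centers)
```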

  17. An investigation into the accuracy, stability and parallel performance of a highly stable explicit technique for stiff reaction-transport PDEs

    Energy Technology Data Exchange (ETDEWEB)

    Franz, A., LLNL

    1998-02-17

    The numerical simulation of chemically reacting flows is a topic that has attracted a great deal of current research. At the heart of numerical reactive flow simulations are large sets of coupled, nonlinear partial differential equations (PDEs). Due to the stiffness that is usually present, explicit time differencing schemes are not used despite their inherent simplicity and efficiency on parallel and vector machines, since these schemes require prohibitively small numerical stepsizes. Implicit time differencing schemes, although possessing good stability characteristics, introduce a great deal of computational overhead necessary to solve the simultaneous algebraic system at each timestep. This thesis examines an algorithm based on a preconditioned time differencing scheme. The algorithm is explicit and permits a large stable time step. An investigation of the algorithm's accuracy, stability and performance on a parallel architecture is presented.

  18. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  19. Parallel imaging enhanced MR colonography using a phantom model.

    LENUS (Irish Health Repository)

    Morrin, Martina M

    2008-09-01

    To compare various Array Spatial Sensitivity Encoding Technique (ASSET)-enhanced T2-weighted (T2W) SSFSE (single-shot fast spin echo) and T1-weighted (T1W) 3D SPGR (spoiled gradient recalled echo) sequences for polyp detection and image quality at MR colonography (MRC) in a phantom model. Limitations of MRC using standard 3D SPGR T1W imaging include the long breath-hold required to cover the entire colon within one acquisition and the relatively low spatial resolution due to the long acquisition time. Parallel imaging using ASSET-enhanced T2W SSFSE and 3D T1W SPGR imaging results in much shorter imaging times, which allows for increased spatial resolution.

  20. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic resonance imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivative computations, we use Landweber-Kaczmarz iteration and, in order to improve the overall results, additional sparsity constraints.
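
    A minimal sketch of a Landweber-type iteration with a sparsity constraint in the spirit described above: only applications of the forward operator and its adjoint are needed. The coil-sensitivity structure of a real SENSE-type operator is abstracted into a generic matrix A, and the soft-threshold step is one simple choice of sparsity constraint, both assumptions made for illustration:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_landweber(A, y, tau=1e-3, iters=200):
    """Landweber iteration x <- x + w*A^H(y - Ax), interleaved with a sparsity step."""
    omega = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the operator norm
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(iters):
        x = x + omega * A.conj().T @ (y - A @ x)   # classical Landweber update
        x = soft_threshold(x, tau)                 # enforce the sparsity constraint
    return x
```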

  1. 3D Models for All: Low-Cost Acquisition Through Mobile Devices in Comparison with Image Based Techniques. Potentialities and Weaknesses in Cultural Heritage Domain

    Science.gov (United States)

    Santagati, C.; Lo Turco, M.; Bocconcino, M. M.; Donato, V.; Galizia, M.

    2017-11-01

    Nowadays, 3D digital imaging offers effective solutions for preserving the expression of human creativity across the centuries, and is a great tool for guaranteeing global dissemination of knowledge and wide access to these invaluable resources of the past. Nevertheless, in several cases, a massive digitalisation of cultural heritage items (from the archaeological site up to the monument and museum collections) could be unworkable due to the still high costs in terms of equipment and human resources: 3D acquisition technologies and the need for skilled teams within cultural institutions. Therefore, it is necessary to explore the new possibilities offered by growing technologies: the lower costs of these technologies as well as their attractive visual quality constitute a challenge for researchers. Besides these possibilities, it is also important to consider how information is spread through the graphic representation of knowledge. The focus of this study is to explore the potentialities and weaknesses of a newly released low-cost device in the cultural heritage domain, trying to understand its effective usability for museum collections. The aim of the research is to test its usability, critically analysing the final outcomes of this entry-level technology in relation to other better-assessed low-cost technologies for 3D scanning, such as Structure from Motion (SfM) techniques (also produced by the same device) combined with datasets generated by a professional digital camera. The final outcomes were compared in terms of quality of definition, processing time and file size. The specimens of the collections of the Civic Museum Castello Ursino in Catania were chosen as the site of experimentation.

  2. 3D MODELS FOR ALL: LOW-COST ACQUISITION THROUGH MOBILE DEVICES IN COMPARISON WITH IMAGE BASED TECHNIQUES. POTENTIALITIES AND WEAKNESSES IN CULTURAL HERITAGE DOMAIN

    Directory of Open Access Journals (Sweden)

    C. Santagati

    2017-11-01

    Full Text Available Nowadays, 3D digital imaging offers effective solutions for preserving the expression of human creativity across the centuries, and is a great tool for guaranteeing global dissemination of knowledge and wide access to these invaluable resources of the past. Nevertheless, in several cases, a massive digitalisation of cultural heritage items (from the archaeological site up to the monument and museum collections) could be unworkable due to the still high costs in terms of equipment and human resources: 3D acquisition technologies and the need for skilled teams within cultural institutions. Therefore, it is necessary to explore the new possibilities offered by growing technologies: the lower costs of these technologies as well as their attractive visual quality constitute a challenge for researchers. Besides these possibilities, it is also important to consider how information is spread through the graphic representation of knowledge. The focus of this study is to explore the potentialities and weaknesses of a newly released low-cost device in the cultural heritage domain, trying to understand its effective usability for museum collections. The aim of the research is to test its usability, critically analysing the final outcomes of this entry-level technology in relation to other better-assessed low-cost technologies for 3D scanning, such as Structure from Motion (SfM) techniques (also produced by the same device) combined with datasets generated by a professional digital camera. The final outcomes were compared in terms of quality of definition, processing time and file size. The specimens of the collections of the Civic Museum Castello Ursino in Catania were chosen as the site of experimentation.

  3. Data acquisition

    International Nuclear Information System (INIS)

    Clout, P.N.

    1982-01-01

    Data acquisition systems are discussed for molecular biology experiments using synchrotron radiation sources. The data acquisition system requirements are considered. The components of the solution are described including hardwired solutions and computer-based solutions. Finally, the considerations for the choice of the computer-based solution are outlined. (U.K.)

  4. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)]

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  5. A complex dissected chronic occlusion: targeted balloon dilatation of false lumen to access true lumen, combined localized subintimal tracking and reentry, parallel wire, contralateral injection and a useful antegrade lumen re-entry technique

    Directory of Open Access Journals (Sweden)

    James W. Tam

    2012-02-01

    Full Text Available Chronic total occlusion (CTO) angioplasty is one of the most challenging procedures remaining for the interventional operator. Recanalizing CTOs can improve exercise capacity, symptoms, left ventricular function and possibly reduce mortality. Multiple strategies such as escalating wire, parallel wire, seesaw, contralateral injection, subintimal tracking and re-entry (STAR), retrograde wire techniques (controlled antegrade retrograde subintimal tracking (CART), reverse CART), confluent balloon, rendezvous in coronary, and other techniques have all been described. Selection of the most appropriate approach is based on assessment of the vessel course, the length of the occluded segment, the presence of bridging collaterals, the presence of bifurcating side branches at the occlusion site, and other variables. Today, with significant operator expertise and the use of available techniques, the literature reports a 50-95% success rate for recanalizing CTOs.

  6. Compiling Scientific Programs for Scalable Parallel Systems

    National Research Council Canada - National Science Library

    Kennedy, Ken

    2001-01-01

    ...). The research performed in this project included new techniques for recognizing implicit parallelism in sequential programs, a powerful and precise set-based framework for analysis and transformation...

  7. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  8. The Chateau de Cristal data acquisition system

    International Nuclear Information System (INIS)

    Villard, M.M.

    1987-05-01

    This data acquisition system is built on several dedicated data transfer busses: ADC data readout through the FERA bus, and parallel data processing in two VME crates. High data rates and selectivities are achieved via this acquisition structure and newly developed processing units. The system's modularity allows various experiments with additional detectors.

  9. Seismic data acquisition systems

    International Nuclear Information System (INIS)

    Kolvankar, V.G.; Nadre, V.N.; Rao, D.S.

    1989-01-01

    Details of seismic data acquisition systems developed at the Bhabha Atomic Research Centre, Bombay are reported. The seismic signals acquired belong to different signal bandwidths in the band from 0.02 Hz to 250 Hz. All these acquisition systems are built around a unique technique of recording multichannel data onto a single track of an audio tape in digital form. How these signals in different frequency bands were acquired and recorded is described. The method of detecting seismic signals and its performance are also discussed. Seismic signals acquired in different set-ups are illustrated. Time-indexing systems for different set-ups and multichannel waveform display systems, which form an essential part of the data acquisition systems, are also discussed. (author). 13 refs., 6 figs., 1 tab

  10. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  11. Real-Time Spaceborne Synthetic Aperture Radar Float-Point Imaging System Using Optimized Mapping Methodology and a Multi-Node Parallel Accelerating Technique

    Science.gov (United States)

    Li, Bingyi; Chen, Liang; Yu, Wenyue; Xie, Yizhuang; Bian, Mingming; Zhang, Qingjun; Pang, Long

    2018-01-01

    With the development of satellite load technology and very large-scale integrated (VLSI) circuit technology, on-board real-time synthetic aperture radar (SAR) imaging systems have facilitated rapid response to disasters. A key goal of the on-board SAR imaging system design is to achieve high real-time processing performance under severe size, weight, and power consumption constraints. This paper presents a multi-node prototype system for real-time SAR imaging processing. We decompose the commonly used chirp scaling (CS) SAR imaging algorithm into two parts according to their computing features. The linearization and logic-memory optimum allocation methods are adopted to realize the nonlinear part in a reconfigurable structure, and the two-part bandwidth balance method is used to realize the linear part. Thus, floating-point SAR imaging processing can be integrated into a single Field Programmable Gate Array (FPGA) chip instead of relying on distributed technologies. A single processing node requires 10.6 s and consumes 17 W to focus 25-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. The design methodology of the multi-FPGA parallel accelerating system under the real-time principle is introduced. As a proof of concept, a prototype with four processing nodes and one master node is implemented using a Xilinx xc6vlx315t FPGA. The weight and volume of a single machine are 10 kg and 32 cm × 24 cm × 20 cm, respectively, and the power consumption is under 100 W. The real-time performance of the proposed design is demonstrated on Chinese Gaofen-3 stripmap continuous imaging. PMID:29495637

  12. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique.

    Science.gov (United States)

    Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-06-24

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.

  13. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique

    Directory of Open Access Journals (Sweden)

    Chen Yang

    2017-06-01

    Full Text Available With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.

  14. Development of gas-liquid two-phase flow measurement technique in narrow channel. Application of micro wire-mesh sensor to the flow between parallel plates

    International Nuclear Information System (INIS)

    Ito, Daisuke; Kikura, Hiroshige; Aritomi, Masanori

    2009-01-01

    A novel two-phase flow measurement technique based on local electrical conductivity measurement was developed to clarify the three-dimensional flow structure of gas-liquid two-phase flow in a narrow channel. The method applies the principle of conventional wire-mesh tomography, which can measure the instantaneous void fraction distributions in a cross-section of a flow channel. In this technique, the electrodes are fixed on the inside of the walls facing each other, and the local void fractions are obtained from the electrical conductivity measured between electrodes arranged on each wall. Therefore, the flow structure and the bubble behavior can be investigated through three-dimensional void fraction distributions in a narrow-gap channel. In this paper, a micro wire-mesh sensor (μWMS) with a gap of 3 mm was developed, and the instantaneous void fraction distributions were measured. From the measured distributions, three-dimensional bubble distributions were reconstructed, and bubble volumes and bubble velocities were estimated. (author)

  15. Simulation of neutron transport equation using parallel Monte Carlo for deep penetration problems

    International Nuclear Information System (INIS)

    Bekar, K. K.; Tombakoglu, M.; Soekmen, C. N.

    2001-01-01

    The neutron transport equation is simulated using a parallel Monte Carlo method for a deep-penetration neutron transport problem. The Monte Carlo simulation is parallelized using three different techniques: direct parallelization, domain decomposition, and domain decomposition with load balancing, which are used with PVM (Parallel Virtual Machine) software on a LAN (Local Area Network). The results of the parallel simulation are given for various model problems. The performances of the parallelization techniques are compared with each other. Moreover, the effects of variance reduction techniques on parallelization are discussed.
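
    The "direct parallelization" strategy mentioned above amounts to running independent batches of particle histories with separate random streams and combining the tallies. A toy illustration, with a 1D absorbing/scattering slab standing in for the deep-penetration problem and Python's multiprocessing standing in for PVM (all parameters are illustrative):

```python
import numpy as np
from multiprocessing import Pool

SIGMA_T, P_ABS, THICKNESS = 1.0, 0.5, 5.0    # illustrative slab parameters

def batch(args):
    """Run one independent batch of histories; return the transmission tally."""
    seed, n = args
    rng = np.random.default_rng(seed)        # each worker gets its own stream
    transmitted = 0
    for _ in range(n):
        x, mu = 0.0, 1.0                      # particle enters the slab head-on
        while True:
            x += mu * rng.exponential(1.0 / SIGMA_T)   # flight to next collision
            if x >= THICKNESS:
                transmitted += 1              # escaped through the far face
                break
            if x < 0.0 or rng.random() < P_ABS:        # leaked back, or absorbed
                break
            mu = rng.uniform(-1.0, 1.0)       # isotropic scattering in slab geometry
    return transmitted

if __name__ == "__main__":
    n_workers, n_per = 4, 25_000
    with Pool(n_workers) as pool:             # direct parallelization: merge tallies
        tallies = pool.map(batch, [(seed, n_per) for seed in range(n_workers)])
    print("transmission probability =", sum(tallies) / (n_workers * n_per))
```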

  16. Mergers + acquisitions.

    Science.gov (United States)

    Hoppszallern, Suzanna

    2002-05-01

    The hospital sector in 2001 led the health care field in mergers and acquisitions. Most deals involved a network augmenting its presence within a specific region or in a market adjacent to its primary service area. Analysts expect M&A activity to increase in 2002.

  17. The FINUDA data acquisition system

    International Nuclear Information System (INIS)

    Cerello, P.; Marcello, S.; Filippini, V.; Fiore, L.; Gianotti, P.; Raimondo, A.

    1996-07-01

    A parallel, scalable data acquisition system, based on VME, has been developed for use in the FINUDA experiment, scheduled to run at the DAPHNE machine at Frascati starting from 1997. The acquisition software runs on embedded RTPC 8067 processors using the LynxOS operating system. The readout of event fragments is coordinated by a suitable trigger supervisor. Data read by different controllers are transported via a dedicated bus to a Global Event Builder running on a UNIX machine. Commands from and to the VME processors are sent via socket-based network protocols. The network hardware is presently Ethernet, but it can easily be changed to optical fiber.

  18. SU-E-J-11: Measurement of Eye Lens Dose for Varian On-Board Imaging with Different CBCT Acquisition Techniques

    International Nuclear Information System (INIS)

    Deshpande, S; Dhote, D; Kumar, R; Thakur, K

    2015-01-01

    Purpose: To measure actual patient eye lens dose for different cone beam computed tomography (CBCT) acquisition protocols of Varian's On-Board Imaging (OBI) system using an Optically Stimulated Luminescence (OSL) dosimeter, and to study the eye lens dose in relation to patient geometry and the distance from the isocenter to the eye lens. Methods: An OSL dosimeter was used to measure the eye lens dose of each patient. The OSL dosimeter was placed at the center of the patient's forehead during CBCT image acquisition. Eye lens doses were measured for three different cone beam acquisition protocols (standard dose head, low dose head and high quality head) of the Varian On-Board Imaging system. Measured doses were correlated with patient geometry and the distance from the isocenter to the eye lens. Results: The measured eye lens dose was in the range of 1.8 mGy to 3.2 mGy for the standard dose head protocol, 4.5 mGy to 9.9 mGy for the high quality head protocol, and 0.3 mGy to 0.7 mGy for the low dose head protocol. The dose to the eye lens depends upon the position of the isocenter; for posteriorly located tumors the eye lens dose is lower. Conclusion: From the measured doses it can be concluded that, by proper selection of the imaging protocol and the frequency of imaging, it is possible to keep the eye lens dose below the new limit set by the ICRP. However, the undoubted advantages of the imaging system should be counterbalanced by careful consideration of the imaging protocol, especially for very intense imaging sequences for Adaptive Radiotherapy or IMRT.

  19. Three-dimensional seismic survey planning based on the newest data acquisition design technique; Saishin no data shutoku design ni motozuku sanjigen jishin tansa keikaku

    Energy Technology Data Exchange (ETDEWEB)

    Minehara, M; Nakagami, K; Tanaka, H [Japan National Oil Corp., Tokyo (Japan). Technology Research Center]

    1996-10-01

    The theory of parameter setting for data acquisition is summarized, mainly with regard to the seismic source and receiver geometry. This paper also introduces an example of survey planning for a three-dimensional land seismic exploration in progress. For the design of data acquisition, fundamental parameters are first determined on the basis of the characteristics of reflection records in a given district, and then the layout of the survey is determined. In this study, information from modeling based on the existing interpretation of geologic structures is also utilized and reflected in the survey specifications. A land three-dimensional seismic survey was designed. The ground surface of the surveyed area consists of rice fields and hilly regions. The target was a nose-shaped structure at a depth of about 2,500 m. A survey area of 4 km × 5 km was set. Records of the shallow layers cannot be obtained when near offsets are not ensured, and quality control of this distribution was important for grasping the required shallow structure. In this survey, the source points could be secured more reliably than initially expected, which ensured sufficient near-offset coverage. 2 refs., 2 figs.

  20. Mergers & Acquisitions

    DEFF Research Database (Denmark)

    Fomcenco, Alex

    This dissertation is a legal dogmatic thesis, the goal of which is to describe and analyze the current state of law in Europe in regard to some relevant selected elements related to mergers and acquisitions, and the adviser’s counsel in this regard. Having regard to the topic of the dissertation...... and fiscal neutrality, group-related issues, holding-structure issues, employees, stock exchange listing issues, and corporate nationality....

  1. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of existing serial programs. Once the limitations are known, the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker: Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI), working on the auto-parallelizing compiler (KAP), and was involved in th...

  2. Fast MR image reconstruction for partially parallel imaging with arbitrary k-space trajectories.

    Science.gov (United States)

    Ye, Xiaojing; Chen, Yunmei; Lin, Wei; Huang, Feng

    2011-03-01

    Both acquisition and reconstruction speed are crucial for magnetic resonance (MR) imaging in clinical applications. In this paper, we present a fast reconstruction algorithm for SENSE in partially parallel MR imaging with arbitrary k-space trajectories. The proposed method is a combination of variable splitting, the classical penalty technique and the optimal gradient method. Variable splitting and the penalty technique reformulate the SENSE model with sparsity regularization as an unconstrained minimization problem, which can be solved by alternating two simple minimizations: one is total-variation- and wavelet-based denoising, which can be quickly solved by several recent numerical methods; the other involves a linear inversion, which is solved by the optimal first-order gradient method in our algorithm to significantly improve the performance. Comparisons with several recent parallel imaging algorithms indicate that the proposed method significantly improves the computation efficiency and achieves state-of-the-art reconstruction quality.
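
    The splitting-plus-penalty structure can be seen on a toy problem: to minimize ||Ax - y||² + λ||x||₁, substitute z for x inside the regularizer, penalize ||x - z||², and alternate a shrinkage step in z with a gradient step in x. This mirrors the alternation described above, with the TV/wavelet denoising subproblem reduced to simple soft-thresholding for brevity (all parameters are illustrative):

```python
import numpy as np

def split_penalty_l1(A, y, lam=0.1, beta=10.0, iters=300):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 via splitting x ~ z plus a penalty."""
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + beta)   # safe step for the smooth part
    for _ in range(iters):
        # "denoising" subproblem in z: closed-form soft-thresholding
        z = np.sign(x) * np.maximum(np.abs(x) - lam / beta, 0.0)
        # "linear inversion" subproblem in x: gradient step on data + penalty terms
        x = x - step * (A.T @ (A @ x - y) + beta * (x - z))
    return x

# tiny demo: recover a sparse vector from noisy measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200); x_true[rng.choice(200, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = split_penalty_l1(A, y)
```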

  3. Feasibility and Diagnostic Accuracy of Whole Heart Coronary MR Angiography Using Free-Breathing 3D Balanced Turbo-Field-Echo with SENSE and the Half-Fourier Acquisition Technique

    International Nuclear Information System (INIS)

    Kim, Young Jin; Seo, Jae Seung; Choi, Byoung Wook; Choe, Kyu Ok; Jang, Yang Soo; Ko, Young Guk

    2006-01-01

    We wanted to assess the feasibility and diagnostic accuracy of whole-heart coronary magnetic resonance angiography (MRA) using 3D balanced turbo-field-echo (b-TFE) with SENSE and the half-Fourier acquisition technique for identifying stenoses of the coronary arteries. Twenty-one patients who underwent both whole-heart coronary MRA examinations and conventional catheter coronary angiography examinations were enrolled in the study. The whole-heart coronary MRA images were acquired using a navigator-gated 3D b-TFE sequence with SENSE and the half-Fourier acquisition technique to reduce the acquisition time. The imaging slab covered the whole heart (80 contiguous slices with a reconstructed slice thickness of 1.5 mm) along the transverse axis. The quality of the images was evaluated using a 5-point scale (0 - uninterpretable, 1 - poor, 2 - fair, 3 - good, 4 - excellent). Ten coronary segments were evaluated in each case: the left main coronary artery (LM), and the proximal, middle and distal segments of the left anterior descending (LAD), the left circumflex (LCX) and the right coronary artery (RCA). The diagnostic accuracy of whole-heart coronary MRA for detecting significant coronary artery stenosis was determined on a segment-by-segment basis, and it was compared with the results obtained by conventional catheter angiography, the gold standard. The mean image quality was 3.7 in the LM, 3.2 in the LAD, 2.5 in the LCX, and 3.3 in the RCA (the overall image quality was 3.0 ± 0.1). 168 (84%) of the 201 segments had an acceptable image quality (≥ grade 2). The sensitivity, specificity, accuracy, negative predictive value and positive predictive value of the whole-heart coronary MRA images for detecting significant stenosis were 81.3%, 92.1%, 91.1%, 97.9%, and 52.0%, respectively. The mean coronary MRA acquisition time was 9 min 22 sec (± 125 sec). Whole-heart coronary MRA is a feasible technique, and it has good potential to ...
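
    The reported accuracy figures follow from a standard 2 × 2 confusion matrix over the 168 evaluable segments. The abstract does not list the raw counts, but the values below, inferred for illustration, reproduce the quoted percentages exactly:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard segment-level diagnostic metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# counts inferred to match the reported 81.3/92.1/91.1/97.9/52.0% on 168 segments
print(diagnostic_accuracy(tp=13, fp=12, fn=3, tn=140))
```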

  4. Refinement of Parallel and Reactive Programs

    OpenAIRE

    Back, R. J. R.

    1992-01-01

    We show how to apply the refinement calculus to stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement c...

  5. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  6. Quantitative diffusion MRI using reduced field-of-view and multi-shot acquisition techniques: Validation in phantoms and prostate imaging.

    Science.gov (United States)

    Zhang, Yuxin; Holmes, James; Rabanillo, Iñaki; Guidon, Arnaud; Wells, Shane; Hernando, Diego

    2018-04-17

    To evaluate the reproducibility of quantitative diffusion measurements obtained with reduced field-of-view (rFOV) and multi-shot EPI (msEPI) acquisitions, using single-shot EPI (ssEPI) as a reference. Diffusion phantom experiments, and prostate diffusion-weighted imaging in healthy volunteers and patients with known or suspected prostate cancer, were performed across the three different sequences. Quantitative diffusion measurements of the apparent diffusion coefficient (ADC), and diffusion kurtosis parameters (healthy volunteers), were obtained and compared across diffusion sequences (rFOV, msEPI, and ssEPI). Other possible confounding factors, like b-value combinations and acquisition parameters, were also investigated. Both msEPI and rFOV showed reproducible quantitative diffusion measurements relative to ssEPI; no significant difference in ADC was observed across pulse sequences in the standard diffusion phantom (p = 0.156), healthy volunteers (p ≥ 0.12) or patients (p ≥ 0.26). The ADC values within the non-cancerous central gland and peripheral zone of patients were 1.29 ± 0.17 × 10⁻³ mm²/s and 1.74 ± 0.23 × 10⁻³ mm²/s, respectively. However, differences in quantitative diffusion parameters were observed across different numbers of averages for rFOV, and across b-value groups and diffusion models for all three sequences. Both rFOV and msEPI have the potential to provide high image quality with reproducible quantitative diffusion measurements in prostate diffusion MRI. Copyright © 2018 Elsevier Inc. All rights reserved.
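
    The ADC values quoted above come from a monoexponential model S(b) = S₀·exp(-b·ADC); a minimal log-linear least-squares fit is sketched here with made-up b-values and signals (the kurtosis model also used in the study is not shown):

```python
import numpy as np

def fit_adc(b_values, signals):
    """Log-linear least-squares fit of S(b) = S0*exp(-b*ADC); returns ADC."""
    slope, _ = np.polyfit(np.asarray(b_values, float), np.log(signals), deg=1)
    return -slope

b = [0, 500, 1000]                           # s/mm^2 (hypothetical protocol)
s = [1000.0, 530.0, 290.0]                   # hypothetical signal intensities
print(f"ADC = {fit_adc(b, s):.2e} mm^2/s")   # ~1.2e-3, of the order reported above
```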

  7. Ultrasound Vector Flow Imaging: Part II: Parallel Systems

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Nikolov, Svetoslav Ivanov; Yu, Alfred C. H.

    2016-01-01

    The paper gives a review of the current state-of-the-art in ultrasound parallel acquisition systems for flow imaging using spherical and plane wave emissions. The imaging methods are explained along with the advantages of using these very fast and sensitive velocity estimators. These experimental... ultrasound imaging for studying brain function in animals. The paper explains the underlying acquisition and estimation methods for fast 2-D and 3-D velocity imaging and gives a number of examples. Future challenges and the potentials of parallel acquisition systems for flow imaging are also discussed.

  8. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment. Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi...

  9. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  10. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up a calculation process that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication with matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and implemented on the GPU (graphics processing unit).
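
    The essence of partitioned parallel y = A·x can be shown with a deliberately naive equal-rows split; the paper's contribution is choosing the partition by hypergraph partitioning (balancing load and minimizing communication on the GPU), which the placeholder split below does not attempt:

```python
import numpy as np
from multiprocessing import Pool

def row_block_times_x(args):
    A_block, x = args
    return A_block @ x                 # each worker computes its slice of y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((1000, 800))
    x = rng.random(800)
    # naive equal-rows partition; a hypergraph partitioner would instead choose
    # the split to balance nonzeros and minimize communication volume
    blocks = np.array_split(A, 4, axis=0)
    with Pool(4) as pool:
        y = np.concatenate(pool.map(row_block_times_x, [(b, x) for b in blocks]))
    assert np.allclose(y, A @ x)       # same result as the serial product
```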

  11. Influence of high magnetic field strengths and parallel acquisition strategies on image quality in cardiac 2D CINE magnetic resonance imaging: comparison of 1.5 T vs. 3.0 T

    International Nuclear Information System (INIS)

    Gutberlet, Matthias; Schwinge, Kerstin; Freyhardt, Patrick; Spors, Birgit; Grothoff, Matthias; Denecke, Timm; Luedemann, Lutz; Felix, Roland; Noeske, Ralph; Niendorf, Thoralf

    2005-01-01

    The aim of this paper is to examine the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and image quality of cardiac CINE imaging at 1.5 T and 3.0 T. Twenty volunteers underwent cardiac magnetic resonance imaging (MRI) examinations using a 1.5-T and a 3.0-T scanner. Three different sets of breath-held, electrocardiogram (ECG)-gated CINE imaging techniques were employed: (1) unaccelerated SSFP (steady-state free precession), (2) accelerated SSFP imaging and (3) gradient-echo-based myocardial tagging. Two-dimensional CINE SSFP at 3.0 T revealed an SNR improvement of 103% and a CNR increase of 19% as compared to the results obtained at 1.5 T. The SNR reduction in accelerated 2D CINE SSFP imaging was larger at 1.5 T (37%) than at 3.0 T (26%). The mean SNR and CNR increases at 3.0 T obtained for the tagging sequence were 88% and 187%, respectively. At 3.0 T, the tagging saturation bands persisted throughout the entire cardiac cycle. For comparison, the saturation bands were significantly diminished at 1.5 T during end-diastole. For 2D CINE SSFP imaging, no significant difference in the left ventricular volumetry and in the overall image quality was obtained. For myocardial tagging, image quality was significantly improved at 3.0 T. The SNR reduction in accelerated SSFP imaging was overcompensated by the increase in the baseline SNR at 3.0 T and did not result in any image quality degradation. For cardiac tagging techniques, 3.0 T was highly beneficial, which holds the promise of improving its diagnostic value. (orig.)

  12. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad

    2017-01-01

    This report reflects my work on the parallelization of TMVA machine learning algorithms integrated into the ROOT Data Analysis Framework during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing is applied, the parallelization techniques, and the resulting changes in execution time as the number of workers varies.

  13. Parallel fuzzy connected image segmentation on GPU

    OpenAIRE

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm impleme...

  14. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  15. Programming massively parallel processors a hands-on approach

    CERN Document Server

    Kirk, David B

    2010-01-01

    Programming Massively Parallel Processors discusses basic concepts about parallel programming and GPU architecture. "Massively parallel" refers to the use of a large number of processors to perform a set of computations in a coordinated parallel way. The book details various techniques for constructing parallel programs. It also discusses the development process, performance level, floating-point format, parallel patterns, and dynamic parallelism. The book serves as a teaching guide where parallel programming is the main topic of the course. It builds on the basics of C programming for CUDA, a parallel programming environment that is supported on NVIDIA GPUs. Composed of 12 chapters, the book begins with basic information about the GPU as a parallel computer source. It also explains the main concepts of CUDA, data parallelism, and the importance of memory access efficiency using CUDA. The target audience of the book is graduate and undergraduate students from all science and engineering disciplines who ...

  16. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed-parallel and workstation-cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit computational fluid dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly with the Message Passing Interface (MPI), using parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.
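
    The inner Newton-Krylov mechanics (the Jacobian accessed only through matrix-vector products) are available off the shelf; as a small serial illustration, assuming SciPy, a Bratu-type 1D boundary-value problem stands in for the project's Euler codes, and no Schwarz preconditioner is shown:

```python
import numpy as np
from scipy.optimize import newton_krylov

N = 50
h = 1.0 / (N + 1)

def residual(u):
    """Discretized -u'' - exp(u) = 0 on (0,1) with u(0) = u(1) = 0 (Bratu-type)."""
    upad = np.concatenate(([0.0], u, [0.0]))
    return (2.0 * u - upad[:-2] - upad[2:]) / h**2 - np.exp(u)

# the Krylov solver only ever applies the (finite-difference) Jacobian to vectors
u = newton_krylov(residual, np.zeros(N), f_tol=1e-10)
print("max u =", u.max())
```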

  17. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.; He, Z.; Xiao, M.; Zhang, Z.

    2014-01-01

    is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI)

  18. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  19. Amplitudes, acquisition and imaging

    Energy Technology Data Exchange (ETDEWEB)

    Bloor, Robert

    1998-12-31

    Accurate seismic amplitude information is important for the successful evaluation of many prospects, and the importance of such amplitude information is increasing with the advent of time-lapse seismic techniques. It is now widely accepted that the proper treatment of amplitudes requires seismic imaging in the form of either time or depth migration. A key factor in seismic imaging is the spatial sampling of the data and its relationship to the imaging algorithms. This presentation demonstrates that acquisition-caused spatial sampling irregularity can affect the seismic imaging and perturb amplitudes. Equalization helps to balance the amplitudes, and the dealiasing strategy improves the imaging further when there are azimuth variations. Equalization and dealiasing can also help with the acquisition irregularities caused by shot and receiver dislocation or missing traces. 2 refs., 2 figs.

  20. Rapid acquisition of operant conditioning in 5-day-old rat pups: a new technique articulating suckling-related motor activity and milk reinforcement.

    Science.gov (United States)

    Arias, Carlos; Spear, Norman E; Molina, Juan Carlos; Molina, Agustin

    2007-09-01

    Newborn rats are capable of obtaining milk by attaching to a surrogate nipple. During this procedure pups show a gradual increase in head and forelimb movements oriented towards the artificial device that are similar to those observed during nipple attachment. In the present study the probability of execution of these behaviors was analyzed as a function of their contingency with intraoral milk infusion using brief training procedures (15 min). Five-day-old pups were positioned on a smooth surface with access to a touch-sensitive sensor. Physical contact with the sensor activated an infusion pump which served to deliver intraoral milk reinforcement (Paired group). Yoked controls received the reinforcer when Paired neonates touched the sensor. Paired pups trained under a continuous reinforcement schedule emitted significantly more responses than Yoked controls following two (Experiment 1) or one training session (Experiment 2). These differences were also observed during an extinction session conducted immediately after training. The level of maternal deprivation before training (3 or 6 hr) or the volume of milk delivered (1.0 or 1.5 microl per pulse) did not affect acquisition or extinction performance. In addition, it was observed that the rate of responding of Paired pups during the early phase of the extinction session significantly predicted subsequent levels of acceptance of the reinforcer. These results indicate that the frequency of suckling-related behaviors can be rapidly modified by means of associative operant processes. The operant procedure described here represents an alternative tool for the ontogenetic analysis of self-administration and seeking behavior.

  1. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
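
    Of the three molecular dynamics decompositions listed, the replicated-data scheme is the simplest to sketch: every rank holds all coordinates, computes a strided subset of the pairwise forces, and an all-reduce sums the partial force arrays. A hedged mpi4py/NumPy toy (Lennard-Jones pairs, no cutoff, no neighbor lists; all parameters illustrative):

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        def lj_forces_replicated(x, eps=1.0, sig=1.0):
            """Replicated data: every rank holds all positions x (n, 3) and
            computes pair forces for a strided subset of rows i; Allreduce
            then sums the partial force arrays on all ranks."""
            n = len(x)
            f = np.zeros_like(x)
            for i in range(rank, n - 1, size):     # round-robin row ownership
                d = x[i] - x[i + 1:]               # displacements to all j > i
                r2 = (d * d).sum(axis=1)
                s6 = (sig * sig / r2) ** 3
                w = 24.0 * eps * (2.0 * s6 * s6 - s6) / r2
                fij = w[:, None] * d               # force on i from each j
                f[i] += fij.sum(axis=0)
                f[i + 1:] -= fij                   # Newton's third law
            total = np.empty_like(f)
            comm.Allreduce(f, total, op=MPI.SUM)
            return total

        # All ranks must hold identical coordinates in replicated-data mode.
        x = np.empty((256, 3))
        if rank == 0:
            x[:] = np.random.default_rng(0).standard_normal((256, 3)) * 3.0
        comm.Bcast(x, root=0)
        print(rank, lj_forces_replicated(x)[0])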

  2. Pediatric bowel MRI - accelerated parallel imaging in a single breathhold

    International Nuclear Information System (INIS)

    Hohl, C.; Honnef, D.; Krombach, G.; Muehlenbruch, G.; Guenther, R.W.; Niendorf, T.; Ocklenburg, C.; Wenzl, T.G.

    2008-01-01

    Purpose: To compare highly accelerated parallel MRI of the bowel with conventional balanced FFE sequences in children with inflammatory bowel disease (IBD). Materials and methods: 20 children with suspected or proven IBD underwent MRI using a 1.5 T scanner after oral administration of 700-1000 ml of a mannitol solution and an additional enema. The examination started with a 4-channel receiver coil and a conventional balanced FFE sequence in axial (2.5 s/slice) and coronal (4.7 s/slice) planes. Afterwards, highly accelerated (R = 5) balanced FFE sequences in axial (0.5 s/slice) and coronal (0.9 s/slice) planes were performed using a 32-channel receiver coil and parallel imaging (SENSE). Both receiver coils achieved a resolution of 0.88 x 0.88 mm with a slice thickness of 5 mm (coronal) and 6 mm (axial), respectively. Using the conventional imaging technique, 4 - 8 breathholds were needed to cover the whole abdomen, while parallel imaging shortened the acquisition time down to a single breathhold. Two blinded radiologists did a consensus reading of the images regarding pathological findings, image quality, susceptibility to artifacts and bowel distension. The results for both coil systems were compared using the kappa (κ) coefficient; differences in the susceptibility to artifacts were checked with the Wilcoxon signed rank test. Statistical significance was assumed at p < 0.05. Results: 13 of the 20 children had inflammatory bowel wall changes at the time of the examination, which could be correctly diagnosed with both coil systems in 12 of 13 cases (92%). The comparison of both coil systems showed a good agreement for pathological findings (κ = 0.74 - 1.0) and the image quality. Using parallel imaging, significantly more artifacts were observed (κ = 0.47)

  3. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with the equational programming language (EPL). Our approach is based on program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  4. Microcomputer data acquisition and control.

    Science.gov (United States)

    East, T D

    1986-01-01

    In medicine and biology there are many tasks that involve routine, well-defined procedures. These tasks are ideal candidates for computerized data acquisition and control. As the performance of microcomputers rapidly increases and cost continues to go down, the temptation to automate the laboratory becomes great. To the novice computer user the choices of hardware and software are overwhelming, and sadly most computer salespersons are not at all familiar with real-time applications. If you want to bill your patients you have hundreds of packaged systems to choose from; however, if you want to do real-time data acquisition the choices are very limited and confusing. The purpose of this chapter is to provide the novice computer user with the basics needed to set up a real-time data acquisition system with the common microcomputers. This chapter will cover the following issues necessary to establish a real-time data acquisition and control system: Analysis of the research problem: Definition of the problem; Description of data and sampling requirements; Cost/benefit analysis. Choice of microcomputer hardware and software: Choice of microprocessor and bus structure; Choice of operating system; Choice of layered software. Digital data acquisition: Parallel data transmission; Serial data transmission; Hardware and software available. Analog data acquisition: Description of amplitude and frequency characteristics of the input signals; Sampling theorem; Specification of the analog-to-digital converter; Hardware and software available; Interface to the microcomputer. Microcomputer control: Analog output; Digital output; Closed-loop control. Microcomputer data acquisition and control in the 21st century: what is in the future? High-speed digital medical equipment networks; Medical decision making and artificial intelligence.

  5. Development and application of efficient strategies for parallel magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Breuer, F.

    2006-07-01

    Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required, which are technically challenging to provide. In addition to significant hardware costs, peripheral neurostimulation and the surpassing of admissible acoustic noise levels may occur. Today's whole body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990s. Within the last years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image artifacts

  6. Development and application of efficient strategies for parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Breuer, F.

    2006-01-01

    Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required, which are technically challenging to provide. In addition to significant hardware costs, peripheral neurostimulation and the surpassing of admissible acoustic noise levels may occur. Today's whole body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990s. Within the last years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image artifacts
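
    As a concrete illustration of the partially parallel acquisition idea described in this record, the toy Python/NumPy code below unfolds a Cartesian SENSE acquisition with reduction factor R = 2: each folded pixel is a coil-sensitivity-weighted sum of two true pixels half a FOV apart, recovered by a tiny per-pixel least-squares solve. The shapes and the plain (unregularized) inverse are illustrative simplifications of clinical implementations.

        import numpy as np

        def sense_unfold_r2(aliased, smaps):
            """Cartesian SENSE unfolding, reduction factor R = 2 along axis 1.
            aliased: (ncoils, ny//2, nx) folded complex coil images
            smaps:   (ncoils, ny, nx) complex coil sensitivity maps
            Each folded pixel mixes the two true pixels ny//2 apart, weighted
            by the coil sensitivities; unfold with per-pixel least squares."""
            nc, ny2, nx = aliased.shape
            out = np.zeros((2 * ny2, nx), dtype=complex)
            for y in range(ny2):
                for x in range(nx):
                    S = np.stack([smaps[:, y, x], smaps[:, y + ny2, x]], axis=1)
                    rho, *_ = np.linalg.lstsq(S, aliased[:, y, x], rcond=None)
                    out[y, x], out[y + ny2, x] = rho
            return out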

  7. Continued Data Acquisition Development

    Energy Technology Data Exchange (ETDEWEB)

    Schwellenbach, David [National Security Technologies, LLC. (NSTec), Mercury, NV (United States)

    2017-11-27

    This task focused on improving techniques for integrating data acquisition of secondary particles correlated in time with detected cosmic-ray muons. Scintillation detectors with Pulse Shape Discrimination (PSD) capability show the most promise as a detector technology based on work in FY13. Typically, PSD parameters are determined prior to an experiment and the results are based on these parameters. By saving data in list mode, including the fully digitized waveform, any experiment can effectively be replayed to adjust PSD and other parameters for the best data capture. List mode requires time synchronization of two independent data acquisition (DAQ) systems: the muon tracker and the particle detector system. Techniques to synchronize these systems were studied. Two basic techniques were identified: real-time mode and sequential mode. Real-time mode is preferred but has proven to be a significant challenge, since two FPGA systems with different clocking parameters must be synchronized. Sequential processing is expected to work with virtually any DAQ but requires more post-processing to extract the data.

  8. High temporal resolution functional MRI using parallel echo volumar imaging

    International Nuclear Information System (INIS)

    Rabrait, C.; Ciuciu, P.; Ribes, A.; Poupon, C.; Dehaine-Lambertz, G.; LeBihan, D.; Lethimonnier, F.; Le Roux, P.

    2008-01-01

    Purpose: To combine parallel imaging with 3D single-shot acquisition (echo volumar imaging, EVI) in order to acquire high temporal resolution volumar functional MRI (fMRI) data. Materials and Methods: An improved EVI sequence was associated with parallel acquisition and field of view reduction in order to acquire a large brain volume in 200 msec. Temporal stability and functional sensitivity were increased through optimization of all imaging parameters and Tikhonov regularization of the parallel reconstruction. Two human volunteers were scanned with parallel EVI in a 1.5 T whole-body MR system while submitted to a slow event-related auditory paradigm. Results: Thanks to parallel acquisition, the EVI volumes display a low level of geometric distortions and signal losses. After removal of low-frequency drifts and physiological artifacts, activations were detected in the temporal lobes of both volunteers and voxel-wise hemodynamic response functions (HRF) could be computed. On these HRFs, different habituation behaviors in response to sentence repetition could be identified. Conclusion: This work demonstrates the feasibility of high temporal resolution 3D fMRI with parallel EVI. Combined with advanced estimation tools, this acquisition method should prove useful to measure neural activity timing differences or study the nonlinearities and non-stationarities of the BOLD response. (authors)

  9. Self-calibrated multiple-echo acquisition with radial trajectories using the conjugate gradient method (SMART-CG).

    Science.gov (United States)

    Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F

    2011-04-01

    To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self-correction was developed. The information also allows simultaneous support for parallel imaging for multiple-coil acquisitions. Without a separate field map acquisition, a phase estimate from each echo in a multiple-echo train was generated. When using a multiple-channel coil, magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes and, in the case of multiple-coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single-channel, multiple-channel, and undersampled data. Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve the image quality of multiecho radial imaging, an important technique for fast three-dimensional MRI data acquisition. Copyright © 2011 Wiley-Liss, Inc.

  10. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts, however, have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuations...

  11. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  12. A wavelet-based regularized reconstruction algorithm for SENSE parallel MRI with applications to neuroimaging

    International Nuclear Information System (INIS)

    Chaari, L.; Pesquet, J.Ch.; Chaari, L.; Ciuciu, Ph.; Benazza-Benyahia, A.

    2011-01-01

    To reduce scanning time and/or improve spatial/temporal resolution in some Magnetic Resonance Imaging (MRI) applications, parallel MRI acquisition techniques with multiple-coil acquisition have emerged since the early 1990s as powerful imaging methods that allow a faster acquisition process. In these techniques, the full FOV image has to be reconstructed from the acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used Sensitivity Encoding (SENSE) method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity profiles. In this paper, we aim at achieving accurate image reconstruction under degraded experimental conditions (low magnetic field and high reduction factor), in which neither the SENSE method nor Tikhonov regularization in the image domain gives convincing results. To this end, we present a novel method for SENSE-based reconstruction which proceeds with regularization in the complex wavelet domain by promoting sparsity. The proposed approach relies on a fast algorithm that enables the minimization of regularized non-differentiable criteria including more general penalties than a classical l1 term. To further enhance the reconstructed image quality, local convex constraints are added to the regularization process. In vivo human brain experiments carried out on Gradient-Echo (GRE) anatomical and Echo Planar Imaging (EPI) functional MRI data at 1.5 T indicate that our algorithm provides reconstructed images with reduced artifacts for high reduction factors. (authors)

  13. Bayer image parallel decoding based on GPU

    Science.gov (United States)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectrical tracking systems, Bayer images are traditionally decoded on the CPU, which becomes too slow when the images grow large, for example 2K×2K×16bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU), which supports the CUDA architecture. The decoding procedure can be divided into three parts: a serial part, a task-parallelism part, and a data-parallelism part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce the execution time, the task-parallelism part is optimized with OpenMP techniques, while the data-parallelism part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization, and texture memory optimization. In particular, the IDWT is significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the serial CPU method.
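
    The separability trick mentioned in the record (recasting the serial 2D IDWT as 1D transforms) can be sketched compactly. The NumPy toy below uses the orthonormal Haar wavelet as a stand-in for the paper's unspecified filter bank and assumes one common subband layout; on a GPU, each vectorized line would map to a data-parallel kernel.

        import numpy as np

        def ihaar_1d(a, d):
            """Inverse orthonormal Haar step along the last axis: interleave
            reconstructed even/odd samples from approximation a and detail d.
            Every output sample depends on one (a, d) pair only, so the axis
            is embarrassingly parallel."""
            out_shape = a.shape[:-1] + (2 * a.shape[-1],)
            x = np.empty(out_shape, dtype=np.result_type(a.dtype, float))
            x[..., 0::2] = (a + d) / np.sqrt(2.0)
            x[..., 1::2] = (a - d) / np.sqrt(2.0)
            return x

        def ihaar_2d(ll, lh, hl, hh):
            """One 2D inverse level as two separable 1D passes: first along
            columns (LL+LH -> L, HL+HH -> H), then along rows (L+H -> image).
            Subband naming follows one common convention."""
            lo = ihaar_1d(ll.T, lh.T).T
            hi = ihaar_1d(hl.T, hh.T).T
            return ihaar_1d(lo, hi)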

  14. Parallel Sn iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (Sn) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional Sn transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial Sn algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and payoff appears substantial

  15. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of... in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.

  16. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
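
    Of the patterns named above, the prefix scan is the least obvious to parallelize, so a short sketch may help. The NumPy toy below implements the Hillis-Steele inclusive scan: ceil(log2(n)) sweeps in which every element update is independent, so each sweep corresponds to one data-parallel step (e.g., one GPU kernel launch).

        import numpy as np

        def inclusive_scan(a, op=np.add):
            """Hillis-Steele inclusive prefix scan. After sweep k, element i
            holds the combination of a[max(0, i - 2**k + 1) .. i]."""
            x = a.copy()
            shift = 1
            while shift < len(x):
                x[shift:] = op(x[shift:], x[:-shift])   # RHS evaluated first
                shift *= 2
            return x

        print(inclusive_scan(np.ones(8, dtype=int)))    # [1 2 3 4 5 6 7 8]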

  17. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    The Application Portable Parallel Library (APPL) computer program is a subroutine-based message-passing software library intended to provide a consistent interface to a variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another. The user develops an application program once and then easily moves it from the parallel computer on which it was created to another parallel computer. ("Parallel computer" also includes a heterogeneous collection of networked computers.) APPL is written in the C language, with one FORTRAN 77 subroutine for UNIX-based computers, and is callable from application programs written in C or FORTRAN 77.

  18. 2017 NAIP Acquisition Map

    Data.gov (United States)

    Farm Service Agency, Department of Agriculture — Planned States for 2017 NAIP acquisition and acquisition status layer (updated daily). Updates to the acquisition seasons may be made during the season to...

  19. Simultaneous acquisition of three NMR spectra in a single ...

    Indian Academy of Sciences (India)

    Simultaneous acquisition of three NMR spectra in a single experiment ... set, which is based on a combination of different fast data acquisition techniques such as G-matrix ... The sign and intensity of the CHn resonance depends on the delay.

  20. Model-based Sensor Data Acquisition and Management

    OpenAIRE

    Aggarwal, Charu C.; Sathe, Saket; Papaioannou, Thanasis G.; Jeung, Ho Young; Aberer, Karl

    2012-01-01

    In recent years, due to the proliferation of sensor networks, there has been a genuine need for research on techniques for sensor data acquisition and management. To this end, a large number of techniques have emerged that advocate model-based sensor data acquisition and management. These techniques use mathematical models for performing various day-to-day tasks involved in managing sensor data. In this chapter, we survey the state-of-the-art techniques for model-based sensor data acquisition...
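
    The record is truncated, but one representative scheme from the model-based acquisition literature is dual prediction: the sensor and the server run the same model, and the sensor transmits only when the model's prediction misses the true reading by more than a threshold. A hedged Python toy (the last-value model and the threshold are illustrative choices, not necessarily the chapter's):

        import numpy as np

        def dual_prediction(readings, eps=0.2):
            """Sensor and server share the same trivial model (the last
            transmitted value); the sensor sends a reading only when the
            prediction is off by more than eps. Returns the transmitted
            indices and the server-side estimate of the full series."""
            sent, est, last = [], np.empty(len(readings)), None
            for i, v in enumerate(readings):
                if last is None or abs(v - last) > eps:
                    last = v                  # transmit; both models update
                    sent.append(i)
                est[i] = last                 # server's belief at time i
            return sent, est

        rng = np.random.default_rng(1)
        r = np.sin(np.linspace(0, 6, 200)) + 0.05 * rng.standard_normal(200)
        sent, est = dual_prediction(r)
        print(f"transmitted {len(sent)} of {len(r)} samples")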

  1. Instrument Variables for Reducing Noise in Parallel MRI Reconstruction

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2017-01-01

    Generalized autocalibrating partially parallel acquisition (GRAPPA) has been a widely used parallel MRI technique. However, noise deteriorates the reconstructed image when the reduction factor increases, or even at low reduction factors for some noisy datasets. Noise originating in the scanner propagates through the fitting and interpolation procedures of GRAPPA and distorts the final reconstructed image. The basic idea we propose to improve GRAPPA is to remove noise from a system identification perspective. In this paper, we first analyze the GRAPPA noise problem from a noisy input-output system perspective; then, a new framework based on the errors-in-variables (EIV) model is developed for analyzing the noise generation mechanism in GRAPPA and for designing a concrete method, instrument variables (IV) GRAPPA, to remove noise. The proposed EIV framework opens the possibility that noiseless GRAPPA reconstruction could be achieved by existing methods that solve the EIV problem other than the IV method. Experimental results show that the proposed reconstruction algorithm removes noise better than conventional GRAPPA, as validated with both phantom and in vivo brain data.
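
    For readers unfamiliar with the GRAPPA fitting and interpolation steps through which the noise propagates, here is a toy one-dimensional GRAPPA in Python/NumPy: a two-neighbor kernel is calibrated by least squares on the fully sampled ACS block and then applied to synthesize the skipped lines. The geometry and kernel size are illustrative simplifications of real GRAPPA; note that noise sits on both sides of the calibration fit, which is precisely the errors-in-variables situation the paper analyzes.

        import numpy as np

        def grappa_1d(ksp, acs, R=2):
            """Toy GRAPPA along one k-space axis (R = 2, even lines acquired).
            ksp: (nc, ny) undersampled k-space, odd lines zeroed
            acs: (nc, nacs) fully sampled autocalibration block
            Calibration fits a kernel predicting each interior ACS line from
            its two neighbors across all coils; synthesis applies the kernel
            to fill the skipped interior lines."""
            nc, ny = ksp.shape
            nacs = acs.shape[1]
            src = np.vstack([np.concatenate([acs[:, i - 1], acs[:, i + 1]])
                             for i in range(1, nacs - 1)])            # (m, 2*nc)
            tgt = np.vstack([acs[:, i] for i in range(1, nacs - 1)])  # (m, nc)
            W, *_ = np.linalg.lstsq(src, tgt, rcond=None)             # kernel
            out = ksp.copy()
            for i in range(1, ny - 1, R):          # interior missing lines
                s = np.concatenate([ksp[:, i - 1], ksp[:, i + 1]])
                out[:, i] = s @ W
            return out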

  2. Parallel sparse direct solver for integrated circuit simulation

    CERN Document Server

    Chen, Xiaoming; Yang, Huazhong

    2017-01-01

    This book describes algorithmic methods and parallelization techniques to design a parallel sparse direct solver which is specifically targeted at integrated circuit simulation problems. The authors describe a complete flow and detailed parallel algorithms of the sparse direct solver. They also show how to improve the performance by simple but effective numerical techniques. The sparse direct solver techniques described can be applied to any SPICE-like integrated circuit simulator and have been proven to deliver high performance in actual circuit simulation. Readers will benefit from the state-of-the-art parallel integrated circuit simulation techniques described in this book, especially the latest parallel sparse matrix solution techniques. · Introduces complicated algorithms of sparse linear solvers, using concise principles and simple examples, without complex theory or lengthy derivations; · Describes a parallel sparse direct solver that can be adopted to accelerate any SPICE-like integrated circuit simulato...

  3. Data acquisition systems at Fermilab

    International Nuclear Information System (INIS)

    Votava, M.

    1999-01-01

    Experiments at Fermilab require an ongoing program of development for high speed, distributed data acquisition systems. The physics program at the lab has recently started the operation of a Fixed Target run in which experiments are running the DART[1] data acquisition system. The CDF and D0 experiments are preparing for the start of the next Collider run in mid 2000. Each will read out on the order of 1 million detector channels. In parallel, future experiments such as BTeV R&D and Minos have already started prototype and test beam work. BTeV in particular has challenging data acquisition system requirements, with an input rate of 1500 Gbytes/sec into Level 1 buffers and a logging rate of 200 Mbytes/sec. This paper will present a general overview of these data acquisition systems on three fronts: those currently in use, those to be deployed for the Collider Run in 2000, and those proposed for future experiments. It will primarily focus on the CDF and D0 architectures and tools.

  4. Syntax acquisition.

    Science.gov (United States)

    Crain, Stephen; Thornton, Rosalind

    2012-03-01

    Every normal child acquires a language in just a few years. By 3 or 4 years old, children have effectively become adults in their abilities to produce and understand endlessly many sentences in a variety of conversational contexts. There are two alternative accounts of the course of children's language development. These different perspectives can be traced back to the nature versus nurture debate about how knowledge is acquired in any cognitive domain. One perspective dates back to Plato's dialog 'The Meno'. In this dialog, the protagonist, Socrates, demonstrates to Meno, an aristocrat in Ancient Greece, that a young slave knows more about geometry than he could have learned from experience. By extension, Plato's Problem refers to any gap between experience and knowledge. How children fill in the gap in the case of language continues to be the subject of much controversy in cognitive science. Any model of language acquisition must address three factors, inter alia: 1. The knowledge children accrue; 2. The input children receive (often called the primary linguistic data); 3. The nonlinguistic capacities of children to form and test generalizations based on the input. According to the famous linguist Noam Chomsky, the main task of linguistics is to explain how children bridge the gap (Chomsky calls it a 'chasm') between what they come to know about language and what they could have learned from experience, even given optimistic assumptions about their cognitive abilities. Proponents of the alternative 'nurture' approach accuse nativists like Chomsky of overestimating the complexity of what children learn, underestimating the data children have to work with, and manifesting undue pessimism about children's abilities to extract information based on the input. The modern 'nurture' approach is often referred to as the usage-based account. We discuss the usage-based account first, and then the nativist account. After that, we report and discuss the findings of several

  5. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  6. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90.

  7. Data Acquisition and Flux Calculations

    DEFF Research Database (Denmark)

    Rebmann, C.; Kolle, O; Heinesch, B

    2012-01-01

    In this chapter, the basic theory and the procedures used to obtain turbulent fluxes of energy, mass, and momentum with the eddy covariance technique will be detailed. This includes a description of data acquisition, pretreatment of high-frequency data and flux calculation.
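
    The flux calculation mentioned here reduces, at its core, to a covariance per averaging block (Reynolds decomposition into block mean and fluctuation). A minimal NumPy sketch, assuming despiked, time-aligned series, and ignoring the coordinate rotation and density corrections a real processing chain applies:

        import numpy as np

        def ec_flux(w, c, block_len):
            """Eddy-covariance flux per averaging block: cov(w', c'), where
            the primes are deviations from the block mean. w: vertical wind
            (m/s); c: scalar (e.g., CO2 density); equal-length 1-D arrays
            at the sonic anemometer's sampling rate."""
            n = (len(w) // block_len) * block_len
            wb = w[:n].reshape(-1, block_len)
            cb = c[:n].reshape(-1, block_len)
            wp = wb - wb.mean(axis=1, keepdims=True)
            cp = cb - cb.mean(axis=1, keepdims=True)
            return (wp * cp).mean(axis=1)          # one flux value per block

        # e.g. 30-minute blocks at 10 Hz: ec_flux(w, co2, block_len=18000)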

  8. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  9. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  10. Data acquisition system for SLD

    International Nuclear Information System (INIS)

    Sherden, D.J.

    1985-05-01

    This paper describes the data acquisition system planned for the SLD detector which is being constructed for use with the SLAC Linear Collider (SLC). An exclusively FASTBUS front-end system is used together with a VAX-based host system. While the volume of data transferred does not challenge the bandwidth capabilities of FASTBUS, extensive use is made of the parallel processing capabilities allowed by FASTBUS to reduce the data to a size which can be handled by the host system. The low repetition rate of the SLC allows a relatively simple software-based trigger. The principal components and overall architecture of the hardware and software are described

  11. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite dimensional system from the continuous problem and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion of floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset

  12. Automated, parallel mass spectrometry imaging and structural identification of lipids

    DEFF Research Database (Denmark)

    Ellis, Shane R.; Paine, Martin R.L.; Eijkel, Gert B.

    2018-01-01

    We report a method that enables automated data-dependent acquisition of lipid tandem mass spectrometry data in parallel with a high-resolution mass spectrometry imaging experiment. The method does not increase the total image acquisition time and is combined with automatic structural assignments... This lipidome-per-pixel approach automatically identified and validated 104 unique molecular lipids and their spatial locations from rat cerebellar tissue.

  13. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    For the moment, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly eases parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming technique.

  14. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed memory highly parallel computer. Parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique to map each history to a processor dynamically and to map the control process to a dedicated processor was applied. The efficiency of the parallelized code is up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)

  15. Implementations of BLAST for parallel computers.

    Science.gov (United States)

    Jülich, A

    1995-02-01

    The BLAST sequence comparison programs have been ported to a variety of parallel computers: the shared-memory machine Cray Y-MP 8/864 and the distributed-memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited for parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799-residue protein query sequence and the protein database PIR were used.

  16. Language constructs for modular parallel programs

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.

    1996-03-01

    We describe programming language constructs that facilitate the application of modular design techniques in parallel programming. These constructs allow us to isolate resource management and processor scheduling decisions from the specification of individual modules, which can themselves encapsulate design decisions concerned with concurrency, communication, process mapping, and data distribution. This approach permits development of libraries of reusable parallel program components and the reuse of these components in different contexts. In particular, alternative mapping strategies can be explored without modifying other aspects of program logic. We describe how these constructs are incorporated in two practical parallel programming languages, PCN and Fortran M. Compilers have been developed for both languages, allowing experimentation in substantial applications.

  17. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  18. Parallel data grabbing card based on PCI bus RS422

    International Nuclear Information System (INIS)

    Zhang Zhenghui; Shen Ji; Wei Dongshan; Chen Ziyu

    2005-01-01

    This article briefly introduces the development of a parallel data-grabbing card based on RS422 and the PCI bus. It can be applied to grabbing 14-bit parallel data at high speed from devices with an RS422 interface. The methods of data acquisition based on the PCI protocol, the functions and usage of the chips employed, and the ideas and principles of the hardware and software design are presented. (authors)

  19. Improving image quality of parallel phase-shifting digital holography

    International Nuclear Information System (INIS)

    Awatsuji, Yasuhiro; Tahara, Tatsuki; Kaneko, Atsushi; Koyama, Takamasa; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2008-01-01

    The authors propose parallel two-step phase-shifting digital holography to improve the image quality of parallel phase-shifting digital holography. The proposed technique can double the effective number of pixels of the hologram in comparison to the conventional parallel four-step technique. The increase in the number of pixels makes it possible to improve the image quality of the reconstructed image in parallel phase-shifting digital holography. Numerical simulation and a preliminary experiment on the proposed technique were conducted and the effectiveness of the technique was confirmed. The proposed technique is more practical than conventional parallel phase-shifting digital holography, because the composition of the digital holographic system based on the proposed technique is simpler.

  20. Multi spectral scaling data acquisition system

    International Nuclear Information System (INIS)

    Behere, Anita; Patil, R.D.; Ghodgaonkar, M.D.; Gopalakrishnan, K.R.

    1997-01-01

    In nuclear spectroscopy applications, it is often desired to acquire data at a high rate with high resolution. With the availability of low-cost computers, it is possible to make a powerful data acquisition system with minimum hardware and software development by designing a PC plug-in acquisition board. But when the PC processor is used for data acquisition, the PC cannot be used as a multitasking node. Keeping this in view, PC plug-in acquisition boards with an on-board processor find tremendous applications. A transputer-based data acquisition board has been designed which can be configured as a high count rate pulse height MCA or as a multi spectral scaler. Multi Spectral Scaling (MSS) is a new technique, in which multiple spectra are acquired in small time frames and are then analyzed. This paper describes the details of this multi spectral scaling data acquisition system. 2 figs

  1. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  2. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  3. Partial fourier and parallel MR image reconstruction with integrated gradient nonlinearity correction.

    Science.gov (United States)

    Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A

    2016-06-01

    To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by postreconstruction image-domain-based GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional postreconstruction correction. Phantom and in vivo results demonstrate that integrated GNL correction reduces the image blurring introduced by conventional GNL correction, while still correcting GNL-induced coarse-scale geometric distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional postreconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016. © 2015 Wiley Periodicals, Inc.
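
    For context, the standard homodyne step that the integrated-GNL method builds on can be sketched in a few lines: estimate a low-resolution phase from the symmetric center of k-space, weight the asymmetric data, and keep the real part after phase demodulation. A hedged NumPy version, assuming DC-centered k-space sampled over the first frac*n lines; this is the conventional reconstruction, not the paper's GNL-integrated variant.

        import numpy as np

        def homodyne_1d(ksp, frac=0.625):
            """Partial-Fourier (homodyne) reconstruction along the last axis.
            ksp: DC-centered k-space with only the first int(frac*n) lines
            acquired (the rest already zero), frac > 0.5. Steps:
            (1) low-resolution phase from the symmetric center band,
            (2) 2x weighting of the asymmetric (non-mirrored) lines,
            (3) phase demodulation, keeping the real part."""
            n = ksp.shape[-1]
            k = int(frac * n)
            lo, hi = n - k, k                       # symmetric band [lo, hi)
            sym = np.zeros_like(ksp)
            sym[..., lo:hi] = ksp[..., lo:hi]
            low = np.fft.ifft(np.fft.ifftshift(sym, axes=-1), axis=-1)
            phase = np.exp(1j * np.angle(low))
            w = np.zeros(n)
            w[:lo] = 2.0                            # asymmetric part counted twice
            w[lo:hi] = 1.0                          # symmetric part counted once
            img = np.fft.ifft(np.fft.ifftshift(w * ksp, axes=-1), axis=-1)
            return np.real(img * np.conj(phase))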

  4. Microwave tomography global optimization, parallelization and performance evaluation

    CERN Document Server

    Noghanian, Sima; Desell, Travis; Ashtari, Ali

    2014-01-01

    This book provides a detailed overview of the use of global optimization and parallel computing in microwave tomography techniques. The book focuses on techniques that are based on global optimization and electromagnetic numerical methods. The authors provide parallelization techniques for homogeneous and heterogeneous computing architectures on high-performance and general-purpose futuristic computers. The book also discusses the multi-level optimization technique, the hybrid genetic algorithm, and its application in breast cancer imaging.

  5. Speed in Acquisitions

    DEFF Research Database (Denmark)

    Meglio, Olimpia; King, David R.; Risberg, Annette

    2017-01-01

    The advantage of speed is often invoked by academics and practitioners as an essential condition during post-acquisition integration, frequently without consideration of the impact earlier decisions have on acquisition speed. In this article, we examine the role speed plays in acquisitions across the acquisition process, using research organized around characteristics that display complexity with respect to acquisition speed. We incorporate existing research with a process perspective of acquisitions in order to present trade-offs, and consider the influence of both stakeholders and the pre-deal-completion context on acquisition speed, as well as the organization's capabilities for facilitating that speed. Observed trade-offs suggest both that acquisition speed often requires longer planning time before an acquisition and that associated decisions require managerial judgement. A framework for improving...

  6. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  7. Language Acquisition without an Acquisition Device

    Science.gov (United States)

    O'Grady, William

    2012-01-01

    Most explanatory work on first and second language learning assumes the primacy of the acquisition phenomenon itself, and a good deal of work has been devoted to the search for an "acquisition device" that is specific to humans, and perhaps even to language. I will consider the possibility that this strategy is misguided and that language…

  8. LAMPF nuclear chemistry data acquisition system

    International Nuclear Information System (INIS)

    Giesler, G.C.

    1983-01-01

    The LAMPF Nuclear Chemistry Data Acquisition System (DAS) is designed to provide both real-time control of data acquisition and facilities for data processing for a large variety of users. It consists of a PDP-11/44 connected to a parallel CAMAC branch highway as well as to a large number of peripherals. The various types of radiation counters and spectrometers and their connections to the system will be described. Also discussed will be the various methods of connection considered and their advantages and disadvantages. The operation of the system from the standpoint of both hardware and software will be described as well as plans for the future

  9. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction

    Science.gov (United States)

    Hollingsworth, Kieren Grant

    2015-11-01

    MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution is chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criteria. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review concludes by signposting other imaging acceleration techniques under present development before concluding with a consideration of the potential impact and obstacles to bringing compressed sensing into routine use in clinical MRI.
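
    To make the sparsity argument concrete, here is a minimal iterative soft-thresholding (ISTA) sketch for a retrospectively undersampled acquisition. It assumes the image itself is sparse (as in the angiogram example); practical reconstructions use a wavelet or total-variation transform and would typically be combined with parallel imaging (CS-PI).

    ```python
    import numpy as np

    def ista_cs_mri(kspace, mask, lam=0.01, n_iter=100):
        """Minimal ISTA sketch for compressed sensing MRI with sparsity in
        the image domain itself; real reconstructions use a sparsifying
        transform such as wavelets instead."""
        x = np.zeros_like(kspace)
        for _ in range(n_iter):
            # data-consistency gradient step (the FFT is unitary, step size 1)
            resid = mask * np.fft.fft2(x, norm="ortho") - kspace
            x = x - np.fft.ifft2(mask * resid, norm="ortho")
            # soft thresholding enforces sparsity (complex-magnitude shrinkage)
            x = np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - lam, 0.0)
        return x

    # Toy usage: retrospectively undersample a sparse phantom to 30% of k-space.
    img = np.zeros((128, 128)); img[60:68, 60:68] = 1.0
    mask = (np.random.rand(128, 128) < 0.3).astype(float)
    kspace = mask * np.fft.fft2(img, norm="ortho")
    recon = np.abs(ista_cs_mri(kspace, mask))
    ```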

  10. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Kuojun, E-mail: kuojunyang@gmail.com; Guo, Lianping [School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu (China); School of Electrical and Electronic Engineering, Nanyang Technological University (Singapore); Tian, Shulin; Zeng, Hao [School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu (China); Qiu, Lei [School of Electrical and Electronic Engineering, Nanyang Technological University (Singapore)

    2014-04-15

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal; this duration is therefore called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes the loss of measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data to a displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the probability information of the waveform is also displayed with different brightness, presenting a three-dimensional waveform to the user. To further reduce processing time, a parallel TWM, which processes several sampled points simultaneously, and a dual-port random access memory based pipelining technique, which can process one sampling point per clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used to store sampled data alternately, so the acquisition can continue during data processing. Therefore, the dead time of the DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope, and a combined-pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest among all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.
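
    A minimal sketch of the mapping step behind TWM: many captured frames are accumulated into a (time, amplitude) hit-count map whose counts drive display brightness, the third dimension of the displayed waveform. Column-wise accumulation is what the parallel TWM hardware distributes over several sampled points at once; the noisy-sine frame generator below is a toy assumption.

    ```python
    import numpy as np

    def waveform_map(samples_per_frame, frames, n_levels=256):
        """Three-dimensional waveform mapping sketch: accumulate captured
        frames into a (amplitude level, time) hit-count map; the count
        becomes display brightness, i.e. the third dimension."""
        n_t = samples_per_frame
        hits = np.zeros((n_levels, n_t), dtype=np.uint32)
        t = np.arange(n_t)
        for frame in frames:                    # each frame: n_t integer samples
            levels = np.clip(frame, 0, n_levels - 1)
            np.add.at(hits, (levels, t), 1)     # columns are independent: parallelizable
        return hits

    # Toy usage: 1000 noisy sine frames of 8-bit samples.
    t = np.arange(500)
    frames = [(127 + 100 * np.sin(2 * np.pi * t / 500)
               + 8 * np.random.randn(500)).astype(int) for _ in range(1000)]
    hits = waveform_map(500, frames)
    ```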

  11. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    Science.gov (United States)

    Yang, Kuojun; Tian, Shulin; Zeng, Hao; Qiu, Lei; Guo, Lianping

    2014-04-01

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal; this duration is therefore called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes the loss of measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data to a displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the probability information of the waveform is also displayed with different brightness, presenting a three-dimensional waveform to the user. To further reduce processing time, a parallel TWM, which processes several sampled points simultaneously, and a dual-port random access memory based pipelining technique, which can process one sampling point per clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used to store sampled data alternately, so the acquisition can continue during data processing. Therefore, the dead time of the DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope, and a combined-pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest among all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.

  12. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    International Nuclear Information System (INIS)

    Yang, Kuojun; Guo, Lianping; Tian, Shulin; Zeng, Hao; Qiu, Lei

    2014-01-01

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal; this duration is therefore called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes the loss of measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data to a displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the probability information of the waveform is also displayed with different brightness, presenting a three-dimensional waveform to the user. To further reduce processing time, a parallel TWM, which processes several sampled points simultaneously, and a dual-port random access memory based pipelining technique, which can process one sampling point per clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used to store sampled data alternately, so the acquisition can continue during data processing. Therefore, the dead time of the DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope, and a combined-pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest among all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition

  13. Portable parallel programming in a Fortran environment

    International Nuclear Information System (INIS)

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network-based MIMD processors. The parallelism is implemented using standard UNIX (tm) tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a "nearly realistic" lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs
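
    The PARMACs macros themselves target Fortran; purely as an illustration of the message-passing style of constructing and coordinating cooperating processes, here is a sketch in Python's multiprocessing module, with the sum-of-squares work standing in for Monte Carlo scoring.

    ```python
    import multiprocessing as mp

    def worker(inbox, outbox):
        """Cooperating process: receive work messages, send back results."""
        for chunk in iter(inbox.get, None):          # None = shutdown message
            outbox.put(sum(x * x for x in chunk))    # stand-in for MC scoring

    if __name__ == "__main__":
        inbox, outbox = mp.Queue(), mp.Queue()
        procs = [mp.Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
        for p in procs:
            p.start()
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]      # coarse-grained work split
        for c in chunks:
            inbox.put(c)
        total = sum(outbox.get() for _ in chunks)    # gather partial results
        for _ in procs:
            inbox.put(None)                          # tell workers to stop
        for p in procs:
            p.join()
        print(total)
    ```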

  14. Parallel computation of nondeterministic algorithms in VLSI

    Energy Technology Data Exchange (ETDEWEB)

    Hortensius, P D

    1987-01-01

    This work examines parallel VLSI implementations of nondeterministic algorithms. It is demonstrated that conventional pseudorandom number generators are unsuitable for highly parallel applications. Efficient parallel pseudorandom sequence generation can be accomplished using certain classes of elementary one-dimensional cellular automata. The pseudorandom numbers appear in parallel on each clock cycle. Extensive study of the properties of these new pseudorandom number generators is made using standard empirical random number tests, cycle length tests, and implementation considerations. Furthermore, it is shown that these particular cellular automata can form the basis of efficient VLSI architectures for computations involved in the Monte Carlo simulation of both the percolation and Ising models from statistical mechanics. Finally, a variation on a Built-In Self-Test technique based upon cellular automata is presented. These Cellular Automata-Logic-Block-Observation (CALBO) circuits improve upon conventional design for testability circuitry.
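
    A sketch of the idea in software: an elementary one-dimensional cellular automaton updates every cell simultaneously each clock cycle, so one pseudorandom bit per cell appears in parallel. Rule 30 is used below as a representative rule; the thesis studies particular classes of rules, and this choice is an assumption for illustration.

    ```python
    import numpy as np

    def ca_prng_bits(n_cells=64, n_steps=32, rule=30, seed=1):
        """Elementary cellular automaton as a parallel pseudorandom source:
        all cells update simultaneously per step, yielding one bit per cell
        in parallel on each 'clock cycle'."""
        rng = np.random.default_rng(seed)
        state = rng.integers(0, 2, n_cells, dtype=np.uint8)
        # Wolfram rule table: neighbourhood value 0..7 -> next-state bit.
        table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
        out = []
        for _ in range(n_steps):
            left, right = np.roll(state, 1), np.roll(state, -1)
            idx = (left << 2) | (state << 1) | right   # 3-cell neighbourhood
            state = table[idx]                          # all cells at once
            out.append(state.copy())
        return np.array(out)

    bits = ca_prng_bits()
    print(bits[:, 32][:16])   # a pseudorandom bit stream tapped from one column
    ```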

  15. The HyperCP data acquisition system

    International Nuclear Information System (INIS)

    Kaplan, D.M.

    1997-06-01

    For the HyperCP experiment at Fermilab, we have assembled a data acquisition system that records on up to 45 Exabyte 8505 tape drives in parallel at up to 17 MB/s. During the beam spill, data are acquired from the front-end digitization systems at ∼60 MB/s via five parallel data paths. The front-end systems achieve a typical readout deadtime of ∼1 μs per event, allowing operation at a 75-kHz trigger rate with ≲30% deadtime. Event building and tapewriting are handled by 15 Motorola MVME167 processors in 5 VME crates

  16. Processing optimization with parallel computing for the J-PET scanner

    Directory of Open Access Journals (Sweden)

    Krzemień Wojciech

    2015-12-01

    The Jagiellonian Positron Emission Tomograph (J-PET) collaboration is developing a prototype time-of-flight (TOF) positron emission tomograph (PET) detector based on long polymer scintillators. This novel approach exploits the excellent time properties of the plastic scintillators, which permit very precise time measurements. The very fast field programmable gate array (FPGA)-based front-end electronics and the data acquisition system, as well as low- and high-level reconstruction algorithms, were specially developed to be used with the J-PET scanner. The TOF-PET data processing and reconstruction are time and resource demanding operations, especially in the case of a large acceptance detector that works in triggerless data acquisition mode. In this article, we discuss the parallel computing methods applied to optimize the data processing for the J-PET detector. We begin with general concepts of parallel computing and then we discuss several applications of those techniques in the J-PET data processing.

  17. Calo trigger acquisition system

    CERN Multimedia

    Franchini, Matteo

    2016-01-01

    Calo trigger acquisition system - Evolution of the acquisition system from a multiple boards system (upper, orange cables) to a single board one (below, light blue cables) where all the channels are collected in a single board.

  18. Modelling live forensic acquisition

    CSIR Research Space (South Africa)

    Grobler, MM

    2009-06-01

    This paper discusses the development of a South African model for Live Forensic Acquisition - Liforac. The Liforac model is a comprehensive model that presents a range of aspects related to Live Forensic Acquisition. The model provides forensic...

  19. Playing at Serial Acquisitions

    NARCIS (Netherlands)

    J.T.J. Smit (Han); T. Moraitis (Thras)

    2010-01-01

    Behavioral biases can result in suboptimal acquisition decisions, with the potential for errors exacerbated in consolidating industries, where consolidators design serial acquisition strategies and fight escalating takeover battles for platform companies that may determine their future.

  20. Pattern recognition with parallel associative memory

    Science.gov (United States)

    Toth, Charles K.; Schenk, Toni

    1990-01-01

    An examination is conducted of the feasibility of searching targets in aerial photographs by means of a parallel associative memory (PAM) that is based on the nearest-neighbor algorithm; the Hamming distance is used as a measure of closeness, in order to discriminate patterns. Attention has been given to targets typically used for ground-control points. The method developed sorts out approximate target positions where precise localizations are needed, in the course of the data-acquisition process. The majority of control points in different images were correctly identified.
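
    A minimal sketch of the recall operation such a PAM implements: nearest-neighbour matching under Hamming distance, with numpy broadcasting standing in for the hardware's simultaneous comparison of the probe against all stored templates (the template set below is synthetic).

    ```python
    import numpy as np

    def pam_match(templates, pattern):
        """Nearest-neighbour associative recall with Hamming distance.
        A hardware PAM compares the probe against all stored templates
        simultaneously; broadcasting plays that role here."""
        dists = np.count_nonzero(templates != pattern, axis=1)  # all rows at once
        best = int(np.argmin(dists))
        return best, int(dists[best])

    # Toy usage: binary target templates (e.g. ground-control-point marks).
    rng = np.random.default_rng(0)
    templates = rng.integers(0, 2, (100, 64))   # 100 stored 64-bit patterns
    probe = templates[42].copy()
    probe[:5] ^= 1                              # corrupt five bits
    idx, d = pam_match(templates, probe)
    print(idx, d)                               # -> 42, 5
    ```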

  1. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin

  2. Mergers and Acquisitions

    OpenAIRE

    Frasch, Manfred; Leptin, Maria

    2000-01-01

    Mergers and acquisitions (M&As) are booming as a strategy of choice for organizations attempting to maintain a competitive advantage. Previous research on mergers and acquisitions declares that acquirers do not normally benefit from acquisitions. Targets, on the other hand, have a tendency to gain positive returns in the few days surrounding merger announcements, due to several characteristics of the acquisition deal. The announcement-period wealth effect on acquiring firms, however, is as cle...

  3. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
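
    For flavor, a minimal PyROOT sketch of the implicit-parallelism route: one call enables multi-threading, and the declarative RDataFrame interface (which grew out of this line of work) runs its event loop in parallel. The tree name "Events", file name "data.root", and branch "pt" are placeholders, not a real dataset.

    ```python
    import ROOT

    # Enable ROOT's implicit multi-threading: eligible operations below
    # are transparently parallelized over the available cores.
    ROOT.ROOT.EnableImplicitMT()

    # Declarative analysis; all names here are placeholder assumptions.
    df = ROOT.RDataFrame("Events", "data.root")
    h = df.Filter("pt > 20").Histo1D("pt")   # lazy: nothing runs yet
    print(h.GetEntries())                    # triggers the parallel event loop
    ```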

  4. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  5. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were...

  6. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  7. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  8. Parallel plate detectors

    International Nuclear Information System (INIS)

    Gardes, D.; Volkov, P.

    1981-01-01

    A 5×3 cm² (timing only) and a 15×5 cm² (timing and position) parallel plate avalanche counter (PPAC) are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters [fr]

  9. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  10. Data acquisition for the D0 experiment

    International Nuclear Information System (INIS)

    Cutts, D.; Hoftun, J.S.; Johnson, C.R.; Zeller, R.T.; Trojak, T.; Van Berg, R.

    1985-01-01

    We describe the acquisition system for the D0 experiment at Fermilab, focusing primarily on the second level, which is based on a large parallel array of MicroVAX-II's. In this design, data flows from the detector readout crates at a maximum rate of 320 Mbytes/sec into dual-port memories associated with one selected processor, in which a VAXELN-based program performs the filter analysis of a complete event

  11. The UA1 VME data acquisition system

    International Nuclear Information System (INIS)

    Cittolin, S.

    1988-01-01

    The data acquisition system of a large-scale experiment such as UA1, running at the CERN proton-antiproton collider, has to cope with very high data rates and to perform sophisticated triggering and filtering in order to analyze interesting events. These functions are performed by a variety of programmable units organized in a parallel multiprocessor system whose central architecture is based on the industry-standard VME/VMXbus. (orig.)

  12. Kalman Filter Tracking on Parallel Architectures

    International Nuclear Information System (INIS)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2016-01-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment
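
    As a sketch of the data-parallel formulation this line of work relies on, the snippet below applies one Kalman measurement update to many track candidates at once by batching the linear algebra along the leading axis. It is a generic numpy illustration, not the authors' Xeon/Xeon Phi code, and the state/measurement models are toy assumptions.

    ```python
    import numpy as np

    def kalman_update_batch(x, P, z, H, R):
        """One Kalman measurement update for N track candidates at once.
        x: (N,d) states, P: (N,d,d) covariances, z: (N,m) hits,
        H: (m,d) projection, R: (m,m) hit noise. Batching feeds vector
        units / many cores with identical arithmetic."""
        y = z - x @ H.T                                         # innovations
        S = np.einsum('ij,njk,kl->nil', H, P, H.T) + R          # innovation cov
        K = np.einsum('nij,jk,nkl->nil', P, H.T, np.linalg.inv(S))  # gains
        x_new = x + np.einsum('nij,nj->ni', K, y)
        P_new = P - np.einsum('nij,jk,nkl->nil', K, H, P)
        return x_new, P_new

    # Toy usage: 10000 tracks, 4D state, 2D measurement.
    N, d, m = 10_000, 4, 2
    rng = np.random.default_rng(0)
    x = rng.normal(size=(N, d)); P = np.tile(np.eye(d), (N, 1, 1))
    H = np.eye(m, d); R = 0.1 * np.eye(m)
    z = x[:, :m] + rng.normal(scale=0.3, size=(N, m))
    x, P = kalman_update_batch(x, P, z, H, R)
    ```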

  13. Heuristic framework for parallel sorting computations | Nwanze ...

    African Journals Online (AJOL)

    Parallel sorting techniques have become of practical interest with the advent of new multiprocessor architectures. The decreasing cost of these processors will probably, in the future, make the solutions derived from them more appealing. Efficient algorithms for sorting schemes that are encountered in a number of ...
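
    Since the abstract is truncated before naming a scheme, here is one classic parallel sorting scheme for illustration: odd-even transposition sort, in which each phase's disjoint neighbour comparisons are independent and could run simultaneously on a linear array of processors.

    ```python
    def odd_even_transposition_sort(a):
        """Odd-even transposition sort: alternate phases compare-and-swap
        disjoint neighbouring pairs, so all comparisons within a phase are
        independent and can run in parallel; n phases suffice for n items."""
        a = list(a)
        n = len(a)
        for phase in range(n):
            start = phase % 2                   # alternate odd/even pairings
            for i in range(start, n - 1, 2):    # independent pairs: parallel
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
        return a

    print(odd_even_transposition_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
    ```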

  14. Dual isotope, single acquisition parathyroid imaging

    International Nuclear Information System (INIS)

    Triantafillou, M.; McDonald, H.J.

    1998-01-01

    Full text: Nuclear Medicine parathyroid imaging using Thallium-201 (Tl) and Technetium-99m (Tc) is an often used imaging modality for the detection of parathyroid adenomas and hyperparathyroidism. The conventional Tl/Tc subtraction technique requires 2 separate injections and acquisitions, which are then normalised and subtracted from each other. This lengthy technique is uncomfortable for patients and can result in false positive scan results due to patient movement between and during the acquisition process. We propose a simplified injection and single acquisition technique that reduces the chance of movement and thus reduces the chance of false positive scan results. The technique involves the injection of Tc followed by the Tl injection 10 minutes later. After a further 10 min wait, imaging is performed using a dual isotope acquisition, with window (W) 1 set on 140 keV, 20% W, 5% off peak, and W2 peaked for 70 keV, 20% W, acquired for 10 minutes. We have imaged 27 patients with this technique; 15 had positive parathyroid imaging. Of the 15, 11 had positive ultrasound correlation. Of the remaining 4, 2 have had positive surgical findings for adenomas; the other 2 are awaiting follow-up. Of the 12 patients with negative parathyroid imaging, 2 have been shown to be false-negative with surgery. In conclusion, the single acquisition technique suggested by us is a valid method of imaging parathyroids that reduces the chance of false positive results due to movement

  15. A SPECT reconstruction method for extending parallel to non-parallel geometries

    International Nuclear Information System (INIS)

    Wen Junhai; Liang Zhengrong

    2010-01-01

    Due to its simplicity, parallel-beam geometry is usually assumed for the development of image reconstruction algorithms. The established reconstruction methodologies are then extended to fan-beam, cone-beam and other non-parallel geometries for practical application. This situation occurs for quantitative SPECT (single photon emission computed tomography) imaging in inverting the attenuated Radon transform. Novikov reported an explicit parallel-beam formula for the inversion of the attenuated Radon transform in 2000. Thereafter, a formula for fan-beam geometry was reported by Bukhgeim and Kazantsev (2002 Preprint N. 99, Sobolev Institute of Mathematics). At the same time, we presented a formula for varying focal-length fan-beam geometry. Sometimes, however, the reconstruction formula is so implicit that no explicit formula can be obtained for the non-parallel geometries. In this work, we propose a unified reconstruction framework for extending parallel-beam geometry to any non-parallel geometry using ray-driven techniques. Studies by computer simulations demonstrated the accuracy of the presented unified reconstruction framework for extending parallel-beam to non-parallel geometries in inverting the attenuated Radon transform.

  16. Technological Similarity, Post-acquisition R&D Reorganization, and Innovation Performance in Horizontal Acquisition

    DEFF Research Database (Denmark)

    Colombo, Massimo G.; Rabbiosi, Larissa

    2014-01-01

    This paper aims to disentangle the mechanisms through which technological similarity between acquiring and acquired firms influences innovation in horizontal acquisitions. We develop a theoretical model that links technological similarity to: (i) two key aspects of post-acquisition reorganization of acquired R&D operations – the rationalization of the R&D operations and the replacement of the R&D top manager, and (ii) two intermediate effects that are closely associated with the post-acquisition innovation performance of the combined firm – improvements in R&D productivity and disruptions in R&D personnel. We rely on PLS techniques to test our theoretical model using detailed information on 31 horizontal acquisitions in high- and medium-tech industries. Our results indicate that in horizontal acquisitions, technological similarity negatively affects post-acquisition innovation performance...

  17. High-Resolution DCE-MRI of the Pituitary Gland Using Radial k-Space Acquisition with Compressed Sensing Reconstruction.

    Science.gov (United States)

    Rossi Espagnet, M C; Bangiyev, L; Haber, M; Block, K T; Babb, J; Ruggiero, V; Boada, F; Gonen, O; Fatterpekar, G M

    2015-08-01

    The pituitary gland is located outside of the blood-brain barrier. Dynamic T1 weighted contrast enhanced sequence is considered to be the gold standard to evaluate this region. However, it does not allow assessment of intrinsic permeability properties of the gland. Our aim was to demonstrate the utility of radial volumetric interpolated brain examination with the golden-angle radial sparse parallel technique to evaluate permeability characteristics of the individual components (anterior and posterior gland and the median eminence) of the pituitary gland and areas of differential enhancement and to optimize the study acquisition time. A retrospective study was performed in 52 patients (group 1, 25 patients with normal pituitary glands; and group 2, 27 patients with a known diagnosis of microadenoma). Radial volumetric interpolated brain examination sequences with golden-angle radial sparse parallel technique were evaluated with an ROI-based method to obtain signal-time curves and permeability measures of individual normal structures within the pituitary gland and areas of differential enhancement. Statistical analyses were performed to assess differences in the permeability parameters of these individual regions and optimize the study acquisition time. Signal-time curves from the posterior pituitary gland and median eminence demonstrated a faster wash-in and time of maximum enhancement with a lower peak of enhancement compared with the anterior pituitary gland (P pituitary gland evaluation. In the absence of a clinical history, differences in the signal-time curves allow easy distinction between a simple cyst and a microadenoma. This retrospective study confirms the ability of the golden-angle radial sparse parallel technique to evaluate the permeability characteristics of the pituitary gland and establishes 120 seconds as the ideal acquisition time for dynamic pituitary gland imaging. © 2015 by American Journal of Neuroradiology.
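
    For reference, the golden-angle radial ordering underlying the golden-angle radial sparse parallel technique is easy to generate: each successive spoke is rotated by 180°/φ ≈ 111.246°, so any contiguous window of spokes covers k-space nearly uniformly, which is what permits retrospective binning into arbitrary temporal frames. The sketch below only computes spoke coordinates; spoke and sample counts are arbitrary.

    ```python
    import numpy as np

    GOLDEN_ANGLE = 180.0 / ((1.0 + np.sqrt(5.0)) / 2.0)   # ~111.246 degrees

    def golden_angle_spokes(n_spokes, n_samples=256):
        """k-space sample positions for golden-angle radial acquisition:
        each spoke is rotated by ~111.246 deg relative to its predecessor,
        so any contiguous subset of spokes samples k-space nearly uniformly."""
        angles = np.arange(n_spokes) * np.deg2rad(GOLDEN_ANGLE)
        r = np.linspace(-0.5, 0.5, n_samples)             # readout direction
        kx = r[None, :] * np.cos(angles)[:, None]
        ky = r[None, :] * np.sin(angles)[:, None]
        return kx, ky

    kx, ky = golden_angle_spokes(89)   # e.g. one temporal frame's worth of spokes
    ```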

  18. An original approach to data acquisition CHADAC

    CERN Document Server

    CERN. Geneva

    1981-01-01

    Many labs try to boost existing data acquisition systems by inserting high-performance intelligent devices at the important nodes of the system's structure. This strategy finds its limits in the system's architecture. The CHADAC project proposes a simple and efficient solution to this problem, using a modular multiprocessor architecture. CHADAC's main features are: parallel acquisition of data (CHADAC is fast: it dedicates one processor per branch, and each processor can read and store one 16-bit word in 800 ns); an original structure (each processor can work in its own private memory, in its own shared double-access memory, and in the shared memory of any other processor, with simple and fast communications between processors also provided by local DMAs); and flexibility (each processor is autonomous and may be used as an independent acquisition system for a branch by connecting local peripherals to it; adjunction of fast trigger logic is possible). By its architecture and performance, CHADAC is designed to provide a g...

  19. Dynamic Liver Magnetic Resonance Imaging in Free-Breathing: Feasibility of a Cartesian T1-Weighted Acquisition Technique With Compressed Sensing and Additional Self-Navigation Signal for Hard-Gated and Motion-Resolved Reconstruction.

    Science.gov (United States)

    Kaltenbach, Benjamin; Bucher, Andreas M; Wichmann, Julian L; Nickel, Dominik; Polkowski, Christoph; Hammerstingl, Renate; Vogl, Thomas J; Bodelle, Boris

    2017-11-01

    The aim of this study was to assess the feasibility of a free-breathing dynamic liver imaging technique using a prototype Cartesian T1-weighted volumetric interpolated breathhold examination (VIBE) sequence with compressed sensing and simultaneous acquisition of a navigation signal for hard-gated and motion state-resolved reconstruction. A total of 43 consecutive oncologic patients (mean age, 66 ± 11 years; 44% female) underwent free-breathing dynamic liver imaging for the evaluation of liver metastases from colorectal cancer using a prototype Cartesian VIBE sequence (field of view, 380 × 345 mm; image matrix, 320 × 218; echo time/repetition time, 1.8/3.76 milliseconds; flip angle, 10 degrees; slice thickness, 3.0 mm; acquisition time, 188 seconds) with continuous data sampling and an additionally acquired self-navigation signal. Data were iteratively reconstructed using 2 different approaches: first, a hard-gated reconstruction using only data associated with the dominating motion state (CS VIBE, Compressed Sensing VIBE), and second, a motion-resolved reconstruction with 6 different motion states as an additional image dimension (XD VIBE, eXtended dimension VIBE). Continuously acquired data were grouped into 16 subsequent time increments of 11.57 seconds each to resolve arterial and venous contrast phases. For image quality assessment, both CS VIBE and XD VIBE were compared with the patient's last staging dynamic liver magnetic resonance imaging, including a breathhold (BH) VIBE acquired 4.5 ± 1.2 months earlier as the reference standard. Representative quality parameters including respiratory artifacts were evaluated for arterial and venous phase images independently, retrospectively and blindly by 3 experienced radiologists, with higher scores indicating better examination quality. To assess diagnostic accuracy, the same readers evaluated the presence of metastatic lesions for XD VIBE and CS VIBE compared with the reference BH examination in a second session. Compared with CS VIBE, XD VIBE

  20. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance, both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...

  1. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  2. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
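
    A minimal sketch of the two-phase idea under simplifying assumptions (a 1-D slab decomposition and spherical objects): each worker first determines which objects touch its grid portion, and could then populate that portion independently of the others.

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def objects_in_slab(objects, x0, x1):
        """Phase 1: which objects are at least partially bounded by the
        slab [x0, x1)? Objects are (x, y, radius) spheres (an assumption
        made to keep the sketch short)."""
        return [o for o in objects if o[0] - o[2] < x1 and o[0] + o[2] >= x0]

    # Synthetic object set spread over x in [0, 10).
    objects = [(i * 0.37 % 10.0, 0.0, 0.25) for i in range(1000)]
    n = 4                                     # one slab per processor
    edges = [10.0 * k / n for k in range(n + 1)]

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=n) as pool:
            # Each worker binds objects to its slab; population of each
            # slab could then proceed in parallel with no sharing.
            slabs = list(pool.map(objects_in_slab,
                                  [objects] * n, edges[:-1], edges[1:]))
        for k, s in enumerate(slabs):
            print(f"slab {k}: {len(s)} objects")
    ```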

  3. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  4. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give...

  5. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Parallel-structure moving mechanical systems are solid, fast, and accurate. Among parallel systems, the Stewart platform is notable as one of the oldest: fast, solid and precise. The work outlines a few main elements of Stewart platforms. It begins with the platform geometry and its kinematic elements, and then presents a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform are then recorded using a rotation-matrix method. If a structural motor element consists of two moving elements in relative translation, it is more convenient, for the drive train and especially for the dynamics, to represent the motor element as a single moving component. We thus have seven moving parts (the six motor elements, or legs, plus the mobile platform) and one fixed part.

  6. Analysis of the SIAM Infrared Acquisition System

    Energy Technology Data Exchange (ETDEWEB)

    Varnado, S.G.

    1974-02-01

    This report describes and presents the results of an analysis of the performance of the infrared acquisition system for a Self-Initiated Antiaircraft Missile (SIAM). A description of the optical system is included, and models of target radiant intensity, atmospheric transmission, and background radiance are given. Acquisition probabilities are expressed in terms of the system signal-to-noise ratio. System performance against aircraft and helicopter targets is analyzed, and background discrimination techniques are discussed. 17 refs., 22 figs., 6 tabs.

  7. An environment for parallel structuring of Fortran programs

    International Nuclear Information System (INIS)

    Sridharan, K.; McShea, M.; Denton, C.; Eventoff, B.; Browne, J.C.; Newton, P.; Ellis, M.; Grossbard, D.; Wise, T.; Clemmer, D.

    1990-01-01

    The paper describes and illustrates an environment for interactive support of the detection and implementation of macro-level parallelism in Fortran programs. The approach couples algorithms for dependence analysis with both innovative techniques for complexity management and capabilities for the measurement and analysis of the parallel computation structures generated through use of the environment. The resulting environment is complementary to the more common approach of seeking local parallelism by loop unrolling, either by an automatic compiler or manually. (orig.)

  8. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to (to the extent possible) exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  9. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  10. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  11. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    Science.gov (United States)

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
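
    A minimal sketch of the Monte Carlo ("pseudo multiple replica") idea as described: synthetic noise drawn from the prescan covariance is added to the measured k-space, the same linear reconstruction is rerun many times, and the pixelwise standard deviation over replicas gives the noise map. The `recon` callable stands in for any linear SENSE/GRAPPA reconstruction; the root-sum-of-squares recon in the usage lines is only a toy (and not strictly linear).

    ```python
    import numpy as np

    def pseudo_multiple_replica_snr(recon, kspace, noise_cov, n_rep=128, rng=None):
        """Inject correlated synthetic noise into measured k-space, rerun
        the (assumed linear) reconstruction, and estimate pixelwise noise
        as the std over replicas; returns the pixelwise SNR map.
        recon: callable mapping multi-coil k-space (nc, ny, nx) -> image."""
        rng = np.random.default_rng(rng)
        nc, ny, nx = kspace.shape
        L = np.linalg.cholesky(noise_cov)       # from the noise "prescan"
        imgs = []
        for _ in range(n_rep):
            white = (rng.standard_normal((nc, ny * nx)) +
                     1j * rng.standard_normal((nc, ny * nx))) / np.sqrt(2)
            imgs.append(recon(kspace + (L @ white).reshape(nc, ny, nx)))
        noise_map = np.array(imgs).std(axis=0)
        return np.abs(recon(kspace)) / noise_map

    # Toy usage with a trivial fully sampled "reconstruction".
    def rss_ifft(k):   # root-sum-of-squares of coil images (toy stand-in)
        return np.sqrt((np.abs(np.fft.ifft2(k, axes=(-2, -1)))**2).sum(axis=0))

    nc, ny, nx = 4, 32, 32
    kspace = np.fft.fft2(np.random.rand(nc, ny, nx), axes=(-2, -1))
    snr = pseudo_multiple_replica_snr(rss_ifft, kspace, 0.01 * np.eye(nc), n_rep=32)
    ```

    A g-factor map would then follow by comparing the SNR maps of the accelerated and unaccelerated reconstructions, scaled by the square root of the acceleration factor.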

  12. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Science.gov (United States)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that consists of the minimization of a cost function, with the minimization achieved by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, ours is a parallel Jacobi-class algorithm with alternating minimizations. This strategy is known as the chessboard type, where red pixels can be updated in parallel within the same iteration since they are independent. Similarly, black pixels can be updated in parallel in an alternating iteration. We present parallel implementations of our algorithm for different parallel multicore architectures such as multicore CPUs, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, we obtain superior performance with our parallel algorithm when compared with the original serial version. In addition, we present a detailed comparison of the performance of the developed parallel versions.
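
    The chessboard idea is easy to see in a generic sketch: colour the pixels like a checkerboard, and each half-sweep updates one colour class entirely in parallel, because those pixels depend only on neighbours of the other colour. The Poisson-type relaxation below (with periodic boundaries via np.roll) only illustrates the update pattern, not the accumulation-of-residual-maps cost function itself.

    ```python
    import numpy as np

    def redblack_relax(rhs, n_iter=500):
        """Chessboard (red-black) relaxation for a Poisson-type system such
        as the one arising in least-squares phase unwrapping. Every
        half-sweep updates one colour class fully in parallel."""
        ny, nx = rhs.shape
        phi = np.zeros_like(rhs)
        yy, xx = np.mgrid[0:ny, 0:nx]
        red = (yy + xx) % 2 == 0
        for sweep in range(2 * n_iter):
            mask = red if sweep % 2 == 0 else ~red
            nb = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                  np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
            phi[mask] = 0.25 * (nb[mask] - rhs[mask])   # independent updates
        return phi

    # Toy usage: rhs would hold the divergence of wrapped phase differences.
    phi = redblack_relax(np.random.randn(64, 64) * 0.01)
    ```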

  13. Front-end data processing the SLD data acquisition system

    International Nuclear Information System (INIS)

    Nielsen, B.S.

    1986-07-01

    The data acquisition system for the SLD detector will make extensive use of parallelism at the front-end level. Fastbus acquisition modules are being built with powerful processing capabilities for calibration, data reduction and further pre-processing of the large amount of analog data handled by each module. This paper describes the read-out electronics chain and data pre-processing system adapted for most of the detector channels, exemplified by the central drift chamber waveform digitization and processing system

  14. Keldysh formalism for multiple parallel worlds

    International Nuclear Information System (INIS)

    Ansari, M.; Nazarov, Y. V.

    2016-01-01

    We present a compact and self-contained review of the recently developed Keldysh formalism for multiple parallel worlds. The formalism has been applied to consistent quantum evaluation of the flows of informational quantities, in particular, to the evaluation of Renyi and Shannon entropy flows. We start with the formulation of the standard and extended Keldysh techniques in a single world in a form convenient for our presentation. We explain the use of Keldysh contours encompassing multiple parallel worlds. In the end, we briefly summarize the concrete results obtained with the method.

  15. Keldysh formalism for multiple parallel worlds

    Science.gov (United States)

    Ansari, M.; Nazarov, Y. V.

    2016-03-01

    We present a compact and self-contained review of the recently developed Keldysh formalism for multiple parallel worlds. The formalism has been applied to consistent quantum evaluation of the flows of informational quantities, in particular, to the evaluation of Renyi and Shannon entropy flows. We start with the formulation of the standard and extended Keldysh techniques in a single world in a form convenient for our presentation. We explain the use of Keldysh contours encompassing multiple parallel worlds. In the end, we briefly summarize the concrete results obtained with the method.

  16. Keldysh formalism for multiple parallel worlds

    Energy Technology Data Exchange (ETDEWEB)

    Ansari, M.; Nazarov, Y. V., E-mail: y.v.nazarov@tudelft.nl [Delft University of Technology, Kavli Institute of Nanoscience (Netherlands)

    2016-03-15

    We present a compact and self-contained review of the recently developed Keldysh formalism for multiple parallel worlds. The formalism has been applied to consistent quantum evaluation of the flows of informational quantities, in particular, to the evaluation of Renyi and Shannon entropy flows. We start with the formulation of the standard and extended Keldysh techniques in a single world in a form convenient for our presentation. We explain the use of Keldysh contours encompassing multiple parallel worlds. In the end, we briefly summarize the concrete results obtained with the method.

  17. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  18. Time-resolved 3D pulmonary perfusion MRI: comparison of different k-space acquisition strategies at 1.5 and 3 T.

    Science.gov (United States)

    Attenberger, Ulrike I; Ingrisch, Michael; Dietrich, Olaf; Herrmann, Karin; Nikolaou, Konstantin; Reiser, Maximilian F; Schönberg, Stefan O; Fink, Christian

    2009-09-01

    Time-resolved pulmonary perfusion MRI requires both high temporal and spatial resolution, which can be achieved by using several nonconventional k-space acquisition techniques. The aim of this study is to compare the image quality of time-resolved 3D pulmonary perfusion MRI with different k-space acquisition techniques in healthy volunteers at 1.5 and 3 T. Ten healthy volunteers underwent contrast-enhanced time-resolved 3D pulmonary MRI at 1.5 and 3 T using the following k-space acquisition techniques: (a) generalized autocalibrating partially parallel acquisition (GRAPPA) with an internal acquisition of reference lines (IRS), (b) GRAPPA with a single "external" acquisition of reference lines (ERS) before the measurement, and (c) a combination of GRAPPA with an internal acquisition of reference lines and view sharing (VS). The spatial resolution was kept constant at both field strengths to exclusively evaluate the influence of the temporal resolution achieved with the different k-space sampling techniques on image quality. The temporal resolutions were 2.11 seconds IRS, 1.31 seconds ERS, and 1.07 seconds VS at 1.5 T, and 2.04 seconds IRS, 1.30 seconds ERS, and 1.19 seconds VS at 3 T. Image quality was rated by 2 independent radiologists with regard to signal intensity, perfusion homogeneity, artifacts (eg, wrap around, noise), and visualization of pulmonary vessels using a 3-point scale (1 = nondiagnostic, 2 = moderate, 3 = good). Furthermore, the signal-to-noise ratio in the lungs was assessed. At 1.5 T the lowest image quality (sum score: 154) was observed for the ERS technique and the highest quality for the VS technique (sum score: 201). In contrast, at 3 T images acquired with VS were hampered by strong artifacts and image quality was rated significantly inferior (sum score: 137) compared with IRS (sum score: 180) and ERS (sum score: 174). Comparing 1.5 and 3 T, in particular the overall rating of the IRS technique (sum score: 180) was very similar at both field

  19. An Implementation and Parallelization of the Scale Space Meshing Algorithm

    Directory of Open Access Journals (Sweden)

    Julie Digne

    2015-11-01

    Full Text Available Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However, in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts, or because the goal is to explore visually the data exactly as they were acquired, without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high-precision mesh from an input oriented point set. This algorithm first smooths the point set, producing a singularity-free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.
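
    A minimal sketch of the smoothing stage described above (the Ball Pivoting meshing and the final back-projection are omitted): each point is projected onto the regression plane of its nearest neighbours, a discrete approximation of mean-curvature motion. SciPy is assumed available; the neighbourhood size and iteration count are arbitrary choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def smooth_step(pts, k=12):
    """One scale-space step: project each point onto the regression
    plane of its k nearest neighbours."""
    idx = cKDTree(pts).query(pts, k=k + 1)[1]      # self + k neighbours
    out = np.empty_like(pts)
    for i, nb in enumerate(idx):
        c = pts[nb].mean(axis=0)
        # local PCA: the normal is the direction of least variance
        normal = np.linalg.svd(pts[nb] - c, full_matrices=False)[2][-1]
        out[i] = pts[i] - np.dot(pts[i] - c, normal) * normal
    return out

rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 2000)                # noisy cylinder-like shape
pts = np.c_[np.cos(t), np.sin(t), rng.uniform(0, 1, 2000)]
pts += 0.03 * rng.standard_normal(pts.shape)
smoothed = pts
for _ in range(3):                                 # a few scales suffice here
    smoothed = smooth_step(smoothed)
# meshing (Ball Pivoting) would run on `smoothed`; each vertex is then
# back-projected onto its closest original point to interpolate the data
```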

  20. Parallel discrete ordinates algorithms on distributed and common memory systems

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.; Brickner, R.G.

    1987-01-01

    The S_n algorithm employs iterative techniques in solving the linear Boltzmann equation. These methods, both ordered and chaotic, were compared on both the Denelcor HEP and the Intel hypercube. Strategies are linked to the organization and accessibility of memory (common-memory versus distributed-memory architectures), with common concern for the acquisition of global information. Apart from this, the inherent parallelism of the algorithm maps directly onto the two architectures. Results comparing execution times, speedup, and efficiency are based on a representative 16-group (full upscatter and downscatter) sample problem. Calculations were performed on both the Los Alamos National Laboratory (LANL) Denelcor HEP and the LANL Intel hypercube. The Denelcor HEP is a 64-bit multiple-instruction, multiple-data (MIMD) machine consisting of up to 16 process execution modules (PEMs), each capable of executing 64 processes concurrently. Each PEM can cooperate on a job, or run several unrelated jobs, and share a common global memory through a crossbar switch. The Intel hypercube, on the other hand, is a distributed-memory system composed of 128 processing elements, each with its own local memory. Processing elements are connected in a nearest-neighbor hypercube configuration, and sharing of data among processors requires execution of explicit message-passing constructs.
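
    For readers unfamiliar with the structure being parallelized, here is a serial, one-group, 1-D slab source-iteration sketch with an S2 quadrature; the cross-sections and mesh values are illustrative only. The per-ordinate sweeps are independent of one another, which is the parallelism the paper maps onto both machines.

```python
import numpy as np

nx, dx = 100, 0.1                         # mesh cells, cell width
sig_t, sig_s, q = 1.0, 0.5, 1.0           # total, scattering, fixed source
mus, wts = np.array([-0.57735, 0.57735]), np.array([1.0, 1.0])   # S2 set

phi = np.zeros(nx)
for it in range(200):                     # source (scattering) iteration
    src = 0.5 * (sig_s * phi + q)         # isotropic emission density
    phi_new = np.zeros(nx)
    for mu, w in zip(mus, wts):           # each ordinate sweeps independently,
        psi_in = 0.0                      # which is the exploitable parallelism
        cells = range(nx) if mu > 0 else range(nx - 1, -1, -1)
        a = abs(mu) / dx
        for i in cells:                   # ordered diamond-difference sweep
            psi_c = (src[i] + 2 * a * psi_in) / (sig_t + 2 * a)
            psi_in = 2 * psi_c - psi_in   # outgoing flux feeds the next cell
            phi_new[i] += w * psi_c
    diff = np.abs(phi_new - phi).max()
    phi = phi_new
    if diff < 1e-8:
        break
print("iterations:", it + 1, " midline flux:", round(phi[nx // 2], 4))
```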

  1. Development of imaging and reconstructions algorithms on parallel processing architectures for applications in non-destructive testing

    International Nuclear Information System (INIS)

    Pedron, Antoine

    2013-01-01

    This thesis work lies between the scientific domain of ultrasound non-destructive testing and algorithm-architecture matching. Ultrasound non-destructive testing comprises a group of analysis techniques used in science and industry to evaluate the properties of a material, component, or system without causing damage. In order to characterise possible defects, determining their position, size, and shape, imaging and reconstruction tools have been developed at CEA-LIST, within the CIVA software platform. The evolution of acquisition sensors implies a continuous growth of datasets, and consequently more and more computing power is needed to maintain interactive reconstructions. General-purpose processors (GPP) evolving towards parallelism and emerging architectures such as GPUs offer large acceleration possibilities that can be applied to these algorithms. The main goal of the thesis is to evaluate the acceleration that can be obtained for two reconstruction algorithms on these architectures. These two algorithms differ in their parallelization scheme. The first one can be properly parallelized on GPP, whereas on GPU an intensive use of atomic instructions is required. Within the second algorithm, parallelism is easier to express, but loop ordering on GPP, as well as thread scheduling and a good use of shared memory on GPU, are necessary in order to obtain efficient results. Different APIs and libraries, such as OpenMP, CUDA and OpenCL, are evaluated through chosen benchmarks. An integration of both algorithms in the CIVA software platform is proposed, and different issues related to code maintenance and durability are discussed. (author) [fr]

  2. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, such as CT/MR image reconstruction or DSA real-time subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on the way to find the appropriate algorithms. Finally, some results on the computation time and usefulness of median filtering in radiographic imaging are given.
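
    A compact NumPy version of the windowed-sort view of median filtering described above (it assumes NumPy 1.20+ for sliding_window_view); selecting a different rank from each sorted window gives the rank-order generalization.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def rank_filter(img, size=3, rank=None):
    """Median filter as a full sort of each window; other `rank`
    values give the rank-order operators mentioned above."""
    if rank is None:
        rank = (size * size) // 2              # middle rank = median
    pad = size // 2
    win = sliding_window_view(np.pad(img, pad, mode="edge"), (size, size))
    return np.sort(win.reshape(*img.shape, size * size), axis=-1)[..., rank]

img = np.random.default_rng(2).integers(0, 255, (64, 64)).astype(float)
img[10, 10] = 1e4                              # isolated impulse artifact
print(rank_filter(img)[10, 10] < 300)          # True: spike removed
```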

  3. An in vitro biomechanical comparison of equine proximal interphalangeal joint arthrodesis techniques: an axial positioned dynamic compression plate and two abaxial transarticular cortical screws inserted in lag fashion versus three parallel transarticular cortical screws inserted in lag fashion.

    Science.gov (United States)

    Sod, Gary A; Riggs, Laura M; Mitchell, Colin F; Hubert, Jeremy D; Martin, George S

    2010-01-01

    To compare in vitro monotonic biomechanical properties of an axial 3-hole, 4.5 mm narrow dynamic compression plate (DCP) using 5.5 mm cortical screws in conjunction with 2 abaxial transarticular 5.5 mm cortical screws inserted in lag fashion (DCP-TLS) with 3 parallel transarticular 5.5 mm cortical screws inserted in lag fashion (3-TLS) for equine proximal interphalangeal (PIP) joint arthrodesis. Paired in vitro biomechanical testing of 2 methods of stabilizing cadaveric adult equine forelimb PIP joints. Cadaveric adult equine forelimbs (n=15 pairs). For each forelimb pair, 1 PIP joint was stabilized with an axial 3-hole narrow DCP (4.5 mm) using 5.5 mm cortical screws in conjunction with 2 abaxial transarticular 5.5 mm cortical screws inserted in lag fashion, and 1 with 3 parallel transarticular 5.5 mm cortical screws inserted in lag fashion. Five matching pairs of constructs were tested in single cycle to failure under axial compression, 5 construct pairs were tested for cyclic fatigue under axial compression, and 5 construct pairs were tested in single cycle to failure under torsional loading. Mean values for each fixation method were compared using a paired t-test within each group, with statistical significance set at P < .05. In single-cycle-to-failure testing, mean values for the DCP-TLS fixation were significantly greater than those of the 3-TLS fixation. Mean cycles to failure in axial compression of the DCP-TLS fixation were significantly greater than those of the 3-TLS fixation. The DCP-TLS was superior to the 3-TLS in resisting static overload forces and in resisting cyclic fatigue. The results of this in vitro study may provide information to aid in the selection of a treatment modality for arthrodesis of the equine PIP joint.

  4. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H, E-mail: tpeterka@mcs.anl.go [Sandia National Laboratories, California, Livermore, CA 94551 (United States)

    2009-07-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using techniques similar to those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  5. Parallel optoelectronic trinary signed-digit division

    Science.gov (United States)

    Alam, Mohammad S.

    1999-03-01

    The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of operands of arbitrary length in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator-based architecture is suggested for the implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of the space-bandwidth product of the spatial light modulators used in the optoelectronic implementation.
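
    The abstract does not spell out the recurrence, but a classical division scheme with exactly the cited operation mix (one subtraction and two multiplications per step) is Newton-Raphson reciprocal iteration, sketched below with ordinary floats standing in for the TSD operands of the optoelectronic design.

```python
def divide(n, d, iters=6):
    """Divide via the reciprocal recurrence x <- x * (2 - d * x):
    one subtraction and two multiplications per step."""
    assert 0.5 <= d < 1.0                  # scale d into [0.5, 1) beforehand
    x = 48.0 / 17.0 - (32.0 / 17.0) * d    # classical linear seed for 1/d
    for _ in range(iters):
        x = x * (2.0 - d * x)              # error squares every step
    return n * x

print(divide(0.7, 0.9), 0.7 / 0.9)         # both ~0.77778
```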

  6. Parallel visualization on leadership computing resources

    International Nuclear Information System (INIS)

    Peterka, T; Ross, R B; Shen, H-W; Ma, K-L; Kendall, W; Yu, H

    2009-01-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using techniques similar to those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  7. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
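
    In the spirit of those tables, a few lines of code can enumerate resistor pairs whose parallel combination is a whole number of ohms:

```python
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)             # from 1/R = 1/R1 + 1/R2

# pairs up to 50 ohms whose parallel total is a whole number
pairs = [(r1, r2) for r1 in range(1, 51) for r2 in range(r1, 51)
         if (r1 * r2) % (r1 + r2) == 0]
for r1, r2 in pairs[:5]:
    print(f"{r1} || {r2} = {parallel(r1, r2):.0f} ohms")
```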

  8. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of this research is tooling to support the development of parallel programs in C/C++. Methods and software which automate the process of designing parallel applications are proposed.

  9. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
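
    List ranking itself is easy to state; below is a Wyllie-style pointer-jumping sketch, with NumPy's bulk indexing standing in for the synchronous parallel step (the example list is arbitrary).

```python
import numpy as np

def list_rank(nxt):
    """nxt[i] = successor of node i; the tail points to itself.
    Returns each node's distance to the tail (Wyllie's algorithm)."""
    nxt = nxt.copy()
    rank = (nxt != np.arange(len(nxt))).astype(int)
    while np.any(nxt != nxt[nxt]):
        rank = rank + rank[nxt]            # every node updates at once
        nxt = nxt[nxt]                     # pointer doubling: O(log n) rounds
    return rank

# list 3 -> 1 -> 4 -> 2 -> 0 -> 5 -> 6 (tail)
nxt = np.array([5, 4, 0, 1, 2, 6, 6])
print(list_rank(nxt))                      # [2 5 3 6 4 1 0]
```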

  10. Simple smoothing technique to reduce data scattering in physics experiments

    International Nuclear Information System (INIS)

    Levesque, L

    2008-01-01

    This paper describes an experiment involving motorized motion and a method to reduce scatter in acquired data. Jitter, or minute instrumental vibrations, adds noise to a detected signal, which often renders small modulations of a graph very difficult to interpret. Here we describe a method to reduce scattering amongst data points from the signal measured by a photodetector that is motorized and scanned in a direction parallel to the plane of a rectangular slit during a computer-controlled diffraction experiment. The smoothing technique is investigated using subsets of many data points from the data acquisition. A limit for the number of data points in a subset is determined from the results, based on the trend of the small measured signal, to avoid severe changes in the shape of the signal from the averaging procedure. This simple smoothing method can be achieved using any type of spreadsheet software.
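
    The subset averaging described above amounts to a moving average; a small sketch, with window lengths chosen arbitrarily to show the trade-off the limit guards against:

```python
import numpy as np

def smooth(y, n=7):                        # n = data points per subset
    return np.convolve(y, np.ones(n) / n, mode="same")

rng = np.random.default_rng(3)
x = np.linspace(0, 4 * np.pi, 400)
clean = np.sinc(x - 2 * np.pi)
noisy = clean + 0.05 * rng.standard_normal(x.size)
for n in (3, 7, 15, 51):                   # too large a subset reshapes the peak
    print(n, round(np.abs(smooth(noisy, n) - clean).max(), 3))
```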

  11. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Parallel channel interactions are examined. Results of experimental investigations of nonstationary flow regimes in three parallel vertical channels are presented, together with an analysis of the phenomena and the mechanisms of parallel channel interaction under adiabatic conditions for single-phase fluid and two-phase mixture flow. (author)

  12. A progress report of the switch-based data acquisition system prototype project and the application of switches from industry to high-energy physics event building

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.

    1990-01-01

    A prototype of a data acquisition system based on a new scalable, highly-parallel, open-system architecture is being developed at Fermilab. The major component of the new architecture, the parallel event builder, is based on a telecommunications industry technique used in the implementation of switching systems, a barrel-shift switch. The architecture is scalable both in the expandability of the number of input channels and in the throughput of the system. Because of its scalability, the system is well suited for low to high-rate experiments, test beams and all SSC detectors. The architecture is open in that as new technologies are developed and made into commercial products (e.g., arrays of processors and workstations and standard data links), these new products can be easily integrated into the system with minimal system modifications and no modifications to the system's basic architecture. Scalability and openness should guarantee that the data acquisition system does not become obsolete during the lifetime of the experiment. The paper first gives a description of the architecture and the prototype project and then details both the prototype project's software and hardware status including details of some architecture simulation studies. Suggestions for future R and D work on the new data acquisition system architecture are then described. The paper concludes by examining interconnection networks from industry and their application to event building and to other areas of high-energy physics data acquisition systems
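
    A toy model of the barrel-shift idea, assuming N sources and N destinations and ignoring all real timing and buffering: in slot t, source i sends to destination (i + t) mod N, so the destinations form a permutation in every slot and no output port is contended.

```python
N = 4                                       # sources = destinations = N
events = {d: [] for d in range(N)}          # destination -> fragments
for t in range(N):                          # one fragment per source per slot
    for src in range(N):
        dst = (src + t) % N                 # barrel shift by t
        events[dst].append(src)             # fragment of event `dst` from src
for dst, frags in events.items():
    assert sorted(frags) == list(range(N))  # one fragment from every source
print("all", N, "events assembled with no two sources colliding in a slot")
```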

  13. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  14. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  15. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  16. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...
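
    The packing idea can be imitated in NumPy: a structure-of-arrays layout lets one arithmetic expression process all tracks at once instead of one per loop iteration. The "propagation" formula below is a toy stand-in, not the actual Kalman-filter fit.

```python
import numpy as np, time

n = 200_000                                  # tracks
x, tx, dz = (np.random.rand(n) for _ in range(3))

t0 = time.perf_counter()
out = np.empty(n)                            # scalar: one track per iteration
for i in range(n):
    out[i] = x[i] + tx[i] * dz[i]
t1 = time.perf_counter()
out_v = x + tx * dz                          # packed: all tracks per operation
t2 = time.perf_counter()

assert np.allclose(out, out_v)
print(f"loop: {t1 - t0:.3f}s   vectorized: {t2 - t1:.4f}s")
```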

  17. Acquisition Research Program Homepage

    OpenAIRE

    2015-01-01

    Includes an image of the main page on this date and compressed file containing additional web pages. Established in 2003, Naval Postgraduate School’s (NPS) Acquisition Research Program provides leadership in innovation, creative problem solving and an ongoing dialogue, contributing to the evolution of Department of Defense acquisition strategies.

  18. Making Acquisition Measurable

    Science.gov (United States)

    2011-04-30

    End-user roles: Administrator/Maintainer (A/M), Subject Matter Expert (SME), Trainer/Instructor, Manager, Evaluator, Supervisor. Capability Maturity Model Integration (CMMI) - Acquisition (AQ). CMMI-Development: incremental iterative development (planning and execution) objectives. Constructing games highlighting particular aspects of the proposed CCOD® acquisition, and conducting exercises with Subject Matter Experts (SMEs).

  19. An embedded control and acquisition system for multichannel detectors

    International Nuclear Information System (INIS)

    Gori, L.; Tommasini, R.; Cautero, G.; Giuressi, D.; Barnaba, M.; Accardo, A.; Carrato, S.; Paolucci, G.

    1999-01-01

    We present a pulse-counting multichannel data acquisition system, characterized by its large number of high-speed acquisition channels and by its modular, embedded system architecture. The former leads to very fast acquisitions and allows sequences of snapshots to be obtained for the study of time-dependent phenomena. The latter, thanks to the integration of a CPU into the system, provides high computational capabilities, so that interfacing with the user computer is very simple and user-friendly. Moreover, the user computer is freed from control and acquisition tasks. The system has been developed for one of the beamlines of the third-generation synchrotron radiation source ELETTRA and, because of its modular architecture, can be useful in various other kinds of experiments where parallel acquisition, high data rates, and user-friendliness are required. First experimental results on a double-pass hemispherical electron analyser provided with a 96-channel detector confirm the validity of the approach. (author)

  20. Integrative Dynamic Reconfiguration in a Parallel Stream Processing Engine

    DEFF Research Database (Denmark)

    Madsen, Kasper Grud Skat; Zhou, Yongluan; Cao, Jianneng

    2017-01-01

    Load balancing, operator instance collocations and horizontal scaling are critical issues in Parallel Stream Processing Engines to achieve low data processing latency, optimized cluster utilization and minimized communication cost, respectively. In previous work, these issues are typically tackled...... solution called ALBIC, which supports general jobs. We implement the proposed techniques on top of Apache Storm, an open-source Parallel Stream Processing Engine. The extensive experimental results over both synthetic and real datasets show that our techniques clearly outperform existing approaches.

  1. Mergers and Acquisitions

    DEFF Research Database (Denmark)

    Risberg, Annette

    Introduction to the study of mergers and acquisitions. This book provides an understanding of the mergers and acquisitions process, how and why they occur, and also the broader implications for organizations. It presents issues including motives and planning, partner selection, integration......, employee experiences and communication. Mergers and acquisitions remain one of the most common forms of growth, yet they present considerable challenges for the companies and management involved. The effects on stakeholders, including shareholders, managers and employees, must be considered as well...... by editorial commentaries and reflects the important organizational and behavioural aspects which have often been ignored in the past. By providing this in-depth understanding of the mergers and acquisitions process, the reader understands not only how and why mergers and acquisitions occur, but also...

  2. Data Acquisition System

    International Nuclear Information System (INIS)

    Cirstea, C.D.; Buda, S.I.; Constantin, F.

    2005-01-01

    This paper deals with a multi-parametric acquisition system developed for a four-input Analog-to-Digital Converter working in the CAMAC standard. The acquisition software is built in MS Visual C++ on a standard PC with a USB interface. It has a visual interface which permits Start/Stop of the acquisition, setting the type of acquisition (True/Live time) and the time, and various menus for primary data acquisition. The spectrum is dynamically visualized, with a moving cursor indicating the content and position. The microcontroller PIC16C765 is used for data transfer from the ADC to the PC; the microcontroller and the software create an embedded system which emulates the CAMAC protocol, programming the four-input ADC for its operating modes ('zero suppression', 'addressed' and 'sequential') and handling the data transfers from the ADC to its internal memory. From its memory the data is transferred to the PC over the USB interface. The work is in progress. (authors)

  3. Data acquisition system

    International Nuclear Information System (INIS)

    Cirstea, D.C.; Buda, S.I.; Constantin, F.

    2005-01-01

    The topic of this paper is a multi-parametric acquisition system developed around a four-input Analog-to-Digital Converter working in the CAMAC standard. The acquisition software is built in MS Visual C++ on a standard PC with a USB interface. It has a visual interface which permits Start/Stop of the acquisition, setting the type of acquisition (True/Live time) and the time, and various menus for primary data acquisition. The spectrum is dynamically visualized, with a moving cursor indicating the content and position. The microcontroller PIC16C765 is used for data transfer from the ADC to the PC; the microcontroller and the software create an embedded system which emulates the CAMAC protocol, programming the four-input ADC for its operating modes ('zero suppression', 'addressed' and 'sequential') and handling the data transfers from the ADC to its internal memory. From its memory the data is transferred to the PC over the USB interface. The work is in progress. (authors)

  4. MRI of degenerative lumbar spine disease: comparison of non-accelerated and parallel imaging

    International Nuclear Information System (INIS)

    Noelte, Ingo; Gerigk, Lars; Brockmann, Marc A.; Kemmling, Andre; Groden, Christoph

    2008-01-01

    Parallel imaging techniques such as GRAPPA have been introduced to optimize image quality and acquisition time. For spinal imaging in a clinical setting no data exist on the equivalency of conventional and parallel imaging techniques. The purpose of this study was to determine whether T1- and T2-weighted GRAPPA sequences are equivalent to conventional sequences for the evaluation of degenerative lumbar spine disease in terms of image quality and artefacts. In patients with clinically suspected degenerative lumbar spine disease two neuroradiologists independently compared sagittal GRAPPA (acceleration factor 2, time reduction approximately 50%) and non-GRAPPA images (25 patients) and transverse GRAPPA (acceleration factor 2, time reduction approximately 50%) and non-GRAPPA images (23 lumbar segments in six patients). Comparative analyses included the minimal diameter of the spinal canal, disc abnormalities, foraminal stenosis, facet joint degeneration, lateral recess, nerve root compression and osteochondrotic vertebral and endplate changes. Image inhomogeneity was evaluated by comparing the nonuniformity in the two techniques. Image quality was assessed by grading the delineation of pathoanatomical structures. Motion and aliasing artefacts were classified from grade 1 (severe) to grade 5 (absent). There was no significant difference between GRAPPA and non-accelerated MRI in the evaluation of degenerative lumbar spine disease (P > 0.05), and there was no difference in the delineation of pathoanatomical structures. For inhomogeneity there was a trend in favour of the conventional sequences. No significant artefacts were observed with either technique. The GRAPPA technique can be used effectively to reduce scanning time in patients with degenerative lumbar spine disease while preserving image quality. (orig.)

  5. On the Automatic Parallelization of Sparse and Irregular Fortran Programs

    Directory of Open Access Journals (Sweden)

    Yuan Lin

    1999-01-01

    Full Text Available Automatic parallelization is usually believed to be less effective at exploiting implicit parallelism in sparse/irregular programs than in their dense/regular counterparts. However, not much is really known because there have been few research reports on this topic. In this work, we have studied the possibility of using an automatic parallelizing compiler to detect the parallelism in sparse/irregular programs. The study with a collection of sparse/irregular programs led us to some common loop patterns. Based on these patterns new techniques were derived that produced good speedups when manually applied to our benchmark codes. More importantly, these parallelization methods can be implemented in a parallelizing compiler and can be applied automatically.
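
    One of the common patterns this line of work targets is the irregular reduction, a scatter-add through an index array. A hedged sketch of the standard parallelization, privatizing the reduction array per worker and merging the partial results afterwards; the sizes and the choice of four workers are arbitrary.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

NBINS = 100

def partial_hist(args):
    idx, vals = args
    h = np.zeros(NBINS)                      # private copy: no races
    np.add.at(h, idx, vals)                  # the irregular scatter-add
    return h

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    idx = rng.integers(0, NBINS, 1_000_000)
    vals = rng.random(1_000_000)
    chunks = list(zip(np.array_split(idx, 4), np.array_split(vals, 4)))
    with ProcessPoolExecutor(4) as ex:
        hist = sum(ex.map(partial_hist, chunks))   # merge private copies
    assert np.allclose(hist, np.bincount(idx, weights=vals, minlength=NBINS))
    print("parallel reduction matches serial result")
```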

  6. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  7. Step by step parallel programming method for molecular dynamics code

    International Nuclear Information System (INIS)

    Orii, Shigeo; Ohta, Toshio

    1996-07-01

    Parallel programming for a numerical simulation program of molecular dynamics is carried out with a step-by-step programming technique using the two-phase method. As a result, within a certain range of computing parameters, parallel performance is obtained by using do-loop-level parallel programming, which distributes the calculation according to the indices of do-loops across processors, on the vector-parallel computer VPP500 and the scalar-parallel computer Paragon. It is also found that the VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts which cannot be reduced by do-loop-level parallel programming can be reduced to a negligible level by vectorization. After that, the time-consuming parts of the program are concentrated in fewer parts that can be accelerated by do-loop-level parallel programming. This report shows the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on the VPP500 and Paragon. (author)
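
    A rough Python analogue of the do-loop-level decomposition described above: the outer particle loop of a toy pairwise-energy kernel is split into index blocks across worker processes. The particle count, the repulsive potential, and the block count are all arbitrary choices for the demo.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

POS = np.random.default_rng(5).random((400, 3))   # fixed seed: same in workers

def block_energy(rows):
    """This block's share of the outer do-loop over particles."""
    e = 0.0
    for i in rows:
        d = POS[i + 1:] - POS[i]                  # each (i, j > i) pair once
        r2 = (d * d).sum(axis=1)
        e += (r2 ** -6).sum()                     # repulsive r^-12 term only
    return e

if __name__ == "__main__":
    blocks = np.array_split(np.arange(len(POS) - 1), 4)
    with ProcessPoolExecutor(4) as ex:
        total = sum(ex.map(block_energy, blocks))
    assert np.isclose(total, block_energy(np.arange(len(POS) - 1)))
    print("pair energy:", round(total, 2))
```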

  8. Research in Parallel Algorithms and Software for Computational Aerosciences

    Science.gov (United States)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message-passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  9. Inductive acquisition of expert knowledge

    Energy Technology Data Exchange (ETDEWEB)

    Muggleton, S.H.

    1986-01-01

    Expert systems divide neatly into two categories: those in which (1) the expert decisions result in changes to some external environment (control systems), and (2) the expert decisions merely seek to describe the environment (classification systems). Both the explanation of computer-based reasoning and the bottleneck (Feigenbaum, 1979) of knowledge acquisition are major issues in expert-systems research. The author contributed to these areas of research in two ways: 1. He implemented an expert-system shell, the Mugol environment, which facilitates knowledge acquisition by inductive inference and provides automatic explanation of run-time reasoning on demand. RuleMaster, a commercial version of this environment, was used to advantage industrially in the construction and testing of two large classification systems. 2. He investigated a new technique called 'sequence induction' that can be used in the construction of control systems. Sequence induction is based on theoretical work in grammatical learning. He improved existing grammatical learning algorithms as well as suggesting and theoretically characterizing new ones. These algorithms were successfully applied to the acquisition of knowledge for a diverse set of control systems, including the inductive construction of robot plans and chess end-game strategies.

  10. Future data acquisition at ISIS

    International Nuclear Information System (INIS)

    Pulford, W.C.A.; Quinton, S.P.H.; Johnson, M.W.; Norris, J.

    1989-01-01

    Over the past year ISIS beam intensity has increased steadily to 100 microamps during periods of good running. With the instrument users finding it comparatively easy to set up data-collection runs, we are facing an ever-increasing volume of incoming data. Greatly improved detector technology, mainly involving large areas of zinc sulfide phosphor, is expected to contribute much to the capacity of new diffractometers as well as provide an enhancement path for many of the existing ones. It is clear that we are fast reaching the point where, if we continue to use our current data-collection techniques, our computer systems will no longer be able to migrate the data to long-term storage, let alone enable their analysis at a speed compatible with continuous use of the ISIS instruments. The most effective method to improve this situation is to reduce the volume of data flowing between the data acquisition electronics and the front-end minicomputers, and to provide facilities to monitor data acquisition within the data acquisition electronics. Processing power must be incorporated closer to the point of data collection. Ways of doing this are discussed and evaluated. (author)

  11. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
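
    The serial-versus-parallel contrast can be sketched numerically: a product of rotation matrices versus a non-negative weighted sum of fixed basis states. This toy works in the reduced Stokes space (S1, S2, S3) and ignores total intensity and all device physics such as the micromirror modulation; the basis states and angles are arbitrary.

```python
import numpy as np

def rot(theta):
    """Mueller rotator acting on the reduced Stokes vector (S1, S2, S3)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

s_in = np.array([1.0, 0.0, 0.0])               # horizontal polarization
serial = rot(0.3) @ rot(0.2) @ s_in            # serial: product of matrices

# parallel: weighted sum over six fixed states (H, +45, RCP, V, -45, LCP)
basis = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [-1, 0, 0], [0, -1, 0], [0, 0, -1]], float)
w = np.r_[np.clip(serial, 0, None), np.clip(-serial, 0, None)]  # weights >= 0
parallel = basis.T @ w                         # sum of component states
assert np.allclose(parallel, serial)
print("intensity weights:", np.round(w, 3))
```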

  12. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Dynamic surface-pressure instrumentation for rods in parallel flow

    International Nuclear Information System (INIS)

    Mulcahy, T.M.; Lawrence, W.

    1979-01-01

    Methods employed and experience gained in measuring random fluid boundary layer pressures on the surface of a small diameter cylindrical rod subject to dense, nonhomogeneous, turbulent, parallel flow in a relatively noise-contaminated flow loop are described. Emphasis is placed on identification of instrumentation problems; description of transducer construction, mounting, and waterproofing; and the pretest calibration required to achieve instrumentation capable of reliable data acquisition

  14. Indexing mergers and acquisitions

    OpenAIRE

    Gang, Jianhua; Guo, Jie (Michael); Hu, Nan; Li, Xi

    2017-01-01

    We measure the efficiency of mergers and acquisitions by putting forward an index (the ‘M&A Index’) based on stochastic frontier analysis. The M&A Index is calculated for each takeover deal and is standardized between 0 and 1. An acquisition with a higher index exhibits higher efficiency. We find that takeover bids with higher M&A Indices are more likely to succeed. Moreover, the M&A Index shows a strong and positive relation with the acquirers’ post-acquisition stock perfo...

  15. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of parallel processing can be solved, at least at the level of imperative languages. The results obtained so far are not commensurate with the effort invested. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  16. ENHANCING THE INTERNATIONALIZATION OF THE GLOBAL INSURANCE MARKET: CHANGING DRIVERS OF MERGERS AND ACQUISITIONS

    Directory of Open Access Journals (Sweden)

    D. Rasshyvalov

    2014-03-01

    Full Text Available One-third of worldwide mergers and acquisitions involve firms from different countries, making M&A one of the key drivers of internationalization. Over the past five years, cross-border merger and acquisition activity in insurance has paralleled the deep global financial crisis.

  17. Bootstrapping language acquisition.

    Science.gov (United States)

    Abend, Omri; Kwiatkowski, Tom; Smith, Nathaniel J; Goldwater, Sharon; Steedman, Mark

    2017-07-01

    The semantic bootstrapping hypothesis proposes that children acquire their native language through exposure to sentences of the language paired with structured representations of their meaning, whose component substructures can be associated with words and syntactic structures used to express these concepts. The child's task is then to learn a language-specific grammar and lexicon based on (probably contextually ambiguous, possibly somewhat noisy) pairs of sentences and their meaning representations (logical forms). Starting from these assumptions, we develop a Bayesian probabilistic account of semantically bootstrapped first-language acquisition in the child, based on techniques from computational parsing and interpretation of unrestricted text. Our learner jointly models (a) word learning: the mapping between components of the given sentential meaning and lexical words (or phrases) of the language, and (b) syntax learning: the projection of lexical elements onto sentences by universal construction-free syntactic rules. Using an incremental learning algorithm, we apply the model to a dataset of real syntactically complex child-directed utterances and (pseudo) logical forms, the latter including contextually plausible but irrelevant distractors. Taking the Eve section of the CHILDES corpus as input, the model simulates several well-documented phenomena from the developmental literature. In particular, the model exhibits syntactic bootstrapping effects (in which previously learned constructions facilitate the learning of novel words), sudden jumps in learning without explicit parameter setting, acceleration of word-learning (the "vocabulary spurt"), an initial bias favoring the learning of nouns over verbs, and one-shot learning of words and their meanings. The learner thus demonstrates how statistical learning over structured representations can provide a unified account for these seemingly disparate phenomena. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Acquisition Workforce Annual Report 2006

    Data.gov (United States)

    General Services Administration — This is the Federal Acquisition Institute's (FAI's) Annual demographic report on the Federal acquisition workforce, showing trends by occupational series, employment...

  19. Acquisition Workforce Annual Report 2008

    Data.gov (United States)

    General Services Administration — This is the Federal Acquisition Institute's (FAI's) Annual demographic report on the Federal acquisition workforce, showing trends by occupational series, employment...

  20. A Survey of Model-based Sensor Data Acquisition and Management

    OpenAIRE

    Aggarwal, Charu C.; Sathe, Saket; Papaioannou, Thanasis; Jeung, Hoyoung; Aberer, Karl

    2013-01-01

    In recent years, due to the proliferation of sensor networks, there has been a genuine need for research into techniques for sensor data acquisition and management. To this end, a large number of techniques have emerged that advocate model-based sensor data acquisition and management. These techniques use mathematical models for performing the various day-to-day tasks involved in managing sensor data. In this chapter, we survey the state-of-the-art techniques for model-based sensor data acquisition...

  1. The Acquisition of Particles

    African Journals Online (AJOL)

    process of language acquisition on the basis of linguistic evidence the child is exposed to. ..... particle verbs are recognized in language processing differs from the way morphologically ..... In Natural Language and Linguistic Theory 11.

  2. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes the work of an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles, and it must be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.

  3. Parallel 3-D method of characteristics in MPACT

    International Nuclear Information System (INIS)

    Kochunas, B.; Downar, T. J.; Liu, Z.

    2013-01-01

    A new parallel 3-D MOC kernel has been developed and implemented in MPACT which makes use of the modular ray tracing technique to reduce computational requirements and to facilitate parallel decomposition. The parallel model makes use of both distributed and shared memory parallelism, which are implemented with the MPI and OpenMP standards, respectively. The kernel is capable of parallel decomposition of problems in space, angle, and by characteristic rays up to O(10^4) processors. Initial verification of the parallel 3-D MOC kernel was performed using the Takeda 3-D transport benchmark problems. The eigenvalues computed by MPACT are within the statistical uncertainty of the benchmark reference and agree well with the averages of other participants. The MPACT k-eff differs from the benchmark results for the rodded and un-rodded cases by 11 and -40 pcm, respectively. The calculations were performed for various numbers of processors and parallel decompositions up to 15,625 processors, all producing the same result at convergence. The parallel efficiency of the worst case was 60%, while very good efficiency (>95%) was observed for cases using 500 processors. The overall run time for the 500-processor case was 231 seconds, and 19 seconds for the case with 15,625 processors. Ongoing work is focused on developing theoretical performance models and the implementation of acceleration techniques to minimize the number of iterations to converge. (authors)

  4. Extended data acquisition support at GSI

    International Nuclear Information System (INIS)

    Marinescu, D.C.; Busch, F.; Hultzsch, H.; Lowsky, J.; Richter, M.

    1984-01-01

    The Experiment Data Acquisition and Analysis System (EDAS) of GSI, designed to support the data processing associated with nuclear physics experiments, provides three modes of operation: real-time, interactive replay and batch replay. The real-time mode is used for data acquisition and data analysis during an experiment performed at the heavy ion accelerator at GSI. An experiment may be performed either in Stand Alone Mode, using only the Experiment Computers, or in Extended Mode using all computing resources available. The Extended Mode combines the advantages of the real-time response of a dedicated minicomputer with the availability of computing resources in a large computing environment. This paper first gives an overview of EDAS and presents the GSI High Speed Data Acquisition Network. Data Acquisition Modes and the Extended Mode are then introduced. The structure of the system components, their implementation and the functions pertinent to the Extended Mode are presented. The control functions of the Experiment Computer sub-system are discussed in detail. Two aspects of the design of the sub-system running on the mainframe are stressed, namely the use of a multi-user installation for real-time processing and the use of a high level programming language, PL/I, as an implementation language for a system which uses parallel processing. The experience accumulated is summarized in a number of conclusions

  5. An original approach to data acquisition: CHADAC

    International Nuclear Information System (INIS)

    Huppert, M.; Nayman, P.; Rivoal, M.

    1981-01-01

    Many labs try to boost existing data acquisition systems by inserting high-performance intelligent devices at the important nodes of the system's structure. This strategy finds its limits in the system's architecture. The CHADAC project proposes a simple and efficient solution to this problem, using a multiprocessor modular architecture. CHADAC's main features are: a) parallel acquisition of data: CHADAC is fast; it dedicates one processor per branch; each processor can read and store one 16-bit word in 800 ns. b) Original structure: each processor can work in its own private memory, in its own shared memory (double access) and in the shared memory of any other processor (this feature being particularly useful for avoiding wasteful data transfers); simple and fast communications between processors are also provided by local DMAs. c) Flexibility: each processor is autonomous and may be used as an independent acquisition system for a branch by connecting local peripherals to it; the addition of fast trigger logic is also possible. By its architecture and performance, CHADAC is designed to provide good support for local intelligent devices and transfer operators developed elsewhere, providing a way to implement systems well fitted to various types of data acquisition. (orig.)

  6. Professional Parallel Programming with C# Master Parallel Extensions with NET 4

    CERN Document Server

    Hillar, Gastón

    2010-01-01

    Expert guidance for those programming today's dual-core processor PCs. As PC processors explode from one or two to now eight processors, there is an urgent need for programmers to master concurrent programming. This book dives deep into the latest technologies available to programmers for creating professional parallel applications using C#, .NET 4, and Visual Studio 2010. The book covers task-based programming, coordination data structures, PLINQ, thread pools, the asynchronous programming model, and more. It also teaches other parallel programming techniques, such as SIMD and vectorization. Teach...

  7. Cross-border Mergers and Acquisitions

    DEFF Research Database (Denmark)

    Wang, Daojuan

    This paper focuses on three topics in the cross-border mergers and acquisitions (CBM&As) field: motivations for CBM&As, valuation techniques, and CBM&A performance (assessment and its determinants). By taking an overview of what has been found so far in the academic field and investigating...

  8. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  9. Neuroimaging and Research into Second Language Acquisition

    Science.gov (United States)

    Sabourin, Laura

    2009-01-01

    Neuroimaging techniques are becoming not only more and more sophisticated but are also coming to be increasingly accessible to researchers. One thing that one should take note of is the potential of neuroimaging research within second language acquisition (SLA) to contribute to issues pertaining to the plasticity of the adult brain and to general…

  10. Accelerated cardiovascular magnetic resonance of the mouse heart using self-gated parallel imaging strategies does not compromise accuracy of structural and functional measures

    Directory of Open Access Journals (Sweden)

    Dörries Carola

    2010-07-01

    Full Text Available Abstract Background Self-gated dynamic cardiovascular magnetic resonance (CMR) enables non-invasive visualization of the heart and accurate assessment of cardiac function in mouse models of human disease. However, self-gated CMR requires the acquisition of large datasets to ensure accurate and artifact-free reconstruction of cardiac cines and is therefore hampered by long acquisition times, putting high demands on the physiological stability of the animal. For this reason, we evaluated the feasibility of accelerating the data collection using the parallel imaging technique SENSE with respect to both anatomical definition and cardiac function quantification. Results Findings obtained from accelerated datasets were compared to fully sampled reference data. Our results revealed only minor differences in image quality of short- and long-axis cardiac cines: small anatomical structures (papillary muscles and the aortic valve) and left-ventricular (LV) remodeling after myocardial infarction (MI) were accurately detected even for 3-fold accelerated data acquisition using a four-element phased-array coil. Quantitative analysis of LV cardiac function (end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF), and LV mass) in healthy and infarcted animals revealed no substantial deviations from reference (fully sampled) data for all investigated acceleration factors, with deviations ranging from 2% to 6% in healthy animals and from 2% to 8% in infarcted mice for the highest acceleration factor of 3.0. CNR calculations performed between the LV myocardial wall and the LV cavity revealed a maximum CNR decrease of 50% for the 3-fold accelerated data acquisition when compared to the fully sampled acquisition. Conclusions We have demonstrated the feasibility of accelerated self-gated retrospective CMR in mice using the parallel imaging technique SENSE. The proposed method led to considerably reduced acquisition times, while preserving high
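
    For orientation, a minimal Cartesian SENSE unfolding sketch for R = 2 in one dimension, with invented coil sensitivities and the aliasing modelled directly as the sum of the two overlapping half-FOV pixels; clinical reconstructions estimate the sensitivities from reference data and work in two or three dimensions.

```python
import numpy as np

nx, R = 128, 2
x = np.linspace(-1, 1, nx)
obj = (np.abs(x) < 0.6) * (1.2 - x**2)                 # 1D phantom
sens = np.stack([np.exp(-(x - 0.4)**2),                # two invented coils
                 np.exp(-(x + 0.4)**2)])

coil = sens * obj
alias = coil[:, :nx // 2] + coil[:, nx // 2:]          # R=2: pixel pairs overlap

recon = np.zeros(nx)
for p in range(nx // 2):
    S = sens[:, [p, p + nx // 2]]                      # 2x2 sensitivity matrix
    rho = np.linalg.lstsq(S, alias[:, p], rcond=None)[0]
    recon[[p, p + nx // 2]] = rho                      # unfold the pixel pair
print("max reconstruction error:", np.abs(recon - obj).max())
```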

  11. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti… about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in the case of nucleobases Y and Z, a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one… of the phosphate groups in the Watson-Crick strand. Also, it was shown that the nucleobase Y made good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, which…

  12. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.

  13. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D

    2003-01-01

    .... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...

  14. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  15. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    …adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral…

  16. Acoustic window planning for ultrasound acquisition.

    Science.gov (United States)

    Göbl, Rüdiger; Virga, Salvatore; Rackerseder, Julia; Frisch, Benjamin; Navab, Nassir; Hennersperger, Christoph

    2017-06-01

    Autonomous robotic ultrasound has recently gained considerable interest, especially for collaborative applications. Existing methods for acquisition trajectory planning are solely based on geometrical considerations, such as the pose of the transducer with respect to the patient surface. This work aims at establishing acoustic window planning to enable autonomous ultrasound acquisitions of anatomies with restricted acoustic windows, such as the liver or the heart. We propose a fully automatic approach for the planning of acquisition trajectories, which only requires information about the target region as well as existing tomographic imaging data, such as X-ray computed tomography. The framework integrates both geometrical and physics-based constraints to estimate the best ultrasound acquisition trajectories with respect to the available acoustic windows. We evaluate the developed method using virtual planning scenarios based on real patient data as well as for real robotic ultrasound acquisitions on a tissue-mimicking phantom. The proposed method yields superior image quality in comparison with a naive planning approach, while maintaining the necessary coverage of the target. We demonstrate that, by taking image formation properties into account, acquisition planning methods can outperform naive ones. Furthermore, we show the need for such planning techniques, since naive approaches are not sufficient as they do not take the expected image quality into account.

  17. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.

    2014-01-01

    A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Nearly 60% parallel efficiency is achieved for the maximum testing case with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various testing cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the particle size distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles. © 2014 Kun Zhou et al.
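
    The stochastic half of such an operator-splitting scheme can be illustrated with a direct, Gillespie-style simulation of the Marcus-Lushnikov coagulation process. The following toy sketch (additive kernel, serial rather than MPI, O(n^2) per event) mirrors only the idea, not the paper's algorithm:

        import numpy as np

        rng = np.random.default_rng(0)

        def coagulation_event(v, kernel):
            """One event of the Marcus-Lushnikov process: wait an exponential
            time, pick an unordered pair (i, j) with probability proportional
            to K(v_i, v_j), and merge the two particles."""
            K = kernel(v[:, None], v[None, :])       # pairwise rate matrix
            iu = np.triu_indices(len(v), k=1)        # unordered pairs only
            rates = K[iu]
            total = rates.sum()
            dt = rng.exponential(1.0 / total)        # waiting time to next event
            pick = rng.choice(len(rates), p=rates / total)
            i, j = iu[0][pick], iu[1][pick]
            v[i] += v[j]                             # merge j into i
            return np.delete(v, j), dt

        # additive kernel K(a, b) = a + b; 200 monomers coagulating to 50 particles
        v, t = np.ones(200), 0.0
        while len(v) > 50:
            v, dt = coagulation_event(v, lambda a, b: a + b)
            t += dt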

  18. Design strategies for irregularly adapting parallel applications

    International Nuclear Information System (INIS)

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Singh, Jaswinder Pal

    2000-01-01

    Achieving scalable performance for dynamic irregular applications is eminently challenging. Traditional message-passing approaches have been making steady progress towards this goal; however, they suffer from complex implementation requirements. The use of a global address space greatly simplifies the programming task, but can degrade the performance of dynamically adapting computations. In this work, we examine two major classes of adaptive applications, under five competing programming methodologies and four leading parallel architectures. Results indicate that it is possible to achieve message-passing performance using shared-memory programming techniques by carefully following the same high-level strategies. Adaptive applications have computational workloads and communication patterns which change unpredictably at runtime, requiring dynamic load balancing to achieve scalable performance on parallel machines. Efficient parallel implementations of such adaptive applications are therefore a challenging task. This work examines the implementation of two typical adaptive applications, Dynamic Remeshing and N-Body, across various programming paradigms and architectural platforms. We compare several critical factors of the parallel code development, including performance, programmability, scalability, algorithmic development, and portability

  19. Improving quality of arterial spin labeling MR imaging at 3 Tesla with a 32-channel coil and parallel imaging.

    Science.gov (United States)

    Ferré, Jean-Christophe; Petr, Jan; Bannier, Elise; Barillot, Christian; Gauvrit, Jean-Yves

    2012-05-01

    To compare 12-channel and 32-channel phased-array coils and to determine the optimal parallel imaging (PI) technique and factor for brain perfusion imaging using pulsed arterial spin labeling (PASL) at 3 Tesla (T). Twenty-seven healthy volunteers underwent 10 different PASL perfusion PICORE Q2TIPS scans at 3T using 12-channel and 32-channel coils without PI and with GRAPPA or mSENSE using factor 2. PI factors 3 and 4 were used only with the 32-channel coil. Visual quality was assessed using four parameters. Quantitative analyses were performed using temporal noise, contrast-to-noise and signal-to-noise ratios (CNR, SNR). Compared with 12-channel acquisition, the scores for 32-channel acquisition were significantly higher for overall visual quality, lower for noise, and higher for SNR and CNR. With the 32-channel coil, the best artifact compromise was achieved with PI factor 2. Noise increased, and SNR and CNR decreased, with increasing PI factor; however, mSENSE 2 scores were not always significantly different from acquisition without PI. For PASL at 3T, the 32-channel coil provided better quality than the 12-channel coil. With the 32-channel coil, mSENSE 2 seemed to offer the best compromise for decreasing artifacts without significantly reducing SNR and CNR. Copyright © 2012 Wiley Periodicals, Inc.

  20. Massively parallel whole genome amplification for single-cell sequencing using droplet microfluidics.

    Science.gov (United States)

    Hosokawa, Masahito; Nishikawa, Yohei; Kogawa, Masato; Takeyama, Haruko

    2017-07-12

    Massively parallel single-cell genome sequencing is required to further understand genetic diversities in complex biological systems. Whole genome amplification (WGA) is the first step for single-cell sequencing, but its throughput and accuracy are insufficient in conventional reaction platforms. Here, we introduce single droplet multiple displacement amplification (sd-MDA), a method that enables massively parallel amplification of single cell genomes while maintaining sequence accuracy and specificity. Tens of thousands of single cells are compartmentalized in millions of picoliter droplets and then subjected to lysis and WGA by passive droplet fusion in microfluidic channels. Because single cells are isolated in compartments, their genomes are amplified to saturation without contamination. This enables the high-throughput acquisition of contamination-free and cell-specific sequence reads from single cells (21,000 single cells/h), resulting in enhanced sequence data quality compared to conventional methods. This method allowed WGA of both single bacterial cells and human cancer cells. The obtained sequencing coverage rivals that of conventional techniques, with superior sequence quality. In addition, we also demonstrate de novo assembly of uncultured soil bacteria and obtain draft genomes from single cell sequencing. sd-MDA is promising for flexible and scalable use in single-cell sequencing.

  1. Parallel processing for nonlinear dynamics simulations of structures including rotating bladed-disk assemblies

    Science.gov (United States)

    Hsieh, Shang-Hsien

    1993-01-01

    The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.

  2. Parallelizing the spectral transform method: A comparison of alternative parallel algorithms

    International Nuclear Information System (INIS)

    Foster, I.; Worley, P.H.

    1993-01-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on the sphere and is widely used in global climate modeling. In this paper, we outline different approaches to parallelizing the method and describe experiments that we are conducting to evaluate the efficiency of these approaches on parallel computers. The experiments are conducted using a testbed code that solves the nonlinear shallow water equations on a sphere, but are designed to permit evaluation in the context of a global model. They allow us to evaluate the relative merits of the approaches as a function of problem size and number of processors. The results of this study are guiding ongoing work on PCCM2, a parallel implementation of the Community Climate Model developed at the National Center for Atmospheric Research

  3. High speed, locally controlled data acquisition system for TFTR

    International Nuclear Information System (INIS)

    Feng, H.K.; Bradish, G.J.

    1983-01-01

    A high speed, locally controlled, data acquisition and transmission system has been developed by the CICADA (Central Instrumentation Control and Data Acquisition) Group for extracting certain time-critical data during a TFTR pulse and passing it to the control room, 1000 feet distant, to satisfy real-time requirements for frequently sampled variables. The system is designed to utilize any or all of the standard CAMAC (Computer Automated Measurement and Control) modules now employed on the CAMAC links for retrieval of the main body of data, but to operate them in a much faster manner than in a standard CAMAC system. To do this, a pre-programmable ROM sequencer is employed as a controller to transmit commands to the modules at intervals down to one microsecond, replacing the usual CAMAC dedicated computer and increasing the command rate by an order of magnitude over what could be sent down a Branch Highway. Data coming from any number of channels originating within a single CAMAC "crate" is then time-multiplexed and transmitted over a single conductor pair in bi-phase, at a 2.5 MHz bit rate, using Manchester coding techniques. Benefits gained from this approach include: reduction in the number of conductors required, elimination of line-to-line skew found in parallel transmission systems, and the capability of being transformer coupled or transmitted over a fiber optic cable to avoid safety hazards and ground loops. The main application of this system so far has been as the feedback path in the closed-loop control of currents through the Tokamak's field coils. The paper will treat the system's various applications.
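
    The bi-phase transmission mentioned above uses Manchester coding, in which every bit is sent as two half-bit symbols with a guaranteed mid-bit transition the receiver can use for clock recovery. A minimal sketch of the idea (using the IEEE 802.3 level convention; the framing is an assumption, not a detail from the paper):

        def manchester_encode(bits):
            """IEEE 802.3 convention: 0 -> (1, 0) high-to-low,
            1 -> (0, 1) low-to-high; the mid-bit edge carries the clock."""
            out = []
            for b in bits:
                out.extend((0, 1) if b else (1, 0))
            return out

        def manchester_decode(halves):
            """Inverse mapping; a missing mid-bit transition raises KeyError."""
            table = {(0, 1): 1, (1, 0): 0}
            return [table[pair] for pair in zip(halves[0::2], halves[1::2])]

        assert manchester_decode(manchester_encode([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]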

  4. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    The problem of parallel import is an urgent question today. The legalization of parallel import in Russia is expedient; this conclusion is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimize them.

  5. Hybrid parallel execution model for logic-based specification languages

    CERN Document Server

    Tsai, Jeffrey J P

    2001-01-01

    Parallel processing is a very important technique for improving the performance of various software development and maintenance activities. The purpose of this book is to introduce important techniques for the parallel execution of high-level specifications of software systems. These techniques are very useful for the construction, analysis, and transformation of reliable large-scale and complex software systems. Contents: Current Approaches; Overview of the New Approach; FRORL Requirements Specification Language and Its Decomposition; Rewriting and Data Dependency, Control Flow Analysis of a Lo…

  6. Post-Acquisition IT Integration

    DEFF Research Database (Denmark)

    Henningsson, Stefan; Yetton, Philip

    2013-01-01

    The extant research on post-acquisition IT integration analyzes how acquirers realize IT-based value in individual acquisitions. However, serial acquirers make 60% of acquisitions. These acquisitions are not isolated events, but are components in growth-by-acquisition programs. To explain how serial acquirers realize IT-based value, we develop three propositions on the sequential effects on post-acquisition IT integration in acquisition programs. Their combined explanation is that serial acquirers must have a growth-by-acquisition strategy that includes the capability to improve IT integration capabilities, to sustain high alignment across acquisitions and to maintain a scalable IT infrastructure with a flat or decreasing cost structure. We begin the process of validating the three propositions by investigating a longitudinal case study of a growth-by-acquisition program.

  7. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  8. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  9. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  10. Architecture of an acquisition system-multiprocessors

    International Nuclear Information System (INIS)

    Postec, H.

    1987-07-01

    To keep pace with the rapid growth in the number of parameters handled by nuclear detection systems, acquisition systems are becoming larger and must offer very good speed performance. At Ganil, four detection systems have been installed in the Nautilus reaction chamber, leading to experimental configurations with 700 parameters to process. Given the limitations of the present acquisition system, a device better suited to reading a large number of channels proved necessary. Functionalities already operating in other systems, and hardware already in use, were chosen; specific technical solutions were also developed to exploit the most recent techniques and to take into account the four-detection-system structure of the device [fr]

  11. Parallel Evolutionary Optimization for Neuromorphic Network Training

    Energy Technology Data Exchange (ETDEWEB)

    Schuman, Catherine D [ORNL; Disney, Adam [University of Tennessee (UT); Singh, Susheela [North Carolina State University (NCSU), Raleigh; Bruer, Grant [University of Tennessee (UT); Mitchell, John Parker [University of Tennessee (UT); Klibisz, Aleksander [University of Tennessee (UT); Plank, James [University of Tennessee (UT)

    2016-01-01

    One of the key impediments to the success of current neuromorphic computing architectures is the issue of how best to program them. Evolutionary optimization (EO) is one promising programming technique; in particular, its wide applicability makes it especially attractive for neuromorphic architectures, which can have many different characteristics. In this paper, we explore different facets of EO on a spiking neuromorphic computing model called DANNA. We focus on the performance of EO in the design of our DANNA simulator, and on how to structure EO on both multicore and massively parallel computing systems. We evaluate how our parallel methods impact the performance of EO on Titan, the U.S.'s largest open science supercomputer, and BOB, a Beowulf-style cluster of Raspberry Pis. We also focus on how to improve the EO by evaluating commonality in higher-performing neural networks, and present the results of a study that evaluates the EO performed by Titan.

  12. Parallel GPU implementation of iterative PCA algorithms.

    Science.gov (United States)

    Andrecut, M

    2009-11-01

    Principal component analysis (PCA) is a key statistical technique for multivariate data analysis. For large data sets, the common approach to PCA computation is based on the standard NIPALS-PCA algorithm, which unfortunately suffers from loss of orthogonality, and therefore its applicability is usually limited to the estimation of the first few components. Here we present an algorithm based on Gram-Schmidt orthogonalization (called GS-PCA), which eliminates this shortcoming of NIPALS-PCA. Also, we discuss the GPU (Graphics Processing Unit) parallel implementation of both NIPALS-PCA and GS-PCA algorithms. The numerical results show that the GPU parallel optimized versions, based on CUBLAS (NVIDIA), are substantially faster (up to 12 times) than the CPU optimized versions based on CBLAS (GNU Scientific Library).
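
    The key idea, re-orthogonalizing each new NIPALS score/loading vector against those already extracted, can be sketched in a few lines of NumPy. This is a CPU illustration of the principle only, not the paper's CUBLAS implementation; the start vector and convergence test are simplified assumptions:

        import numpy as np

        def gs_pca(X, k, iters=500, tol=1e-10):
            """First k principal components by NIPALS power iteration with
            Gram-Schmidt re-orthogonalization against earlier components."""
            R = X - X.mean(axis=0)              # centered data / running residual
            T = np.zeros((X.shape[0], k))       # scores
            P = np.zeros((X.shape[1], k))       # loadings
            for j in range(k):
                t = R[:, np.argmax(R.var(axis=0))].copy()   # start vector
                for _ in range(iters):
                    p = R.T @ t
                    p -= P[:, :j] @ (P[:, :j].T @ p)        # keep loadings orthogonal
                    p /= np.linalg.norm(p)
                    t_new = R @ p
                    t_new -= T[:, :j] @ (T[:, :j].T @ t_new)  # and the scores
                    if np.linalg.norm(t_new - t) < tol:
                        t = t_new
                        break
                    t = t_new
                T[:, j], P[:, j] = t, p
                R = R - np.outer(t, p)          # deflate the fitted component
            return T, P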

  13. Impact analysis on a massively parallel computer

    International Nuclear Information System (INIS)

    Zacharia, T.; Aramayo, G.A.

    1994-01-01

    Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper

  14. On Shaft Data Acquisition System (OSDAS)

    Science.gov (United States)

    Pedings, Marc; DeHart, Shawn; Formby, Jason; Naumann, Charles

    2012-01-01

    On Shaft Data Acquisition System (OSDAS) is a rugged, compact, multiple-channel data acquisition computer system that is designed to record data from instrumentation while operating under extreme rotational centrifugal or gravitational acceleration forces. This system, which was developed for the Heritage Fuel Air Turbine Test (HFATT) program, addresses the problem of recording multiple channels of high-sample-rate data on most any rotating test article by mounting the entire acquisition computer onboard with the turbine test article. With the limited availability of slip ring wires for power and communication, OSDAS utilizes its own resources to provide independent power and amplification for each instrument. Since OSDAS utilizes standard PC technology as well as shared code interfaces with the next-generation, real-time health monitoring system (SPARTAA Scalable Parallel Architecture for Real Time Analysis and Acquisition), this system could be expanded beyond its current capabilities, such as providing advanced health monitoring capabilities for the test article. High-conductor-count slip rings are expensive to purchase and maintain, yet only provide a limited number of conductors for routing instrumentation off the article and to a stationary data acquisition system. In addition to being limited to a small number of instruments, slip rings are prone to wear quickly, and introduce noise and other undesirable characteristics to the signal data. This led to the development of a system capable of recording high-density instrumentation, at high sample rates, on the test article itself, all while under extreme rotational stress. OSDAS is a fully functional PC-based system with 48 channels of 24-bit, high-sample-rate input channels, phase synchronized, with an onboard storage capacity of over 1/2-terabyte of solid-state storage. This recording system takes a novel approach to the problem of recording multiple channels of instrumentation, integrated with the test

  15. LEGS data acquisition facility

    International Nuclear Information System (INIS)

    LeVine, M.J.

    1985-01-01

    The data acquisition facility for the LEGS medium energy photonuclear beam line is composed of an auxiliary crate controller (ACC) acting as a front-end processor, loosely coupled to a time-sharing host computer based on a UNIX-like environment. The ACC services all real-time demands in the CAMAC crate: it responds to LAMs generated by data acquisition modules, to keyboard commands, and it refreshes the graphics display at frequent intervals. The host processor is needed only for printing histograms and recording event buffers on magnetic tape. The host also provides the environment for software development. The CAMAC crate is interfaced by a VERSAbus CAMAC branch driver

  16. Acquisition IT Integration

    DEFF Research Database (Denmark)

    Henningsson, Stefan; Øhrgaard, Christian

    2015-01-01

    …of temporary agency workers. Following an analytic induction approach, theoretically grounded in the resource-based view of the firm, we identify the complementary and supplementary roles consultants can assume in acquisition IT integration. Through case studies of three acquirers, we investigate how the acquirers appropriate the use of agency workers as part of their acquisition strategies. For the investigated acquirers, assigning roles to agency workers is contingent on balancing the needs of knowledge induction and knowledge retention, as well as experience richness and in-depth understanding. Composition…

  17. Disentangling value creation mechanism in cross-border acquisitions

    DEFF Research Database (Denmark)

    Wang, Daojuan; Sørensen, Olav Jull; Moini, Hamid

    2016-01-01

    This study investigates the value creation mechanism in cross-border acquisitions (CBAs) by employing a structural equation modeling technique and surveying 103 CBAs performed by Nordic firms. The results reveal that resource possession, resource picking, and resource utilization are three important strategic dimensions for realizing synergy and creating value in CBAs. Furthermore, mediation analysis shows that the two acquisition-based dynamic capabilities—value identification and resource reconfiguration—act as important mediators in how the joining firms' resource base impacts acquisition… in this study, is an important step forward in merger and acquisition (M&A) research. Moreover, numerous research findings offer tactical implications for international acquirers.

  18. Solving the Stokes problem on a massively parallel computer

    DEFF Research Database (Denmark)

    Axelsson, Owe; Barker, Vincent A.; Neytcheva, Maya

    2001-01-01

    …boundary value problem for each velocity component, are solved by the conjugate gradient method with a preconditioning based on the algebraic multi-level iteration (AMLI) technique. The velocity is found from the computed pressure. The method is optimal in the sense that the computational work is proportional to the number of unknowns. Further, it is designed to exploit a massively parallel computer with distributed memory architecture. Numerical experiments on a Cray T3E computer illustrate the parallel performance of the method.

  19. MINARET: Towards a time-dependent neutron transport parallel solver

    International Nuclear Information System (INIS)

    Baudron, A.M.; Lautard, J.J.; Maday, Y.; Mula, O.

    2013-01-01

    We present the newly developed time-dependent 3D multigroup discrete ordinates neutron transport solver that has recently been implemented in the MINARET code. The solver is the support for a study of computing acceleration techniques that involve parallel architectures. In this work, we focus on the parallelization of two of the variables involved in our equation: the angular directions and the time. This last variable has been parallelized by a (time) domain decomposition method called the parareal in time algorithm. (authors)
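
    The parareal algorithm alternates a cheap serial coarse propagator with accurate fine solves that are independent across time slices and can therefore run in parallel. A minimal serial sketch of the iteration (a generic textbook version, not the MINARET implementation; the propagators below are illustrative stand-ins):

        import numpy as np

        def parareal(f_coarse, f_fine, u0, t0, t1, n_slices, n_iter):
            """Parareal: the coarse propagator G corrects, slice by slice, the
            jumps left by fine solves F that may all run in parallel."""
            ts = np.linspace(t0, t1, n_slices + 1)
            U = [u0]
            for k in range(n_slices):                 # initial coarse sweep
                U.append(f_coarse(U[-1], ts[k], ts[k + 1]))
            for _ in range(n_iter):
                # these fine solves are the embarrassingly parallel part
                F = [f_fine(U[k], ts[k], ts[k + 1]) for k in range(n_slices)]
                G_old = [f_coarse(U[k], ts[k], ts[k + 1]) for k in range(n_slices)]
                U_new = [u0]
                for k in range(n_slices):             # serial correction sweep
                    g_new = f_coarse(U_new[-1], ts[k], ts[k + 1])
                    U_new.append(g_new + F[k] - G_old[k])
                U = U_new
            return U

        # toy decay equation u' = -u, forward Euler with 1 (coarse) vs 100 (fine) steps
        def euler(u, ta, tb, n):
            h = (tb - ta) / n
            for _ in range(n):
                u = u + h * (-1.0) * u
            return u

        U = parareal(lambda u, a, b: euler(u, a, b, 1),
                     lambda u, a, b: euler(u, a, b, 100),
                     1.0, 0.0, 2.0, n_slices=8, n_iter=3)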

  20. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
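
    The per-cycle rendezvous the abstract describes is easy to see in code: every rank must reach a collective operation before any rank can start the next cycle. A toy mpi4py sketch (the "transport" below is a random stand-in, not a real criticality calculation; mpi4py and an MPI installation are assumed):

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        rng = np.random.default_rng(rank)

        n_local = 100_000 // size          # histories per rank per cycle
        for cycle in range(50):
            # stand-in for transporting n_local histories on this rank
            k_local = rng.normal(1.0, 1.0 / n_local ** 0.5)
            # the rendezvous point: all ranks must arrive here each cycle so
            # the estimates can be combined and redistributed for the next one
            k_eff = comm.allreduce(k_local, op=MPI.SUM) / size
            if rank == 0 and cycle % 10 == 0:
                print(f"cycle {cycle}: k_eff = {k_eff:.5f}")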

  1. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  2. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.

  3. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system, using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
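
    The rsync-style comparison can be sketched as follows: hash fixed-size blocks of the node state and keep only the compressed blocks whose digests differ from the template. The block size, SHA-1, and zlib below are illustrative choices, not details from the patent, and the state is assumed to have the same length as the template:

        import hashlib
        import zlib

        BLOCK = 4096

        def digests(data):
            """SHA-1 digest of each fixed-size block."""
            return [hashlib.sha1(data[i:i + BLOCK]).digest()
                    for i in range(0, len(data), BLOCK)]

        def delta_checkpoint(state, template_digests):
            """Keep only the (compressed) blocks that differ from the template."""
            delta = {}
            for i in range(0, len(state), BLOCK):
                blk = state[i:i + BLOCK]
                k = i // BLOCK
                if hashlib.sha1(blk).digest() != template_digests[k]:
                    delta[k] = zlib.compress(blk)
            return delta

        def restore(template, delta):
            """Rebuild the node state from the template plus the stored delta."""
            blocks = [template[i:i + BLOCK] for i in range(0, len(template), BLOCK)]
            for k, comp in delta.items():
                blocks[k] = zlib.decompress(comp)
            return b"".join(blocks)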

  4. Parallel Computing for Brain Simulation.

    Science.gov (United States)

    Pastur-Romay, L A; Porto-Pazos, A B; Cedron, F; Pazos, A

    2017-01-01

    The human brain is the most complex system in the known universe and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet it is still not understood how and why most of these abilities are produced. For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have allowed the creation of the first simulations with a number of neurons similar to that of a human brain. This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital models, analog models and hybrid models. This review includes the current applications of these works, as well as future trends. It is focused on various works seeking advanced progress in Neuroscience and on others seeking new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  5. Scheduling Parallel Jobs Using Migration and Consolidation in the Cloud

    Directory of Open Access Journals (Sweden)

    Xiaocheng Liu

    2012-01-01

    An increasing number of high performance computing parallel applications leverage the power of the cloud for parallel processing. How to schedule parallel applications to improve the quality of service is the key to successfully hosting parallel applications in the cloud. The large scale of the cloud makes parallel job scheduling more complicated, as even the simple parallel job scheduling problem is NP-complete. In this paper, we propose a parallel job scheduling algorithm named MEASY. MEASY adopts migration and consolidation to enhance the most popular EASY scheduling algorithm. Our extensive experiments on well-known workloads show that our algorithm takes very good care of the quality of service. For two common parallel job scheduling objectives, our algorithm produces an up to 41.1% and an average of 23.1% improvement on the average response time, and an up to 82.9% and an average of 69.3% improvement on the average slowdown. Our algorithm is robust even when CPU usage estimates are inaccurate and migration costs are high. Our approach involves trivial modification of EASY and requires no additional technique; it is practical and effective in the cloud environment.
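
    For reference, the EASY algorithm that MEASY builds on starts jobs FCFS, gives the first blocked job a reservation, and backfills later jobs only if they cannot delay it. A simplified sketch (conservative backfill rule only; the migration and consolidation steps that are the paper's contribution are omitted):

        def easy_schedule(running, queue, free, now):
            """One pass of EASY backfilling (simplified, conservative variant).

            running: list of (end_time, nodes) for executing jobs
            queue:   FCFS list of (name, nodes, walltime) jobs
            free:    number of idle nodes
            Returns the names of jobs started now and the remaining free nodes.
            """
            started, queue = [], list(queue)
            while queue and queue[0][1] <= free:       # plain FCFS while jobs fit
                name, nodes, wall = queue.pop(0)
                running.append((now + wall, nodes))
                free -= nodes
                started.append(name)
            if not queue:
                return started, free
            # reservation (shadow time) for the first blocked job
            need, avail, shadow = queue[0][1], free, None
            for end, nodes in sorted(running):
                avail += nodes
                if avail >= need:
                    shadow = end
                    break
            # backfill: later jobs start only if they fit now and are
            # guaranteed to finish before the reservation kicks in
            for name, nodes, wall in queue[1:]:
                if nodes <= free and shadow is not None and now + wall <= shadow:
                    running.append((now + wall, nodes))
                    free -= nodes
                    started.append(name)
            return started, free

        # head job A (64 nodes) must wait until t=80; B ends before that, so it backfills
        started, free = easy_schedule([(50, 32), (80, 32)],
                                      [("A", 64, 100), ("B", 8, 30)], free=16, now=0)
        assert started == ["B"]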

  6. Future data acquisition at ISIS

    International Nuclear Information System (INIS)

    Pulford, W.C.A.; Quinton, S.P.H.; Johnson, M.W.; Norris, J.

    1989-01-01

    Data collection techniques at ISIS are fast reaching the point where the current computer systems will no longer be able to migrate the data to long-term storage, let alone enable their analysis at a speed compatible with continuous use of the ISIS instruments. The current data acquisition electronics (DAE 1) and migration path work effectively but have a number of inherent difficulties: (1) Seven instruments are equipped with VAX computers as their Front End Minicomputers (FEMs). Unfortunately these machines usually possess insufficient processor power to perform some of the more complex data reduction, which means that the raw data have to be networked to the HUB computer before analysis. (2) The size of the bulk store memory is restricted to 16 Mbytes by the 24-bit address field of Multibus. (3) The FEM's facilities for detecting and analyzing DAE errors are crude. It is clear that the most effective way to improve this situation is to reduce the data volume flowing between the DAE and the FEM and to provide facilities to monitor data acquisition within the DAE. For these purposes, processing power must be incorporated closer to the point of data collection. It has been decided to implement processing elements within DAE 2 (the next generation of DAE) in the form of intelligent memory boards. 6 figs., 1 tab

  7. Data acquisition system for MEGHA

    International Nuclear Information System (INIS)

    Chappell, S.P.G.; Hunt, R.A.; Smith, D.; Rae, W.D.M.; Clarke, N.M.; Freer, M.; Fulton, B.R.; Jagpal, S.S.; Singer, S.M.; Watson, D.L.

    2000-01-01

    A multi-channel data acquisition system has been commissioned for the Charissa 'MEGHA' detector array. It is designed to read multiparameter events where there are many potential channels (320) but where only a fraction of these are active in any typical event. Custom-built pre- and main amplifiers process the amplitude (energy) signal from each detector and the system records both amplitude and time of arrival for each signal within an event. The signal amplitude is converted to time using the standard Wilkinson technique and then combined with its time of arrival into a single time trace. These traces are converted by multi-hit TDCs, which only convert the active channels and thus reduce the processing load. Additional custom-built CAMAC modules organise the TDC output into a suitable form for storage and transmission to a network of processor terminals over standard ethernet. This paper presents a description of the data acquisition system from preamplifier through to final storage in a VME-based system and subsequent distribution to a network of Sun terminals over ethernet. The system performance is illustrated with results from heavy-ion elastic scattering recorded with position sensitive strip detectors

  8. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  9. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost-effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short-circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  10. Non-Stationary Rician Noise Estimation in Parallel MRI Using a Single Image: A Variance-Stabilizing Approach.

    Science.gov (United States)

    Pieciak, Tomasz; Aja-Fernandez, Santiago; Vegas-Sanchez-Ferrero, Gonzalo

    2017-10-01

    Parallel magnetic resonance imaging (pMRI) techniques have recently gained great importance in both the research and clinical communities, since they considerably accelerate the image acquisition process. However, the image reconstruction algorithms needed to correct the subsampling artifacts affect the nature of the noise, i.e., it becomes non-stationary. Some methods have been proposed in the literature to deal with non-stationary noise in pMRI. However, their performance depends on information not usually available, such as multiple acquisitions, receiver noise matrices, coil sensitivity profiles, reconstruction coefficients, or even biophysical models of the data. Besides, some methods show an undesirable granular pattern on the estimates as a side effect of local estimation. Finally, some methods make strong assumptions that hold only in the case of high signal-to-noise ratio (SNR), which limits their usability in real scenarios. We propose a new automatic noise estimation technique for non-stationary Rician noise that overcomes the aforementioned drawbacks. Its effectiveness is due to the derivation of a variance-stabilizing transformation designed to deal with any SNR. The method was compared to the main state-of-the-art methods in synthetic and real scenarios. Numerical results confirm the robustness of the method and its better performance for the whole range of SNRs.
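
    As a very rough stand-in for spatially varying noise-map estimation (not the variance-stabilizing estimator proposed in the paper, and Gaussian rather than Rician), one can high-pass the image and take a windowed standard deviation of the residual; the filter sizes below are arbitrary illustrative choices:

        import numpy as np
        from scipy.ndimage import median_filter, uniform_filter  # requires SciPy

        def noise_map(img, win=15):
            """Windowed std of a high-pass residual: subtract a 3x3 median to
            suppress the slowly varying signal, then compute local variance
            over win x win windows. Uncalibrated (the median filter removes
            part of the noise too); a generic illustration only."""
            res = img - median_filter(img, size=3)
            m = uniform_filter(res, size=win)
            v = uniform_filter(res * res, size=win) - m * m
            return np.sqrt(np.clip(v, 0.0, None))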

  11. A 32-channel photon counting module with embedded auto/cross-correlators for real-time parallel fluorescence correlation spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Gong, S.; Labanca, I.; Rech, I.; Ghioni, M. [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2014-10-15

    Fluorescence correlation spectroscopy (FCS) is a well-established technique to study binding interactions or the diffusion of fluorescently labeled biomolecules in vitro and in vivo. Fast FCS experiments require parallel data acquisition and analysis which can be achieved by exploiting a multi-channel Single Photon Avalanche Diode (SPAD) array and a corresponding multi-input correlator. This paper reports a 32-channel FPGA based correlator able to perform 32 auto/cross-correlations simultaneously over a lag-time ranging from 10 ns up to 150 ms. The correlator is included in a 32 × 1 SPAD array module, providing a compact and flexible instrument for high throughput FCS experiments. However, some inherent features of SPAD arrays, namely afterpulsing and optical crosstalk effects, may introduce distortions in the measurement of auto- and cross-correlation functions. We investigated these limitations to assess their impact on the module and evaluate possible workarounds.
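
    In software, the quantity such a correlator computes is the normalized intensity autocorrelation g2(tau) = <I(t)I(t+tau)> / <I>^2 on a roughly logarithmic lag grid. A direct (offline, non-multi-tau) NumPy sketch for binned photon counts, as an illustration of the output rather than the FPGA design:

        import numpy as np

        def g2(counts, lags):
            """Normalized autocorrelation at integer lags (in bins); every
            lag must be smaller than len(counts)."""
            x = np.asarray(counts, dtype=float)
            denom = x.mean() ** 2
            return np.array([(x * x).mean() / denom if lag == 0
                             else (x[:-lag] * x[lag:]).mean() / denom
                             for lag in lags])

        # quasi-logarithmic lag grid, as hardware multi-tau correlators use
        lags = np.unique(np.round(np.logspace(0, 3, 30)).astype(int))
        # g2_curve = g2(photon_counts, lags)   # photon_counts: 1D array of bin counts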

  12. A 32-channel photon counting module with embedded auto/cross-correlators for real-time parallel fluorescence correlation spectroscopy

    International Nuclear Information System (INIS)

    Gong, S.; Labanca, I.; Rech, I.; Ghioni, M.

    2014-01-01

    Fluorescence correlation spectroscopy (FCS) is a well-established technique to study binding interactions or the diffusion of fluorescently labeled biomolecules in vitro and in vivo. Fast FCS experiments require parallel data acquisition and analysis which can be achieved by exploiting a multi-channel Single Photon Avalanche Diode (SPAD) array and a corresponding multi-input correlator. This paper reports a 32-channel FPGA based correlator able to perform 32 auto/cross-correlations simultaneously over a lag-time ranging from 10 ns up to 150 ms. The correlator is included in a 32 × 1 SPAD array module, providing a compact and flexible instrument for high throughput FCS experiments. However, some inherent features of SPAD arrays, namely afterpulsing and optical crosstalk effects, may introduce distortions in the measurement of auto- and cross-correlation functions. We investigated these limitations to assess their impact on the module and evaluate possible workarounds

  13. ACQUISITIONS LIST, MAY 1966.

    Science.gov (United States)

    Harvard Univ., Cambridge, MA. Graduate School of Education.

    This acquisitions list is a bibliography of material on various aspects of education. Over 300 unannotated references are provided for documents dating mainly from 1960 to 1966. Books, journals, report materials, and unpublished manuscripts are listed under the following headings: (1) achievement, (2) adolescence, (3) child development, (4)…

  14. MAST data acquisition system

    International Nuclear Information System (INIS)

    Shibaev, S.; Counsell, G.; Cunningham, G.; Manhood, S.J.; Thomas-Davies, N.; Waterhouse, J.

    2006-01-01

    The data acquisition system of the Mega-Amp Spherical Tokamak (MAST) presently collects up to 400 MB of data in about 3000 data items per shot, and further fast growth is expected. Since the start of MAST operations in 1999, the system has changed dramatically. Though we continue to use legacy CAMAC hardware, newer VME-, PCI-, and PXI-based sub-systems now collect most of the data. All legacy software has been redesigned and new software has been developed. Last year a major system improvement was made: the replacement of the message distribution system. The new message system provides easy connection of any sub-system independently of its platform and serves as a framework for many new applications. A new data acquisition controller provides full control of common sub-systems, central error logging, and data acquisition alarms for the MAST plant. A number of new sub-systems using Linux and Windows OSs on VME, PCI, and PXI platforms have been developed. A new PXI unit has been designed as a base sub-system accommodating any type of data acquisition and control device. Several web applications for real-time MAST monitoring and data presentation have been developed

  15. Surviving mergers & acquisitions.

    Science.gov (United States)

    Dixon, Diane L

    2002-01-01

    Mergers and acquisitions are never easy to implement. The health care landscape is a minefield of failed mergers and uneasy alliances generating great turmoil and pain. But some mergers have been successful, creating health systems that benefit the communities they serve. Five prominent leaders offer their advice on minimizing the difficulties of M&As.

  16. General image acquisition parameters

    International Nuclear Information System (INIS)

    Teissier, J.M.; Lopez, F.M.; Langevin, J.F.

    1993-01-01

    The general parameters are of primary importance for achieving image quality in terms of spatial resolution and contrast. They also determine the acquisition time of each sequence. We describe them separately, before combining them in a decision tree that gathers the various options possible for diagnosis

  17. Decentralized Blended Acquisition

    NARCIS (Netherlands)

    Berkhout, A.J.

    2013-01-01

    The concept of blending and deblending is reviewed, making use of traditional and dispersed source arrays. The network concept of distributed blended acquisition is introduced. A million-trace robot system is proposed, illustrating that decentralization may bring about a revolution in the way we

  18. MPS Data Acquisition System

    International Nuclear Information System (INIS)

    Eiseman, S.E.; Miller, W.J.

    1975-01-01

    A description is given of the data acquisition system used with the multiparticle spectrometer facility at Brookhaven. Detailed information is provided on that part of the system which connects the detectors to the data handler; namely, the detector electronics, device controller, and device port optical isolator

  19. [Acquisition of arithmetic knowledge].

    Science.gov (United States)

    Fayol, Michel

    2008-01-01

    The focus of this paper is on contemporary research on the number, counting, and arithmetic competencies that emerge during infancy, the preschool years, and elementary school. I provide a brief overview of the evolution of children's conceptual knowledge of arithmetic, the acquisition and use of counting, and how they solve simple arithmetic problems (e.g., 4 + 3).

  20. Second Language Acquisition.

    Science.gov (United States)

    McLaughlin, Barry; Harrington, Michael

    1989-01-01

    A distinction is drawn between representational and processing models of second-language acquisition. The first approach is derived primarily from linguistics, the second from psychology. Both fields, it is argued, need to collaborate more fully, overcoming disciplinary narrowness in order to achieve more fruitful research. (GLR)

  1. Performance-Based Service Acquisition (PBSA) Study and Graduate Level Course Material

    National Research Council Canada - National Science Library

    Kennedy, Penny S; McClure, Joe T

    2005-01-01

    .... It is important to understand that the PBSA contract form involves acquisition strategies, methods, and techniques that define and communicate measurable performance expectations in terms of outcomes...

  2. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    As a relatively new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, a low self-weight/load ratio, good dynamic behavior, and easy control; hence its range of application keeps extending. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limitation of link lengths has been introduced. This paper analyzes the position workspace and the orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the lengths of the branches of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but will change its position.

  3. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
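
    The relations the activity teaches are R_series = R1 + R2 + ... and 1/R_parallel = 1/R1 + 1/R2 + ...; a two-line check in Python (a worked illustration, not part of the article):

        def series(*rs):
            """Equivalent resistance of resistors in series."""
            return sum(rs)

        def parallel(*rs):
            """Equivalent resistance of resistors in parallel."""
            return 1.0 / sum(1.0 / r for r in rs)

        assert series(100, 100) == 200          # ohms
        assert parallel(100, 100) == 50.0       # ohms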

  4. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example of the construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs.

  5. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  6. Event monitoring of parallel computations

    Directory of Open Access Journals (Sweden)

    Gruzlikov Alexander M.

    2015-06-01

    Full Text Available The paper considers the monitoring of parallel computations for detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences

  7. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.
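
    A toy model of the distributed-storage idea: each rank owns a contiguous block of cells plus a one-cell ghost layer, so that local work needs no remote lookups. This is a one-dimensional sketch of the owned/ghost split only, not the deal.II data structures.

    ```python
    def partition_with_ghosts(n_cells, n_ranks):
        """Toy distributed mesh: each rank owns a contiguous block of cells and
        additionally stores a one-cell ghost layer, so local assembly needs no
        remote lookups (a 1D sketch of the owned/ghost split, not deal.II)."""
        per, extra = divmod(n_cells, n_ranks)
        layout, start = [], 0
        for rank in range(n_ranks):
            size = per + (1 if rank < extra else 0)
            owned = list(range(start, start + size))
            ghost = [c for c in (start - 1, start + size) if 0 <= c < n_cells]
            layout.append({"rank": rank, "owned": owned, "ghost": ghost})
            start += size
        return layout

    for part in partition_with_ghosts(10, 3):
        print(part)   # e.g. rank 0 owns cells [0..3] and keeps cell 4 as a ghost
    ```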

  8. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.
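
    Of the three paradigms, the level-synchronous one is the easiest to sketch: the computation advances in supersteps, expanding the entire current frontier before the next level starts. The serial Python below shows the superstep structure only; the library runs the inner loop in parallel across graph partitions.

    ```python
    from collections import defaultdict

    def level_sync_bfs(adj, source):
        """Level-synchronous BFS: each superstep expands the whole current
        frontier before the next level starts. In a distributed setting the
        inner loop runs in parallel per partition with a barrier between
        supersteps; this serial version shows the structure only."""
        level, frontier, depth = {source: 0}, [source], 0
        while frontier:
            depth += 1
            next_frontier = []
            for u in frontier:            # parallel across partitions in stapl
                for v in adj[u]:
                    if v not in level:    # first visit assigns the level
                        level[v] = depth
                        next_frontier.append(v)
            frontier = next_frontier      # implicit barrier between levels
        return level

    adj = defaultdict(list, {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []})
    print(level_sync_bfs(adj, 0))         # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
    ```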

  9. Parallel beam dynamics simulation of linear accelerators

    International Nuclear Information System (INIS)

    Qiang, Ji; Ryne, Robert D.

    2002-01-01

    In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies
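
    The particle-in-cell cycle underlying such codes is deposit, field solve, gather, push. Below is a deliberately minimal 1D electrostatic version in normalized units (nearest-grid-point weighting, periodic FFT Poisson solve); IMPACT's 3D split-operator maps and space-charge solvers are far more elaborate.

    ```python
    import numpy as np

    # Normalized 1D electrostatic PIC cycle: deposit -> solve -> gather -> push.
    ng, box, n_p, dt = 64, 1.0, 10_000, 1e-3
    dx = box / ng
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, box, n_p)              # particle positions
    v = rng.normal(0.0, 0.1, n_p)               # particle velocities
    q_over_m = -1.0                             # electrons on a fixed ion background

    for step in range(10):
        cell = (x / dx).astype(int) % ng
        # 1) deposit charge on the grid (nearest-grid-point weighting)
        rho = np.bincount(cell, minlength=ng).astype(float)
        rho = rho / rho.mean() - 1.0            # neutralizing background
        # 2) solve the periodic Poisson equation with an FFT
        k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
        k[0] = 1.0                              # dummy; the mean mode is removed below
        phi_hat = np.fft.fft(rho) / k**2
        phi_hat[0] = 0.0
        E = np.fft.ifft(-1j * k * phi_hat).real  # E = -dphi/dx
        # 3) gather fields at the particles and push (leapfrog)
        v += q_over_m * E[cell] * dt
        x = (x + v * dt) % box
    print("rms velocity after 10 steps:", v.std())
    ```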

  10. Lattice gauge theory using parallel processors

    International Nuclear Information System (INIS)

    Lee, T.D.; Chou, K.C.; Zichichi, A.

    1987-01-01

    The book's contents include: Lattice Gauge Theory Lectures: Introduction and Current Fermion Simulations; Monte Carlo Algorithms for Lattice Gauge Theory; Specialized Computers for Lattice Gauge Theory; Lattice Gauge Theory at Finite Temperature: A Monte Carlo Study; Computational Method - An Elementary Introduction to the Langevin Equation, Present Status of Numerical Quantum Chromodynamics; Random Lattice Field Theory; The GF11 Processor and Compiler; and The APE Computer and First Physics Results; Columbia Supercomputer Project: Parallel Supercomputer for Lattice QCD; Statistical and Systematic Errors in Numerical Simulations; Monte Carlo Simulation for LGT and Programming Techniques on the Columbia Supercomputer; Food for Thought: Five Lectures on Lattice Gauge Theory

  11. Fast parallel algorithm for CT image reconstruction.

    Science.gov (United States)

    Flores, Liubov A; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2012-01-01

    In X-ray computed tomography (CT), X-rays are used to obtain the projection data needed to generate an image of the inside of an object. The image can be generated with different techniques. Iterative methods are more suitable for the reconstruction of images with high contrast and precision in noisy conditions and from a small number of projections. Their use may be important in portable scanners intended for emergency situations. However, in practice, these methods are not widely used due to the high computational cost of their implementation. In this work we analyze iterative parallel image reconstruction with the Portable, Extensible Toolkit for Scientific Computation (PETSc).
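
    One classic member of this family of iterative methods is the algebraic reconstruction technique (Kaczmarz), sketched below on a toy system; grouping the rows into blocks and computing the block updates concurrently is the kind of work a PETSc-based implementation distributes. This is an illustration, not the paper's code.

    ```python
    import numpy as np

    def art(A, b, n_sweeps=50, relax=1.0):
        """Algebraic reconstruction technique (Kaczmarz): repeatedly project
        the image estimate onto one ray equation at a time. Rows can be
        grouped into blocks whose updates run in parallel; this serial toy
        version shows the update rule only."""
        x = np.zeros(A.shape[1])
        row_norms = (A * A).sum(axis=1)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                if row_norms[i] > 0.0:
                    x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x

    # Tiny 2x2 "image" probed by four rays (its row sums and column sums).
    A = np.array([[1.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0]])
    truth = np.array([1.0, 2.0, 3.0, 4.0])
    print(art(A, A @ truth).round(2))   # recovers [1. 2. 3. 4.]
    ```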

  12. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

    Full Text Available This paper proposes an efficient algorithm to compress the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert space-filling curve, a mechanism widely used in multi-dimensional indexing.
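
    The core of tuple difference coding can be sketched in a few lines: consecutive sorted tuples differ little, so storing component-wise deltas yields small, highly compressible numbers. The paper's scheme adds block-level organization for cheap run-time decoding; this is only the basic idea.

    ```python
    def diff_encode(sorted_tuples):
        """Keep the first record in full, then store component-wise differences
        between consecutive sorted tuples; the small deltas compress well (one
        simple form of tuple difference coding)."""
        out = [tuple(sorted_tuples[0])]
        for prev, cur in zip(sorted_tuples, sorted_tuples[1:]):
            out.append(tuple(c - p for p, c in zip(prev, cur)))
        return out

    def diff_decode(encoded):
        records = [encoded[0]]
        for delta in encoded[1:]:
            records.append(tuple(p + d for p, d in zip(records[-1], delta)))
        return records

    cells = [(1, 1, 2), (1, 1, 5), (1, 2, 0), (2, 0, 1)]   # sorted cube cells
    enc = diff_encode(cells)
    assert diff_decode(enc) == cells                        # lossless round trip
    print(enc)   # [(1, 1, 2), (0, 0, 3), (0, 1, -5), (1, -2, 1)]
    ```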

  13. Data acquisition system for a proton imaging apparatus

    CERN Document Server

    Sipala, V; Bruzzi, M; Bucciolini, M; Candiano, G; Capineri, L; Cirrone, G A P; Civinini, C; Cuttone, G; Lo Presti, D; Marrazzo, L; Mazzaglia, E; Menichelli, D; Randazzo, N; Talamonti, C; Tesi, M; Valentini, S

    2009-01-01

    New developments in the proton-therapy field for cancer treatment led Italian physics researchers to build a proton imaging apparatus consisting of a silicon microstrip tracker to reconstruct the proton trajectories and a calorimeter to measure their residual energy. For clinical requirements, the detectors used and the data acquisition system should be able to sustain a proton rate of about 1 MHz. The tracker read-out, using ASICs developed by the collaboration, acquires the detector signals and sends the data in parallel to an FPGA. The YAG:Ce calorimeter also generates the global trigger. The data acquisition system and the results obtained in the calibration phase are presented and discussed.

  14. Combining Compile-Time and Run-Time Parallelization

    Directory of Open Access Journals (Sweden)

    Sungdo Moon

    1999-01-01

    Full Text Available This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1 they must combine high‐quality compile‐time analysis with low‐cost run‐time testing; and (2 they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler’s automatic parallelization system. We present results of measurements on programs from two benchmark suites – SPECFP95 and NAS sample benchmarks – which identify inherently parallel loops in these programs that are missed by the compiler. We characterize remaining parallelization opportunities, and find that most of the loops require run‐time testing, analysis of control flow, or some combination of the two. We present a new compile‐time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed to not only improve the results of compile‐time parallelization, but also to produce low‐cost, directed run‐time tests that allow the system to defer binding of parallelization until run‐time when safety cannot be proven statically. We call this approach predicated array data‐flow analysis. We augment array data‐flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data‐flow values. Predicated array data‐flow analysis allows the compiler to derive “optimistic” data‐flow values guarded by predicates; these predicates can be used to derive a run‐time test guaranteeing the safety of parallelization.
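
    The essence of deferring the decision to run time shows up already in the classic irregular update loop a[idx[i]] += b[i], which is parallel exactly when idx contains no repeated targets, a fact rarely provable statically. The sketch below stands in for compiler-emitted code and is not SUIF output.

    ```python
    import numpy as np

    def update(a, idx, b):
        """The loop `for i: a[idx[i]] += b[i]` is safe to parallelize exactly
        when idx has no repeated targets. A cheap run-time test selects the
        parallel (here: vectorized) or the sequential version, mimicking a
        compiler-emitted predicate guarding the parallelized loop."""
        if np.unique(idx).size == idx.size:   # run-time independence test
            a[idx] += b                       # safe data-parallel form
        else:
            for i in range(idx.size):         # fall back to sequential order
                a[idx[i]] += b[i]
        return a

    a = np.zeros(5)
    print(update(a.copy(), np.array([0, 2, 4]), np.ones(3)))  # parallel path
    print(update(a.copy(), np.array([1, 1, 3]), np.ones(3)))  # sequential path
    ```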

  15. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  16. New Parallel Algorithms for Landscape Evolution Model

    Science.gov (United States)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

    Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize because of the computation of the drainage area for each node, which requires a huge amount of communication if run in parallel. To overcome this difficulty, we developed two parallel algorithms for LEM with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition scheme, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massively parallel computing techniques, and numerical experiments show that they are both adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
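
    A serial reference version of the drainage-area accumulation, assuming single-receiver flow routing and unit cell area, makes clear why this step is communication-heavy once the nodes are spread across processes:

    ```python
    from collections import deque
    import numpy as np

    def drainage_area(receiver, cell_area=1.0):
        """Accumulate drainage area over a stream net in which every node
        drains to a single receiver (outlets drain to themselves). Serial
        reference version of the accumulation that the parallel algorithms
        restructure around a global reduction or a catchment-aware partition."""
        n = len(receiver)
        area = np.full(n, float(cell_area))
        n_donors = np.zeros(n, dtype=int)
        for i, r in enumerate(receiver):
            if r != i:
                n_donors[r] += 1
        ready = deque(i for i in range(n) if n_donors[i] == 0)  # headwaters
        while ready:
            i = ready.popleft()
            r = receiver[i]
            if r != i:
                area[r] += area[i]          # pass accumulated area downstream
                n_donors[r] -= 1
                if n_donors[r] == 0:        # all upstream inputs received
                    ready.append(r)
        return area

    # Chain 2 -> 1 -> 0 with a tributary 3 -> 1; node 0 is the outlet.
    receiver = [0, 0, 1, 1]
    print(drainage_area(receiver))   # [4. 3. 1. 1.]
    ```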

  17. Parallel computing in genomic research: advances and applications

    Directory of Open Access Journals (Sweden)

    Ocaña K

    2015-11-01

    Full Text Available Kary Ocaña (National Laboratory of Scientific Computing, Petrópolis, Rio de Janeiro) and Daniel de Oliveira (Institute of Computing, Fluminense Federal University, Niterói, Brazil). Abstract: Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article brings a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. Keywords: high-performance computing, genomic research, cloud computing, grid computing, cluster computing, parallel computing

  18. The JET fast central acquisition and trigger system

    International Nuclear Information System (INIS)

    Blackler, K.; Edwards, A.W.

    1994-01-01

    This paper describes a new data acquisition system at JET which uses Texas Instruments TMS320C40 parallel digital signal processors and the HELIOS parallel operating system to reduce the large amounts of experimental data produced by fast diagnostics. This unified system features a two-level trigger scheme which performs real-time activity detection together with asynchronous event classification and selection. This provides automated data reduction during an experiment. The system's application to future fusion machines with almost continuous operation is discussed.

  19. An inherently parallel method for solving discretized diffusion equations

    International Nuclear Information System (INIS)

    Eccleston, B.R.; Palmer, T.S.

    1999-01-01

    A Monte Carlo approach to solving linear systems of equations is being investigated in the context of the solution of discretized diffusion equations. While the technique was originally devised decades ago, changes in computer architectures (namely, massively parallel machines) have driven the authors to revisit this technique. There are a number of potential advantages to this approach: (1) Analog Monte Carlo techniques are inherently parallel; this is not necessarily true of today's more advanced linear equation solvers (multigrid, conjugate gradient, etc.); (2) Some forms of this technique are adaptive in that they allow the user to specify locations in the problem where resolution is of particular importance and to concentrate the work at those locations; and (3) These techniques permit the solution of very large systems of equations in that matrix elements need not be stored. The user could trade calculational speed for storage if elements of the matrix are calculated on the fly. The goal of this study is to compare the parallel performance of Monte Carlo linear solvers to that of a more traditional parallelized linear solver. The authors observe the linear speedup that they expect from the Monte Carlo algorithm, given that there is no domain decomposition to cause significant communication overhead. Overall, PETSc outperforms the Monte Carlo solver for the test problem. The PETSc parallel performance improves with larger numbers of unknowns for a given number of processors. Parallel performance of the Monte Carlo technique is independent of the size of the matrix and the number of processes. They are investigating modifications to the scheme to accommodate matrix problems with positive off-diagonal elements. They are also currently coding an on-the-fly version of the algorithm to investigate the solution of very large linear systems.
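
    The flavor of the approach can be shown for the fixed-point form x = H x + b: random walks sample the Neumann series term by term, every walk is independent, and the matrix never needs to be stored. A minimal sketch (uniform transition probabilities, fixed walk length) follows; production variants use importance sampling and absorption probabilities.

    ```python
    import numpy as np

    def mc_component(H, b, i, n_walks=5000, n_steps=20, seed=0):
        """Monte Carlo estimate of component i of the solution of x = H x + b,
        sampling the Neumann series sum_k (H^k b)_i with random walks (assumes
        the series converges). Every walk is independent, which is what makes
        the method inherently parallel and matrix-free: entries of H could be
        generated on the fly instead of stored."""
        rng = np.random.default_rng(seed)
        n = H.shape[0]
        total = 0.0
        for _ in range(n_walks):
            state, weight, acc = i, 1.0, b[i]       # k = 0 term
            for _ in range(n_steps):
                nxt = rng.integers(n)               # uniform transition ...
                weight *= n * H[state, nxt]         # ... made unbiased by its weight
                acc += weight * b[nxt]              # adds the k-th series term
                state = nxt
            total += acc
        return total / n_walks

    H = np.array([[0.1, 0.2], [0.3, 0.1]])          # spectral radius < 1
    b = np.array([1.0, 2.0])
    exact = np.linalg.solve(np.eye(2) - H, b)[0]
    print(exact, mc_component(H, b, 0))             # ~1.733 for both
    ```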

  20. Parallel fuzzy connected image segmentation on GPU.

    Science.gov (United States)

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K; Miller, Robert W

    2011-07-01

    Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implementation on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU. A dramatic improvement in speed for both tasks is achieved as a result. Our experiments based on three data sets of small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves a speed-up factor of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the implementation of the algorithm on CPU, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set.
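
    The data-parallel character of the first task is easy to see: one common affinity choice is a Gaussian of the intensity difference between adjacent pixels, and every pixel pair is independent. The numpy vectorization below plays the role of one GPU thread per pair; it illustrates the principle and is not the authors' CUDA kernel.

    ```python
    import numpy as np

    def fuzzy_affinity(img, sigma=10.0):
        """Fuzzy affinity between 4-adjacent pixels, here a Gaussian of the
        intensity difference (one common choice; the full FC formulation also
        uses object-feature terms). Every pixel pair is independent, which is
        why this step maps naturally onto one GPU thread per pair."""
        dh = np.exp(-((img[:, 1:] - img[:, :-1]) ** 2) / (2.0 * sigma**2))
        dv = np.exp(-((img[1:, :] - img[:-1, :]) ** 2) / (2.0 * sigma**2))
        return dh, dv

    img = np.array([[10.0, 11.0, 50.0],
                    [12.0, 10.0, 52.0],
                    [11.0, 12.0, 49.0]])
    h_aff, v_aff = fuzzy_affinity(img)
    print(h_aff.round(3))   # affinity collapses across the 10-vs-50 boundary
    ```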

  1. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  2. Parallel Artificial Intelligence Search Techniques for Real Time Applications.

    Science.gov (United States)

    1987-12-01

  3. Automatic parallelization of while-Loops using speculative execution

    International Nuclear Information System (INIS)

    Collard, J.F.

    1995-01-01

    Automatic parallelization of imperative sequential programs has focused on nests of for-loops. The most recent techniques consist in finding an affine mapping with respect to the loop indices that simultaneously captures the temporal and spatial properties of the parallelized program. Such a mapping is usually called a "space-time transformation." This work describes an extension of these techniques to while-loops using speculative execution. We show that space-time transformations are a good framework for summing up previous restructuring techniques for while-loops, such as pipelining. Moreover, we show that these transformations can be derived and applied automatically.
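
    For a while-loop whose body is side-effect free, speculation can be sketched directly: evaluate whole blocks of future iterations in parallel and commit results only up to the first exit. The predicate below is made up for the example; real systems must also roll back speculative state.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    from itertools import count

    def body_predicate(i):
        """While-loop exit test, assumed side-effect free so that iterations
        beyond the true exit point may be executed speculatively and discarded."""
        return (i * 37) % 1009 == 42    # some i < 1009 satisfies this

    def first_exit(block=256):
        """Parallelize `i = 0; while not body_predicate(i): i += 1` by
        speculatively evaluating whole blocks of future iterations at once and
        committing results only up to the first exit."""
        with ProcessPoolExecutor() as pool:
            for start in count(0, block):
                idxs = range(start, start + block)
                for i, hit in zip(idxs, pool.map(body_predicate, idxs, chunksize=32)):
                    if hit:
                        return i        # speculative work past i is thrown away

    if __name__ == "__main__":
        print(first_exit())
    ```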

  4. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... Log in or Register to get access to full text downloads. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  5. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both achieved coding and decoding time and the effectiveness of parallelization.

  6. Data acquisition for PLT

    International Nuclear Information System (INIS)

    Thompson, P.A.

    1975-01-01

    DA/PLT, the data acquisition system for the Princeton Large Torus (PLT) fusion research device, consists of a PDP-10 host computer, five satellite PDP-11s connected to the host by a special high-speed interface, miscellaneous other minicomputers and commercially supplied instruments, and much PPPL produced hardware. The software consists of the standard PDP-10 monitor with local modifications and the special systems and applications programs to customize the DA/PLT for the specific job of supporting data acquisition, analysis, display, and archiving, with concurrent off-line analysis, program development, and, in the background, general batch and timesharing. Some details of the over-all architecture are presented, along with a status report of the different PLT experiments being supported

  7. Knowledge Transfers following Acquisition

    DEFF Research Database (Denmark)

    Gammelgaard, Jens

    2001-01-01

    Prior relations between the acquiring firm and the target company pave the way for knowledge transfers subsequent to the acquisitions. One major reason is that through the market-based relations the two actors build up mutual trust and simultaneously they learn how to communicate. An empirical study of 54 Danish acquisitions taking place abroad from 1994 to 1998 demonstrated that when there was a high level of trust between the acquiring firm and the target firm before the take-over, then medium and strong tie-binding knowledge transfer mechanisms, such as project groups and job rotation, were used more intensively. Further, the degree of stickiness was significantly lower in the case of prior trust-based relations.

  8. Data acquisition instruments: Psychopharmacology

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, D.S. III

    1998-01-01

    This report contains the results of a Direct Assistance Project performed by Lockheed Martin Energy Systems, Inc., for Dr. K. O. Jobson. The purpose of the project was to perform preliminary analysis of the data acquisition instruments used in the field of psychiatry, with the goal of identifying commonalities of data and strategies for handling and using the data in the most advantageous fashion. Data acquisition instruments from 12 sources were provided by Dr. Jobson. Several commonalities were identified and a potentially useful data strategy is reported here. Analysis of the information collected for utility in performing diagnoses is recommended. In addition, further work is recommended to refine the commonalities into a directly useful computer systems structure.

  9. Parallel processing method for high-speed real time digital pulse processing for gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Fernandes, A.M.; Pereira, R.C.; Sousa, J.; Neto, A.; Carvalho, P.; Batista, A.J.N.; Carvalho, B.B.; Varandas, C.A.F.; Tardocchi, M.; Gorini, G.

    2010-01-01

    A new data acquisition (DAQ) system was developed to fulfil the requirements of the gamma-ray spectrometer (GRS) JET-EP2 (Joint European Torus enhancement project 2), providing high-resolution spectroscopy at very high count rate (up to a few MHz). The system is based on the Advanced Telecommunications Computing Architecture (ATCA) and includes a transient recorder (TR) module with 8 channels of 14-bit resolution at a 400 MSamples/s (MSPS) sampling rate, 4 GB of local memory, and 2 field programmable gate arrays (FPGAs) able to perform real-time algorithms for data reduction and digital pulse processing. Although at 400 MSPS only fast programmable devices such as FPGAs can be used for both data processing and data transfer, FPGA resources also present speed limitations at some specific tasks, leading to unavoidable data loss when demanding algorithms are applied. To overcome this problem, and foreseeing an increase in algorithm complexity, a new digital parallel filter was developed, aiming to perform real-time pulse processing in the FPGAs of the TR module at the presented sampling rate. The filter is based on the conventional digital time-invariant trapezoidal shaper operating on parallelized data while performing pulse height analysis (PHA) and pile-up rejection (PUR). The incoming sampled data are successively parallelized and fed into the processing algorithm block at one fourth of the sampling rate. The subsequent data processing and data transfer are also performed at one fourth of the sampling rate. The algorithm based on the data parallelization technique was implemented and tested at JET facilities, where a spectrum was obtained. In light of the observed results, the PHA algorithm will be improved by implementing pulse pile-up discrimination.
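
    The shaper itself is simple to state: the difference of two box-car averages turns a step input into a trapezoid whose flat-top height is the pulse amplitude. The serial numpy sketch below shows the filter only, with arbitrary parameters; the novelty in the paper is evaluating four interleaved sample streams concurrently so each runs at one fourth of the 400 MSPS rate.

    ```python
    import numpy as np

    def trapezoidal_shaper(v, rise=8, flat=4):
        """Time-invariant trapezoidal shaper: the difference of two box-car
        averages of length `rise`, separated by `flat` samples, turns a step
        input into a trapezoid whose flat-top height is the pulse height.
        The FPGA filter applies the same recursion to four interleaved
        sample streams, each at one fourth of the full sampling rate."""
        kernel = np.zeros(2 * rise + flat)
        kernel[:rise] = 1.0 / rise              # leading average
        kernel[rise + flat:] = -1.0 / rise      # delayed, subtracted average
        return np.convolve(v, kernel)[: len(v)]

    v = np.zeros(64)
    v[20:] = 100.0                              # idealized detector step of height 100
    s = trapezoidal_shaper(v)
    print(round(s.max(), 3))                    # ~100.0: the PHA estimate
    ```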

  10. First Language Acquisition and Teaching

    Science.gov (United States)

    Cruz-Ferreira, Madalena

    2011-01-01

    "First language acquisition" commonly means the acquisition of a single language in childhood, regardless of the number of languages in a child's natural environment. Language acquisition is variously viewed as predetermined, wondrous, a source of concern, and as developing through formal processes. "First language teaching" concerns schooling in…

  11. The Performance of an Object-Oriented, Parallel Operating System

    Directory of Open Access Journals (Sweden)

    David R. Kohr, Jr.

    1994-01-01

    Full Text Available The nascent and rapidly evolving state of parallel systems often leaves parallel application developers at the mercy of inefficient, inflexible operating system software. Given the relatively primitive state of parallel systems software, maximizing the performance of parallel applications not only requires judicious tuning of the application software, but occasionally, the replacement of specific system software modules with others that can more readily respond to the imposed pattern of resource demands. To assess the feasibility of application and performance tuning via malleable system software and to understand the performance penalties for detailed operating system performance data capture, we describe a set of performance instrumentation techniques for parallel, object-oriented operating systems and a set of performance experiments with Choices, an experimental, object-oriented operating system designed for use with parallel systems. These performance experiments show that (a) the performance overhead for operating system data capture is modest, (b) the penalty for malleable, object-oriented operating systems is negligible, but (c) techniques are needed to strictly enforce adherence of implementation to design if operating system modules are to be replaced.

  12. Multiprocessor data acquisition system

    International Nuclear Information System (INIS)

    Haumann, J.R.; Crawford, R.K.

    1987-01-01

    A multiprocessor data acquisition system has been built to replace the single processor systems at the Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory. The multiprocessor system was needed to accommodate the higher data rates at IPNS brought about by improvements in the source and changes in instrument configurations. This paper describes the hardware configuration of the system and the method of task sharing and compares results to the single processor system

  13. Implementing acquisition strategies

    International Nuclear Information System (INIS)

    Montgomery, G. K.

    1997-01-01

    The objective of this paper is to address some of the strategies necessary to effect a successful asset or corporate acquisition. Understanding the corporate objective, the full potential of the asset, the specific strategies to be employed, the value of time, and most importantly the interaction of all these are crucial, for missed steps are likely to result in missed opportunities. The amount of factual information that can be obtained and utilized in a timely fashion is the largest single hurdle to the capture of value in the asset or corporate acquisition. Fact, familiarity and experience are key in this context. The importance of the due diligence process prior to title or data transfer cannot be overemphasized. Some of the most important assets acquired in a merger may be the people. To maximize effectiveness, it is essential to merge both existing staff and those that came with the new acquisition as soon as possible. By thinking together as a unit, knowledge and experience can be applied to realize the potential of the asset. Hence team building is one of the challenges, doing it quickly is usually the most effective. Developing new directions for the new enlarged company by combining the strengths of the old and the new creates more value, as well as a more efficient operation. Equally important to maximizing the potential of the new acquisition is the maintenance of the momentum generated by the need to grow that gave the impetus to acquiring new assets in the first place. In brief, the right mix of vision, facts and perceptions, quick enactment of the post-close strategies and keeping the momentum alive, are the principal ingredients of a focused strategy

  14. Internationalize Mergers and Acquisitions

    OpenAIRE

    Zhou, Lili

    2017-01-01

    As globalization proceeds, an increasing number of companies use mergers and acquisitions as a tool to achieve company growth in the international business world. The purpose of this thesis is to investigate the process of an international M&A and analyze the factors leading to success. The research started by reviewing the relevant academic theory. The important aspects of both the pre-M&A phase and the post-M&A phase have been studied in depth. Because of the complexity in international...

  15. Data Acquisition System

    International Nuclear Information System (INIS)

    Watwood, D.; Beatty, J.

    1991-01-01

    The Data Acquisition System (DAS) is comprised of a Hewlett-Packard (HP) model 9816, Series 200 Computer System with the appropriate software to acquire, control, and archive data from a Data Acquisition/Control Unit, models HP3497A and HP3498A. The primary storage medium is an HP9153 16-megabyte hard disc. The data is backed-up on three floppy discs. One floppy disc drive is contained in the HP9153 chassis; the other two comprise an HP9122 dual disc drive. An HP82906A line printer supplies hard copy backup. A block diagram of the hardware setup is shown. The HP3497A/3498A Data Acquisition/Control Units read each input channel and transmit the raw voltage reading to the HP9816 CPU via the HPIB bus. The HP9816 converts this voltage to the appropriate engineering units using the calibration curves for the sensor being read. The HP9816 archives both the raw and processed data along with the time and the readings were taken to hard and floppy discs. The processed values and reading time are printed on the line printer. This system is designed to accommodate several types of sensors; each type is discussed in the following sections

  16. Complexity in language acquisition.

    Science.gov (United States)

    Clark, Alexander; Lappin, Shalom

    2013-01-01

    Learning theory has frequently been applied to language acquisition, but discussion has largely focused on information theoretic problems, in particular on the absence of direct negative evidence. Such arguments typically neglect the probabilistic nature of cognition and learning in general. We argue first that these arguments, and analyses based on them, suffer from a major flaw: they systematically conflate the hypothesis class and the learnable concept class. As a result, they do not allow one to draw significant conclusions about the learner. Second, we claim that the real problem for language learning is the computational complexity of constructing a hypothesis from input data. Studying this problem allows for a more direct approach to the object of study, the language acquisition device, rather than the learnable class of languages, which is epiphenomenal and possibly hard to characterize. The learnability results informed by complexity studies are much more insightful. They strongly suggest that target grammars need to be objective, in the sense that the primitive elements of these grammars are based on objectively definable properties of the language itself. These considerations support the view that language acquisition proceeds primarily through data-driven learning of some form. Copyright © 2013 Cognitive Science Society, Inc.

  17. MDSplus data acquisition system

    International Nuclear Information System (INIS)

    Stillerman, J.A.; Fredian, T.W.; Klare, K.; Manduchi, G.

    1997-01-01

    MDSplus, a tree based, distributed data acquisition system, was developed in collaboration with the ZTH Group at Los Alamos National Lab and the RFX Group at CNR in Padua, Italy. It is currently in use at MIT, RFX in Padua, TCV at EPFL in Lausanne, and KBSI in South Korea. MDSplus is made up of a set of X/motif based tools for data acquisition and display, as well as diagnostic configuration and management. It is based on a hierarchical experiment description which completely describes the data acquisition and analysis tasks and contains the results from these operations. These tools were designed to operate in a distributed, client/server environment with multiple concurrent readers and writers to the data store. While usually used over a Local Area Network, these tools can be used over the Internet to provide access for remote diagnosticians and even machine operators. An interface to a relational database is provided for storage and management of processed data. IDL is used as the primary data analysis and visualization tool. IDL is a registered trademark of Research Systems Inc. copyright 1996 American Institute of Physics

  18. Frames of reference in spatial language acquisition.

    Science.gov (United States)

    Shusterman, Anna; Li, Peggy

    2016-08-01

    Languages differ in how they encode spatial frames of reference. It is unknown how children acquire the particular frame-of-reference terms in their language (e.g., left/right, north/south). The present paper uses a word-learning paradigm to investigate 4-year-old English-speaking children's acquisition of such terms. In Part I, with five experiments, we contrasted children's acquisition of novel word pairs meaning left-right and north-south to examine their initial hypotheses and the relative ease of learning the meanings of these terms. Children interpreted ambiguous spatial terms as having environment-based meanings akin to north and south, and they readily learned and generalized north-south meanings. These studies provide the first direct evidence that children invoke geocentric representations in spatial language acquisition. However, the studies leave unanswered how children ultimately acquire "left" and "right." In Part II, with three more experiments, we investigated why children struggle to master body-based frame-of-reference words. Children successfully learned "left" and "right" when the novel words were systematically introduced on their own bodies and extended these words to novel (intrinsic and relative) uses; however, they had difficulty learning to talk about the left and right sides of a doll. This difficulty was paralleled in identifying the left and right sides of the doll in a non-linguistic memory task. In contrast, children had no difficulties learning to label the front and back sides of a doll. These studies begin to paint a detailed account of the acquisition of spatial terms in English, and provide insights into the origins of diverse spatial reference frames in the world's languages. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  20. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The increased processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, indicating under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
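
    The kernel of the method is the rank-two update of the inverse-Hessian estimate; its matrix-vector and outer products are the natural candidates for distribution over transputers. A serial sketch on a small quadratic, assuming exact line searches, looks like this:

    ```python
    import numpy as np

    def dfp_update(H, y, s):
        """One Davidon-Fletcher-Powell update of the inverse-Hessian estimate:
        H+ = H + s s^T/(s^T y) - (H y)(H y)^T/(y^T H y). The rank-one terms
        and the matrix-vector products inside them are the independent pieces
        of work a transputer network can compute concurrently."""
        Hy = H @ y
        return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

    # Minimize f(x) = 0.5 x^T A x - b^T x with exact line searches.
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    b = np.array([1.0, 2.0])
    x, H = np.zeros(2), np.eye(2)
    for _ in range(10):
        g = A @ x - b                       # gradient of the quadratic
        if np.linalg.norm(g) < 1e-12:
            break
        d = -H @ g                          # quasi-Newton search direction
        alpha = -(g @ d) / (d @ A @ d)      # exact step length on a quadratic
        s = alpha * d
        x = x + s
        H = dfp_update(H, (A @ x - b) - g, s)
    print(x, np.linalg.solve(A, b))         # both give [0. 2.]
    ```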

  1. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  2. High spatial and temporal resolution retrospective cine cardiovascular magnetic resonance from shortened free breathing real-time acquisitions.

    Science.gov (United States)

    Xue, Hui; Kellman, Peter; Larocca, Gina; Arai, Andrew E; Hansen, Michael S

    2013-11-14

    Cine cardiovascular magnetic resonance (CMR) is challenging in patients who cannot perform repeated breath holds. Real-time, free-breathing acquisition is an alternative, but image quality is typically inferior. There is a clinical need for techniques that achieve similar image quality to the segmented cine using a free-breathing acquisition. Previously, high-quality retrospectively gated cine images have been reconstructed from real-time acquisitions using parallel imaging and motion correction. These methods had limited clinical applicability due to lengthy acquisitions, and volumetric measurements obtained with such methods have not previously been evaluated systematically. This study introduces a new retrospective reconstruction scheme for real-time cine imaging which aims to shorten the required acquisition. A real-time acquisition of 16-20 s per acquired slice was fed into a retrospective cine reconstruction algorithm, which employed non-rigid registration to remove respiratory motion and SPIRiT non-linear reconstruction with temporal regularization to fill in missing data. The algorithm was used to reconstruct cine loops with high spatial (1.3-1.8 × 1.8-2.1 mm²) and temporal resolution (retrospectively gated, 30 cardiac phases, temporal resolution 34.3 ± 9.1 ms). Validation was performed in 15 healthy volunteers using two different acquisition resolutions (256 × 144/192 × 128 matrix sizes). For each subject, 9 to 12 short-axis and 3 long-axis slices were imaged with both segmented and real-time acquisitions. The retrospectively reconstructed real-time cine images were compared to a traditional segmented breath-held acquisition in terms of image quality scores. Image quality scoring was performed by two experts using a scale between 1 and 5 (poor to good). For every subject, LAX and three SAX slices were selected and reviewed in random order. The reviewers were blinded to the reconstruction approach and acquisition protocols and

  3. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    Science.gov (United States)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  4. Non-Cartesian Parallel Imaging Reconstruction of Undersampled IDEAL Spiral 13C CSI Data

    DEFF Research Database (Denmark)

    Hansen, Rie Beck; Hanson, Lars G.; Ardenkjær-Larsen, Jan Henrik

    ... scan times based on spatial information inherent to each coil element. In this work, we explored the combination of non-cartesian parallel imaging reconstruction and spatially undersampled IDEAL spiral CSI acquisition for efficient encoding of multiple chemical shifts within a large FOV with high...

  5. D0 experiment: its trigger, data acquisition, and computers

    International Nuclear Information System (INIS)

    Cutts, D.; Zeller, R.; Schamberger, D.; Van Berg, R.

    1984-05-01

    The new collider facility to be built at Fermilab's Tevatron-I D0 region is described. The data acquisition requirements are discussed, as well as the hardware and software triggers designed to meet these needs. An array of MicroVAX computers running VAXELN will filter in parallel (a complete event in each microcomputer) and transmit accepted events via Ethernet to a host. This system, together with its subsequent offline needs, is briefly presented

  6. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  7. Out-of-order parallel discrete event simulation for electronic system-level design

    CERN Document Server

    Chen, Weiwei

    2014-01-01

    This book offers readers a set of tools and techniques for facing the challenges of parallelization in the design of embedded systems. It provides an advanced parallel simulation infrastructure for efficient and effective system-level model validation and development so as to build better products in less time. Since parallel discrete event simulation (PDES) has the potential to exploit the underlying parallel computational capability in today's multi-core simulation hosts, the author begins by reviewing the parallelization of discrete event simulation, identifying...

  8. Simulation and modeling of data acquisition systems for future high energy physics experiments

    International Nuclear Information System (INIS)

    Booth, A.; Black, D.; Walsh, D.; Bowden, M.; Barsotti, E.

    1990-01-01

    With the ever-increasing complexity of detectors and their associated data acquisition (DAQ) systems, it is important to bring together a set of tools to enable system designers, both hardware and software, to understand the behavioral aspects of the system as a whole, as well as the interaction between different functional units within the system. For complex systems, human intuition is inadequate since there are simply too many variables for system designers to begin to predict how varying any subset of them affects the total system. On the other hand, exact analysis, even to the extent of investing in disposable hardware prototypes, is much too time consuming and costly. Simulation bridges the gap between physical intuition and exact analysis by providing a learning vehicle in which the effects of varying many parameters can be analyzed and understood. Simulation techniques are being used in the development of the Scalable Parallel Open Architecture Data Acquisition System at Fermilab. This paper describes the work undertaken at Fermilab in which several sophisticated tools have been brought together to provide an integrated systems engineering environment specifically aimed at designing DAQ systems. Also presented are results of simulation experiments in which the effects of varying trigger rates, event sizes and event distribution over processors are clearly seen in terms of throughput and buffer usage in an event-building switch.

  9. Simulation and modeling of data acquisition systems for future high energy physics experiments

    International Nuclear Information System (INIS)

    Booth, A.; Black, D.; Walsh, D.; Bowden, M.; Barsotti, E.

    1991-01-01

    With the ever-increasing complexity of detectors and their associated data acquisition (DAQ) systems, it is important to bring together a set of tools to enable system designers, both hardware and software, to understand the behavioral aspects of the system as a whole, as well as the interaction between different functional units within the system. For complex systems, human intuition is inadequate since there are simply too many variables for system designers to begin to predict how varying any subset of them affects the total system. On the other hand, exact analysis, even to the extent of investing in disposable hardware prototypes, is much too time consuming and costly. Simulation bridges the gap between physical intuition and exact analysis by providing a learning vehicle in which the effects of varying many parameters can be analyzed and understood. Simulation techniques are being used in the development of the Scalable Parallel Open Architecture Data Acquisition System at Fermilab, in which several sophisticated tools have been brought together to provide an integrated systems engineering environment specifically aimed at designing DAQ systems. Also presented are results of simulation experiments in which the effects of varying trigger rates, event sizes and event distribution over processors are clearly seen in terms of throughput and buffer usage in an event-building switch.
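
    The flavor of such studies can be captured with a toy model: Poisson triggers, a fixed per-event build time, and a finite buffer that drops triggers when full. Sweeping the trigger rate shows the qualitative throughput and buffer-usage behavior; the switch models in the paper are of course far more detailed.

    ```python
    import numpy as np

    def simulate_daq(trigger_rate, build_time, buffer_slots, n_events=50_000, seed=0):
        """Toy discrete-event model of an event builder: Poisson triggers, a
        fixed per-event build time, and a finite buffer that drops triggers
        when full (illustrative only)."""
        rng = np.random.default_rng(seed)
        t = 0.0
        last_departure = 0.0
        in_buffer = []                  # departure times of buffered events
        dropped = 0
        peak = 0
        for _ in range(n_events):
            t += rng.exponential(1.0 / trigger_rate)
            in_buffer = [d for d in in_buffer if d > t]   # finished events leave
            if len(in_buffer) >= buffer_slots:
                dropped += 1                              # buffer full: event lost
                continue
            last_departure = max(t, last_departure) + build_time
            in_buffer.append(last_departure)
            peak = max(peak, len(in_buffer))
        return dropped / n_events, peak

    for rate in (0.5, 0.9, 1.1):        # trigger rate in units of 1 / build_time
        loss, peak = simulate_daq(rate, 1.0, buffer_slots=64)
        print(f"rate {rate}: loss {loss:.1%}, peak buffer occupancy {peak}")
    ```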

  10. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
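
    The underlying mapping problem is easy to state: split a chain of m module weights into n contiguous groups so that the maximum group load is minimized. A plain O(n·m²) dynamic program is sketched below; the paper's contribution is asymptotically faster serial and parallel algorithms for this and related problems.

    ```python
    def map_chain(weights, n_procs):
        """Map a pipeline of m modules onto n processors as contiguous groups,
        minimizing the bottleneck (maximum per-processor load). Plain dynamic
        program for illustration, not the paper's improved algorithm."""
        m = len(weights)
        prefix = [0.0]
        for w in weights:
            prefix.append(prefix[-1] + w)
        INF = float("inf")
        # best[j][p] = minimal bottleneck for the first j modules on p processors
        best = [[INF] * (n_procs + 1) for _ in range(m + 1)]
        best[0][0] = 0.0
        for p in range(1, n_procs + 1):
            for j in range(1, m + 1):
                for i in range(p - 1, j):          # last group is modules i..j-1
                    cand = max(best[i][p - 1], prefix[j] - prefix[i])
                    best[j][p] = min(best[j][p], cand)
        return best[m][n_procs]

    print(map_chain([4, 2, 7, 1, 3, 5], 3))   # 8.0: groups [4,2] [7,1] [3,5]
    ```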

  11. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  12. Parallel algorithms for online trackfinding at PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Bianchi, Ludovico; Ritman, James; Stockmanns, Tobias [IKP, Forschungszentrum Juelich GmbH (Germany); Herten, Andreas [JSC, Forschungszentrum Juelich GmbH (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

    The PANDA experiment, one of the four scientific pillars of the FAIR facility currently in construction in Darmstadt, is a next-generation particle detector that will study collisions of antiprotons with beam momenta of 1.5-15 GeV/c on a fixed proton target. Because of the broad physics scope and the similar signature of signal and background events, PANDA's strategy for data acquisition is to continuously record data from the whole detector and use this global information to perform online event reconstruction and filtering. A real-time rejection factor of up to 1000 must be achieved to match the incoming data rate for offline storage, making all components of the data processing system computationally very challenging. Online particle track identification and reconstruction is an essential step, since track information is used as input in all following phases. Online tracking algorithms must ensure a delicate balance between high tracking efficiency and quality, and minimal computational footprint. For this reason, a massively parallel solution exploiting multiple Graphic Processing Units (GPUs) is under investigation. The talk presents the core concepts of the algorithms being developed for primary trackfinding, along with details of their implementation on GPUs.

  13. Xyce parallel electronic simulator : users' guide.

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is

  14. Rubus: A compiler for seamless and extensible parallelism.

    Directory of Open Access Journals (Sweden)

    Muhammad Adnan

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, traditional programming languages, designed for machines with single-core CPUs, cannot efficiently exploit the parallelism available on multi-core processors. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages such as CUDA and OpenCL have been introduced, which can be used to write parallel code. Their main shortcoming is that the programmer must manually specify all the complex details needed to parallelize code across multiple cores, so code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which consumes significant time and resources; the amount of parallelism achieved is thus proportional to the programmer's skill and the time spent on code optimization. This paper proposes a new open-source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from specifying low-level details manually: it analyses a sequential program and transforms it into a parallel program automatically, without any user intervention. This achieves large speedups and better utilization of the underlying hardware without requiring expertise in parallel programming. Across five different benchmarks, Rubus achieved an average speedup of 34.54 times over Java on a basic GPU with only 96 cores; for a matrix multiplication benchmark, the average execution speedup reached 84 times.
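
    A related compiler-driven approach, in which the programmer marks a function and the compiler maps the loop iterations onto the available cores, can be sketched with Numba's parallelizing JIT for Python. This is an analogy to Rubus' automatic transformation, not the Rubus toolchain itself.

```python
import numpy as np
from numba import njit, prange

# The compiler parallelizes the loop, not the programmer: Numba's JIT
# schedules the prange iterations across all available cores.
@njit(parallel=True)
def saxpy(a, x, y):
    out = np.empty_like(x)
    for i in prange(x.size):          # iterations distributed over cores
        out[i] = a * x[i] + y[i]
    return out

x = np.arange(1_000_000, dtype=np.float64)
y = np.ones_like(x)
print(saxpy(2.0, x, y)[:5])           # -> [1. 3. 5. 7. 9.]
```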

  15. Data acquisition and real-time bolometer tomography using LabVIEW RT

    International Nuclear Information System (INIS)

    Giannone, L.; Eich, T.; Fuchs, J.C.; Ravindran, M.; Ruan, Q.; Wenzel, L.; Cerna, M.; Concezzi, S.

    2011-01-01

    The currently available multi-core PCI Express systems running LabVIEW RT (real-time), equipped with FPGA cards for data acquisition and real-time parallel signal processing, greatly shorten the design and implementation cycles of large-scale, real-time data acquisition and control systems. This paper details a data acquisition and real-time tomography system using LabVIEW RT for the bolometer diagnostic on the ASDEX Upgrade tokamak (Max Planck Institute for Plasma Physics, Garching, Germany). The transformation matrix for tomography is pre-computed based on the geometry of distributed radiation sources and sensors. A parallelized iterative algorithm is adapted to solve a constrained linear system for the reconstruction of the radiated power density. Real-time bolometer tomography is performed with LabVIEW RT. Using multi-core machines to execute the parallelized algorithm, a cycle time well below 1 ms is reached.
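
    The iterative scheme itself is not spelled out in the record; the sketch below shows one standard choice, a SIRT-style iteration with a non-negativity constraint, in NumPy with invented dimensions. Each pixel update is independent of the others, which is exactly the property that lets the real system spread the work across cores.

```python
import numpy as np

# Solve T @ g = f for the radiated power density g >= 0, where T is the
# precomputed geometry matrix and f the measured line-integrated powers.
# Shapes and values are invented; the ASDEX Upgrade system runs an
# equivalent parallelized iteration in LabVIEW RT.
rng = np.random.default_rng(1)
T = rng.uniform(0.0, 1.0, (64, 400))          # 64 channels, 400 pixels
g_true = rng.uniform(0.0, 1.0, 400)
f = T @ g_true

g = np.zeros(400)
row_sum = T.sum(axis=1)                        # per-channel normalization
col_sum = T.sum(axis=0)                        # per-pixel normalization
for _ in range(200):
    residual = f - T @ g
    # Every pixel update is independent -> trivially parallel across cores.
    g += (T.T @ (residual / row_sum)) / col_sum
    g = np.maximum(g, 0.0)                     # non-negativity constraint

print("relative residual:", np.linalg.norm(f - T @ g) / np.linalg.norm(f))
```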

  16. A fast data acquisition system for PHA and MCS measurements

    International Nuclear Information System (INIS)

    Eijk, P.J.A. van; Keyser, C.J.; Rigterink, B.J.; Hasper, H.

    1985-01-01

    A microprocessor-controlled data acquisition system for pulse height analysis and multichannel scaling is described. A 4K x 24 bit static memory is used to obtain a fast data acquisition rate: the system can store 12 bit ADC or TDC data within 150 ns. Operating commands can be entered via a small keyboard or an RS-232-C interface, and an oscilloscope is used to display a spectrum. Displaying a spectrum or transmitting spectrum data to an external computer causes only a short interruption of a measurement in progress and is accomplished using a DMA circuit. The program is written in Modular Pascal and is divided into 15 modules. These implement 9 parallel processes, which are synchronized using semaphores; hardware interrupts from the data acquisition, DMA, keyboard and RS-232-C circuits are used to signal these processes. (orig.)
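
    To make the two modes concrete, the sketch below emulates them in NumPy (all sizes are invented): pulse height analysis increments the memory channel addressed by each ADC value, producing an amplitude histogram, while multichannel scaling counts events into successive dwell-time channels, producing count rate versus time.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pulse height analysis (PHA): each 12-bit ADC value addresses a memory
# channel whose contents are incremented, i.e. a histogram of amplitudes.
adc_values = rng.integers(0, 4096, 100_000)
pha_spectrum = np.bincount(adc_values, minlength=4096)

# Multichannel scaling (MCS): events are counted into successive
# dwell-time channels, giving count rate versus time.
timestamps = np.sort(rng.uniform(0.0, 10.0, 100_000))        # seconds
mcs_spectrum, _ = np.histogram(timestamps, bins=1000, range=(0.0, 10.0))

print(pha_spectrum.argmax(), mcs_spectrum[:5])
```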

  17. War-gaming application for future space systems acquisition: MATLAB implementation of war-gaming acquisition models and simulation results

    Science.gov (United States)

    Vienhage, Paul; Barcomb, Heather; Marshall, Karel; Black, William A.; Coons, Amanda; Tran, Hien T.; Nguyen, Tien M.; Guillen, Andy T.; Yoh, James; Kizer, Justin; Rogers, Blake A.

    2017-05-01

    The paper describes the MATLAB (MathWorks) programs that were developed during the REU workshop to implement The Aerospace Corporation's Unified Game-based Acquisition Framework and Advanced Game-based Mathematical Framework (UGAF-AGMF) and its associated War-Gaming Engine (WGE) models. Each game can be played from the perspective of the Department of Defense Acquisition Authority (DAA) or of an individual contractor (KTR). The programs also implement Aerospace's optimum "Program and Technical Baseline (PTB) and associated acquisition" strategy, which combines low Total Ownership Cost (TOC) with innovative designs while still meeting warfighter needs. The paper also describes the Bayesian acquisition war-gaming approach, which uses Monte Carlo simulation, a numerical technique for accounting for uncertainty in decision making, to simulate the PTB development and acquisition processes, and details the implementation procedure and the interactions between the games.
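
    The Monte Carlo idea is simple to sketch: draw the uncertain cost drivers from assumed distributions, push each draw through the TOC model, and read decisions off the resulting distribution. The cost model and distributions below are invented for illustration and have no connection to the actual Aerospace WGE models.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000                                                        # MC draws

# Hypothetical uncertain cost drivers for one acquisition strategy.
dev_cost = rng.lognormal(mean=np.log(120.0), sigma=0.25, size=N)   # $M
unit_cost = rng.normal(8.0, 1.5, size=N)                           # $M/unit
ops_years = rng.integers(10, 21, size=N)                           # service life

# Toy Total Ownership Cost model: development + 12 units + O&M.
toc = dev_cost + 12 * unit_cost + ops_years * 4.0                  # $4M/yr O&M
print(f"mean TOC        = {toc.mean():8.1f} $M")
print(f"90th percentile = {np.percentile(toc, 90):8.1f} $M")
print(f"P(TOC > 300 $M) = {(toc > 300).mean():.3f}")
```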

  18. PC based 8-parameter data acquisition system

    International Nuclear Information System (INIS)

    Gupta, J.D.; Naik, K.V.; Jain, S.K.; Pathak, R.V.; Suman, B.

    1989-01-01

    Multiparameter data acquisition (MPA) systems, which analyse nuclear events with respect to more than one property of the event, are essential tools for the study of complex nuclear phenomena requiring analysis of time-coincident spectra. For better throughput and accuracy, each parameter is digitized by its own ADC. A stand-alone, low-cost IBM PC based 8-parameter data acquisition system developed by the authors uses the address-recording technique to acquire data from eight 12-bit ADCs into PC memory. Two memory buffers are used in ping-pong fashion, so that data acquisition into one bank and dumping of data from the other bank onto the PC disk can proceed simultaneously. Data are acquired into PC memory in DMA mode to realise high throughput, and a hardware interrupt is used to switch banks during acquisition. A comprehensive software package developed in Turbo Pascal offers the user a set of menu-driven interactive commands for setting up system parameters and controlling the system. The system is to be used with a pelletron accelerator. (author). 5 figs
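
    The ping-pong scheme is worth a small sketch: one thread fills a bank while another flushes the previously filled bank to disk, so acquisition never has to wait for I/O. The Python below (invented sizes and timings; the real system does this with DMA and a hardware interrupt rather than threads) uses a two-slot queue to model the two alternating banks.

```python
import queue
import random
import threading
import time

BUF_SIZE = 1024
full_buffers = queue.Queue(maxsize=2)   # at most two banks in flight

def acquire(n_banks):
    """Fill banks with fake 12-bit ADC words and hand them to the writer."""
    for _ in range(n_banks):
        bank = [random.getrandbits(12) for _ in range(BUF_SIZE)]
        full_buffers.put(bank)           # blocks only if both banks are full
    full_buffers.put(None)               # sentinel: acquisition finished

def write_to_disk():
    """Drain full banks; the sleep stands in for the disk dump."""
    while (bank := full_buffers.get()) is not None:
        time.sleep(0.01)
        print(f"flushed bank of {len(bank)} words")

writer = threading.Thread(target=write_to_disk)
writer.start()
acquire(4)
writer.join()
```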

  19. The NUSTAR data acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Loeher, B.; Toernqvist, H.T. [TU Darmstadt (Germany); GSI (Germany); Agramunt, J. [IFIC, CSIC (Spain); Bendel, M.; Gernhaeuser, R.; Le Bleis, T.; Winkel, M. [TU Muenchen (Germany); Charpy, A.; Heinz, A.; Johansson, H.T. [Chalmers University of Technology (Sweden); Coleman-Smith, P.; Lazarus, I.H.; Pucknell, V.F.E. [STFC Daresbury (United Kingdom); Czermak, A. [IFJ (Poland); Kurz, N.; Nociforo, C.; Pietri, S.; Schaffner, H.; Simon, H. [GSI (Germany); Scheit, H. [TU Darmstadt (Germany); Taieb, J. [CEA (France)

    2015-07-01

    The NUSTAR (NUclear STructure, Astrophysics and Reactions) collaboration represents one of the four pillars motivating the construction of the international FAIR facility. The diversity of upcoming experiments within the collaboration, including experiments in storage rings, reactions at relativistic energies and high-precision spectroscopy, is reflected in the diversity of the required detection systems. A challenging task is to incorporate the different needs of individual detectors and components under the umbrella of the unified NUSTAR Data AcQuisition (NDAQ) infrastructure. NDAQ takes up this challenge by providing a high degree of availability via continuously running systems, high flexibility via experiment-specific configuration files for data streams and trigger logic, and distributed timestamps and trigger information over km distances, all built on the solid basis of the GSI Multi-Branch System (MBS). NDAQ ensures interoperability between individual NUSTAR detectors and allows merging of formerly separate data streams according to the needs of all experiments, increasing reliability in NUSTAR data acquisition. An overview of the NDAQ infrastructure and the current progress is presented.
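
    Merging timestamped streams is the part that lends itself to a small sketch: if each detector branch delivers its data in time order, a k-way merge on the distributed timestamp produces one ordered stream, and a coincidence window then groups entries into candidate events. The branch contents and window below are invented placeholders, not NDAQ data structures.

```python
import heapq

# Two detector branches, each already ordered by the shared timestamp.
branch_a = [(100, "a0"), (205, "a1"), (330, "a2")]   # (timestamp, payload)
branch_b = [(120, "b0"), (200, "b1"), (310, "b2")]

# k-way merge on timestamp: one globally time-ordered stream.
merged = list(heapq.merge(branch_a, branch_b))

# Group entries closer than a coincidence window into candidate events.
WINDOW = 15
events, current = [], [merged[0]]
for entry in merged[1:]:
    if entry[0] - current[-1][0] <= WINDOW:
        current.append(entry)
    else:
        events.append(current)
        current = [entry]
events.append(current)
print(events)
```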

  20. TCABR data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Fagundes, A.N. E-mail: fagundes@if.usp.br; Sa, W.P.; Coelho, P.M.S.A

    2000-08-01

    A brief description of the design of the data acquisition system for the TCABR tokamak is presented. The system comprises VME-standard instrumentation and incorporates CAMAC instrumentation through a GPIB interface. All the data needed to program the different parts of the equipment, as well as the repertoire of actions for machine control, are stored in a DBMS with user-friendly interfaces. Publicly available software is used, where feasible, in the development of the codes. The distinguishing feature of the TCABR system is that it imposes virtually no barriers to upgrading, in either hardware or software.
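
    The DBMS-backed approach can be sketched in a few lines: per-shot device settings live in a database table, and the acquisition program reads its configuration from there before each discharge. The schema, device names and values below are invented for illustration.

```python
import sqlite3

# Per-shot device configuration kept in a database (schema is hypothetical).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE shot_config (
    shot INTEGER, device TEXT, parameter TEXT, value REAL)""")
db.executemany(
    "INSERT INTO shot_config VALUES (?, ?, ?, ?)",
    [(1001, "digitizer_vme_1", "sample_rate_kHz", 500.0),
     (1001, "digitizer_vme_1", "pretrigger_ms", 10.0),
     (1001, "camac_adc_3", "gain", 2.0)])

# The acquisition program pulls its settings for the upcoming shot.
for device, param, value in db.execute(
        "SELECT device, parameter, value FROM shot_config WHERE shot = ?",
        (1001,)):
    print(f"{device}: {param} = {value}")
```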