WorldWideScience

Sample records for temporal parallel acquisition

  1. High temporal resolution magnetic resonance imaging: development of a parallel three dimensional acquisition method for functional neuroimaging

    International Nuclear Information System (INIS)

    Rabrait, C.

    2007-11-01

    Echo Planar Imaging is widely used to perform data acquisition in functional neuroimaging. This sequence allows the acquisition of a set of about 30 slices, covering the whole brain, at a spatial resolution ranging from 2 to 4 mm and a temporal resolution ranging from 1 to 2 s. It is thus well adapted to the mapping of activated brain areas but does not allow precise study of brain dynamics. Moreover, temporal interpolation is needed in order to correct for inter-slice delays, and 2-dimensional acquisition is subject to vascular inflow artifacts. To improve the estimation of the hemodynamic response functions associated with activation, this thesis aimed at developing a 3-dimensional high temporal resolution acquisition method. To do so, Echo Volume Imaging was combined with reduced field-of-view acquisition and parallel imaging. Indeed, E.V.I. allows the acquisition of a whole volume in Fourier space following a single excitation, but it requires very long echo trains. Parallel imaging and field-of-view reduction are used to reduce the echo train durations by a factor of 4, which allows the acquisition of a 3-dimensional brain volume, with limited susceptibility-induced distortions and signal losses, in 200 ms. All imaging parameters have been optimized in order to reduce echo train durations and to maximize S.N.R., so that cerebral activation can be detected with a high level of confidence. Robust detection of brain activation was demonstrated with both visual and auditory paradigms. High temporal resolution hemodynamic response functions could be estimated through selective averaging of the responses to the different trials of the stimulation. To further improve S.N.R., the matrix inversions required in parallel reconstruction were regularized, and the impact of the level of regularization on activation detection was investigated. Eventually, potential applications of parallel E.V.I. such as the study of non-stationary effects in the B.O.L.D. response

  2. Single breath-hold real-time cine MR imaging: improved temporal resolution using generalized autocalibrating partially parallel acquisition (GRAPPA) algorithm

    International Nuclear Information System (INIS)

    Wintersperger, Bernd J.; Nikolaou, Konstantin; Dietrich, Olaf; Reiser, Maximilian F.; Schoenberg, Stefan O.; Rieber, Johannes; Nittka, Matthias

    2003-01-01

    The purpose of this study was to test parallel imaging techniques for improvement of temporal resolution in multislice single breath-hold real-time cine steady-state free precession (SSFP) in comparison with standard segmented single-slice SSFP techniques. Eighteen subjects were examined on a 1.5-T scanner with a multislice real-time cine SSFP technique based on the GRAPPA algorithm. Global left ventricular parameters (EDV, ESV, SV, EF) were evaluated and the results compared with a standard segmented single-slice SSFP technique. Results for EDV (r=0.93), ESV (r=0.99), SV (r=0.83), and EF (r=0.99) of real-time multislice SSFP imaging showed a high correlation with results of segmented SSFP acquisitions. Systematic differences between the two techniques were statistically non-significant. Single breath-hold multislice techniques using GRAPPA allow for improvement of temporal resolution and for accurate assessment of global left ventricular functional parameters. (orig.)
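GRAPPA, used in the record above, synthesizes skipped k-space lines as coil-weighted combinations of acquired neighbouring lines, with weights calibrated on a fully sampled autocalibration (ACS) block. A minimal one-dimensional sketch for acceleration R = 2 (function and variable names are ours, not a scanner implementation):

```python
import numpy as np

def grappa_1d(ksub, acs):
    """Toy 1D GRAPPA (R = 2): fill each missing k-space line from the
    acquired lines directly above and below it, with coil-combination
    weights calibrated on a fully sampled ACS block.

    ksub : (nc, ny) undersampled multicoil k-space; odd lines are zeroed.
    acs  : (nc, na) fully sampled autocalibration block, na >= 3.
    """
    nc, ny = ksub.shape
    na = acs.shape[1]
    # Calibration: every interior ACS line plays the role of a missing line.
    S = np.stack([np.concatenate([acs[:, j - 1], acs[:, j + 1]])
                  for j in range(1, na - 1)])          # (na-2, 2*nc) sources
    T = acs[:, 1:na - 1].T                             # (na-2, nc) targets
    W, *_ = np.linalg.lstsq(S, T, rcond=None)          # (2*nc, nc) weights
    out = ksub.copy()
    for y in range(1, ny - 1, 2):                      # odd lines are missing
        s = np.concatenate([ksub[:, y - 1], ksub[:, y + 1]])
        out[:, y] = s @ W
    return out
```

Production GRAPPA calibrates 2D kernels over many neighbours and handles edge lines; the least-squares calibration step above is the essential idea.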

  3. High temporal resolution magnetic resonance imaging: development of a parallel three dimensional acquisition method for functional neuroimaging; Imagerie par resonance magnetique a haute resolution temporelle: developpement d'une methode d'acquisition parallele tridimensionnelle pour l'imagerie fonctionnelle cerebrale

    Energy Technology Data Exchange (ETDEWEB)

    Rabrait, C

    2007-11-15

    Echo Planar Imaging is widely used to perform data acquisition in functional neuroimaging. This sequence allows the acquisition of a set of about 30 slices, covering the whole brain, at a spatial resolution ranging from 2 to 4 mm and a temporal resolution ranging from 1 to 2 s. It is thus well adapted to the mapping of activated brain areas but does not allow precise study of brain dynamics. Moreover, temporal interpolation is needed in order to correct for inter-slice delays, and 2-dimensional acquisition is subject to vascular inflow artifacts. To improve the estimation of the hemodynamic response functions associated with activation, this thesis aimed at developing a 3-dimensional high temporal resolution acquisition method. To do so, Echo Volume Imaging was combined with reduced field-of-view acquisition and parallel imaging. Indeed, E.V.I. allows the acquisition of a whole volume in Fourier space following a single excitation, but it requires very long echo trains. Parallel imaging and field-of-view reduction are used to reduce the echo train durations by a factor of 4, which allows the acquisition of a 3-dimensional brain volume, with limited susceptibility-induced distortions and signal losses, in 200 ms. All imaging parameters have been optimized in order to reduce echo train durations and to maximize S.N.R., so that cerebral activation can be detected with a high level of confidence. Robust detection of brain activation was demonstrated with both visual and auditory paradigms. High temporal resolution hemodynamic response functions could be estimated through selective averaging of the responses to the different trials of the stimulation. To further improve S.N.R., the matrix inversions required in parallel reconstruction were regularized, and the impact of the level of regularization on activation detection was investigated. Eventually, potential applications of parallel E.V.I. such as the study of non-stationary effects in the B.O.L.D. response

  4. Knowledge acquisition for temporal abstraction.

    Science.gov (United States)

    Stein, A; Musen, M A; Shahar, Y

    1996-01-01

    Temporal abstraction is the task of detecting relevant patterns in data over time. The knowledge-based temporal-abstraction method uses knowledge about a clinical domain's contexts, external events, and parameters to create meaningful interval-based abstractions from raw time-stamped clinical data. In this paper, we describe the acquisition and maintenance of domain-specific temporal-abstraction knowledge. Using the PROTEGE-II framework, we have designed a graphical tool for acquiring temporal knowledge directly from expert physicians, maintaining the knowledge in a sharable form, and converting the knowledge into a suitable format for use by an appropriate problem-solving method. In initial tests, the tool offered significant gains in our ability to rapidly acquire temporal knowledge and to use that knowledge to perform automated temporal reasoning.
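The core step of knowledge-based temporal abstraction, turning raw time-stamped values into merged interval-based abstractions using domain knowledge, can be sketched in a few lines. The thresholds and labels below are illustrative stand-ins for expert-supplied clinical knowledge, not PROTEGE-II's actual representation:

```python
def classify(value, low, high):
    """Map a raw value to a state label using domain thresholds."""
    return "LOW" if value < low else "HIGH" if value > high else "NORMAL"

def temporal_abstraction(samples, low, high):
    """Turn time-stamped raw values into merged interval-based state
    abstractions. `samples` is a list of (timestamp, value) pairs sorted
    by time; `low`/`high` stand in for expert-supplied knowledge."""
    intervals = []
    for t, v in samples:
        label = classify(v, low, high)
        if intervals and intervals[-1][2] == label:
            intervals[-1] = (intervals[-1][0], t, label)   # extend the open interval
        else:
            intervals.append((t, t, label))                # start a new interval
    return intervals
```

Running this over a short series of hypothetical lab values yields one interval per contiguous state, which is the kind of abstraction a clinician can reason about.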

  5. High temporal resolution functional MRI using parallel echo volumar imaging

    International Nuclear Information System (INIS)

    Rabrait, C.; Ciuciu, P.; Ribes, A.; Poupon, C.; Dehaine-Lambertz, G.; LeBihan, D.; Lethimonnier, F.; Le Roux, P.

    2008-01-01

    Purpose: To combine parallel imaging with 3D single-shot acquisition (echo volumar imaging, EVI) in order to acquire high temporal resolution volumar functional MRI (fMRI) data. Materials and Methods: An improved EVI sequence was associated with parallel acquisition and field of view reduction in order to acquire a large brain volume in 200 msec. Temporal stability and functional sensitivity were increased through optimization of all imaging parameters and Tikhonov regularization of parallel reconstruction. Two human volunteers were scanned with parallel EVI in a 1.5 T whole-body MR system, while submitted to a slow event-related auditory paradigm. Results: Thanks to parallel acquisition, the EVI volumes display a low level of geometric distortions and signal losses. After removal of low-frequency drifts and physiological artifacts, activations were detected in the temporal lobes of both volunteers and voxel-wise hemodynamic response functions (HRF) could be computed. On these HRF, different habituation behaviors in response to sentence repetition could be identified. Conclusion: This work demonstrates the feasibility of high temporal resolution 3D fMRI with parallel EVI. Combined with advanced estimation tools, this acquisition method should prove useful to measure neural activity timing differences or study the nonlinearities and non-stationarities of the BOLD response. (authors)
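The Tikhonov-regularized parallel reconstruction mentioned in this record amounts, per aliased pixel, to a stabilized least-squares inversion of the coil sensitivity matrix. A sketch under the standard SENSE model (function and variable names are ours; the regularization parameter is illustrative):

```python
import numpy as np

def sense_unfold(aliased, sens, lam=0.01):
    """Tikhonov-regularized SENSE unfolding for one group of R aliased
    pixel positions: solve (E^H E + lam*I) x = E^H y.

    aliased : (nc,) coil measurements at one aliased pixel.
    sens    : (nc, R) coil sensitivities at the R superimposed positions.
    lam     : Tikhonov regularization weight trading noise for bias.
    """
    E = sens
    A = E.conj().T @ E + lam * np.eye(E.shape[1])   # regularized normal matrix
    return np.linalg.solve(A, E.conj().T @ aliased)
```

Increasing `lam` suppresses noise amplification where coil geometry makes the inversion ill-conditioned, at the cost of a small bias, which is exactly the trade-off the record says was investigated against activation detection.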

  6. Temporal fringe pattern analysis with parallel computing

    International Nuclear Information System (INIS)

    Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

    2005-01-01

    Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution time was reduced by a factor of 1.6 when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated to data transfer, data reads, and waiting should be minimized. Parallel computing is thus found to be a feasible approach to reducing execution times in temporal fringe pattern analysis.
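The single-program multiple-data strategy described above can be sketched by giving every worker the same per-pixel program applied to its own slice of the pixel grid. Threads stand in here for the paper's cluster nodes, and the dominant-frequency kernel is an illustrative stand-in for the actual fringe analysis:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def dominant_frequency(series):
    """Index of the strongest non-DC temporal frequency for one pixel."""
    spec = np.abs(np.fft.rfft(series))
    spec[0] = 0.0                      # ignore the DC term
    return int(np.argmax(spec))

def analyse_fringes(stack, workers=4):
    """SPMD-style temporal fringe analysis: every worker runs the same
    per-pixel program on its own partition of the pixel grid.

    stack : (nt, npix) array of intensity time series, one column per pixel.
    """
    nt, npix = stack.shape
    chunks = np.array_split(np.arange(npix), workers)

    def run(chunk):                    # the single program, one data slice
        return [(p, dominant_frequency(stack[:, p])) for p in chunk]

    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(run, chunks)
    out = np.empty(npix, dtype=int)
    for part in results:
        for p, f in part:
            out[p] = f
    return out
```

Because each pixel's time series is processed independently, the partitioning is embarrassingly parallel; as the record notes, the remaining limit on speedup is the time spent on data transfer, reads, and waiting rather than on compute.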

  7. Application of parallel preprocessors in data acquisition

    International Nuclear Information System (INIS)

    Butler, H.S.; Cooper, M.D.; Williams, R.A.; Hughes, E.B.; Rolfe, J.R.; Wilson, S.L.; Zeman, H.D.

    1981-01-01

    A data-acquisition system is being developed for a large-scale experiment at LAMPF. It will make use of four microprocessors running in parallel to acquire and preprocess data from 432 photomultiplier tubes (PMT) attached to 396 NaI crystals. The microprocessors are LSI-11/23s operating through CAMAC Auxiliary Crate Controllers (ACC). Data acquired by the microprocessors will be collected through a programmable Branch Driver (MBD) which also will read data from 52 scintillators (88 PMTs) and 728 wires comprising a drift chamber. The MBD will transfer data from each event into a PDP-11/44 for further processing and taping. The microprocessors will perform the secondary function of monitoring the calibration of the NaI PMTs. A special trigger circuit allows the system to stack data from a second event while the first is still being processed. Major components of the system were tested in April 1981. Timing measurements from this test are reported

  8. New partially parallel acquisition technique in cerebral imaging: preliminary findings

    International Nuclear Information System (INIS)

    Tintera, Jaroslav; Gawehn, Joachim; Bauermann, Thomas; Vucurevic, Goran; Stoeter, Peter

    2004-01-01

    In MRI applications where short acquisition time is necessary, the increase in acquisition speed often comes at the expense of image resolution and SNR. In such cases, the newly developed parallel acquisition techniques can provide images without these limitations in a reasonably shortened measurement time. A newly designed eight-channel head coil array (i-PAT coil) allowing for parallel acquisition of independently reconstructed images (GRAPPA mode) was tested for its applicability in neuroradiology. Image homogeneity was tested in a standard phantom and in healthy volunteers. BOLD signal changes were studied in a group of six volunteers using finger-tapping stimulation. Phantom studies revealed an important drop of signal in the center of the image, even after the use of a normalization filter, and an important increase of artifact power with reduction of measurement time, strongly depending on the combination of acceleration parameters. The additional application of a parallel acquisition technique such as GRAPPA decreases measurement time by about 30%, but further reduction is often possible only at the expense of SNR. This technique performs best in conditions in which imaging speed is important, such as CE MRA, but its time resolution still does not allow the acquisition of angiograms separating the arterial and venous phases. Significantly larger areas of BOLD activation were found using the i-PAT coil compared to the standard head coil. Because the i-PAT coil is an eight-channel surface coil array, peripheral cortical structures profit from its high SNR, as in high-resolution imaging of small cortical dysplasias and in functional activation of cortical areas imaged by BOLD contrast. In BOLD contrast imaging, susceptibility artifacts are reduced, but only if an appropriate combination of acceleration parameters is used. (orig.)

  9. New partially parallel acquisition technique in cerebral imaging: preliminary findings

    Energy Technology Data Exchange (ETDEWEB)

    Tintera, Jaroslav [Institute for Clinical and Experimental Medicine, Prague (Czech Republic); Gawehn, Joachim; Bauermann, Thomas; Vucurevic, Goran; Stoeter, Peter [University Clinic Mainz, Institute of Neuroradiology, Mainz (Germany)

    2004-12-01

    In MRI applications where short acquisition time is necessary, the increase in acquisition speed often comes at the expense of image resolution and SNR. In such cases, the newly developed parallel acquisition techniques can provide images without these limitations in a reasonably shortened measurement time. A newly designed eight-channel head coil array (i-PAT coil) allowing for parallel acquisition of independently reconstructed images (GRAPPA mode) was tested for its applicability in neuroradiology. Image homogeneity was tested in a standard phantom and in healthy volunteers. BOLD signal changes were studied in a group of six volunteers using finger-tapping stimulation. Phantom studies revealed an important drop of signal in the center of the image, even after the use of a normalization filter, and an important increase of artifact power with reduction of measurement time, strongly depending on the combination of acceleration parameters. The additional application of a parallel acquisition technique such as GRAPPA decreases measurement time by about 30%, but further reduction is often possible only at the expense of SNR. This technique performs best in conditions in which imaging speed is important, such as CE MRA, but its time resolution still does not allow the acquisition of angiograms separating the arterial and venous phases. Significantly larger areas of BOLD activation were found using the i-PAT coil compared to the standard head coil. Because the i-PAT coil is an eight-channel surface coil array, peripheral cortical structures profit from its high SNR, as in high-resolution imaging of small cortical dysplasias and in functional activation of cortical areas imaged by BOLD contrast. In BOLD contrast imaging, susceptibility artifacts are reduced, but only if an appropriate combination of acceleration parameters is used. (orig.)

  10. A tomograph VMEbus parallel processing data acquisition system

    International Nuclear Information System (INIS)

    Wilkinson, N.A.; Rogers, J.G.; Atkins, M.S.

    1989-01-01

    This paper describes a VME-based data acquisition system suitable for the development of Positron Volume Imaging tomographs, which use 3-D data for improved image resolution over slice-oriented tomographs. The data acquisition must be flexible enough to accommodate several 3-D reconstruction algorithms; hence, a software-based system is most suitable. Furthermore, because of the increased dimensions and resolution of volume imaging tomographs, the raw data event rate is greater than that of slice-oriented machines. These dual requirements are met by our data acquisition system. Flexibility is achieved through an array of processors connected over a VMEbus, operating asynchronously and in parallel. High raw data throughput is achieved using a dedicated high speed data transfer device available for the VMEbus. The device can attain a raw data rate of 2.5 million coincidence events per second for raw events which are 64 bits wide
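The quoted raw data rate implies the sustained bus bandwidth the transfer device must provide; a quick back-of-envelope check (the event size and rate are from the record, the derived bandwidth is our own arithmetic):

```python
# Sustained bandwidth implied by 2.5 million 64-bit coincidence events per second.
EVENT_BITS = 64
EVENTS_PER_S = 2.5e6

bytes_per_event = EVENT_BITS // 8               # 8 bytes per coincidence event
bandwidth_mb_s = EVENTS_PER_S * bytes_per_event / 1e6
# i.e. 20 MB/s of sustained raw-event traffic on the VMEbus
```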

  11. A tomograph VMEbus parallel processing data acquisition system

    International Nuclear Information System (INIS)

    Atkins, M.S.; Wilkinson, N.A.; Rogers, J.G.

    1988-11-01

    This paper describes a VME-based data acquisition system suitable for the development of Positron Volume Imaging tomographs, which use 3-D data for improved image resolution over slice-oriented tomographs. The data acquisition must be flexible enough to accommodate several 3-D reconstruction algorithms; hence, a software-based system is most suitable. Furthermore, because of the increased dimensions and resolution of volume imaging tomographs, the raw data event rate is greater than that of slice-oriented machines. These dual requirements are met by our data acquisition system. Flexibility is achieved through an array of processors connected over a VMEbus, operating asynchronously and in parallel. High raw data throughput is achieved using a dedicated high speed data transfer device available for the VMEbus. The device can attain a raw data rate of 2.5 million coincidence events per second for raw events which are 64 bits wide. Real-time data acquisition and pre-processing requirements can be met by about forty 20 MHz Motorola 68020/68881 processors

  12. Microprocessor event analysis in parallel with Camac data acquisition

    International Nuclear Information System (INIS)

    Cords, D.; Eichler, R.; Riege, H.

    1981-01-01

    The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a Camac System (GEC-ELLIOTT System Crate) and shares the Camac access with a Nord-10S computer. Interfaces have been designed and tested for execution of Camac cycles, communication with the Nord-10S computer and DMA-transfer from Camac to the MIPROC-16 memory. The system is used in the JADE data-acquisition-system at PETRA where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the result of various checks is appended to the event. In case of spurious triggers or clear beam gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (orig.)

  13. Microprocessor event analysis in parallel with CAMAC data acquisition

    CERN Document Server

    Cords, D; Riege, H

    1981-01-01

    The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a CAMAC System (GEC-ELLIOTT System Crate) and shares the CAMAC access with a Nord-10S computer. Interfaces have been designed and tested for execution of CAMAC cycles, communication with the Nord-10S computer and DMA-transfer from CAMAC to the MIPROC-16 memory. The system is used in the JADE data-acquisition-system at PETRA where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the results of various checks are appended to the event. In case of spurious triggers or clear beam gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (5 refs).

  14. Development of a parallel zoomed EVI sequence for high temporal resolution analysis of the BOLD response

    International Nuclear Information System (INIS)

    Rabrait, C.

    2006-01-01

    The hemodynamic impulse response to any short stimulus typically lasts around 20 seconds. Thus, detection of the Blood Oxygenation Level Dependent (BOLD) effect is usually performed using a 2D Echo Planar Imaging (EPI) sequence, with repetition times on the order of 1 or 2 seconds. This temporal resolution is generally sufficient for detection purposes. Nevertheless, when trying to accurately estimate hemodynamic response functions (HRF), higher scanning rates represent a real advantage. Thus, in order to reach a temporal resolution around 200 ms, we developed a new acquisition method based on Echo Volumar Imaging and 2D parallel acquisition (1). Echo Volumar Imaging (EVI) was proposed in 1977 by Mansfield (2). As a 3D single-shot acquisition method, EVI intrinsically possesses many advantages for functional neuroimaging. Nevertheless, to date, only a few applications have been reported (3, 4). Indeed, very restrictive hardware requirements make EVI difficult to perform in satisfactory experimental conditions, even today. The critical point in EVI is the echo train duration, which is longer than in EPI due to 3D acquisition. Indeed, at equal field of view and spatial resolution, the EVI echo train duration must be approximately equal to the EPI echo train duration multiplied by the number of slices acquired in EPI. Consequently, EVI is much more sensitive than EPI to geometric distortions, which are related to phase errors, and to signal losses, which are due to long echo times (TE). Thus, a first improvement was brought by 'zoomed' or 'localized' EVI (5), which allows focusing on a small volume of interest and thus limits echo train durations compared to full-FOV acquisitions. To further reduce echo train durations, we chose to apply parallel acquisition. Moreover, since EVI is a 3D acquisition method, we are able to perform parallel acquisition and SENSE reconstruction along the two phase directions (6). The R = 4 under-sampling consists in the

  15. Acquisition of multiple prior distributions in tactile temporal order judgment

    Directory of Open Access Journals (Sweden)

    Yasuhito Nagai

    2012-08-01

    The Bayesian estimation theory proposes that the brain acquires the prior distribution of a task and integrates it with sensory signals to minimize the effect of sensory noise. Psychophysical studies have demonstrated that our brain actually implements Bayesian estimation in a variety of sensory-motor tasks. However, these studies imposed only one prior distribution on participants within a task period. In this study, we investigated the conditions that enable the acquisition of multiple prior distributions in temporal order judgment (TOJ) of two tactile stimuli across the hands. In Experiment 1, stimulation intervals were randomly selected from one of two prior distributions (biased to right hand earlier and biased to left hand earlier) in association with color cues (green and red, respectively). Although the acquisition of the two priors was not enabled by the color cues alone, it was significant when participants shifted their gaze (above or below) in response to the color cues. However, the acquisition of multiple priors was not significant when participants moved their mouths (opened or closed). In Experiment 2, the spatial cues (above and below) were used to identify whether eye position or retinal cue position was crucial for the eye-movement-dependent acquisition of multiple priors in Experiment 1. The acquisition of the two priors was significant when participants moved their gaze to the cues (i.e., the cue positions on the retina were constant across the priors), as well as when participants did not shift their gaze (i.e., the cue positions on the retina changed according to the priors). Thus, both eye and retinal cue positions were effective in acquiring multiple priors. Based on previous neurophysiological reports, we discuss possible neural correlates that contribute to the acquisition of multiple priors.
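For Gaussian priors and Gaussian sensory noise, the Bayesian integration assumed in such studies reduces to a precision-weighted average of the prior mean and the noisy observation. A sketch with illustrative numbers (none taken from the experiment):

```python
def bayes_estimate(observation, obs_var, prior_mean, prior_var):
    """Gaussian MAP estimate: fuse a noisy sensory signal with an acquired
    prior by precision weighting. Returns (posterior mean, posterior variance)."""
    w = prior_var / (prior_var + obs_var)        # weight on the sensory signal
    post_mean = w * observation + (1 - w) * prior_mean
    post_var = (prior_var * obs_var) / (prior_var + obs_var)
    return post_mean, post_var
```

The posterior variance is always smaller than either input variance, which is the sense in which integrating an acquired prior "minimizes the effect of sensory noise"; when the prior is biased (e.g., right hand earlier), the estimate is pulled toward that bias.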

  16. Parallel preprocessing in a nuclear data acquisition system

    International Nuclear Information System (INIS)

    Pichot, G.; Auriol, E.; Lemarchand, G.; Millaud, J.

    1977-01-01

    The appearance of microprocessors and large memory chips has somewhat modified the spectrum of tools usable by the data acquisition system designer. This is particularly true in the nuclear research field, where the data flow has been continuously growing as a consequence of the increasing capabilities of new detectors. This paper deals with the insertion, between a data acquisition system and a computer, of a preprocessing structure based on microprocessors and large capacity high speed memories. The results show a significant improvement in several aspects of the operation of the system, with returns paying back the investment in 18 months

  17. DAPHNE: a parallel multiprocessor data acquisition system for nuclear physics

    International Nuclear Information System (INIS)

    Welch, L.C.

    1984-01-01

    This paper describes a project to meet these data acquisition needs for a new accelerator, ATLAS, being built at Argonne National Laboratory. ATLAS is a heavy-ion linear superconducting accelerator providing beam energies up to 25 MeV/A with a relative spread in beam energy as good as .0001 and a time spread of less than 100 psec. Details about the hardware front end, command language, data structure, and the flow of event treatment are covered

  18. On the acquisition of temporal conjunctions in Finnish.

    Science.gov (United States)

    Atanassova, M

    2001-03-01

    This study concerns the acquisition of complex sentence structures in Finnish. Specifically, three simultaneous and sequential events were acted out with toys in an elicitation task, and the production of "and," "and then," "when," and "after" were observed. There were 48 children in a cross-sectional design at the age levels 3, 4, 5, and 6 years. Immediately after the complex event was presented, the child was asked the initial request "What happened?" If the child did not produce the whole event spontaneously, she or he was prompted by "What else happened?" Finally, the prompted request "When did X?" was asked (X referring to the second action component of the event). The results showed that prompting better revealed the ability of the children, especially that of the younger ones, to use temporal conjunctions in complex sentences, as well as the delicate interplay of language skills and their flexible use.

  19. SMARTS: Exploiting Temporal Locality and Parallelism through Vertical Execution

    International Nuclear Information System (INIS)

    Beckman, P.; Crotinger, J.; Karmesin, S.; Malony, A.; Oldehoeft, R.; Shende, S.; Smith, S.; Vajracharya, S.

    1999-01-01

    In the solution of large-scale numerical problems, parallel computing is becoming simultaneously more important and more difficult. The complex organization of today's multiprocessors with several memory hierarchies has forced the scientific programmer to make a choice between simple but unscalable code and scalable but extremely complex code that does not port to other architectures. This paper describes how the SMARTS runtime system and the POOMA C++ class library for high-performance scientific computing work together to exploit data parallelism in scientific applications while hiding the details of managing parallelism and data locality from the user. We present innovative algorithms, based on the macro-dataflow model, for detecting data parallelism and efficiently executing data-parallel statements on shared-memory multiprocessors. We also describe how these algorithms can be implemented on clusters of SMPs

  20. SMARTS: Exploiting Temporal Locality and Parallelism through Vertical Execution

    Energy Technology Data Exchange (ETDEWEB)

    Beckman, P.; Crotinger, J.; Karmesin, S.; Malony, A.; Oldehoeft, R.; Shende, S.; Smith, S.; Vajracharya, S.

    1999-01-04

    In the solution of large-scale numerical problems, parallel computing is becoming simultaneously more important and more difficult. The complex organization of today's multiprocessors with several memory hierarchies has forced the scientific programmer to make a choice between simple but unscalable code and scalable but extremely complex code that does not port to other architectures. This paper describes how the SMARTS runtime system and the POOMA C++ class library for high-performance scientific computing work together to exploit data parallelism in scientific applications while hiding the details of managing parallelism and data locality from the user. We present innovative algorithms, based on the macro-dataflow model, for detecting data parallelism and efficiently executing data-parallel statements on shared-memory multiprocessors. We also describe how these algorithms can be implemented on clusters of SMPs.

  1. A model for optimizing file access patterns using spatio-temporal parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Boonthanome, Nouanesengsy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States); Ahrens, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bauer, Andy [Kitware Inc., Clifton Park, NY (United States); Chaudhary, Aashish [Kitware Inc., Clifton Park, NY (United States); Miller, Ross G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-01-01

    For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.
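A read-time model of the general shape described, per-request seek overhead plus streaming time for the payload, is enough to show why fewer, larger spatio-temporal reads win. The constants and request counts below are illustrative, not the paper's measured filesystem parameters:

```python
def read_time(n_contiguous_requests, bytes_total, seek_ms=10.0, bw_mb_s=500.0):
    """Toy read-time estimate: fixed overhead per contiguous request plus
    streaming time for the total payload. Constants are hypothetical."""
    return (n_contiguous_requests * seek_ms / 1000.0
            + bytes_total / (bw_mb_s * 1e6))

# Spatio-temporal parallelism trades many small spatial reads for fewer,
# larger requests that also span time steps; the payload is unchanged.
naive = read_time(n_contiguous_requests=10000, bytes_total=2e9)
spatio_temporal = read_time(n_contiguous_requests=50, bytes_total=2e9)
speedup = naive / spatio_temporal
```

Under these made-up constants the overhead term dominates the naive pattern, so the speedup comes almost entirely from cutting the number of contiguous requests, which mirrors the paper's argument that the access pattern, not the byte count, governs read time.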

  2. DAPHNE: a parallel multiprocessor data acquisition system for nuclear physics. [Data Acquisition by Parallel Histogramming and NEtworking

    Energy Technology Data Exchange (ETDEWEB)

    Welch, L.C.

    1984-01-01

    This paper describes a project to meet these data acquisition needs for a new accelerator, ATLAS, being built at Argonne National Laboratory. ATLAS is a heavy-ion linear superconducting accelerator providing beam energies up to 25 MeV/A with a relative spread in beam energy as good as .0001 and a time spread of less than 100 psec. Details about the hardware front end, command language, data structure, and the flow of event treatment are covered.

  3. Spatio-temporal light shaping for parallel nano-biophotonics

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Palima, Darwin

    followed separate tracks. Width-shaping, or spatial techniques, have mostly ignored light's thickness (using continuous-wave lasers), while thickness-shaping, or temporal techniques, typically ignored the beam width. This disconnected spatial and temporal track also shows in our own research where we... Another step is to vary light's pulsewidth (thickness) as it propagates to get maximum compression (and highest energy density) at a chosen target plane. This temporal focusing can selectively look at a defined cross-section within a sample with only minimal disturbance from other regions. It can also do... plane-by-plane micromachining for faster laser processing compared to scanning a focused laser spot. Our previous work on spatial light shaping, together with the interplay between spatial and temporal modulation, invariably provides a strong position to pursue application-oriented spatiotemporal

  4. VIBE with parallel acquisition technique - a novel approach to dynamic contrast-enhanced MR imaging of the liver

    International Nuclear Information System (INIS)

    Dobritz, M.; Radkow, T.; Bautz, W.; Fellner, F.A.; Nittka, M.

    2002-01-01

    Purpose: The VIBE (volume interpolated breath-hold examination) sequence combined with a parallel acquisition technique (iPAT: integrated parallel acquisition technique) allows dynamic contrast-enhanced MRI of the liver with high temporal and spatial resolution. The aim of this study was to obtain first clinical experience with this technique for the detection and characterization of focal liver lesions. Materials and Methods: We examined 10 consecutive patients using a 1.5 T MR system (gradient field strength 30 mT/m) with a phased-array coil combination. The following sequences were acquired: T2-weighted TSE and T1-weighted FLASH; after administration of gadolinium, 6 VIBE sequences with iPAT (TR/TE/matrix/partition thickness/acquisition time: 6.2 ms/3.2 ms/256 x 192/4 mm/13 s), as well as T1-weighted FLASH with fat saturation. Two observers evaluated the different sequences with respect to the number of lesions and their nature (benign vs. malignant). The following lesions were found: hepatocellular carcinoma (5 patients), hemangioma (2), metastasis (1), cyst (1), adenoma (1). Results: The VIBE sequences were superior for the detection of lesions with arterial hyperperfusion, with a total of 33 focal lesions; 21 lesions were found with T2-weighted TSE and 20 with plain T1-weighted FLASH. Diagnostic accuracy increased with the VIBE sequence in comparison to the other sequences. Conclusion: VIBE with iPAT allows MR imaging of the liver with high spatial and temporal resolution, providing dynamic contrast-enhanced information about the whole liver. This may lead to improved detection of liver lesions, especially hepatocellular carcinoma. (orig.)

  5. Balanced steady-state free precession with parallel imaging gives distortion-free fMRI with high temporal resolution.

    Science.gov (United States)

    Chappell, Michael; Håberg, Asta K; Kristoffersen, Anders

    2011-01-01

    Research on the functions of the human brain requires that functional magnetic resonance imaging (fMRI) moves towards producing images with less distortion and higher temporal and spatial resolution. This study compares passband balanced steady-state free precession (bSSFP) acquisitions with and without parallel imaging (PI) to investigate whether combining PI with this pulse sequence is a viable option for functional MRI. Such a novel combination has the potential to offer the distortion-free advantages of bSSFP with the reduced acquisition time of PI. Scans were done on a Philips 3T Intera, using the installed bSSFP pulse sequence, both with and without the sensitivity encoding (SENSE) PI option. The task was a visual flashing checkerboard, and the viewing window covered the visual cortex. Sensitivity comparisons with and without PI were done using the same manually drawn region of interest for each subject's time course, and comparing the z-score summary statistics: number of voxels with z>2.3, the mean of those voxels, their 90th percentile and their maximum value. We show that PI greatly improves the temporal resolution in bSSFP, reducing the volume acquisition time by more than half in this study, to 0.67 s with 3-mm isotropic voxels. At the same time, a statistically significant increase was found for the maximum z-score using bSSFP with PI as compared to without it (P=.02). This improvement can be understood in terms of physiological noise, as demonstrated by noise measurements. This produces observed increases in the overall temporal signal to noise of the functional time series, giving greater sensitivity to functional activations with PI. This study demonstrates for the first time the possibility of combining PI with bSSFP to achieve distortion-free functional images without loss of sensitivity and with high temporal resolution. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Characterization of Harmonic Signal Acquisition with Parallel Dipole and Multipole Detectors

    Science.gov (United States)

    Park, Sung-Gun; Anderson, Gordon A.; Bruce, James E.

    2018-04-01

    Fourier transform ion cyclotron resonance mass spectrometry (FTICR-MS) is a powerful instrument for the study of complex biological samples due to its high resolution and mass measurement accuracy. However, the relatively long signal acquisition periods needed to achieve high resolution can serve to limit applications of FTICR-MS. The use of multiple pairs of detector electrodes enables detection of harmonic frequencies present at integer multiples of the fundamental cyclotron frequency, and the obtained resolving power for a given acquisition period increases linearly with the order of harmonic signal. However, harmonic signal detection also increases spectral complexity and presents challenges for interpretation. In the present work, ICR cells with independent dipole and harmonic detection electrodes and preamplifiers are demonstrated. A benefit of this approach is the ability to independently acquire fundamental and multiple harmonic signals in parallel using the same ions under identical conditions, enabling direct comparison of achieved performance as parameters are varied. Spectra from harmonic signals showed generally higher resolving power than spectra acquired with fundamental signals and equal signal duration. In addition, the maximum observed signal to noise (S/N) ratio from harmonic signals exceeded that of fundamental signals by 50 to 100%. Finally, parallel detection of fundamental and harmonic signals enables deconvolution of overlapping harmonic signals since observed fundamental frequencies can be used to unambiguously calculate all possible harmonic frequencies. Thus, the present application of parallel fundamental and harmonic signal acquisition offers a general approach to improve utilization of harmonic signals to yield high-resolution spectra with decreased acquisition time.
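    The deconvolution step described above rests on simple arithmetic: harmonics appear at integer multiples of each fundamental cyclotron frequency, so fundamentals observed on the dipole detector predict where each ion's harmonics must fall. A minimal sketch (function names and the matching tolerance are illustrative, not from the paper):

```python
def predict_harmonics(fundamentals_hz, max_order=3):
    """Map each fundamental frequency to its integer-multiple harmonics."""
    return {f: [n * f for n in range(2, max_order + 1)] for f in fundamentals_hz}

def assign_peak(peak_hz, fundamentals_hz, max_order=3, tol_hz=0.5):
    """Return (fundamental, harmonic_order) pairs consistent with a peak
    observed in the harmonic spectrum."""
    matches = []
    for f, harmonics in predict_harmonics(fundamentals_hz, max_order).items():
        for order, h in enumerate(harmonics, start=2):
            if abs(h - peak_hz) <= tol_hz:
                matches.append((f, order))
    return matches
```

A peak that matches more than one (fundamental, order) pair is exactly the overlap case the parallel dipole measurement resolves, since only fundamentals actually observed are admitted as candidates.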

  7. Temporal Dynamics of Recovery from Extinction Shortly after Extinction Acquisition

    Science.gov (United States)

    Archbold, Georgina E.; Dobbek, Nick; Nader, Karim

    2013-01-01

    Evidence suggests that extinction is new learning. Memory acquisition involves both short-term memory (STM) and long-term memory (LTM) components; however, few studies have examined early phases of extinction retention. Retention of auditory fear extinction was examined at various time points. Shortly (1-4 h) after extinction acquisition…

  8. High spatial and temporal resolution retrospective cine cardiovascular magnetic resonance from shortened free breathing real-time acquisitions.

    Science.gov (United States)

    Xue, Hui; Kellman, Peter; Larocca, Gina; Arai, Andrew E; Hansen, Michael S

    2013-11-14

    Cine cardiovascular magnetic resonance (CMR) is challenging in patients who cannot perform repeated breath holds. Real-time, free-breathing acquisition is an alternative, but image quality is typically inferior. There is a clinical need for techniques that achieve similar image quality to the segmented cine using a free breathing acquisition. Previously, high quality retrospectively gated cine images have been reconstructed from real-time acquisitions using parallel imaging and motion correction. These methods had limited clinical applicability due to lengthy acquisitions, and volumetric measurements obtained with such methods have not previously been evaluated systematically. This study introduces a new retrospective reconstruction scheme for real-time cine imaging which aims to shorten the required acquisition. A real-time acquisition of 16-20 s per acquired slice was input to a retrospective cine reconstruction algorithm, which employed non-rigid registration to remove respiratory motion and SPIRiT non-linear reconstruction with temporal regularization to fill in missing data. The algorithm was used to reconstruct cine loops with high spatial (1.3-1.8 × 1.8-2.1 mm²) and temporal resolution (retrospectively gated, 30 cardiac phases, temporal resolution 34.3 ± 9.1 ms). Validation was performed in 15 healthy volunteers using two different acquisition resolutions (256 × 144/192 × 128 matrix sizes). For each subject, 9 to 12 short axis and 3 long axis slices were imaged with both segmented and real-time acquisitions. The retrospectively reconstructed real-time cine images were compared to a traditional segmented breath-held acquisition in terms of image quality scores. Image quality scoring was performed by two experts using a scale between 1 and 5 (poor to good). For every subject, LAX and three SAX slices were selected and reviewed in random order. The reviewers were blinded to the reconstruction approach and acquisition protocols and

  9. Modeling, realization and evaluation of a parallel architecture for the data acquisition in multidetectors

    International Nuclear Information System (INIS)

    Guirande, Ph.; Aleonard, M-M.; Dien, Q-T.; Pedroza, J-L.

    1997-01-01

    The efficiency increase of 4π multidetectors (EUROGAM, EUROBALL, DIAMANT) is achieved by an increase in granularity, and hence in the event counting rate seen by the acquisition system. Consequently, the architecture of the readout systems, the coding and the software must evolve. To meet the required performance we have implemented a parallel architecture to check the quality of the events. The first application of this architecture was an improved data acquisition system for the DIAMANT multidetector. The DIAMANT data acquisition system is based on a set of VME cards which must manage event readout, storage on magnetic media and histogram construction. It consists of processors distributed over a network, a workstation to control the experiment and a display system for spectra and matrices. In such an architecture the VME bus quickly becomes a performance bottleneck, not only for data transfer but also for the coordination of the different processors. The parallel architecture used relieves the VME bus. It is based on three C40 DSPs (Digital Signal Processors) mounted on a commercial (LSI) VME card, equipped with an external bus used to read raw data through an interface card (ROCVI) from the 32-bit ECL bus serving the real-time VME-based encoders. Tests revealed deadlock during data exchanges between processors using two communication lines; analysis of this problem indicated that tasks must be reassigned dynamically to avoid this blocking. Intrinsic evaluation (i.e., without transfer over the VME bus) was carried out for two parallel topologies (processor farm and tree). Simulation software was used to generate event packets. The measured rates are essentially equivalent (6 MB/s) regardless of topology. The farm topology was chosen because it is simpler to implement. Accounting for load reduced the rate in 'simplex' communication mode to 5.3 MB/s and
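    The processor-farm topology chosen here can be sketched in miniature: identical workers pull raw event packets from a shared pool, apply a quality check, and a collector merges the results into a histogram. The quality cut and binning below are made-up stand-ins for DIAMANT's actual checks, and threads stand in for the DSPs:

```python
from multiprocessing.dummy import Pool   # thread pool as a stand-in for the DSP farm
from collections import Counter

def process_event(packet):
    """Quality-check one raw event packet; return a histogram bin or None.
    The saturation cut and the energy binning are illustrative only."""
    if max(packet) > 255:        # saturated channel: reject the event
        return None
    return sum(packet) // 64     # crude energy bin

def run_farm(packets, n_workers=3):
    """Farm topology: identical workers consume packets from a shared pool;
    the collector merges accepted events into a single histogram."""
    with Pool(n_workers) as pool:
        bins = pool.map(process_event, packets)
    return Counter(b for b in bins if b is not None)
```

The farm's appeal, as the abstract notes, is exactly this simplicity: every worker runs the same code and no inter-worker coordination is needed, unlike a tree topology.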

  10. Parallel imaging: is GRAPPA a useful acquisition tool for MR imaging intended for volumetric brain analysis?

    Directory of Open Access Journals (Sweden)

    Frank Anders

    2009-08-01

    Background: The work presented here investigates parallel imaging applied to T1-weighted high resolution imaging for use in longitudinal volumetric clinical studies involving Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI) patients, in an effort to shorten acquisition times and so minimise the risk of motion artefacts caused by patient discomfort and disorientation. The principal question is: "Can parallel imaging be used to acquire images at 1.5 T of sufficient quality to allow volumetric analysis of patient brains?" Methods: Optimisation studies were performed on a young healthy volunteer, and the selected protocol (including the use of two different parallel imaging acceleration factors) was then tested on a cohort of 15 elderly volunteers including MCI and AD patients. In addition to automatic brain segmentation, hippocampus volumes were manually outlined and measured in all patients. The 15 patients were scanned on a second occasion approximately one week later using the same protocol and evaluated in the same manner, to test repeatability of measurement using images acquired with the GRAPPA parallel imaging technique applied to the MPRAGE sequence. Results: Intraclass correlation tests show almost perfect agreement between repeated measurements of both segmented brain parenchyma fraction and regional measurement of hippocampi. The protocol is suitable for both global and regional volumetric measurement in dementia patients. Conclusion: These results indicate that parallel imaging can be used without detrimental effect on brain tissue segmentation and volumetric measurement, and should be considered for both clinical and research studies where longitudinal measurements of brain tissue volumes are of interest.

  11. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    International Nuclear Information System (INIS)

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

    We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors, and transfers data via Ethernet. This technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remotely controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG sensor based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control for data acquisition rate optimization. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased by several times without affecting the system operating performance on both local data acquisition and remote process control
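    The self-adaptive sampling idea, continuously matching the local acquisition rate to what the remote link and buffer can absorb, can be illustrated with a toy controller that adjusts the rate from buffer occupancy. The thresholds, step factor and rate limits below are illustrative assumptions, not values from the paper:

```python
def adapt_rate(rate_hz, buffer_fill, lo=0.25, hi=0.75,
               step=1.25, min_hz=10.0, max_hz=1000.0):
    """Return the next sampling rate given buffer occupancy in [0, 1].

    If the transmit buffer is filling, the remote side is lagging, so back
    off; if it is nearly empty, there is headroom to sample faster.
    """
    if buffer_fill > hi:          # remote transfer lagging: slow down
        rate_hz /= step
    elif buffer_fill < lo:        # headroom available: speed up
        rate_hz *= step
    return max(min_hz, min(max_hz, rate_hz))
```

Called once per acquisition cycle, such a controller keeps the system near the highest sustainable rate without letting the buffer overflow.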

  12. Big Data GPU-Driven Parallel Processing Spatial and Spatio-Temporal Clustering Algorithms

    Science.gov (United States)

    Konstantaras, Antonios; Skounakis, Emmanouil; Kilty, James-Alexander; Frantzeskakis, Theofanis; Maravelakis, Emmanuel

    2016-04-01

    Advances in graphics processing unit technology towards massively parallel architectures [1], comprising thousands of cores and multiples of parallel threads, provide the hardware foundation for the rapid processing of various parallel applications regarding seismic big data analysis. Seismic data are normally stored as collections of vectors in massive matrices, growing rapidly in size as wider areas are covered, denser recording networks are established and decades of data are compiled together [2]. Yet, many processes regarding seismic data analysis are performed on each seismic event independently or as distinct tiles [3] of specific grouped seismic events within a much larger data set. Such processes, independent of one another, can be performed in parallel, cutting processing times drastically [1,3]. This research work presents the development and implementation of three parallel processing algorithms using Cuda C [4] for the investigation of potentially distinct seismic regions [5,6] present in the vicinity of the southern Hellenic seismic arc. The algorithms, programmed and executed in parallel comparatively, are: fuzzy k-means clustering with expert knowledge [7] in assigning the overall number of clusters; density-based clustering [8]; and a self-developed spatio-temporal clustering algorithm encompassing expert [9] and empirical knowledge [10] for the specific area under investigation. Indexing terms: GPU parallel programming, Cuda C, heterogeneous processing, distinct seismic regions, parallel clustering algorithms, spatio-temporal clustering References [1] Kirk, D. and Hwu, W.: 'Programming massively parallel processors - A hands-on approach', 2nd Edition, Morgan Kaufman Publisher, 2013 [2] Konstantaras, A., Valianatos, F., Varley, M.R. and Makris, J.P.: 'Soft-Computing Modelling of Seismicity in the Southern Hellenic Arc', Geoscience and Remote Sensing Letters, vol. 5 (3), pp. 323-327, 2008 [3] Papadakis, S. and
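    As a miniature, serial illustration of the spatio-temporal clustering idea, the sketch below groups events (x, y, t) by density, with separate spatial and temporal radii standing in for the expert and empirical knowledge the abstract mentions; in the actual work each distance evaluation would run as a GPU thread. The parameters and the simplified density rule are illustrative assumptions, not the paper's algorithm:

```python
import math

def st_neighbors(events, i, eps_xy, eps_t):
    """Indices of events within both the spatial and temporal radii of event i."""
    xi, yi, ti = events[i]
    return [j for j, (x, y, t) in enumerate(events)
            if j != i and math.hypot(x - xi, y - yi) <= eps_xy
            and abs(t - ti) <= eps_t]

def st_cluster(events, eps_xy, eps_t, min_pts=2):
    """Density-based labelling: -1 marks noise, 0..k-1 mark clusters."""
    labels = [-1] * len(events)
    cluster = 0
    for i in range(len(events)):
        if labels[i] != -1:
            continue
        seed = st_neighbors(events, i, eps_xy, eps_t)
        if len(seed) < min_pts:
            continue                      # not dense enough to start a cluster
        labels[i] = cluster
        stack = seed[:]
        while stack:                      # expand the cluster from core points
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                more = st_neighbors(events, j, eps_xy, eps_t)
                if len(more) >= min_pts:
                    stack.extend(more)
        cluster += 1
    return labels
```

The all-pairs neighbour search is the embarrassingly parallel part: each (i, j) distance test is independent, which is what makes the GPU mapping attractive.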

  13. Evaluation of Parallel and Fan-Beam Data Acquisition Geometries and Strategies for Myocardial SPECT Imaging

    Science.gov (United States)

    Qi, Yujin; Tsui, B. M. W.; Gilland, K. L.; Frey, E. C.; Gullberg, G. T.

    2004-06-01

    This study evaluates myocardial SPECT images obtained from parallel-hole (PH) and fan-beam (FB) collimator geometries using both circular-orbit (CO) and noncircular-orbit (NCO) acquisitions. A newly developed 4-D NURBS-based cardiac-torso (NCAT) phantom was used to simulate the 99mTc-sestamibi uptake in a human torso with myocardial defects in the left ventricular (LV) wall. Two phantoms were generated to simulate patients with thick and thin body builds. Projection data including the effects of attenuation, collimator-detector response and scatter were generated using SIMSET Monte Carlo simulations. A large number of photon histories were generated such that the projection data were close to noise free. Poisson noise fluctuations were then added to simulate the count densities found in clinical data. Noise-free and noisy projection data were reconstructed using the iterative OS-EM reconstruction algorithm with attenuation compensation. The reconstructed images from noisy projection data show that the noise levels are lower for the FB as compared to the PH collimator due to the increase in detected counts. The NCO acquisition method provides slightly better resolution and a small improvement in defect contrast as compared to the CO acquisition method in noise-free reconstructed images. Despite lower projection counts, the NCO shows the same noise level as the CO in the attenuation corrected reconstruction images. The results from the channelized Hotelling observer (CHO) study show that the FB collimator is superior to the PH collimator in myocardial defect detection, but the NCO shows no statistically significant difference from the CO for either the PH or FB collimator. In conclusion, our results indicate that data acquisition using NCO makes a very small improvement in resolution over CO for myocardial SPECT imaging. This small improvement does not make a significant difference in myocardial defect detection. However, an FB collimator provides better defect detection than a

  14. The design and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    International Nuclear Information System (INIS)

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1987-05-01

    The ever increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of ''events,'' is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator, and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, CPU power equivalent to several VAXs is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 11/750 on which an extensive user-friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed

  15. The design, creation, and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    International Nuclear Information System (INIS)

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1986-01-01

    The ever increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of ''events,'' is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator, and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, multi-VAX CPU power is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 750 on which an extensive user-friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed. 5 refs., 2 figs

  16. Rapid musculoskeletal magnetic resonance imaging using integrated parallel acquisition techniques (IPAT) - Initial experiences

    International Nuclear Information System (INIS)

    Romaneehsen, B.; Oberholzer, K.; Kreitner, K.-F.; Mueller, L.P.

    2003-01-01

    Purpose: To investigate the feasibility of using multiple receiver coil elements for time-saving integrated parallel imaging techniques (iPAT) in traumatic musculoskeletal disorders. Material and methods: 6 patients with traumatic derangements of the knee, ankle and hip underwent MR imaging at 1.5 T. For signal detection of the knee and ankle, we used a 6-channel body array coil that was placed around the joints; for hip imaging, two 4-channel body array coils and two elements of the spine array coil were combined for signal detection. All patients were investigated with a standard imaging protocol that mainly consisted of different turbo spin-echo sequences (PD- and T2-weighted TSE with and without fat suppression, STIR). All sequences were repeated with an integrated parallel acquisition technique (iPAT) using a modified sensitivity encoding (mSENSE) technique with an acceleration factor of 2. Overall image quality was subjectively assessed using a five-point scale, as was the ability to detect pathologic findings. Results: Regarding overall image quality, there were no significant differences between standard imaging and imaging using mSENSE. All pathologies (occult fracture, meniscal tear, torn and interpositioned Hoffa's cleft, cartilage damage) were detected by both techniques. iPAT led to a 48% reduction of acquisition time compared with the standard technique. Additionally, time savings with iPAT led to a decrease of pain-induced motion artifacts in two cases. Conclusion: In times of increasing cost pressure, iPAT using multiple coil elements seems to be an efficient and economic tool for fast musculoskeletal imaging with diagnostic performance comparable to conventional techniques. (orig.)

  17. Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex

    Science.gov (United States)

    Lafer-Sousa, Rosa; Conway, Bevil R.

    2014-01-01

    Visual-object processing culminates in inferior temporal (IT) cortex. To assess the organization of IT, we measured fMRI responses in alert monkeys to achromatic images (faces, fruit, bodies, places) and colored gratings. IT contained multiple color-biased regions, which were typically ventral to face patches and, remarkably, yoked to them, spaced regularly at four locations predicted by known anatomy. Color and face selectivity increased for more anterior regions, indicative of a broad hierarchical arrangement. Responses to non-face shapes were found across IT, but were stronger outside color-biased regions and face patches, consistent with multiple parallel streams. IT also contained multiple coarse eccentricity maps: face patches overlapped central representations; color-biased regions spanned mid-peripheral representations; and place-biased regions overlapped peripheral representations. These results suggest that IT comprises parallel, multi-stage processing networks subject to one organizing principle. PMID:24141314

  18. The Temporal Dynamics of Visual Search: Evidence for Parallel Processing in Feature and Conjunction Searches

    Science.gov (United States)

    McElree, Brian; Carrasco, Marisa

    2012-01-01

    Feature and conjunction searches have been argued to delineate parallel and serial operations in visual processing. The authors evaluated this claim by examining the temporal dynamics of the detection of features and conjunctions. The 1st experiment used a reaction time (RT) task to replicate standard mean RT patterns and to examine the shapes of the RT distributions. The 2nd experiment used the response-signal speed–accuracy trade-off (SAT) procedure to measure discrimination (asymptotic detection accuracy) and detection speed (processing dynamics). Set size affected discrimination in both feature and conjunction searches but affected detection speed only in the latter. Fits of models to the SAT data that included a serial component overpredicted the magnitude of the observed dynamics differences. The authors concluded that both features and conjunctions are detected in parallel. Implications for the role of attention in visual processing are discussed. PMID:10641310

  19. The Medial Temporal Lobe – Conduit of Parallel Connectivity: A model for Attention, Memory, and Perception.

    Directory of Open Access Journals (Sweden)

    Brian B. Mozaffari

    2014-11-01

    Based on the notion that the brain is equipped with a hierarchical organization, which embodies environmental contingencies across many time scales, this paper suggests that the medial temporal lobe (MTL), located deep in the hierarchy, serves as a bridge connecting supra- to infra-MTL levels. Bridging the upper and lower regions of the hierarchy provides a parallel architecture that optimizes information flow between upper and lower regions to aid attention, encoding, and processing of quick, complex visual phenomena. Bypassing intermediate hierarchy levels, information conveyed through the MTL 'bridge' allows upper levels to make educated predictions about the prevailing context and accordingly select lower representations to increase the efficiency of predictive coding throughout the hierarchy. This selection or activation/deactivation is associated with endogenous attention. In the event that these 'bridge' predictions are inaccurate, this architecture enables the rapid encoding of novel contingencies. A review of hierarchical models in relation to memory is provided, along with a new theory, Medial-temporal-lobe Conduit for Parallel Connectivity (MCPC). In this scheme, consolidation is considered a secondary process, occurring after an MTL-bridged connection, which eventually allows upper and lower levels to access each other directly. With repeated reactivations, as contingencies become consolidated, less MTL activity is predicted. Finally, MTL bridging may aid the processing of transient but structured perceptual events, by allowing communication between upper and lower levels without calling on intermediate levels of representation.

  20. Rapid musculoskeletal magnetic resonance imaging using integrated parallel acquisition techniques (IPAT) - Initial experiences

    Energy Technology Data Exchange (ETDEWEB)

    Romaneehsen, B.; Oberholzer, K.; Kreitner, K.-F. [Johannes Gutenberg-Univ. Mainz (Germany). Klinik und Poliklinik fuer Radiologie; Mueller, L.P. [Johannes Gutenberg-Univ. Mainz (Germany). Klinik und Poliklinik fuer Unfallchirurgie

    2003-09-01

    Purpose: To investigate the feasibility of using multiple receiver coil elements for time-saving integrated parallel imaging techniques (iPAT) in traumatic musculoskeletal disorders. Material and methods: 6 patients with traumatic derangements of the knee, ankle and hip underwent MR imaging at 1.5 T. For signal detection of the knee and ankle, we used a 6-channel body array coil that was placed around the joints; for hip imaging, two 4-channel body array coils and two elements of the spine array coil were combined for signal detection. All patients were investigated with a standard imaging protocol that mainly consisted of different turbo spin-echo sequences (PD- and T2-weighted TSE with and without fat suppression, STIR). All sequences were repeated with an integrated parallel acquisition technique (iPAT) using a modified sensitivity encoding (mSENSE) technique with an acceleration factor of 2. Overall image quality was subjectively assessed using a five-point scale, as was the ability to detect pathologic findings. Results: Regarding overall image quality, there were no significant differences between standard imaging and imaging using mSENSE. All pathologies (occult fracture, meniscal tear, torn and interpositioned Hoffa's cleft, cartilage damage) were detected by both techniques. iPAT led to a 48% reduction of acquisition time compared with the standard technique. Additionally, time savings with iPAT led to a decrease of pain-induced motion artifacts in two cases. Conclusion: In times of increasing cost pressure, iPAT using multiple coil elements seems to be an efficient and economic tool for fast musculoskeletal imaging with diagnostic performance comparable to conventional techniques. (orig.) [German, translated] Purpose: Use of integrated parallel acquisition techniques (iPAT) to shorten examination time in musculoskeletal injuries. Material and methods: 6 patients with knee, ankle or hip trauma were imaged at 1.5 T

  1. Environmental Enrichment Expedites Acquisition and Improves Flexibility on a Temporal Sequencing Task in Mice

    Directory of Open Access Journals (Sweden)

    Darius Rountree-Harrison

    2018-03-01

    Environmental enrichment (EE), via increased opportunities for voluntary exercise, sensory stimulation and social interaction, can enhance the function of, and behaviours regulated by, cognitive circuits. Little is known, however, as to how this intervention affects performance on complex tasks that engage multiple, definable learning and memory systems. Accordingly, we utilised the Olfactory Temporal Order Discrimination (OTOD) task, which requires animals to recall and report sequence information about a series of recently encountered olfactory stimuli. This approach allowed us to compare animals raised in either enriched or standard laboratory housing conditions on a number of measures, including the acquisition of a complex discrimination task, temporal sequence recall accuracy (i.e., the ability to accurately recall a sequence of events) and acuity (i.e., the ability to resolve past events that occurred in close temporal proximity), as well as cognitive flexibility tested in the style of a rule reversal and an Intra-Dimensional Shift (IDS). We found that enrichment accelerated the acquisition of the temporal order discrimination task, although neither accuracy nor acuity was affected at asymptotic performance levels. Further, while a subtle enhancement of overall performance was detected for both rule reversal and IDS versions of the task, accelerated performance recovery could only be attributed to the shift-like contingency change. These findings suggest that EE can affect specific elements of complex, multi-faceted cognitive processes.

  2. Temporal and spatial adaptations during the acquisition of a reversal movement.

    Science.gov (United States)

    van Loon, E M; Buekers, M J; Helsen, W; Magill, R A

    1998-03-01

    Adjustments of the biphasic movement in a coincidence anticipation task were studied using an erroneous knowledge of results (KR) paradigm. Forty participants received either no KR, correct KR, erroneous (+100 ms) KR, or 100 trials of correct KR followed by 50 trials of erroneous KR. Kinematic analyses revealed that for this 100-50 KR group the extension part of the movement was temporally adjusted under the influence of erroneous KR. Although accompanied by a decrease in movement amplitude, this did not account for the temporal shift in movement outcome, because all groups showed a reduction in amplitude. It is argued that changing external time constraints mainly results in temporal adaptations. However, spatial adaptations do play a role in kinematic changes during acquisition.

  3. Temporal Precedence Checking for Switched Models and its Application to a Parallel Landing Protocol

    Science.gov (United States)

    Duggirala, Parasara Sridhar; Wang, Le; Mitra, Sayan; Viswanathan, Mahesh; Munoz, Cesar A.

    2014-01-01

    This paper presents an algorithm for checking temporal precedence properties of nonlinear switched systems. This class of properties subsumes bounded safety and captures requirements about visiting a sequence of predicates within given time intervals. The algorithm handles nonlinear predicates that arise from dynamics-based predictions used in alerting protocols for state-of-the-art transportation systems. It is sound and complete for nonlinear switched systems that robustly satisfy the given property. The algorithm is implemented in the Compare Execute Check Engine (C2E2) using validated simulations. As a case study, a simplified model of an alerting system for closely spaced parallel runways is considered. The proposed approach is applied to this model to check safety properties of the alerting logic for different operating conditions such as initial velocities, bank angles, aircraft longitudinal separation, and runway separation.
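The core check described above can be illustrated with a small sketch: given a sampled trajectory, verify that a sequence of state predicates is satisfied in order, each within its prescribed time window. The function, the toy trajectory, and the altitude predicates below are illustrative assumptions, not C2E2 code or the paper's alerting logic.

```python
def check_precedence(trace, predicates, windows):
    """trace: list of (t, state); predicates: list of callables on state;
    windows: list of (t_min, t_max) intervals, one per predicate."""
    idx = 0  # index of the next predicate that must be satisfied
    for t, state in trace:
        if idx == len(predicates):
            break
        t_min, t_max = windows[idx]
        if t_min <= t <= t_max and predicates[idx](state):
            idx += 1  # predicate met inside its window; move to the next one
        elif t > t_max:
            return False  # window expired before the predicate was met
    return idx == len(predicates)

# Toy example: altitude must drop below 100 within [0, 5], then below 50 within [5, 10].
trace = [(i * 0.5, {"alt": 120 - 10 * i}) for i in range(21)]
ok = check_precedence(
    trace,
    [lambda s: s["alt"] < 100, lambda s: s["alt"] < 50],
    [(0, 5), (5, 10)],
)
```

A sound-and-complete tool would run such a check over validated over-approximations of many simulated executions rather than a single sampled trace.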

  4. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; Kumar, Jitendra [ORNL; Mills, Richard T. [Argonne National Laboratory; Hoffman, Forrest M. [ORNL; Sripathi, Vamsi [Intel Corporation; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne, etc.), observational facilities (meteorological, eddy covariance, etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and, specifically, for large-scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
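The data-parallel pattern underlying a distributed k-means such as MSTC can be sketched in a few lines: each node holds a shard of the observations, computes local label assignments and partial centroid sums, and a global reduction merges the partials into new centroids. This plain-NumPy sketch only simulates the pattern on one machine; the real implementation uses MPI ranks and GPU kernels, and the data below are synthetic.

```python
import numpy as np

def local_step(shard, centroids):
    # Assign each local point to its nearest centroid, then accumulate
    # per-cluster partial sums and counts (this is the per-node work).
    d = np.linalg.norm(shard[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    k = centroids.shape[0]
    sums = np.zeros_like(centroids)
    counts = np.zeros(k)
    for j in range(k):
        mask = labels == j
        sums[j] = shard[mask].sum(axis=0)
        counts[j] = mask.sum()
    return sums, counts

def kmeans_iteration(shards, centroids):
    # Stand-in for an MPI all-reduce: sum the partial results of every shard.
    partials = [local_step(s, centroids) for s in shards]
    total_sums = sum(p[0] for p in partials)
    total_counts = sum(p[1] for p in partials)
    nonempty = total_counts > 0
    new_centroids = centroids.copy()
    new_centroids[nonempty] = total_sums[nonempty] / total_counts[nonempty, None]
    return new_centroids

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
shards = np.array_split(data, 4)  # pretend each shard lives on a different node
c = np.array([[1.0, 1.0], [4.0, 4.0]])
for _ in range(5):
    c = kmeans_iteration(shards, c)
```

Because only the (k x dims) partial sums and counts cross the network per iteration, communication cost is independent of the number of observations, which is what makes the approach scale.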

  5. The acquisition of face and person identity information following anterior temporal lobectomy.

    Science.gov (United States)

    Moran, Maria; Seidenberg, Michael; Sabsevitz, Dave; Swanson, Sara; Hermann, Bruce

    2005-05-01

    Thirty unilateral anterior temporal lobectomy (ATL) subjects (15 right and 15 left) and 15 controls were presented a multitrial learning task in which unfamiliar faces were paired with biographical information (occupation, city location, and a person's name). Face recognition hits were similar between groups, but the right ATL group committed more false-positive errors to face foils. Both left and right ATL groups were impaired relative to controls in acquiring biographical information, but the deficit was more pronounced for the left ATL group. Recall levels also varied for the different types of biographical information; occupation was most commonly recalled followed by city name and person name. In addition, city and person name recall was more likely when occupation was also recalled. Overall, recall of biographical information was positively correlated with clinical measures of anterograde episodic memory. Findings are discussed in terms of the role of the temporal lobe and associative learning ability in the successful acquisition of new face semantic (biographical) representations.

  6. A comparison of temporal, spatial and parallel phase shifting algorithms for digital image plane holography

    International Nuclear Information System (INIS)

    Arroyo, M P; Lobera, J

    2008-01-01

    This paper investigates the performance of several phase shifting (PS) techniques when using digital image plane holography (DIPH) as a fluid velocimetry technique. The main focus is on increasing the recording system aperture in order to overcome the limited light available in fluid applications. Experiments with small rotations of a fluid-like solid object have been used to test the ability of PS-DIPH to faithfully reconstruct the object complex amplitude. Holograms for several apertures and for different defocusing distances have been recorded using spatial phase shifting (SPS) or temporal phase shifting (TPS) techniques. The parallel phase-shifted holograms (H{sub PPS}) have been generated from the TPS holograms (H{sub TPS}). The data obtained from TPS-DIPH have been taken as the true object complex amplitude, which is used to benchmark that recovered using the other techniques. The findings of this work show that SPS and PPS are very similar indeed, and suggest that both can work for bigger apertures yet retain phase information.
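The temporal phase shifting referred to above can be illustrated with the standard four-step algorithm: four intensity frames recorded with pi/2 phase increments determine the object phase at every pixel. This is a generic sketch of the principle on a synthetic fringe pattern, not the authors' DIPH pipeline.

```python
import numpy as np

def recover_phase(frames):
    # Four-step formula: with shifts 0, pi/2, pi, 3pi/2,
    # I3 - I1 = 2A sin(phi) and I0 - I2 = 2A cos(phi).
    i0, i1, i2, i3 = frames
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic object: a linear phase ramp with unit background and modulation.
x = np.linspace(0, 1, 64)
phi = 2.0 * x  # true phase in radians
frames = [1.0 + np.cos(phi + n * np.pi / 2) for n in range(4)]
phi_hat = recover_phase(frames)
```

Spatial and parallel phase shifting recover the same quantity but encode the four shifts across neighbouring pixels or detector channels instead of across time, which is why their reconstructions can be benchmarked against the TPS result.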

  7. Temporal progression of 'Candidatus Liberibacter asiaticus' infection in citrus and acquisition efficiency by Diaphorina citri.

    Science.gov (United States)

    Coletta-Filho, Helvecio D; Daugherty, Matthew P; Ferreira, Cléderson; Lopes, João R S

    2014-04-01

    Over the last decade, the plant disease huanglongbing (HLB) has emerged as a primary threat to citrus production worldwide. HLB is associated with infection by phloem-limited bacteria ('Candidatus Liberibacter' spp.) that are transmitted by the Asian citrus psyllid, Diaphorina citri. Transmission efficiency varies with vector-related aspects (e.g., developmental stage and feeding periods), but there is no information on the effects of host-pathogen interactions. Here, acquisition efficiency of 'Candidatus Liberibacter asiaticus' by D. citri was evaluated in relation to temporal progression of infection and pathogen titer in citrus. We graft-inoculated sweet orange trees with 'Ca. L. asiaticus'; then, at different times after inoculation, we inspected plants for HLB symptoms, measured bacterial infection levels (i.e., titer or concentration) in plants, and measured acquisition by psyllid adults that were confined on the trees. Plant infection levels increased rapidly over time, saturating at uniformly high levels (≈10^8 copies of 16S ribosomal DNA/g of plant tissue) near 200 days after inoculation, the same time at which all infected trees first showed disease symptoms. Pathogen acquisition by vectors was positively associated with plant infection level and time since inoculation, with acquisition occurring as early as the first measurement, at 60 days after inoculation. These results suggest that there is ample potential for psyllids to acquire the pathogen from trees during the asymptomatic phase of infection. If so, this could limit the effectiveness of tree roguing as a disease management tool and would likely explain the rapid spread observed for this disease in the field.

  8. Fast magnetic resonance imaging of the knee using a parallel acquisition technique (mSENSE): a prospective performance evaluation

    International Nuclear Information System (INIS)

    Kreitner, K.F.; Romaneehsen, Bernd; Oberholzer, Katja; Dueber, Christoph; Krummenauer, Frank; Mueller, L.P.

    2006-01-01

    The performance of a magnetic resonance (MR) imaging strategy that uses multiple receiver coil elements and integrated parallel imaging techniques (iPAT) in traumatic and degenerative disorders of the knee was evaluated and compared with a standard MR imaging protocol. Ninety patients with suspected internal derangements of the knee joint prospectively underwent MR imaging at 1.5 T. For signal detection, a 6-channel array coil was used. All patients were investigated with a standard imaging protocol consisting of different turbo spin-echo sequences (proton-density- and T{sub 2}-weighted TSE with and without fat suppression) in three imaging planes. All sequences were repeated with an integrated parallel acquisition technique (iPAT) using the modified sensitivity encoding (mSENSE) algorithm with an acceleration factor of 2. Two radiologists independently evaluated and scored all images with regard to overall image quality, artefacts and pathologic findings. Agreement of the parallel ratings between readers and imaging techniques, respectively, was evaluated by means of pairwise kappa coefficients that were stratified for the area of evaluation. Agreement between the parallel readers for both the iPAT imaging and the conventional technique, as well as between imaging techniques, was encouraging, with inter-observer kappa values ranging between 0.78 and 0.98 for both imaging techniques, and inter-method kappa values ranging between 0.88 and 1.00 for both clinical readers. All pathological findings (e.g. occult fractures, meniscal and cruciate ligament tears, torn and interpositioned Hoffa's cleft, cartilage damage) were detected by both techniques with comparable performance. The use of iPAT led to a 48% reduction in acquisition time compared with the standard technique. Parallel imaging using mSENSE proved to be an efficient and economic tool for fast musculoskeletal MR imaging of the knee joint with comparable diagnostic performance.
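The twofold acceleration described above rests on the SENSE idea: acquiring every other phase-encode line folds the field of view in half, so each aliased pixel is the sensitivity-weighted sum of two true pixels half a field of view apart, and a tiny linear system per pixel pair unfolds them. The sketch below shows this image-domain principle in 1D with invented coil sensitivities; mSENSE itself works with autocalibrated sensitivities, and nothing here is the vendor implementation.

```python
import numpy as np

def unfold_r2(aliased, sens):
    """aliased: (n_coils, n/2) folded profiles; sens: (n_coils, n) sensitivities."""
    n_coils, half = aliased.shape
    n = 2 * half
    out = np.zeros(n)
    for p in range(half):
        # Sensitivity matrix for the two superimposed locations p and p + n/2.
        S = np.stack([sens[:, p], sens[:, p + half]], axis=1)  # (n_coils, 2)
        out[[p, p + half]] = np.linalg.lstsq(S, aliased[:, p], rcond=None)[0]
    return out

# Toy 1D object and two coils with smoothly varying sensitivities.
n = 8
obj = np.arange(1.0, n + 1.0)
sens = np.stack([np.linspace(1.0, 0.2, n), np.linspace(0.2, 1.0, n)])
# Folding: each aliased sample is the sum of the two sensitivity-weighted halves.
aliased = np.stack([(s * obj)[: n // 2] + (s * obj)[n // 2 :] for s in sens])
recon = unfold_r2(aliased, sens)
```

The unfolding is exact whenever the per-pixel sensitivity matrix is well conditioned, which is why coil geometry limits the usable acceleration factor.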

  9. Reducing contrast contamination in radial turbo-spin-echo acquisitions by combining a narrow-band KWIC filter with parallel imaging.

    Science.gov (United States)

    Neumann, Daniel; Breuer, Felix A; Völker, Michael; Brandt, Tobias; Griswold, Mark A; Jakob, Peter M; Blaimer, Martin

    2014-12-01

    Cartesian turbo spin-echo (TSE) and radial TSE images are usually reconstructed by assembling data containing different contrast information into a single k-space. This approach results in mixed contrast contributions in the images, which may reduce their diagnostic value. The goal of this work is to improve the image contrast from radial TSE acquisitions by reducing the contribution of signals with undesired contrast information. Radial TSE acquisitions allow the reconstruction of multiple images with different T2 contrasts using the k-space weighted image contrast (KWIC) filter. In this work, the image contrast is improved by reducing the bandwidth of the KWIC filter. Data for the reconstruction of a single image are selected from within a small temporal range around the desired echo time. The resulting dataset is undersampled and, therefore, an iterative parallel imaging algorithm is applied to remove aliasing artifacts. Radial TSE images of the human brain reconstructed with the proposed method show improved contrast when compared with Cartesian TSE images or radial TSE images with conventional KWIC reconstructions. The proposed method provides multi-contrast images from radial TSE data with contrasts similar to multi spin-echo images. Contaminations from unwanted contrast weightings are strongly reduced. © 2014 Wiley Periodicals, Inc.
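The narrow-band selection described above reduces to a simple rule: each radial spoke carries the contrast of the echo it was acquired at, and only spokes whose echo time falls inside a small window around the target TE are kept; the rest of k-space is left for parallel imaging to fill. The echo train length, echo spacing and spoke ordering below are illustrative assumptions, not the paper's protocol.

```python
echo_train_length = 16
echo_spacing_ms = 10.0
n_spokes = 256  # spokes cycle through echo positions across shots

spoke_echo = [i % echo_train_length for i in range(n_spokes)]
spoke_te = [(e + 1) * echo_spacing_ms for e in spoke_echo]

def select_spokes(target_te, band_ms):
    """Indices of spokes acquired within +/- band_ms of the desired echo time."""
    return [i for i, te in enumerate(spoke_te) if abs(te - target_te) <= band_ms]

kept = select_spokes(target_te=80.0, band_ms=15.0)
undersampling = len(kept) / n_spokes  # the gap an iterative PI algorithm must fill
```

Narrowing `band_ms` sharpens the contrast of the reconstructed image but shrinks `undersampling`, which is exactly the trade-off that motivates combining the filter with parallel imaging.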

  10. Spatio-Temporal Patterns of the International Merger and Acquisition Network.

    Science.gov (United States)

    Dueñas, Marco; Mastrandrea, Rossana; Barigozzi, Matteo; Fagiolo, Giorgio

    2017-09-07

    This paper analyses the world web of mergers and acquisitions (M&As) using a complex network approach. We use data on M&As to build a temporal sequence of binary and weighted-directed networks for the period 1995-2010 and 224 countries (nodes) connected according to their M&A flows (links). We study different geographical and temporal aspects of the international M&A network (IMAN), building sequences of filtered sub-networks whose links belong to specific intervals of distance or time. Given that M&As and trade are complementary ways of reaching foreign markets, we perform our analysis using statistics employed for the study of the international trade network (ITN), highlighting the similarities and differences between the ITN and the IMAN. In contrast to the ITN, the IMAN is a low density network characterized by a persistent giant component with many external nodes and low reciprocity. Clustering patterns are very heterogeneous and dynamic. High-income economies are the main acquirers and are characterized by high connectivity, implying that most countries are targets of a few acquirers. As in the ITN, geographical distance strongly impacts the structure of the IMAN: link-weights and node degrees have a non-linear relation with distance, and an assortative pattern is present at short distances.

  11. Real-time data acquisition and parallel data processing solution for TJ-II Bolometer arrays diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, E. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain)]. E-mail: eduardo.barrera@upm.es; Ruiz, M. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Lopez, S. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Machon, D. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain); Ochando, M. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain)

    2006-07-15

    Maps of local plasma emissivity of TJ-II plasmas are determined using three-array cameras of silicon photodiodes (AXUV type from IRD). They are assigned to the top and side ports of the same sector of the vacuum vessel. Each array consists of 20 unfiltered detectors. The signals from each of these detectors are the inputs to an iterative algorithm of tomographic reconstruction. Currently, these signals are acquired by a standard PXI system at approximately 50 kS/s, with 12 bits of resolution, and are stored for off-line processing. A 0.5 s discharge generates 3 Mbytes of raw data. The algorithm's load exceeds the CPU capacity of the PXI system's controller in a continuous mode, making it unfeasible to process the samples in parallel with their acquisition in a standard PXI system. A new architecture model has been developed, making it possible to add one or several processing cards to a standard PXI system. With this model, it is possible to define how to distribute, in real-time, the data from all acquired signals in the system among the processing cards and the PXI controller. This way, by distributing the processing among the system controller and two processing cards, the data can be processed in parallel with the acquisition. Hence, this system configuration would be able to measure even in long-pulse devices.
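The distribution step described above can be sketched as a static assignment of the 60 bolometer channels (3 arrays x 20 detectors) across the PXI controller and two processing cards, so each processor reconstructs from its own subset in parallel with acquisition. The processor names and the round-robin policy are illustrative assumptions, not the actual TJ-II implementation.

```python
processors = ["pxi_controller", "card_1", "card_2"]
channels = [f"array{a}_det{d}" for a in range(3) for d in range(20)]

def assign_round_robin(channels, processors):
    # Spread channels evenly so no single CPU carries the whole
    # tomographic-reconstruction load during the discharge.
    plan = {p: [] for p in processors}
    for i, ch in enumerate(channels):
        plan[processors[i % len(processors)]].append(ch)
    return plan

plan = assign_round_robin(channels, processors)
```

With an even split, each processor handles a third of the ~6 Mbytes/s aggregate sample stream, which is the property that keeps processing abreast of acquisition in long pulses.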

  12. Parallel image-acquisition in continuous-wave electron paramagnetic resonance imaging with a surface coil array: Proof-of-concept experiments

    Science.gov (United States)

    Enomoto, Ayano; Hirata, Hiroshi

    2014-02-01

    This article describes a feasibility study of parallel image-acquisition using a two-channel surface coil array in continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Parallel EPR imaging was performed by multiplexing of EPR detection in the frequency domain. The parallel acquisition system consists of two surface coil resonators and radiofrequency (RF) bridges for EPR detection. To demonstrate the feasibility of this method of parallel image-acquisition with a surface coil array, three-dimensional EPR imaging was carried out using a tube phantom. Technical issues in the multiplexing method of EPR detection were also clarified. We found that degradation in the signal-to-noise ratio due to the interference of RF carriers is a key problem to be solved.
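Frequency-domain multiplexing, the principle named above, can be illustrated with a toy sketch: each channel rides on its own carrier, the composite signal is digitized once, and a Fourier-domain pick-off separates the channels. The sample rate and carrier frequencies below are invented surrogates, not the EPR system's actual RF parameters.

```python
import numpy as np

fs = 10_000.0              # sample rate (Hz) of the shared digitizer
t = np.arange(0, 1.0, 1 / fs)
f1, f2 = 1_000.0, 2_500.0  # distinct carrier surrogates, one per coil channel

# Composite detector output: channel 1 (amplitude 2.0) plus channel 2 (0.5).
sig = 2.0 * np.cos(2 * np.pi * f1 * t) + 0.5 * np.cos(2 * np.pi * f2 * t)

# Single-sided amplitude spectrum; each channel appears at its own carrier bin.
spec = np.abs(np.fft.rfft(sig)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)

amp1 = spec[np.argmin(np.abs(freqs - f1))]
amp2 = spec[np.argmin(np.abs(freqs - f2))]
```

The degradation noted in the abstract corresponds to the case where the carriers are too close or leak into each other's bins, so the pick-off no longer isolates one channel cleanly.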

  13. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Directory of Open Access Journals (Sweden)

    Yaser Afshar

    Full Text Available Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.
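The decomposition step described above can be sketched in miniature: the image is split into tiles, each padded with a halo of ghost pixels so a local operator can run on every tile independently, and the tile cores are then stitched back together. The real method distributes the tiles over MPI ranks and iterates Discrete Region Competition with boundary exchange; the single-machine sketch below, with a pixel-wise threshold standing in for segmentation, only illustrates the tiling.

```python
import numpy as np

def split_with_halo(img, tiles, halo):
    # Split along the first axis; each tile keeps `halo` extra boundary rows.
    chunks = []
    h = img.shape[0]
    step = h // tiles
    for i in range(tiles):
        lo, hi = i * step, (i + 1) * step
        lo_h, hi_h = max(0, lo - halo), min(h, hi + halo)
        chunks.append((lo - lo_h, img[lo_h:hi_h]))  # (core offset, padded tile)
    return chunks, step

def process_and_stitch(chunks, step, op):
    # Apply the local operator per tile, then keep only each tile's core rows.
    cores = [op(tile)[off : off + step] for off, tile in chunks]
    return np.concatenate(cores)

img = np.random.default_rng(1).random((64, 64))
threshold = lambda a: (a > 0.5).astype(np.uint8)  # stand-in local "segmentation"
chunks, step = split_with_halo(img, tiles=4, halo=2)
seg = process_and_stitch(chunks, step, threshold)
```

For a genuinely non-local algorithm the halos must be re-exchanged between iterations, which is exactly the network communication the abstract refers to.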

  14. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  15. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144

  16. VIBE with parallel acquisition technique - a novel approach to dynamic contrast-enhanced MR imaging of the liver; VIBE mit paralleler Akquisitionstechnik - eine neue Moeglichkeit der dynamischen kontrastverstaerkten MRT der Leber

    Energy Technology Data Exchange (ETDEWEB)

    Dobritz, M.; Radkow, T.; Bautz, W.; Fellner, F.A. [Inst. fuer Diagnostische Radiologie, Friedrich-Alexander-Univ. Erlangen-Nuernberg (Germany); Nittka, M. [Siemens Medical Solutions, Erlangen (Germany)

    2002-06-01

    Purpose: The VIBE (volume interpolated breath-hold examination) sequence in combination with a parallel acquisition technique (iPAT: integrated parallel acquisition technique) allows dynamic contrast-enhanced MRI of the liver with high temporal and spatial resolution. The aim of this study was to obtain first clinical experience with this technique for the detection and characterization of focal liver lesions. Materials and Methods: We examined 10 consecutive patients using a 1.5 T MR system (gradient field strength 30 mT/m) with a phased-array coil combination. The following sequences were acquired: T{sub 2}-w TSE and T{sub 1}-w FLASH; after administration of gadolinium, 6 VIBE sequences with iPAT (TR/TE/matrix/partition thickness/time of acquisition: 6.2 ms/3.2 ms/256 x 192/4 mm/13 s), as well as T{sub 1}-weighted FLASH with fat saturation. Two observers evaluated the different sequences with regard to the number of lesions and their benign or malignant nature. The following lesions were found: hepatocellular carcinoma (5 patients), hemangioma (2), metastasis (1), cyst (1), adenoma (1). Results: The VIBE sequences were superior for the detection of lesions with arterial hyperperfusion, depicting a total of 33 focal lesions. 21 lesions were found with T{sub 2}-w TSE and 20 with plain T{sub 1}-weighted FLASH. Diagnostic accuracy increased with the VIBE sequence in comparison to the other sequences. Conclusion: VIBE with iPAT allows MR imaging of the liver with high spatial and temporal resolution, providing dynamic contrast-enhanced information about the whole liver. This may lead to improved detection of liver lesions, especially hepatocellular carcinoma. (orig.)

  17. Serum IGF-1 affects skeletal acquisition in a temporal and compartment-specific manner.

    Directory of Open Access Journals (Sweden)

    Hayden-William Courtland

    2011-03-01

    Full Text Available Insulin-like growth factor-1 (IGF-1) plays a critical role in the development of the growing skeleton by establishing both longitudinal and transverse bone accrual. IGF-1 has also been implicated in the maintenance of bone mass during late adulthood and aging, as decreases in serum IGF-1 levels appear to correlate with decreases in bone mineral density (BMD). Although informative, mouse models to date have been unable to separate the temporal effects of IGF-1 depletion on skeletal development. To address this problem, we performed a skeletal characterization of the inducible LID mouse (iLID), in which serum IGF-1 levels are depleted at selected ages. We found that depletion of serum IGF-1 in male iLID mice prior to adulthood (4 weeks) decreased trabecular bone architecture and significantly reduced transverse cortical bone properties (Ct.Ar, Ct.Th) by 16 weeks (adulthood). Likewise, depletion of serum IGF-1 in iLID males at 8 weeks of age resulted in significantly reduced transverse cortical bone properties (Ct.Ar, Ct.Th) by 32 weeks (late adulthood), but had no effect on trabecular bone architecture. In contrast, depletion of serum IGF-1 after peak bone acquisition (at 16 weeks) resulted in enhancement of trabecular bone architecture, but no significant changes in cortical bone properties by 32 weeks as compared to controls. These results indicate that while serum IGF-1 is essential for bone accrual during the postnatal growth phase, depletion of IGF-1 after peak bone acquisition (16 weeks) is compartment-specific and does not have a detrimental effect on cortical bone mass in the older adult mouse.

  18. Improvement of the repeatability of parallel transmission at 7T using interleaved acquisition in the calibration scan.

    Science.gov (United States)

    Kameda, Hiroyuki; Kudo, Kohsuke; Matsuda, Tsuyoshi; Harada, Taisuke; Iwadate, Yuji; Uwano, Ikuko; Yamashita, Fumio; Yoshioka, Kunihiro; Sasaki, Makoto; Shirato, Hiroki

    2017-12-04

    Respiration-induced phase shift affects B0/B1+ mapping repeatability in parallel transmission (pTx) calibration for 7T brain MRI, but is improved by breath-holding (BH). However, BH cannot be applied during long scans. To examine whether interleaved acquisition during calibration scanning could improve pTx repeatability and image homogeneity. Prospective. Nine healthy subjects. 7T MRI with a two-channel RF transmission system was used. Calibration scanning for B0/B1+ mapping was performed under sequential acquisition/free-breathing (Seq-FB), Seq-BH, and interleaved acquisition/FB (Int-FB) conditions. The B0 map was calculated with two echo times, and the B1+ map was obtained using the Bloch-Siegert method. Actual flip-angle imaging (AFI) and gradient echo (GRE) imaging were performed using pTx and quadrature-Tx (qTx). All scans were acquired in five sessions. Repeatability was evaluated using the intersession standard deviation (SD) or coefficient of variance (CV), and in-plane homogeneity was evaluated using the in-plane CV. A paired t-test with Bonferroni correction for multiple comparisons was used. The intersession CVs/SDs for the B0/B1+ maps were significantly smaller in Int-FB than in Seq-FB (Bonferroni-corrected P < 0.05). Intersession variability of the AFI and GRE images was also significantly smaller in Int-FB, Seq-BH, and qTx than in Seq-FB (Bonferroni-corrected P < 0.05), and the in-plane CVs in Seq-FB, Int-FB, and Seq-BH were significantly smaller than in qTx (Bonferroni-corrected P < 0.01 for all). Using interleaved acquisition during calibration scans of pTx for 7T brain MRI improved the repeatability of B0/B1+ mapping, AFI, and GRE images, without BH. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.
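The repeatability metric used above reduces to a per-voxel computation over the five repeated sessions: the intersession coefficient of variance (CV) is the across-session standard deviation divided by the across-session mean. The sketch below applies it to a synthetic stack of flip-angle maps; the map size, nominal value and noise level are invented for illustration.

```python
import numpy as np

def intersession_cv(maps):
    """maps: (n_sessions, ...) array of repeated B1+ (or flip-angle) maps."""
    mean = maps.mean(axis=0)
    sd = maps.std(axis=0, ddof=1)  # sample SD across sessions, per voxel
    return sd / mean

rng = np.random.default_rng(0)
truth = np.full((8, 8), 60.0)                     # nominal flip angle (degrees)
sessions = truth + rng.normal(0, 1.5, (5, 8, 8))  # 5 sessions with small jitter
cv = intersession_cv(sessions)
mean_cv_percent = 100 * cv.mean()
```

A lower mean CV across sessions is what "improved repeatability" means operationally: the interleaved calibration yields maps that fluctuate less from one run to the next.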

  19. Role of drug transporters and drug accumulation in the temporal acquisition of drug resistance

    International Nuclear Information System (INIS)

    Hembruff, Stacey L; Laberge, Monique L; Villeneuve, David J; Guo, Baoqing; Veitch, Zachary; Cecchetto, Melanie; Parissenti, Amadeo M

    2008-01-01

    Anthracyclines and taxanes are commonly used in the treatment of breast cancer. However, tumor resistance to these drugs often develops, possibly due to overexpression of drug transporters. It remains unclear whether drug resistance in vitro occurs at clinically relevant doses of chemotherapy drugs and whether both the onset and magnitude of drug resistance can be temporally and causally correlated with the enhanced expression and activity of specific drug transporters. To address these issues, MCF-7 cells were selected for survival in increasing concentrations of doxorubicin (MCF-7DOX-2), epirubicin (MCF-7EPI), paclitaxel (MCF-7TAX-2), or docetaxel (MCF-7TXT). During selection, cells were assessed for drug sensitivity, drug uptake, and the expression of various drug transporters. In all cases, resistance was only achieved when selection reached a specific threshold dose, which was well within the clinical range. A reduction in drug uptake was temporally correlated with the acquisition of drug resistance for all cell lines, but further increases in drug resistance at doses above threshold were unrelated to changes in cellular drug uptake. Elevated expression of one or more drug transporters was seen at or above the threshold dose, but the identity, number, and temporal pattern of drug transporter induction varied with the drug used as selection agent. The pan-drug-transporter inhibitor cyclosporin A was able to partially or completely restore drug accumulation in the drug-resistant cell lines, but had only partial to no effect on drug sensitivity. The inability of cyclosporin A to restore drug sensitivity suggests the presence of additional mechanisms of drug resistance. This study indicates that drug resistance is achieved in breast tumour cells only upon exposure to concentrations of drug at or above a specific selection dose. While changes in drug accumulation and the expression of drug transporters do occur at the threshold dose, the magnitude of

  20. Assessment of temporal resolution of multi-detector row computed tomography in helical acquisition mode using the impulse method.

    Science.gov (United States)

    Ichikawa, Katsuhiro; Hara, Takanori; Urikura, Atsushi; Takata, Tadanori; Ohashi, Kazuya

    2015-06-01

    The purpose of this study was to propose a method for assessing the temporal resolution (TR) of multi-detector row computed tomography (MDCT) in the helical acquisition mode using temporal impulse signals generated by a metal ball passing through the acquisition plane. An 11-mm diameter metal ball was shot along the central axis at approximately 5 m/s during a helical acquisition, and the temporal sensitivity profile (TSP) was measured from the streak image intensities in the reconstructed helical CT images. To assess the validity, we compared the measured and theoretical TSPs for the 4-channel modes of two MDCT systems. A 64-channel MDCT system was used to compare TSPs and image quality of a motion phantom for the pitch factors P of 0.6, 0.8, 1.0 and 1.2 with a rotation time R of 0.5 s, and for two R/P combinations of 0.5/1.2 and 0.33/0.8. Moreover, the temporal transfer functions (TFs) were calculated from the obtained TSPs. The measured and theoretical TSPs showed perfect agreement. The TSP narrowed with an increase in the pitch factor. The image sharpness of the 0.33/0.8 combination was inferior to that of the 0.5/1.2 combination, despite their almost identical full width at tenth maximum values. The temporal TFs quantitatively confirmed these differences. The TSP results demonstrated that the TR in the helical acquisition mode significantly depended on the pitch factor as well as the rotation time, and the pitch factor and reconstruction algorithm affected the TSP shape. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
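The final analysis step named above has a simple form: once the temporal sensitivity profile (TSP) is measured, the temporal transfer function (TF) is its Fourier-transform magnitude, normalized to unity at zero frequency, so TSPs of similar width but different shape can be compared quantitatively. The Gaussian TSP below is a synthetic stand-in for a measured profile, and the sampling interval is an assumption.

```python
import numpy as np

def temporal_tf(tsp, dt):
    # TF = |FFT(TSP)|, normalized so TF(0) = 1; shape differences between
    # TSPs show up as different high-frequency roll-off.
    tf = np.abs(np.fft.rfft(tsp))
    freqs = np.fft.rfftfreq(len(tsp), dt)
    return freqs, tf / tf[0]

dt = 0.01                             # 10 ms sampling of the TSP
t = np.arange(-1.0, 1.0, dt)
tsp = np.exp(-0.5 * (t / 0.15) ** 2)  # synthetic bell-shaped profile
freqs, tf = temporal_tf(tsp, dt)
```

Two TSPs with nearly identical full width at tenth maximum can still yield different TFs, which is how the study explains the sharpness gap between the 0.33/0.8 and 0.5/1.2 combinations.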

  1. Parallel search engine optimisation and pay-per-click campaigns: A comparison of cost per acquisition

    Directory of Open Access Journals (Sweden)

    Wouter T. Kritzinger

    2017-07-01

    Background: It is imperative that commercial websites should rank highly in search engine result pages because these provide the main entry point to paying customers. There are two main methods to achieve high rankings: search engine optimisation (SEO) and pay-per-click (PPC) systems. Both require a financial investment – SEO mainly at the beginning, and PPC spread over time in regular amounts. If marketing budgets are applied in the wrong area, this could lead to losses and possibly financial ruin. Objectives: The objective of this research was to investigate, using three real-world case studies, the actual expenditure on and income from both SEO and PPC systems. These figures were then compared, and specifically, the cost per acquisition (CPA) was used to decide which system yielded the best results. Methodology: Three diverse websites were chosen, and analytics data for all three were compared over a 3-month period. Calculations were performed to reduce the figures to single ratios, to make comparisons between them possible. Results: Some of the resultant ratios varied widely between websites. However, the CPA was shown to be on average 52.1 times lower for SEO than for PPC systems. Conclusion: It was concluded that SEO should be the marketing system of preference for e-commerce-based websites. However, there are cases where PPC would yield better results – when instant traffic is required, and when a large initial expenditure is not possible.
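
    The cost-per-acquisition comparison at the heart of the study is simple arithmetic. A minimal sketch follows, with hypothetical spend and conversion figures (not taken from the case studies) chosen only so the PPC:SEO ratio mirrors the roughly 52-fold average difference reported:

```python
def cost_per_acquisition(spend, conversions):
    # CPA = total marketing spend / number of acquisitions (conversions)
    if conversions == 0:
        raise ValueError("CPA is undefined with zero conversions")
    return spend / conversions

# Hypothetical figures for illustration
seo_cpa = cost_per_acquisition(spend=1500.0, conversions=600)  # 2.5 per acquisition
ppc_cpa = cost_per_acquisition(spend=6500.0, conversions=50)   # 130.0 per acquisition
```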

  2. A proposed scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC detectors

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; Vanberg, R.

    1990-01-01

    A new era of high-energy physics research is beginning, requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, data rates from the detector and online processing power orders of magnitude beyond the capabilities of current high-energy physics data acquisition systems are required. This paper describes a proposed new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of gigabytes per second from the detector into an array of online processors (i.e., a processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the proposed Scalable Parallel Open Architecture data acquisition system are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported, high-level-language-programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build a prototype of the proposed data acquisition system architecture is given in the paper. The major component of the system, a self-routing parallel event builder, is described in detail.
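
    The self-routing idea can be illustrated with a toy model: each data fragment carries its event number, and every node can derive the destination locally, so no central event manager is needed. The Python sketch below is my own simplification (the real event builder is a hardware switching fabric, and the modulo routing rule is an assumption); it assembles events from fragments arriving in arbitrary order:

```python
from collections import defaultdict

def route(event_id, n_processors):
    # Self-routing rule: the destination processor is derived locally from
    # the event number carried by the fragment (here: a simple modulo map).
    return event_id % n_processors

def build_events(fragments, n_sources, n_processors):
    # fragments: iterable of (event_id, source_id, payload) from the
    # front-end links, in arbitrary arrival order.
    pending = defaultdict(dict)   # (processor, event_id) -> {source_id: payload}
    complete = []
    for event_id, source_id, payload in fragments:
        proc = route(event_id, n_processors)
        parts = pending[(proc, event_id)]
        parts[source_id] = payload
        if len(parts) == n_sources:          # all detector fragments arrived
            complete.append((proc, event_id, dict(parts)))
    return complete
```

    Because the routing decision depends only on data carried by each fragment, adding sources or processors scales the builder without introducing a serial bottleneck.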

  3. Temporal Dynamics of Late Second Language Acquisition: Evidence from Event-Related Brain Potentials

    Science.gov (United States)

    Steinhauer, Karsten; White, Erin J.; Drury, John E.

    2009-01-01

    The ways in which age of acquisition (AoA) may affect (morpho)syntax in second language acquisition (SLA) are discussed. We suggest that event-related brain potentials (ERPs) provide an appropriate online measure to test some such effects. ERP findings of the past decade are reviewed with a focus on recent and ongoing research. It is concluded…

  4. Detection and Evaluation of Spatio-Temporal Spike Patterns in Massively Parallel Spike Train Data with SPADE

    Directory of Open Access Journals (Sweden)

    Pietro Quaglio

    2017-05-01

    Repeated, precise sequences of spikes are largely considered a signature of activation of cell assemblies. These repeated sequences are commonly known under the name of spatio-temporal patterns (STPs). STPs are hypothesized to play a role in the communication of information in the computational process operated by the cerebral cortex. A variety of statistical methods for the detection of STPs have been developed and applied to electrophysiological recordings, but such methods scale poorly with the current size of available parallel spike train recordings (more than 100 neurons). In this work, we introduce a novel method capable of overcoming the computational and statistical limits of existing analysis techniques in detecting repeating STPs within massively parallel spike trains (MPST). We employ advanced data mining techniques to efficiently extract repeating sequences of spikes from the data. Then, we introduce and compare two alternative approaches to distinguish statistically significant patterns from chance sequences. The first approach uses a measure known as conceptual stability, of which we investigate a computationally cheap approximation for applications to such large data sets. The second approach is based on the evaluation of pattern statistical significance. In particular, we provide an extension to STPs of a method we recently introduced for the evaluation of statistical significance of synchronous spike patterns. The performance of the two approaches is evaluated in terms of computational load and statistical power on a variety of artificial data sets that replicate specific features of experimental data. Both methods provide an effective and robust procedure for detection of STPs in MPST data. The method based on significance evaluation shows the best overall performance, although at a higher computational cost. We name the novel procedure the spatio-temporal Spike PAttern Detection and Evaluation (SPADE) analysis.
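
    A drastically simplified sketch of the underlying idea, counting exactly repeating spatio-temporal windows in binned spike trains and estimating a chance level with surrogates, fits in a few lines of Python. This is not the SPADE algorithm (which uses frequent itemset mining, spike dithering, and rigorous pattern statistics); the function names and the circular-shift surrogate scheme here are illustrative assumptions:

```python
import numpy as np
from collections import Counter

def repeated_patterns(spikes, win):
    # Count repeated spatio-temporal windows in a binary spike matrix
    # (neurons x time bins). Only windows in which at least two neurons
    # fire are considered candidate spatio-temporal patterns (STPs).
    counts = Counter()
    n_neurons, n_bins = spikes.shape
    for t in range(n_bins - win + 1):
        w = spikes[:, t:t + win]
        if w.any(axis=1).sum() >= 2:
            counts[w.tobytes()] += 1
    return {k: c for k, c in counts.items() if c > 1}

def surrogate_max_repeats(spikes, win, n_surrogates=20, seed=0):
    # Chance level via surrogates: circularly shift each neuron's train
    # independently, destroying cross-neuron timing while keeping rates.
    rng = np.random.default_rng(seed)
    best = 1
    for _ in range(n_surrogates):
        surr = np.stack([np.roll(row, rng.integers(row.size))
                         for row in spikes])
        reps = repeated_patterns(surr, win)
        best = max(best, max(reps.values(), default=1))
    return best
```

    A planted pattern repeating well above the surrogate maximum would be flagged as significant; SPADE replaces this naive exact-window counting with scalable mining and proper multiple-testing control.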

  5. β-Adrenergic Receptors Regulate the Acquisition and Consolidation Phases of Aversive Memory Formation Through Distinct, Temporally Regulated Signaling Pathways.

    Science.gov (United States)

    Schiff, Hillary C; Johansen, Joshua P; Hou, Mian; Bush, David E A; Smith, Emily K; Klein, JoAnna E; LeDoux, Joseph E; Sears, Robert M

    2017-03-01

    Memory formation requires the temporal coordination of molecular events and cellular processes following a learned event. During Pavlovian threat (fear) conditioning (PTC), sensory and neuromodulatory inputs converge on post-synaptic neurons within the lateral nucleus of the amygdala (LA). By activating an intracellular cascade of signaling molecules, these G-protein-coupled neuromodulatory receptors are capable of recruiting a diverse profile of plasticity-related proteins. Here we report that norepinephrine, through its actions on β-adrenergic receptors (βARs), modulates aversive memory formation following PTC through two molecularly and temporally distinct signaling mechanisms. Specifically, using behavioral pharmacology and biochemistry in adult rats, we determined that βAR activity during, but not after, PTC training initiates the activation of two plasticity-related targets: AMPA receptors (AMPARs) for memory acquisition and short-term memory, and extracellular signal-regulated kinase (ERK) for consolidating the learned association into a long-term memory. These findings reveal that βAR activity during, but not following, PTC sets in motion cascading molecular events for the acquisition (AMPARs) and subsequent consolidation (ERK) of learned associations.

  6. A scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC [Superconducting Super Collider] detectors

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; VanBerg, R.

    1989-12-01

    A new era of high-energy physics research is beginning, requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, data rates from the detector and online processing power orders of magnitude beyond the capabilities of current high-energy physics data acquisition systems are required. This paper describes a new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of gigabytes per second from the detector into an array of online processors (i.e., a processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the system architecture are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported, high-level-language-programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build the self-routing parallel event builder will also be given in the paper. 3 figs., 1 tab.

  7. Temporal Uncoupling between Energy Acquisition and Allocation to Reproduction in a Herbivorous-Detritivorous Fish.

    Directory of Open Access Journals (Sweden)

    Francisco Villamarín

    Although considerable knowledge has been gathered regarding the role of fish in cycling and translocation of nutrients across ecosystem boundaries, little information is available on how the energy obtained from different ecosystems is temporally allocated in fish bodies. Although, in theory, limitations on energy budgets promote a trade-off between energy allocated to reproduction and somatic growth, this trade-off has rarely been found under natural conditions. Combining information on RNA:DNA ratios and carbon and nitrogen stable-isotope analyses, we were able to achieve novel insights into the reproductive allocation of diamond mullet (Liza alata), a catadromous, widely distributed herbivorous-detritivorous fish. Although diamond mullet were in better condition during the wet season, most reproductive allocation occurred during the dry season, when resources are limited and fish have poorer body condition. We found a strong trade-off between reproductive and somatic investment. Values of δ13C from reproductive and somatic tissues were correlated, probably because δ13C in food resources does not differ markedly between dry and wet seasons. On the other hand, data for δ15N showed that gonads are more correlated to muscle, a slow-turnover tissue, suggesting long-term synthesis of reproductive tissues. In combination, these lines of evidence suggest that L. alata is a capital breeder which shows temporal uncoupling of resource ingestion, energy storage, and later allocation to reproduction.

  8. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    Science.gov (United States)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet, and maintaining immigration control. There are still many technical subjects under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system by using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N identification experiment (i.e. matching a pair of images among N, where N refers to the number of images in the database; here 4000 face images) amounts to less than 1.5 seconds, including the pre/post processing. From trial 1:N identification experiments using FARCO, we obtained low error rates: a 2.6% False Reject Rate and a 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved for various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal image sequence of moving images. Applying this algorithm to faces in natural posture, we achieved a recognition rate twice that of our conventional system. The system has high potential for future use for a variety of purposes, such as searching for criminal suspects using street and airport video cameras, registration of babies at hospitals, or handling an immeasurable number of images in a database.
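
    Optical correlation has a convenient digital analogue in phase-only correlation, which can illustrate the 1:N matching loop. The sketch below is my own illustration of the general technique, not the FARCO implementation:

```python
import numpy as np

def poc_score(a, b):
    # Phase-only correlation: a digital analogue of the matched filtering an
    # optical correlator performs. The score is the correlation peak height,
    # which is 1.0 for identical images up to a circular shift.
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase, discard magnitude
    return np.fft.ifft2(cross).real.max()

def identify(probe, gallery):
    # 1:N identification: index of the best-matching database image.
    return int(np.argmax([poc_score(probe, g) for g in gallery]))
```

    In the optical system this correlation is computed in parallel at the speed of light for a whole bank of stored filters, which is what makes 1:N search over thousands of faces feasible in real time.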

  9. MR sialography: evaluation of an ultra-fast sequence in consideration of a parallel acquisition technique and different functional conditions in patients with salivary gland diseases

    International Nuclear Information System (INIS)

    Petridis, C.; Ries, T.; Cramer, M.C.; Graessner, J.; Petersen, K.U.; Reitmeier, F.; Jaehne, M.; Weiss, F.; Adam, G.; Habermann, C.R.

    2007-01-01

    Purpose: To evaluate an ultra-fast sequence for MR sialography requiring no post-processing and to compare the acquisition technique regarding the effect of oral stimulation with a parallel acquisition technique in patients with salivary gland diseases. Materials and Methods: 128 patients with salivary gland disease were prospectively examined using a 1.5-T superconducting system with a 30 mT/m maximum gradient capability and a maximum slew rate of 125 mT/m/sec. A single-shot turbo-spin-echo sequence (ss-TSE) with an acquisition time of 2.8 sec was used in transverse and oblique sagittal orientation. All images were obtained with and without a parallel imaging technique. The evaluation of the ductal system of the parotid and submandibular gland was performed using a visual scale of 1-5 for each side. The images were assessed by two independent experienced radiologists. An ANOVA with post-hoc comparisons and an overall two-tailed significance level of p=0.05 was used for the statistical evaluation. An intraclass correlation was computed to evaluate interobserver variability, with a correlation of >0.8 taken to indicate high agreement. Results: Depending on the diagnosed diseases and the absence of abruption of the ducts, all parts of the excretory ducts could be visualized in all patients using the developed technique, with an overall rating for all ducts of 2.70 (SD±0.89). A high correlation was achieved between the two observers, with an intraclass correlation of 0.73. Oral application of a sialogogue improved the visibility of excretory ducts significantly (p<0.001). In contrast, the use of a parallel imaging technique led to a significant decrease in image quality (p=0.011). (orig.)
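
    For readers unfamiliar with the intraclass correlation used here for interobserver variability, a minimal sketch follows. The abstract does not state which ICC variant was computed, so the one-way random-effects form ICC(1,1) below is an assumption for illustration:

```python
import numpy as np

def icc_oneway(ratings):
    # One-way random-effects intraclass correlation, ICC(1,1).
    # `ratings` is an (n_subjects, k_raters) array of scores.
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    subject_means = r.mean(axis=1)
    # Between-subjects and within-subjects mean squares from a one-way ANOVA
    msb = k * np.sum((subject_means - r.mean()) ** 2) / (n - 1)
    msw = np.sum((r - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

    A value near 1 (such as the 0.73 reported here) indicates that the raters' visual scores vary together across patients far more than they disagree with each other.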

  10. MR-sialography: optimisation and evaluation of an ultra-fast sequence in parallel acquisition technique and different functional conditions of salivary glands; MR-Sialographie: Optimierung und Bewertung ultraschneller Sequenzen mit paralleler Bildgebung und oraler Stimulation

    Energy Technology Data Exchange (ETDEWEB)

    Habermann, C.R.; Cramer, M.C.; Aldefeld, D.; Weiss, F.; Kaul, M.G.; Adam, G. [Radiologisches Zentrum, Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie, Universitaetsklinikum Hamburg-Eppendorf (Germany); Graessner, J. [Siemens Medical Systems, Hamburg (Germany); Reitmeier, F.; Jaehne, M. [Kopf- und Hautzentrum, Klinik und Poliklinik fuer Hals-, Nasen- und Ohrenheilkunde, Universitaetsklinikum Hamburg-Eppendorf (Germany); Petersen, K.U. [Zentrum fuer Psychosoziale Medizin, Klinik und Poliklinik fuer Psychiatrie und Psychotherapie, Universitaetsklinikum Hamburg-Eppendorf (Germany)

    2005-04-01

    Purpose: To optimise a fast sequence for MR-sialography and to compare parallel and non-parallel acquisition techniques. Additionally, the effect of oral stimulation on image quality was evaluated. Material and Methods: All examinations were performed using a 1.5-T superconducting system. After developing a sufficient sequence for MR-sialography, a single-shot turbo-spin-echo sequence (ss-TSE) with an acquisition time of 2.8 sec was used in transverse and oblique sagittal orientation in 27 healthy volunteers. All images were obtained with and without a parallel imaging technique. The assessment of the ductal system of the submandibular and parotid gland was performed using a 1 to 5 visual scale for each side separately. Images were evaluated by four independent experienced radiologists. For statistical evaluation, an ANOVA with post-hoc comparisons was used with an overall two-tailed significance level of P=0.05. For evaluation of interobserver variability, an intraclass correlation was computed, with a correlation of >0.8 taken to indicate high agreement. Results: All parts of the salivary excretory ducts could be visualised in all volunteers, with an overall rating for all ducts of 2.26 (SD±1.09). A high correlation was obtained between the four observers, with an intraclass correlation of 0.9475. A significant influence of slice angulation could not be demonstrated (p=0.74). In all healthy volunteers, the visibility of excretory ducts improved significantly after oral application of a sialogogue (p<0.001; η² = 0.049). The use of a parallel imaging technique did not lead to an improvement of visualisation, showing a significant loss of image quality compared to an acquisition technique without parallel imaging (p<0.001; η² = 0.013). Conclusion: The optimised ss-TSE MR-sialography seems to be a fast and sufficient technique for visualisation of the excretory ducts of the main salivary glands, with no elaborate post-processing needed.

  11. MR-sialography: optimisation and evaluation of an ultra-fast sequence in parallel acquisition technique and different functional conditions of salivary glands

    International Nuclear Information System (INIS)

    Habermann, C.R.; Cramer, M.C.; Aldefeld, D.; Weiss, F.; Kaul, M.G.; Adam, G.; Graessner, J.; Reitmeier, F.; Jaehne, M.; Petersen, K.U.

    2005-01-01

    Purpose: To optimise a fast sequence for MR-sialography and to compare parallel and non-parallel acquisition techniques. Additionally, the effect of oral stimulation on image quality was evaluated. Material and Methods: All examinations were performed using a 1.5-T superconducting system. After developing a sufficient sequence for MR-sialography, a single-shot turbo-spin-echo sequence (ss-TSE) with an acquisition time of 2.8 sec was used in transverse and oblique sagittal orientation in 27 healthy volunteers. All images were obtained with and without a parallel imaging technique. The assessment of the ductal system of the submandibular and parotid gland was performed using a 1 to 5 visual scale for each side separately. Images were evaluated by four independent experienced radiologists. For statistical evaluation, an ANOVA with post-hoc comparisons was used with an overall two-tailed significance level of P=0.05. For evaluation of interobserver variability, an intraclass correlation was computed, with a correlation of >0.8 taken to indicate high agreement. Results: All parts of the salivary excretory ducts could be visualised in all volunteers, with an overall rating for all ducts of 2.26 (SD±1.09). A high correlation was obtained between the four observers, with an intraclass correlation of 0.9475. A significant influence of slice angulation could not be demonstrated (p=0.74). In all healthy volunteers, the visibility of excretory ducts improved significantly after oral application of a sialogogue (p<0.001; η² = 0.049). The use of a parallel imaging technique did not lead to an improvement of visualisation, showing a significant loss of image quality compared to an acquisition technique without parallel imaging (p<0.001; η² = 0.013). Conclusion: The optimised ss-TSE MR-sialography seems to be a fast and sufficient technique for visualisation of the excretory ducts of the main salivary glands, with no elaborate post-processing needed. To improve results of MR…

  12. Dynamic motion analysis of fetuses with central nervous system disorders by cine magnetic resonance imaging using fast imaging employing steady-state acquisition and parallel imaging: a preliminary result.

    Science.gov (United States)

    Guo, Wan-Yuo; Ono, Shigeki; Oi, Shizuo; Shen, Shu-Huei; Wong, Tai-Tong; Chung, Hsiao-Wen; Hung, Jeng-Hsiu

    2006-08-01

    The authors present a novel cine magnetic resonance (MR) imaging technique: two-dimensional (2D) fast imaging employing steady-state acquisition (FIESTA) with parallel imaging. It achieves a temporal resolution of less than half a second as well as high-spatial-resolution cine imaging free of motion artifacts for evaluating the dynamic motion of fetuses in utero. The information obtained is used to predict postnatal outcome. Twenty-five fetuses with anomalies were studied. Ultrasonography demonstrated severe abnormalities in five of the fetuses; the other 20 fetuses constituted a control group. The cine fetal MR imaging demonstrated fetal head, neck, trunk, extremity, and finger motions as well as swallowing motions. Imaging findings were evaluated and compared between fetuses with major central nervous system (CNS) anomalies (five cases) and those with minor CNS, non-CNS, or no anomalies (20 cases). Normal motility was observed in the latter group. For fetuses in the former group, those with abnormal motility failed to survive after delivery, whereas those with normal motility survived with function preserved. The power deposition of radiofrequency, expressed as the specific absorption rate (SAR), was calculated. The SAR of FIESTA was approximately 13 times lower than that of conventional MR imaging of fetuses obtained using single-shot fast spin echo sequences. The following conclusions are drawn: 1) fetal motion is no longer a limitation for prenatal imaging after the implementation of parallel imaging with 2D FIESTA; 2) cine MR imaging illustrates fetal motion in utero with high clinical reliability; 3) for cases involving major CNS anomalies, cine MR imaging provides information on extremity motility in fetuses and serves as a prognostic indicator of postnatal outcome; and 4) the cine MR imaging used to observe fetal activity is technically 2D and conceptually three-dimensional, providing four-dimensional information for proper and timely obstetrical and/or postnatal management.

  13. Temporal locality optimizations for stencil operations for parallel object-oriented scientific frameworks on cache-based architectures

    Energy Technology Data Exchange (ETDEWEB)

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-12-01

    High-performance scientific computing relies increasingly on high-level large-scale object-oriented software frameworks to manage both algorithmic complexity and the complexities of parallelism: distributed data management, process management, inter-process communication, and load balancing. This encapsulation of data management, together with the prescribed semantics of a typical fundamental component of such object-oriented frameworks--a parallel or serial array-class library--provides an opportunity for increasingly sophisticated compile-time optimization techniques. This paper describes a technique for introducing cache blocking suitable for certain classes of numerical algorithms, demonstrates and analyzes the resulting performance gains, and indicates how this optimization transformation is being automated.
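
    The cache-blocking transformation described above amounts to loop tiling. A minimal sketch follows (in Python for readability; the frameworks discussed apply this transformation at the C++/compiler level): a naive 5-point stencil sweep and its tiled equivalent, which visits the grid in small blocks so each block's working set stays cache-resident while producing identical results.

```python
import numpy as np

def sweep_naive(u):
    # One out-of-place Jacobi sweep of a 5-point stencil.
    out = u.copy()
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            out[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return out

def sweep_blocked(u, bi=32, bj=32):
    # The same sweep with loop tiling (cache blocking): the interior is
    # visited in bi x bj tiles. Because the update is out-of-place, tiling
    # reorders memory traffic without changing the numerical result.
    out = u.copy()
    ni, nj = u.shape
    for ii in range(1, ni - 1, bi):
        for jj in range(1, nj - 1, bj):
            for i in range(ii, min(ii + bi, ni - 1)):
                for j in range(jj, min(jj + bj, nj - 1)):
                    out[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return out
```

    The point of the paper is that an array-class library with known semantics lets a compiler or preprocessor introduce exactly this tiling automatically, choosing block sizes to fit the target cache.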

  14. Improving temporal resolution in fMRI using a 3D spiral acquisition and low rank plus sparse (L+S) reconstruction.

    Science.gov (United States)

    Petrov, Andrii Y; Herbst, Michael; Andrew Stenger, V

    2017-08-15

    Rapid whole-brain dynamic Magnetic Resonance Imaging (MRI) is of particular interest in Blood Oxygen Level Dependent (BOLD) functional MRI (fMRI). Faster acquisitions with higher temporal sampling of the BOLD time-course provide several advantages, including increased sensitivity in detecting functional activation, the possibility of filtering out physiological noise to improve temporal SNR, and freezing out head motion. Generally, faster acquisitions require undersampling of the data, which results in aliasing artifacts in the object domain. A recently developed low-rank (L) plus sparse (S) matrix decomposition model (L+S) is one of the methods that has been introduced to reconstruct images from undersampled dynamic MRI data. The L+S approach assumes that the dynamic MRI data, represented as a space-time matrix M, is a linear superposition of L and S components, where L represents highly spatially and temporally correlated elements, such as the image background, while S captures dynamic information that is sparse in an appropriate transform domain. This suggests that L+S might be suited for undersampled task or slow event-related fMRI acquisitions, because the periodic nature of the BOLD signal is sparse in the temporal Fourier transform domain, and slowly varying brain background signals, such as physiological noise and drift, will be predominantly low-rank. In this work, as a proof of concept, we exploit the L+S method for accelerating block-design fMRI using a 3D stack of spirals (SoS) acquisition, where undersampling is performed in the k_z-t domain. We examined the feasibility of the L+S method to accurately separate temporally correlated brain background information in the L component while capturing periodic BOLD signals in the S component. We present results acquired in control human volunteers at 3T for both retrospectively and prospectively acquired fMRI data for a visual activation block-design task. We show that a SoS fMRI acquisition with an…
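
    The L+S decomposition can be sketched as alternating singular-value thresholding on L and soft thresholding of S in the temporal Fourier domain. The fragment below is a simplified, fully sampled proof of concept (function names and regularisation parameters are my assumptions; the published method additionally models k-space undersampling):

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def soft(X, tau):
    # Magnitude soft-thresholding (works for complex arrays).
    mag = np.abs(X)
    return X * np.maximum(1 - tau / np.maximum(mag, 1e-12), 0)

def l_plus_s(M, lam_l, lam_s, n_iter=30):
    # Alternating-minimisation sketch of the L+S model for a fully sampled
    # (voxels x time) matrix M: L is low-rank (slowly varying background),
    # S is sparse in the temporal Fourier domain (periodic BOLD signal).
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, lam_l)
        S = np.fft.ifft(soft(np.fft.fft(M - L, axis=1), lam_s), axis=1).real
    return L, S
```

    On synthetic data built as a rank-1 drifting background plus a block-periodic signal in a few voxels, the background ends up in L and the periodic component in S, mirroring the separation the study aims for.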

  15. Temporal resolution measurement of 128-slice dual source and 320-row area detector computed tomography scanners in helical acquisition mode using the impulse method.

    Science.gov (United States)

    Hara, Takanori; Urikura, Atsushi; Ichikawa, Katsuhiro; Hoshino, Takashi; Nishimaru, Eiji; Niwa, Shinji

    2016-04-01

    The aim of this study was to analyse the temporal resolution (TR) of modern computed tomography (CT) scanners using the impulse method, and to assess the actual maximum TR of the respective helical acquisition modes. To assess the actual TR of the helical acquisition modes of a 128-slice dual-source CT (DSCT) scanner and a 320-row area-detector CT (ADCT) scanner, we measured the TRs of various acquisition combinations of pitch factor (P) and gantry rotation time (R). The TR of the helical acquisition modes of the 128-slice DSCT scanner continuously improved with a shorter gantry rotation time and a greater pitch factor. For the 320-row ADCT scanner, however, only with a pitch factor of >1.0 was the TR approximately one half of the gantry rotation time. The maximum TR values of the single- and dual-source helical acquisition modes of the 128-slice DSCT scanner were 0.138 (R/P=0.285/1.5) and 0.074 s (R/P=0.285/3.2), and the maximum TR values of the 64×0.5- and 160×0.5-mm detector configurations of the helical acquisition modes of the 320-row ADCT scanner were 0.120 (R/P=0.275/1.375) and 0.195 s (R/P=0.3/0.6), respectively. Because the TR of a CT scanner is not accurately depicted in the specifications of the individual scanner, appropriate acquisition conditions should be determined based on actual TR measurement. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  16. "Hello Jumbo!” The spatio-temporal rollout and traffic to a new grocery chain after acquisition

    NARCIS (Netherlands)

    van Lin, Arjen; Gijsbrechts, Els

    Grocery retailers increasingly use acquisitions to expand their presence. Such acquisitions are risky, especially when retailers decide to subsume the acquired stores under their own banner, which can take years and demands careful planning. We show how the dynamics of consumer valuations of the old…

  17. Non-contrast-enhanced hepatic MR angiography: Do two-dimensional parallel imaging and short tau inversion recovery methods shorten acquisition time without image quality deterioration?

    Energy Technology Data Exchange (ETDEWEB)

    Shimada, Kotaro, E-mail: kotaro@kuhp.kyoto-u.ac.jp; Isoda, Hiroyoshi, E-mail: sayuki@kuhp.kyoto-u.ac.jp; Okada, Tomohisa, E-mail: tomokada@kuhp.kyoto-u.ac.jp; Kamae, Toshikazu, E-mail: toshi13@kuhp.kyoto-u.ac.jp; Arizono, Shigeki, E-mail: arizono@kuhp.kyoto-u.ac.jp; Hirokawa, Yuusuke, E-mail: yuusuke@kuhp.kyoto-u.ac.jp; Shibata, Toshiya, E-mail: ksj@kuhp.kyoto-u.ac.jp; Togashi, Kaori, E-mail: ktogashi@kuhp.kyoto-u.ac.jp [all: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan)]

    2011-01-15

    Objective: To study whether shortening the acquisition time for selective hepatic artery visualization is feasible without image quality deterioration by adopting two-dimensional (2D) parallel imaging (PI) and short tau inversion recovery (STIR) methods. Materials and methods: Twenty-four healthy volunteers were enrolled. 3D true steady-state free-precession imaging with a time spatial labeling inversion pulse was conducted using 1D or 2D-PI and fat suppression by chemical shift selective (CHESS) or STIR methods. Three groups of different scan conditions were assigned and compared: group A (1D-PI factor 2 and CHESS), group B (2D-PI factor 2 x 2 and CHESS), and group C (2D-PI factor 2 x 2 and STIR). The artery-to-liver contrast was quantified, and the quality of artery visualization and overall image quality were scored. Results: The mean scan time was 9.5 ± 1.0 min (mean ± standard deviation), 5.9 ± 0.8 min, and 5.8 ± 0.5 min in groups A, B, and C, respectively, and was significantly shorter in groups B and C than in group A (P < 0.01). The artery-to-liver contrast was significantly better in group C than in groups A and B (P < 0.01). The scores for artery visualization and overall image quality were worse in group B than in groups A and C. The differences were statistically significant (P < 0.05) regarding the arterial branches of segments 4 and 8. Between group A and group C, which had similar scores, there were no statistically significant differences. Conclusion: Shortening the acquisition time for selective hepatic artery visualization was feasible without deterioration of the image quality by the combination of 2D-PI and STIR methods. It will facilitate using non-contrast-enhanced MRA in clinical practice.

  18. Non-contrast-enhanced hepatic MR angiography: Do two-dimensional parallel imaging and short tau inversion recovery methods shorten acquisition time without image quality deterioration?

    International Nuclear Information System (INIS)

    Shimada, Kotaro; Isoda, Hiroyoshi; Okada, Tomohisa; Kamae, Toshikazu; Arizono, Shigeki; Hirokawa, Yuusuke; Shibata, Toshiya; Togashi, Kaori

    2011-01-01

    Objective: To study whether shortening the acquisition time for selective hepatic artery visualization is feasible without image quality deterioration by adopting two-dimensional (2D) parallel imaging (PI) and short tau inversion recovery (STIR) methods. Materials and methods: Twenty-four healthy volunteers were enrolled. 3D true steady-state free-precession imaging with a time spatial labeling inversion pulse was conducted using 1D or 2D-PI and fat suppression by chemical shift selective (CHESS) or STIR methods. Three groups of different scan conditions were assigned and compared: group A (1D-PI factor 2 and CHESS), group B (2D-PI factor 2 x 2 and CHESS), and group C (2D-PI factor 2 x 2 and STIR). The artery-to-liver contrast was quantified, and the quality of artery visualization and overall image quality were scored. Results: The mean scan time was 9.5 ± 1.0 min (mean ± standard deviation), 5.9 ± 0.8 min, and 5.8 ± 0.5 min in groups A, B, and C, respectively, and was significantly shorter in groups B and C than in group A (P < 0.01). The artery-to-liver contrast was significantly better in group C than in groups A and B (P < 0.01). The scores for artery visualization and overall image quality were worse in group B than in groups A and C. The differences were statistically significant (P < 0.05) regarding the arterial branches of segments 4 and 8. Between group A and group C, which had similar scores, there were no statistically significant differences. Conclusion: Shortening the acquisition time for selective hepatic artery visualization was feasible without deterioration of the image quality by the combination of 2D-PI and STIR methods. It will facilitate using non-contrast-enhanced MRA in clinical practice.

  19. Selection and integration of a network of parallel processors in the real time acquisition system of the 4π DIAMANT multidetector: modeling, realization and evaluation of the software installed on this network

    International Nuclear Information System (INIS)

    Guirande, F.

    1997-01-01

    The increase in sensitivity of 4π arrays such as EUROBALL or DIAMANT has led to an increase in the data flow rate into the data acquisition system. While the data flow has been distributed over several data acquisition buses at the electronic level, the processing power of the data processing system must be increased accordingly. This work concerns the modelling and implementation of the software deployed on an architecture of parallel processors. Object-oriented analysis and formal methods were used; benchmarks and the future evolution of this architecture are presented. The thesis consists of two parts. Part A, devoted to 'Nuclear Spectroscopy with 4π multidetectors', contains a first chapter entitled 'The Physics of 4π multidetectors' and a second chapter entitled 'Integral architecture of 4π multidetectors'. Part B, devoted to 'Parallel acquisition system of DIAMANT', contains three chapters entitled 'Material architecture', 'Software architecture' and 'Validation and Performances'. Four appendices and a glossary of terms close this work. (author)

  20. Selection and integration of a network of parallel processors in the real time acquisition system of the 4π DIAMANT multidetector: modeling, realization and evaluation of the software installed on this network; Choix et integration d'un reseau de processeurs paralleles dans le systeme d'acquisition temps reel du multidetecteur 4π DIAMANT: modelisation, realisation et evaluation du logiciel implante sur ce reseau

    Energy Technology Data Exchange (ETDEWEB)

    Guirande, F. [Ecole Doctorale des Sciences Physiques et de l'Ingenieur, Bordeaux-1 Univ., 33 (France)

    1997-07-11

    The increase in sensitivity of 4π arrays such as EUROBALL or DIAMANT has led to an increase in the data flow rate into the data acquisition system. While the data flow has been distributed over several data acquisition buses at the electronic level, the processing power of the data processing system must be increased accordingly. This work concerns the modelling and implementation of the software deployed on an architecture of parallel processors. Object-oriented analysis and formal methods were used; benchmarks and the future evolution of this architecture are presented. The thesis consists of two parts. Part A, devoted to 'Nuclear Spectroscopy with 4π multidetectors', contains a first chapter entitled 'The Physics of 4π multidetectors' and a second chapter entitled 'Integral architecture of 4π multidetectors'. Part B, devoted to 'Parallel acquisition system of DIAMANT', contains three chapters entitled 'Material architecture', 'Software architecture' and 'Validation and Performances'. Four appendices and a glossary of terms close this work. (author) 58 refs.

  1. Dynamic MRI of the liver with parallel acquisition technique. Characterization of focal liver lesions and analysis of the hepatic vasculature in a single MRI session

    International Nuclear Information System (INIS)

    Heilmaier, C.; Sutter, R.; Lutz, A.M.; Willmann, J.K.; Seifert, B.

    2008-01-01

    Purpose: to retrospectively evaluate the performance of breath-hold contrast-enhanced 3D dynamic parallel gradient echo MRI (pMRT) for the characterization of focal liver lesions (standard of reference: histology) and for the analysis of hepatic vasculature (standard of reference: contrast-enhanced 64-detector row computed tomography; MSCT) in a single MRI session. Materials and methods: two blinded readers independently analyzed preoperative pMRT data sets (1.5T-MRT) of 45 patients (23 men, 22 women; 28-77 years, average age 48 years) with a total of 68 focal liver lesions with regard to image quality of hepatic arteries, portal and hepatic veins, presence of variant anatomy of the hepatic vasculature, as well as presence of portal vein thrombosis and hemodynamically significant arterial stenosis. In addition, both readers were asked to identify and characterize focal liver lesions. Imaging parameters of pMRT were: TR/TE/matrix/slice thickness/acquisition time: 3.1 ms/1.4 ms/384 x 224/4 mm/15-17 s. MSCT was performed with a pitch of 1.2, an effective slice thickness of 1 mm and a matrix of 512 x 512. Results: based on histology, the 68 liver lesions were found to be 42 hepatocellular carcinomas (HCC), 20 metastases, 3 cholangiocellular carcinomas (CCC) as well as 1 dysplastic nodule, 1 focal nodular hyperplasia (FNH) and 1 atypical hemangioma. Overall, the diagnostic accuracy was high for both readers (91-100%) in the characterization of these focal liver lesions with an excellent interobserver agreement (κ-values of 0.89 [metastases], 0.97 [HCC] and 1 [CCC]). On average, the image quality of all vessels under consideration was rated good or excellent in 89% (reader 1) and 90% (reader 2). Anatomical variants of the hepatic arteries, hepatic veins and portal vein as well as thrombosis of the portal vein were reliably detected by pMRT. Significant arterial stenosis was found with a sensitivity between 86% and 100% and an excellent interobserver agreement (κ

  2. Aquisição de uma tarefa temporal (DRL por ratos submetidos a lesão seletiva do giro denteado The acquisition of a temporal task (DRL by dentate gyrus-selective colchicine lesioned rats

    Directory of Open Access Journals (Sweden)

    José Lino Oliveira Bueno

    2006-01-01

    Previous studies have shown that dentate gyrus damage renders rats less efficient than sham-operated controls in the performance of a differential reinforcement of low rates of responding (DRL-20 s) task acquired prior to the lesion; even though the lesioned rats were able to postpone their responses after a previous bar press, they seem to underestimate time relative to sham-operated controls, which interferes with their performance. This study investigated the effects of multiple-site, intradentate, colchicine injections on the acquisition and performance of a DRL-20 s task in rats not exposed to preoperative training, i.e., trained after the lesion. Results showed that the lesioned rats improved along repetitive training in the DRL-20 s task; however, relative to the sham-operated controls, their acquisition rate was slower and the level of proficiency achieved was poorer, indicating that damage to the dentate gyrus interferes with temporal discrimination.

  3. Pre-learning stress that is temporally removed from acquisition exerts sex-specific effects on long-term memory.

    Science.gov (United States)

    Zoladz, Phillip R; Warnecke, Ashlee J; Woelke, Sarah A; Burke, Hanna M; Frigo, Rachael M; Pisansky, Julia M; Lyle, Sarah M; Talbot, Jeffery N

    2013-02-01

    We have examined the influence of sex and the perceived emotional nature of learned information on pre-learning stress-induced alterations of long-term memory. Participants submerged their dominant hand in ice cold (stress) or warm (no stress) water for 3 min. Thirty minutes later, they studied 30 words, rated the words for their levels of emotional valence and arousal and were then given an immediate free recall test. Twenty-four hours later, participants' memory for the word list was assessed via delayed free recall and recognition assessments. The resulting memory data were analyzed after categorizing the studied words (i.e., distributing them to "positive-arousing", "positive-non-arousing", "negative-arousing", etc. categories) according to participants' valence and arousal ratings of the words. The results revealed that participants exhibiting a robust cortisol response to stress exhibited significantly impaired recognition memory for neutral words. More interestingly, however, males displaying a robust cortisol response to stress demonstrated significantly impaired recall, overall, and a marginally significant impairment of overall recognition memory, while females exhibiting a blunted cortisol response to stress demonstrated a marginally significant impairment of overall recognition memory. These findings support the notion that a brief stressor that is temporally separated from learning can exert deleterious effects on long-term memory. However, they also suggest that such effects depend on the sex of the organism, the emotional salience of the learned information and the degree to which stress increases corticosteroid levels. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
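A SENSE-type unfolding of the kind described above can be sketched in a few lines of NumPy. The following 1-D toy uses hypothetical Gaussian sensitivity profiles and made-up sizes (it omits noise decorrelation and regularization used in practice): R = 2 undersampling folds each pixel onto one other pixel, and a small per-pixel least-squares system across coils separates them.

```python
import numpy as np

rng = np.random.default_rng(0)
N, coils, R = 64, 4, 2          # image size (1-D for clarity), coil count, acceleration

# Hypothetical smooth coil sensitivity profiles and a test object.
pos = np.linspace(0, 1, N)
sens = np.stack([np.exp(-((pos - c / (coils - 1)) ** 2) / 0.2) for c in range(coils)])
obj = rng.random(N)

# Keeping every R-th k-space line folds pixel i onto pixel i + N/R.
coil_imgs = sens * obj                      # fully encoded coil images
k_under = np.fft.fft(coil_imgs, axis=1)[:, ::R]
aliased = np.fft.ifft(k_under, axis=1)      # folded images of length N // R

# SENSE unfolding: per aliased pixel, solve a (coils x R) least-squares system.
recon = np.zeros(N, dtype=complex)
for i in range(N // R):
    idx = [i, i + N // R]                   # the two pixels folded together
    S = sens[:, idx]                        # coils x R sensitivity matrix
    recon[idx], *_ = np.linalg.lstsq(S, aliased[:, i], rcond=None)
```

With four coils and two folded pixels the system is overdetermined, which is exactly why the coil array supplies the "missing" spatial encoding.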

  5. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)

  6. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2001-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  7. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2002-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  8. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  9. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  10. Temporal contingency

    Science.gov (United States)

    Gallistel, C.R.; Craig, Andrew R.; Shahan, Timothy A.

    2015-01-01

    Contingency, and more particularly temporal contingency, has often figured in thinking about the nature of learning. However, it has never been formally defined in such a way as to make it a measure that can be applied to most animal learning protocols. We use elementary information theory to define contingency in such a way as to make it a measurable property of almost any conditioning protocol. We discuss how making it a measurable construct enables the exploration of the role of different contingencies in the acquisition and performance of classically and operantly conditioned behavior. PMID:23994260

  11. Temporal contingency.

    Science.gov (United States)

    Gallistel, C R; Craig, Andrew R; Shahan, Timothy A

    2014-01-01

    Contingency, and more particularly temporal contingency, has often figured in thinking about the nature of learning. However, it has never been formally defined in such a way as to make it a measure that can be applied to most animal learning protocols. We use elementary information theory to define contingency in such a way as to make it a measurable property of almost any conditioning protocol. We discuss how making it a measurable construct enables the exploration of the role of different contingencies in the acquisition and performance of classically and operantly conditioned behavior. Copyright © 2013 Elsevier B.V. All rights reserved.
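As a rough illustration of turning contingency into a measurable, information-theoretic quantity (a sketch in the spirit of the abstract, not the authors' exact formulation), one can normalize the mutual information between binarized CS and US event trains by the entropy of the US train, giving an index of 1 for perfect contingency and near 0 for unrelated trains:

```python
import numpy as np

def mutual_info_bits(x, y):
    """Mutual information (in bits) between two discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

rng = np.random.default_rng(0)

# Toy CS and US event trains over 400 time bins.
cs = np.array([1, 0, 1, 1, 0, 0, 1, 0] * 50)
us_perfect = cs.copy()              # US occurs exactly when the CS does
us_shuffled = rng.permutation(cs)   # same US rate, no temporal relation

# Normalizing by the US entropy yields a contingency index in [0, 1].
h_us = mutual_info_bits(us_perfect, us_perfect)
contingency_perfect = mutual_info_bits(cs, us_perfect) / h_us
contingency_none = mutual_info_bits(cs, us_shuffled) / h_us
```

The shuffled train carries the same reinforcement rate, so any classical rate-based account would treat the two protocols alike; the information measure separates them.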

  12. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  13. Radiative Heat Transfer in Combustion Applications: Parallel Efficiencies of Two Gas Models, Turbulent Radiation Interactions in Particulate Laden Flows, and Coarse Mesh Finite Difference Acceleration for Improved Temporal Accuracy

    Science.gov (United States)

    Cleveland, Mathew A.

    We investigate several aspects of the numerical solution of the radiative transfer equation in the context of coal combustion: the parallel efficiency of two commonly-used opacity models, the sensitivity of turbulent radiation interaction (TRI) effects to the presence of coal particulate, and an improvement of the order of temporal convergence using the coarse mesh finite difference (CMFD) method. There are four opacity models commonly employed to evaluate the radiative transfer equation in combustion applications: line-by-line (LBL), multigroup, band, and global. Most of these models have been rigorously evaluated for serial computations of a spectrum of problem types [1]. Studies of these models for parallel computations [2] are limited. We assessed the performance of the spectral-line-based weighted sum of gray gases (SLW) model, a global method related to K-distribution methods [1], and the LBL model. The LBL model directly interpolates opacity information from large data tables. The LBL model outperforms the SLW model in almost all cases, as suggested by Wang et al. [3]. The SLW model, however, shows superior parallel scaling performance and a decreased sensitivity to load imbalancing, suggesting that for some problems global methods such as the SLW model could outperform the LBL model. Turbulent radiation interaction (TRI) effects are associated with the differences in the time scales of the fluid dynamic equations and the radiative transfer equations. Solving on the fluid dynamic time step size produces large changes in the radiation field over the time step. We have modified the statistically homogeneous, non-premixed flame problem of Deshmukh et al. [4] to include coal-type particulate. The addition of low mass loadings of particulate minimally impacts the TRI effects. Observed differences in the TRI effects from variations in the packing fractions and Stokes numbers are difficult to analyze because of the significant effect of variations in problem

  14. Uma interface lab-made para aquisição de sinais analógicos instrumentais via porta paralela do microcomputador A lab-made interface for acquisition of instrumental analog signals at the parallel port of a microcomputer

    Directory of Open Access Journals (Sweden)

    Edvaldo da Nóbrega Gaião

    2004-10-01

    A lab-made interface for acquisition of instrumental analog signals between 0 and 5 V at frequencies up to 670 kHz at the parallel port of a microcomputer is described. Since it uses few and small components, it was built into the connector of a printer parallel cable. Its performance was evaluated by monitoring the signals of four different instruments, and similar analytical curves were obtained with the interface and from readings from the instruments' displays. Because the components are cheap (~U$35.00) and easy to obtain, the proposed interface is a simple and economical alternative for data acquisition in small laboratories for routine work, research and teaching.

  15. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x²-y² shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.
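The central mechanism, quadratic phase spreading signal energy across k-space, can be illustrated with a small NumPy sketch (illustrative sizes and chirp strength; the windowed-convolution low-resolution reconstruction itself is omitted):

```python
import numpy as np

N = 256
n = np.arange(N) - N // 2

# A smooth object: its k-space energy is concentrated near the center.
obj = np.exp(-(n / 40.0) ** 2)

# Impart quadratic ("chirp") phase to the image, as in phase scrambling.
alpha = 0.005                           # chirp strength (illustrative value)
scrambled = obj * np.exp(1j * alpha * n ** 2)

k_plain = np.fft.fft(obj)
k_scrambled = np.fft.fft(scrambled)

def peak_fraction(k):
    """Fraction of total k-space energy in the single largest sample."""
    e = np.abs(k) ** 2
    return e.max() / e.sum()

spread_plain = peak_fraction(k_plain)          # highly concentrated
spread_scrambled = peak_fraction(k_scrambled)  # spread out by the chirp
```

The chirp's instantaneous frequency varies linearly across the field of view, so each image region maps to a different k-space neighborhood; this is what later allows a windowed chirp convolution to pull out a low-resolution, alias-free image.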

  16. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    Science.gov (United States)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled in the outer k-space for each temporal frame. In reconstruction, the navigator data are reconstructed from undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partially separable model is used to obtain partial k-t data. Then the parallel imaging method is used to reconstruct the entire dynamic image series from highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.
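The partial separability idea can be made concrete with a toy Casorati matrix: if the space-time signal is a sum of r products of spatial and temporal functions, the voxels-by-frames matrix has rank at most r and is reproduced exactly by a rank-r truncated SVD. This is a sketch with illustrative sizes and random data, not the paper's full k-t reconstruction:

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nt, r = 128, 64, 4            # voxels, time frames, PS model order

# Partial separability: the space-time signal is a sum of r products of
# spatial functions and temporal functions, so the Casorati matrix
# C[x, t] has rank at most r.
spatial = rng.standard_normal((nx, r))
temporal = rng.standard_normal((r, nt))
C = spatial @ temporal

rank = np.linalg.matrix_rank(C)

# A rank-r truncated SVD reproduces the data exactly.
U, s, Vt = np.linalg.svd(C, full_matrices=False)
C_r = (U[:, :r] * s[:r]) @ Vt[:r]
err = np.max(np.abs(C - C_r))
```

This low-rank structure is why a few fully sampled navigator rows (the temporal basis) plus sparse outer k-space samples (the spatial coefficients) can determine the whole dynamic series.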

  17. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    Work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to current basic or applied problems of nuclear and particle physics. For applications using the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking in the low-energy range down to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field refers to simulations for nuclear medicine applications such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. In particular, these calculations are suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other work in the same field refers to simulation of electron channelling in crystals and simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment.

  18. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  19. Temporal compressive sensing systems

    Science.gov (United States)

    Reed, Bryan W.

    2017-12-12

    Methods and systems for temporal compressive sensing are disclosed, where within each of one or more sensor array data acquisition periods, one or more sensor array measurement datasets comprising distinct linear combinations of time slice data are acquired, and where mathematical reconstruction allows for calculation of accurate representations of the individual time slice datasets.
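The measurement model in the claim above can be sketched as follows. This is a hedged toy: random Gaussian weights stand in for the hardware's temporal modulation codes, and enough measurements are taken that plain least squares suffices, whereas an actual compressive design would use fewer measurements per window plus a sparsity prior:

```python
import numpy as np

rng = np.random.default_rng(2)
T, M, pix = 8, 12, 100      # time slices per window, measurements, pixels

# Ground-truth time slices (one value per pixel per slice).
slices = rng.standard_normal((T, pix))

# Each sensor readout is a distinct linear combination of the time slices;
# the M x T mixing matrix models the per-acquisition temporal weights.
A = rng.standard_normal((M, T))
meas = A @ slices           # what the sensor array actually records

# With enough independent combinations (M >= T here), least squares
# recovers each time slice from the mixed measurements.
recon, *_ = np.linalg.lstsq(A, meas, rcond=None)
err = np.max(np.abs(recon - slices))
```

The point of the patent's "mathematical reconstruction" step is exactly this inversion: the camera never records any single time slice directly, yet the slices are recoverable from their coded mixtures.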

  20. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship. ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  1. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their variants, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we use the Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.
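For a linear toy problem, the Landweber update underlying the iteration mentioned above looks as follows. This is a sketch with a made-up random forward operator; the work itself addresses the non-linear parallel-MRI problem with a Landweber-Kaczmarz variant and sparsity constraints:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 40, 60                       # fewer k-space samples than unknowns
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in forward operator
x_true = rng.standard_normal(n)
y = A @ x_true                      # noiseless measurements

# Landweber iteration: x <- x + w * A^T (y - A x), with 0 < w < 2 / ||A||^2.
w = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
res = []
for _ in range(2000):
    x = x + w * A.T @ (y - A @ x)
    res.append(np.linalg.norm(y - A @ x))
```

Each step only applies the operator and its adjoint, which is the appeal noted in the abstract: no derivatives of a non-linear forward map need to be formed explicitly at every iteration beyond these matrix-vector products.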

  2. Data acquisition

    International Nuclear Information System (INIS)

    Clout, P.N.

    1982-01-01

    Data acquisition systems are discussed for molecular biology experiments using synchrotron radiation sources. The data acquisition system requirements are considered. The components of the solution are described including hardwired solutions and computer-based solutions. Finally, the considerations for the choice of the computer-based solution are outlined. (U.K.)

  3. Parallel-In-Time For Moving Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High-performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  4. The Chateau de Cristal data acquisition system

    International Nuclear Information System (INIS)

    Villard, M.M.

    1987-05-01

    This data acquisition system is built on several dedicated data transfer buses: ADC data readout through the FERA bus, and parallel data processing in two VME crates. High data rates and selectivities are achieved via this acquisition structure and newly developed processing units. The system's modularity allows various experiments with additional detectors

  5. Non-Cartesian parallel imaging reconstruction.

    Science.gov (United States)

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. © 2014 Wiley Periodicals, Inc.
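    As a rough sketch of the CG SENSE idea, the toy below solves an undersampled multi-coil reconstruction with conjugate gradients on the normal equations. Real CG SENSE handles non-Cartesian trajectories (via gridding/NUFFT); here a Cartesian 1-D analogue with random coil sensitivities is used purely as an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
N, C, R = 16, 4, 2                      # image size, coils, acceleration

F = np.fft.fft(np.eye(N)) / np.sqrt(N)  # unitary DFT matrix
mask = np.arange(0, N, R)               # keep every R-th k-space line
S = rng.standard_normal((C, N)) + 1j * rng.standard_normal((C, N))  # coil maps

# Stacked encoding operator: per coil, sample the DFT of (sensitivity * image)
E = np.vstack([F[mask] @ np.diag(S[c]) for c in range(C)])

x_true = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = E @ x_true                          # undersampled multi-coil k-space data

# Conjugate gradients on the normal equations E^H E x = E^H y (Hermitian PD)
A, b = E.conj().T @ E, E.conj().T @ y
x = np.zeros(N, dtype=complex)
r = b - A @ x
p = r.copy()
for _ in range(2 * N):
    Ap = A @ p
    alpha = np.vdot(r, r) / np.vdot(p, Ap)
    x += alpha * p
    r_prev, r = r, r - alpha * Ap
    if np.linalg.norm(r) < 1e-12 * np.linalg.norm(b):
        break
    beta = np.vdot(r, r) / np.vdot(r_prev, r_prev)
    p = r + beta * p

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The coil sensitivities are what make the undersampled system invertible: with a single uniform coil, the aliased pixel pairs could not be separated.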

  6. Mergers + acquisitions.

    Science.gov (United States)

    Hoppszallern, Suzanna

    2002-05-01

    The hospital sector in 2001 led the health care field in mergers and acquisitions. Most deals involved a network augmenting its presence within a specific region or in a market adjacent to its primary service area. Analysts expect M&A activity to increase in 2002.

  7. The FINUDA data acquisition system

    International Nuclear Information System (INIS)

    Cerello, P.; Marcello, S.; Filippini, V.; Fiore, L.; Gianotti, P.; Raimondo, A.

    1996-07-01

    A parallel, scalable data acquisition system based on VME has been developed for use in the FINUDA experiment, scheduled to run at the DAPHNE machine at Frascati starting from 1997. The acquisition software runs on embedded RTPC 8067 processors under the LynxOS operating system. The readout of event fragments is coordinated by a trigger supervisor. Data read by different controllers are transported via a dedicated bus to a Global Event Builder running on a UNIX machine. Commands to and from the VME processors are sent via socket-based network protocols. The network hardware is presently Ethernet, but it can easily be changed to optical fiber

  8. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  9. Massively parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Krasheninnikov, S.I.; Craddock, G.G.; Djordjevic, V.

    1996-01-01

    The Fokker-Planck code ALLA, recently developed for workstations, simulates the temporal evolution of 1V, 2V and 1D2V collisional edge plasmas. In this work we present the results of parallelizing the code on the CRI T3D massively parallel platform (the ALLAp version). Simultaneously, we benchmark the 1D2V parallel version against an analytic self-similar solution of the collisional kinetic equation. This test is not trivial, as it demands a very strong spatial temperature and density variation within the simulation domain. (orig.)

  10. Mergers & Acquisitions

    DEFF Research Database (Denmark)

    Fomcenco, Alex

    This dissertation is a legal dogmatic thesis, the goal of which is to describe and analyze the current state of law in Europe in regard to some relevant selected elements related to mergers and acquisitions, and the adviser’s counsel in this regard. Having regard to the topic of the dissertation...... and fiscal neutrality, group-related issues, holding-structure issues, employees, stock exchange listing issues, and corporate nationality....

  11. Ultrasound Vector Flow Imaging: Part II: Parallel Systems

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Nikolov, Svetoslav Ivanov; Yu, Alfred C. H.

    2016-01-01

    The paper gives a review of the current state-of-the-art in ultrasound parallel acquisition systems for flow imaging using spherical and plane wave emissions. The imaging methods are explained along with the advantages of using these very fast and sensitive velocity estimators. These experimental...... ultrasound imaging for studying brain function in animals. The paper explains the underlying acquisition and estimation methods for fast 2-D and 3-D velocity imaging and gives a number of examples. Future challenges and the potentials of parallel acquisition systems for flow imaging are also discussed....

  12. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment. Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  13. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another setting where these algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the
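    A classic algorithm of the kind covered for linear processor arrays is odd-even transposition sort. The sketch below simulates its parallel rounds sequentially; within a round every compare-exchange touches a disjoint pair, so on a linear array all of them run in a single parallel step.

```python
def odd_even_transposition_sort(a):
    """Sort n items in n rounds of compare-exchange on alternating pairs.
    Each round's exchanges are independent, so on a linear array of
    processors they all happen simultaneously."""
    a = list(a)
    n = len(a)
    for rnd in range(n):
        start = rnd % 2                   # even-indexed pairs, then odd
        for i in range(start, n - 1, 2):  # independent -> parallel step
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0, 3]))
# [0, 1, 2, 3, 4, 5, 8]
```

n rounds always suffice, so the parallel time is O(n) on n processors, compared with O(n log n) work for a good sequential sort.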

  14. Development and application of efficient strategies for parallel magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Breuer, F.

    2006-07-01

    Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required, which are technically challenging to provide. In addition to significant hardware costs, peripheral neuro-stimulations and the surpassing of admissible acoustic noise levels may occur. Today's whole-body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990's. Within the last years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image

  15. Development and application of efficient strategies for parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Breuer, F.

    2006-01-01

    Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required, which are technically challenging to provide. In addition to significant hardware costs, peripheral neuro-stimulations and the surpassing of admissible acoustic noise levels may occur. Today's whole-body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990's. Within the last years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image artifacts

  16. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  17. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  18. Temporal networks

    CERN Document Server

    Saramäki, Jari

    2013-01-01

    The concept of temporal networks is an extension of complex networks as a modeling framework to include information on when interactions between nodes happen. Many studies of the last decade examine how the static network structure affects dynamic systems on the network. In this traditional approach the temporal aspects are pre-encoded in the dynamic system model. Temporal-network methods, on the other hand, lift the temporal information from the level of system dynamics to the mathematical representation of the contact network itself. This framework becomes particularly useful for cases where there is a lot of structure and heterogeneity both in the timings of interaction events and the network topology. The advantage compared to common static network approaches is the ability to design more accurate models in order to explain and predict large-scale dynamic phenomena (such as, e.g., epidemic outbreaks and other spreading phenomena). On the other hand, temporal network methods are mathematically and concept...

  19. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
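    A minimal sketch of the replicated-data decomposition mentioned above: every simulated "processor" holds all coordinates, computes forces for its round-robin share of the pair list, and a global reduction sums the partial force arrays. The pair force used here is an arbitrary toy (repulsive inverse-square), not a physical potential.

```python
import itertools
import numpy as np

def pair_force(ri, rj):
    # Toy repulsive inverse-square pair force (illustrative, not physical).
    d = ri - rj
    r2 = np.dot(d, d)
    return d / r2 ** 1.5

def forces_serial(pos):
    n = len(pos)
    f = np.zeros_like(pos)
    for i, j in itertools.combinations(range(n), 2):
        fij = pair_force(pos[i], pos[j])
        f[i] += fij          # Newton's third law: equal and opposite
        f[j] -= fij
    return f

def forces_replicated_data(pos, P=4):
    """Replicated-data decomposition: each processor evaluates its share
    of the pair list; a reduction sums the partial force arrays."""
    pairs = list(itertools.combinations(range(len(pos)), 2))
    partial = []
    for p in range(P):                 # each iteration = one processor
        fp = np.zeros_like(pos)
        for i, j in pairs[p::P]:       # round-robin share of the pairs
            fij = pair_force(pos[i], pos[j])
            fp[i] += fij
            fp[j] -= fij
        partial.append(fp)
    return np.sum(partial, axis=0)     # the global reduction step

rng = np.random.default_rng(2)
pos = rng.standard_normal((10, 3))
print(np.allclose(forces_serial(pos), forces_replicated_data(pos)))  # True
```

The scheme is trivial to load-balance but requires communicating all coordinates to every processor, which is why spatial decompositions win at large particle counts.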

  20. Microcomputer data acquisition and control.

    Science.gov (United States)

    East, T D

    1986-01-01

    In medicine and biology there are many tasks that involve routine, well-defined procedures. These tasks are ideal candidates for computerized data acquisition and control. As the performance of microcomputers rapidly increases and cost continues to go down, the temptation to automate the laboratory becomes great. To the novice computer user the choices of hardware and software are overwhelming, and sadly most computer salespersons are not at all familiar with real-time applications. If you want to bill your patients you have hundreds of packaged systems to choose from; however, if you want to do real-time data acquisition the choices are very limited and confusing. The purpose of this chapter is to provide the novice computer user with the basics needed to set up a real-time data acquisition system with common microcomputers. This chapter covers the following issues necessary to establish a real-time data acquisition and control system: Analysis of the research problem: Definition of the problem; Description of data and sampling requirements; Cost/benefit analysis. Choice of microcomputer hardware and software: Choice of microprocessor and bus structure; Choice of operating system; Choice of layered software. Digital data acquisition: Parallel data transmission; Serial data transmission; Hardware and software available. Analog data acquisition: Description of amplitude and frequency characteristics of the input signals; Sampling theorem; Specification of the analog-to-digital converter; Hardware and software available; Interface to the microcomputer. Microcomputer control: Analog output; Digital output; Closed-loop control. Microcomputer data acquisition and control in the 21st century: What is in the future? High-speed digital medical equipment networks; Medical decision making and artificial intelligence.
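    As a quick numerical illustration of the sampling theorem listed above: a 60 Hz tone sampled at 100 Hz (below the required Nyquist rate of 120 Hz) produces exactly the same samples as a phase-inverted 40 Hz tone, so the two are indistinguishable after digitization.

```python
import numpy as np

fs = 100.0                                 # sampling rate in Hz
n = np.arange(32)                          # sample indices
x60 = np.sin(2 * np.pi * 60.0 * n / fs)    # 60 Hz tone, above fs/2 = 50 Hz
x40 = np.sin(2 * np.pi * 40.0 * n / fs)    # its 40 Hz alias (|60 - fs| = 40)

# sin(2*pi*60*n/100) = sin(2*pi*(60-100)*n/100) = -sin(2*pi*40*n/100)
print(np.allclose(x60, -x40))              # True
```

This is why an analog anti-aliasing filter ahead of the analog-to-digital converter is essential in any real-time acquisition front end.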

  1. A wavelet-based regularized reconstruction algorithm for SENSE parallel MRI with applications to neuroimaging

    International Nuclear Information System (INIS)

    Chaari, L.; Pesquet, J.Ch.; Chaari, L.; Ciuciu, Ph.; Benazza-Benyahia, A.

    2011-01-01

    To reduce scanning time and/or improve spatial/temporal resolution in some Magnetic Resonance Imaging (MRI) applications, parallel MRI acquisition techniques using multiple coils have emerged since the early 1990s as powerful imaging methods that allow a faster acquisition process. In these techniques, the full-FOV image has to be reconstructed from the acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used Sensitivity Encoding (SENSE) method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity profiles. In this paper, we aim at achieving accurate image reconstruction under degraded experimental conditions (low magnetic field and high reduction factor), in which neither the SENSE method nor Tikhonov regularization in the image domain gives convincing results. To this end, we present a novel method for SENSE-based reconstruction which proceeds with regularization in the complex wavelet domain by promoting sparsity. The proposed approach relies on a fast algorithm that enables the minimization of regularized non-differentiable criteria including more general penalties than a classical l1 term. To further enhance the reconstructed image quality, local convex constraints are added to the regularization process. In vivo human brain experiments carried out on Gradient-Echo (GRE) anatomical and Echo Planar Imaging (EPI) functional MRI data at 1.5 T indicate that our algorithm provides reconstructed images with reduced artifacts for high reduction factors. (authors)
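    The core operation behind wavelet-domain sparsity promotion is soft thresholding of wavelet coefficients. The paper's method embeds this inside an iterative SENSE reconstruction with redundant complex wavelets; the sketch below shows only the shrinkage step, using a one-level real orthonormal Haar transform on a toy piecewise-constant signal (signal, noise level, and threshold are illustrative assumptions).

```python
import numpy as np

def haar_fwd(x):
    # One level of the orthonormal Haar transform (even-length input).
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(z, t):
    # Proximity operator of the l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(3)
clean = np.repeat([0.0, 1.0, -0.5, 2.0], 8)      # piecewise-constant signal
noisy = clean + 0.1 * rng.standard_normal(32)

a, d = haar_fwd(noisy)
denoised = haar_inv(a, soft(d, 0.15))            # shrink only the details

print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```

Because piecewise-constant signals have (near-)zero Haar details, shrinking the detail band suppresses noise while leaving the signal structure intact, which is the intuition behind using wavelet-domain penalties in reconstruction.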

  2. Project Temporalities

    DEFF Research Database (Denmark)

    Tryggestad, Kjell; Justesen, Lise; Mouritsen, Jan

    2013-01-01

    Purpose – The purpose of this paper is to explore how animals can become stakeholders in interaction with project management technologies and what happens with project temporalities when new and surprising stakeholders become part of a project and a recognized matter of concern to be taken...... into account. Design/methodology/approach – The paper is based on a qualitative case study of a project in the building industry. The authors use actor-network theory (ANT) to analyze the emergence of animal stakeholders, stakes and temporalities. Findings – The study shows how project temporalities can...... multiply in interaction with project management technologies and how conventional linear conceptions of project time may be contested with the emergence of new non-human stakeholders and temporalities. Research limitations/implications – The study draws on ANT to show how animals can become stakeholders...

  3. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  4. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  5. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of...... in the optimal O(psortN + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and psortN is the parallel I/O complexity of sorting N elements using P processors....

  6. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
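    One of the patterns named above, the prefix scan, can be sketched as a Hillis-Steele inclusive scan: O(log n) rounds, where every update within a round reads only the previous round's values and is therefore fully data-parallel.

```python
def inclusive_scan(a):
    """Hillis-Steele parallel prefix sum simulated sequentially:
    ceil(log2 n) rounds of independent, data-parallel additions."""
    a = list(a)
    n, shift = len(a), 1
    while shift < n:
        prev = a[:]                    # snapshot of the previous round;
        for i in range(shift, n):      # on parallel hardware all these
            a[i] = prev[i] + prev[i - shift]  # additions happen at once
        shift *= 2
    return a

print(inclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))
# [3, 4, 11, 11, 15, 16, 22, 25]
```

Hillis-Steele does O(n log n) total additions; the work-efficient Blelloch variant trades an extra down-sweep for O(n) work, a classic design choice between step count and total work.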

  7. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is a subroutine-based message-passing software library intended to provide a consistent interface to a variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another: the user develops an application program once and then easily moves it from the parallel computer on which it was created to another parallel computer. ("Parallel computer" here also includes a heterogeneous collection of networked computers.) Written in C with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C or FORTRAN 77.

  8. 2017 NAIP Acquisition Map

    Data.gov (United States)

    Farm Service Agency, Department of Agriculture — Planned States for 2017 NAIP acquisition and acquisition status layer (updated daily). Updates to the acquisition seasons may be made during the season to...

  9. The second language acquisition of French tense, aspect, mood and modality

    CERN Document Server

    Ayoun, Dalila

    2013-01-01

    Temporal-aspectual systems have a great potential of informing our understanding of the developing competence of second language learners. So far, the vast majority of empirical studies investigating L2 acquisition have largely focused on past temporality, neglecting the acquisition of the expression of the present and future temporalities with rare exceptions (aside from ESL learners), leaving unanswered the question of how the investigation of different types of temporality may inform our understanding of the acquisition of temporal, aspectual and mood systems as a whole. This monograph addr

  10. Data acquisition systems at Fermilab

    International Nuclear Information System (INIS)

    Votava, M.

    1999-01-01

    Experiments at Fermilab require an ongoing program of development for high-speed, distributed data acquisition systems. The physics program at the lab has recently started the operation of a Fixed Target run in which experiments are running the DART[1] data acquisition system. The CDF and D0 experiments are preparing for the start of the next Collider run in mid 2000. Each will read out on the order of 1 million detector channels. In parallel, future experiments such as BTeV and Minos have already started R&D, prototype and test beam work. BTeV in particular has challenging data acquisition system requirements, with an input rate of 1500 Gbytes/sec into Level 1 buffers and a logging rate of 200 Mbytes/sec. This paper will present a general overview of these data acquisition systems on three fronts: those currently in use, those to be deployed for the Collider Run in 2000, and those proposed for future experiments. It will primarily focus on the CDF and D0 architectures and tools

  11. Syntax acquisition.

    Science.gov (United States)

    Crain, Stephen; Thornton, Rosalind

    2012-03-01

    Every normal child acquires a language in just a few years. By age 3 or 4, children have effectively become adults in their abilities to produce and understand endlessly many sentences in a variety of conversational contexts. There are two alternative accounts of the course of children's language development. These different perspectives can be traced back to the nature versus nurture debate about how knowledge is acquired in any cognitive domain. One perspective dates back to Plato's dialog 'The Meno'. In this dialog, the protagonist, Socrates, demonstrates to Meno, an aristocrat in Ancient Greece, that a young slave knows more about geometry than he could have learned from experience. By extension, Plato's Problem refers to any gap between experience and knowledge. How children fill in the gap in the case of language continues to be the subject of much controversy in cognitive science. Any model of language acquisition must address three factors, inter alia: 1. The knowledge children accrue; 2. The input children receive (often called the primary linguistic data); 3. The nonlinguistic capacities of children to form and test generalizations based on the input. According to the famous linguist Noam Chomsky, the main task of linguistics is to explain how children bridge the gap (Chomsky calls it a 'chasm') between what they come to know about language and what they could have learned from experience, even given optimistic assumptions about their cognitive abilities. Proponents of the alternative 'nurture' approach accuse nativists like Chomsky of overestimating the complexity of what children learn, underestimating the data children have to work with, and manifesting undue pessimism about children's abilities to extract information based on the input. The modern 'nurture' approach is often referred to as the usage-based account. We discuss the usage-based account first, and then the nativist account. After that, we report and discuss the findings of several

  12. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  13. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90.

  14. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are the Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, the Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  15. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  16. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
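The block (scattered) decomposition described above can be sketched in a few lines. The following Python is purely illustrative, not the paper's implementation (the original work targeted a 1,024-processor hypercube, not a process pool): each worker sieves its own sub-range using base primes up to sqrt(n), which are computed serially first.

```python
from math import isqrt
from multiprocessing import Pool

def sieve_block(args):
    """Sieve one block [lo, hi) using the precomputed base primes."""
    lo, hi, base_primes = args
    is_prime = [True] * max(hi - lo, 0)
    for p in base_primes:
        # first multiple of p inside [lo, hi), never below p*p
        start = max(p * p, ((lo + p - 1) // p) * p)
        for m in range(start, hi, p):
            is_prime[m - lo] = False
    if lo == 0:  # 0 and 1 are not prime
        for i in range(min(2, hi - lo)):
            is_prime[i] = False
    return [lo + i for i, v in enumerate(is_prime) if v]

def parallel_sieve(n, workers=4):
    """All primes below n, with blocks distributed over a process pool."""
    root = isqrt(n - 1) + 1
    base = [True] * root          # serial sieve up to sqrt(n)
    base[0:2] = [False, False]
    for p in range(2, isqrt(root - 1) + 1):
        if base[p]:
            for m in range(p * p, root, p):
                base[m] = False
    base_primes = [p for p, v in enumerate(base) if v]
    block = (n + workers - 1) // workers
    tasks = [(i * block, min(n, (i + 1) * block), base_primes)
             for i in range(workers)]
    with Pool(workers) as pool:
        return [p for chunk in pool.map(sieve_block, tasks) for p in chunk]
```

Speedups comparable to those reported require far larger ranges than this toy scale; the point is only the fixed-size base sieve plus independent per-block work.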

  17. Data acquisition system for SLD

    International Nuclear Information System (INIS)

    Sherden, D.J.

    1985-05-01

This paper describes the data acquisition system planned for the SLD detector, which is being constructed for use with the SLAC Linear Collider (SLC). An exclusively FASTBUS front-end system is used together with a VAX-based host system. While the volume of data transferred does not challenge the bandwidth capabilities of FASTBUS, extensive use is made of the parallel processing capabilities allowed by FASTBUS to reduce the data to a size which can be handled by the host system. The low repetition rate of the SLC allows a relatively simple software-based trigger. The principal components and overall architecture of the hardware and software are described.

  18. Automated, parallel mass spectrometry imaging and structural identification of lipids

    DEFF Research Database (Denmark)

    Ellis, Shane R.; Paine, Martin R.L.; Eijkel, Gert B.

    2018-01-01

    We report a method that enables automated data-dependent acquisition of lipid tandem mass spectrometry data in parallel with a high-resolution mass spectrometry imaging experiment. The method does not increase the total image acquisition time and is combined with automatic structural assignments....... This lipidome-per-pixel approach automatically identified and validated 104 unique molecular lipids and their spatial locations from rat cerebellar tissue....

  19. Temporal networks

    Science.gov (United States)

    Holme, Petter; Saramäki, Jari

    2012-10-01

A great variety of systems in nature, society and technology - from the web of sexual contacts to the Internet, from the nervous system to power grids - can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via e-mail, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. The study of temporal networks is very interdisciplinary in nature. Reflecting this, even the object of study has many names: temporal graphs, evolving graphs, time-varying graphs, time-aggregated graphs, time-stamped graphs, dynamic networks, dynamic graphs, dynamical graphs, and so on. This review covers different fields where temporal graphs are considered

  20. Parallel data grabbing card based on PCI bus RS422

    International Nuclear Information System (INIS)

    Zhang Zhenghui; Shen Ji; Wei Dongshan; Chen Ziyu

    2005-01-01

This article briefly introduces the development of a parallel data grabbing card based on RS422 and the PCI bus. The card can be used to capture 14-bit parallel data at high speed from devices with an RS422 interface. The data acquisition method based on the PCI protocol, the functions and usage of the chips employed, and the design ideas and principles of the hardware and software are presented. (authors)

  1. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)
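Question (i) can be made concrete with the simplest example: a serial summation loop adapted into a two-phase parallel reduction (local partial sums on each processor, then a final combine). A minimal Python sketch, not from the lectures themselves:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker reduces its own slice serially."""
    s = 0
    for x in chunk:
        s += x
    return s

def parallel_sum(data, workers=4):
    """Chunked parallel reduction: the serial sum adapted to P processors."""
    step = (len(data) + workers - 1) // workers
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with Pool(len(chunks)) as pool:
        partials = pool.map(partial_sum, chunks)
    return sum(partials)  # final O(P) combine
```

The adaptation is exact here because addition is associative; for operations where it is not (e.g. floating-point accumulation order), the convergence questions raised in (ii) apply.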

  2. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  3. Time-resolved 3D pulmonary perfusion MRI: comparison of different k-space acquisition strategies at 1.5 and 3 T.

    Science.gov (United States)

    Attenberger, Ulrike I; Ingrisch, Michael; Dietrich, Olaf; Herrmann, Karin; Nikolaou, Konstantin; Reiser, Maximilian F; Schönberg, Stefan O; Fink, Christian

    2009-09-01

Time-resolved pulmonary perfusion MRI requires both high temporal and spatial resolution, which can be achieved by using several nonconventional k-space acquisition techniques. The aim of this study is to compare the image quality of time-resolved 3D pulmonary perfusion MRI with different k-space acquisition techniques in healthy volunteers at 1.5 and 3 T. Ten healthy volunteers underwent contrast-enhanced time-resolved 3D pulmonary MRI at 1.5 and 3 T using the following k-space acquisition techniques: (a) generalized autocalibrating partially parallel acquisition (GRAPPA) with an internal acquisition of reference lines (IRS), (b) GRAPPA with a single "external" acquisition of reference lines (ERS) before the measurement, and (c) a combination of GRAPPA with an internal acquisition of reference lines and view sharing (VS). The spatial resolution was kept constant at both field strengths to exclusively evaluate the influences of the temporal resolution achieved with the different k-space sampling techniques on image quality. The temporal resolutions were 2.11 seconds IRS, 1.31 seconds ERS, and 1.07 seconds VS at 1.5 T and 2.04 seconds IRS, 1.30 seconds ERS, and 1.19 seconds VS at 3 T. Image quality was rated by 2 independent radiologists with regard to signal intensity, perfusion homogeneity, artifacts (eg, wrap around, noise), and visualization of pulmonary vessels using a 3-point scale (1 = nondiagnostic, 2 = moderate, 3 = good). Furthermore, the signal-to-noise ratio in the lungs was assessed. At 1.5 T the lowest image quality (sum score: 154) was observed for the ERS technique and the highest quality for the VS technique (sum score: 201). In contrast, at 3 T images acquired with VS were hampered by strong artifacts and image quality was rated significantly inferior (sum score: 137) compared with IRS (sum score: 180) and ERS (sum score: 174). Comparing 1.5 and 3 T, in particular the overall rating of the IRS technique (sum score: 180) was very similar at both field

  4. Temporal naturalism

    Science.gov (United States)

    Smolin, Lee

    2015-11-01

    Two people may claim both to be naturalists, but have divergent conceptions of basic elements of the natural world which lead them to mean different things when they talk about laws of nature, or states, or the role of mathematics in physics. These disagreements do not much affect the ordinary practice of science which is about small subsystems of the universe, described or explained against a background, idealized to be fixed. But these issues become crucial when we consider including the whole universe within our system, for then there is no fixed background to reference observables to. I argue here that the key issue responsible for divergent versions of naturalism and divergent approaches to cosmology is the conception of time. One version, which I call temporal naturalism, holds that time, in the sense of the succession of present moments, is real, and that laws of nature evolve in that time. This is contrasted with timeless naturalism, which holds that laws are immutable and the present moment and its passage are illusions. I argue that temporal naturalism is empirically more adequate than the alternatives, because it offers testable explanations for puzzles its rivals cannot address, and is likely a better basis for solving major puzzles that presently face cosmology and physics. This essay also addresses the problem of qualia and experience within naturalism and argues that only temporal naturalism can make a place for qualia as intrinsic qualities of matter.

  5. Improving parallel imaging by jointly reconstructing multi-contrast data.

    Science.gov (United States)

    Bilgic, Berkin; Kim, Tae Hyung; Liao, Congyu; Manhard, Mary Kate; Wald, Lawrence L; Haldar, Justin P; Setsompop, Kawin

    2018-08-01

To develop parallel imaging techniques that simultaneously exploit coil sensitivity encoding, image phase prior information, similarities across multiple images, and complementary k-space sampling for highly accelerated data acquisition. We introduce joint virtual coil (JVC)-generalized autocalibrating partially parallel acquisitions (GRAPPA) to jointly reconstruct data acquired with different contrast preparations, and show its application in 2D, 3D, and simultaneous multi-slice (SMS) acquisitions. We extend the joint parallel imaging concept to exploit limited support and smooth phase constraints through Joint (J-) LORAKS formulation. J-LORAKS allows joint parallel imaging from limited autocalibration signal region, as well as permitting partial Fourier sampling and calibrationless reconstruction. We demonstrate highly accelerated 2D balanced steady-state free precession with phase cycling, SMS multi-echo spin echo, 3D multi-echo magnetization-prepared rapid gradient echo, and multi-echo gradient recalled echo acquisitions in vivo. Compared to conventional GRAPPA, proposed joint acquisition/reconstruction techniques provide more than 2-fold reduction in reconstruction error. JVC-GRAPPA takes advantage of additional spatial encoding from phase information and image similarity, and employs different sampling patterns across acquisitions. J-LORAKS achieves a more parsimonious low-rank representation of local k-space by considering multiple images as additional coils. Both approaches provide dramatic improvement in artifact and noise mitigation over conventional single-contrast parallel imaging reconstruction. Magn Reson Med 80:619-632, 2018. © 2018 International Society for Magnetic Resonance in Medicine.

  6. Speed in Acquisitions

    DEFF Research Database (Denmark)

    Meglio, Olimpia; King, David R.; Risberg, Annette

    2017-01-01

The advantage of speed is often invoked by academics and practitioners as an essential condition during post-acquisition integration, frequently without consideration of the impact earlier decisions have on acquisition speed. In this article, we examine the role speed plays in acquisitions across the acquisition process using research organized around characteristics that display complexity with respect to acquisition speed. We incorporate existing research with a process perspective of acquisitions in order to present trade-offs, and consider the influence of both stakeholders and the pre-deal-completion context on acquisition speed, as well as the organization's capabilities to facilitate that speed. Observed trade-offs suggest both that acquisition speed often requires longer planning time before an acquisition and that associated decisions require managerial judgement. A framework for improving...

  7. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  8. Language Acquisition without an Acquisition Device

    Science.gov (United States)

    O'Grady, William

    2012-01-01

    Most explanatory work on first and second language learning assumes the primacy of the acquisition phenomenon itself, and a good deal of work has been devoted to the search for an "acquisition device" that is specific to humans, and perhaps even to language. I will consider the possibility that this strategy is misguided and that language…

  9. LAMPF nuclear chemistry data acquisition system

    International Nuclear Information System (INIS)

    Giesler, G.C.

    1983-01-01

The LAMPF Nuclear Chemistry Data Acquisition System (DAS) is designed to provide both real-time control of data acquisition and facilities for data processing for a large variety of users. It consists of a PDP-11/44 connected to a parallel CAMAC branch highway as well as to a large number of peripherals. The various types of radiation counters and spectrometers and their connections to the system will be described. Also discussed will be the various methods of connection considered and their advantages and disadvantages. The operation of the system from the standpoint of both hardware and software will be described, as well as plans for the future.

  10. Improving quality of arterial spin labeling MR imaging at 3 Tesla with a 32-channel coil and parallel imaging.

    Science.gov (United States)

    Ferré, Jean-Christophe; Petr, Jan; Bannier, Elise; Barillot, Christian; Gauvrit, Jean-Yves

    2012-05-01

To compare 12-channel and 32-channel phased-array coils and to determine the optimal parallel imaging (PI) technique and factor for brain perfusion imaging using pulsed arterial spin labeling (PASL) at 3 Tesla (T). Twenty-seven healthy volunteers underwent 10 different PASL perfusion PICORE Q2TIPS scans at 3 T using 12-channel and 32-channel coils without PI and with GRAPPA or mSENSE using factor 2. PI factors 3 and 4 were used only with the 32-channel coil. Visual quality was assessed using four parameters. Quantitative analyses were performed using temporal noise, contrast-to-noise and signal-to-noise ratios (CNR, SNR). Compared with 12-channel acquisition, the scores for 32-channel acquisition were significantly higher for overall visual quality, lower for noise, and higher for SNR and CNR. With the 32-channel coil, the artifact compromise achieved the best score with PI factor 2. Noise increased, and SNR and CNR decreased, with PI factor. However, mSENSE 2 scores were not always significantly different from acquisition without PI. For PASL at 3 T, the 32-channel coil provided better quality than the 12-channel coil. With the 32-channel coil, mSENSE 2 seemed to offer the best compromise for decreasing artifacts without significantly reducing SNR and CNR. Copyright © 2012 Wiley Periodicals, Inc.

  11. The HyperCP data acquisition system

    International Nuclear Information System (INIS)

    Kaplan, D.M.

    1997-06-01

For the HyperCP experiment at Fermilab, we have assembled a data acquisition system that records on up to 45 Exabyte 8505 tape drives in parallel at up to 17 MB/s. During the beam spill, data are acquired from the front-end digitization systems at ∼60 MB/s via five parallel data paths. The front-end systems achieve a typical readout deadtime of ∼1 μs per event, allowing operation at a 75-kHz trigger rate with ≲30% deadtime. Event building and tapewriting are handled by 15 Motorola MVME167 processors in 5 VME crates

  12. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.

  13. Calo trigger acquisition system

    CERN Multimedia

    Franchini, Matteo

    2016-01-01

Calo trigger acquisition system - Evolution of the acquisition system from a multiple-board system (upper, orange cables) to a single-board one (below, light blue cables), where all the channels are collected on a single board.

  14. Modelling live forensic acquisition

    CSIR Research Space (South Africa)

    Grobler, MM

    2009-06-01

This paper discusses the development of a South African model for Live Forensic Acquisition - Liforac. The Liforac model is a comprehensive model that presents a range of aspects related to Live Forensic Acquisition. The model provides forensic...

  15. Playing at Serial Acquisitions

    NARCIS (Netherlands)

    J.T.J. Smit (Han); T. Moraitis (Thras)

    2010-01-01

Behavioral biases can result in suboptimal acquisition decisions, with the potential for errors exacerbated in consolidating industries, where consolidators design serial acquisition strategies and fight escalating takeover battles for platform companies that may determine their future

  16. Pattern recognition with parallel associative memory

    Science.gov (United States)

    Toth, Charles K.; Schenk, Toni

    1990-01-01

    An examination is conducted of the feasibility of searching targets in aerial photographs by means of a parallel associative memory (PAM) that is based on the nearest-neighbor algorithm; the Hamming distance is used as a measure of closeness, in order to discriminate patterns. Attention has been given to targets typically used for ground-control points. The method developed sorts out approximate target positions where precise localizations are needed, in the course of the data-acquisition process. The majority of control points in different images were correctly identified.
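The recall step of such a parallel associative memory reduces to nearest-neighbor search under the Hamming distance. A minimal (serial) Python sketch of that discrimination rule, using hypothetical bit patterns rather than the aerial-photograph targets of the study:

```python
def hamming(a, b):
    """Number of positions at which two equal-length bit patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def nearest_pattern(query, memory):
    """Return the stored pattern closest to `query` in Hamming distance,
    mimicking the PAM's nearest-neighbor recall (ties go to the first match)."""
    return min(memory, key=lambda m: hamming(query, m))
```

In the PAM hardware this minimization is performed over all stored patterns simultaneously, which is what makes the approximate target localization fast enough for the data-acquisition process.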

  17. Acceleration of cardiovascular MRI using parallel imaging: basic principles, practical considerations, clinical applications and future directions

    International Nuclear Information System (INIS)

    Niendorf, T.; Sodickson, D.

    2006-01-01

Cardiovascular Magnetic Resonance (CVMR) imaging has proven to be of clinical value for non-invasive diagnostic imaging of cardiovascular diseases. CVMR requires rapid imaging; however, the speed of conventional MRI is fundamentally limited due to its sequential approach to image acquisition, in which data points are collected one after the other in the presence of sequentially-applied magnetic field gradients. Parallel imaging techniques instead use arrays of radiofrequency coils to acquire multiple data points simultaneously, and thereby increase imaging speed and efficiency beyond the limits of purely gradient-based approaches. The resulting improvements in imaging speed can be used in various ways, including shortening long examinations, improving spatial resolution and anatomic coverage, improving temporal resolution, enhancing image quality, overcoming physiological constraints, detecting and correcting for physiologic motion, and streamlining work flow. Examples of these strategies will be provided in this review, after some of the fundamentals of parallel imaging methods now in use for cardiovascular MRI are outlined. The emphasis will rest upon basic principles and clinical state-of-the-art cardiovascular MRI applications. In addition, practical aspects such as signal-to-noise ratio considerations, tailored parallel imaging protocols and potential artifacts will be discussed, and current trends and future directions will be explored. (orig.)
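The central idea of parallel imaging, undersampling k-space and using the distinct spatial sensitivities of an array of receiver coils to undo the resulting aliasing, can be illustrated with a toy Cartesian SENSE-style unfolding at acceleration R=2. The NumPy sketch below uses a fabricated 1-D "object" and fabricated sensitivity maps; it is an assumption-laden illustration of the principle, not a method from this review:

```python
import numpy as np

def sense_unfold_r2(aliased, sens):
    """Toy Cartesian SENSE at R=2: each folded pixel is a coil-weighted
    mix of pixel y and pixel y + N/2; invert the small linear system per
    pixel pair using the known coil sensitivity maps."""
    ncoils, half = aliased.shape
    full = np.zeros(2 * half, dtype=complex)
    for y in range(half):
        # encoding matrix: columns are the two overlapping pixels
        E = np.stack([sens[:, y], sens[:, y + half]], axis=1)
        full[[y, y + half]] = np.linalg.lstsq(E, aliased[:, y], rcond=None)[0]
    return full

# fabricated 1-D object and two smooth coil sensitivity profiles
N = 8
rho = np.arange(1, N + 1, dtype=complex)
sens = np.stack([np.linspace(1.0, 0.2, N),
                 np.linspace(0.2, 1.0, N)]).astype(complex)
# R=2 undersampling folds pixel y with pixel y + N/2 in each coil image
aliased = sens[:, :N // 2] * rho[:N // 2] + sens[:, N // 2:] * rho[N // 2:]
rec = sense_unfold_r2(aliased, sens)  # recovers rho
```

Halving the sampled k-space lines halves the acquisition time; the price, as the review discusses, is an SNR penalty governed by the conditioning of the per-pixel encoding matrices (the g-factor).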

  18. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes, and the connections among them, are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin

  19. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.
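In the spirit of the tutorial, distributing independent tasks over worker processes in Python can be as short as the following standard-library sketch (an illustration, not an excerpt from the book):

```python
from concurrent.futures import ProcessPoolExecutor

def cube(x):
    """A stand-in for any CPU-bound task."""
    return x ** 3

def parallel_cubes(values, workers=2):
    # submit independent tasks to a pool of worker processes;
    # ex.map preserves the input order of results
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(cube, values))
```

Because CPython threads share one Global Interpreter Lock, process pools like this (rather than thread pools) are the usual route to CPU-bound parallelism in Python.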

  20. Relationship among RR interval, optimal reconstruction phase, temporal resolution, and image quality of end-systolic reconstruction of coronary CT angiography in patients with high heart rates. In search of the optimal acquisition protocol

    International Nuclear Information System (INIS)

    Sano, Tomonari; Matsutani, Hideyuki; Kondo, Takeshi; Fujimoto, Shinichiro; Sekine, Takako; Arai, Takehiro; Morita, Hitomi; Takase, Shinichi

    2011-01-01

The purpose of this study is to elucidate the relationship among the RR interval (RR), the optimal reconstruction phase, and adequate temporal resolution (TR) to obtain coronary CT angiography images of acceptable quality using end-systolic reconstruction on 64-multidetector-row CT (MDCT; Aquilion 64) in 407 patients with high heart rates. Image quality was classified into 3 groups [rank A (excellent): 161, rank B (acceptable): 207, and rank C (unacceptable): 39 patients]. The optimal absolute phase (OAP) significantly correlated with RR [OAP (ms) = 119 + 0.286 RR (ms), r=0.832, p<0.0001], and the optimal relative phase (ORP) also significantly correlated with RR [ORP (%) = 62 - 0.023 RR (ms), r=0.656, p<0.0001]; the correlation coefficient of OAP was significantly (p<0.0001) higher than that of ORP. The OAP range (±2 standard deviations (SD)) in which a static image is highly likely to be obtained was from [119 + 0.286 RR (ms) - 46] to [119 + 0.286 RR (ms) + 46]. The TR was significantly different among ranks A (97±22 ms), B (111±31 ms), and C (135±34 ms). The TR significantly correlated with RR in ranks A (TR = -16 + 0.149 RR, r=0.767, p<0.0001), B (TR = -15 + 0.166 RR, r=0.646, p<0.0001), and C (TR = 52 + 0.117 RR, r=0.425, p=0.0069). Rank C was distinguished from ranks A and B by linear discriminant analysis (TR = -46 + 0.21 RR), with a discrimination rate of 82.6%. In conclusion, both the OAP and the adequate TR depend on RR; the OAP range (±2 SD) can be calculated as [119 + 0.286 RR (ms) - 46] to [119 + 0.286 RR (ms) + 46], and an adequate TR would be less than (-46 + 0.21 RR). (author)
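The linear discriminant reported in the abstract is easy to apply prospectively. A small Python sketch (the function name is mine; the coefficients are those quoted above) predicting whether a given temporal resolution is adequate for a given RR interval:

```python
def acceptable_image_quality(tr_ms, rr_ms):
    """Predict rank A/B (acceptable) vs rank C (unacceptable) using the
    study's linear discriminant: image quality tends to be unacceptable
    when TR >= -46 + 0.21 * RR (all values in milliseconds)."""
    return tr_ms < -46 + 0.21 * rr_ms
```

For example, at a heart rate of 75 bpm (RR = 800 ms) the threshold is 122 ms, so a scanner with an effective TR of 100 ms would be predicted adequate, while 140 ms would not.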

  1. Mergers and Acquisitions

    OpenAIRE

    Frasch, Manfred; Leptin, Maria

    2000-01-01

Mergers and acquisitions (M&As) are becoming a strategy of choice for organizations attempting to maintain a competitive advantage. Previous research on mergers and acquisitions finds that acquirers do not normally benefit from acquisitions. Targets, on the other hand, tend to gain positive returns in the few days surrounding merger announcements, due to several characteristics of the acquisition deal. The announcement-period wealth effect on acquiring firms, however, is as cle...

  2. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  3. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  4. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were

  5. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  6. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  7. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  8. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
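
    For reference, a minimal serial sketch of the k-means++ seeding step that the record parallelizes (1-D points for brevity; the squared-distance update pass over all points is the natural parallel region on the GPU, OpenMP, or XMT platforms):

```python
import random

def kmeans_pp_seeds(points, k, rng=None):
    # k-means++ seeding (Arthur & Vassilvitskii, 2007): the first seed is
    # chosen uniformly; each later seed is drawn with probability
    # proportional to its squared distance to the nearest seed so far.
    rng = rng or random.Random(0)
    seeds = [rng.choice(points)]
    d2 = [(p - seeds[0]) ** 2 for p in points]   # 1-D points for brevity
    while len(seeds) < k:
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, d in zip(points, d2):             # weighted sampling by d2
            acc += d
            if acc >= r:
                seeds.append(p)
                break
        # This update over all points is the parallelizable pass.
        d2 = [min(d, (p - seeds[-1]) ** 2) for p, d in zip(points, d2)]
    return seeds
```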

  9. Parallel plate detectors

    International Nuclear Information System (INIS)

    Gardes, D.; Volkov, P.

    1981-01-01

    Two parallel plate avalanche counters (PPACs) are considered: a 5×3 cm² counter (timing only) and a 15×5 cm² counter (timing and position). The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters [fr]

  10. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)]

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  11. Data acquisition for the D0 experiment

    International Nuclear Information System (INIS)

    Cutts, D.; Hoftun, J.S.; Johnson, C.R.; Zeller, R.T.; Trojak, T.; Van Berg, R.

    1985-01-01

    We describe the data acquisition system for the D0 experiment at Fermilab, focusing primarily on the second level, which is based on a large parallel array of MicroVAX-II's. In this design, data flows from the detector readout crates at a maximum rate of 320 Mbytes/sec into dual-port memories associated with one selected processor, in which a VAXELN-based program performs the filter analysis of a complete event

  12. The UA1 VME data acquisition system

    International Nuclear Information System (INIS)

    Cittolin, S.

    1988-01-01

    The data acquisition system of a large-scale experiment such as UA1, running at the CERN proton-antiproton collider, has to cope with very high data rates and to perform sophisticated triggering and filtering in order to analyze interesting events. These functions are performed by a variety of programmable units organized in a parallel multiprocessor system whose central architecture is based on the industry-standard VME/VMXbus. (orig.)

  13. Smart acquisition EELS

    International Nuclear Information System (INIS)

    Sader, Kasim; Schaffer, Bernhard; Vaughan, Gareth; Brydson, Rik; Brown, Andy; Bleloch, Andrew

    2010-01-01

    We have developed a novel acquisition methodology for the recording of electron energy loss spectra (EELS) using a scanning transmission electron microscope (STEM): 'Smart Acquisition'. Smart Acquisition allows the independent control of probe scanning procedures and the simultaneous acquisition of analytical signals such as EELS. The original motivation for this work arose from the need to control the electron dose experienced by beam-sensitive specimens whilst maintaining a sufficiently high signal-to-noise ratio in the EEL signal for the extraction of useful analytical information (such as energy loss near edge spectral features) from relatively undamaged areas. We have developed a flexible acquisition framework which separates beam position data input, beam positioning, and EELS acquisition. In this paper we demonstrate the effectiveness of this technique on beam-sensitive thin films of amorphous aluminium trifluoride. Smart Acquisition has been used to expose lines to the electron beam, followed by analysis of the structures created by line-integrating EELS acquisitions, and the results are compared to those derived from a standard EELS linescan. High angle annular dark-field images show clear reductions in damage for the Smart Acquisition areas compared to the conventional linescan, and the Smart Acquisition low loss EEL spectra are more representative of the undamaged material than those derived using a conventional linescan. Atomically resolved EELS of all four elements of CaNdTiO show the high resolution capabilities of Smart Acquisition.

  14. Smart acquisition EELS

    Energy Technology Data Exchange (ETDEWEB)

    Sader, Kasim, E-mail: k.sader@leeds.ac.uk [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Schaffer, Bernhard [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Department of Physics and Astronomy, University of Glasgow (United Kingdom); Vaughan, Gareth [Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Brydson, Rik [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Brown, Andy [Institute for Materials Research, University of Leeds, LS2 9JT (United Kingdom); Bleloch, Andrew [SuperSTEM, J block, Daresbury Laboratory, Warrington, Cheshire, WA4 4AD (United Kingdom); Department of Engineering, University of Liverpool, Liverpool (United Kingdom)

    2010-07-15

    We have developed a novel acquisition methodology for the recording of electron energy loss spectra (EELS) using a scanning transmission electron microscope (STEM): 'Smart Acquisition'. Smart Acquisition allows the independent control of probe scanning procedures and the simultaneous acquisition of analytical signals such as EELS. The original motivation for this work arose from the need to control the electron dose experienced by beam-sensitive specimens whilst maintaining a sufficiently high signal-to-noise ratio in the EEL signal for the extraction of useful analytical information (such as energy loss near edge spectral features) from relatively undamaged areas. We have developed a flexible acquisition framework which separates beam position data input, beam positioning, and EELS acquisition. In this paper we demonstrate the effectiveness of this technique on beam-sensitive thin films of amorphous aluminium trifluoride. Smart Acquisition has been used to expose lines to the electron beam, followed by analysis of the structures created by line-integrating EELS acquisitions, and the results are compared to those derived from a standard EELS linescan. High angle annular dark-field images show clear reductions in damage for the Smart Acquisition areas compared to the conventional linescan, and the Smart Acquisition low loss EEL spectra are more representative of the undamaged material than those derived using a conventional linescan. Atomically resolved EELS of all four elements of CaNdTiO show the high resolution capabilities of Smart Acquisition.

  15. Temporal auditory processing in elders

    Directory of Open Access Journals (Sweden)

    Azzolini, Vanuza Conceição

    2010-03-01

    Full Text Available Introduction: In the aging process, all structures of the organism undergo changes, affecting the quality of hearing and of comprehension. The hearing loss that occurs as a consequence of this process reduces communicative function and also leads to withdrawal from social relationships. Objective: To compare temporal auditory processing performance between elderly individuals with and without hearing loss. Method: This is a prospective, cross-sectional, diagnostic field study. Twenty-one elders (16 women and 5 men, aged 60 to 81 years) were analyzed, divided into two groups: a group "without hearing loss" (n = 13), with normal auditory thresholds or hearing loss restricted to isolated frequencies, and a group "with hearing loss" (n = 8), with sensorineural hearing loss ranging from mild to moderately severe. Both groups performed the frequency (PPS) and duration (DPS) pattern tests, to evaluate temporal sequencing ability, and the Random Gap Detection Test (RGDT), to evaluate temporal resolution ability. Results: There was no statistically significant difference between the groups on the DPS and RGDT tests. Temporal sequencing ability was significantly better in the group without hearing loss when evaluated by the PPS test in the "humming" condition, and this difference grew significantly with increasing age. Conclusion: There was no difference in temporal auditory processing between the groups.

  16. An original approach to data acquisition CHADAC

    CERN Document Server

    CERN. Geneva

    1981-01-01

    Many labs try to boost existing data acquisition systems by inserting high-performance intelligent devices at the important nodes of the system's structure, a strategy that finds its limits in the system's architecture. The CHADAC project proposes a simple and efficient solution to this problem, using a modular multiprocessor architecture. CHADAC's main features are: parallel acquisition of data (CHADAC is fast: it dedicates one processor per branch, and each processor can read and store one 16-bit word in 800 ns); an original structure (each processor can work in its own private memory, in its own dual-access shared memory, and in the shared memory of any other processor, with simple and fast communications between processors also provided by local DMAs); and flexibility (each processor is autonomous and may be used as an independent acquisition system for a branch by connecting local peripherals to it; the addition of fast trigger logic is possible). By its architecture and performance, CHADAC is designed to provide a g...

  17. Estimating liver perfusion from free-breathing continuously acquired dynamic gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid-enhanced acquisition with compressed sensing reconstruction.

    Science.gov (United States)

    Chandarana, Hersh; Block, Tobias Kai; Ream, Justin; Mikheev, Artem; Sigal, Samuel H; Otazo, Ricardo; Rusinek, Henry

    2015-02-01

    The purpose of this study was to estimate perfusion metrics in healthy and cirrhotic liver with pharmacokinetic modeling of high-temporal resolution reconstruction of continuously acquired free-breathing gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid-enhanced acquisition in patients undergoing clinically indicated liver magnetic resonance imaging. In this Health Insurance Portability and Accountability Act-compliant prospective study, 9 cirrhotic and 10 noncirrhotic patients underwent clinical magnetic resonance imaging, which included a continuously acquired radial stack-of-stars 3-dimensional gradient recalled echo sequence with golden-angle ordering scheme in free breathing during contrast injection. A total of 1904 radial spokes were acquired continuously in 318 to 340 seconds. High-temporal resolution data sets were formed by grouping 13 spokes per frame for a temporal resolution of 2.2 to 2.4 seconds, which were reconstructed using the golden-angle radial sparse parallel technique that combines compressed sensing and parallel imaging. High-temporal resolution reconstructions were evaluated by a board-certified radiologist to generate gadolinium concentration-time curves in the aorta (arterial input function), portal vein (venous input function), and liver, which were fitted to a dual-input dual-compartment model to estimate liver perfusion metrics that were compared between cirrhotic and noncirrhotic livers. The cirrhotic livers had significantly lower total plasma flow (70.1 ± 10.1 versus 103.1 ± 24.3 mL/min per 100 mL), a higher mean transit time (24.4 ± 4.7 versus 15.7 ± 3.4 seconds), and a lower hepatocellular uptake rate (3.03 ± 2.1 versus 6.53 ± 2.4 100/min; P < 0.05). Liver perfusion metrics can be estimated from free-breathing dynamic acquisition performed for every clinical examination without additional contrast injection or time. This is a novel paradigm for dynamic liver imaging.

  18. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  19. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
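
    The two-phase scheme claimed above can be simulated sequentially; in this sketch, 1-D interval objects and equal-width grid slabs stand in for the general case, and each per-chunk loop is the work one processor would do:

```python
def populate_grid(objects, n, lo, hi):
    # Phase 1: split the objects into n chunks (one per "processor"); each
    # chunk tags its objects with every grid portion that bounds them.
    # Phase 2: each "processor" owns one portion and gathers its objects.
    # Objects are 1-D intervals (a, b) inside [lo, hi); portions are equal slabs.
    width = (hi - lo) / n
    chunks = [objects[i::n] for i in range(n)]       # phase-1 work split
    tagged = []                                      # (portion, object) pairs
    for chunk in chunks:
        for (a, b) in chunk:
            first = max(0, min(n - 1, int((a - lo) / width)))
            last = max(0, min(n - 1, int((b - lo) / width)))
            for portion in range(first, last + 1):
                tagged.append((portion, (a, b)))
    grid = [[] for _ in range(n)]                    # phase-2 gather
    for portion, obj in tagged:
        grid[portion].append(obj)
    return grid
```

    An object spanning a portion boundary is listed in every portion it touches, matching the claim's "at least partially bounded" wording.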

  20. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  1. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations, and finally, you may study the reasoning behind each of them. At the end of the text, we give...

  2. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, Stewart platforms stand out as the oldest: fast, solid, and precise. This work outlines a few main elements of Stewart platforms. It begins with the geometry of the platform and its kinematic elements, and then presents a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform mechanism. The kinematics of the mobile elements is then recorded by a rotation-matrix method. If a structural motoelement consists of two moving elements that translate relative to each other, it is more convenient, for the drive train and especially for the dynamics, to represent the motoelement as a single moving component. We thus have seven moving parts (the six motoelements, or feet, plus the mobile platform) and one fixed base.

  3. Slow phasic changes in nucleus accumbens dopamine release during fixed ratio acquisition: a microdialysis study.

    Science.gov (United States)

    Segovia, K N; Correa, M; Salamone, J D

    2011-11-24

    Nucleus accumbens dopamine (DA) is a critical component of the brain circuitry regulating behavioral output during reinforcement-seeking behavior. Several studies have investigated the characteristics of accumbens DA release during the performance of well-learned operant behaviors, but relatively few have focused on the initial acquisition of particular instrumental behaviors or operant schedules. The present experiments focused on the initial acquisition of operant performance on a reinforcement schedule by studying the transition from a fixed ratio 1 (FR1) schedule to another operant schedule with a higher ratio requirement (i.e. fixed ratio 5 [FR5]). Microdialysis sessions were conducted in different groups of rats that were tested on either the FR1 schedule; the first, second, or third day of FR5 training; or after weeks of FR5 training. Consistent with previous studies, well-trained rats performing on the FR5 schedule after weeks of training showed significant increases in extracellular DA in both core and shell subregions of nucleus accumbens during the behavioral session. On the first day of FR5 training, there was a substantial increase in DA release in nucleus accumbens shell (i.e. approximately 300% of baseline). In contrast, accumbens core DA release was greatest on the second day of FR5 training. In parallel experiments, DA release in core and shell subregions did not significantly increase during free consumption of the same high carbohydrate food pellets that were used in the operant experiments, despite the very high levels of food intake in experienced rats. However, in rats exposed to the high-carbohydrate food for the first time, there was a tendency for extracellular DA to show a small increase. 
These results demonstrate that transient increases in accumbens DA release occur during the initial acquisition of ratio performance, and suggest that core and shell subregions show different temporal patterns during the acquisition of instrumental behavior.

  4. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  5. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation...

  6. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster...

  7. Temporal Glare

    DEFF Research Database (Denmark)

    Ritschel, Tobias; Ihrke, Matthias; Frisvad, Jeppe Revall

    2009-01-01

    Glare is a consequence of light scattered within the human eye when looking at bright light sources. This effect can be exploited for tone mapping since adding glare to the depiction of high-dynamic range (HDR) imagery on a low-dynamic range (LDR) medium can dramatically increase perceived contrast. Even though most, if not all, subjects report perceiving glare as a bright pattern that fluctuates in time, up to now it has only been modeled as a static phenomenon. We argue that the temporal properties of glare are a strong means to increase perceived brightness and to produce realistic results when adding dynamic glare to initially static HDR images. By conducting psychophysical studies, we validate that our method improves perceived brightness and that dynamic glare-renderings are often perceived as more attractive depending on the chosen scene.

  8. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Science.gov (United States)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, in particular, the availability of and need for processing SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm consisting of the minimization of a cost function, achieved by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, it is a parallel Jacobi-class algorithm with alternated minimizations. This strategy is known as chessboard (red-black) ordering: red pixels are mutually independent and can be updated in parallel within one iteration, and black pixels can likewise be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different parallel architectures: multicore CPU, Xeon Phi coprocessor, and Nvidia graphics processing unit (GPU). In all cases, we obtain superior performance of our parallel algorithm when compared with the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
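
    The chessboard ordering can be sketched with NumPy masks; here a Laplace smoothing update stands in for the accumulation-of-residual-maps cost function (the paper's actual objective differs), and each masked assignment is the step that runs in parallel:

```python
import numpy as np

def red_black_smooth(u, iters=200):
    # Chessboard (red-black) ordering: pixels of one color have only
    # other-color neighbors, so a whole color can be updated at once;
    # the two colors alternate, as in the paper's Jacobi-class scheme.
    u = u.copy()
    i, j = np.indices(u.shape)
    interior = (i > 0) & (i < u.shape[0] - 1) & (j > 0) & (j < u.shape[1] - 1)
    for _ in range(iters):
        for color in (0, 1):
            mask = interior & ((i + j) % 2 == color)
            avg = np.zeros_like(u)
            avg[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                      + u[1:-1, :-2] + u[1:-1, 2:])
            u[mask] = avg[mask]   # all same-color pixels updated together
    return u
```

    With fixed boundary values, repeated red-black sweeps converge to the harmonic interior; the masked update is exactly the independence property the abstract exploits.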

  9. Front-end data processing the SLD data acquisition system

    International Nuclear Information System (INIS)

    Nielsen, B.S.

    1986-07-01

    The data acquisition system for the SLD detector will make extensive use of parallelism at the front-end level. Fastbus acquisition modules are being built with powerful processing capabilities for calibration, data reduction and further pre-processing of the large amount of analog data handled by each module. This paper describes the read-out electronics chain and the data pre-processing system adapted for most of the detector channels, exemplified by the central drift chamber waveform digitization and processing system

  10. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, as in CT/MR image reconstruction or in DSA real-time subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter, which can serve as an example for the development of such algorithms. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on how to find the appropriate algorithms. Finally, some results on computation time and on the usefulness of median filtering in radiographic imaging are given
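
    Viewing the median filter as a complete sort of the k window values generalizes directly to rank-order operators: rank k//2 is the median, rank 0 an erosion, rank k-1 a dilation. A small serial sketch (each output pixel is independent, which is what maps onto parallel hardware):

```python
import numpy as np

def rank_filter(img, rank, size=3):
    # Rank-order filter over a size x size window: sort the k = size*size
    # window values and keep the one at the given rank. rank = k // 2 is
    # the 2-D median filter discussed in the paper.
    h, w = img.shape
    r = size // 2
    padded = np.pad(img, r, mode='edge')   # replicate borders
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + size, j:j + size].ravel()
            out[i, j] = np.sort(window)[rank]
    return out
```

    A single bright outlier pixel survives dilation (rank 8 of 9) but is removed by the median (rank 4), which is why median filtering suppresses impulse noise in radiographic images.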

  11. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
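
    The tables described here are easy to generate: resistors in parallel combine as 1/R = 1/R₁ + 1/R₂, and exact rational arithmetic picks out the whole-number totals. A short sketch (the candidate values are illustrative, not the article's):

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def parallel(*rs):
    # 1/R_total = 1/R1 + 1/R2 + ... for resistors in parallel.
    return 1 / sum(Fraction(1, r) for r in rs)

def whole_number_pairs(candidates):
    # Pairs whose parallel combination is a whole number of ohms,
    # the kind of table the article suggests preparing for students.
    return [(a, b, parallel(a, b))
            for a, b in combinations_with_replacement(candidates, 2)
            if parallel(a, b).denominator == 1]
```

    For example, 6 Ω and 3 Ω in parallel give exactly 2 Ω, while 3 Ω and 4 Ω give 12/7 Ω and are excluded from the table.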

  12. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The objects of research are tools that support the development of parallel programs in C/C++. Methods and software that automate the process of designing parallel applications are proposed.

  13. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
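
    The classic parallel solution to list ranking is pointer jumping (Wyllie's algorithm), sketched here as a serial simulation of the parallel rounds; the PEM-specific algorithm in the paper is more involved:

```python
from math import ceil, log2

def list_rank(succ):
    # succ[i] is the successor of node i; the tail points to itself.
    # In each round, every node (independently, i.e. "in parallel") adds
    # the rank of the node it points to and jumps its pointer ahead,
    # doubling the covered distance; ceil(log2 n) rounds suffice.
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = list(succ)
    for _ in range(ceil(log2(n)) if n > 1 else 0):
        # Both comprehensions read the old arrays, modeling a synchronous
        # parallel step.
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank   # rank[i] = number of links from node i to the tail
```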

  14. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Parallel channels interactions are examined. For experimental researches of nonstationary regimes flow in three parallel vertical channels results of phenomenon analysis and mechanisms of parallel channel interaction for adiabatic condition of one-phase fluid and two-phase mixture flow are shown. (author)

  15. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    Science.gov (United States)

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  16. Quantum Temporal Imaging

    OpenAIRE

    Tsang, Mankei; Psaltis, Demetri

    2006-01-01

    The concept of quantum temporal imaging is proposed to manipulate the temporal correlation of entangled photons. In particular, we show that time correlation and anticorrelation can be converted to each other using quantum temporal imaging.

  17. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  18. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  19. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  20. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items in one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...
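The packing of several data items per register that the talk describes can be imitated at a higher level with array-oriented ("structure of arrays") code, where one vectorized operation advances all tracks at once. This toy propagation step only illustrates the data layout; it is not the actual Kalman-filter track fit:

```python
import numpy as np

# SIMD-style data layout: instead of fitting one track at a time, store each
# state component as a contiguous array over all tracks ("structure of arrays"),
# so one vector operation advances every track per step.
n_tracks = 1 << 16
rng = np.random.default_rng(0)
x = rng.normal(size=n_tracks)    # position of each track (toy units)
tx = rng.normal(size=n_tracks)   # slope dx/dz of each track
dz = 5.0                         # propagation distance along the beam axis

x_new = x + tx * dz              # one vectorized operation updates all tracks
print(x_new.shape)
```

With contiguous per-component arrays, the compiler or runtime can map the update onto SIMD registers, which is the same locality argument the talk makes for the vectorized fit.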

  1. Acquisition Research Program Homepage

    OpenAIRE

    2015-01-01

    Includes an image of the main page on this date and compressed file containing additional web pages. Established in 2003, Naval Postgraduate School’s (NPS) Acquisition Research Program provides leadership in innovation, creative problem solving and an ongoing dialogue, contributing to the evolution of Department of Defense acquisition strategies.

  2. Making Acquisition Measurable

    Science.gov (United States)

    2011-04-30

    Corporation. All rights reserved End Users Administrator/ Maintainer (A/M) Subject Matter Expert ( SME ) Trainer/ Instructor Manager, Evaluator, Supervisor... CMMI ) - Acquisition (AQ) © 2011 The MITRE Corporation. All rights reserved 13 CMMI -Development Incremental iterative development (planning & execution...objectives Constructing games highlighting particular aspects of proposed CCOD® acquisition, and conducting exercises with Subject Matter Experts ( SMEs

  3. An embedded control and acquisition system for multichannel detectors

    International Nuclear Information System (INIS)

    Gori, L.; Tommasini, R.; Cautero, G.; Giuressi, D.; Barnaba, M.; Accardo, A.; Carrato, S.; Paolucci, G.

    1999-01-01

    We present a pulse counting multichannel data acquisition system, characterized by a high number of high speed acquisition channels and by a modular, embedded system architecture. The former leads to very fast acquisitions and allows sequences of snapshots to be obtained for the study of time dependent phenomena. The latter, thanks to the integration of a CPU into the system, provides high computational capabilities, so that interfacing with the user computer is very simple and user-friendly. Moreover, the user computer is freed from control and acquisition tasks. The system has been developed for one of the beamlines of the third generation synchrotron radiation source ELETTRA and, because of its modular architecture, can be useful in various other kinds of experiments where parallel acquisition, high data rates, and user friendliness are required. First experimental results on a double pass hemispherical electron analyser provided with a 96 channel detector confirm the validity of the approach. (author)

  4. Mergers and Acquisitions

    DEFF Research Database (Denmark)

    Risberg, Annette

    Introduction to the study of mergers and acquisitions. This book provides an understanding of the mergers and acquisitions process, how and why they occur, and also the broader implications for organizations. It presents issues including motives and planning, partner selection, integration......, employee experiences and communication. Mergers and acquisitions remain one of the most common forms of growth, yet they present considerable challenges for the companies and management involved. The effects on stakeholders, including shareholders, managers and employees, must be considered as well...... by editorial commentaries and reflects the important organizational and behavioural aspects which have often been ignored in the past. By providing this in-depth understanding of the mergers and acquisitions process, the reader understands not only how and why mergers and acquisitions occur, but also...

  5. Data Acquisition System

    International Nuclear Information System (INIS)

    Cirstea, C.D.; Buda, S.I.; Constantin, F.

    2005-01-01

    This paper deals with a multi parametric acquisition system developed for a four input Analog to Digital Converter working in CAMAC Standard. The acquisition software is built in MS Visual C++ on a standard PC with a USB interface. It has a visual interface which permits Start/Stop of the acquisition, setting the type of acquisition (True/Live time), the time and various menus for primary data acquisition. The spectrum is dynamically visualized with a moving cursor indicating the content and position. The microcontroller PIC16C765 is used for data transfer from ADC to PC; The microcontroller and the software create an embedded system which emulates the CAMAC protocol programming the 4 input ADC for operating modes ('zero suppression', 'addressed' and 'sequential') and handling the data transfers from ADC to its internal memory. From its memory the data is transferred into the PC by the USB interface. The work is in progress. (authors)

  6. Data acquisition system

    International Nuclear Information System (INIS)

    Cirstea, D.C.; Buda, S.I.; Constantin, F.

    2005-01-01

    The topic of this paper deals with a multi parametric acquisition system developed around a four input Analog to Digital Converter working in CAMAC Standard. The acquisition software is built in MS Visual C++ on a standard PC with a USB interface. It has a visual interface which permits Start/Stop of the acquisition, setting the type of acquisition (True/Live time), the time and various menus for primary data acquisition. The spectrum is dynamically visualized with a moving cursor indicating the content and position. The microcontroller PIC16C765 is used for data transfer from ADC to PC; The microcontroller and the software create an embedded system which emulates the CAMAC protocol programming, the 4 input ADC for operating modes ('zero suppression', 'addressed' and 'sequential') and handling the data transfers from ADC to its internal memory. From its memory the data is transferred into the PC by the USB interface. The work is in progress. (authors)

  7. Unified dataflow model for the analysis of data and pipeline parallelism, and buffer sizing

    NARCIS (Netherlands)

    Hausmans, J.P.H.M.; Geuns, S.J.; Wiggers, M.H.; Bekooij, Marco Jan Gerrit

    2014-01-01

    Real-time stream processing applications such as software defined radios are usually executed concurrently on multiprocessor systems. Exploiting coarse-grained data parallelism by duplicating tasks is often required, besides pipeline parallelism, to meet the temporal constraints of the applications.

  8. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  9. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
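The serial-versus-parallel contrast in the abstract, a product of matrices versus a sum of matrices, can be made concrete with Jones calculus. The weights and optical elements below are illustrative, not the experimental values:

```python
import numpy as np

# Serial architecture: elements applied one after another -> matrix product.
# Parallel architecture (as in the abstract): spatially separated components
# modulated independently and then beam-combined -> weighted matrix sum.
H = np.array([[1, 0], [0, 0]], dtype=complex)     # horizontal polarizer (Jones matrix)
V = np.array([[0, 0], [0, 1]], dtype=complex)     # vertical polarizer
QWP = np.array([[1, 0], [0, 1j]], dtype=complex)  # quarter-wave plate, fast axis horizontal

w = [0.5, 0.3, 0.2]  # intensity weights set by the intensity modulator (made-up values)
parallel_element = w[0] * H + w[1] * V + w[2] * QWP  # sum of matrices, not a product

e_in = np.array([1, 1], dtype=complex) / np.sqrt(2)  # 45-degree linear input
e_out = parallel_element @ e_in
print(e_out)
```

Because the output is a sum, each term can be driven by a fast intensity modulator independently, which is why the abstract ties performance to intensity-modulation technology rather than to polarization optics.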

  10. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Dynamic surface-pressure instrumentation for rods in parallel flow

    International Nuclear Information System (INIS)

    Mulcahy, T.M.; Lawrence, W.

    1979-01-01

    Methods employed and experience gained in measuring random fluid boundary layer pressures on the surface of a small diameter cylindrical rod subject to dense, nonhomogeneous, turbulent, parallel flow in a relatively noise-contaminated flow loop are described. Emphasis is placed on identification of instrumentation problems; description of transducer construction, mounting, and waterproofing; and the pretest calibration required to achieve instrumentation capable of reliable data acquisition

  12. 3D Hyperpolarized C-13 EPI with Calibrationless Parallel Imaging

    DEFF Research Database (Denmark)

    Gordon, Jeremy W.; Hansen, Rie Beck; Shin, Peter J.

    2018-01-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and tem...... strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism....

  13. Indexing mergers and acquisitions

    OpenAIRE

    Gang, Jianhua; Guo, Jie (Michael); Hu, Nan; Li, Xi

    2017-01-01

    We measure the efficiency of mergers and acquisitions by putting forward an index (the ‘M&A Index’) based on stochastic frontier analysis. The M&A Index is calculated for each takeover deal and is standardized between 0 and 1. An acquisition with a higher index encompasses higher efficiency. We find that takeover bids with higher M&A Indices are more likely to succeed. Moreover, the M&A Index shows a strong and positive relation with the acquirers’ post-acquisition stock perfo...

  14. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with the effort expended. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  15. ENHANCING THE INTERNATIONALIZATION OF THE GLOBAL INSURANCE MARKET: CHANGING DRIVERS OF MERGERS AND ACQUISITIONS

    Directory of Open Access Journals (Sweden)

    D. Rasshyvalov

    2014-03-01

    Full Text Available One-third of worldwide mergers and acquisitions involve firms from different countries, making M&A one of the key drivers of internationalization. Over the past five years, cross-border insurance merger and acquisition activity worldwide has paralleled the deep financial crisis.

  16. Towards General Temporal Aggregation

    DEFF Research Database (Denmark)

    Boehlen, Michael H.; Gamper, Johann; Jensen, Christian Søndergaard

    2008-01-01

    associated with the management of temporal data. Indeed, temporal aggregation is complex and among the most difficult, and thus interesting, temporal functionality to support. This paper presents a general framework for temporal aggregation that accommodates existing kinds of aggregation, and it identifies...
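One common building block of such frameworks, instant temporal aggregation (counting how many tuples are valid over each constant interval of time), can be sketched as follows; the valid-time intervals are illustrative:

```python
from collections import Counter

# Instant temporal aggregation sketch: sweep over the interval endpoints and
# emit one (start, end, count) triple per constant interval.
tuples = [(1, 5), (3, 9), (4, 6)]   # (start, end) valid-time intervals, end exclusive

events = Counter()
for s, e in tuples:
    events[s] += 1                  # a tuple becomes valid at its start
    events[e] -= 1                  # and stops being valid at its end

intervals, count, prev = [], 0, None
for t in sorted(events):
    if prev is not None and count:  # close the constant interval ending at t
        intervals.append((prev, t, count))
    count += events[t]
    prev = t
print(intervals)
```

Each emitted triple is an interval over which the aggregate (here COUNT) is constant, which is the usual result shape for instant temporal aggregation.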

  17. Acquisition Workforce Annual Report 2006

    Data.gov (United States)

    General Services Administration — This is the Federal Acquisition Institute's (FAI's) Annual demographic report on the Federal acquisition workforce, showing trends by occupational series, employment...

  18. Acquisition Workforce Annual Report 2008

    Data.gov (United States)

    General Services Administration — This is the Federal Acquisition Institute's (FAI's) Annual demographic report on the Federal acquisition workforce, showing trends by occupational series, employment...

  19. The Acquisition of Particles

    African Journals Online (AJOL)

    process of language acquisition on the basis of linguistic evidence the child is exposed to. ..... particle verbs are recognized in language processing differs from the way morphologically ..... In Natural Language and Linguistic Theory 11.

  20. High speed data acquisition

    International Nuclear Information System (INIS)

    Cooper, P.S.

    1997-07-01

    A general introduction to high speed data acquisition system techniques in modern particle physics experiments is given. Examples are drawn from the SELEX (E781) high statistics charmed baryon production and decay experiment now taking data at Fermilab.

  1. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes the work of an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and it should be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
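The master-slave, cycle-based pattern the paper describes can be sketched as follows. Threads stand in here for the framework's message-passing processes, and `slave_work` is a placeholder computation:

```python
from concurrent.futures import ThreadPoolExecutor

def slave_work(chunk):
    """One slave's share of a cycle; a stand-in computation (sum of squares)."""
    return sum(v * v for v in chunk)

def run_cycles(data, n_slaves=4, n_cycles=3):
    """Master loop: each cycle scatters chunks to the slaves, then gathers
    and merges their partial results, as in a master-slave scheme."""
    chunks = [data[i::n_slaves] for i in range(n_slaves)]
    total = 0
    with ThreadPoolExecutor(max_workers=n_slaves) as pool:
        for _ in range(n_cycles):
            total += sum(pool.map(slave_work, chunks))  # scatter / gather
    return total

print(run_cycles(list(range(100))))
```

The two requirements named in the abstract map directly onto this skeleton: the outer loop is the "run in cycles" requirement, and the chunking is the "splittable work" requirement.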

  2. Acquisition and analysis of throughput rates for an operational department-wide PACS

    Science.gov (United States)

    Stewart, Brent K.; Taira, Ricky K.; Dwyer, Samuel J., III; Huang, H. K.

    1992-07-01

    The accurate prediction of image throughput is a critical issue in planning for and acquisition of any successful Picture Archiving and Communication System (PACS). Bottlenecks or design flaws can render an expensive PACS implementation useless. This manuscript presents a method for accurately predicting and measuring image throughput of a PACS design. To create the simulation model of the planned or implemented PACS, it must first be decomposed into principal tasks. We have decomposed the entire PACS image management chain into eight subsystems. These subsystems include network transfers over three different networks (Ethernet, FDDI and UltraNet) and five software programs and/or queues: (1) transfer of image data from the imaging modality computer to the image acquisition/reformatting computer; (2) reformatting the image data into a standard image format; (3) transferring the image data from the acquisition/reformatting computer to the image archive computer; (4) updating a relational database management system over the network; (5) image processing-- rotation and optimal gray-scale lookup table calculation; (6) request that the image be archived; (7) image transfer from the image archive computer to a designated image display workstation; and (8) update the local database on the image display station, separate the image header from the image data and store the image data on a parallel disk array. Through development of an event logging facility and implementation of a network management package we have acquired throughput data for each subsystem in the PACS chain. In addition, from our PACS relational database management system, we have distilled the traffic generation patterns (temporal, file size and destination) of our imaging modality devices. This data has been input into a simulation modeling package (Block Oriented Network Simulator-- BONeS) to estimate the characteristics of the modeled PACS, e.g., the throughput rates and delay time. This simulation

  3. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  4. Extended data acquisition support at GSI

    International Nuclear Information System (INIS)

    Marinescu, D.C.; Busch, F.; Hultzsch, H.; Lowsky, J.; Richter, M.

    1984-01-01

    The Experiment Data Acquisition and Analysis System (EDAS) of GSI, designed to support the data processing associated with nuclear physics experiments, provides three modes of operation: real-time, interactive replay and batch replay. The real-time mode is used for data acquisition and data analysis during an experiment performed at the heavy ion accelerator at GSI. An experiment may be performed either in Stand Alone Mode, using only the Experiment Computers, or in Extended Mode using all computing resources available. The Extended Mode combines the advantages of the real-time response of a dedicated minicomputer with the availability of computing resources in a large computing environment. This paper first gives an overview of EDAS and presents the GSI High Speed Data Acquisition Network. Data Acquisition Modes and the Extended Mode are then introduced. The structure of the system components, their implementation and the functions pertinent to the Extended Mode are presented. The control functions of the Experiment Computer sub-system are discussed in detail. Two aspects of the design of the sub-system running on the mainframe are stressed, namely the use of a multi-user installation for real-time processing and the use of a high level programming language, PL/I, as an implementation language for a system which uses parallel processing. The experience accumulated is summarized in a number of conclusions

  5. An original approach to data acquisition: CHADAC

    International Nuclear Information System (INIS)

    Huppert, M.; Nayman, P.; Rivoal, M.

    1981-01-01

    Many labs try to boost existing data acquisition systems by inserting high performance intelligent devices in the important nodes of the system's structure. This strategy finds its limits in the system's architecture. The CHADAC project proposes a simple and efficient solution to this problem, using a multiprocessor modular architecture. CHADAC's main features are: a) Parallel acquisition of data: CHADAC is fast; it dedicates one processor per branch; each processor can read and store one 16 bit word in 800 ns. b) Original structure: each processor can work in its own private memory, in its own shared memory (double access) and in the shared memory of any other processor (this feature being particularly useful to avoid wasteful data transfers). Simple and fast communications between processors are also provided by local DMAs. c) Flexibility: each processor is autonomous and may be used as an independent acquisition system for a branch, by connecting local peripherals to it. Adjunction of fast trigger logic is possible. By its architecture and performance, CHADAC is designed to provide good support for local intelligent devices and transfer operators developed elsewhere, providing a way to implement systems well fitted to various types of data acquisition. (orig.)

  6. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in case of nucleobases Y and Z a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one...... of the phosphate groups in Watson-Crick strand. Also, it was shown that the nucleobase Y made a good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with LNA moiety was forced to twist out of plane of Watson-Crick base pair which......The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When, the anti...

  7. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
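
    The consensual decision step can be sketched as a weighted combination of per-stage class scores; the stage outputs and weights below are invented numbers for illustration, where a real PCNN would obtain them from trained stage networks and an optimization step.

    ```python
    def consensual_decision(stage_outputs, weights):
        """Weight and sum the class-score vectors produced by the stage
        networks, then pick the class with the highest combined score."""
        n_classes = len(stage_outputs[0])
        combined = [0.0] * n_classes
        for scores, w in zip(stage_outputs, weights):
            for k in range(n_classes):
                combined[k] += w * scores[k]
        return combined.index(max(combined)), combined


    # Three stage networks classifying one sample over four classes; each
    # stage saw a differently transformed copy of the same input.
    stages = [
        [0.7, 0.1, 0.1, 0.1],
        [0.2, 0.5, 0.2, 0.1],
        [0.6, 0.2, 0.1, 0.1],
    ]
    label, scores = consensual_decision(stages, weights=[0.5, 0.2, 0.3])
    ```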

  8. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D

    2003-01-01

    .... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population-based global optimizer, the Particle Swarm...
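
    The pattern being described is a synchronous, coarse-grained parallel PSO in which the costly fitness evaluations are farmed out concurrently each iteration. The sketch below is a generic stand-in (a thread pool instead of the MPI-style worker pool such a study would use, and a simple sphere function instead of a biomechanical model); all parameter values are illustrative.

    ```python
    import random
    from concurrent.futures import ThreadPoolExecutor


    def sphere(x):
        """Stand-in for an expensive fitness evaluation."""
        return sum(v * v for v in x)


    def parallel_pso(dim=3, n_particles=8, iters=30, seed=1):
        rng = random.Random(seed)
        pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        with ThreadPoolExecutor() as pool:
            # evaluate the whole swarm concurrently (the expensive step)
            pbest_f = list(pool.map(sphere, pbest))
            gbest = pbest[pbest_f.index(min(pbest_f))][:]
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        vel[i][d] = (0.7 * vel[i][d]
                                     + 1.4 * rng.random() * (pbest[i][d] - pos[i][d])
                                     + 1.4 * rng.random() * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                fits = list(pool.map(sphere, pos))  # concurrent evaluations
                for i, f in enumerate(fits):
                    if f < pbest_f[i]:
                        pbest_f[i], pbest[i] = f, pos[i][:]
                gbest = pbest[pbest_f.index(min(pbest_f))][:]
        return sphere(gbest)
    ```

    The personal-best bookkeeping stays serial here; only the fitness calls run in parallel, which is where such problems spend nearly all their time.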

  9. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managing…

  10. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    …adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral…

  11. Automatic parallelization of while-Loops using speculative execution

    International Nuclear Information System (INIS)

    Collard, J.F.

    1995-01-01

    Automatic parallelization of imperative sequential programs has focused on nests of for-loops. The most recent techniques find an affine mapping with respect to the loop indices that simultaneously captures the temporal and spatial properties of the parallelized program. Such a mapping is usually called a "space-time transformation." This work describes an extension of these techniques to while-loops using speculative execution. We show that space-time transformations are a good framework for summing up previous restructuring techniques for while-loops, such as pipelining. Moreover, we show that these transformations can be derived and applied automatically.
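
    The idea of speculatively executing a while-loop can be sketched as follows, assuming (hypothetically) that the loop body for iteration i is independent of earlier iterations and only the trip count is data-dependent; blocks of future iterations are evaluated ahead of time and any work past the real exit is squashed.

    ```python
    def speculative_while(cond, body, block=4, max_iter=1000):
        """Run body(i) for i = 0, 1, 2, ... while cond(i) holds, evaluating a
        block of future iterations speculatively (in a real system, on
        separate processors) and discarding any overshoot past the exit."""
        results = []
        i = 0
        while i < max_iter:
            # Speculatively evaluate a whole block of iterations at once.
            spec = [(j, cond(j)) for j in range(i, i + block)]
            for j, ok in spec:
                if not ok:
                    return results  # squash speculative work past the exit
                results.append(body(j))
            i += block
        return results
    ```

    For example, squaring indices while the square stays below 50 commits exactly the iterations a sequential while-loop would have run, even though the last block was partly wasted speculation.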

  12. The Spacing Effect and Its Relevance to Second Language Acquisition

    Science.gov (United States)

    Rogers, John

    2017-01-01

    This commentary discusses some theoretical and methodological issues related to research on the spacing effect in second language acquisition research (SLA). There has been a growing interest in SLA in how the temporal distribution of input might impact language development. SLA research in this area has frequently drawn upon the rich field of…

  13. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is an urgent question today. Legalization of parallel import in Russia is expedient; this conclusion is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of such a decision and to apply remedies to minimize them.

  14. Temporal resolution and motion artifacts in single-source and dual-source cardiac CT

    International Nuclear Information System (INIS)

    Schöndube, Harald; Allmendinger, Thomas; Stierstorfer, Karl; Bruder, Herbert; Flohr, Thomas

    2013-01-01

    Purpose: The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm. Methods: To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, either taking the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as a base of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting as well as using a parallel-beam rebinning step are considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT. Results: While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality. The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same amount of data being used.
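
    The two measures can be made concrete on a sampled weighting function; the triangular weight below is a hypothetical example rather than one of the paper's reconstruction weights.

    ```python
    def width_metrics(ts, w):
        """Total temporal width (support of the weighting function) and
        FWHM (width at half the peak weight) of a sampled weighting function."""
        support = [t for t, v in zip(ts, w) if v > 0]
        half = max(w) / 2.0
        above = [t for t, v in zip(ts, w) if v >= half]
        return max(support) - min(support), max(above) - min(above)


    # Triangular weighting sampled on [-2, 2]: support (-1, 1), peak 1 at t = 0.
    ts = [k / 100.0 for k in range(-200, 201)]
    w = [max(0.0, 1.0 - abs(t)) for t in ts]
    total_tr, fwhm_tr = width_metrics(ts, w)
    ```

    For any peaked weighting the FWHM-TR is necessarily smaller than the total TR, which is why the two measures can rank reconstruction methods differently.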

  15. Temporal resolution and motion artifacts in single-source and dual-source cardiac CT.

    Science.gov (United States)

    Schöndube, Harald; Allmendinger, Thomas; Stierstorfer, Karl; Bruder, Herbert; Flohr, Thomas

    2013-03-01

    The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm. To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, either taking the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as a base of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting as well as using a parallel-beam rebinning step are considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT. While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality. The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same amount of data being used.

  16. Post-Acquisition IT Integration

    DEFF Research Database (Denmark)

    Henningsson, Stefan; Yetton, Philip

    2013-01-01

    The extant research on post-acquisition IT integration analyzes how acquirers realize IT-based value in individual acquisitions. However, serial acquirers make 60% of acquisitions. These acquisitions are not isolated events, but are components in growth-by-acquisition programs. To explain how serial acquirers realize IT-based value, we develop three propositions on the sequential effects on post-acquisition IT integration in acquisition programs. Their combined explanation is that serial acquirers must have a growth-by-acquisition strategy that includes the capability to improve IT integration capabilities, to sustain high alignment across acquisitions and to maintain a scalable IT infrastructure with a flat or decreasing cost structure. We begin the process of validating the three propositions by investigating a longitudinal case study of a growth-by-acquisition program.

  17. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.
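
    The design point, exposing the file's parallel structure instead of hiding it behind a flat Unix-like byte stream, can be caricatured with a toy model; this class is purely illustrative and does not reflect Galley's actual API (which organized files into subfiles and forks).

    ```python
    class ParallelFile:
        """Toy parallel file addressed as explicit per-disk subfiles, so an
        I/O library can place and read data with full knowledge of layout."""

        def __init__(self, n_subfiles):
            self.subfiles = [bytearray() for _ in range(n_subfiles)]

        def append(self, subfile, data):
            self.subfiles[subfile] += data

        def read(self, subfile):
            return bytes(self.subfiles[subfile])


    # An application distributes a matrix row-by-row across two subfiles,
    # then reads one subfile back without touching the other disk.
    f = ParallelFile(2)
    f.append(0, b"row0")
    f.append(1, b"row1")
    ```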

  18. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer
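
    The distribution scheme, dividing the plane-wave components of each state among processors, amounts to a partitioned reduction. The sketch below is illustrative only: threads stand in for the CRAY T3E's processors, and the coefficients are made up.

    ```python
    from concurrent.futures import ThreadPoolExecutor


    def partition(coeffs, n_procs):
        """Block-distribute the plane-wave components of one state."""
        n = len(coeffs)
        chunk = (n + n_procs - 1) // n_procs
        return [coeffs[i:i + chunk] for i in range(0, n, chunk)]


    def distributed_norm_sq(coeffs, n_procs=4):
        """Each 'processor' sums |c_G|^2 over its own block of components;
        the partial sums are then reduced to the full squared norm."""
        parts = partition(coeffs, n_procs)
        with ThreadPoolExecutor(max_workers=n_procs) as pool:
            partials = pool.map(lambda p: sum(abs(c) ** 2 for c in p), parts)
        return sum(partials)
    ```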

  19. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  20. Temporal Coding of Volumetric Imagery

    Science.gov (United States)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration…
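
    The coded-exposure measurement at the heart of this approach can be written in one line per pixel: the camera integrates the video frames through a time-varying mask. The tiny video and mask below are invented for illustration, and the compressive reconstruction that recovers the frames from y is omitted.

    ```python
    def coded_exposure(video, masks):
        """Collapse a (t, x) video into a single coded measurement:
        y[x] = sum over t of mask[t][x] * frame_t[x]."""
        n_t, n_x = len(video), len(video[0])
        return [sum(masks[t][x] * video[t][x] for t in range(n_t))
                for x in range(n_x)]


    video = [[1, 2], [3, 4], [5, 6]]  # three frames, two pixels
    masks = [[1, 0], [0, 1], [1, 1]]  # per-frame, per-pixel shutter code
    y = coded_exposure(video, masks)
    ```

    Because the mask differs per frame, a single readout y retains information about all three frames, which is what makes the later compressive recovery possible.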

  1. The Acoustic and Perceptual Effects of Series and Parallel Processing

    Directory of Open Access Journals (Sweden)

    Melinda C. Anderson

    2009-01-01

    Full Text Available Temporal envelope (TE) cues provide a great deal of speech information. This paper explores how spectral subtraction and dynamic-range compression gain modifications affect TE fluctuations for parallel and series configurations. In parallel processing, algorithms compute gains based on the same input signal, and the gains in dB are summed. In series processing, output from the first algorithm forms the input to the second algorithm. Acoustic measurements show that the parallel arrangement produces more gain fluctuations, introducing more changes to the TE than the series configurations. Intelligibility tests for normal-hearing (NH) and hearing-impaired (HI) listeners show (1) parallel processing gives significantly poorer speech understanding than an unprocessed (UNP) signal and the series arrangement and (2) series processing and UNP yield similar results. Speech quality tests show that UNP is preferred to both parallel and series arrangements, although spectral subtraction is the most preferred. No significant differences exist in sound quality between the series and parallel arrangements, or between the NH group and the HI group. These results indicate that gain modifications affect intelligibility and sound quality differently. Listeners appear to have a higher tolerance for gain modifications with regard to intelligibility, while judgments of sound quality appear to be more affected by smaller amounts of gain modification.
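
    The series/parallel distinction can be sketched with toy gain rules; the spectral-subtraction and compression models below (and their thresholds) are invented stand-ins for the study's hearing-aid algorithms, chosen only to show how the two topologies combine gains differently.

    ```python
    def spectral_subtraction_gain_db(level_db, noise_db=40.0):
        # toy rule: attenuate by however far the level sits below the noise floor
        return -max(0.0, noise_db - level_db)


    def compression_gain_db(level_db, threshold_db=30.0, ratio=3.0):
        # toy compressor: above threshold, reduce the excess by (1 - 1/ratio)
        over = max(0.0, level_db - threshold_db)
        return -over * (1.0 - 1.0 / ratio)


    def parallel_gain(level_db):
        # both algorithms see the same input; their dB gains are summed
        return spectral_subtraction_gain_db(level_db) + compression_gain_db(level_db)


    def series_gain(level_db):
        # output of the first algorithm forms the input to the second
        g1 = spectral_subtraction_gain_db(level_db)
        g2 = compression_gain_db(level_db + g1)
        return g1 + g2
    ```

    With these invented numbers, a 35 dB input gets both attenuations at once in parallel (about -8.3 dB), while in series the first stage pulls the level below the toy compressor's threshold, leaving -5 dB; this mirrors the observation that the parallel arrangement produces larger gain fluctuations.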

  2. Seismic data acquisition systems

    International Nuclear Information System (INIS)

    Kolvankar, V.G.; Nadre, V.N.; Rao, D.S.

    1989-01-01

    Details of seismic data acquisition systems developed at the Bhabha Atomic Research Centre, Bombay are reported. The seismic signals acquired belong to different signal bandwidths in the band from 0.02 Hz to 250 Hz. All these acquisition systems are built around a unique technique of recording multichannel data on to a single track of an audio tape and in digital form. Techniques of how these signals in different bands of frequencies were acquired and recorded are described. Method of detecting seismic signals and its performance is also discussed. Seismic signals acquired in different set-ups are illustrated. Time indexing systems for different set-ups and multichannel waveform display systems which form essential part of the data acquisition systems are also discussed. (author). 13 refs., 6 figs., 1 tab

  3. On Shaft Data Acquisition System (OSDAS)

    Science.gov (United States)

    Pedings, Marc; DeHart, Shawn; Formby, Jason; Naumann, Charles

    2012-01-01

    On Shaft Data Acquisition System (OSDAS) is a rugged, compact, multiple-channel data acquisition computer system that is designed to record data from instrumentation while operating under extreme rotational centrifugal or gravitational acceleration forces. This system, which was developed for the Heritage Fuel Air Turbine Test (HFATT) program, addresses the problem of recording multiple channels of high-sample-rate data on almost any rotating test article by mounting the entire acquisition computer onboard with the turbine test article. With the limited availability of slip ring wires for power and communication, OSDAS utilizes its own resources to provide independent power and amplification for each instrument. Since OSDAS utilizes standard PC technology as well as shared code interfaces with the next-generation, real-time health monitoring system (SPARTAA: Scalable Parallel Architecture for Real-Time Analysis and Acquisition), this system could be expanded beyond its current capabilities, such as providing advanced health monitoring capabilities for the test article. High-conductor-count slip rings are expensive to purchase and maintain, yet provide only a limited number of conductors for routing instrumentation off the article and to a stationary data acquisition system. In addition to being limited to a small number of instruments, slip rings are prone to wear quickly and introduce noise and other undesirable characteristics to the signal data. This led to the development of a system capable of recording high-density instrumentation, at high sample rates, on the test article itself, all while under extreme rotational stress. OSDAS is a fully functional PC-based system with 48 24-bit, high-sample-rate input channels, phase-synchronized, with an onboard storage capacity of over 1/2-terabyte of solid-state storage. This recording system takes a novel approach to the problem of recording multiple channels of instrumentation, integrated with the test…

  4. LEGS data acquisition facility

    International Nuclear Information System (INIS)

    LeVine, M.J.

    1985-01-01

    The data acquisition facility for the LEGS medium energy photonuclear beam line is composed of an auxiliary crate controller (ACC) acting as a front-end processor, loosely coupled to a time-sharing host computer based on a UNIX-like environment. The ACC services all real-time demands in the CAMAC crate: it responds to LAMs generated by data acquisition modules, to keyboard commands, and it refreshes the graphics display at frequent intervals. The host processor is needed only for printing histograms and recording event buffers on magnetic tape. The host also provides the environment for software development. The CAMAC crate is interfaced by a VERSAbus CAMAC branch driver

  5. Acquisition IT Integration

    DEFF Research Database (Denmark)

    Henningsson, Stefan; Øhrgaard, Christian

    2015-01-01

    …of temporary agency workers. Following an analytic induction approach, theoretically grounded in the resource-based view of the firm, we identify the complementary and supplementary roles consultants can assume in acquisition IT integration. Through case studies of three acquirers, we investigate how the acquirers appropriate the use of agency workers as part of their acquisition strategies. For the investigated acquirers, assigning roles to agency workers is contingent on balancing the needs of knowledge induction and knowledge retention, as well as experience richness and in-depth understanding. Composition…

  6. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
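
    The cost of the per-cycle rendezvous can be captured in a toy model, entirely illustrative and not from the paper: if the transport work W parallelizes as W/p while the fission-bank gather cost grows linearly with the processor count p, the cycle time has a minimum at a finite p and increases beyond it.

    ```python
    def cycle_time(work, p, sync_cost):
        """Toy model of one criticality cycle: perfectly parallel transport
        work plus a rendezvous whose cost grows with the processor count."""
        return work / p + sync_cost * p


    def best_processor_count(work, sync_cost, p_max=1024):
        """Processor count that minimizes the modeled cycle time."""
        return min(range(1, p_max + 1), key=lambda p: cycle_time(work, p, sync_cost))
    ```

    With W = 10000 units and a unit gather cost, the model's optimum is 100 processors; adding more makes each cycle slower, which is the qualitative behavior the paper reports.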

  7. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  8. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message-passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
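
    Barrel-sort can be sketched in a few lines: keys are binned by value range into one 'barrel' per processor, so each processor exchanges one large message rather than many small ones, and barrels are then sorted locally. The details below (bin rule, barrel count, serial loop in place of real processors) are illustrative, not the paper's exact algorithm.

    ```python
    def barrel_sort(keys, n_procs=4):
        """Sketch of barrel-sort: partition keys into per-processor
        'barrels' by value range, then sort each barrel locally and
        concatenate the barrels in range order."""
        lo, hi = min(keys), max(keys)
        span = hi - lo + 1
        barrels = [[] for _ in range(n_procs)]
        for k in keys:
            barrels[(k - lo) * n_procs // span].append(k)
        out = []
        for b in barrels:  # each barrel would be sorted on its own processor
            out.extend(sorted(b))
        return out
    ```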

  9. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
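
    The template comparison can be sketched with per-block checksums; the block size, hash choice, and data below are illustrative, and the real system uses an rsync-style protocol with network broadcast rather than this simplified in-memory diff.

    ```python
    import hashlib


    def blocks(data, size=4):
        """Split checkpoint data into fixed-size blocks."""
        return [data[i:i + size] for i in range(0, len(data), size)]


    def delta_checkpoint(template, current, size=4):
        """Compare the current checkpoint's blocks against the template's
        checksums and keep only the blocks that actually changed."""
        t_sums = [hashlib.md5(b).hexdigest() for b in blocks(template, size)]
        delta = {}
        for i, b in enumerate(blocks(current, size)):
            if i >= len(t_sums) or hashlib.md5(b).hexdigest() != t_sums[i]:
                delta[i] = b
        return delta


    template = b"aaaabbbbccccdddd"   # previously stored template checkpoint
    current = b"aaaaXXXXccccdddd"    # this node's new checkpoint data
    delta = delta_checkpoint(template, current)
    ```

    Only one of the four blocks differs, so only that block would need to be transmitted and stored, which is the source of the claimed savings.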

  10. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  11. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost-effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary for it to drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short-circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests.

  12. RETROSPECTIVE DETECTION OF INTERLEAVED SLICE ACQUISITION PARAMETERS FROM FMRI DATA

    Science.gov (United States)

    Parker, David; Rotival, Georges; Laine, Andrew; Razlighi, Qolamreza R.

    2015-01-01

    To minimize slice excitation leakage to adjacent slices, interleaved slice acquisition is nowadays performed regularly in fMRI scanners. In interleaved slice acquisition, the number of slices skipped between two consecutive slice acquisitions is often referred to as the ‘interleave parameter’; the loss of this parameter can be catastrophic for the analysis of fMRI data. In this article we present a method to retrospectively detect the interleave parameter and the axis in which it is applied. Our method relies on the smoothness of the temporal-distance correlation function, which becomes disrupted along the axis on which interleaved slice acquisition is applied. We examined this method on simulated and real data in the presence of fMRI artifacts such as physiological noise, motion, etc. We also examined the reliability of this method in detecting different types of interleave parameters and demonstrated an accuracy of about 94% in more than 1000 real fMRI scans. PMID:26161244
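
    A much-simplified stand-in for the detection idea (the actual method analyzes the smoothness of the temporal-distance correlation function of the fMRI time series; here we only recover the interleave parameter from a known slice-timing order):

    ```python
    def interleaved_order(n_slices, step):
        """Slice acquisition order for a given interleave parameter:
        slices step apart in space are acquired back to back in time."""
        order = []
        for start in range(step):
            order.extend(range(start, n_slices, step))
        return order


    def detect_interleave(order):
        """Infer the interleave parameter as the most common spatial gap
        between temporally consecutive slice acquisitions."""
        gaps = [b - a for a, b in zip(order, order[1:]) if b > a]
        return max(set(gaps), key=gaps.count)
    ```

    Sequential acquisition (step 1) is recovered as a special case, so the same detector distinguishes interleaved from non-interleaved scans.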

  13. OpenMP parallelization of a gridded SWAT (SWATG)

    Science.gov (United States)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term and high-spatial-resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also presents such problems. The time-consuming simulations limit applications of very-high-resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling based on the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km2 watershed with one CPU and a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computations of environmental models are beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.

  14. ACQUISITIONS LIST, MAY 1966.

    Science.gov (United States)

    Harvard Univ., Cambridge, MA. Graduate School of Education.

    THIS ACQUISITIONS LIST IS A BIBLIOGRAPHY OF MATERIAL ON VARIOUS ASPECTS OF EDUCATION. OVER 300 UNANNOTATED REFERENCES ARE PROVIDED FOR DOCUMENTS DATING MAINLY FROM 1960 TO 1966. BOOKS, JOURNALS, REPORT MATERIALS, AND UNPUBLISHED MANUSCRIPTS ARE LISTED UNDER THE FOLLOWING HEADINGS--(1) ACHIEVEMENT, (2) ADOLESCENCE, (3) CHILD DEVELOPMENT, (4)…

  15. MAST data acquisition system

    International Nuclear Information System (INIS)

    Shibaev, S.; Counsell, G.; Cunningham, G.; Manhood, S.J.; Thomas-Davies, N.; Waterhouse, J.

    2006-01-01

    The data acquisition system of the Mega-Amp Spherical Tokamak (MAST) presently collects up to 400 MB of data in about 3000 data items per shot, and subsequent fast growth is expected. Since the start of MAST operations (in 1999) the system has changed dramatically. Though we continue to use legacy CAMAC hardware, newer VME-, PCI-, and PXI-based sub-systems now collect most of the data. All legacy software has been redesigned and new software has been developed. Last year a major system improvement was made: the replacement of the message distribution system. The new message system provides easy connection of any sub-system independently of its platform and serves as a framework for many new applications. A new data acquisition controller provides full control of common sub-systems, central error logging, and data acquisition alarms for the MAST plant. A number of new sub-systems using Linux and Windows OSs on VME, PCI, and PXI platforms have been developed. A new PXI unit has been designed as a base sub-system accommodating any type of data acquisition and control device. Several web applications for real-time MAST monitoring and data presentation have been developed.

  16. Data acquisition techniques

    International Nuclear Information System (INIS)

    Dougherty, R.C.

    1976-01-01

    Testing neutron generators and major subassemblies has undergone a transition in the past few years. Digital information is now used for storage and analysis. The key to the change is the availability of a high-speed digitizer system. The status of the Sandia Laboratory data acquisition and handling system as applied to this area is surveyed. 1 figure

  17. Surviving mergers & acquisitions.

    Science.gov (United States)

    Dixon, Diane L

    2002-01-01

    Mergers and acquisitions are never easy to implement. The health care landscape is a minefield of failed mergers and uneasy alliances generating great turmoil and pain. But some mergers have been successful, creating health systems that benefit the communities they serve. Five prominent leaders offer their advice on minimizing the difficulties of M&As.

  18. General image acquisition parameters

    International Nuclear Information System (INIS)

    Teissier, J.M.; Lopez, F.M.; Langevin, J.F.

    1993-01-01

    The general parameters are of primary importance for achieving image quality in terms of spatial resolution and contrast. They also affect the acquisition time of each sequence. We describe them separately, before combining them in a decision tree gathering the various options available for diagnosis.

  19. Decentralized Blended Acquisition

    NARCIS (Netherlands)

    Berkhout, A.J.

    2013-01-01

    The concept of blending and deblending is reviewed, making use of traditional and dispersed source arrays. The network concept of distributed blended acquisition is introduced. A million-trace robot system is proposed, illustrating that decentralization may bring about a revolution in the way we

  20. MPS Data Acquisition System

    International Nuclear Information System (INIS)

    Eiseman, S.E.; Miller, W.J.

    1975-01-01

    A description is given of the data acquisition system used with the multiparticle spectrometer facility at Brookhaven. Detailed information is provided on that part of the system which connects the detectors to the data handler; namely, the detector electronics, device controller, and device port optical isolator

  1. [Acquisition of arithmetic knowledge].

    Science.gov (United States)

    Fayol, Michel

    2008-01-01

    The focus of this paper is contemporary research on the counting and arithmetical competencies that emerge during infancy, the preschool years, and elementary school. I provide a brief overview of the development of children's conceptual knowledge of arithmetic, their acquisition and use of counting, and how they solve simple arithmetic problems (e.g. 4 + 3).

  2. Second Language Acquisition.

    Science.gov (United States)

    McLaughlin, Barry; Harrington, Michael

    1989-01-01

    A distinction is drawn between representational and processing models of second-language acquisition. The first approach is derived primarily from linguistics, the second from psychology. Both fields, it is argued, need to collaborate more fully, overcoming disciplinary narrowness in order to achieve more fruitful research. (GLR)

  3. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, low self-weight/load ratio, good dynamic behavior, and easy control; hence its range of application domains keeps extending. To find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematics solution and the limitation of link lengths is introduced. This paper analyses the position workspace and orientation workspace of a six-degrees-of-freedom parallel robot. The results show that changing the lengths of the branches of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but does change its position.
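
    The boundary-searching idea, testing each candidate platform position against the leg lengths returned by inverse kinematics and the stroke limits, can be sketched for a hypothetical planar two-leg mechanism (all dimensions below are illustrative, not taken from the paper):

```python
import numpy as np

# toy planar mechanism: a platform point is reachable iff its distance to
# each base anchor (the inverse-kinematics leg length) lies in [l_min, l_max]
anchors = np.array([[-1.0, 0.0], [1.0, 0.0]])
l_min, l_max = 0.5, 2.0

def reachable(p):
    d = np.linalg.norm(anchors - p, axis=1)   # inverse kinematics: leg lengths
    return bool(np.all((d >= l_min) & (d <= l_max)))

# a grid scan sketches the workspace; a boundary search would refine the
# transition between reachable and unreachable cells
xs = ys = np.linspace(-3, 3, 121)
count = sum(reachable(np.array([x, y])) for x in xs for y in ys)
print(count > 0, reachable(np.array([0.0, 1.0])), reachable(np.array([3.0, 3.0])))
# → True True False
```

    Lengthening the stroke interval [l_min, l_max] grows the reachable set, which mirrors the paper's finding that branch lengths are the main lever on workspace size.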

  4. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
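
    The combination rules the straw activity teaches are simply additive resistance in series and additive conductance in parallel; a two-function sketch:

```python
def series(*rs):
    # resistances in series add directly
    return sum(rs)

def parallel(*rs):
    # conductances (reciprocals) add; invert the sum to get the resistance
    return 1.0 / sum(1.0 / r for r in rs)

print(series(100, 200))    # → 300
print(parallel(100, 100))  # → 50.0
```

    The straw analogy maps onto this directly: stacking straws end to end lengthens the flow path (series), while bundling them side by side widens it (parallel).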

  5. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    A new method for fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs

  6. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  7. Event monitoring of parallel computations

    Directory of Open Access Journals (Sweden)

    Gruzlikov Alexander M.

    2015-06-01

    Full Text Available The paper considers the monitoring of parallel computations for the detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences.

  8. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  9. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  10. Data acquisition system for a proton imaging apparatus

    CERN Document Server

    Sipala, V; Bruzzi, M; Bucciolini, M; Candiano, G; Capineri, L; Cirrone, G A P; Civinini, C; Cuttone, G; Lo Presti, D; Marrazzo, L; Mazzaglia, E; Menichelli, D; Randazzo, N; Talamonti, C; Tesi, M; Valentini, S

    2009-01-01

    New developments in the proton-therapy field for cancer treatment led Italian physics researchers to build a proton imaging apparatus consisting of a silicon microstrip tracker, to reconstruct the proton trajectories, and a calorimeter, to measure their residual energy. For clinical requirements, the detectors and the data acquisition system should be able to sustain a proton rate of about 1 MHz. The tracker read-out, using ASICs developed by the collaboration, acquires the detector signals and sends the data in parallel to an FPGA. The YAG:Ce calorimeter also generates the global trigger. The data acquisition system and the results obtained in the calibration phase are presented and discussed.

  11. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  12. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of new results. Second, parallel algorithms for Fourier transforms over finite groups are developed, and preliminary parallel implementations of group transforms for dihedral and symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  13. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  14. Recovering task fMRI signals from highly under-sampled data with low-rank and temporal subspace constraints.

    Science.gov (United States)

    Chiew, Mark; Graedel, Nadine N; Miller, Karla L

    2018-07-01

    Recent developments in highly accelerated fMRI data acquisition have employed low-rank and/or sparsity constraints for image reconstruction, as an alternative to conventional, time-independent parallel imaging. When under-sampling factors are high or the signals of interest are low-variance, however, functional data recovery can be poor or incomplete. We introduce a method for improving reconstruction fidelity using external constraints, like an experimental design matrix, to partially orient the estimated fMRI temporal subspace. Combining these external constraints with low-rank constraints introduces a new image reconstruction model that is analogous to using a mixture of subspace-decomposition (PCA/ICA) and regression (GLM) models in fMRI analysis. We show that this approach improves fMRI reconstruction quality in simulations and experimental data, focusing on the model problem of detecting subtle 1-s latency shifts between brain regions in a block-design task-fMRI experiment. Successful latency discrimination is shown at acceleration factors up to R = 16 in a radial-Cartesian acquisition. We show that this approach works with approximate, or not perfectly informative constraints, where the derived benefit is commensurate with the information content contained in the constraints. The proposed method extends low-rank approximation methods for under-sampled fMRI data acquisition by leveraging knowledge of expected task-based variance in the data, enabling improvements in the speed and efficiency of fMRI data acquisition without the loss of subtle features. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
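
    The role of the design matrix as an external temporal-subspace constraint can be caricatured in a few lines of NumPy. This is a crude projection sketch under synthetic data, not the authors' reconstruction model (which operates on under-sampled k-space):

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_t = 500, 120
# hypothetical block-design regressor: the "external constraint"
design = np.kron(np.array([0, 1] * 6, float), np.ones(10))[:n_t]
signal = np.outer(0.2 * rng.standard_normal(n_vox), design)  # task signal
noisy = signal + 0.5 * rng.standard_normal((n_vox, n_t))

def constrained_lowrank(X, V_ext, rank):
    # orthonormalize the external temporal basis, then append the top
    # data-driven components of the residual: a crude "partially oriented"
    # temporal subspace mixing GLM-like and PCA-like directions
    Q, _ = np.linalg.qr(V_ext)
    resid = X - (X @ Q) @ Q.T
    _, _, Vt = np.linalg.svd(resid, full_matrices=False)
    basis = np.hstack([Q, Vt[:rank].T])
    return (X @ basis) @ basis.T          # project onto the joint subspace

rec = constrained_lowrank(noisy, design[:, None], rank=2)
err_rec = np.linalg.norm(rec - signal)
err_raw = np.linalg.norm(noisy - signal)
print(err_rec < err_raw)  # → True: the constrained subspace suppresses noise
```

    Because the task signal lies in the span of the design regressor, orienting the subspace with it preserves the low-variance task component that a purely data-driven low-rank fit could miss.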

  15. The JET fast central acquisition and trigger system

    International Nuclear Information System (INIS)

    Blackler, K.; Edwards, A.W.

    1994-01-01

    This paper describes a new data acquisition system at JET which uses Texas Instruments TMS320C40 parallel digital signal processors and the HELIOS parallel operating system to reduce the large amounts of experimental data produced by fast diagnostics. This unified system features a two-level trigger scheme which performs real-time activity detection together with asynchronous event classification and selection. This provides automated data reduction during an experiment. The system's application to future fusion machines, which have almost continuous operation, is discussed.

  16. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  17. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.

  18. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    Second derivative parallel block backward differentiation type formulas for Stiff ODEs. ... Log in or Register to get access to full text downloads. ... and the methods are inherently parallel and can be distributed over parallel processors. They are ...

  19. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding time and the effectiveness of parallelization.

  20. Integrated parallel reception, excitation, and shimming (iPRES).

    Science.gov (United States)

    Han, Hui; Song, Allen W; Truong, Trong-Kha

    2013-07-01

    To develop a new concept for a hardware platform that enables integrated parallel reception, excitation, and shimming. This concept uses a single coil array rather than separate arrays for parallel excitation/reception and B0 shimming. It relies on a novel design that allows a radiofrequency current (for excitation/reception) and a direct current (for B0 shimming) to coexist independently in the same coil. Proof-of-concept B0 shimming experiments were performed with a two-coil array in a phantom, whereas B0 shimming simulations were performed with a 48-coil array in the human brain. Our experiments show that individually optimized direct currents applied in each coil can reduce the B0 root-mean-square error by 62-81% and minimize distortions in echo-planar images. The simulations show that dynamic shimming with the 48-coil integrated parallel reception, excitation, and shimming array can reduce the B0 root-mean-square error in the prefrontal and temporal regions by 66-79% as compared with static second-order spherical harmonic shimming and by 12-23% as compared with dynamic shimming with a 48-coil conventional shim array. Our results demonstrate the feasibility of the integrated parallel reception, excitation, and shimming concept to perform parallel excitation/reception and B0 shimming with a unified coil system as well as its promise for in vivo applications. Copyright © 2013 Wiley Periodicals, Inc.
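
    In its simplest form, the per-coil DC shim optimization reduces to a least-squares fit of the coils' field maps to the measured inhomogeneity; a sketch with random stand-in field maps (all quantities below are hypothetical, not the iPRES hardware's calibration data):

```python
import numpy as np

rng = np.random.default_rng(2)
n_points, n_coils = 1000, 8
A = rng.standard_normal((n_points, n_coils))  # per-coil B0 field maps (Hz/A)
b0 = 5.0 * rng.standard_normal(n_points)      # measured inhomogeneity (Hz)

# optimal DC shim currents: minimize ||b0 + A @ I|| in the least-squares sense
I, *_ = np.linalg.lstsq(A, -b0, rcond=None)
residual = b0 + A @ I

def rms(v):
    return np.sqrt(np.mean(v ** 2))

print(rms(residual) < rms(b0))  # → True: shimming reduces the B0 RMS error
```

    The percentage reductions reported in the abstract (62-81% in the phantom, 66-79% in simulation) come from exactly this kind of residual-RMS comparison, with real field maps and current constraints in place of the random matrices used here.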

  1. Data acquisition for PLT

    International Nuclear Information System (INIS)

    Thompson, P.A.

    1975-01-01

    DA/PLT, the data acquisition system for the Princeton Large Torus (PLT) fusion research device, consists of a PDP-10 host computer, five satellite PDP-11s connected to the host by a special high-speed interface, miscellaneous other minicomputers and commercially supplied instruments, and much PPPL produced hardware. The software consists of the standard PDP-10 monitor with local modifications and the special systems and applications programs to customize the DA/PLT for the specific job of supporting data acquisition, analysis, display, and archiving, with concurrent off-line analysis, program development, and, in the background, general batch and timesharing. Some details of the over-all architecture are presented, along with a status report of the different PLT experiments being supported

  2. Knowledge Transfers following Acquisition

    DEFF Research Database (Denmark)

    Gammelgaard, Jens

    2001-01-01

    Prior relations between the acquiring firm and the target company pave the way for knowledge transfers subsequent to the acquisition. One major reason is that through market-based relations the two actors build up mutual trust and simultaneously learn how to communicate. An empirical study of 54 Danish acquisitions taking place abroad from 1994 to 1998 demonstrated that when there was a high level of trust between the acquiring firm and the target firm before the take-over, medium and strong tie-binding knowledge transfer mechanisms, such as project groups and job rotation, were used more intensively. Further, the degree of stickiness was significantly lower in the case of prior trust-based relations.

  3. Amplitudes, acquisition and imaging

    Energy Technology Data Exchange (ETDEWEB)

    Bloor, Robert

    1998-12-31

    Accurate seismic amplitude information is important for the successful evaluation of many prospects, and the importance of such amplitude information is increasing with the advent of time-lapse seismic techniques. It is now widely accepted that the proper treatment of amplitudes requires seismic imaging in the form of either time or depth migration. A key factor in seismic imaging is the spatial sampling of the data and its relationship to the imaging algorithms. This presentation demonstrates that acquisition-caused spatial sampling irregularity can affect the seismic imaging and perturb amplitudes. Equalization helps to balance the amplitudes, and the dealiasing strategy improves the imaging further when there are azimuth variations. Equalization and dealiasing can also help with the acquisition irregularities caused by shot and receiver dislocation or missing traces. 2 refs., 2 figs.

  4. Data acquisition instruments: Psychopharmacology

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, D.S. III

    1998-01-01

    This report contains the results of a Direct Assistance Project performed by Lockheed Martin Energy Systems, Inc., for Dr. K. O. Jobson. The purpose of the project was to perform preliminary analysis of the data acquisition instruments used in the field of psychiatry, with the goal of identifying commonalities of data and strategies for handling and using the data in the most advantageous fashion. Data acquisition instruments from 12 sources were provided by Dr. Jobson. Several commonalities were identified and a potentially useful data strategy is reported here. Analysis of the information collected for utility in performing diagnoses is recommended. In addition, further work is recommended to refine the commonalities into a directly useful computer systems structure.

  5. First Language Acquisition and Teaching

    Science.gov (United States)

    Cruz-Ferreira, Madalena

    2011-01-01

    "First language acquisition" commonly means the acquisition of a single language in childhood, regardless of the number of languages in a child's natural environment. Language acquisition is variously viewed as predetermined, wondrous, a source of concern, and as developing through formal processes. "First language teaching" concerns schooling in…

  6. Advances in temporal logic

    CERN Document Server

    Fisher, Michael; Gabbay, Dov; Gough, Graham

    2000-01-01

    Time is a fascinating subject that has captured mankind's imagination from ancient times to the present. It has been, and continues to be studied across a wide range of disciplines, from the natural sciences to philosophy and logic. More than two decades ago, Pnueli in a seminal work showed the value of temporal logic in the specification and verification of computer programs. Today, a strong, vibrant international research community exists in the broad community of computer science and AI. This volume presents a number of articles from leading researchers containing state-of-the-art results in such areas as pure temporal/modal logic, specification and verification, temporal databases, temporal aspects in AI, tense and aspect in natural language, and temporal theorem proving. Earlier versions of some of the articles were given at the most recent International Conference on Temporal Logic, University of Manchester, UK. Readership: Any student of the area - postgraduate, postdoctoral or even research professor ...

  7. Multiprocessor data acquisition system

    International Nuclear Information System (INIS)

    Haumann, J.R.; Crawford, R.K.

    1987-01-01

    A multiprocessor data acquisition system has been built to replace the single processor systems at the Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory. The multiprocessor system was needed to accommodate the higher data rates at IPNS brought about by improvements in the source and changes in instrument configurations. This paper describes the hardware configuration of the system and the method of task sharing and compares results to the single processor system

  8. Implementing acquisition strategies

    International Nuclear Information System (INIS)

    Montgomery, G. K.

    1997-01-01

    The objective of this paper is to address some of the strategies necessary to effect a successful asset or corporate acquisition. Understanding the corporate objective, the full potential of the asset, the specific strategies to be employed, the value of time, and most importantly the interaction of all these are crucial, for missed steps are likely to result in missed opportunities. The amount of factual information that can be obtained and utilized in a timely fashion is the largest single hurdle to the capture of value in the asset or corporate acquisition. Fact, familiarity and experience are key in this context. The importance of the due diligence process prior to title or data transfer cannot be overemphasized. Some of the most important assets acquired in a merger may be the people. To maximize effectiveness, it is essential to merge both existing staff and those that came with the new acquisition as soon as possible. By thinking together as a unit, knowledge and experience can be applied to realize the potential of the asset. Hence team building is one of the challenges, doing it quickly is usually the most effective. Developing new directions for the new enlarged company by combining the strengths of the old and the new creates more value, as well as a more efficient operation. Equally important to maximizing the potential of the new acquisition is the maintenance of the momentum generated by the need to grow that gave the impetus to acquiring new assets in the first place. In brief, the right mix of vision, facts and perceptions, quick enactment of the post-close strategies and keeping the momentum alive, are the principal ingredients of a focused strategy

  9. Internationalize Mergers and Acquisitions

    OpenAIRE

    Zhou, Lili

    2017-01-01

    As globalization proceeds, an increasing number of companies use mergers and acquisitions as a tool to achieve company growth in the international business world. The purpose of this thesis is to investigate the process of an international M&A and analyze the factors leading to success. The research started with a review of different academic theories. The important aspects of both the pre-M&A phase and the post-M&A phase have been studied in depth. Because of the complexity in international...

  10. Data Acquisition System

    International Nuclear Information System (INIS)

    Watwood, D.; Beatty, J.

    1991-01-01

    The Data Acquisition System (DAS) is comprised of a Hewlett-Packard (HP) model 9816, Series 200 Computer System with the appropriate software to acquire, control, and archive data from a Data Acquisition/Control Unit, models HP3497A and HP3498A. The primary storage medium is an HP9153 16-megabyte hard disc. The data is backed up on three floppy discs. One floppy disc drive is contained in the HP9153 chassis; the other two comprise an HP9122 dual disc drive. An HP82906A line printer supplies hard copy backup. A block diagram of the hardware setup is shown. The HP3497A/3498A Data Acquisition/Control Units read each input channel and transmit the raw voltage reading to the HP9816 CPU via the HPIB bus. The HP9816 converts this voltage to the appropriate engineering units using the calibration curves for the sensor being read. The HP9816 archives both the raw and processed data, along with the time the readings were taken, to hard and floppy discs. The processed values and reading time are printed on the line printer. This system is designed to accommodate several types of sensors; each type is discussed in the following sections
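
    The raw-voltage-to-engineering-units step described above can be sketched in Python. The sensor names, polynomial coefficients, and JSON archiving below are illustrative assumptions, not details taken from the DAS itself:

```python
import json
import time

# Hypothetical calibration curves, one per sensor type: the engineering value is
# a polynomial in the raw voltage (coefficients are invented for illustration).
CALIBRATION = {
    "thermocouple": [0.0, 25.0],        # degC = 0.0 + 25.0 * volts
    "pressure":     [-1.0, 50.0, 2.0],  # psi  = -1.0 + 50.0*v + 2.0*v**2
}

def to_engineering_units(sensor_type, volts):
    """Convert a raw voltage reading using the sensor's calibration polynomial."""
    coeffs = CALIBRATION[sensor_type]
    return sum(c * volts ** i for i, c in enumerate(coeffs))

def archive_reading(sensor_type, volts, now=None):
    """Archive raw and processed values with the reading time, as the DAS does."""
    record = {
        "time": now if now is not None else time.time(),
        "sensor": sensor_type,
        "raw_volts": volts,
        "processed": to_engineering_units(sensor_type, volts),
    }
    return json.dumps(record)  # stand-in for writing to hard/floppy disc
```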

  11. Complexity in language acquisition.

    Science.gov (United States)

    Clark, Alexander; Lappin, Shalom

    2013-01-01

    Learning theory has frequently been applied to language acquisition, but discussion has largely focused on information-theoretic problems, in particular the absence of direct negative evidence. Such arguments typically neglect the probabilistic nature of cognition and learning in general. We argue first that these arguments, and analyses based on them, suffer from a major flaw: they systematically conflate the hypothesis class and the learnable concept class. As a result, they do not allow one to draw significant conclusions about the learner. Second, we claim that the real problem for language learning is the computational complexity of constructing a hypothesis from input data. Studying this problem allows for a more direct approach to the object of study, the language acquisition device, rather than the learnable class of languages, which is epiphenomenal and possibly hard to characterize. The learnability results informed by complexity studies are much more insightful. They strongly suggest that target grammars need to be objective, in the sense that the primitive elements of these grammars are based on objectively definable properties of the language itself. These considerations support the view that language acquisition proceeds primarily through data-driven learning of some form. Copyright © 2013 Cognitive Science Society, Inc.

  12. MDSplus data acquisition system

    International Nuclear Information System (INIS)

    Stillerman, J.A.; Fredian, T.W.; Klare, K.; Manduchi, G.

    1997-01-01

    MDSplus, a tree based, distributed data acquisition system, was developed in collaboration with the ZTH Group at Los Alamos National Lab and the RFX Group at CNR in Padua, Italy. It is currently in use at MIT, RFX in Padua, TCV at EPFL in Lausanne, and KBSI in South Korea. MDSplus is made up of a set of X/motif based tools for data acquisition and display, as well as diagnostic configuration and management. It is based on a hierarchical experiment description which completely describes the data acquisition and analysis tasks and contains the results from these operations. These tools were designed to operate in a distributed, client/server environment with multiple concurrent readers and writers to the data store. While usually used over a Local Area Network, these tools can be used over the Internet to provide access for remote diagnosticians and even machine operators. An interface to a relational database is provided for storage and management of processed data. IDL is used as the primary data analysis and visualization tool. IDL is a registered trademark of Research Systems Inc. copyright 1996 American Institute of Physics

  13. Frames of reference in spatial language acquisition.

    Science.gov (United States)

    Shusterman, Anna; Li, Peggy

    2016-08-01

    Languages differ in how they encode spatial frames of reference. It is unknown how children acquire the particular frame-of-reference terms in their language (e.g., left/right, north/south). The present paper uses a word-learning paradigm to investigate 4-year-old English-speaking children's acquisition of such terms. In Part I, with five experiments, we contrasted children's acquisition of novel word pairs meaning left-right and north-south to examine their initial hypotheses and the relative ease of learning the meanings of these terms. Children interpreted ambiguous spatial terms as having environment-based meanings akin to north and south, and they readily learned and generalized north-south meanings. These studies provide the first direct evidence that children invoke geocentric representations in spatial language acquisition. However, the studies leave unanswered how children ultimately acquire "left" and "right." In Part II, with three more experiments, we investigated why children struggle to master body-based frame-of-reference words. Children successfully learned "left" and "right" when the novel words were systematically introduced on their own bodies and extended these words to novel (intrinsic and relative) uses; however, they had difficulty learning to talk about the left and right sides of a doll. This difficulty was paralleled in identifying the left and right sides of the doll in a non-linguistic memory task. In contrast, children had no difficulties learning to label the front and back sides of a doll. These studies begin to paint a detailed account of the acquisition of spatial terms in English, and provide insights into the origins of diverse spatial reference frames in the world's languages. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Parallel fabrication of macroporous scaffolds.

    Science.gov (United States)

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  15. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  16. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
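
    The core of the Davidon-Fletcher-Powell method discussed in this record is the rank-two update of the inverse-Hessian approximation. A minimal pure-Python sketch follows; the matrix-vector products are exactly the kind of component one might farm out to transputers (the helper names and structure are ours, not the paper's):

```python
# Pure-Python linear-algebra helpers; these products are the parallelizable parts.
def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dfp_update(H, s, y):
    """One Davidon-Fletcher-Powell update of the inverse-Hessian approximation:
    H' = H + (s s^T)/(s^T y) - (H y y^T H)/(y^T H y),
    where s is the step taken and y the change in gradient."""
    Hy = mat_vec(H, y)
    sy = dot(s, y)
    yHy = dot(y, Hy)
    n = len(H)
    return [[H[i][j] + s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy
             for j in range(n)] for i in range(n)]
```

By construction the updated matrix satisfies the secant condition H' y = s, which is a convenient correctness check.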

  17. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  18. Parallel imaging enhanced MR colonography using a phantom model.

    LENUS (Irish Health Repository)

    Morrin, Martina M

    2008-09-01

    To compare various Array Spatial and Sensitivity Encoding Technique (ASSET)-enhanced T2W SSFSE (single shot fast spin echo) and T1-weighted (T1W) 3D SPGR (spoiled gradient recalled echo) sequences for polyp detection and image quality at MR colonography (MRC) in a phantom model. Limitations of MRC using standard 3D SPGR T1W imaging include the long breath-hold required to cover the entire colon within one acquisition and the relatively low spatial resolution due to the long acquisition time. Parallel imaging using ASSET-enhanced T2W SSFSE and 3D T1W SPGR imaging results in much shorter imaging times, which allows for increased spatial resolution.

  19. Indeterministic Temporal Logic

    Directory of Open Access Journals (Sweden)

    Trzęsicki Kazimierz

    2015-09-01

    Full Text Available The questions of determinism, causality, and freedom have been the main philosophical problems debated since the beginning of temporal logic. The issue of the logical value of sentences about the future was stated by Aristotle in the famous tomorrow sea-battle passage. The question inspired Łukasiewicz’s idea of many-valued logics and was a motive for A. N. Prior’s considerations about the logic of tenses. In the scheme of temporal logic there are different solutions to the problem. In the paper we consider indeterministic temporal logic based on the idea of temporal worlds and the relation of accessibility between them.

  20. Non-Cartesian Parallel Imaging Reconstruction of Undersampled IDEAL Spiral 13C CSI Data

    DEFF Research Database (Denmark)

    Hansen, Rie Beck; Hanson, Lars G.; Ardenkjær-Larsen, Jan Henrik

    scan times based on spatial information inherent to each coil element. In this work, we explored the combination of non-cartesian parallel imaging reconstruction and spatially undersampled IDEAL spiral CSI1 acquisition for efficient encoding of multiple chemical shifts within a large FOV with high...

  1. D0 experiment: its trigger, data acquisition, and computers

    International Nuclear Information System (INIS)

    Cutts, D.; Zeller, R.; Schamberger, D.; Van Berg, R.

    1984-05-01

    The new collider facility to be built at Fermilab's Tevatron-I D0 region is described. The data acquisition requirements are discussed, as well as the hardware and software triggers designed to meet these needs. An array of MicroVAX computers running VAXELN will filter in parallel (a complete event in each microcomputer) and transmit accepted events via Ethernet to a host. This system, together with its subsequent offline needs, is briefly presented
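
    The farm of MicroVAX computers, each filtering a complete event in parallel and forwarding accepted events to a host, can be mimicked in a few lines. The trigger condition and event layout below are invented for illustration, and a thread pool stands in for the MicroVAX array:

```python
from concurrent.futures import ThreadPoolExecutor

def software_trigger(event):
    """Illustrative software filter: accept events whose summed 'hit energy'
    clears a threshold (the real D0 filter logic ran under VAXELN)."""
    return sum(event["hits"]) > 10.0

def filter_farm(events, nodes=4):
    """Each worker handles complete events independently, as in the MicroVAX
    farm; accepted events are returned to the 'host'."""
    with ThreadPoolExecutor(max_workers=nodes) as farm:
        decisions = list(farm.map(software_trigger, events))
    return [e for e, keep in zip(events, decisions) if keep]
```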

  2. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  3. The Role of Visual and Auditory Temporal Processing for Chinese Children with Developmental Dyslexia

    Science.gov (United States)

    Chung, Kevin K. H.; McBride-Chang, Catherine; Wong, Simpson W. L.; Cheung, Him; Penney, Trevor B.; Ho, Connie S. -H.

    2008-01-01

    This study examined temporal processing in relation to Chinese reading acquisition and impairment. The performances of 26 Chinese primary school children with developmental dyslexia on tasks of visual and auditory temporal order judgement, rapid naming, visual-orthographic knowledge, morphological, and phonological awareness were compared with…

  4. 8-Channel acquisition system for Time-Correlated Single-Photon Counting.

    Science.gov (United States)

    Antonioli, S; Miari, L; Cuccato, A; Crotti, M; Rech, I; Ghioni, M

    2013-06-01

    Nowadays, an increasing number of applications require high-performance analytical instruments capable of detecting the temporal trend of weak and fast light signals with picosecond time resolution. The Time-Correlated Single-Photon Counting (TCSPC) technique is currently one of the preferred solutions when such critical optical signals have to be analyzed, and it is fully exploited in biomedical and chemical research fields, as well as in security and space applications. Recent progress in the field of single-photon detector arrays is pushing research towards the development of high-performance multichannel TCSPC systems, opening the way to modern time-resolved multi-dimensional optical analysis. In this paper we describe a new 8-channel high-performance TCSPC acquisition system, designed to be compact and versatile, for use in modern TCSPC measurement setups. We designed a novel integrated circuit including a multichannel Time-to-Amplitude Converter with variable full-scale range, a D∕A converter, and a parallel adder stage. The latter is used to adapt each converter output to the input dynamic range of a commercial 8-channel Analog-to-Digital Converter, while the integrated DAC implements the dithering technique with the smallest possible area occupation. The use of this monolithic circuit made possible the design of a scalable system of very small dimensions (95 × 40 mm) and low power consumption (6 W). Data acquired from the TCSPC measurement are digitally processed and stored inside an FPGA (Field-Programmable Gate Array), while a USB transceiver allows real-time transmission of up to eight TCSPC histograms to a remote PC. Eventually, the experimental results demonstrate that the acquisition system performs TCSPC measurements with high conversion rate (up to 5 MHz/channel), extremely low differential nonlinearity (<0.04 peak-to-peak of the time bin width), high time resolution (down to 20 ps Full-Width Half-Maximum), and very low crosstalk between channels.
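
    The histogram accumulation at the heart of TCSPC can be sketched as follows. The bin count and full-scale range are illustrative, and in the system described above this step is performed inside the FPGA rather than in software:

```python
def tcspc_histogram(arrival_times_ps, full_scale_ps, n_bins=256):
    """Accumulate photon arrival times (relative to the excitation pulse) into a
    TCSPC histogram; times outside the TAC full-scale range are discarded."""
    bin_width = full_scale_ps / n_bins
    hist = [0] * n_bins
    for t in arrival_times_ps:
        if 0 <= t < full_scale_ps:
            hist[int(t // bin_width)] += 1
    return hist
```

Repeating this over millions of excitation cycles builds up the temporal profile of the optical signal, which is what the 8-channel system streams to the remote PC.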

  5. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
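
    The mapping problem studied here, assigning a chain of pipelined modules to processors in contiguous blocks so as to minimize the bottleneck load, admits a short dynamic-programming sketch. This naive O(n·m²)-per-stage version is for illustration only and is much slower than the improved algorithms the paper reports:

```python
def map_modules(loads, n_procs):
    """Minimal bottleneck load when a chain of modules (with the given work
    loads) is split into at most n_procs contiguous blocks, one per processor."""
    m = len(loads)
    prefix = [0]
    for w in loads:
        prefix.append(prefix[-1] + w)  # prefix sums give block loads in O(1)
    INF = float("inf")
    # best[k][j]: minimal bottleneck for the first j modules on k processors
    best = [[INF] * (m + 1) for _ in range(n_procs + 1)]
    best[0][0] = 0
    for k in range(1, n_procs + 1):
        for j in range(m + 1):
            for i in range(j + 1):
                # last processor takes modules i..j-1; earlier ones take 0..i-1
                cost = max(best[k - 1][i], prefix[j] - prefix[i])
                if cost < best[k][j]:
                    best[k][j] = cost
    return best[n_procs][m]
```

For example, loads [1, 2, 3, 4, 5] on two processors are best split as [1, 2, 3] | [4, 5], giving a bottleneck of 9.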

  6. Towards a five-minute comprehensive cardiac MR examination using highly accelerated parallel imaging with a 32-element coil array: feasibility and initial comparative evaluation.

    Science.gov (United States)

    Xu, Jian; Kim, Daniel; Otazo, Ricardo; Srichai, Monvadi B; Lim, Ruth P; Axel, Leon; Mcgorty, Kelly Anne; Niendorf, Thoralf; Sodickson, Daniel K

    2013-07-01

    To evaluate the feasibility and perform initial comparative evaluations of a 5-minute comprehensive whole-heart magnetic resonance imaging (MRI) protocol with four image acquisition types: perfusion (PERF), function (CINE), coronary artery imaging (CAI), and late gadolinium enhancement (LGE). This study protocol was Health Insurance Portability and Accountability Act (HIPAA)-compliant and Institutional Review Board-approved. A 5-minute comprehensive whole-heart MRI examination protocol (Accelerated) using 6-8-fold-accelerated volumetric parallel imaging was incorporated into and compared with a standard 2D clinical routine protocol (Standard). Following informed consent, 20 patients were imaged with both protocols. Datasets were reviewed for image quality using a 5-point Likert scale (0 = non-diagnostic, 4 = excellent) in blinded fashion by two readers. Good image quality with full whole-heart coverage was achieved using the accelerated protocol, particularly for CAI, although significant degradations in quality, as compared with traditional lengthy examinations, were observed for the other image types. Mean total scan time was significantly lower for the Accelerated protocol than for the Standard protocol (1.82 ± 0.05 min vs. 28.99 ± 4.59 min), owing to the simplified scan prescription and the high spatial and temporal resolution enabled by highly parallel imaging technology. The study also highlights technical hurdles that remain to be addressed. Although image quality remained diagnostic for most scan types, the reduced image quality of PERF, CINE, and LGE scans in the Accelerated protocol remains a concern. Copyright © 2012 Wiley Periodicals, Inc.

  7. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  8. Parallel algorithms for online trackfinding at PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Bianchi, Ludovico; Ritman, James; Stockmanns, Tobias [IKP, Forschungszentrum Juelich GmbH (Germany); Herten, Andreas [JSC, Forschungszentrum Juelich GmbH (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

    The PANDA experiment, one of the four scientific pillars of the FAIR facility currently in construction in Darmstadt, is a next-generation particle detector that will study collisions of antiprotons with beam momenta of 1.5-15 GeV/c on a fixed proton target. Because of the broad physics scope and the similar signature of signal and background events, PANDA's strategy for data acquisition is to continuously record data from the whole detector and use this global information to perform online event reconstruction and filtering. A real-time rejection factor of up to 1000 must be achieved to match the incoming data rate for offline storage, making all components of the data processing system computationally very challenging. Online particle track identification and reconstruction is an essential step, since track information is used as input in all following phases. Online tracking algorithms must ensure a delicate balance between high tracking efficiency and quality, and minimal computational footprint. For this reason, a massively parallel solution exploiting multiple Graphic Processing Units (GPUs) is under investigation. The talk presents the core concepts of the algorithms being developed for primary trackfinding, along with details of their implementation on GPUs.

  9. Parallel multiscale simulations of a brain aneurysm

    Energy Technology Data Exchange (ETDEWEB)

    Grinberg, Leopold [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States); Fedosov, Dmitry A. [Institute of Complex Systems and Institute for Advanced Simulation, Forschungszentrum Jülich, Jülich 52425 (Germany); Karniadakis, George Em, E-mail: george_karniadakis@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)

    2013-07-01

    Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multiscale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier–Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300 K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in

  10. Parallel multiscale simulations of a brain aneurysm

    International Nuclear Information System (INIS)

    Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em

    2013-01-01

    Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multiscale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier–Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300 K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in

  11. Data acquisition and real-time bolometer tomography using LabVIEW RT

    International Nuclear Information System (INIS)

    Giannone, L.; Eich, T.; Fuchs, J.C.; Ravindran, M.; Ruan, Q.; Wenzel, L.; Cerna, M.; Concezzi, S.

    2011-01-01

    The currently available multi-core PCI Express systems running LabVIEW RT (real-time), equipped with FPGA cards for data acquisition and real-time parallel signal processing, greatly shorten the design and implementation cycles of large-scale, real-time data acquisition and control systems. This paper details a data acquisition and real-time tomography system using LabVIEW RT for the bolometer diagnostic on the ASDEX Upgrade tokamak (Max Planck Institute for Plasma Physics, Garching, Germany). The transformation matrix for tomography is pre-computed based on the geometry of distributed radiation sources and sensors. A parallelized iterative algorithm is adapted to solve a constrained linear system for the reconstruction of the radiated power density. Real-time bolometer tomography is performed with LabVIEW RT. Using multi-core machines to execute the parallelized algorithm, a cycle time well below 1 ms is reached.
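
    The paper's core computation is an iterative solver for a constrained linear system T x ≈ b, with the transformation matrix T pre-computed from the source/sensor geometry. A serial projected-Landweber sketch (our choice of iteration, not necessarily the exact algorithm parallelized in LabVIEW RT) conveys the idea:

```python
def reconstruct(T, measurements, n_iter=500, step=0.1):
    """Projected Landweber iteration for T @ x ~ b subject to x >= 0:
    gradient descent on ||T x - b||^2 with projection onto the non-negative
    orthant (radiated power density cannot be negative)."""
    rows, cols = len(T), len(T[0])
    x = [0.0] * cols
    for _ in range(n_iter):
        residual = [sum(T[i][j] * x[j] for j in range(cols)) - measurements[i]
                    for i in range(rows)]
        for j in range(cols):
            grad_j = sum(T[i][j] * residual[i] for i in range(rows))
            x[j] = max(0.0, x[j] - step * grad_j)  # non-negativity constraint
    return x
```

In the real-time system, the per-column gradient updates are independent and can be distributed across cores, which is what makes a sub-millisecond cycle time plausible.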

  12. A fast data acquisition system for PHA and MCS measurements

    International Nuclear Information System (INIS)

    Eijk, P.J.A. van; Keyser, C.J.; Rigterink, B.J.; Hasper, H.

    1985-01-01

    A microprocessor controlled data acquisition system for pulse height analysis and multichannel scaling is described. A 4K x 24 bit static memory is used to obtain a fast data acquisition rate. The system can store 12 bit ADC or TDC data within 150 ns. Operating commands can be entered via a small keyboard or by a RS-232-C interface. An oscilloscope is used to display a spectrum. The display of a spectrum or the transmission of spectrum data to an external computer causes only a short interruption of a measurement in progress and is accomplished by using a DMA circuit. The program is written in Modular Pascal and is divided into 15 modules. These implement 9 parallel processes which are synchronized by using semaphores. Hardware interrupts from the data acquisition, DMA, keyboard and RS-232-C circuits are used to signal these processes. (orig.)
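
    The semaphore-based synchronization of parallel processes described above can be illustrated in Python. The two threads below, an "ADC interrupt" producer and a histogramming consumer, are a toy stand-in for the nine Modular Pascal processes; the 4-bit ADC data are invented:

```python
import threading

def run_acquisition(n_events):
    """An 'interrupt' process deposits ADC words into memory and signals a
    semaphore; the histogramming process waits on it before updating the
    spectrum, mimicking the hardware-interrupt-driven design."""
    buffer, histogram = [], [0] * 16
    ready = threading.Semaphore(0)

    def adc_interrupt():
        for value in range(n_events):
            buffer.append(value % 16)  # fake 4-bit ADC data
            ready.release()            # signal the waiting process

    def histogrammer():
        for _ in range(n_events):
            ready.acquire()            # block until a reading is available
            histogram[buffer.pop(0)] += 1

    workers = [threading.Thread(target=adc_interrupt),
               threading.Thread(target=histogrammer)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return histogram
```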

  13. Chondroblastoma of temporal bone

    Energy Technology Data Exchange (ETDEWEB)

    Tanohta, K.; Noda, M.; Katoh, H.; Okazaki, A.; Sugiyama, S.; Maehara, T.; Onishi, S.; Tanida, T.

    1986-07-01

    The case of a 55-year-old female with chondroblastoma arising from the left temporal bone is presented. Although 10 cases of temporal chondroblastoma have been reported, this is the first in which plain radiography, pluridirectional tomography, computed tomography (CT) and angiography were performed. We discuss the clinical and radiological aspects of this rare tumor.

  14. Chondroblastoma of temporal bone

    International Nuclear Information System (INIS)

    Tanohta, K.; Noda, M.; Katoh, H.; Okazaki, A.; Sugiyama, S.; Maehara, T.; Onishi, S.; Tanida, T.

    1986-01-01

    The case of a 55-year-old female with chondroblastoma arising from the left temporal bone is presented. Although 10 cases of temporal chondroblastoma have been reported, this is the first in which plain radiography, pluridirectional tomography, computed tomography (CT) and angiography were performed. We discuss the clinical and radiological aspects of this rare tumor. (orig.)

  15. Investigation of True High Frequency Electrical Substrates of fMRI-Based Resting State Networks Using Parallel Independent Component Analysis of Simultaneous EEG/fMRI Data.

    Science.gov (United States)

    Kyathanahally, Sreenath P; Wang, Yun; Calhoun, Vince D; Deshpande, Gopikrishna

    2017-01-01

    Previous work using simultaneously acquired electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data has shown that the slow temporal dynamics of resting state brain networks (RSNs), e.g., the default mode network (DMN) and visual network (VN), obtained from fMRI are correlated with smoothed and down-sampled versions of various EEG features such as microstates and band-limited power envelopes. However, the fact that the down-sampled and smoothed envelope of EEG gamma-band power is correlated with fMRI fluctuations in the RSNs does not mean that the electrical substrates of the RSNs themselves fluctuate only at those slow time scales. Based on this correlation between EEG features and resting state fMRI fluctuations in the RSNs, researchers have speculated that truly high-frequency electrical substrates may exist for the RSNs, which would make resting fluctuations obtained from fMRI more relevant to fast neuronal processes typically occurring on the sub-100 ms time scale. In this study, we test this critical hypothesis using an integrated framework involving simultaneous EEG/fMRI acquisition, fast fMRI sampling (TR = 200 ms) using multiband EPI (MB EPI), and EEG/fMRI fusion using parallel independent component analysis (pICA), which does not require down-sampling of the EEG to the fMRI temporal resolution. Our results demonstrate that with faster sampling, high-frequency electrical substrates (fluctuating with periods on the <100 ms time scale) of the RSNs can be observed. This provides a sounder neurophysiological basis for the RSNs.

  16. The NUSTAR data acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Loeher, B.; Toernqvist, H.T. [TU Darmstadt (Germany); GSI (Germany); Agramunt, J. [IFIC, CSIC (Spain); Bendel, M.; Gernhaeuser, R.; Le Bleis, T.; Winkel, M. [TU Muenchen (Germany); Charpy, A.; Heinz, A.; Johansson, H.T. [Chalmers University of Technology (Sweden); Coleman-Smith, P.; Lazarus, I.H.; Pucknell, V.F.E. [STFC Daresbury (United Kingdom); Czermak, A. [IFJ (Poland); Kurz, N.; Nociforo, C.; Pietri, S.; Schaffner, H.; Simon, H. [GSI (Germany); Scheit, H. [TU Darmstadt (Germany); Taieb, J. [CEA (France)

    2015-07-01

    The NUSTAR (NUclear STructure, Astrophysics and Reactions) collaboration represents one of the four pillars motivating the construction of the international FAIR facility. The diversity of upcoming NUSTAR experiments, including experiments in storage rings, reactions at relativistic energies and high-precision spectroscopy, is reflected in the diversity of the required detection systems. A challenging task is to incorporate the different needs of individual detectors and components under the umbrella of the unified NUSTAR Data AcQuisition (NDAQ) infrastructure. NDAQ takes up this challenge by providing a high degree of availability via continuously running systems, high flexibility via experiment-specific configuration files for data streams and trigger logic, and distributed timestamps and trigger information over km distances, all built on the solid basis of the GSI Multi-Branch System (MBS). NDAQ ensures interoperability between individual NUSTAR detectors and allows merging of formerly separate data streams according to the needs of all experiments, increasing reliability in NUSTAR data acquisition. An overview of the NDAQ infrastructure and the current progress is presented.

  17. Mechanisms of rule acquisition and rule following in inductive reasoning.

    Science.gov (United States)

    Crescentini, Cristiano; Seyed-Allaei, Shima; De Pisapia, Nicola; Jovicich, Jorge; Amati, Daniele; Shallice, Tim

    2011-05-25

    Despite the recent interest in the neuroanatomy of inductive reasoning processes, the regional specificity within prefrontal cortex (PFC) for the different mechanisms involved in induction tasks remains to be determined. In this study, we used fMRI to investigate the contribution of PFC regions to rule acquisition (rule search and rule discovery) and rule following. Twenty-six healthy young adult participants were presented with a series of images of cards, each consisting of a set of circles numbered in sequence with one colored blue. Participants had to predict the position of the blue circle on the next card. The rules that had to be acquired pertained to the relationship among succeeding stimuli. Responses given by subjects were categorized in a series of phases either tapping rule acquisition (responses given up to and including rule discovery) or rule following (correct responses after rule acquisition). Mid-dorsolateral PFC (mid-DLPFC) was active during rule search and remained active until successful rule acquisition. By contrast, rule following was associated with activation in temporal, motor, and medial/anterior prefrontal cortex. Moreover, frontopolar cortex (FPC) was active throughout the rule acquisition and rule following phases before a rule became familiar. We attributed activation in mid-DLPFC to hypothesis generation and in FPC to integration of multiple separate inferences. The present study provides evidence that brain activation during inductive reasoning involves a complex network of frontal processes and that different subregions respond during rule acquisition and rule following phases.

  18. TCABR data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Fagundes, A.N. E-mail: fagundes@if.usp.br; Sa, W.P.; Coelho, P.M.S.A

    2000-08-01

    A brief description of the design of the data acquisition system for the TCABR tokamak is presented. The system comprises VME-standard instrumentation and incorporates CAMAC instrumentation through the use of a GPIB interface. All the data needed to program the different parts of the equipment, as well as the repertoire of actions for machine control, are stored in a DBMS with friendly interfaces. Public-access software is used, where feasible, in the development of codes. The distinguishing feature of the TCABR system is the virtual absence of limits to upgrading, in either hardware or software.

  19. Flexible data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Clout, P N; Ridley, P A [Science Research Council, Daresbury (UK). Daresbury Lab.

    1978-06-01

    A data acquisition system has been developed which enables several independent experiments to be controlled by a 24 K word PDP-11 computer. Significant features of the system are the use of CAMAC, a high level language (RTL/2) and a general-purpose operating system executive which assist the rapid implementation of new experiments. This system has been used successfully for EXAFS and photo-electron spectroscopy experiments. It is intended to provide powerful concurrent data analysis and feedback facilities to the experimenter by on-line connection to the central IBM 370/165 computer.

  20. Getting Defense Acquisition Right

    Science.gov (United States)

    2017-01-01

    on top of events and steer them to get where we need to go as efficiently as possible. Program management is not a spectator sport. Frank... I made in the e-mail above and discusses some of the proactive steps a Program Manager can take, ahead of time, to reduce the potential... The Congress will rescind funds that are not obligated in a timely way. This puts pressure on the DoD's acquisition managers to put money on

  1. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed, and the report shows how these classes can be used...
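
    The record is truncated here, but the kernel it studies is standard; a minimal CSR (compressed sparse row) matrix-vector product, sketched below in Python for readability rather than the report's C++, shows why the routine parallelizes naturally: each output row is independent.

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for A stored in CSR form.

    values  : nonzero entries, row by row
    col_idx : column index of each nonzero
    row_ptr : start offset of each row in values (length n_rows + 1)
    """
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):           # rows are independent: the parallel loop
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[4, 0, 1],
#      [0, 2, 0]]
y = csr_matvec([4.0, 1.0, 2.0], [0, 2, 1], [0, 2, 3], [1.0, 1.0, 1.0])
```

    In an MPI version the rows (and the matching entries of `row_ptr`) are distributed across ranks; in a hybrid version the outer loop is additionally threaded with OpenMP.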

  2. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. Europe-wide regulations, e.g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  3. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne

    2015-01-01

    for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...

  4. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.
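
    The divide-and-conquer paradigm mentioned above can be made concrete with a toy reduction: splitting the input in half yields two independent subproblems, so the recursion has O(log n) depth on an idealized parallel machine (an illustrative sketch, not drawn from the paper):

```python
def pairwise_sum(a):
    """Divide-and-conquer reduction over a non-empty sequence.

    The two recursive calls are independent, so on a parallel machine they
    can run concurrently, giving O(log n) depth instead of the O(n) depth
    of a left-to-right running sum.
    """
    if len(a) == 1:
        return a[0]
    mid = len(a) // 2
    return pairwise_sum(a[:mid]) + pairwise_sum(a[mid:])
```

    This also illustrates Voigt's caution about complexity analysis: the parallel version performs the same n-1 additions, so the benefit is reduced depth, not reduced work.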

  5. Default Parallels Plesk Panel Page

    Science.gov (United States)


  6. Parallel plate transmission line transformer

    NARCIS (Netherlands)

    Voeten, S.J.; Brussaard, G.J.H.; Pemen, A.J.M.

    2011-01-01

    A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the

  7. Matpar: Parallel Extensions for MATLAB

    Science.gov (United States)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  8. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massive parallel Computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as a IBM BlueGene/L, a IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

  9. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  10. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  11. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups around the world to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, are of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources of the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high-performance multicomputers that have emerged over the last decade, which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models of the two types of machine, and means that the programming of multicomputers is widely acknowledged to be a difficult, time-consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  12. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers, both with shared memory and with distributed memory, are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the architecture of parallel computers, using the strategy of 'divide and conquer', adjusting the algorithm structure of the program, dissolving the data dependencies, finding parallelizable ingredients and creating large-grain parallel subtasks, the sequential computing of photon transport is efficiently transformed into parallel and vector computing. The program was run on various high-performance parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup has been obtained.
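
    The 'divide and conquer' decomposition described above can be illustrated with a toy example (not the paper's code): photon histories are statistically independent, so they split naturally into large-grain batches, each with its own random stream, whose results are merged at the end.

```python
import random

def simulate_batch(n_photons, mu, depth, seed):
    """Count photons transmitted through a slab of thickness `depth`
    with attenuation coefficient `mu` (toy 1-D Monte Carlo)."""
    rng = random.Random(seed)         # independent stream per batch
    transmitted = 0
    for _ in range(n_photons):
        # free path length sampled from an exponential distribution
        if rng.expovariate(mu) > depth:
            transmitted += 1
    return transmitted

def transmission(n_photons, mu, depth, n_batches=4):
    """Split the histories into batches; each batch is an independent
    subtask that could be mapped onto a separate processor."""
    per = n_photons // n_batches
    counts = [simulate_batch(per, mu, depth, seed=s) for s in range(n_batches)]
    return sum(counts) / (per * n_batches)
```

    Because each batch touches only its own counter and random state, the list comprehension is trivially replaceable by a parallel map, and the analytic answer exp(-mu * depth) provides a check.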

  13. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery.

    Science.gov (United States)

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of the sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image employing less than 10 s of scan. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques. Copyright © 2016 Elsevier Inc. All rights reserved.
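
    STAR itself is a composite, adaptively weighted regularizer; as a simpler stand-in, the sketch below shows the basic sparsity-regularized recovery step (ISTA with a single l1 term) that such compressive methods build on. This is a generic illustration, not the authors' algorithm:

```python
import numpy as np

def ista(A, y, lam, iterations=500):
    """Sparse recovery: minimize 0.5 * ||A x - y||^2 + lam * ||x||_1
    via iterative shrinkage-thresholding (a generic solver, not STAR)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step < 1 / ||A||_2^2 ensures descent
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        z = x + step * A.T @ (y - A @ x)                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```

    Noisy, undersampled projections correspond to a wide, noisy A and y; the shrinkage step is what lets far fewer measurements than unknowns still yield a faithful image when the image is sparse in some representation.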

  14. Data-acquisition systems

    International Nuclear Information System (INIS)

    Cyborski, D.R.; Teh, K.M.

    1995-01-01

    Up to now, DAPHNE, the data-acquisition system developed for ATLAS, was used routinely for experiments at ATLAS and the Dynamitron. More recently, the Division implemented 2 MSU/DAPHNE systems. The MSU/DAPHNE system is a hybrid data-acquisition system which combines the front-end of the Michigan State University (MSU) DA system with the traditional DAPHNE back-end. The MSU front-end is based on commercially available modules. This alleviates the problems encountered with the DAPHNE front-end which is based on custom designed electronics. The first MSU system was obtained for the APEX experiment and was used there successfully. A second MSU front-end, purchased as a backup for the APEX experiment, was installed as a fully-independent second MSU/DAPHNE system with the procurement of a DEC 3000 Alpha host computer, and was used successfully for data-taking in an experiment at ATLAS. Additional hardware for a third system was bought and will be installed. With the availability of 2 MSU/DAPHNE systems in addition to the existing APEX setup, it is planned that the existing DAPHNE front-end will be decommissioned

  15. Continued Data Acquisition Development

    Energy Technology Data Exchange (ETDEWEB)

    Schwellenbach, David [National Security Technologies, LLC. (NSTec), Mercury, NV (United States)

    2017-11-27

    This task focused on improving techniques for integrating data acquisition of secondary particles correlated in time with detected cosmic-ray muons. Scintillation detectors with Pulse Shape Discrimination (PSD) capability show the most promise as a detector technology, based on work in FY13. Typically, PSD parameters are determined prior to an experiment and the results are based on these parameters. By saving data in list mode, including the fully digitized waveform, any experiment can effectively be replayed to adjust PSD and other parameters for the best data capture. List mode requires time synchronization of two independent data acquisition (DAQ) systems: the muon tracker and the particle detector system. Techniques to synchronize these systems were studied. Two basic techniques were identified: real-time mode and sequential mode. Real-time mode is the preferred approach but has proven to be a significant challenge, since two FPGA systems with different clocking parameters must be synchronized. Sequential mode is expected to work with virtually any DAQ but requires more post-processing to extract the data.

  16. Unsupervised Language Acquisition

    Science.gov (United States)

    de Marcken, Carl

    1996-11-01

    This thesis presents a computational theory of unsupervised language acquisition, precisely defining procedures for learning language from ordinary spoken or written utterances, with no explicit help from a teacher. The theory is based heavily on concepts borrowed from machine learning and statistical estimation. In particular, learning takes place by fitting a stochastic, generative model of language to the evidence. Much of the thesis is devoted to explaining conditions that must hold for this general learning strategy to arrive at linguistically desirable grammars. The thesis introduces a variety of technical innovations, among them a common representation for evidence and grammars, and a learning strategy that separates the ``content'' of linguistic parameters from their representation. Algorithms based on it suffer from few of the search problems that have plagued other computational approaches to language acquisition. The theory has been tested on problems of learning vocabularies and grammars from unsegmented text and continuous speech, and mappings between sound and representations of meaning. It performs extremely well on various objective criteria, acquiring knowledge that causes it to assign almost exactly the same structure to utterances as humans do. This work has application to data compression, language modeling, speech recognition, machine translation, information retrieval, and other tasks that rely on either structural or stochastic descriptions of language.

  17. A Framework for Multi-Robot Motion Planning from Temporal Logic Specifications

    DEFF Research Database (Denmark)

    Koo, T. John; Li, Rongqing; Quottrup, Michael Melholt

    2012-01-01

    Linear-time Temporal Logic, Computation Tree Logic, and μ-calculus can be preserved. Motion planning can then be performed at a discrete level by considering the parallel composition of discrete abstractions of the robots with a requirement specification given in a suitable temporal logic. The bisimilarity ensures...

  18. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with Graphical Processing Units, have broadly expanded the scope for parallelism, and several compilers have been updated to address the resulting synchronization and threading challenges. Appropriate program and algorithm classification can greatly help software engineers identify opportunities for effective parallelization. In the present work we survey current species-based classifications of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms is chosen whose structures match different issues while performing a given task. We tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant information that is not captured by the original species of algorithms. We implemented these extensions in the tool, enabling automatic characterization of program code.

  19. Parallel Algorithms for Model Checking

    NARCIS (Netherlands)

    van de Pol, Jaco; Mousavi, Mohammad Reza; Sgall, Jiri

    2017-01-01

    Model checking is an automated verification procedure, which checks that a model of a system satisfies certain properties. These properties are typically expressed in some temporal logic, like LTL and CTL. Algorithms for LTL model checking (linear time logic) are based on automata theory and graph

  20. Otosclerosis: Temporal Bone Pathology.

    Science.gov (United States)

    Quesnel, Alicia M; Ishai, Reuven; McKenna, Michael J

    2018-04-01

    Otosclerosis is pathologically characterized by abnormal bony remodeling, which includes bone resorption, new bone deposition, and vascular proliferation in the temporal bone. Sensorineural hearing loss in otosclerosis is associated with extension of otosclerosis to the cochlear endosteum and deposition of collagen throughout the spiral ligament. Persistent or recurrent conductive hearing loss after stapedectomy has been associated with incomplete footplate fenestration, poor incus-prosthesis connection, and incus resorption in temporal bone specimens. Human temporal bone pathology has helped to define the role of computed tomography imaging for otosclerosis, confirming that computed tomography is highly sensitive for diagnosis, yet limited in assessing cochlear endosteal involvement. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. An original acquisition chain for the TOHR High Resolution Tomograph

    International Nuclear Information System (INIS)

    Pinot, Laurent

    1999-01-01

    The framework of this work is part of a new approach to emission tomography adapted to small animals. The principle of our tomographic system TOHR (French acronym for High Resolution Tomograph) is based on the use of large-solid-angle, high-resolution focusing collimators, each mounted in front of a high-efficiency detection module. With a first-generation acquisition chain we were able to characterize TOHR; however, to take full advantage of TOHR's possibilities, a completely new acquisition scheme had to be designed. This system, the main topic of this work, makes use of temporal information. The detection of a particle entering the detector is translated into temporal logic signals. These signals pass into a time-coding circuit and the coded results are transferred to a digital processor. In accordance with the initial specifications, the developed acquisition chain steers the detection of events depending on the deposited energy and time of arrival, the latter determined by coincidence measurements. All elements are mounted on a special board housed in a PC, and a dedicated program controls the whole system. First experiments demonstrated the interest of the new acquisition unit for other applications in physics and medical imaging.

  2. Collection assessment and acquisitions budgets

    CERN Document Server

    Lee, Sul H

    2013-01-01

    This invaluable new book contains timely information about the assessment of academic library collections and the relationship of collection assessment to acquisition budgets. The rising cost of information significantly influences academic libraries' abilities to acquire the necessary materials for students and faculty, and public libraries' abilities to acquire material for their clientele. Collection Assessment and Acquisitions Budgets examines different aspects of the relationship between the assessment of academic library collections and the management of library acquisition budgets. Librar

  3. High-resolution whole-brain diffusion MRI at 7T using radiofrequency parallel transmission.

    Science.gov (United States)

    Wu, Xiaoping; Auerbach, Edward J; Vu, An T; Moeller, Steen; Lenglet, Christophe; Schmitter, Sebastian; Van de Moortele, Pierre-François; Yacoub, Essa; Uğurbil, Kâmil

    2018-03-30

    Investigating the utility of RF parallel transmission (pTx) for Human Connectome Project (HCP)-style whole-brain diffusion MRI (dMRI) data at 7 Tesla (7T). Healthy subjects were scanned in pTx and single-transmit (1Tx) modes. Multiband (MB), single-spoke pTx pulses were designed to image sagittal slices. HCP-style dMRI data (i.e., 1.05-mm resolution, MB2, b-values = 1000/2000 s/mm², 286 images and 40-min scan) and data with higher accelerations (MB3 and MB4) were acquired with pTx. pTx significantly improved flip-angle detected signal uniformity across the brain, yielding ∼19% increase in temporal SNR (tSNR) averaged over the brain relative to 1Tx. This allowed significantly enhanced estimation of multiple fiber orientations (with ∼21% decrease in dispersion) in HCP-style 7T dMRI datasets. Additionally, pTx pulses achieved substantially lower power deposition, permitting higher accelerations, enabling collection of the same data in 2/3 and 1/2 the scan time or of more data in the same scan time. pTx provides a solution to two major limitations for slice-accelerated high-resolution whole-brain dMRI at 7T: it improves flip-angle uniformity, and enables higher slice acceleration relative to the current state of the art. As such, pTx provides significant advantages for rapid acquisition of high-quality, high-resolution, truly whole-brain dMRI data. © 2018 International Society for Magnetic Resonance in Medicine.

  4. The DISTO data acquisition system at SATURNE

    International Nuclear Information System (INIS)

    Balestra, F.; Bedfer, Y.; Bertini, R.

    1998-01-01

    The DISTO collaboration has built a large-acceptance magnetic spectrometer designed to provide broad kinematic coverage of multiparticle final states produced in pp scattering. The spectrometer has been installed in the polarized proton beam of the Saturne accelerator in Saclay to study polarization observables in the p↑p → pK⁺Y↑ (Y = Λ, Σ⁰ or Y*) reaction and vector meson production (φ, ω and ρ) in pp collisions. The data acquisition system is based on a VME 68030 CPU running the OS/9 operating system, housed in a single VME crate together with the CAMAC interface, the triple-port ECL memories, and four RISC R3000 CPUs. The digitization of signals from the detectors is performed by PCOS III and FERA front-end electronics. Data from several events belonging to a single Saturne extraction are stored in VME triple-port ECL memories using a hardwired fast sequencer. The buffer, optionally filtered by the RISC R3000 CPUs, is recorded on a DLT cassette by the DAQ CPU using the on-board SCSI interface during the acceleration cycle. Two UNIX workstations are connected to the VME CPUs through a fast parallel bus and the Local Area Network. They analyze a subset of events for on-line monitoring. The data acquisition system is able to read and record 3500 events/burst in the present configuration with a dead time of 15%.

  5. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  6. GPU Parallel Bundle Block Adjustment

    Directory of Open Access Journals (Sweden)

    ZHENG Maoteng

    2017-09-01

    Full Text Available. To deal with massive data in photogrammetry, we introduce GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton methods are also applied to reduce the number of iterations when solving the normal equation. A brand-new bundle adjustment workflow is developed to exploit GPU parallel computing. Our method avoids the storage and inversion of the large normal matrix by computing the normal matrix in real time. The proposed method not only greatly decreases the memory requirement of the normal matrix, but also greatly improves the efficiency of bundle adjustment, while achieving the same accuracy as the conventional method. Preliminary experimental results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be completed in only 1.5 minutes while achieving sub-pixel accuracy.
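
The iterative solver named above can be illustrated with a minimal preconditioned conjugate gradient (PCG) routine using a Jacobi (diagonal) preconditioner, solving a normal-equation-style symmetric positive definite system N·x = b. This is a pure-Python sketch on a tiny dense matrix; the values are invented for illustration, and a production implementation would use sparse storage and run on the GPU.

```python
# Minimal preconditioned conjugate gradient (PCG) with a Jacobi
# (diagonal) preconditioner for an SPD system A x = b. Pure-Python
# sketch; the small matrix stands in for the bundle-adjustment
# normal matrix, which in practice is large, sparse, and GPU-resident.

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # r = b - A x with x = 0
    m_inv = [1.0 / A[i][i] for i in range(n)]  # Jacobi preconditioner
    z = [mi * ri for mi, ri in zip(m_inv, r)]
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(m_inv, r)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        p = [zi + beta * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0],   # small SPD stand-in for the normal matrix
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
```

Because PCG only needs matrix-vector products, the normal matrix never has to be stored or inverted explicitly, which is the property the workflow above exploits.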

  7. Nontraumatic temporal subcortical hemorrhage

    International Nuclear Information System (INIS)

    Weisberg, L.A.; Stazio, A.; Shamsnia, M.; Elliott, D.; Charity Hospital, New Orleans, LA

    1990-01-01

    Thirty patients with temporal hematomas were analyzed. Four with frontal extension survived. Of 6 with ganglionic extension, three had residual deficit. Of 8 with parietal extension, 4 had delayed deterioration and died, two patients recovered, and two with peritumoral hemorrhage due to glioblastoma multiforme died. Five patients with posterior temporal hematomas recovered. In 7 patients with basal-inferior temporal hematomas, angiography showed aneurysms in 3 cases, angiomas in 2 cases and no vascular lesion in 2 cases. Of 23 cases with negative angiography and no systemic cause for temporal hematoma, 12 patients were hypertensive and 11 were normotensive. Ten hypertensive patients without evidence of chronic vascular disease had the largest hematomas, extending into the parietal or ganglionic regions. Seven of these patients died; 3 had residual deficit. Eleven normotensive and two hypertensive patients with evidence of chronic vascular change had smaller hematomas. They survived with good functional recovery. (orig.)

  8. Temporal Lobe Seizure

    Science.gov (United States)

    ... functions, including having odd feelings — such as euphoria, deja vu or fear. Temporal lobe seizures are sometimes called ... sudden sense of unprovoked fear or joy A deja vu experience — a feeling that what's happening has happened ...

  9. A tandem parallel plate analyzer

    International Nuclear Information System (INIS)

    Hamada, Y.; Fujisawa, A.; Iguchi, H.; Nishizawa, A.; Kawasumi, Y.

    1996-11-01

    By a new modification of a parallel plate analyzer, second-order focus is obtained at an arbitrary injection angle. This kind of analyzer with a small injection angle has the advantage of a small operating voltage, compared to the Proca and Green analyzer, where the injection angle is 30 degrees. Thus, the newly proposed analyzer will be very useful for precise energy measurement of high-energy particles in the MeV range. (author)

  10. High-speed parallel counter

    International Nuclear Information System (INIS)

    Gus'kov, B.N.; Kalinnikov, V.A.; Krastev, V.R.; Maksimov, A.N.; Nikityuk, N.M.

    1985-01-01

    This paper describes a high-speed parallel counter that has 31 inputs and 15 outputs and is implemented with series-500 integrated circuits. The counter is designed for fast sampling of events according to the number of particles that pass simultaneously through the hodoscopic plane of the detector. The minimum delay of the output signals relative to the input is 43 nsec. The duration of the output signals can be varied from 75 to 120 nsec.

  11. An anthropologist in parallel structure

    Directory of Open Access Journals (Sweden)

    Noelle Molé Liston

    2016-08-01

    Full Text Available. The essay examines the parallels between Molé Liston's studies on labor and precarity in Italy and the United States' anthropology job market. Probing the way economic shifts reshaped the field of the anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value of studying the hardships and daily lives of non-western populations in Europe.

  12. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  13. Wakefield calculations on parallel computers

    International Nuclear Information System (INIS)

    Schoessow, P.

    1990-01-01

    The use of parallelism in the solution of wakefield problems is illustrated for two different computer architectures (SIMD and MIMD). Results are given for finite difference codes which have been implemented on a Connection Machine and an Alliant FX/8 and which are used to compute wakefields in dielectric loaded structures. Benchmarks on code performance are presented for both cases. 4 refs., 3 figs., 2 tabs

  14. A parallel form of the Gudjonsson Suggestibility Scale.

    Science.gov (United States)

    Gudjonsson, G H

    1987-09-01

    The purpose of this study is twofold: (1) to present a parallel form of the Gudjonsson Suggestibility Scale (GSS, Form 1); (2) to study test-retest reliabilities of interrogative suggestibility. Three groups of subjects were administered the two suggestibility scales in a counterbalanced order. Group 1 (28 normal subjects) and Group 2 (32 'forensic' patients) completed both scales within the same testing session, whereas Group 3 (30 'forensic' patients) completed the two scales between one week and eight months apart. All the correlations were highly significant, giving support for high 'temporal consistency' of interrogative suggestibility.

  15. PARALLEL SPATIOTEMPORAL SPECTRAL CLUSTERING WITH MASSIVE TRAJECTORY DATA

    Directory of Open Access Journals (Sweden)

    Y. Z. Gu

    2017-09-01

    Full Text Available. Massive trajectory data contains a wealth of useful information and knowledge. Spectral clustering, which has been shown to be effective in finding clusters, has become an important clustering approach in trajectory data mining. However, traditional spectral clustering lacks a temporal extension and is limited in its applicability to large-scale problems due to its high computational complexity. This paper presents a parallel spatiotemporal spectral clustering method based on multiple acceleration strategies to make the algorithm more effective and efficient; its performance is demonstrated by experiments carried out on a massive taxi trajectory dataset from Wuhan, China.
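
The spatiotemporal extension can be sketched as an affinity matrix whose kernel combines the spatial and temporal gaps between trajectory points, with independent rows computed in parallel. The Gaussian-kernel form, the bandwidths, and the thread-based parallelism below are illustrative assumptions, not details taken from the paper.

```python
# Spatiotemporal affinity matrix for (x, y, t) trajectory points, rows
# computed in parallel. The eigen step of spectral clustering would
# follow on this matrix; kernel form and bandwidths are illustrative.
import math
from concurrent.futures import ThreadPoolExecutor

def st_affinity(p, q, sigma_s=1.0, sigma_t=1.0):
    """Similarity of two (x, y, t) points: spatial and temporal Gaussians."""
    d_s = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    d_t = (p[2] - q[2]) ** 2
    return math.exp(-d_s / (2 * sigma_s ** 2) - d_t / (2 * sigma_t ** 2))

def affinity_row(args):
    i, points = args
    return [st_affinity(points[i], q) for q in points]

def build_affinity(points, workers=4):
    # Rows are independent, so they parallelise trivially; this is the
    # kind of decomposition that lets the approach scale to massive data.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        rows = ex.map(affinity_row, ((i, points) for i in range(len(points))))
    return list(rows)

points = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.1), (5.0, 5.0, 9.0)]
W = build_affinity(points)
```

Points close in both space and time get affinities near 1, while points far apart in either dimension get affinities near 0, so temporal separation alone is enough to keep two spatially close points in different clusters.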

  16. Soudan 2 data acquisition and trigger electronics

    International Nuclear Information System (INIS)

    Dawson, J.; Haberichter, W.; Laird, R.

    1985-01-01

    The 1.1 kton Soudan 2 calorimetric drift-chamber detector is read out by 16K anode wires and 32K cathode strips. Preamps from each wire or strip are bussed together in groups of 8 to reduce the number of ADC channels. The resulting 6144 channels of ionization signal are flash-digitized every 200 ns and stored in RAM. The raw data hit patterns are continually compared with programmable trigger multiplicity and adjacency conditions. The data acquisition process is managed in a system of 24 parallel crates each containing an Intel 80C86 microprocessor, which supervises a pipe-lined data compactor, and allows transfer of the compacted data via CAMAC to the host computer. The 80C86's also manage the local trigger conditions and can perform some parallel processing of the data. Due to the scale of the system and multiplicity of identical channels, semi-custom gate array chips are used for much of the logic, utilizing 2.5 micron CMOS technology
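
The multiplicity/adjacency comparison can be modeled in software, although the real system evaluates it in hardware on the flash-digitized hit patterns. The thresholds and the exact adjacency rule below are assumptions chosen for illustration.

```python
# Illustrative software model of a programmable multiplicity/adjacency
# trigger: fire when the hit pattern contains at least `min_mult` hits
# in total and at least `min_adj` contiguous hits. Thresholds and the
# adjacency rule are assumptions; the detector implements this in logic.

def trigger(hits, min_mult=3, min_adj=2):
    """hits: list of 0/1 flags, one per channel along the readout plane."""
    if sum(hits) < min_mult:
        return False               # multiplicity condition failed
    run = best = 0
    for h in hits:                 # longest run of adjacent hits
        run = run + 1 if h else 0
        best = max(best, run)
    return best >= min_adj

fired = trigger([1, 1, 1, 0, 0])   # 3 hits, all adjacent
```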

  17. Soudan 2 data acquisition and trigger electronics

    International Nuclear Information System (INIS)

    Dawson, J.; Laird, R.; May, E.; Mondal, N.; Schlereth, J.; Solomey, N.; Thron, J.; Heppelmann, S.

    1985-01-01

    The 1.1 kton Soudan 2 detector is read out by 16K anode wires and 32K cathode strips. Preamps from each wire or strip are bussed together in groups of 8 to reduce the number of ADC channels. The resulting 6144 channels of ionization signal are flash-digitized every 150 ns and stored in RAM. The raw data hit patterns are continually compared with programmable trigger multiplicity and adjacency conditions. The data acquisition process is managed in a system of 24 parallel crates, each containing an Intel 8086 microprocessor, which supervises a pipe-lined data compactor and allows transfer of the compacted data via CAMAC to the host computer. The 8086's also manage the local trigger conditions and can perform some parallel processing of the data. Due to the scale of the system and multiplicity of identical channels, semi-custom gate array chips are used for much of the logic, utilizing 2.5 micron CMOS technology.

  18. Soudan 2 data acquisition and trigger electronics

    International Nuclear Information System (INIS)

    Dawson, J.; Heppelmann, S.; Laird, R.; May, E.; Mondal, N.; Schlereth, J.; Solomey, N.; Thron, J.

    1985-01-01

    The 1.1 kton Soudan 2 detector is read out by 16K anode wires and 32K cathode strips. Preamps from each wire or strip are bussed together in groups of 8 to reduce the number of ADC channels. The resulting 6144 channels of ionization signal are flash-digitized every 150 ns and stored in RAM. The raw data hit patterns are continually compared with programmable trigger multiplicity and adjacency conditions. The data acquisition process is managed in a system of 24 parallel crates, each containing an Intel 8086 microprocessor, which supervises a pipe-lined data compactor and allows transfer of the compacted data via CAMAC to the host computer. The 8086's also manage the local trigger conditions and can perform some parallel processing of the data. Due to the scale of the system and multiplicity of identical channels, semi-custom gate array chips are used for much of the logic, utilizing 2.5 micron CMOS technology.

  19. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  20. SHEEP TEMPORAL BONE

    Directory of Open Access Journals (Sweden)

    Kesavan

    2016-03-01

    Full Text Available INTRODUCTION: Human temporal bones are difficult to procure nowadays due to various ethical issues. The sheep temporal bone is a good alternative due to its morphological similarities, easy availability and low cost. Many middle ear exercises can be done easily, and instrument handling can be practised in procedures like myringoplasty, tympanoplasty, stapedotomy, facial nerve dissection and some middle ear implants. This is useful for resident training programmes.

  1. Parallel processing of genomics data

    Science.gov (United States)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, and the analysis of this flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face these issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze the data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data, able to cope with the high dimensionality of the data while achieving good response times. The proposed system finds statistically significant biological markers able to discriminate classes of patients that respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
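
The data-parallel pattern described here can be sketched minimally: split the markers across workers, compute a per-marker statistic, and collect the results. The allele-frequency difference below is an illustrative stand-in for the paper's statistical tests, and the thread-based workers stand in for whatever parallel backend is used in practice.

```python
# Data-parallel per-marker analysis: each marker is independent, so its
# statistic (here a simple allele-frequency difference between two
# patient classes, an illustrative stand-in) is computed in parallel.
from concurrent.futures import ThreadPoolExecutor

def marker_stat(case_genotypes, control_genotypes):
    """Difference in mean minor-allele count between the two classes."""
    f_case = sum(case_genotypes) / len(case_genotypes)
    f_ctrl = sum(control_genotypes) / len(control_genotypes)
    return f_case - f_ctrl

def analyze(markers, workers=4):
    # markers: list of (case_genotypes, control_genotypes) per SNP
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(lambda m: marker_stat(*m), markers))

markers = [
    ([2, 1, 2, 2], [0, 1, 0, 1]),  # marker differing strongly between classes
    ([1, 1, 0, 1], [1, 0, 1, 1]),  # marker similar in both classes
]
stats = analyze(markers)
```

Markers that discriminate the two classes stand out with large absolute statistics, which is the basis for the marker selection described above.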

  2. Applications of Temporal Reasoning to Intensive Care Units

    Directory of Open Access Journals (Sweden)

    J. M. Juarez

    2010-01-01

    Full Text Available Intensive Care Units (ICUs are hospital departments that focus on the evolution of patients. In this scenario, the temporal dimension plays an essential role in understanding the state of the patients from their temporal information. The development of methods for the acquisition, modelling, reasoning and knowledge discovery of temporal information is, therefore, useful to exploit the large amount of temporal data recorded daily in the ICU. During the past decades, some subfields of Artificial Intelligence have been devoted to the study of temporal models and techniques to solve generic problems and towards their practical applications in the medical domain. The main goal of this paper is to present our view of some aspects of practical problems of temporal reasoning in the ICU field, and to describe our practical experience in the field in the last decade. This paper provides a non-exhaustive review of some of the efforts made in the field and our particular contributions in the development of temporal reasoning methods to partially solve some of these problems. The results are a set of software tools that help physicians to better understand the patient's temporal evolution.

  3. Fast-Acquisition/Weak-Signal-Tracking GPS Receiver for HEO

    Science.gov (United States)

    Winternitz, Luke; Boegner, Greg; Sirotzky, Steve

    2004-01-01

    A report discusses the technical background and design of the Navigator Global Positioning System (GPS) receiver, a radiation-hardened receiver intended for use aboard spacecraft. Navigator is capable of weak-signal acquisition and tracking as well as much faster acquisition of strong or weak signals with no a priori knowledge or external aiding. Weak-signal acquisition and tracking enables GPS use in high Earth orbits (HEO), and fast acquisition allows the receiver to remain unpowered until needed in any orbit. Signal acquisition and signal tracking are, respectively, the processes of finding and demodulating a signal. Acquisition is the more computationally difficult process. Previous GPS receivers employ the method of sequentially searching the two-dimensional signal parameter space (code phase and Doppler). Navigator exploits properties of the Fourier transform in a massively parallel search for the GPS signal. This method results in far faster acquisition times [in the lab, 12 GPS satellites have been acquired with no a priori knowledge in a low-Earth-orbit (LEO) scenario in less than one second]. Modeling has shown that Navigator will be capable of acquiring signals down to 25 dB-Hz, appropriate for HEO missions. Navigator is built using the radiation-hardened ColdFire microprocessor, with the most computationally intense functions housed in dedicated field-programmable gate arrays. The high performance of the algorithm and of the receiver as a whole is made possible by optimizing computational efficiency and carefully weighing tradeoffs among the sampling rate, data format, and data-path bit width.
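
The Fourier-transform trick behind the parallel search is circular correlation: correlating the received samples against a replica code for every code phase at once via corr = IDFT(DFT(signal) · conj(DFT(code))). The sketch below uses a naive O(n²) DFT in place of the FFT and a short ±1 toy code; code length, delay, and names are illustrative, not the receiver's actual parameters.

```python
# FFT-style code-phase search: one transform pair evaluates the
# correlation at all code phases simultaneously, instead of n
# sequential correlations. Naive DFT used here for self-containment.
import cmath

def dft(x, inverse=False):
    n = len(x)
    sign = 1 if inverse else -1
    out = []
    for k in range(n):
        s = sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
                for t in range(n))
        out.append(s / n if inverse else s)
    return out

def acquire(signal, code):
    """Return the code phase whose correlation power is largest."""
    S = dft(signal)
    C = dft(code)
    corr = dft([s * c.conjugate() for s, c in zip(S, C)], inverse=True)
    powers = [abs(v) ** 2 for v in corr]
    return powers.index(max(powers))

code = [1, -1, 1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, -1, -1, -1]
delay = 5
signal = code[-delay:] + code[:-delay]   # same code, circularly delayed
phase = acquire(signal, code)
```

Searching all n code phases costs one forward/inverse transform pair rather than n sequential correlations, which is the source of the sub-second acquisition times quoted above.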

  4. Temporal integration of sequential auditory events: silent period in sound pattern activates human planum temporale.

    Science.gov (United States)

    Mustovic, Henrietta; Scheffler, Klaus; Di Salle, Francesco; Esposito, Fabrizio; Neuhoff, John G; Hennig, Jürgen; Seifritz, Erich

    2003-09-01

    Temporal integration is a fundamental process that the brain carries out to construct coherent percepts from serial sensory events. This process critically depends on the formation of memory traces reconciling past with present events and is particularly important in the auditory domain where sensory information is received both serially and in parallel. It has been suggested that buffers for transient auditory memory traces reside in the auditory cortex. However, previous studies investigating "echoic memory" did not distinguish between brain response to novel auditory stimulus characteristics on the level of basic sound processing and a higher level involving matching of present with stored information. Here we used functional magnetic resonance imaging in combination with a regular pattern of sounds repeated every 100 ms and deviant interspersed stimuli of 100-ms duration, which were either brief presentations of louder sounds or brief periods of silence, to probe the formation of auditory memory traces. To avoid interaction with scanner noise, the auditory stimulation sequence was implemented into the image acquisition scheme. Compared to increased loudness events, silent periods produced specific neural activation in the right planum temporale and temporoparietal junction. Our findings suggest that this area posterior to the auditory cortex plays a critical role in integrating sequential auditory events and is involved in the formation of short-term auditory memory traces. This function of the planum temporale appears to be fundamental in the segregation of simultaneous sound sources.

  5. Neuronal representations of stimulus associations develop in the temporal lobe during learning.

    Science.gov (United States)

    Messinger, A; Squire, L R; Zola, S M; Albright, T D

    2001-10-09

    Visual stimuli that are frequently seen together become associated in long-term memory, such that the sight of one stimulus readily brings to mind the thought or image of the other. It has been hypothesized that acquisition of such long-term associative memories proceeds via the strengthening of connections between neurons representing the associated stimuli, such that a neuron initially responding only to one stimulus of an associated pair eventually comes to respond to both. Consistent with this hypothesis, studies have demonstrated that individual neurons in the primate inferior temporal cortex tend to exhibit similar responses to pairs of visual stimuli that have become behaviorally associated. In the present study, we investigated the role of these areas in the formation of conditional visual associations by monitoring the responses of individual neurons during the learning of new stimulus pairs. We found that many neurons in both area TE and perirhinal cortex came to elicit more similar neuronal responses to paired stimuli as learning proceeded. Moreover, these neuronal response changes were learning-dependent and proceeded with an average time course that paralleled learning. This experience-dependent plasticity of sensory representations in the cerebral cortex may underlie the learning of associations between objects.

  6. Foreign Acquisition, Wages and Productivity

    DEFF Research Database (Denmark)

    Bandick, Roger

    This paper studies the effect of foreign acquisition on wages and total factor productivity (TFP) in the years following a takeover by using unique detailed firm-level data for Sweden for the period 1993-2002. The paper takes particular account of the potential endogeneity of the acquisition...

  7. Foreign Acquisition, Wages and Productivity

    DEFF Research Database (Denmark)

    Bandick, Roger

    2011-01-01

    This paper studies the effect of foreign acquisition on wages and total factor productivity (TFP) in the years following a takeover by using unique detailed firm-level data for Sweden for the period 1993-2002. The paper takes particular account of the potential endogeneity of the acquisition...

  8. Human parallels to experimental myopia?

    DEFF Research Database (Denmark)

    Fledelius, Hans C; Goldschmidt, Ernst; Haargaard, Birgitte

    2014-01-01

    Raviola and Wiesel's monkey eyelid suture studies of the 1970s laid the cornerstone for the experimental myopia science undertaken since then. The aim has been to clarify the basic humoral and neuronal mechanisms behind induced myopization, its eye tissue transmitters in particular. Besides acquiring new and basic knowledge, the practical object of the research is to reduce the burden of human myopia around the world. Acquisition and cost of optical correction is one issue, but associated morbidity counts more, with its global load of myopia-associated visual loss and blindness. The object ... serve as inspiration to the laboratory research, which aims at solving the basic enigmas on a tissue level....

  9. Developing Acquisition IS Integration Capabilities

    DEFF Research Database (Denmark)

    Wynne, Peter J.

    2016-01-01

    An under-researched, yet critical challenge of Mergers and Acquisitions (M&A) is what to do with the two organisations' information systems (IS) post-acquisition. Commonly referred to as acquisition IS integration, existing theory suggests that to integrate the information systems successfully, an acquiring company must leverage two high-level capabilities: diagnosis and integration execution. Through a case study, this paper identifies how a novice acquirer develops these capabilities in anticipation of an acquisition by examining its use of learning processes. The study finds the novice acquirer applies trial-and-error, experimental, and vicarious learning processes, while actively avoiding improvisational learning. The results of the study contribute to the acquisition IS integration literature specifically by exploring it from a new perspective: the learning processes used by novice acquirers...

  10. Triple Arterial Phase MR Imaging with Gadoxetic Acid Using a Combination of Contrast Enhanced Time Robust Angiography, Keyhole, and Viewsharing Techniques and Two-Dimensional Parallel Imaging in Comparison with Conventional Single Arterial Phase

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jeong Hee [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Lee, Jeong Min [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03087 (Korea, Republic of); Yu, Mi Hye [Department of Radiology, Konkuk University Medical Center, Seoul 05030 (Korea, Republic of); Kim, Eun Ju [Philips Healthcare Korea, Seoul 04342 (Korea, Republic of); Han, Joon Koo [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03087 (Korea, Republic of)

    2016-11-01

    To determine whether triple arterial phase acquisition via a combination of Contrast Enhanced Time Robust Angiography, keyhole, temporal viewsharing and parallel imaging can improve arterial phase acquisition with higher spatial resolution than single arterial phase gadoxetic acid-enhanced magnetic resonance imaging (MRI). Informed consent was waived for this retrospective study by our Institutional Review Board. In 752 consecutive patients who underwent gadoxetic acid-enhanced liver MRI, either single (n = 587) or triple (n = 165) arterial phases were obtained in a single breath-hold under MR fluoroscopy guidance. Arterial phase timing was assessed, and the degree of motion was rated on a four-point scale. The percentage of patients achieving the late arterial phase without significant motion was compared between the two methods using the χ² test. The late arterial phase was captured at least once in 96.4% (159/165) of the triple arterial phase group and in 84.2% (494/587) of the single arterial phase group (p < 0.001). Significant motion artifacts (score ≤ 2) were observed in 13.3% (22/165), 1.2% (2/165), and 4.8% (8/165) of the 1st, 2nd, and 3rd scans of triple arterial phase acquisitions, and in 6.0% (35/587) of single phase acquisitions. Thus, the late arterial phase without significant motion artifacts was captured in 96.4% (159/165) of the triple arterial phase group and in 79.9% (469/587) of the single arterial phase group (p < 0.001). Triple arterial phase imaging may reliably provide adequate arterial phase imaging for gadoxetic acid-enhanced liver MRI.
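
The group comparison above is a chi-square test on two proportions: 159/165 triple-phase versus 469/587 single-phase scans captured the late arterial phase without significant motion. A minimal 2×2 Pearson chi-square computed from scratch reproduces the reported significance; the counts come from the abstract, while the helper function itself is an illustrative sketch.

```python
# Pearson chi-square for a 2x2 contingency table [[a, b], [c, d]],
# comparing "without significant motion" success counts between the
# triple-phase and single-phase groups. No continuity correction.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat

# success = late arterial phase captured without significant motion
stat = chi_square_2x2(159, 165 - 159, 469, 587 - 469)
```

With these counts the statistic comes out near 25, comfortably above 10.83, the χ² critical value for p < 0.001 at one degree of freedom, consistent with the reported p < 0.001.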

  11. Triple arterial phase MR imaging with gadoxetic acid using a combination of contrast enhanced time robust angiography, keyhole, and viewsharing techniques and two-dimensional parallel imaging in comparison with conventional single arterial phase

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jeong Hee; Lee, Jeong Min; Han, Joon Koo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of); Yu, Mi Hye [Dept. of Radiology, Konkuk University Medical Center, Seoul (Korea, Republic of); Kim, Eun Ju [Philips Healthcare Korea, Seoul (Korea, Republic of)

    2016-07-15

    To determine whether triple arterial phase acquisition via a combination of Contrast Enhanced Time Robust Angiography, keyhole, temporal viewsharing and parallel imaging can improve arterial phase acquisition with higher spatial resolution than single arterial phase gadoxetic acid-enhanced magnetic resonance imaging (MRI). Informed consent was waived for this retrospective study by our Institutional Review Board. In 752 consecutive patients who underwent gadoxetic acid-enhanced liver MRI, either single (n = 587) or triple (n = 165) arterial phases were obtained in a single breath-hold under MR fluoroscopy guidance. Arterial phase timing was assessed, and the degree of motion was rated on a four-point scale. The percentage of patients achieving the late arterial phase without significant motion was compared between the two methods using the χ² test. The late arterial phase was captured at least once in 96.4% (159/165) of the triple arterial phase group and in 84.2% (494/587) of the single arterial phase group (p < 0.001). Significant motion artifacts (score ≤ 2) were observed in 13.3% (22/165), 1.2% (2/165), and 4.8% (8/165) of the 1st, 2nd, and 3rd scans of triple arterial phase acquisitions, and in 6.0% (35/587) of single phase acquisitions. Thus, the late arterial phase without significant motion artifacts was captured in 96.4% (159/165) of the triple arterial phase group and in 79.9% (469/587) of the single arterial phase group (p < 0.001). Triple arterial phase imaging may reliably provide adequate arterial phase imaging for gadoxetic acid-enhanced liver MRI.

  12. Data driven parallelism in experimental high energy physics applications

    International Nuclear Information System (INIS)

    Pohl, M.

    1987-01-01

    I present global design principles for the implementation of high energy physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of high energy physics tasks is identified with granularity varying from a few times 10⁸ instructions all the way down to a few times 10⁴ instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this - yet preserving the necessary portability of the code - I propose a computational model with purely data driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme allows the internal layout of the data structure to be mapped closely onto the layout of local and shared memory in a parallel architecture. It thus allows the application to be optimized with respect to synchronization as well as data transport overheads. I present a coarse top-level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms). (orig.)
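
The SPMD model described here can be sketched in a few lines: every worker runs the same program, and concurrency is purely data-driven because each worker claims the next unprocessed part of the central data structure under a simple lock, so load balancing happens naturally. The event records and the "processing" step below are illustrative placeholders, not the paper's actual data model.

```python
# Data-driven SPMD sketch: identical workers claim unprocessed events
# from a shared structure via a lock, then process outside the lock.
import threading

events = [{"id": i, "raw": list(range(i, i + 4)), "done": False}
          for i in range(20)]
lock = threading.Lock()            # lock construct on the shared structure
results = {}

def worker():
    while True:
        with lock:                 # claim the next unprocessed event
            ev = next((e for e in events if not e["done"]), None)
            if ev is None:
                return             # nothing left: worker exits
            ev["done"] = True
        # the expensive part runs outside the lock, in parallel
        results[ev["id"]] = sum(ev["raw"])

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because workers pull work rather than being assigned it, a slow worker simply claims fewer events, which is the natural load balancing the abstract refers to.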

  13. Distributed parallel computing in stochastic modeling of groundwater systems.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
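
The batch pattern described above can be sketched minimally: many stochastic realizations are generated and run independently, then aggregated into a probabilistic result such as a capture-zone probability. The toy forward model, the threshold, and the thread-based workers below are illustrative assumptions standing in for the actual MODFLOW runs and the Java Parallel Processing Framework.

```python
# Embarrassingly parallel Monte Carlo: independent realizations map
# cleanly onto parallel workers, and their results are aggregated
# into a probability. The "model" here is a trivial stand-in.
import random
from concurrent.futures import ThreadPoolExecutor

def run_realization(seed):
    """Stand-in for one forward-model run on one random conductivity field."""
    rng = random.Random(seed)          # reproducible per-realization stream
    conductivity = rng.lognormvariate(0.0, 0.5)
    # toy rule: the cell lies in the well's capture zone if the sampled
    # conductivity exceeds a threshold
    return conductivity > 1.0

def capture_probability(n_realizations, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as ex:
        captured = list(ex.map(run_realization, range(n_realizations)))
    return sum(captured) / n_realizations

p = capture_probability(500)
```

Since each realization is independent, the speed-up reported above (500 realizations in 3% of the serial time on 50 threads) follows directly from distributing this loop across cluster nodes.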

  14. Data driven parallelism in experimental high energy physics applications

    Science.gov (United States)

    Pohl, Martin

    1987-08-01

    I present global design principles for the implementation of High Energy Physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of High Energy Physics tasks is identified with granularity varying from a few times 10^8 instructions all the way down to a few times 10^4 instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this - yet preserving the necessary portability of the code - I propose a computational model with purely data driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme allows the internal layout of the data structure to be mapped closely onto the layout of local and shared memory in a parallel architecture. It thus allows the application to be optimized with respect to synchronization as well as data transport overheads. I present a coarse top level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms).
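    A minimal sketch of the data-driven SPMD idea, assuming a shared in-memory event store (illustrative only, not the paper's implementation): every worker runs the same program and claims work by taking non-blocking locks on parts of the central data structure, so load balancing emerges naturally because idle workers simply claim the next unprocessed part.

```python
# Hedged sketch of data-driven SPMD concurrency: identical workers
# coordinate asynchronously via per-part locks on a central data structure.
import threading

class EventStore:
    def __init__(self, events):
        self.events = list(events)          # the central data structure
        self.locks = [threading.Lock() for _ in self.events]
        self.done = [False] * len(self.events)

    def claim(self):
        """Return the index of an unprocessed event, holding its lock, or None."""
        for i, lock in enumerate(self.locks):
            if not self.done[i] and lock.acquire(blocking=False):
                if self.done[i]:            # re-check under the lock
                    lock.release()
                    continue
                return i
        return None

def worker(store, results):
    # every worker runs this same program (SPMD); work is purely data driven
    while True:
        i = store.claim()
        if i is None:
            return
        results[i] = store.events[i] * 2    # stand-in for event reconstruction
        store.done[i] = True
        store.locks[i].release()

def analyse(events, n_workers=4):
    store = EventStore(events)
    results = [None] * len(events)
    threads = [threading.Thread(target=worker, args=(store, results))
               for _ in range(n_workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return results
```

    No central scheduler assigns work: whichever worker is free grabs the next unlocked, unfinished part, which is the "load balancing occurs naturally" property the abstract describes.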

  15. 48 CFR 873.105 - Acquisition planning.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Acquisition planning. 873.105 Section 873.105 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS DEPARTMENT... planning. (a) Acquisition planning is an indispensable component of the total acquisition process. (b) For...

  16. 48 CFR 34.004 - Acquisition strategy.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Acquisition strategy. 34... CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION General 34.004 Acquisition strategy. The program manager, as specified in agency procedures, shall develop an acquisition strategy tailored to the particular...

  17. 48 CFR 3034.004 - Acquisition strategy.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Acquisition strategy. 3034.004 Section 3034.004 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Acquisition strategy. See (HSAR) 48 CFR 3009.570 for policy applicable to acquisition strategies that consider...

  18. 48 CFR 434.004 - Acquisition strategy.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Acquisition strategy. 434.004 Section 434.004 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE SPECIAL CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION General 434.004 Acquisition strategy. (a) The program...

  19. 48 CFR 234.004 - Acquisition strategy.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Acquisition strategy. 234..., DEPARTMENT OF DEFENSE SPECIAL CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION 234.004 Acquisition strategy. (1) See 209.570 for policy applicable to acquisition strategies that consider the use of lead system...

  20. Comparison of continuous with step and shoot acquisition in SPECT scanning

    International Nuclear Information System (INIS)

    McCarthy, L.; Cotterill, T.; Chu, J.M.G.

    1998-01-01

    Full text: Following the recent advent of continuous acquisition for performing SPECT scanning, it was decided to compare the commonly used Step and Shoot mode of acquisition with the new continuous acquisition mode. The aim of the study is to assess any difference in resolution between the resulting images acquired using the two modes of acquisition. Sequential series of studies were performed on a SPECT phantom using both modes of acquisition. Separate sets of data were collected for both high resolution parallel hole and ultra high resolution fan beam collimators. Clinical data were collected on patients undergoing routine gallium, 99m Tc-MDP bone and 99m Tc-HMPAO brain studies. Separate sequential acquisitions in both modes were collected for each patient. The sequence of collection was also alternated. Reconstruction was performed utilising the same parameters for each acquisition. The reconstructed data were assessed visually by blinded observers to detect differences in resolution and image quality. No significant difference was detected between the studies collected by the two acquisition modes. The time saved by continuous acquisition could be an advantage

  1. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    Full Text Available The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that have been ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples illustrating parallel programming approaches using the Force are also presented.

  2. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities...... for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler’s ability to generate loop-parallel code. We use this compilation system to modify two sequential...... benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...
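    The general refactoring such a feedback system guides can be sketched in a language-neutral way (here in Python, rather than the compiled languages the paper's compilers target; this is an analogy, not the paper's tool): a loop written with an accumulation dependency that blocks parallelization, and the same computation restructured so iterations are independent and only the final reduction is sequential.

```python
# Illustrative analogue of loop-parallelizing refactoring: remove the
# loop-carried dependency, then map the independent iterations onto workers.
from concurrent.futures import ThreadPoolExecutor

def sequential(xs):
    total = 0.0
    for x in xs:                 # loop-carried dependency on `total`
        total += x * x
    return total

def parallel(xs, workers=4):
    # refactored: each iteration is independent; the reduction happens last
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda x: x * x, xs))
```

    The two versions compute the same result; the point is structural: once iterations no longer share mutable state, a compiler (or a runtime pool, as here) is free to execute them in any order or in parallel.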

  3. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics- Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, which is also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  4. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96)......Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96)...

  5. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  6. Parallel computation of rotating flows

    DEFF Research Database (Denmark)

    Lundin, Lars Kristian; Barker, Vincent A.; Sørensen, Jens Nørkær

    1999-01-01

    This paper deals with the simulation of 3‐D rotating flows based on the velocity‐vorticity formulation of the Navier‐Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two‐step process...... is that of solving a singular, large, sparse, over‐determined linear system of equations, and the iterative method CGLS is applied for this purpose. We discuss some of the mathematical and numerical aspects of this procedure and report on the performance of our software on a wide range of parallel computers....
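    The abstract mentions solving an over-determined linear system with the iterative method CGLS (conjugate gradient applied to the normal equations). A bare, dense-matrix sketch of that iteration follows, purely illustrative of the algorithm rather than the paper's sparse parallel implementation:

```python
# Minimal CGLS sketch: iteratively minimize ||Ax - b||_2 for an
# over-determined m x n system (m >= n) without forming A^T A explicitly.
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def matvec_T(A, y):
    n = len(A[0])
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(n)]

def cgls(A, b, iters=50):
    n = len(A[0])
    x = [0.0] * n
    r = list(b)                            # residual b - A x, with x = 0
    s = matvec_T(A, r)
    p = list(s)
    gamma = sum(v * v for v in s)
    for _ in range(iters):
        q = matvec(A, p)
        qq = sum(v * v for v in q)
        if qq == 0.0:                      # converged (or b == 0)
            break
        alpha = gamma / qq
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = matvec_T(A, r)
        gamma_new = sum(v * v for v in s)
        if gamma_new < 1e-28:              # normal-equations residual tiny
            break
        p = [si + (gamma_new / gamma) * pi for si, pi in zip(s, p)]
        gamma = gamma_new
    return x
```

    Each iteration needs only one product with A and one with its transpose, which is why CGLS suits large sparse systems like the one the paper describes.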

  7. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....

  9. Trigger and data acquisition

    CERN Multimedia

    CERN. Geneva; Gaspar, C

    2001-01-01

    Past LEP experiments generated data at 0.5 MByte/s from particle detectors with over a quarter of a million readout channels. The process of reading out the electronic channels, treating them, and storing the data produced by each collision for further analysis by the physicists is called "Data Acquisition". Not all beam crossings produce interesting physics "events"; picking the interesting ones is the task of the "Trigger" system. In order to make sure that the data is collected in good conditions the experiment's operation has to be constantly verified. In all, at LEP experiments over 100 000 parameters were monitored, controlled, and synchronized by the "Monitoring and control" system. In the future, LHC experiments will produce as much data in a single day as a LEP detector did in a full year's running, with a raw data rate of 10 - 100 MBytes/s, and will have to cope with some 800 million proton-proton collisions a second; of these collisions only one in 100 million million is interesting for new particle se...

  10. DATA ACQUISITION (DAQ)

    CERN Multimedia

    Gerry Bauer

    The CMS Storage Manager System The tail-end of the CMS Data Acquisition System is the Storage Manager (SM), which collects output from the HLT and stages the data at Cessy for transfer to its ultimate home in the Tier-0 center. An SM system has been used by CMS for several years with the steadily evolving software within the XDAQ framework, but until relatively recently, only with provisional hardware. The SM is well known to much of the collaboration through the ‘MiniDAQ’ system, which served as the central DAQ system in 2007, and lives on in 2008 for dedicated sub-detector commissioning. Since March of 2008 a first phase of the final hardware was commissioned and used in CMS Global Runs. The system originally planned for 2008 aimed at recording ~1MB events at a few hundred Hz. The building blocks to achieve this are based on Nexsan's SATABeast storage array - a device housing up to 40 disks of 1TB each, and possessing two controllers each capable of almost 200 MB/sec throughput....

  11. IPNS data acquisition system

    International Nuclear Information System (INIS)

    Worlton, T.G.; Crawford, R.K.; Haumann, J.R.; Daly, R.

    1983-01-01

    The IPNS Data Acquisition System (DAS) was designed to be reliable, flexible, and easy to use. It provides unique methods of acquiring Time-of-Flight neutron scattering data and allows collection, storage, display, and analysis of very large data arrays with a minimum of user input. Data can be collected from normal detectors, linear position-sensitive detectors, and/or area detectors. The data can be corrected for time-delays and can be time-focussed before being binned. Corrections to be made to the data and selection of inputs to be summed are entirely software controlled, as are the time ranges and resolutions for each detector element. Each system can be configured to collect data into millions of channels. Maximum continuous data rates are greater than 2000 counts/sec with full corrections, or 16,000 counts/sec for the simpler binning scheme used with area detectors. Live displays of the data may be made as a function of time, wavevector, wavelength, lattice spacing, or energy. In most cases the complete data analysis can be done on the DAS host computer. The IPNS DAS became operational for four neutron scattering instruments in 1981 and has since been expanded to seven instruments

  12. Advanced data acquisition system for SEVAN

    Science.gov (United States)

    Chilingaryan, Suren; Chilingarian, Ashot; Danielyan, Varuzhan; Eppler, Wolfgang

    2009-02-01

    Huge magnetic clouds of plasma emitted by the Sun dominate intense geomagnetic storm occurrences and simultaneously they are correlated with variations of spectra of particles and nuclei in the interplanetary space, ranging from subthermal solar wind ions to GeV-energy galactic cosmic rays. For a reliable and fast forecast of Space Weather, world-wide networks of particle detectors are operated at different latitudes, longitudes, and altitudes. Based on a new type of hybrid particle detector developed in the context of the International Heliophysical Year (IHY 2007) at Aragats Space Environmental Center (ASEC) we start to prepare hardware and software for the first sites of the Space Environmental Viewing and Analysis Network (SEVAN). In the paper the architecture of the newly developed data acquisition system for SEVAN is presented. We plan to run the SEVAN network under one and the same data acquisition system, enabling fast integration of data for on-line analysis of Solar Flare Events. The Advanced Data Acquisition System (ADAS) is designed as a distributed network of uniform components connected by Web Services. Its main component is the Unified Readout and Control Server (URCS), which controls the underlying electronics by means of detector-specific drivers and makes a preliminary analysis of the on-line data. The lower level components of URCS are implemented in C and a fast binary representation is used for the data exchange with electronics. However, after preprocessing, the data are converted to a self-describing hybrid XML/Binary format. To achieve better reliability all URCS run on embedded computers without disks and fans, to avoid the limited lifetime of moving mechanical parts. The data storage is carried out by means of high performance servers working in parallel to provide data security. These servers periodically retrieve the data from all URCS and store it in a MySQL database.
The implementation of the control interface is based on high level

  13. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

    Full Text Available The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding times and the effectiveness of parallelization.

  14. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  15. An Introduction to Parallel Computation R

    Indian Academy of Sciences (India)

    How are they programmed? This article provides an introduction. A parallel computer is a network of processors built for ... and have been used to solve problems much faster than a single ... in parallel computer design is to select an organization which ..... The most ambitious approach to parallel computing is to develop.

  16. Comparison of parallel viscosity with neoclassical theory

    International Nuclear Information System (INIS)

    Ida, K.; Nakajima, N.

    1996-04-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for plasmas heated with tangential NBI in the CHS heliotron/torsatron device in order to estimate the parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s). (author)

  17. Temporal network epidemiology

    CERN Document Server

    Holme, Petter

    2017-01-01

    This book covers recent developments in epidemic process models and related data on temporally varying networks. It is widely recognized that contact networks are indispensable for describing, understanding, and intervening to stop the spread of infectious diseases in human and animal populations; “network epidemiology” is an umbrella term to describe this research field. More recently, contact networks have been recognized as being highly dynamic. This observation, also supported by an increasing amount of new data, has led to research on temporal networks, a rapidly growing area. Changes in network structure are often informed by epidemic (or other) dynamics, in which case they are referred to as adaptive networks. This volume gathers contributions by prominent authors working in temporal and adaptive network epidemiology, a field essential to understanding infectious diseases in real society.

  18. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Valencia, Frank Dan

    Concurrent constraint programming (ccp) is a formalism for concurrency in which agents interact with one another by telling (adding) and asking (reading) information in a shared medium. Temporal ccp extends ccp by allowing agents to be constrained by time conditions. This dissertation studies...... temporal ccp by developing a process calculus called ntcc. The ntcc calculus generalizes the tcc model, the latter being a temporal ccp model for deterministic and synchronous timed reactive systems. The calculus is built upon few basic ideas but it captures several aspects of timed systems. As tcc, ntcc...... structures, robotic devices, multi-agent systems and music applications. The calculus is provided with a denotational semantics that captures the reactive computations of processes in the presence of arbitrary environments. The denotation is proven to be fully-abstract for a substantial fragment...

  19. GRASS GIS: The first Open Source Temporal GIS

    Science.gov (United States)

    Gebbert, Sören; Leppelt, Thomas

    2015-04-01

    GRASS GIS is a full featured, general purpose Open Source geographic information system (GIS) with raster, 3D raster and vector processing support [1]. Recently, time was introduced as a new dimension that transformed GRASS GIS into the first Open Source temporal GIS with comprehensive spatio-temporal analysis, processing and visualization capabilities [2]. New spatio-temporal data types were introduced in GRASS GIS version 7 to manage raster, 3D raster and vector time series. These new data types are called space time datasets. They are designed to efficiently handle hundreds of thousands of time stamped raster, 3D raster and vector map layers of any size. Time stamps can be defined as time intervals or time instances in Gregorian calendar time or relative time. Space time datasets simplify the processing and analysis of large time series in GRASS GIS, since these new data types are used as input and output parameters in temporal modules. The handling of space time datasets is therefore equal to the handling of raster, 3D raster and vector map layers in GRASS GIS. A new dedicated Python library, the GRASS GIS Temporal Framework, was designed to implement the spatio-temporal data types and their management. The framework provides the functionality to efficiently handle hundreds of thousands of time stamped map layers and their spatio-temporal topological relations. The framework supports reasoning based on the temporal granularity of space time datasets as well as their temporal topology. It was designed in conjunction with the PyGRASS [3] library to support parallel processing of large datasets, which has a long tradition in GRASS GIS [4,5]. We will present a subset of more than 40 temporal modules that were implemented based on the GRASS GIS Temporal Framework, PyGRASS and the GRASS GIS Python scripting library. These modules provide a comprehensive temporal GIS tool set. The functionality ranges from space time dataset and time stamped map layer management
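    To make the notion of temporal topology concrete, here is a minimal, purely illustrative classifier of interval relations between time-stamped layers. This is not the GRASS GIS Temporal Framework API; the function and relation names are ad hoc, and only a simplified subset of the interval relations is distinguished.

```python
# Illustrative temporal-topology check between two time-stamped map layers,
# each represented as a (start, end) pair of datetimes.
from datetime import datetime

def relation(a, b):
    """Classify the temporal relation of two (start, end) intervals.

    Simplified subset: touching or partially overlapping intervals are
    both reported as "overlaps".
    """
    a0, a1 = a
    b0, b1 = b
    if a1 < b0:
        return "precedes"
    if b1 < a0:
        return "follows"
    if (a0, a1) == (b0, b1):
        return "equal"
    if b0 <= a0 and a1 <= b1:
        return "during"
    if a0 <= b0 and b1 <= a1:
        return "contains"
    return "overlaps"
```

    A temporal framework extends this pairwise reasoning to hundreds of thousands of layers, which is why efficient indexing of the topological relations matters.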

  20. Components of action potential repolarization in cerebellar parallel fibres.

    Science.gov (United States)

    Pekala, Dobromila; Baginskas, Armantas; Szkudlarek, Hanna J; Raastad, Morten

    2014-11-15

    Repolarization of the presynaptic action potential is essential for transmitter release, excitability and energy expenditure. Little is known about repolarization in thin, unmyelinated axons forming en passant synapses, which represent the most common type of axons in the mammalian brain's grey matter. We used rat cerebellar parallel fibres, an example of typical grey matter axons, to investigate the effects of K(+) channel blockers on repolarization. We show that repolarization is composed of a fast tetraethylammonium (TEA)-sensitive component, determining the width and amplitude of the spike, and a slow margatoxin (MgTX)-sensitive depolarized after-potential (DAP). These two components could be recorded at the granule cell soma as antidromic action potentials and from the axons with a newly developed miniaturized grease-gap method. A considerable proportion of fast repolarization remained in the presence of TEA, MgTX, or both. This residual was abolished by the addition of quinine. The importance of proper control of fast repolarization was demonstrated by somatic recordings of antidromic action potentials. In these experiments, the relatively broad K(+) channel blocker 4-aminopyridine reduced the fast repolarization, resulting in bursts of action potentials forming on top of the DAP. We conclude that repolarization of the action potential in parallel fibres is supported by at least three groups of K(+) channels. Differences in their temporal profiles allow relatively independent control of the spike and the DAP, whereas overlap of their temporal profiles provides robust control of axonal bursting properties.

  1. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
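    The quicksort example above can be made concrete: picking the pivot at random turns the worst case into a low-probability event, so the expected O(n log n) bound holds for every input, including already-sorted ones that defeat a fixed first-element pivot. A short sketch:

```python
# Randomized quicksort: the random pivot choice makes the expected running
# time O(n log n) for every input, not just "average" inputs.
import random

def randomized_quicksort(xs):
    if len(xs) <= 1:
        return list(xs)
    pivot = random.choice(xs)            # the randomization step
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```

    With a fixed first-element pivot, a reverse-sorted input forces n levels of recursion; with a random pivot, no particular input can reliably trigger that behaviour.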

  2. Xyce parallel electronic simulator design.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  3. Empresas de trabajo temporal

    OpenAIRE

    Chico Abad, Virginia

    2015-01-01

    Temporary employment agencies have gained increasing relevance owing to the structure of society and the economy. The entry into force of Law 14/1994, which regulates temporary employment agencies, incorporated into the Spanish legal system a type of company whose activity had already spread in other countries across Europe. The general idea revolves around the flexibility of a new economic and organizational framework, and demands of companies a capa...

  4. Medial temporal lobe

    International Nuclear Information System (INIS)

    Silver, A.J.; Cross, D.T.; Friedman, D.P.; Bello, J.A.; Hilal, S.K.

    1989-01-01

    To better define the MR appearance of hippocampal sclerosis, the authors have reviewed over 500 MR coronal images of the temporal lobes. Many cysts were noted, which analysis showed to be of choroid-fissure (arachnoid) origin. Their association with seizures was low. A few nontumorous, static, medial temporal lesions, noted on T2-weighted coronal images, were poorly visualized on T1-weighted images and did not enhance with gadolinium. The margins were irregular, involved the hippocampus, and were often associated with focal atrophy. The lesions usually were associated with seizure disorders and specific electroencephalographic changes, and the authors believe they represented hippocampal sclerosis

  5. Temporal abstraction and temporal Bayesian networks in clinical domains: a survey.

    Science.gov (United States)

    Orphanou, Kalia; Stassopoulou, Athena; Keravnou, Elpida

    2014-03-01

    Temporal abstraction (TA) of clinical data aims to abstract and interpret clinical data into meaningful higher-level interval concepts. Abstracted concepts are used for diagnostic, prediction and therapy planning purposes. On the other hand, temporal Bayesian networks (TBNs) are temporal extensions of the known probabilistic graphical models, Bayesian networks. TBNs can represent temporal relationships between events and their state changes, or the evolution of a process, through time. This paper offers a survey on techniques/methods from these two areas that were used independently in many clinical domains (e.g. diabetes, hepatitis, cancer) for various clinical tasks (e.g. diagnosis, prognosis). A main objective of this survey, in addition to presenting the key aspects of TA and TBNs, is to point out important benefits from a potential integration of TA and TBNs in medical domains and tasks. The motivation for integrating these two areas is their complementary function: TA provides clinicians with high level views of data while TBNs serve as a knowledge representation and reasoning tool under uncertainty, which is inherent in all clinical tasks. Key publications from these two areas of relevance to clinical systems, mainly circumscribed to the latest two decades, are reviewed and classified. TA techniques are compared on the basis of: (a) knowledge acquisition and representation for deriving TA concepts and (b) methodology for deriving basic and complex temporal abstractions. TBNs are compared on the basis of: (a) representation of time, (b) knowledge representation and acquisition, (c) inference methods and the computational demands of the network, and (d) their applications in medicine. The survey performs an extensive comparative analysis to illustrate the separate merits and limitations of various TA and TBN techniques used in clinical systems with the purpose of anticipating potential gains through an integration of the two techniques, thus leading to a

  6. Construction of a FASTBUS data-acquisition system for the ELAN experiment

    International Nuclear Information System (INIS)

    Noel, A.

    1992-06-01

    To use the FASTBUS data acquisition system for the ELAN experiment at the electron stretcher accelerator ELSA, a new software tool has been developed. This tool manages the parallel readout of CAMAC with a VME front-end processor and of FASTBUS with the special FASTBUS processor segment AEB. Both processors are connected by a 32-bit high-speed VSB data bus. (orig.) [de

  7. A simple low cost speed log interface for oceanographic data acquisition system

    Digital Repository Service at National Institute of Oceanography (India)

    Khedekar, V.D.; Phadte, G.M.

    A speed log interface is designed with parallel Binary Coded Decimal output. This design was mainly required for the oceanographic data acquisition system as an interface between the speed log and the computer. However, this can also be used as a...
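
As a rough illustration of what such an interface delivers to the computer, the sketch below decodes a parallel BCD word into a speed reading. The 12-bit, three-digit layout and the implied decimal place are assumptions for illustration; the record does not describe the actual wiring:

```python
# Hedged sketch of decoding a parallel Binary Coded Decimal output.
# Assumed layout: three 4-bit BCD digits on 12 parallel lines, most
# significant digit first, with one implied decimal place.

def bcd_to_decimal(bits):
    """bits: iterable of 0/1 grouped into 4-bit BCD digits, MSD first."""
    bits = list(bits)
    assert len(bits) % 4 == 0, "parallel BCD lines come in groups of four"
    value = 0
    for i in range(0, len(bits), 4):
        digit = bits[i] * 8 + bits[i+1] * 4 + bits[i+2] * 2 + bits[i+3]
        assert digit <= 9, "invalid BCD digit"
        value = value * 10 + digit
    return value

# 12 lines encoding the digits 1, 2, 5 -> 12.5 (assumed tenths).
lines = [0,0,0,1, 0,0,1,0, 0,1,0,1]
print(bcd_to_decimal(lines) / 10)   # 12.5
```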

  8. Expanded Understanding of IS/IT Related Challenges in Mergers and Acquisitions

    DEFF Research Database (Denmark)

    Toppenberg, Gustav

    2015-01-01

    Organizational Mergers and Acquisitions (M&As) occur at an increasingly frequent pace in today’s business life. Paralleling this development, M&As have increasingly attracted attention from the Information Systems (IS) domain. This emerging line of research has started from an understanding...

  9. Acquisition: Acquisition of Targets at the Missile Defense Agency

    National Research Council Canada - National Science Library

    Ugone, Mary L; Meling, John E; James, Harold C; Haynes, Christine L; Heller, Brad M; Pomietto, Kenneth M; Bobbio, Jaime; Chang, Bill; Pugh, Jacqueline

    2005-01-01

    Who Should Read This Report and Why? Missile Defense Agency program managers who are responsible for the acquisition and management of targets used to test the Ballistic Missile Defense System should be interested in this report...

  10. The Acquisition Experiences of Kazoil

    DEFF Research Database (Denmark)

    Minbaeva, Dana; Muratbekova-Touron, Maral

    2016-01-01

    This case describes two diverging post-acquisition experiences of KazOil, an oil drilling company in Kazakhstan, in the years after the dissolution of the Soviet Union. When the company was bought by the Canadian corporation Hydrocarbons Ltd in 1996, it was exposed to new human resource strategies...... among students that cultural distance is not the main determinant for the success of social integration mechanisms in post-acquisition situations. On the contrary, the relationship between integration instrument and integration success is also governed by contextual factors such as the attractiveness...... of the acquisition target or state of development of HRM in the target country....

  11. Data acquisition techniques using PC

    CERN Document Server

    Austerlitz, Howard

    1991-01-01

    Data Acquisition Techniques Using Personal Computers contains all the information required by a technical professional (engineer, scientist, technician) to implement a PC-based acquisition system. Including both basic tutorial information as well as some advanced topics, this work is suitable as a reference book for engineers or as a supplemental text for engineering students. It gives the reader enough understanding of the topics to implement a data acquisition system based on commercial products. A reader can alternatively learn how to custom build hardware or write his or her own software.

  12. Bootstrapping language acquisition.

    Science.gov (United States)

    Abend, Omri; Kwiatkowski, Tom; Smith, Nathaniel J; Goldwater, Sharon; Steedman, Mark

    2017-07-01

    The semantic bootstrapping hypothesis proposes that children acquire their native language through exposure to sentences of the language paired with structured representations of their meaning, whose component substructures can be associated with words and syntactic structures used to express these concepts. The child's task is then to learn a language-specific grammar and lexicon based on (probably contextually ambiguous, possibly somewhat noisy) pairs of sentences and their meaning representations (logical forms). Starting from these assumptions, we develop a Bayesian probabilistic account of semantically bootstrapped first-language acquisition in the child, based on techniques from computational parsing and interpretation of unrestricted text. Our learner jointly models (a) word learning: the mapping between components of the given sentential meaning and lexical words (or phrases) of the language, and (b) syntax learning: the projection of lexical elements onto sentences by universal construction-free syntactic rules. Using an incremental learning algorithm, we apply the model to a dataset of real syntactically complex child-directed utterances and (pseudo) logical forms, the latter including contextually plausible but irrelevant distractors. Taking the Eve section of the CHILDES corpus as input, the model simulates several well-documented phenomena from the developmental literature. In particular, the model exhibits syntactic bootstrapping effects (in which previously learned constructions facilitate the learning of novel words), sudden jumps in learning without explicit parameter setting, acceleration of word-learning (the "vocabulary spurt"), an initial bias favoring the learning of nouns over verbs, and one-shot learning of words and their meanings. The learner thus demonstrates how statistical learning over structured representations can provide a unified account for these seemingly disparate phenomena. Copyright © 2017 Elsevier B.V. All rights reserved.
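
The word-learning component (a) can be caricatured with a far simpler count-based cross-situational learner. The sketch below only shows how co-occurrence across ambiguous scenes disambiguates word-meaning mappings; it is not the paper's joint Bayesian model, and all names and data are invented:

```python
# Minimal cross-situational word learner: each utterance is paired with a
# distractor-laden set of candidate meanings, and co-occurrence counts
# gradually single out the right word-meaning mapping.
from collections import Counter, defaultdict

def learn(pairs):
    """pairs: list of (words, meanings); returns the best meaning per word."""
    counts = defaultdict(Counter)
    for words, meanings in pairs:
        for w in words:
            for m in meanings:          # every meaning in the scene is a candidate
                counts[w][m] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

data = [
    (["the", "dog"], {"DOG", "BALL"}),
    (["a", "dog", "runs"], {"DOG", "RUN"}),
    (["the", "ball"], {"BALL", "TABLE"}),
]
print(learn(data)["dog"])   # 'DOG' -- co-occurs in both dog scenes
```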

  13. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
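
The three parallelism forms mentioned, array syntax, the FORALL statement, and the WHERE construct, can be mimicked element-wise in plain Python as an analogy. This is not PDDP or Fortran, only an illustration of the semantics:

```python
# Element-wise analogs of the Fortran 90 forms PDDP parallelizes.

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]

# Array syntax:  c = a + b   (whole-array, element-wise operation)
c = [x + y for x, y in zip(a, b)]

# FORALL (i=1:n) d(i) = a(i) * i   (index-parallel assignment)
d = [a[i] * (i + 1) for i in range(len(a))]

# WHERE (a > 2.0) e = a  ELSEWHERE e = 0.0   (masked assignment)
e = [x if x > 2.0 else 0.0 for x in a]

print(c)  # [11.0, 22.0, 33.0, 44.0]
print(d)  # [1.0, 4.0, 9.0, 16.0]
print(e)  # [0.0, 0.0, 3.0, 4.0]
```

In each form the iterations are independent, which is what lets a compiler like PDDP distribute them across processors.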

  14. Parallelization of quantum molecular dynamics simulation code

    International Nuclear Information System (INIS)

    Kato, Kaori; Kunugi, Tomoaki; Shibahara, Masahiko; Kotake, Susumu

    1998-02-01

    A quantum molecular dynamics simulation code has been developed at the Kansai Research Establishment for analyzing the thermalization of photon energies in molecules and materials. The simulation code has been parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up has been obtained on both parallel computers by distributing groups of particles across processor units. By distributing the work not only by particle group but also over the fine-grained per-particle calculations, high parallelization performance is achieved on the Intel Paragon XP/S75. (author)

  15. Implementation and performance of parallelized elegant

    International Nuclear Information System (INIS)

    Wang, Y.; Borland, M.

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  16. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms, being parallel in nature, can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark gluon plasma, which has been theorized to have existed in very early stages of the evolution of the universe, by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and also simulate them at rates similar to those of data collection imposes enormous computational demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine and coarse grain approaches have been studied and evaluated. Depending on the application, the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single instruction and multiple instruction computers is also made, and possible applications of the single instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  17. Pediatric bowel MRI - accelerated parallel imaging in a single breathhold

    International Nuclear Information System (INIS)

    Hohl, C.; Honnef, D.; Krombach, G.; Muehlenbruch, G.; Guenther, R.W.; Niendorf, T.; Ocklenburg, C.; Wenzl, T.G.

    2008-01-01

    Purpose: to compare highly accelerated parallel MRI of the bowel with conventional balanced FFE sequences in children with inflammatory bowel disease (IBD). Materials and methods: 20 children with suspected or proven IBD underwent MRI using a 1.5 T scanner after oral administration of 700-1000 ml of a mannitol solution and an additional enema. The examination started with a 4-channel receiver coil and a conventional balanced FFE sequence in axial (2.5 s/slice) and coronal (4.7 s/slice) planes. Afterwards, highly accelerated (R = 5) balanced FFE sequences in axial (0.5 s/slice) and coronal (0.9 s/slice) planes were performed using a 32-channel receiver coil and parallel imaging (SENSE). Both receiver coils achieved a resolution of 0.88 x 0.88 mm with a slice thickness of 5 mm (coronal) and 6 mm (axial), respectively. Using the conventional imaging technique, 4 - 8 breathholds were needed to cover the whole abdomen, while parallel imaging shortened the acquisition time to a single breathhold. Two blinded radiologists performed a consensus reading of the images regarding pathological findings, image quality, susceptibility to artifacts and bowel distension. The results for both coil systems were compared using the kappa (κ) coefficient; differences in the susceptibility to artifacts were checked with the Wilcoxon signed rank test. Statistical significance was assumed for p = 0.05. Results: 13 of the 20 children had inflammatory bowel wall changes at the time of the examination, which could be correctly diagnosed with both coil systems in 12 of 13 cases (92%). The comparison of both coil systems showed good agreement for pathological findings (κ = 0.74 - 1.0) and for image quality. Using parallel imaging, significantly more artifacts were observed (κ = 0.47)

  18. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies of 95.1% for the vector parallel calculation on 16 processors with a 1152x1152 grid and 88.6% for the scalar parallel calculation on 100 processors with an 800x800 grid are obtained. Performance models are developed to analyze the performance of the LB codes. Our performance models show that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability while keeping the memory usage per processor element at its maximum. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method has in general a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author)
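
The communication drawback of 1-D decomposition noted at the end of the abstract can be quantified with a back-of-the-envelope sketch. The halo-counting functions below are illustrative, not taken from the paper:

```python
# Count halo (boundary) cells exchanged per processor for an N x N grid:
# a 1-D strip exchanges O(N) cells regardless of P, while a 2-D tile
# exchanges O(N/sqrt(P)), so 2-D decomposition communicates less as P grows.
import math

def halo_1d(n, p):
    # Each interior strip of n/p rows exchanges its two boundary rows of n cells.
    return 2 * n

def halo_2d(n, p):
    # Each of the sqrt(p) x sqrt(p) tiles exchanges its four edges.
    side = n // int(math.isqrt(p))
    return 4 * side

n, p = 1152, 16   # the paper's vector-parallel grid and processor count
print(halo_1d(n, p))  # 2304
print(halo_2d(n, p))  # 1152
```

The 1-D layout is nevertheless what allows long vectorizable inner loops, which is the trade-off the authors exploit on the VPP500.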

  20. Optimizing Temporal Queries

    DEFF Research Database (Denmark)

    Toman, David; Bowman, Ivan Thomas

    2003-01-01

    Recent research in the area of temporal databases has proposed a number of query languages that vary in their expressive power and the semantics they provide to users. These query languages represent a spectrum of solutions to the tension between clean semantics and efficient evaluation. Often, t...

  1. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Palamidessi, Catuscia; Valencia, Frank Dan

    2002-01-01

    The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied...

  2. Temporal Photon Differentials

    DEFF Research Database (Denmark)

    Schjøth, Lars; Frisvad, Jeppe Revall; Erleben, Kenny

    2010-01-01

    The finite frame rate also used in computer-animated films is the cause of adverse temporal aliasing effects. Most noticeable of these is a stroboscopic effect that is seen as intermittent movement of fast-moving illumination. This effect can be mitigated using non-zero shutter times, effectively...

  3. Information and Temporality

    Directory of Open Access Journals (Sweden)

    Christian Flender

    2016-09-01

    Being able to give reasons for what the world is and how it works is one of the defining characteristics of modernity. Mathematical reason and empirical observation brought science and engineering to unprecedented success. However, modernity has reached a post-state where an instrumental view of technology needs revision with reasonable arguments and evidence, i.e. without falling back into superstition and mysticism. Instrumentally, technology bears the potential to ease and to harm. Easing and harming cannot be controlled in the way that the initial development of technology is a controlled exercise for a specific, mostly easing, purpose. Therefore, a revised understanding of information technology is proposed based upon mathematical concepts and intuitions as developed in quantum mechanics. Quantum mechanics offers unequaled opportunities because it raises foundational questions in a precise form. Beyond instrumentalism, it makes it possible to raise the question of essences as that which remains through time what it is. The essence of information technology is acausality. The time of acausality is temporality. Temporality is not a concept or a category. It is not epistemological. As an existential, and thus more comprehensive and fundamental than a concept or a category, temporality is ontological; it does not simply have ontic properties. Rather, it exhibits general essences. Datability, significance, spannedness and openness are general essences of equiprimordial time (temporality).

  4. Temporal logic motion planning

    CSIR Research Space (South Africa)

    Seotsanyana, M

    2010-01-01

    In this paper, a critical review of temporal logic motion planning is presented. The review paper aims to address the following problems: (a) In a realistic situation, the motion planning problem is carried out in real-time, in a dynamic, uncertain...

  5. Experimental temporal quantum steering

    Czech Academy of Sciences Publication Activity Database

    Bartkiewicz, K.; Černoch, Antonín; Lemr, K.; Miranowicz, A.; Nori, F.

    2016-01-01

    Vol. 6, Nov (2016), 1-8, article No. 38076. ISSN 2045-2322 R&D Projects: GA ČR GAP205/12/0382 Institutional support: RVO:68378271 Keywords: temporal quantum steering * EPR steering Subject RIV: BH - Optics, Masers, Lasers Impact factor: 4.259, year: 2016

  6. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  7. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.

  8. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...... flattening necessitates all sub-computations to materialize at the same time. For example, naive n by n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications. For large values of n, this is completely unacceptable. We address the problem...
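
The space problem described for naive n-by-n matrix multiplication can be sketched in plain Python, standing in for NESL/Accelerate: a streaming evaluation folds the n^3 scalar products into the n^2 results as they are produced instead of materializing them all at once:

```python
# Streaming-style matrix multiplication: sum() consumes a generator
# lazily, so at most one of the n scalar products per output element is
# alive at a time, rather than all n^3 products materializing together.

def matmul_streaming(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n))
             for j in range(n)]
            for i in range(n)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul_streaming(a, b))  # [[19, 22], [43, 50]]
```

This mirrors the thesis's piecewise strategy in miniature: the bulk operation still exposes all the parallelism, but the runtime is free to evaluate it anywhere between fully sequential and fully parallel.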

  9. Evolutionary Acquisition and Spiral Development Tutorial

    National Research Council Canada - National Science Library

    Hantos, P

    2005-01-01

    .... NSS Acquisition Policy 03-01 provided some space-oriented customization and, similarly to the original DOD directives, also positioned Evolutionary Acquisition and Spiral Development as preferred...

  10. Communication, Technology, Temporality

    Directory of Open Access Journals (Sweden)

    Mark A. Martinez

    2012-08-01

    This paper proposes a media studies that foregrounds technological objects as communicative and historical agents. Specifically, I take the digital computer as a powerful catalyst of crises in communication theories and certain key features of modernity. Finally, the computer is the motor of “New Media” which is at once a set of technologies, a historical epoch, and a field of knowledge. As such the computer shapes “the new” and “the future” as History pushes its origins further into the past and its convergent quality pushes its future as a predominant medium. As the treatment of information and interface suggests, communication theories observe computers, and technologies generally, for the mediated languages they either afford or foreclose to us. My project describes the figures information and interface for the different ways they can be thought of as aspects of communication. I treat information not as semantic meaning, formal or discursive language, but rather as a physical organism. Similarly, an interface is not a relationship between a screen and a human visual intelligence, but is instead a reciprocal, affective and physical process of contact. I illustrate that historically there have been conceptions of information and interface complementary to mine, fleeting as they have been in the face of a dominant temporality of mediation. I begin with a theoretically informed approach to media history, and extend it to a new theory of communication. In doing so I discuss a model of time common to popular, scientific, and critical conceptions of media technologies, especially in theories of computer technology. This is a predominant model with particular rules of temporal change and causality for thinking about mediation, and it limits the conditions of possibility for knowledge production about communication. I suggest a new model of time as integral to any event of observation and analysis, and that human mediation does not exhaust the

  11. Platform attitude data acquisition system

    Digital Repository Service at National Institute of Oceanography (India)

    Afzulpurkar, S.

    A system for automatic acquisition of underwater platform attitude data has been designed, developed and tested in the laboratory. This is a micro controller based system interfacing dual axis inclinometer, high-resolution digital compass...

  12. Portable Data Acquisition System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Armstrong researchers have developed a portable data acquisition system (PDAT) that can be easily transported and set up at remote locations to display and archive...

  13. New KENS data acquisition system

    International Nuclear Information System (INIS)

    Arai, M.; Furusaka, M.; Satoh, S.

    1989-01-01

    In this report, the authors discuss a data acquisition system, KENSnet, which has been newly introduced at the KENS facility. The criteria for the data acquisition system were about 1 MIPS of CPU speed and 150 Mbytes of storage capacity per computer per spectrometer. VAX computers were chosen, with their proprietary operating system, VMS. The VAX computers are connected by a DECnet network over Ethernet. Front-end computers, the Apple Macintosh Plus and Macintosh II, were chosen for their user-friendly operation and intelligence. New CAMAC-based data acquisition electronics were developed. The data acquisition control program (ICP) and the general data analysis program (Genie) were both developed at ISIS and have been installed. 2 refs., 3 figs., 1 tab

  14. Schizophrenia and second language acquisition.

    Science.gov (United States)

    Bersudsky, Yuly; Fine, Jonathan; Gorjaltsan, Igor; Chen, Osnat; Walters, Joel

    2005-05-01

    Language acquisition involves brain processes that can be affected by lesions or dysfunctions in several brain systems and second language acquisition may depend on different brain substrates than first language acquisition in childhood. A total of 16 Russian immigrants to Israel, 8 diagnosed schizophrenics and 8 healthy immigrants, were compared. The primary data for this study were collected via sociolinguistic interviews. The two groups use language and learn language in very much the same way. Only exophoric reference and blocking revealed meaningful differences between the schizophrenics and healthy counterparts. This does not mean of course that schizophrenia does not induce language abnormalities. Our study focuses on those aspects of language that are typically difficult to acquire in second language acquisition. Despite the cognitive compromises in schizophrenia and the manifest atypicalities in language of speakers with schizophrenia, the process of acquiring a second language seems relatively unaffected by schizophrenia.

  15. Massively parallel diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Sandusky, John V.; Pitts, Todd A.

    2017-09-05

    Diffuse optical tomography systems and methods are described herein. In a general embodiment, the diffuse optical tomography system comprises a plurality of sensor heads, the plurality of sensor heads comprising respective optical emitter systems and respective sensor systems. A sensor head in the plurality of sensor heads is caused to act as an illuminator, such that its optical emitter system transmits a transillumination beam towards a portion of a sample. Other sensor heads in the plurality of sensor heads act as observers, detecting portions of the transillumination beam that radiate from the sample in the fields of view of the respective sensor systems of the other sensor heads. Thus, sensor heads in the plurality of sensor heads generate sensor data in parallel.

  16. Embodied and Distributed Parallel DJing.

    Science.gov (United States)

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or groups of persons with special needs performing in traditional ways. The latter might be people with disabilities, being musicians playing traditional instruments, or actors playing theatre. In this paper we focus on the innovative potential of including people with special needs, when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments for Empowering Multi-Sensorial Things.

  17. Device for balancing parallel strings

    Science.gov (United States)

    Mashikian, Matthew S.

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  18. A review of snapshot multidimensional optical imaging: Measuring photon tags in parallel

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Liang, E-mail: gaol@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois at Urbana–Champaign, 306 N. Wright St., Urbana, IL 61801 (United States); Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana–Champaign, 405 North Mathews Avenue, Urbana, IL 61801 (United States); Wang, Lihong V., E-mail: lhwang@wustl.edu [Optical imaging laboratory, Department of Biomedical Engineering, Washington University in St. Louis, One Brookings Dr., MO, 63130 (United States)

    2016-02-29

    Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons’ spatial coordinates, emittance angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or by parallel acquisition. Compared with scanning-based imagers, parallel acquisition, also dubbed snapshot imaging, has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally we discuss their state-of-the-art implementations and applications.

  19. Linear parallel processing machines I

    Energy Technology Data Exchange (ETDEWEB)

    Von Kunze, M

    1984-01-01

    As is well known, non-context-free grammars for generating formal languages happen to be of a certain intrinsic computational power that presents serious difficulties to efficient parsing algorithms as well as to the development of an algebraic theory of context-sensitive languages. In this paper a framework is given for the investigation of the computational power of formal grammars, in order to start a thorough analysis of grammars consisting of derivation rules of the form aB --> A_1 ... A_n b_1 ... b_m. These grammars may be thought of as automata by means of parallel processing, if one considers the variables as operators acting on the terminals while reading them right-to-left. This kind of automaton and its 2-dimensional programming language prove to be useful by allowing a concise linear-time algorithm for integer multiplication. Linear parallel processing machines (LP-machines), which are, in their general form, equivalent to Turing machines, include finite automata and pushdown automata (with states encoded) as special cases. Bounded LP-machines yield deterministic accepting automata for nondeterministic context-free languages, and they define an interesting class of context-sensitive languages. A characterization of this class in terms of generating grammars is established by using derivation trees with crossings as a helpful tool. From the algebraic point of view, deterministic LP-machines are effectively represented semigroups with distinguished subsets. Concerning the dualism between generating and accepting devices of formal languages within the algebraic setting, the concept of accepting automata turns out to reduce essentially to embeddability in an effectively represented extension monoid, even in the classical cases.

  20. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
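
    When no spatial organizing principle exists, one common fallback is to partition entities across processors by hashing their identifiers, which spreads load roughly evenly. The sketch below illustrates that idea only; it is not the actual Parallel Particle Data Model API, and the entity names are invented.

```python
# Illustrative only: hash-based decomposition of entities onto
# processor ranks when there is no spatial locality to exploit.

from collections import defaultdict

def partition(entity_ids, num_procs):
    """Map each entity id to a processor rank by hashing."""
    buckets = defaultdict(list)
    for eid in entity_ids:
        buckets[hash(eid) % num_procs].append(eid)
    return dict(buckets)

# 1000 hypothetical agents spread over 8 ranks
parts = partition([f"agent{i}" for i in range(1000)], 8)
```

The trade-off, as the report notes, is that hash partitioning balances load but ignores communication patterns between causally linked entities, which is exactly what makes this problem class hard to decompose.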

  1. Processes Asunder: Acquisition & Planning Misfits

    Science.gov (United States)

    2009-03-26

    Establishing six Business Enterprise Priorities (BEPs) to focus the Department’s business transformation efforts, which now guide DoD investment decisions...three phases which look very much like Milestones A, B, and C of the previously existing Life Cycle Management Framework. With this obvious redundancy...February 2002). 30 6 Defense Acquisition University, “Integrated Defense Acquisition, Technology, & Logistics Life Cycle Management Framework,” version 5.2

  2. "Healing is a Done Deal": Temporality and Metabolic Healing Among Evangelical Christians in Samoa.

    Science.gov (United States)

    Hardin, Jessica

    2016-01-01

    Drawing on fieldwork in independent Samoa, in this article, I analyze the temporal dimensions of evangelical Christian healing of metabolic disorders. I explore how those suffering with metabolic disorders draw from multiple time-based notions of healing, drawing attention to the limits of biomedicine in contrast with the effectiveness of Divine healing. By simultaneously engaging evangelical and biomedical temporalities, I argue that evangelical Christians create wellness despite sickness and, in turn, re-signify chronic suffering as a long-term process of Christian healing. Positioning biomedical temporality and evangelical temporality as parallel yet distinctive ways of practicing healing, therefore, influences health care choices.

  3. Experience from Tore Supra acquisition system and evolutions

    International Nuclear Information System (INIS)

    Guillerminet, B.; Buravand, Y.; Chatelier, E.; Leroux, F.

    2004-01-01

    The Tore Supra tokamak has been upgraded to explore long duration plasma discharges up to 1000 s. Since summer 2001, the acquisition system has operated in continuous mode, apart from the data processing, which is still done after the pulse. In the first part, we explore a few solutions to process the data continuously during the pulse, based on parallel processing on a Linux farm and then on a CONDOR system. The second part is devoted to the Web service exposing the Tore Supra operation. In the last part, the VME acquisition system has been redesigned to keep up with the high data rates required by a few diagnostics. The workflow is now distributed among a few computers. Finally, we give the current status of the realisation and the future planning

  4. Multiplexed capillary microfluidic immunoassay with smartphone data acquisition for parallel mycotoxin detection.

    Science.gov (United States)

    Machado, Jessica M D; Soares, Ruben R G; Chu, Virginia; Conde, João P

    2018-01-15

    The field of microfluidics holds great promise for the development of simple and portable lab-on-a-chip systems. The use of capillarity as a means of fluidic manipulation in lab-on-a-chip systems can potentially reduce the complexity of the instrumentation and allow the development of user-friendly devices for point-of-need analyses. In this work, a PDMS microchannel-based, colorimetric, autonomous capillary chip provides a multiplexed and semi-quantitative immunodetection assay. Results are acquired using a standard smartphone camera and analyzed with a simple gray scale quantification procedure. The performance of this device was tested for the simultaneous detection of the mycotoxins ochratoxin A (OTA), aflatoxin B1 (AFB1) and deoxynivalenol (DON), which are strictly regulated food contaminants with severe detrimental effects on human and animal health. The multiplexed assay was performed within approximately 10 min, and the achieved sensitivities of <40, 0.1-0.2 and <10 ng/mL for OTA, AFB1 and DON, respectively, fall within the majority of currently enforced regulatory and/or recommended limits. Furthermore, to assess the potential of the device to analyze real samples, the immunoassay was successfully validated for these 3 mycotoxins in a corn-based feed sample after a simple sample preparation procedure. Copyright © 2017 Elsevier B.V. All rights reserved.
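
    The gray-scale quantification step mentioned above can be sketched as averaging pixel intensity over a detection-spot region and comparing it with a calibration value. The toy image, region, and values below are invented for illustration; the paper's actual calibration procedure is not reproduced here.

```python
# Illustrative gray-scale readout: mean intensity inside a spot region
# of a small toy "image" (nested lists of 0-255 gray values).

def mean_gray(image, region):
    """Mean intensity over region = (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = region
    vals = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

# 4x4 toy image with a bright spot in the top-left 2x2 corner
img = [[200, 210, 40, 42],
       [205, 215, 41, 43],
       [40, 41, 39, 40],
       [42, 40, 41, 38]]
signal = mean_gray(img, (0, 2, 0, 2))      # bright spot: 207.5
background = mean_gray(img, (2, 4, 2, 4))  # dark background region
```

A semi-quantitative assay would then map `signal` relative to `background` onto concentration ranges via a calibration curve.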

  5. Performance Confirmation Data Acquisition System

    International Nuclear Information System (INIS)

    D.W. Markman

    2000-01-01

    The purpose of this analysis is to identify and analyze concepts for the acquisition of data in support of the Performance Confirmation (PC) program at the potential subsurface nuclear waste repository at Yucca Mountain. The scope and primary objectives of this analysis are to: (1) Review the criteria for design as presented in the Performance Confirmation Data Acquisition/Monitoring System Description Document, by way of the Input Transmittal, Performance Confirmation Input Criteria (CRWMS M and O 1999c). (2) Identify and describe existing and potential new trends in data acquisition system software and hardware that would support the PC plan. The data acquisition software and hardware will support the field instruments and equipment that will be installed for the observation and perimeter drift borehole monitoring, and in-situ monitoring within the emplacement drifts. The exhaust air monitoring requirements will be supported by a data communication network interface with the ventilation monitoring system database. (3) Identify the concepts and features that a data acquisition system should have in order to support the PC process and its activities. (4) Based on PC monitoring needs and available technologies, further develop concepts of a potential data acquisition system network in support of the PC program and the Site Recommendation and License Application

  6. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL. Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  7. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
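
    The private/reduction/shared variable classification that CAPO encodes as OpenMP directives can be mimicked in plain Python: each worker keeps a private partial accumulator over its loop chunk, and the partials are combined in a final reduction. This is only a hedged analogy, not CAPO or OpenMP output.

```python
# Sketch of a parallel reduction loop: `s` is "private" to each worker,
# `data` is "shared" (read-only), and the final sum is the "reduction".

from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    s = 0.0              # private accumulator for this worker
    for x in chunk:
        s += x * x
    return s

data = list(range(10000))
chunks = [data[i::4] for i in range(4)]      # split the loop into 4 chunks
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))  # reduction over partials
```

The hard part that CAPO automates is proving, via interprocedural data dependence analysis, that such a classification of the loop's variables is actually safe.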

  8. Sparse Parallel MRI Based on Accelerated Operator Splitting Schemes.

    Science.gov (United States)

    Cai, Nian; Xie, Weisi; Su, Zhenghang; Wang, Shanshan; Liang, Dong

    2016-01-01

    Recently, the sparsity which is implicit in MR images has been successfully exploited for fast MR imaging with incomplete acquisitions. In this paper, two novel algorithms are proposed to solve the sparse parallel MR imaging problem, which consists of l1 regularization and fidelity terms. The two algorithms combine forward-backward operator splitting and Barzilai-Borwein schemes. Theoretically, the presented algorithms overcome the nondifferentiable property of the l1 regularization term. Meanwhile, they are able to treat a general matrix operator that may not be diagonalized by fast Fourier transform and to ensure that a well-conditioned optimization system of equations is simply solved. In addition, we build connections between the proposed algorithms and the state-of-the-art existing methods and prove their convergence with a constant stepsize in the Appendix. Numerical results and comparisons with the advanced methods demonstrate the efficiency of the proposed algorithms.

  9. Parallelization of Rocket Engine Simulator Software (PRESS)

    Science.gov (United States)

    Cezzar, Ruknet

    1998-01-01

    We have outlined our work in the last half of the funding period. We have shown how a demo package for RESSAP using MPI can be done. However, we also mentioned the difficulties with the UNIX platform. We have reiterated some of the suggestions made during the presentation of the progress at the Fourth Annual HBCU Conference. Although we have discussed, in some detail, how the TURBDES/PUMPDES software can be run in parallel using MPI, at present, we are unable to experiment any further with either MPI or PVM. Due to X windows not being implemented, we are also not able to experiment further with XPVM, which, it will be recalled, has a nice GUI interface. There are also some concerns, on our part, about MPI being an appropriate tool. The best thing about MPI is that it is public domain. Although plenty of documentation exists on the intricacies of using MPI, little information is available on its actual implementations. Other than very typical, somewhat contrived examples, such as the Jacobi algorithm for solving Laplace's equation, there are few examples which can readily be applied to real situations, such as ours. In effect, the review of the literature on both MPI and PVM, and there is a lot, indicates something similar to the enormous effort which was spent on LISP and LISP-like languages as tools for artificial intelligence research. During the development of a book on programming languages [12], when we searched the literature for very simple examples like taking averages, reading and writing records, multiplying matrices, etc., we could hardly find any! Yet, so much was said and done on that topic in academic circles. It appears that we faced the same problem with MPI, where despite significant documentation, we could not find even a simple example which supports coarse-grain parallelism involving only a few processes.
    From the foregoing, it appears that a new direction may be required for more productive research during the extension period (10/19/98 - 10

  10. ADHD and temporality

    DEFF Research Database (Denmark)

    Nielsen, Mikka

    According to the official diagnostic manual, ADHD is defined by symptoms of inattention, hyperactivity, and impulsivity, and patterns of behaviour are characterized as failure to pay attention to details, excessive talking, fidgeting, or inability to remain seated in appropriate situations (DSM-5). In this paper, however, I will ask if we can understand what we call ADHD in a different way than through the symptom descriptions and will advocate for a complementary, phenomenological understanding of ADHD as a certain being in the world – more specifically as a matter of a phenomenological difference in temporal experience and/or rhythm. Inspired by both psychiatry’s experiments with people diagnosed with ADHD and their assessment of time and phenomenological perspectives on mental disorders and temporal disorientation, I explore the experience of ADHD as a disruption in the phenomenological experience...

  11. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Valencia Posso, Frank Dan

    2002-01-01

    The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied reflect the reactive interactions between concurrent constraint processes and their environment, as well as internal interactions between individual processes. Relationships between the suggested notions are studied, and they are all proved to be decidable for a substantial fragment of the calculus...

  12. Performance assessment of the SIMFAP parallel cluster at IFIN-HH Bucharest

    International Nuclear Information System (INIS)

    Adam, Gh.; Adam, S.; Ayriyan, A.; Dushanov, E.; Hayryan, E.; Korenkov, V.; Lutsenko, A.; Mitsyn, V.; Sapozhnikova, T.; Sapozhnikov, A; Streltsova, O.; Buzatu, F.; Dulea, M.; Vasile, I.; Sima, A.; Visan, C.; Busa, J.; Pokorny, I.

    2008-01-01

    Performance assessment and case study outputs of the parallel SIMFAP cluster at IFIN-HH Bucharest point to its effective and reliable operation. A comparison with results on the supercomputing system in LIT-JINR Dubna adds insight on resource allocation for problem solving by parallel computing. The solution of models asking for very large numbers of knots in the discretization mesh needs the migration to high performance computing based on parallel cluster architectures. The acquisition of ready-to-use parallel computing facilities being beyond limited budgetary resources, the solution at IFIN-HH was to buy the hardware and the inter-processor network, and to implement by own efforts the open software concerning both the operating system and the parallel computing standard. The present paper provides a report demonstrating the successful solution of these tasks. The implementation of the well-known HPL (High Performance LINPACK) Benchmark points to the effective and reliable operation of the cluster. The comparison of HPL outputs obtained on parallel clusters of different magnitudes shows that there is an optimum range of the order N of the linear algebraic system over which a given parallel cluster provides optimum parallel solutions. For the SIMFAP cluster, this range can be inferred to correspond to about 1 to 2 × 10^4 linear algebraic equations. For an algorithm of polynomial complexity N^α, the task sharing among p processors within a parallel solution mainly follows an (N/p)^α behaviour under peak performance achievement. Thus, while the problem complexity remains the same, a substantial decrease of the coefficient of the leading order of the polynomial complexity is achieved. (authors)
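
    The scaling relation quoted above can be put into a back-of-the-envelope form: for an algorithm of polynomial complexity N**alpha, the abstract's model takes per-processor work to follow (N/p)**alpha at peak performance. The numbers below are illustrative only and describe that idealized model, not measured HPL results.

```python
# Idealized task-sharing model from the abstract: work per processor
# scales as (N/p)**alpha. Constants and sizes are invented.

def ideal_time(N, p, alpha, c=1.0):
    """Per-processor work under the (N/p)**alpha model."""
    return c * (N / p) ** alpha

t1 = ideal_time(20000, 1, 3)    # serial solve of N = 2e4 equations, alpha = 3
t16 = ideal_time(20000, 16, 3)  # same problem shared among 16 processors
speedup = t1 / t16              # equals p**alpha in this idealized model
```

In practice, communication and the finite optimum range of N noted in the abstract keep real clusters well below this idealization.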

  13. Parallel processing for fluid dynamics applications

    International Nuclear Information System (INIS)

    Johnson, G.M.

    1989-01-01

    The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several examples are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices

  14. Design considerations for parallel graphics libraries

    Science.gov (United States)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  15. 48 CFR 970.2301 - Sustainable acquisition.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Sustainable acquisition. 970.2301 Section 970.2301 Federal Acquisition Regulations System DEPARTMENT OF ENERGY AGENCY..., Renewable Energy Technologies, Occupational Safety and Drug-Free Work Place 970.2301 Sustainable acquisition...

  16. Program design of data acquisition in Windows

    International Nuclear Information System (INIS)

    Cai Jianxin; Yan Huawen

    2004-01-01

    Several methods for the design of data acquisition programs based on Microsoft Windows are introduced, and their respective advantages and disadvantages are analyzed in detail. At the same time, the data acquisition modes applicable to each method are also pointed out. This makes it convenient for programmers to develop data acquisition systems. (authors)

  17. Parallelism at Cern: real-time and off-line applications in the GP-MIMD2 project

    International Nuclear Information System (INIS)

    Calafiura, P.

    1997-01-01

    A wide range of general purpose high-energy physics applications, ranging from Monte Carlo simulation to data acquisition, from interactive data analysis to on-line filtering, have been ported, or developed, and run in parallel on the large IBM SP-2 and Meiko CS-2 multi-processor machines at CERN. The ESPRIT project GP-MIMD2 has been a catalyst for the interest in parallel computing at CERN. The project provided the 128-processor Meiko CS-2 system that is now successfully integrated in the CERN computing environment. The CERN experiment NA48 has been involved in the GP-MIMD2 project since the beginning. NA48 physicists run, as part of their day-to-day work, simulation and analysis programs parallelized using the message passing interface MPI. The CS-2 is also a vital component of the experiment data acquisition system and will be used to calibrate in real-time the 13000-channel liquid krypton calorimeter. (orig.)

  18. Three-dimensional SPECT [single photon emission computed tomography] reconstruction of combined cone beam and parallel beam data

    International Nuclear Information System (INIS)

    Jaszczak, R.J.; Jianying Li; Huili Wang; Coleman, R.E.

    1992-01-01

    Single photon emission computed tomography (SPECT) using cone beam (CB) collimation exhibits increased sensitivity compared with acquisition geometries using parallel (P) hole collimation. However, CB collimation has a smaller field-of-view which may result in truncated projections and image artifacts. A primary objective of this work is to investigate maximum likelihood-expectation maximization (ML-EM) methods to reconstruct simultaneously acquired parallel and cone beam (P and CB) SPECT data. Simultaneous P and CB acquisition can be performed with commercially available triple camera systems by using two cone-beam collimators and a single parallel-hole collimator. The loss in overall sensitivity (relative to the use of three CB collimators) is about 15 to 20%. The authors have developed three methods to combine P and CB data using modified ML-EM algorithms. (author)
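
    The ML-EM building block behind such combined reconstructions is a multiplicative update applied once per iteration. The sketch below uses an invented toy system matrix; a real implementation of the combined P and CB approach would stack the parallel-beam and cone-beam projectors into one forward model, which the modified algorithms in the paper handle in more refined ways.

```python
import numpy as np

def mlem_step(x, A, y, eps=1e-12):
    """One ML-EM iteration: x <- x * A^T(y / (A x)) / (A^T 1)."""
    ratio = y / np.maximum(A @ x, eps)                        # measured / predicted
    return x * (A.T @ ratio) / np.maximum(A.T @ np.ones_like(y), eps)

# Toy emission problem: nonnegative projector, noiseless projections
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(60, 30))   # toy system matrix
x_true = rng.uniform(0.5, 2.0, size=30)    # toy activity image
y = A @ x_true                             # noiseless projection data
x = np.ones(30)                            # nonnegative initial estimate
for _ in range(500):
    x = mlem_step(x, A, y)
```

The multiplicative form keeps the estimate nonnegative automatically, one reason ML-EM suits emission tomography where activities cannot be negative.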

  19. Synchronization Techniques in Parallel Discrete Event Simulation

    OpenAIRE

    Lindén, Jonatan

    2018-01-01

    Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...

  20. Parallel processing from applications to systems

    CERN Document Server

    Moldovan, Dan I

    1993-01-01

    This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla

  1. Parallel processing for artificial intelligence 1

    CERN Document Server

    Kanal, LN; Kumar, V; Suttner, CB

    1994-01-01

    Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering and data mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discus

  2. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.

  3. Refinement of Parallel and Reactive Programs

    OpenAIRE

    Back, R. J. R.

    1992-01-01

    We show how to apply the refinement calculus to stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement c...

  4. Parallel Prediction of Stock Volatility

    Directory of Open Access Journals (Sweden)

    Priscilla Jenq

    2017-10-01

    Full Text Available Volatility is a measurement of the risk of financial products. A stock will hit new highs and lows over time, and if these highs and lows fluctuate wildly, it is considered a highly volatile stock. Such a stock is considered riskier than a stock whose volatility is low. Although highly volatile stocks are riskier, the returns that they generate for investors can be quite high. Of course, with a riskier stock also comes the chance of losing money and yielding negative returns. In this project, we will use historic stock data to help us forecast volatility. Since the financial industry usually uses the S&P 500 as the indicator of the market, we will use the S&P 500 as a benchmark to compute the risk. We will also use artificial neural networks as a tool to predict volatilities for a specific time frame that will be set when we configure this neural network. There have been reports that neural networks with different numbers of layers and different numbers of hidden nodes may generate varying results. In fact, we may be able to find the best configuration of a neural network to compute volatilities. We will implement this system using the parallel approach. The system can be used as a tool for investors to allocate and hedge assets.
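
    A minimal sketch of the quantity such a system would forecast is the annualized standard deviation of daily log returns over a window. The price series below is synthetic and the windowing is simplified; the paper's neural-network predictor is not reproduced here.

```python
import math
import statistics

def log_returns(prices):
    """Daily log returns from a price series."""
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

def annualized_vol(prices, trading_days=252):
    """Annualized historical volatility (sample stdev of log returns)."""
    r = log_returns(prices)
    return statistics.stdev(r) * math.sqrt(trading_days)

# Synthetic, wildly swinging price series -> high volatility
prices = [100, 101, 99, 102, 98, 103, 97, 104]
vol = annualized_vol(prices)
```

A benchmark-relative risk measure would compute the same statistic for an index series and compare or regress the two.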

  5. Vectoring of parallel synthetic jets

    Science.gov (United States)

    Berk, Tim; Ganapathisubramani, Bharathram; Gomit, Guillaume

    2015-11-01

    A pair of parallel synthetic jets can be vectored by applying a phase difference between the two driving signals. The resulting jet can be merged or bifurcated and either vectored towards the actuator leading in phase or the actuator lagging in phase. In the present study, the influence of phase difference and Strouhal number on the vectoring behaviour is examined experimentally. Phase-locked vorticity fields, measured using Particle Image Velocimetry (PIV), are used to track vortex pairs. The physical mechanisms that explain the diversity in vectoring behaviour are observed based on the vortex trajectories. For a fixed phase difference, the vectoring behaviour is shown to be primarily influenced by pinch-off time of vortex rings generated by the synthetic jets. Beyond a certain formation number, the pinch-off timescale becomes invariant. In this region, the vectoring behaviour is determined by the distance between subsequent vortex rings. We acknowledge the financial support from the European Research Council (ERC grant agreement no. 277472).

  6. A Soft Parallel Kinematic Mechanism.

    Science.gov (United States)

    White, Edward L; Case, Jennifer C; Kramer-Bottiglio, Rebecca

    2018-02-01

    In this article, we describe a novel holonomic soft robotic structure based on a parallel kinematic mechanism. The design is based on the Stewart platform, which uses six sensors and actuators to achieve full six-degree-of-freedom motion. Our design is much less complex than a traditional platform, since it replaces the 12 spherical and universal joints found in a traditional Stewart platform with a single highly deformable elastomer body and flexible actuators. This reduces the total number of parts in the system and simplifies the assembly process. Actuation is achieved through coiled-shape memory alloy actuators. State observation and feedback is accomplished through the use of capacitive elastomer strain gauges. The main structural element is an elastomer joint that provides antagonistic force. We report the response of the actuators and sensors individually, then report the response of the complete assembly. We show that the completed robotic system is able to achieve full position control, and we discuss the limitations associated with using responsive material actuators. We believe that control demonstrated on a single body in this work could be extended to chains of such bodies to create complex soft robots.
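
    The kinematic core of any Stewart-type platform is the inverse-kinematics step: given a desired platform pose, each leg length is the distance between its base and platform anchor points. The sketch below uses invented anchor coordinates and, for brevity, a translation-only pose; the soft robot described above additionally replaces rigid joints with a deformable elastomer body.

```python
import math

def leg_lengths(base_pts, plat_pts, translation):
    """Leg lengths for a translated (unrotated) platform pose."""
    tx, ty, tz = translation
    lengths = []
    for (bx, by, bz), (px, py, pz) in zip(base_pts, plat_pts):
        dx, dy, dz = px + tx - bx, py + ty - by, pz + tz - bz
        lengths.append(math.sqrt(dx * dx + dy * dy + dz * dz))
    return lengths

# Six anchors on a unit circle (base) and a smaller circle (platform)
angles = [i * math.pi / 3 for i in range(6)]
base = [(math.cos(a), math.sin(a), 0.0) for a in angles]
plat = [(0.5 * math.cos(a), 0.5 * math.sin(a), 0.0) for a in angles]
home = leg_lengths(base, plat, (0.0, 0.0, 1.0))  # symmetric home pose
```

In the soft version, the controller would command coiled-SMA actuator contractions toward these target lengths while the capacitive strain gauges provide the measured lengths for feedback.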

  7. Sustaining an Acquisition-based Growth Strategy

    DEFF Research Database (Denmark)

    Henningsson, Stefan; Toppenberg, Gustav; Shanks, Graeme

    Value-creating acquisitions are a major challenge for many firms. Our case study of Cisco Systems shows that an advanced Enterprise Architecture (EA) capability can contribute to the acquisition process through a) preparing the acquirer to become ‘acquisition ready’, b) identifying resource complementarity, c) directing and governing the integration process, and d) post-acquisition evaluation of the achieved integration and proposing ways forward. Using the EA capability in the acquisition process improves Cisco’s ability to rapidly capture value from its acquisitions and to sustain its acquisition...

  8. Signal displacement in spiral-in acquisitions: simulations and implications for imaging in SFG regions.

    Science.gov (United States)

    Brewer, Kimberly D; Rioux, James A; Klassen, Martyn; Bowen, Chris V; Beyea, Steven D

    2012-07-01

    Susceptibility field gradients (SFGs) cause problems for functional magnetic resonance imaging (fMRI) in regions like the orbital frontal lobes, leading to signal loss and image artifacts (signal displacement and "pile-up"). Pulse sequences with spiral-in k-space trajectories are often used when acquiring fMRI in SFG regions such as inferior/medial temporal cortex because it is believed that they have improved signal recovery and decreased signal displacement properties. Previously postulated theories explain differing reasons why spiral-in appears to perform better than spiral-out; however it is clear that multiple mechanisms are occurring in parallel. This study explores differences in spiral-in and spiral-out images using human and phantom empirical data, as well as simulations consistent with the phantom model. Using image simulations, the displacement of signal was characterized using point spread functions (PSFs) and target maps, the latter of which are conceptually inverse PSFs describing which spatial locations contribute signal to a particular voxel. The magnitude of both PSFs and target maps was found to be identical for spiral-out and spiral-in acquisitions, with signal in target maps being displaced from distant regions in both cases. However, differences in the phase of the signal displacement patterns that consequently lead to changes in the intervoxel phase coherence were found to be a significant mechanism explaining differences between the spiral sequences. The results demonstrate that spiral-in trajectories do preserve more total signal in SFG regions than spiral-out; however, spiral-in does not in fact exhibit decreased signal displacement. Given that this signal can be displaced by significant distances, its recovery may not be preferable for all fMRI applications. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Parallel reconstruction in accelerated multivoxel MR spectroscopy

    NARCIS (Netherlands)

    Boer, V. O.; Klomp, D. W. J.; Laterra, J.; Barker, P. B.

Purpose: To develop the simultaneous acquisition of multiple voxels in localized MR spectroscopy (MRS) using sensitivity encoding, allowing reduced total scan time compared to conventional sequential single voxel (SV) acquisition methods. Methods: Dual volume localization was used to simultaneously

  10. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  11. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  12. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
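For readers without the report at hand, the core quantity such an engine computes can be sketched in a few lines (Python here for brevity; this is not the actual VTK C++ API): the normalized correlation of a signal with a time-lagged copy of itself.

```python
import numpy as np

# Normalized autocorrelation of a signal at a given time lag -- the basic
# quantity an auto-correlative statistics engine computes. In a parallel
# engine, per-process partial sums would be reduced across ranks before
# the final normalization.
def autocorrelation(x, lag):
    x = np.asarray(x, dtype=float)
    a, b = x[:-lag], x[lag:]          # signal vs. lagged copy
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

t = np.arange(200)
x = np.sin(2 * np.pi * t / 20)            # period-20 signal
print(round(autocorrelation(x, 20), 3))   # → 1.0  (lag = one period)
print(round(autocorrelation(x, 10), 3))   # → -1.0 (lag = half period)
```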

  13. Conformal pure radiation with parallel rays

    International Nuclear Information System (INIS)

Leistner, Thomas; Nurowski, Paweł

    2012-01-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves. (paper)

  14. Compiling Scientific Programs for Scalable Parallel Systems

    National Research Council Canada - National Science Library

    Kennedy, Ken

    2001-01-01

    ...). The research performed in this project included new techniques for recognizing implicit parallelism in sequential programs, a powerful and precise set-based framework for analysis and transformation...

  15. Parallel thermal radiation transport in two dimensions

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R.P.; Ball, S.R.

    2003-01-01

This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state-of-the-art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 teraflop MPP (massively parallel processor) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  16. Parallel Algorithms for the Exascale Era

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-10-19

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
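The reproducibility-of-global-sums problem mentioned above stems from the non-associativity of floating-point addition; a minimal sketch:

```python
import math

# Floating-point addition is not associative, so the same values summed in
# a different order -- e.g. reduced across a different number of ranks --
# can give different bits.
values = [1e16, 1.0, -1e16, 1.0]

left_to_right = ((values[0] + values[1]) + values[2]) + values[3]
reordered = (values[0] + values[2]) + (values[1] + values[3])
print(left_to_right)   # 1.0  (the first 1.0 was absorbed into 1e16)
print(reordered)       # 2.0

# One remedy: an exactly rounded sum (math.fsum tracks exact partial
# sums), whose result is independent of operand order.
print(math.fsum(values), math.fsum(reversed(values)))  # 2.0 2.0
```

Scalable reproducible-sum algorithms for MPI reductions follow the same idea: make the result independent of the order in which partial sums are combined.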

  17. Parallel thermal radiation transport in two dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R.P.; Ball, S.R. [AWE Aldermaston (United Kingdom)

    2003-07-01

This paper describes the distributed memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state-of-the-art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 teraflop MPP (massively parallel processor) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  18. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  19. KENS data acquisition system KENSnet

    International Nuclear Information System (INIS)

    Arai, Masatoshi; Furusaka, Michihiro; Satoh, Setsuo; Johnson, M.W.

    1988-01-01

The installation of a new data acquisition system, KENSnet, has been completed at the KENS neutron facility. For data collection, 160 Mbytes of temporary disk storage and 1 MIPS of CPU are required. For the computing system, models were chosen from the VAX family of computers running the proprietary operating system VMS. VMS has a very user-friendly interface and is well suited to instrument control applications. New data acquisition electronics were also developed. A gate module receives a proton-extraction timing signal from the accelerator and checks the veto signals from the sample environment equipment (vacuum, temperature, chopper phasing, etc.). The signal is then issued to a delay-time module. A time-control module starts timing from the delayed start signal supplied by the delay-time module and distributes an encoded time-boundary address to memory modules at the preset times, enabling the memory modules to accumulate data histograms. The data acquisition control program (ICP) and the general data analysis program (Genie) were both developed at ISIS and have been installed in the new data acquisition system. They give the experimenter 'user-friendly' data acquisition and a good environment for data manipulation. The ICP controls the DAE and transfers the histogram data to the computers. (N.K.)

  20. Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor.

    Science.gov (United States)

    Zhao, Yanzhi; Jiao, Leihao; Weng, Dacheng; Zhang, Dan; Zheng, Rencheng

    2016-09-15

In the development of multi-dimensional force sensors, dimension coupling is the ubiquitous factor restricting improvement of the measurement accuracy. To effectively reduce the influence of dimension coupling on the parallel multi-dimensional force sensor, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, and the influence of friction on dimension coupling is effectively reduced by replacing sliding friction with rolling friction. In this paper, the mathematical model is established from the structural model of the parallel three-dimensional force sensor, and the modeling and analysis of mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the calculation results show that the mechanically decoupled parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. A load calibration and data acquisition experiment system was built, and calibration experiments were performed. According to the calibration experiments, the measurement error is less than 2.86% and the coupling error is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for applied research on parallel multi-dimensional force sensors.
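The abstract does not give the paper's exact definition of the coupling degree ε, but the idea can be illustrated with a hypothetical example: quantify inter-dimension coupling from a 3×3 calibration matrix as the worst-case ratio of off-diagonal (cross-dimension) to diagonal response. The matrix values below are made up.

```python
import numpy as np

# Hypothetical 3x3 response matrix mapping applied forces (Fx, Fy, Fz)
# to channel outputs; off-diagonal entries represent dimension coupling.
C = np.array([[1.00, 0.02, 0.01],
              [0.03, 0.95, 0.02],
              [0.01, 0.02, 1.05]])

def coupling_degree(C):
    """Worst-case per-channel ratio of cross-dimension to own-dimension
    response (one common coupling metric; not necessarily the paper's)."""
    C = np.abs(np.asarray(C, dtype=float))
    off = C - np.diag(np.diag(C))
    return float(np.max(off.sum(axis=1) / np.diag(C)))

print(round(coupling_degree(C), 4))   # → 0.0526
```

A mechanically decoupled structure aims to drive the off-diagonal entries, and hence this ratio, toward zero before any software correction is applied.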

  1. A new method for measuring temporal resolution in electrocardiogram-gated reconstruction image with area-detector computed tomography

    International Nuclear Information System (INIS)

    Kaneko, Takeshi; Takagi, Masachika; Kato, Ryohei; Anno, Hirofumi; Kobayashi, Masanao; Yoshimi, Satoshi; Sanda, Yoshihiro; Katada, Kazuhiro

    2012-01-01

The purpose of this study was to design and construct a phantom for evaluating motion artifacts in electrocardiogram (ECG)-gated reconstruction images, and to estimate the temporal resolution under various conditions. A stepping motor was used to move the phantom over an arc in a reciprocating manner. The program controlling the stepping motor permitted the stationary period and the heart rate to be adjusted as desired. Images of the phantom were obtained using a 320-row area-detector computed tomography (ADCT) system under various conditions using the ECG-gated reconstruction method. For estimation, the reconstruction phase was continuously changed and the motion artifacts were quantitatively assessed. The temporal resolution was calculated from the number of motion-free images. Changes in the temporal resolution with heart rate, rotation time, the number of reconstruction segments and the acquisition position in the z-axis were also investigated. The measured temporal resolution of ECG-gated half reconstruction is 180 ms, in good agreement with the nominal temporal resolution of 175 ms. The measured temporal resolution of ECG-gated segmental reconstruction is in good agreement with the nominal temporal resolution in most cases, and the estimated temporal resolution approaches the nominal temporal resolution as the number of reconstruction segments is increased. The temporal resolution is equal across acquisition positions in the z-axis. This study shows that we could design a new phantom for estimating temporal resolution. (author)
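The nominal values quoted above follow from the usual first-order relations for gated CT reconstruction; the sketch below assumes a 350 ms gantry rotation time (not stated in the abstract), which reproduces the quoted 175 ms half-reconstruction figure.

```python
# Nominal temporal resolution of ECG-gated CT reconstruction (first-order
# relations; actual values also depend on heart rate and the heart-rate /
# rotation-time combination):
#   half reconstruction:       ~ t_rot / 2
#   M-segment reconstruction:  ~ t_rot / (2 * M)
def nominal_temporal_resolution_ms(rotation_time_ms, segments=1):
    return rotation_time_ms / (2 * segments)

print(nominal_temporal_resolution_ms(350))      # 175.0 ms (half recon.)
print(nominal_temporal_resolution_ms(350, 2))   # 87.5 ms with 2 segments
```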

  2. Automatic data-acquisition and communications computer network for fusion experiments

    International Nuclear Information System (INIS)

    Kemper, C.O.

    1981-01-01

A network of more than twenty computers serves the data acquisition, archiving, and analysis requirements of the ISX, EBT, and beam-line test facilities at the Fusion Division of Oak Ridge National Laboratory. The network includes PDP-8, PDP-12, PDP-11, PDP-10, and Interdata 8-32 processors, and is unified by a variety of high-speed serial and parallel communications channels. While some processors are dedicated to experimental data acquisition, and others are dedicated to later analysis and theoretical work, many processors perform a combination of acquisition, real-time analysis and display, and archiving and communications functions. A network software system has been developed which runs in each processor and automatically transports data files from the point of acquisition to the point or points of analysis, display, and storage, providing conversion and formatting functions as required.

  3. Applications of parallel computer architectures to the real-time simulation of nuclear power systems

    International Nuclear Information System (INIS)

    Doster, J.M.; Sills, E.D.

    1988-01-01

In this paper the authors report on efforts to utilize parallel computer architectures for the thermal-hydraulic simulation of nuclear power systems, and on current research toward the development of advanced reactor operator aids and control systems based on this new technology. Many aspects of reactor thermal-hydraulic calculations are inherently parallel, and the computationally intensive portions of these calculations can be effectively implemented on modern computers. Timing studies indicate that faster-than-real-time, high-fidelity physics models can be developed when the computational algorithms are designed to take advantage of the computer's architecture. These capabilities allow for the development of novel control systems and advanced reactor operator aids. Coupled with an integral real-time data acquisition system, evolving parallel computer architectures can provide operators and control room designers with improved control and protection capabilities. Research efforts are currently under way in this area.

  4. Simultaneous acquisition of physiological data and nuclear medicine images

    International Nuclear Information System (INIS)

    Rosenthal, M.S.; Klein, H.A.; Orenstein, S.R.

    1988-01-01

    A technique has been developed that allows the simultaneous acquisition of both image and physiological data into a standard nuclear medicine computer system. The physiological data can be displayed along with the nuclear medicine images allowing temporal correlation between the two. This technique has been used to acquire images of gastroesophageal reflux simultaneously with the intraluminal esophageal pH. The resulting data are displayed either as a standard dynamic sequence with the physiological data appearing in a corner of the image or as condensed dynamic images
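The temporal correlation this technique enables can be sketched as pairing each image frame with the physiological sample nearest in time. The data format, timestamps, and pH values below are hypothetical; the record does not specify the actual acquisition format.

```python
import bisect

# Hypothetical channels: intraluminal pH sampled on its own clock, and
# nuclear medicine frame midpoints on the imaging clock.
ph_times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]      # s
ph_values = [6.8, 6.7, 4.1, 4.0, 6.6, 6.8]     # pH (dip = reflux event)
frame_times = [0.2, 1.2, 2.2]                  # s

def nearest_sample(t, times, values):
    """Return the physiological value whose timestamp is closest to t."""
    i = bisect.bisect_left(times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
    j = min(candidates, key=lambda j: abs(times[j] - t))
    return values[j]

print([nearest_sample(t, ph_times, ph_values) for t in frame_times])
# → [6.8, 4.1, 6.6]
```

Displaying the paired value in a corner of each frame, as the abstract describes, is then a straightforward overlay.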

  5. Parallel transmission techniques in magnetic resonance imaging: experimental realization, applications and perspectives; Parallele Sendetechniken in der Magnetresonanztomographie: experimentelle Realisierung, Anwendungen und Perspektiven

    Energy Technology Data Exchange (ETDEWEB)

    Ullmann, P.

    2007-06-15

    and parallel reception to further reduce the acquisition time. (orig.)

  6. Instrument Variables for Reducing Noise in Parallel MRI Reconstruction

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2017-01-01

Generalized autocalibrating partially parallel acquisition (GRAPPA) has been a widely used parallel MRI technique. However, noise deteriorates the reconstructed image as the reduction factor increases, or even at low reduction factors for some noisy datasets. Noise originating in the scanner propagates noise-related errors through the fitting and interpolation procedures of GRAPPA, distorting the final reconstructed image quality. The basic idea we propose for improving GRAPPA is to remove noise from a system identification perspective. In this paper, we first analyze the GRAPPA noise problem from a noisy input-output system perspective; then, a new framework based on the errors-in-variables (EIV) model is developed for analyzing the noise generation mechanism in GRAPPA and designing a concrete method, instrumental variables (IV) GRAPPA, to remove noise. The proposed EIV framework opens the possibility that noiseless GRAPPA reconstruction could be achieved by existing methods that solve the EIV problem other than the IV method. Experimental results show that the proposed reconstruction algorithm removes noise better than conventional GRAPPA, as validated with both phantom and in vivo brain data.
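The errors-in-variables idea behind IV-GRAPPA can be demonstrated with a toy regression (this is the statistical principle only, not the MRI reconstruction): when the regressor is observed with noise, ordinary least squares is biased toward zero, while an instrument correlated with the true regressor but not with its noise recovers the coefficient.

```python
import numpy as np

# Toy EIV setup: y depends on x_true, but we only observe a noisy x.
rng = np.random.default_rng(0)
n = 200_000
x_true = rng.normal(size=n)
x_noisy = x_true + rng.normal(scale=0.5, size=n)   # noisy "input" channel
y = 2.0 * x_true + rng.normal(scale=0.1, size=n)   # noisy "output" channel
z = x_true + rng.normal(scale=0.5, size=n)         # instrument: independent noise

beta_ols = (x_noisy @ y) / (x_noisy @ x_noisy)     # attenuated estimate
beta_iv = (z @ y) / (z @ x_noisy)                  # IV estimate

print(round(beta_ols, 2))   # ~1.6  (biased: 2 / (1 + 0.25))
print(round(beta_iv, 2))    # ~2.0  (recovers the true coefficient)
```

In GRAPPA terms, the acquired autocalibration lines play the role of the noisy input; fitting the interpolation kernel by plain least squares therefore inherits this attenuation bias.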

  7. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video frequency. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must meet a set of requirements, such as time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  8. Concept Acquisition and Experiential Change

    Directory of Open Access Journals (Sweden)

    William S. Robinson

    2014-12-01

Many have held the Acquisition of Concepts Thesis (ACT) that concept acquisition can change perceptual experience. This paper explains the close relation of ACT to ADT, the thesis that acquisition of dispositions to quickly and reliably recognize a kind of thing can change perceptual experience. It then states a highly developed argument given by Siegel (2010) which, if successful, would offer strong support for ADT and indirect support for ACT. Examination of this argument, however, reveals difficulties that undermine its promise. Distinctions made in this examination help to clarify an alternative view that denies ADT and ACT while accepting that long exposure to a class of materials may induce changes in phenomenology that lie outside perceptual experience itself.

  9. Spatio-Temporal Saliency Perception via Hypercomplex Frequency Spectral Contrast

    Directory of Open Access Journals (Sweden)

    Zhiqiang Tian

    2013-03-01

Salient object perception is the process of sensing salient information from spatio-temporal visual scenes, a rapid pre-attention mechanism for target location in a visual smart sensor. In recent decades, many successful models of visual saliency perception have been proposed to simulate this pre-attention behavior. Since most of these methods require ad hoc parameters or high-cost preprocessing, they are difficult to implement with computing parallelism in a smart sensor, or to use for rapid salient object detection. In this paper, we propose a novel spatio-temporal saliency perception method based on spatio-temporal hypercomplex spectral contrast (HSC). First, the proposed HSC algorithm represents features in the HSV (hue, saturation and value) color space, together with motion features, as a hypercomplex number. Second, the spatio-temporal salient objects are efficiently detected by hypercomplex Fourier spectral contrast in parallel. Finally, our saliency perception model also incorporates non-uniform sampling, a common property of human vision that directs visual attention to the logarithmic center of the image/video in natural scenes. The experimental results on public saliency perception datasets demonstrate the effectiveness of the proposed approach compared to eleven state-of-the-art approaches. In addition, we extend the proposed model to moving object extraction in dynamic scenes, where the proposed algorithm is superior to traditional algorithms.
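A scalar (grayscale) analogue conveys the core idea of frequency-domain saliency. The paper's HSC algorithm uses hypercomplex (quaternion) spectra over color and motion channels; the sketch below is only a simplified phase-spectrum scheme in the same spirit: reconstructing an image from its phase spectrum alone suppresses large homogeneous regions and highlights the distinctive patch.

```python
import numpy as np

# Toy scene: homogeneous background with one small distinctive patch.
img = np.full((64, 64), 0.5)
img[30:34, 40:44] = 1.0

# Phase-only reconstruction: keep the phase spectrum, discard magnitude.
f = np.fft.fft2(img)
phase_only = np.exp(1j * np.angle(f))
sal = np.abs(np.fft.ifft2(phase_only)) ** 2   # saliency map

# Saliency peaks at (or immediately around) the distinctive patch.
y, x = np.unravel_index(np.argmax(sal), sal.shape)
print(y, x)
```

The full HSC method replaces the complex FFT with a hypercomplex Fourier transform so that color and motion channels are whitened jointly rather than independently.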

  10. Parallel Computing for Brain Simulation.

    Science.gov (United States)

    Pastur-Romay, L A; Porto-Pazos, A B; Cedron, F; Pazos, A

    2017-01-01

The human brain is the most complex system in the known universe, and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet it is still not understood how and why most of these abilities arise. For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have enabled the first simulation with a number of neurons similar to that of a human brain. This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital, analog and hybrid models. This review covers the current applications of these works, as well as future trends. It focuses both on works seeking advances in neuroscience and on those seeking new discoveries in computer science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized, and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  11. Data Acquisition with GPUs: The DAQ for the Muon $g$-$2$ Experiment at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Gohn, W. [Kentucky U.

    2016-11-15

Graphical Processing Units (GPUs) have recently become a valuable computing tool for the acquisition of data at high rates and at relatively low cost. The devices work by parallelizing the code into thousands of threads, each executing a simple process, such as identifying pulses from a waveform digitizer. The CUDA programming library can be used to effectively write code to parallelize such tasks on Nvidia GPUs, providing a significant performance upgrade over CPU-based acquisition systems. The muon $g$-$2$ experiment at Fermilab relies heavily on GPUs to process its data. The data acquisition system for this experiment must be able to create deadtime-free records from 700 $\mu$s muon spills at a raw data rate of 18 GB per second. Data will be collected using 1296 channels of $\mu$TCA-based 800 MSPS, 12 bit waveform digitizers and processed in a layered array of networked commodity processors with 24 GPUs working in parallel to perform a fast recording of the muon decays during the spill. The described data acquisition system is currently being constructed, and will be fully operational before the start of the experiment in 2017.
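The per-waveform work assigned to a GPU thread in such a system can be sketched on the CPU (a simplified illustration; the threshold, waveform, and pulse model below are made up, not the experiment's actual algorithm): scan a digitized waveform and emit one (start sample, peak amplitude) record per threshold crossing.

```python
# Simple threshold pulse finder over a digitized waveform. On a GPU, each
# thread would run this kind of scan over its own waveform (or segment),
# which is why the task parallelizes so well.
def find_pulses(waveform, threshold):
    pulses = []
    in_pulse = False
    for i, v in enumerate(waveform):
        if v > threshold and not in_pulse:
            in_pulse, start, peak = True, i, v
        elif in_pulse:
            if v > peak:
                peak = v
            if v <= threshold:          # falling edge ends the pulse
                pulses.append((start, peak))
                in_pulse = False
    return pulses

wave = [0, 1, 0, 2, 7, 12, 6, 1, 0, 0, 9, 14, 3, 0]   # made-up ADC counts
print(find_pulses(wave, 5))   # → [(4, 12), (10, 14)]
```

Recording only these compact pulse records, rather than raw samples, is what makes a deadtime-free readout at 18 GB/s of raw digitizer data feasible.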

  12. The meteorological data acquisition system

    International Nuclear Information System (INIS)

    Bouharrour, S.; Thomas, P.

    1975-07-01

The 200 m meteorological tower of the Karlsruhe Nuclear Research Center has been equipped with 45 instruments measuring the meteorological parameters near ground level. Frequent polling of the instruments requires data acquisition with on-line data reduction. This task is fulfilled by peripheral units controlled by a PDP-8/I. This report presents details of the hardware configuration and a short description of the software configuration of the meteorological data acquisition system. The report also serves as a guide for maintenance and repair work on the system. (orig.)

  13. A review of spelling acquisition

    DEFF Research Database (Denmark)

    Dich, Nadya; Cohn, Abigail C.

    2013-01-01

This review article discusses how empirical data on the acquisition of spelling by children inform the question of the psycholinguistic validity of the phoneme, a concept central (at least implicitly) to most phonological theories. The paper reviews data on children's early spelling attempts ... literacy factors into modeling phonological knowledge. In this article, we show that the spelling acquisition data support and are best accounted for by models allowing for a hierarchy of representations, and that learning to read and write has a profound effect on the phonological knowledge of an adult.

  14. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
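The parallel-E idea can be sketched with a stand-in model (a 1-D two-component Gaussian mixture rather than the report's item response models, and serial chunking rather than true multithreading): the E step maps independently over data chunks, each returning sufficient statistics, which are reduced before the M step.

```python
import numpy as np

# Synthetic data from a well-separated two-component mixture.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(3, 1, 5000)])

def e_step_chunk(chunk, mu, pi):
    """E step on one chunk: return (responsibility counts, weighted sums).
    Unit component variances are assumed for brevity."""
    log_p = -0.5 * (chunk[:, None] - mu[None, :]) ** 2 + np.log(pi)
    r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    return r.sum(axis=0), r.T @ chunk

mu, pi = np.array([-1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(30):
    # "Parallel" E step: map over chunks (here serially; with threads or
    # MPI each chunk runs concurrently), then reduce the statistics.
    stats = [e_step_chunk(c, mu, pi) for c in np.array_split(data, 8)]
    counts = sum(s[0] for s in stats)
    sums = sum(s[1] for s in stats)
    mu, pi = sums / counts, counts / counts.sum()   # M step on reduced stats

print(np.round(mu, 1))   # close to the true means [-2, 3]
```

Because chunks contribute only additive sufficient statistics, the reduction is exact: the chunked E step yields the same update as a monolithic one.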

  15. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  16. Max CAPR: high-resolution 3D contrast-enhanced MR angiography with acquisition times under 5 seconds.

    Science.gov (United States)

    Haider, Clifton R; Borisch, Eric A; Glockner, James F; Mostardi, Petrice M; Rossman, Phillip J; Young, Phillip M; Riederer, Stephen J

    2010-10-01

High temporal and spatial resolution is desired in imaging of vascular abnormalities having short arterial-to-venous transit times. Methods that exploit temporal correlation to reduce the observed frame time exhibit temporal blurring, obfuscating bolus dynamics. Previously, a Cartesian acquisition with projection reconstruction-like (CAPR) sampling method was demonstrated for three-dimensional contrast-enhanced angiographic imaging of the lower legs using two-dimensional sensitivity-encoding acceleration and partial Fourier acceleration, providing 1 mm isotropic resolution of the calves with a 4.9-sec frame time and 17.6-sec temporal footprint. In this work, the CAPR acquisition is further undersampled to provide a net acceleration approaching 40 by eliminating all view sharing. The tradeoff between frame time and temporal footprint in view sharing is presented and characterized in phantom experiments. It is shown that the resultant three-dimensional image sets, acquired in 4.9 sec, have sufficient spatial and temporal resolution to clearly portray the arterial and venous phases of contrast passage. It is further hypothesized that these short-temporal-footprint sequences provide diagnostic-quality images. This is tested and shown in a series of nine contrast-enhanced MR angiography patient studies performed with the new method.
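The frame-time versus temporal-footprint tradeoff is simple arithmetic (numbers from the abstract; the relation itself is generic to view-shared time-resolved imaging): with view sharing, a new frame appears every 4.9 s but mixes data spanning a 17.6 s footprint; removing all view sharing collapses the footprint to the frame time.

```python
# View-sharing tradeoff in time-resolved MR angiography.
frame_time_s = 4.9
footprint_shared_s = 17.6

# Roughly how many interleaved subsets each shared frame combines.
shared_subsets = footprint_shared_s / frame_time_s
print(round(shared_subsets, 1))        # → 3.6

# Without view sharing, each frame uses exactly one subset's worth of
# data, so footprint == frame time.
footprint_no_sharing_s = frame_time_s
print(footprint_no_sharing_s)          # → 4.9
```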

  17. Parallel Boltzmann machines : a mathematical model

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.

    1991-01-01

    A mathematical model is presented for the description of parallel Boltzmann machines. The framework is based on the theory of Markov chains and combines a number of previously known results into one generic model. It is argued that parallel Boltzmann machines maximize a function consisting of a

  18. The convergence of parallel Boltzmann machines

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.; Eckmiller, R.; Hartmann, G.; Hauske, G.

    1990-01-01

    We discuss the main results obtained in a study of a mathematical model of synchronously parallel Boltzmann machines. We present supporting evidence for the conjecture that a synchronously parallel Boltzmann machine maximizes a consensus function that consists of a weighted sum of the regular

  19. Customizable Memory Schemes for Data Parallel Architectures

    NARCIS (Netherlands)

    Gou, C.

    2011-01-01

Memory system efficiency is crucial for any processor to achieve high performance, especially in the case of data parallel machines. The processing capabilities of parallel lanes are wasted when data requests are not served in a sustainable and timely manner. Irregular vector memory accesses

  20. Parallel Narrative Structure in Paul Harding's "Tinkers"

    Science.gov (United States)

    Çirakli, Mustafa Zeki

    2014-01-01

The present paper explores the implications of parallel narrative structure in Paul Harding's "Tinkers" (2009). Besides primarily recounting two sets of parallel narratives, "Tinkers" also comprises seemingly unrelated fragments such as excerpts from clock repair manuals and diaries. The main stories, however, told…