Sample records for temporal parallel acquisition

  1. High temporal resolution functional MRI using parallel echo volumar imaging

    Energy Technology Data Exchange (ETDEWEB)

    Rabrait, C.; Ciuciu, P.; Ribes, A.; Poupon, C.; Dehaine-Lambertz, G.; LeBihan, D.; Lethimonnier, F. [CEA Saclay, DSV, I2BM, Neurospin, F-91191 Gif Sur Yvette (France); Le Roux, P. [GEHC, Buc (France); Dehaine-Lambertz, G. [Unite INSERM 562, Gif Sur Yvette (France)


    Purpose: To combine parallel imaging with 3D single-shot acquisition (echo volumar imaging, EVI) in order to acquire high temporal resolution volumar functional MRI (fMRI) data. Materials and Methods: An improved EVI sequence was associated with parallel acquisition and field-of-view reduction in order to acquire a large brain volume in 200 msec. Temporal stability and functional sensitivity were increased through optimization of all imaging parameters and Tikhonov regularization of the parallel reconstruction. Two human volunteers were scanned with parallel EVI in a 1.5 T whole-body MR system while undergoing a slow event-related auditory paradigm. Results: Thanks to parallel acquisition, the EVI volumes display a low level of geometric distortions and signal losses. After removal of low-frequency drifts and physiological artifacts, activations were detected in the temporal lobes of both volunteers and voxel-wise hemodynamic response functions (HRF) could be computed. On these HRF, different habituation behaviors in response to sentence repetition could be identified. Conclusion: This work demonstrates the feasibility of high temporal resolution 3D fMRI with parallel EVI. Combined with advanced estimation tools, this acquisition method should prove useful to measure neural activity timing differences or study the nonlinearities and non-stationarities of the BOLD response. (authors)
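    As a concrete illustration of the regularized reconstruction step described above: a SENSE-type unfold with Tikhonov regularization reduces, for each group of aliased voxels, to a small damped least-squares solve. The following is a minimal NumPy sketch with assumed coil sensitivities and an illustrative regularization weight; none of the numbers come from the paper.

```python
import numpy as np

def sense_unfold(y, S, lam=0.05):
    """Tikhonov-regularized SENSE unfolding for one set of aliased voxels.

    y   : (n_coils,) complex aliased coil measurements
    S   : (n_coils, R) coil sensitivities at the R overlapped positions
    lam : regularization weight trading noise amplification for bias
    """
    A = S.conj().T @ S + lam**2 * np.eye(S.shape[1])
    return np.linalg.solve(A, S.conj().T @ y)

# Toy example: 4 coils, acceleration factor R = 2
rng = np.random.default_rng(0)
S = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
x_true = np.array([1.0 + 0.5j, -0.3 + 0.2j])
y = S @ x_true + 0.01 * rng.standard_normal(4)  # noisy measurements
x_hat = sense_unfold(y, S)
print(np.round(np.abs(x_hat - x_true), 3))
```

    The regularization weight controls the usual trade-off: larger values suppress noise amplification in poorly conditioned voxel groups at the price of a small bias.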

  2. Modeling Parallel System Workloads with Temporal Locality (United States)

    Minh, Tran Ngoc; Wolters, Lex

    In parallel systems, similar jobs tend to arrive within bursty periods. This leads to the locality phenomenon, a persistent similarity between nearby jobs, in real parallel computer workloads. This important phenomenon deserves to be taken into account as a characteristic of any workload model. Regrettably, it has received little if any attention from researchers, and the synthetic workloads used for performance evaluation to date often lack locality. Addressing this gap, Feitelson suggested a general repetition approach to model locality in synthetic workloads [6]. Using this approach, Li et al. recently introduced a new method for modeling temporal locality in workload attributes such as run time and memory [14]. However, because they assume that each job in the synthetic workload requires a single processor, parallelism is not taken into account in their study. In this paper, we propose a new model for parallel computer workloads based on their result. We first improve their model to better control the locality of the run-time process, and then model the parallelism. The key idea for modeling the parallelism is to control the cross-correlation between the run time and the number of processors. Experimental results show that our model not only controls the cross-correlation well, but also fits the marginal distribution nicely. Furthermore, the locality feature is also preserved in our model.
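    One standard way to realize the key idea above (controlling cross-correlation between run time and processor count while keeping chosen marginals) is a Gaussian copula. The sketch below is not the authors' model; the lognormal run-time marginal, the tanh squashing, and the power-of-two processor levels are all illustrative assumptions.

```python
import numpy as np

def correlated_job_stream(n, rho, rng):
    """Draw (run_time, n_procs) pairs with tunable cross-correlation rho.

    Correlated standard normals are mapped through the desired marginals
    (lognormal run times, power-of-two processor counts), so marginals
    and cross-correlation can be controlled independently.
    """
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    run_time = np.exp(1.0 + 1.5 * z[:, 0])             # lognormal marginal
    u = 0.5 * (1.0 + np.tanh(z[:, 1]))                 # squash to (0, 1)
    procs = 2 ** np.clip((u * 8).astype(int), 0, 7)    # 1..128 processors
    return run_time, procs

rng = np.random.default_rng(1)
rt, procs = correlated_job_stream(20000, rho=0.8, rng=rng)
r = np.corrcoef(np.log(rt), np.log(procs))[0, 1]
print(round(r, 2))  # empirical log-log cross-correlation, close to rho
```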

  3. Parallel Spectral Acquisition with an Ion Cyclotron Resonance Cell Array. (United States)

    Park, Sung-Gun; Anderson, Gordon A; Navare, Arti T; Bruce, James E


    Mass measurement accuracy is a critical analytical figure-of-merit in most areas of mass spectrometry application. However, the time required for acquisition of high-resolution, high mass accuracy data limits many applications and is under continual pressure for improvement. Current efforts target implementation of higher electrostatic and magnetic fields because ion oscillatory frequencies increase linearly with field strength. As such, the time required for spectral acquisition at a given resolving power and mass accuracy decreases linearly with increasing field. Mass spectrometer developments to include multiple high-resolution detectors that can be operated in parallel could further decrease the acquisition time by a factor of n, the number of detectors. Efforts described here resulted in the development of an instrument with a set of Fourier transform ion cyclotron resonance (ICR) cells as detectors, constituting the first MS array capable of parallel high-resolution spectral acquisition. ICR cell array systems consisting of three or five cells were constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. Independent ion populations were injected and trapped within each cell in the array. Upon filling the array, all ions in all cells were simultaneously excited, and ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition, parallel mass spectrometry (MS) and MS/MS measurements, and parallel high-resolution acquisition with the MS array system.
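    The scaling argument in this abstract can be put in back-of-envelope form. All constants below are assumptions for illustration, not values from the paper: acquisition time for a target resolving power R scales as t ~ c·R/B at magnetic field B, and n independent ICR cells acquiring in parallel multiply throughput by n.

```python
# Illustrative throughput model for an FT-ICR cell array.
def spectra_per_second(resolving_power, field_tesla, n_cells, c=1e-4):
    t_single = c * resolving_power / field_tesla  # seconds per spectrum
    return n_cells / t_single

single = spectra_per_second(100_000, 7.0, n_cells=1)
array5 = spectra_per_second(100_000, 7.0, n_cells=5)
print(round(array5 / single, 1))  # → 5.0
```

    As expected, a five-cell array yields a factor-of-five throughput gain at fixed resolving power and field.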

  4. Parallel Temporal Dynamics in Hierarchical Cognitive Control (United States)

    Ranti, Carolyn; Chatham, Christopher H.; Badre, David


    Cognitive control allows us to follow abstract rules in order to choose appropriate responses given our desired outcomes. Cognitive control is often conceptualized as a hierarchical decision process, wherein decisions made at higher, more abstract levels of control asymmetrically influence lower-level decisions. These influences could evolve sequentially across multiple levels of a hierarchical decision, consistent with much prior evidence for central bottlenecks and seriality in decision-making processes. However, here, we show that multiple levels of hierarchical cognitive control are processed primarily in parallel. Human participants selected responses to stimuli using a complex, multiply contingent (third order) rule structure. A response deadline procedure allowed assessment of the accuracy and timing of decisions made at each level of the hierarchy. In contrast to a serial decision process, error rates across levels of the decision mostly declined simultaneously and at identical rates, with only a slight tendency to complete the highest level decision first. Simulations with a biologically plausible neural network model demonstrate how such parallel processing could emerge from a previously developed hierarchically nested frontostriatal architecture. Our results support a parallel processing model of cognitive control, in which uncertainty on multiple levels of a decision is reduced simultaneously. PMID:26051820

  5. Parallel data acquisition system for electron momentum spectrometer

    CERN Document Server

    Pang, W N


    A parallel data acquisition system has been developed for the study of electron impact ionization of atoms and molecules. The system has a large data storage capacity, providing good experimental resolution and system flexibility. It is used to collect and analyze data from electron momentum spectroscopy experiments. Results from electron momentum spectroscopy experiments on C4H10 molecules, at an incident energy of 1200 eV, are presented to demonstrate the performance of the system. (author)

  6. Multidimensional Wavelet-based Regularized Reconstruction for Parallel Acquisition in Neuroimaging

    CERN Document Server

    Chaari, Lotfi; Badillo, Solveig; Pesquet, Jean-Christophe; Ciuciu, Philippe


    Parallel MRI is a fast imaging technique that enables the acquisition of highly resolved images in space and/or time. The performance of parallel imaging strongly depends on the reconstruction algorithm, which can proceed either in the original k-space (GRAPPA, SMASH) or in the image domain (SENSE-like methods). To improve the performance of the widely used SENSE algorithm, 2D- or slice-specific regularization in the wavelet domain has been investigated in depth. In this paper, we extend this approach using 3D-wavelet representations in order to handle all slices together and address reconstruction artifacts which propagate across adjacent slices. The gain induced by such an extension (3D-Unconstrained Wavelet Regularized-SENSE: 3D-UWR-SENSE) is validated on anatomical image reconstruction where no temporal acquisition is considered. Another important extension accounts for temporal correlations that exist between successive scans in functional MRI (fMRI). In addition to the case of 2D+t acquisition schemes ad...

  7. Microprocessor event analysis in parallel with CAMAC data acquisition

    CERN Document Server

    Cords, D; Riege, H


    The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a CAMAC system (GEC-ELLIOTT System Crate) and shares CAMAC access with a Nord-10S computer. Interfaces have been designed and tested for the execution of CAMAC cycles, communication with the Nord-10S computer, and DMA transfer from CAMAC to the MIPROC-16 memory. The system is used in the JADE data-acquisition system at PETRA, where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events, and the results of various checks are appended to the event. In case of spurious triggers or clear beam-gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (5 refs).

  8. Detecting multineuronal temporal patterns in parallel spike trains

    Directory of Open Access Journals (Sweden)

    Kai S. Gansel


    We present a non-parametric and computationally efficient method that detects spatiotemporal firing patterns and pattern sequences in parallel spike trains and tests whether the observed numbers of repeating patterns and sequences on a given timescale are significantly different from those expected by chance. The method is generally applicable and uncovers coordinated activity with arbitrary precision by comparing it to appropriate surrogate data. The analysis of coherent patterns of spatially and temporally distributed spiking activity on various timescales enables the immediate tracking of diverse qualities of coordinated firing related to neuronal state changes and information processing. We apply the method to simulated data and multineuronal recordings from rat visual cortex and show that it reliably discriminates between data sets with random pattern occurrences and with additional exactly repeating spatiotemporal patterns and pattern sequences. Multineuronal cortical spiking activity appears to be precisely coordinated and exhibits a sequential organization beyond the cell assembly concept.
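    The pattern-counting-versus-surrogates idea can be sketched in a few lines. This is not the authors' exact statistic: the repeat count and the per-train time-shift surrogates below are one standard, illustrative choice that preserves single-train statistics while destroying cross-neuron timing.

```python
import numpy as np

def count_repeats(spikes, word_len):
    """Sum occurrence counts of all spatiotemporal 'words' (binary
    population vectors over word_len consecutive bins) that repeat."""
    n_words = spikes.shape[1] - word_len + 1
    words = [spikes[:, i:i + word_len].tobytes() for i in range(n_words)]
    _, counts = np.unique(words, return_counts=True)
    return int(counts[counts > 1].sum())

def surrogate_test(spikes, word_len, n_surr, rng):
    """Observed repeat count vs. surrogates with independently
    time-shifted spike trains (circular shifts)."""
    observed = count_repeats(spikes, word_len)
    null = [count_repeats(
                np.vstack([np.roll(row, rng.integers(1, spikes.shape[1]))
                           for row in spikes]), word_len)
            for _ in range(n_surr)]
    return observed, null

# Toy data: 8 neurons, 400 bins, with an exactly repeating 3-bin pattern
rng = np.random.default_rng(2)
spikes = (rng.random((8, 400)) < 0.05).astype(np.uint8)
pattern = (rng.random((8, 3)) < 0.5).astype(np.uint8)
for start in range(0, 400, 40):
    spikes[:, start:start + 3] = pattern
obs, null = surrogate_test(spikes, 3, 20, rng)
print(obs, int(np.percentile(null, 95)))
```

    Real analyses would use more refined surrogates (e.g. dithering) and a proper significance test, but the structure, observed count against a surrogate-derived null distribution, is the same.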

  9. Acquisition of multiple prior distributions in tactile temporal order judgment

    Directory of Open Access Journals (Sweden)

    Yasuhito Nagai


    Bayesian estimation theory proposes that the brain acquires the prior distribution of a task and integrates it with sensory signals to minimize the effect of sensory noise. Psychophysical studies have demonstrated that our brain actually implements Bayesian estimation in a variety of sensory-motor tasks. However, these studies only imposed one prior distribution on participants within a task period. In this study, we investigated the conditions that enable the acquisition of multiple prior distributions in temporal order judgment (TOJ) of two tactile stimuli across the hands. In Experiment 1, stimulation intervals were randomly selected from one of two prior distributions (biased to right hand earlier and biased to left hand earlier) in association with color cues (green and red, respectively). Although the acquisition of the two priors was not enabled by the color cues alone, it was significant when participants shifted their gaze (above or below) in response to the color cues. However, the acquisition of multiple priors was not significant when participants moved their mouths (opened or closed). In Experiment 2, the spatial cues (above and below) were used to identify which eye position or retinal cue position was crucial for the eye-movement-dependent acquisition of multiple priors in Experiment 1. The acquisition of the two priors was significant when participants moved their gaze to the cues (i.e., the cue positions on the retina were constant across the priors), as well as when participants did not shift their gaze (i.e., the cue positions on the retina changed according to the priors). Thus, both eye and retinal cue positions were effective in acquiring multiple priors. Based on previous neurophysiological reports, we discuss possible neural correlates that contribute to the acquisition of multiple priors.
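    The Bayesian integration this abstract assumes has a simple closed form in the Gaussian case: the estimate is a precision-weighted average of the sensed value and the prior mean. The numbers below are illustrative, not taken from the study.

```python
def bayes_estimate(obs, sigma_obs, mu_prior, sigma_prior):
    """Posterior mean for a Gaussian prior and Gaussian likelihood:
    a precision-weighted average of observation and prior mean."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_obs**2)
    return w * obs + (1 - w) * mu_prior

# Two cue-dependent priors over the stimulation interval (ms): one biased
# toward "right hand first" (+50 ms), one toward "left hand first" (-50 ms).
obs = 0.0  # sensed interval, corrupted by sensory noise
est_right = bayes_estimate(obs, sigma_obs=40.0, mu_prior=+50.0, sigma_prior=30.0)
est_left = bayes_estimate(obs, sigma_obs=40.0, mu_prior=-50.0, sigma_prior=30.0)
print(round(est_right, 1), round(est_left, 1))  # → 32.0 -32.0
```

    The same physically simultaneous stimulation is thus judged differently under the two priors, which is the behavioral signature the experiments probe.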

  10. DAPHNE: a parallel multiprocessor data acquisition system for nuclear physics. [Data Acquisition by Parallel Histogramming and NEtworking

    Energy Technology Data Exchange (ETDEWEB)

    Welch, L.C.


    This paper describes a project to meet the data acquisition needs of a new accelerator, ATLAS, being built at Argonne National Laboratory. ATLAS is a heavy-ion superconducting linear accelerator providing beam energies up to 25 MeV/A with a relative spread in beam energy as good as 0.0001 and a time spread of less than 100 psec. Details about the hardware front end, command language, data structure, and the flow of event treatment are covered.

  11. Spatio-temporal light shaping for parallel nano-biophotonics

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Palima, Darwin

    followed separate tracks. Width-shaping, or spatial techniques, have mostly ignored light's thickness (using continuous-wave lasers), while thickness-shaping, or temporal techniques, typically ignored the beam width. This disconnected spatial and temporal track also shows in our own research where we...... the use of shaped light for electrode-free and contact-free switching of brain circuits, e.g. to probe underlying mechanisms in disorders like Alzheimer's or Parkinson's....

  12. Does Parallel Distributed Processing Provide a Plausible Framework for Modeling Early Reading Acquisition? (United States)

    McEneaney, John E.

    A study compared a parallel distributed processing (PDP) model with a more traditional symbolic information processing model that accounts for early reading acquisition by human subjects. Two experimental paradigms were simulated. In one paradigm (a "savings" paradigm) subjects were divided into two groups and trained with two sets of…

  13. Modeling parallelization and flexibility improvements in skill acquisition : From dual tasks to complex dynamic skills

    NARCIS (Netherlands)

    Taatgen, N


    Emerging parallel processing and increased flexibility during the acquisition of cognitive skills form a combination that is hard to reconcile with rule-based models that often produce brittle behavior. Rule-based models can exhibit these properties by adhering to 2 principles: that the model

  14. Quantitative metrics for evaluating parallel acquisition techniques in diffusion tensor imaging at 3 Tesla. (United States)

    Ardekani, Siamak; Selva, Luis; Sayre, James; Sinha, Usha


    Single-shot echo-planar based diffusion tensor imaging is prone to geometric and intensity distortions. Parallel imaging is a means of reducing these distortions while preserving spatial resolution. A quantitative comparison at 3 T of parallel imaging for diffusion tensor images (DTI) using k-space (generalized auto-calibrating partially parallel acquisitions; GRAPPA) and image domain (sensitivity encoding; SENSE) reconstructions at different acceleration factors, R, is reported here. Images were evaluated using 8 human subjects, with repeated scans for 2 subjects to estimate reproducibility. Mutual information (MI) was used to assess the global changes in geometric distortions. The effects of parallel imaging techniques on random noise and reconstruction artifacts were evaluated by placing 26 regions of interest and computing the standard deviation of apparent diffusion coefficient and fractional anisotropy along with the error of fitting the data to the diffusion model (residual error). The larger positive values in the mutual information index with increasing R values confirmed the anticipated decrease in distortions. Further, the MI index of GRAPPA sequences for a given R factor was larger than that of the corresponding mSENSE images. The residual error was lowest in the images acquired without parallel imaging, and among the parallel reconstruction methods, the R = 2 acquisitions had the least error. The standard deviation, accuracy, and reproducibility of the apparent diffusion coefficient and fractional anisotropy in homogeneous tissue regions showed that GRAPPA acquired with R = 2 had the least amount of systematic and random noise, and of these, significant differences with mSENSE, R = 2 were found only for the fractional anisotropy index. Evaluation of the current implementation of parallel reconstruction algorithms identified GRAPPA acquired with R = 2 as optimal for diffusion tensor imaging.
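    A histogram-based mutual information index of the kind used above can be computed in a few lines. This is a generic MI estimator, not necessarily the exact implementation used in the study; bin count and test images are illustrative.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two images; higher MI
    means intensities in one image better predict the other's."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A geometrically distorted copy scores lower MI against the reference
# than an undistorted (merely noisy) copy.
rng = np.random.default_rng(3)
base = rng.random((64, 64))
distorted = np.roll(base, 5, axis=0) + 0.1 * rng.random((64, 64))
undistorted = base + 0.1 * rng.random((64, 64))
print(mutual_information(base, undistorted) > mutual_information(base, distorted))
```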

  15. Temporal Dynamics of Recovery from Extinction Shortly after Extinction Acquisition (United States)

    Archbold, Georgina E.; Dobbek, Nick; Nader, Karim


    Evidence suggests that extinction is new learning. Memory acquisition involves both short-term memory (STM) and long-term memory (LTM) components; however, few studies have examined early phases of extinction retention. Retention of auditory fear extinction was examined at various time points. Shortly (1-4 h) after extinction acquisition…

  16. High spatial and temporal resolution retrospective cine cardiovascular magnetic resonance from shortened free breathing real-time acquisitions. (United States)

    Xue, Hui; Kellman, Peter; Larocca, Gina; Arai, Andrew E; Hansen, Michael S


    Cine cardiovascular magnetic resonance (CMR) is challenging in patients who cannot perform repeated breath holds. Real-time, free-breathing acquisition is an alternative, but image quality is typically inferior. There is a clinical need for techniques that achieve similar image quality to the segmented cine using a free breathing acquisition. Previously, high quality retrospectively gated cine images have been reconstructed from real-time acquisitions using parallel imaging and motion correction. These methods had limited clinical applicability due to lengthy acquisitions, and volumetric measurements obtained with such methods have not previously been evaluated systematically. This study introduces a new retrospective reconstruction scheme for real-time cine imaging which aims to shorten the required acquisition. A real-time acquisition of 16-20 s per acquired slice was input to a retrospective cine reconstruction algorithm, which employed non-rigid registration to remove respiratory motion and SPIRiT non-linear reconstruction with temporal regularization to fill in missing data. The algorithm was used to reconstruct cine loops with high spatial (1.3-1.8 × 1.8-2.1 mm²) and temporal resolution (retrospectively gated, 30 cardiac phases, temporal resolution 34.3 ± 9.1 ms). Validation was performed in 15 healthy volunteers using two different acquisition resolutions (256 × 144/192 × 128 matrix sizes). For each subject, 9 to 12 short axis and 3 long axis slices were imaged with both segmented and real-time acquisitions. The retrospectively reconstructed real-time cine images were compared to a traditional segmented breath-held acquisition in terms of image quality scores. Image quality scoring was performed by two experts using a scale between 1 and 5 (poor to good). For every subject, LAX and three SAX slices were selected and reviewed in random order. The reviewers were blinded to the reconstruction approach and acquisition protocols and

  17. Big Data GPU-Driven Parallel Processing Spatial and Spatio-Temporal Clustering Algorithms (United States)

    Konstantaras, Antonios; Skounakis, Emmanouil; Kilty, James-Alexander; Frantzeskakis, Theofanis; Maravelakis, Emmanuel


    Advances in graphics processing units' technology towards encompassing parallel architectures [1], comprising thousands of cores and multiples of parallel threads, provide the foundation in terms of hardware for the rapid processing of various parallel applications regarding seismic big data analysis. Seismic data are normally stored as collections of vectors in massive matrices, growing rapidly in size as wider areas are covered, denser recording networks are being established and decades of data are being compiled together [2]. Yet, many processes regarding seismic data analysis are performed on each seismic event independently or as distinct tiles [3] of specific grouped seismic events within a much larger data set. Such processes, independent of one another, can be performed in parallel, reducing processing times drastically [1,3]. This research work presents the development and implementation of three parallel processing algorithms using Cuda C [4] for the investigation of potentially distinct seismic regions [5,6] present in the vicinity of the southern Hellenic seismic arc. The algorithms, programmed and executed in parallel for comparison, are: fuzzy k-means clustering with expert knowledge [7] in assigning the overall number of clusters; density-based clustering [8]; and a self-developed spatio-temporal clustering algorithm encompassing expert [9] and empirical knowledge [10] for the specific area under investigation. Indexing terms: GPU parallel programming, Cuda C, heterogeneous processing, distinct seismic regions, parallel clustering algorithms, spatio-temporal clustering References [1] Kirk, D. and Hwu, W.: 'Programming massively parallel processors - A hands-on approach', 2nd Edition, Morgan Kaufman Publisher, 2013 [2] Konstantaras, A., Valianatos, F., Varley, M.R. and Makris, J.P.: 'Soft-Computing Modelling of Seismicity in the Southern Hellenic Arc', Geoscience and Remote Sensing Letters, vol. 5 (3), pp. 323-327, 2008 [3] Papadakis, S. and

  18. Skill Acquisition in Music Performance: Relations between Planning and Temporal Control. (United States)

    Drake, Carolyn; Palmer, Caroline


    This study investigated acquisition of music performance skills over 11 practice trials in novice and expert pianists differing in age, training, and sight-reading ability. The finding of a strong positive relationship between the mastery of temporal constraints and planning abilities within performance suggested that these two cognitive…

  19. Brains for birds and babies: Neural parallels between birdsong and speech acquisition. (United States)

    Prather, Jonathan F; Okanoya, Kazuo; Bolhuis, Johan J


    Language as a computational cognitive mechanism appears to be unique to the human species. However, there are remarkable behavioral similarities between song learning in songbirds and speech acquisition in human infants that are absent in non-human primates. Here we review important neural parallels between birdsong and speech. In both cases there are separate but continually interacting neural networks that underlie vocal production, sensorimotor learning, and auditory perception and memory. As in the case of human speech, neural activity related to birdsong learning is lateralized, and mirror neurons linking perception and performance may contribute to sensorimotor learning. In songbirds that are learning their songs, there is continual interaction between secondary auditory regions and sensorimotor regions, similar to the interaction between Wernicke's and Broca's areas in human infants acquiring speech and language. Taken together, song learning in birds and speech acquisition in humans may provide useful insights into the evolution and mechanisms of auditory-vocal learning. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. A High-performance temporal-spatial discretization method for the parallel computing of river basins (United States)

    Wang, Hao; Fu, Xudong; Wang, Yuanjian; Wang, Guangqian


    The distributed basin model (DBM) has become one of the most effective tools in river basin studies. To overcome the efficiency bottleneck of DBM, an effective parallel-computing method, named the temporal-spatial discretization method (TSDM), is proposed. In space, TSDM partitions the river basin into sub-basins. Compared to existing sub-basin-based parallel methods, TSDM can supply, organize, and dispatch more computable units. Through its dual temporal-spatial discretization, TSDM is capable of exploiting the basin's degree of parallelism to the maximum extent and obtaining higher computing performance. A mathematical formula assessing the maximum speedup ratio (MSR) of TSDM is provided as well. TSDM is independent of the implementation of any physical model and was preliminarily tested in the Lhasa River basin by simulating a 1-year rainfall-runoff process. The MSR acquired with the existing traditional approach is 7.98. Comparatively, the MSR using TSDM is 15.04 under the present limited computing resources, and appears to have potential to increase further. The final results demonstrate the effectiveness and applicability of TSDM.
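    The flavor of a maximum speedup ratio bound for sub-basin parallelism can be illustrated with a critical-path argument. This sketch is not the paper's TSDM formula: it simply bounds speedup by total work divided by the longest upstream-to-downstream dependency chain, with illustrative costs and levels.

```python
# Sub-basins at the same dependency level can run in parallel; a sub-basin
# can only start after its upstream level finishes.
def msr_bound(costs, level):
    """costs[i]: compute cost of sub-basin i; level[i]: its depth in the
    upstream->downstream dependency tree."""
    total = sum(costs)
    slowest = {}
    for c, l in zip(costs, level):
        slowest[l] = max(slowest.get(l, 0.0), c)  # slowest unit per level
    critical_path = sum(slowest.values())
    return total / critical_path

# 8 equal-cost sub-basins arranged in 3 dependency levels (4 -> 3 -> 1)
costs = [1.0] * 8
level = [0, 0, 0, 0, 1, 1, 1, 2]
print(round(msr_bound(costs, level), 2))  # → 2.67
```

    Refining the discretization in time as well as space (as TSDM does) shortens the critical path relative to total work, which is why it can raise the achievable MSR.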

  1. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; Kumar, Jitendra [ORNL; Mills, Richard T. [Argonne National Laboratory; Hoffman, Forrest M. [ORNL; Sripathi, Vamsi [Intel Corporation; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)


    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne, etc.), observational facilities (meteorological, eddy covariance, etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
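    The core iteration that MSTC parallelizes is the standard k-means loop: assign each point to its nearest centroid, then update the centroids. Below is a serial NumPy sketch of that loop on synthetic data; it is not the authors' MPI/CUDA/OpenACC implementation, and the deterministic initialization is an illustrative choice.

```python
import numpy as np

def kmeans(X, centers, iters=30):
    """Lloyd's k-means. The two steps inside the loop (nearest-centroid
    assignment, centroid update) are what a hybrid MPI/GPU code
    distributes across nodes and accelerators."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(centers)):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers

# Two well-separated synthetic "eco-regions" in a 2-D feature space
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
               rng.normal(3.0, 0.3, (100, 2))])
labels, centers = kmeans(X, centers=X[[0, 100]])
print(np.bincount(labels))
```

    The assignment step is embarrassingly parallel over points, and the update step is a parallel reduction per cluster, which is why k-means maps well onto hybrid architectures.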

  2. Distinct cerebellar lobules process arousal, valence and their interaction in parallel following a temporal hierarchy. (United States)

    Styliadis, Charis; Ioannides, Andreas A; Bamidis, Panagiotis D; Papadelis, Christos


    The cerebellum participates in emotion-related neural circuits formed by different cortical and subcortical areas, which sub-serve arousal and valence. Recent neuroimaging studies have shown a functional specificity of cerebellar lobules in the processing of emotional stimuli. However, little is known about the temporal component of this process. The goal of the current study is to assess the spatiotemporal profile of neural responses within the cerebellum during the processing of arousal and valence. We hypothesized that the excitation and timing of distinct cerebellar lobules is influenced by the emotional content of the stimuli. By using magnetoencephalography, we recorded magnetic fields from twelve healthy human individuals while passively viewing affective pictures rated along arousal and valence. By using a beamformer, we localized gamma-band activity in the cerebellum across time and we related the foci of activity to the anatomical organization of the cerebellum. Successive cerebellar activations were observed within distinct lobules starting ~160 ms after stimulus onset. Arousal was processed within both vermal (VI and VIIIa) and hemispheric (left Crus II) lobules. Valence (left VI) and its interaction (left V and left Crus I) with arousal were processed only within hemispheric lobules. Arousal processing was identified first at early latencies (160 ms) and was long-lived (until 980 ms). In contrast, the processing of valence and its interaction with arousal was short-lived at later stages (420-530 ms and 570-640 ms, respectively). Our findings provide the first evidence that distinct cerebellar lobules process arousal, valence, and their interaction in a parallel yet temporally hierarchical manner determined by the emotional content of the stimuli. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Neural processes in symmetry perception: a parallel spatio-temporal model. (United States)

    Zhu, Tao


    Symmetry is usually computationally expensive to detect reliably, while it is relatively easy to perceive. In spite of many attempts to understand the neurofunctional properties of symmetry processing, no symmetry-specific activation was found in earlier cortical areas. Psychophysical evidence relating to the processing mechanisms suggests that the basic processes of symmetry perception would not perform a serial, point-by-point comparison of structural features but rather operate in parallel. Here, modeling of neural processes in psychophysical detection of bilateral texture symmetry is considered. A simple fine-grained algorithm that is capable of performing symmetry estimation without explicit comparison of remote elements is introduced. A computational model of symmetry perception is then described to characterize the underlying mechanisms as one-dimensional spatio-temporal neural processes, each of which is mediated by intracortical horizontal connections in primary visual cortex and adopts the proposed algorithm for the neural computation. Simulated experiments have been performed to show the efficiency and the dynamics of the model. Model and human performances are comparable for symmetry perception of intensity images. Interestingly, the responses of V1 neurons to propagation activities reflecting higher-order perceptual computations have been reported in neurophysiologic experiments.
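    For contrast with the paper's approach, the naive baseline it argues against (an explicit comparison of remote elements) is easy to state: correlate a profile with its mirror image. This sketch is only that baseline, not the paper's propagation-based algorithm.

```python
import numpy as np

def symmetry_score(profile):
    """Pearson correlation of a 1-D intensity profile with its mirror
    image about the midpoint: 1.0 for perfect bilateral symmetry."""
    return float(np.corrcoef(profile, profile[::-1])[0, 1])

x = np.linspace(-1.0, 1.0, 101)
symmetric = np.exp(-x**2)          # even function: mirror-symmetric
skewed = np.exp(-(x - 0.5)**2)     # peak shifted off the axis
print(round(symmetry_score(symmetric), 2))  # → 1.0
print(symmetry_score(skewed) < 0.5)         # → True
```

    The point of the paper is that the visual system can arrive at such a judgment via local, parallel spatio-temporal propagation rather than by explicitly pairing each element with its distant mirror partner as this baseline does.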

  4. Effects of temporal stimuli in the acquisition of a serial tracking task

    Directory of Open Access Journals (Sweden)

    Cattuzzo MT


    Maria Teresa Cattuzzo (Physical Education Higher School, University of Pernambuco, Recife, Brazil) and Go Tani (School of Physical Education and Sports, University of São Paulo, São Paulo, Brazil). Abstract: This study investigated the effects of temporal stimuli on qualitative responses during the acquisition of a serial tracking task. One hundred and twenty young adult men performed 100 trials of a tracking task that consisted of touching six response keys in a given sequence in response to flashing light-emitting diodes, in order to identify and learn the serial pattern. Six experimental groups were created with different inter-stimulus intervals (ISI): G1: ISI = 300 ms; G2: ISI = 400 ms; G3: ISI = 500 ms; G4: ISI = 600 ms; G5: ISI = 700 ms; and G6: ISI = 800 ms. Performance was assessed by means of four types of responses: omission, error, correct, and anticipatory responses. The results showed differential effects of temporal stimulus uncertainty in the hierarchy of responses as the learning course progressed. Keywords: motor learning, open system, dynamic system, potential information

  5. Spatio-Temporal Patterns of the International Merger and Acquisition Network. (United States)

    Dueñas, Marco; Mastrandrea, Rossana; Barigozzi, Matteo; Fagiolo, Giorgio


    This paper analyses the world web of mergers and acquisitions (M&As) using a complex network approach. We use M&A data to build a temporal sequence of binary and weighted-directed networks for the period 1995-2010 and 224 countries (nodes) connected according to their M&A flows (links). We study different geographical and temporal aspects of the international M&A network (IMAN), building sequences of filtered sub-networks whose links belong to specific intervals of distance or time. Given that M&As and trade are complementary ways of reaching foreign markets, we perform our analysis using statistics employed for the study of the international trade network (ITN), highlighting the similarities and differences between the ITN and the IMAN. In contrast to the ITN, the IMAN is a low-density network characterized by a persistent giant component with many external nodes and low reciprocity. Clustering patterns are very heterogeneous and dynamic. High-income economies are the main acquirers and are characterized by high connectivity, implying that most countries are targets of a few acquirers. As in the ITN, geographical distance strongly impacts the structure of the IMAN: link-weights and node degrees have a non-linear relation with distance, and an assortative pattern is present at short distances.

  6. Parallel Bimodal Bilingual Acquisition: A Hearing Child Mediated in a Deaf Family (United States)

    Cramér-Wolrath, Emelie


    The aim of this longitudinal case study was to describe bimodal and bilingual acquisition in a hearing child, Hugo, especially the role his Deaf family played in his linguistic education. Video observations of the family interactions were conducted from the time Hugo was 10 months of age until he was 40 months old. The family language was Swedish…

  7. Quantitative assessment of parallel acquisition techniques in diffusion tensor imaging at 3.0 Tesla. (United States)

    Ardekani, S; Sinha, U


    Single shot echo-planar based diffusion tensor imaging is prone to geometric and intensity distortions which scale with the magnetic field. Parallel imaging is a means of reducing these distortions while preserving spatial resolution. A quantitative comparison at 3 T of parallel imaging for diffusion tensor sequences using k-space (GRAPPA) and image domain (SENSE) reconstructions is reported here. Indices quantifying distortions, artifacts and reliability were compared for all voxels in the corpus callosum and showed that GRAPPA with an acceleration factor of 2 was the optimal sequence.

  8. Real-time data acquisition and parallel data processing solution for TJ-II Bolometer arrays diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, E. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Ruiz, M. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Lopez, S. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Machon, D. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain); Ochando, M. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain)


    Maps of local plasma emissivity of TJ-II plasmas are determined using three-array cameras of silicon photodiodes (AXUV type from IRD). They are assigned to the top and side ports of the same sector of the vacuum vessel. Each array consists of 20 unfiltered detectors. The signals from each of these detectors are the inputs to an iterative algorithm of tomographic reconstruction. Currently, these signals are acquired by a PXI standard system at approximately 50 kS/s, with 12 bits of resolution, and are stored for off-line processing. A 0.5 s discharge generates 3 Mbytes of raw data. The algorithm's load exceeds the CPU capacity of the PXI system's controller in continuous mode, making it unfeasible to process the samples in parallel with their acquisition in a standard PXI system. A new architecture model has been developed that makes it possible to add one or several processing cards to a standard PXI system. With this model, it is possible to define how to distribute, in real time, the data from all acquired signals among the processing cards and the PXI controller. This way, by distributing the data processing among the system controller and two processing cards, the data processing can be done in parallel with the acquisition. Hence, this system configuration would be able to measure even in long-pulse devices.
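    The data volume quoted in the abstract can be checked with back-of-envelope arithmetic. The sketch below uses the figures stated above; the 2-byte sample size is an assumption (12-bit ADC values are commonly stored in 16-bit words), not something the record specifies.

    ```python
    # Back-of-envelope check of the TJ-II bolometer data volume quoted above.
    # Figures from the abstract; BYTES_PER_SAMPLE is an assumption (12-bit
    # samples padded to 16-bit words).
    ARRAYS = 3            # three-array cameras
    DETECTORS = 20        # unfiltered detectors per array
    RATE = 50_000         # samples per second per channel (~50 kS/s)
    SHOT = 0.5            # discharge length in seconds
    BYTES_PER_SAMPLE = 2  # assumed storage width

    channels = ARRAYS * DETECTORS
    raw_bytes = channels * RATE * SHOT * BYTES_PER_SAMPLE
    print(channels, raw_bytes)  # 60 channels, ~3 Mbytes per discharge
    ```

    The result, 60 channels producing about 3 Mbytes per 0.5 s discharge, matches the figure given in the record.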

  9. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Directory of Open Access Journals (Sweden)

    Yaser Afshar

    Full Text Available Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  10. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images. (United States)

    Afshar, Yaser; Sbalzarini, Ivo F


    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.
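    The decomposition step described above — splitting an input image into sub-images that different computers can process — can be illustrated with a toy sketch. This is not the authors' implementation; it simply shows one common way to cut a 2D image into strips, each padded with a few rows of overlap ("halo") so that workers can resolve objects that straddle strip boundaries.

    ```python
    import numpy as np

    def decompose(image, tiles, halo):
        """Split a 2D image into `tiles` horizontal strips, each padded with
        up to `halo` rows of neighbouring data. Illustrative sketch only;
        a distributed code would send one strip to each worker."""
        h = image.shape[0]
        edges = np.linspace(0, h, tiles + 1, dtype=int)
        subs = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            subs.append(image[max(0, lo - halo):min(h, hi + halo)])
        return subs

    img = np.arange(100 * 8).reshape(100, 8)
    parts = decompose(img, tiles=4, halo=2)
    print([p.shape[0] for p in parts])  # interior strips carry halo rows on both sides
    ```

    Each worker segments its padded strip locally; the network communication mentioned in the abstract is then needed to reconcile labels inside the overlapping halo regions.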

  11. Serum IGF-1 affects skeletal acquisition in a temporal and compartment-specific manner.

    Directory of Open Access Journals (Sweden)

    Hayden-William Courtland


    Full Text Available Insulin-like growth factor-1 (IGF-1) plays a critical role in the development of the growing skeleton by establishing both longitudinal and transverse bone accrual. IGF-1 has also been implicated in the maintenance of bone mass during late adulthood and aging, as decreases in serum IGF-1 levels appear to correlate with decreases in bone mineral density (BMD). Although informative, mouse models to date have been unable to separate the temporal effects of IGF-1 depletion on skeletal development. To address this problem, we performed a skeletal characterization of the inducible LID mouse (iLID), in which serum IGF-1 levels are depleted at selected ages. We found that depletion of serum IGF-1 in male iLID mice prior to adulthood (4 weeks) decreased trabecular bone architecture and significantly reduced transverse cortical bone properties (Ct.Ar, Ct.Th) by 16 weeks (adulthood). Likewise, depletion of serum IGF-1 in iLID males at 8 weeks of age resulted in significantly reduced transverse cortical bone properties (Ct.Ar, Ct.Th) by 32 weeks (late adulthood), but had no effect on trabecular bone architecture. In contrast, depletion of serum IGF-1 after peak bone acquisition (at 16 weeks) resulted in enhancement of trabecular bone architecture, but no significant changes in cortical bone properties by 32 weeks as compared to controls. These results indicate that while serum IGF-1 is essential for bone accrual during the postnatal growth phase, depletion of IGF-1 after peak bone acquisition (16 weeks) is compartment-specific and does not have a detrimental effect on cortical bone mass in the older adult mouse.

  12. Role of drug transporters and drug accumulation in the temporal acquisition of drug resistance

    Directory of Open Access Journals (Sweden)

    Veitch Zachary


    Full Text Available Abstract Background Anthracyclines and taxanes are commonly used in the treatment of breast cancer. However, tumor resistance to these drugs often develops, possibly due to overexpression of drug transporters. It remains unclear whether drug resistance in vitro occurs at clinically relevant doses of chemotherapy drugs and whether both the onset and magnitude of drug resistance can be temporally and causally correlated with the enhanced expression and activity of specific drug transporters. To address these issues, MCF-7 cells were selected for survival in increasing concentrations of doxorubicin (MCF-7DOX-2), epirubicin (MCF-7EPI), paclitaxel (MCF-7TAX-2), or docetaxel (MCF-7TXT). During selection cells were assessed for drug sensitivity, drug uptake, and the expression of various drug transporters. Results In all cases, resistance was only achieved when selection reached a specific threshold dose, which was well within the clinical range. A reduction in drug uptake was temporally correlated with the acquisition of drug resistance for all cell lines, but further increases in drug resistance at doses above threshold were unrelated to changes in cellular drug uptake. Elevated expression of one or more drug transporters was seen at or above the threshold dose, but the identity, number, and temporal pattern of drug transporter induction varied with the drug used as selection agent. The pan-drug-transporter inhibitor cyclosporin A was able to partially or completely restore drug accumulation in the drug-resistant cell lines, but had only partial to no effect on drug sensitivity. The inability of cyclosporin A to restore drug sensitivity suggests the presence of additional mechanisms of drug resistance. Conclusion This study indicates that drug resistance is achieved in breast tumour cells only upon exposure to concentrations of drug at or above a specific selection dose. While changes in drug accumulation and the expression of drug transporters does…

  13. Single-heartbeat electromechanical wave imaging with optimal strain estimation using temporally unequispaced acquisition sequences. (United States)

    Provost, Jean; Thiébaut, Stéphane; Luo, Jianwen; Konofagou, Elisa E


    Electromechanical Wave Imaging (EWI) is a non-invasive, ultrasound-based imaging method capable of mapping the electromechanical wave (EW) in vivo, i.e. the transient deformations occurring in response to the electrical activation of the heart. Optimal imaging frame rates, in terms of the elastographic signal-to-noise ratio, to capture the EW cannot be achieved due to the limitations of conventional imaging sequences, in which the frame rate is low and tied to the imaging parameters. To achieve higher frame rates, EWI is typically performed by combining sectors acquired during separate heartbeats, which are then combined into a single view. However, the frame rates achieved remain potentially sub-optimal and this approach precludes the study of non-periodic arrhythmias. This paper describes a temporally unequispaced acquisition sequence (TUAS) for which a wide range of frame rates are achievable independently of the imaging parameters, while maintaining a full view of the heart at high beam density. TUAS is first used to determine the optimal frame rate for EWI in a paced canine heart in vivo and then to image during ventricular fibrillation. These results indicate how EWI can be optimally performed within a single heartbeat, during free breathing and in real time, for both periodic and non-periodic cardiac events.

  14. Assessment of temporal resolution of multi-detector row computed tomography in helical acquisition mode using the impulse method. (United States)

    Ichikawa, Katsuhiro; Hara, Takanori; Urikura, Atsushi; Takata, Tadanori; Ohashi, Kazuya


    The purpose of this study was to propose a method for assessing the temporal resolution (TR) of multi-detector row computed tomography (CT) (MDCT) in the helical acquisition mode using temporal impulse signals generated by a metal ball passing through the acquisition plane. An 11-mm diameter metal ball was shot along the central axis at approximately 5 m/s during a helical acquisition, and the temporal sensitivity profile (TSP) was measured from the streak image intensities in the reconstructed helical CT images. To assess the validity, we compared the measured and theoretical TSPs for the 4-channel modes of two MDCT systems. A 64-channel MDCT system was used to compare TSPs and image quality of a motion phantom for the pitch factors P of 0.6, 0.8, 1.0 and 1.2 with a rotation time R of 0.5 s, and for two R/P combinations of 0.5/1.2 and 0.33/0.8. Moreover, the temporal transfer functions (TFs) were calculated from the obtained TSPs. The measured and theoretical TSPs showed perfect agreement. The TSP narrowed with an increase in the pitch factor. The image sharpness of the 0.33/0.8 combination was inferior to that of the 0.5/1.2 combination, despite their almost identical full width at tenth maximum values. The temporal TFs quantitatively confirmed these differences. The TSP results demonstrated that the TR in the helical acquisition mode significantly depended on the pitch factor as well as the rotation time, and the pitch factor and reconstruction algorithm affected the TSP shape. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
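    The record's analysis chain — measure a temporal sensitivity profile (TSP), take its full width at half/tenth maximum, and obtain a temporal transfer function from its Fourier transform — can be sketched on a synthetic profile. This is an illustrative reconstruction under the assumption of a Gaussian TSP, not the authors' measurement code.

    ```python
    import numpy as np

    # Sketch: widths and temporal transfer function of a synthetic Gaussian
    # TSP. The Gaussian shape and its sigma are assumptions for illustration.
    dt = 0.001                              # s, sampling interval
    t = np.arange(-0.5, 0.5, dt)
    sigma = 0.05
    tsp = np.exp(-t**2 / (2 * sigma**2))    # synthetic TSP

    def width_at(profile, t, level):
        """Full width of `profile` at `level` * max (e.g. 0.5 -> FWHM)."""
        above = t[profile >= level * profile.max()]
        return above[-1] - above[0]

    fwhm = width_at(tsp, t, 0.5)            # ~0.118 s for sigma = 0.05
    fwtm = width_at(tsp, t, 0.1)            # ~0.215 s (full width at tenth max)

    # Temporal transfer function: normalised magnitude spectrum of the TSP.
    tf = np.abs(np.fft.rfft(tsp))
    tf /= tf[0]
    freqs = np.fft.rfftfreq(t.size, dt)     # Hz axis for the transfer function
    ```

    The study's observation that two protocols can share a full width at tenth maximum yet differ in sharpness is exactly the kind of difference the transfer function `tf(freqs)` resolves while a single width number does not.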

  15. Parallel acquisition of q-space using second order magnetic fields for single-shot diffusion measurements. (United States)

    Kittler, W C; Galvosas, P; Hunter, M W


    A proof of concept is presented for the parallel acquisition of q-space under diffusion using a second order magnetic field. The second order field produces a gradient strength which varies in space, allowing a range of gradients to be applied in a single pulse, and q-space encoded into real space. With the use of a read gradient, the spatial information is regained from the NMR signal, and real space mapped onto q-space for a thin-slice excitation volume. As the diffusion-encoded image for a thin slice can be mapped onto q-space, and the average propagator is the inverse Fourier transform of the q-space data, it follows that the acquisition of the echo is a direct measurement of the average propagator. In the absence of a thin-slice selection, the real-space-to-q-space mapping is lost, but the ability to measure the diffusion coefficient is retained, with an increase in signal-to-noise ratio. Copyright © 2014 Elsevier Inc. All rights reserved.
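    The q-space relation underlying the record can be shown in a toy calculation. For free (Gaussian) diffusion the echo attenuation follows the Stejskal-Tanner-type relation E(q) = exp(-4π²q²DΔ), so the diffusion coefficient falls out of a linear fit of ln E against q². The numbers below are illustrative, not the authors' data.

    ```python
    import numpy as np

    # Toy q-space analysis (not the authors' pulse sequence): for free
    # diffusion, E(q) = exp(-4 * pi**2 * q**2 * D * Delta), so D follows
    # from the slope of ln E versus q**2.
    D_true = 2.0e-9                 # m^2/s, free-water-like value (assumed)
    Delta = 20e-3                   # s, diffusion time (assumed)
    q = np.linspace(0, 5e4, 50)     # 1/m, q-values sampled in parallel
    E = np.exp(-4 * np.pi**2 * q**2 * D_true * Delta)

    slope = np.polyfit(q**2, np.log(E), 1)[0]
    D_fit = -slope / (4 * np.pi**2 * Delta)
    print(D_fit)                    # recovers the input D
    ```

    The inverse Fourier transform of the same E(q) samples would give the Gaussian average propagator directly, which is the measurement the single-shot sequence above performs in one echo.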

  16. Parallel search engine optimisation and pay-per-click campaigns: A comparison of cost per acquisition

    Directory of Open Access Journals (Sweden)

    Wouter T. Kritzinger


    Full Text Available Background: It is imperative that commercial websites should rank highly in search engine result pages because these provide the main entry point to paying customers. There are two main methods to achieve high rankings: search engine optimisation (SEO) and pay-per-click (PPC) systems. Both require a financial investment – SEO mainly at the beginning, and PPC spread over time in regular amounts. If marketing budgets are applied in the wrong area, this could lead to losses and possibly financial ruin. Objectives: The objective of this research was to investigate, using three real-world case studies, the actual expenditure on and income from both SEO and PPC systems. These figures were then compared, and specifically, the cost per acquisition (CPA) was used to decide which system yielded the best results. Methodology: Three diverse websites were chosen, and analytics data for all three were compared over a 3-month period. Calculations were performed to reduce the figures to single ratios, to make comparisons between them possible. Results: Some of the resultant ratios varied widely between websites. However, the CPA was shown to be on average 52.1 times lower for SEO than for PPC systems. Conclusion: It was concluded that SEO should be the marketing system of preference for e-commerce-based websites. However, there are cases where PPC would yield better results – when instant traffic is required, and when a large initial expenditure is not possible.
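    The single ratio the study reduces everything to is simple to state: cost per acquisition is total marketing spend divided by the number of conversions it produced. The sketch below uses hypothetical figures, not the study's data.

    ```python
    # Cost per acquisition (CPA), the comparison metric used in the study
    # above. All figures below are hypothetical illustrations.
    def cpa(spend, conversions):
        """Total spend divided by number of paying customers acquired."""
        return spend / conversions

    seo = cpa(spend=5_000.0, conversions=400)   # one-off optimisation outlay
    ppc = cpa(spend=20_000.0, conversions=50)   # ongoing per-click charges
    print(ppc / seo)                            # how many times cheaper SEO is here
    ```

    The study's headline figure is this same ratio averaged over three real websites, where SEO's CPA came out 52.1 times lower than PPC's.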

  17. Single-Heartbeat Electromechanical Wave Imaging with Optimal Strain Estimation Using Temporally-Unequispaced Acquisition Sequences (United States)

    Provost, Jean; Thiébaut, Stéphane; Luo, Jianwen; Konofagou, Elisa E.


    Electromechanical Wave Imaging (EWI) is a non-invasive, ultrasound-based imaging method capable of mapping the electromechanical wave (EW) in vivo, i.e., the transient deformations occurring in response to the electrical activation of the heart. Achieving the optimal imaging frame rates, in terms of the elastographic signal-to-noise ratio, to capture the EW in a full-view of the heart poses a technical challenge due to the limitations of conventional imaging sequences, in which the frame rate is low and tied to the imaging parameters. To achieve higher frame rates, EWI is typically performed in multiple small regions of interest acquired over separate heartbeats, which are then combined into a single view. However, the reliance on multiple heartbeats has previously precluded the method from its application in non-periodic arrhythmias such as fibrillation. Moreover, the frame rates achieved remain sub-optimal, because they are determined by the imaging parameters rather than being optimized to image the EW. In this paper, we develop a temporally-unequispaced acquisition sequence (TUAS) for which a wide range of frame rates are achievable independently of the imaging parameters, while maintaining a full view of the heart at high beam density. TUAS is first used to determine the optimal frame rate for EWI in a paced canine heart in vivo. The feasibility of performing single-heartbeat EWI during ventricular fibrillation is then demonstrated. These results indicate that EWI can be performed optimally, within a single heartbeat, during free breathing, and implemented in real time for periodic and non-periodic cardiac events. PMID:22297208

  18. Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex


    Lafer-Sousa, Rosa; Conway, Bevil R.


    Visual-object processing culminates in inferior temporal (IT) cortex. To assess the organization of IT, we measured fMRI responses in alert monkey to achromatic images (faces, fruit, bodies, places) and colored gratings. IT contained multiple color-biased regions, which were typically ventral to face patches and, remarkably, yoked to them, spaced regularly at four locations predicted by known anatomy. Color and face selectivity increased for more anterior regions, indicative of a broad hierar...

  19. Temporal Dynamics of Late Second Language Acquisition: Evidence from Event-Related Brain Potentials (United States)

    Steinhauer, Karsten; White, Erin J.; Drury, John E.


    The ways in which age of acquisition (AoA) may affect (morpho)syntax in second language acquisition (SLA) are discussed. We suggest that event-related brain potentials (ERPs) provide an appropriate online measure to test some such effects. ERP findings of the past decade are reviewed with a focus on recent and ongoing research. It is concluded…

  20. Parallel signatures of selection in temporally isolated lineages of pink salmon

    DEFF Research Database (Denmark)

    Seeb, L. W.; Waples, R. K.; Limborg, M. T.


    Studying the effect of similar environments on diverse genetic backgrounds has long been a goal of evolutionary biologists, with studies typically relying on experimental approaches. Pink salmon, a highly abundant and widely ranging salmonid, provide a naturally occurring opportunity to study … in the southern pair from Puget Sound than in the northern Alaskan population pairs. We identified 15 SNPs reflecting signatures of parallel selection using both a differentiation-based method (BAYESCAN) and an environmental correlation method (BAYENV). These SNPs represent genomic regions that may … be particularly informative in understanding adaptive evolution in pink salmon and exploring how differing genetic backgrounds within a species respond to selection from the same natural environment.

  1. Detection and Evaluation of Spatio-Temporal Spike Patterns in Massively Parallel Spike Train Data with SPADE

    Directory of Open Access Journals (Sweden)

    Pietro Quaglio


    Full Text Available Repeated, precise sequences of spikes are largely considered a signature of activation of cell assemblies. These repeated sequences are commonly known under the name of spatio-temporal patterns (STPs). STPs are hypothesized to play a role in the communication of information in the computational process operated by the cerebral cortex. A variety of statistical methods for the detection of STPs have been developed and applied to electrophysiological recordings, but such methods scale poorly with the current size of available parallel spike train recordings (more than 100 neurons). In this work, we introduce a novel method capable of overcoming the computational and statistical limits of existing analysis techniques in detecting repeating STPs within massively parallel spike trains (MPST). We employ advanced data mining techniques to efficiently extract repeating sequences of spikes from the data. Then, we introduce and compare two alternative approaches to distinguish statistically significant patterns from chance sequences. The first approach uses a measure known as conceptual stability, of which we investigate a computationally cheap approximation for applications to such large data sets. The second approach is based on the evaluation of pattern statistical significance. In particular, we provide an extension to STPs of a method we recently introduced for the evaluation of statistical significance of synchronous spike patterns. The performance of the two approaches is evaluated in terms of computational load and statistical power on a variety of artificial data sets that replicate specific features of experimental data. Both methods provide an effective and robust procedure for detection of STPs in MPST data. The method based on significance evaluation shows the best overall performance, although at a higher computational cost. We name the novel procedure the spatio-temporal Spike PAttern Detection and Evaluation (SPADE) analysis.
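    The kind of object SPADE mines — a window of spikes across several neurons that recurs more often than chance — can be illustrated with a toy counter. This is emphatically not the SPADE algorithm (which uses frequent itemset mining and dedicated significance tests); it only shows what "a repeating spatio-temporal pattern in binned parallel spike trains" means.

    ```python
    import numpy as np
    from collections import Counter

    # Toy illustration of repeating spatio-temporal patterns (NOT the SPADE
    # algorithm): bin parallel spike trains and count how often each
    # neurons-by-time window of activity recurs across the recording.
    rng = np.random.default_rng(0)
    n_neurons, n_bins, win = 8, 1000, 3
    trains = (rng.random((n_neurons, n_bins)) < 0.05).astype(np.uint8)

    # Plant a diagonal pattern (neuron 1 fires, then 4, then 6) three times.
    pattern = np.zeros((n_neurons, win), dtype=np.uint8)
    pattern[[1, 4, 6], [0, 1, 2]] = 1
    for start in (100, 400, 800):
        trains[:, start:start + win] = pattern

    counts = Counter(
        trains[:, i:i + win].tobytes() for i in range(n_bins - win + 1)
    )
    print(counts[pattern.tobytes()])  # the planted window recurs at least 3 times
    ```

    Real recordings make this hard precisely where the abstract says: the space of candidate windows explodes with more than 100 neurons, and deciding which counts exceed chance requires the statistical machinery the paper develops.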

  2. Spatio-temporal PLC activation in parallel with intracellular Ca2+ wave propagation in mechanically stimulated single MDCK cells. (United States)

    Tsukamoto, Akira; Hayashida, Yasunori; Furukawa, Katsuko S; Ushida, Takashi


    Intracellular Ca2+ transients are evoked either by the opening of Ca2+ channels on the plasma membrane or by phospholipase C (PLC) activation resulting in IP3 production. Ca2+ wave propagation is known to occur in mechanically stimulated cells; however, it remains uncertain whether and how PLC activation is involved in intracellular Ca2+ wave propagation in mechanically stimulated cells. To answer these questions, it is indispensable to clarify the spatio-temporal relations between intracellular Ca2+ wave propagation and PLC activation. Thus, we visualized both cytosolic Ca2+ and PLC activation using a real-time dual-imaging system in individual Madin-Darby Canine Kidney (MDCK) cells. This system allowed us to simultaneously observe intracellular Ca2+ wave propagation and PLC activation in a spatio-temporal manner in a single mechanically stimulated MDCK cell. The results showed that PLC was activated not only in the mechanically stimulated region but also in other subcellular regions in parallel with intracellular Ca2+ wave propagation. These results support a model in which PLC is involved in Ca2+ signaling amplification in mechanically stimulated cells. 2009 Elsevier Ltd. All rights reserved.

  3. High-spatial-resolution whole-body MR angiography with high-acceleration parallel acquisition and 32-channel 3.0-T unit: initial experience. (United States)

    Nael, Kambiz; Fenchel, Michael; Krishnam, Mayil; Laub, Gerhard; Finn, J Paul; Ruehm, Stefan G


    The purpose of this HIPAA-compliant study was to prospectively evaluate the technical feasibility of a multistation high-spatial-resolution whole-body magnetic resonance (MR) angiography protocol in which high-acceleration parallel imaging (with acceleration factors of three and four) is performed with a 32-channel 3.0-T MR system. After institutional review board approval and written informed consent were obtained, 10 healthy volunteers (four men and six women aged 23-68 years) and four patients (two men and two women aged 56-79 years) suspected of having peripheral vascular disease underwent multistation whole-body contrast material-enhanced MR angiography. Use of multiarray surface coil technology and highly accelerated generalized autocalibrating partially parallel acquisition enabled the acquisition of isotropic high-spatial-resolution three-dimensional data sets for multiple stations. Two radiologists independently evaluated arterial image quality and presence of arterial stenoses. All examinations yielded good or excellent image quality. Interobserver agreement was excellent (kappa = 0.92; 95% confidence interval: 0.86, 0.96). Multistation whole-body MR angiography with high-acceleration parallel acquisition is feasible at 3.0 T. Further clinical studies combined with ongoing optimization of radiofrequency systems and coils seem warranted to advance the potential of this technology. (c) RSNA, 2007.
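    The record reports interobserver agreement as Cohen's kappa (0.92). For readers unfamiliar with the statistic, the sketch below computes it from two readers' categorical ratings: observed agreement corrected for the agreement expected by chance. The ratings are hypothetical, not the study's data.

    ```python
    from collections import Counter

    # Cohen's kappa, the interobserver agreement statistic reported above.
    # kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    # p_e is the agreement expected by chance from each reader's marginals.
    def cohen_kappa(a, b):
        n = len(a)
        po = sum(x == y for x, y in zip(a, b)) / n
        ca, cb = Counter(a), Counter(b)
        pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n**2
        return (po - pe) / (1 - pe)

    # Hypothetical image-quality ratings from two readers.
    reader1 = ["good", "excellent", "good", "good", "excellent", "good"]
    reader2 = ["good", "excellent", "good", "excellent", "excellent", "good"]
    print(round(cohen_kappa(reader1, reader2), 3))
    ```

    Values above roughly 0.8 are conventionally read as excellent agreement, which is why the study's kappa of 0.92 is described that way.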

  4. The different effect of parallel and sequential language acquisition on the cortical organisation of language: an fMRI study


    Wagelaar, Inken-Ulrike


    During the last decades, cognitive neuroscience has become increasingly interested in the topic of bilingualism. The influence of age of acquisition and proficiency level became the question of most interest. Neuroimaging studies have not considered a possibly different influence of age of acquisition and proficiency level on grammatical and semantic processes. This study investigated the influence of the age of acquisition and the proficiency level on the neural correlates of grammatical an…

  5. Cardiac imaging with multi-sector data acquisition in volumetric CT: variation of effective temporal resolution and its potential clinical consequences (United States)

    Tang, Xiangyang; Hsieh, Jiang; Taha, Basel H.; Vass, Melissa L.; Seamans, John L.; Okerlund, Darin R.


    With the increasing longitudinal detector dimension available in diagnostic volumetric CT, the step-and-shoot scan is becoming popular for cardiac imaging. In comparison to the helical scan, the step-and-shoot scan decouples patient table movement from cardiac gating/triggering, which facilitates cardiac imaging via multi-sector data acquisition, as well as the management of inter-cycle heart-beat variation (arrhythmia) and radiation dose efficiency. Ideally, multi-sector data acquisition can improve temporal resolution by a factor equal to the number of sectors (best scenario). In reality, however, the effective temporal resolution is jointly determined by gantry rotation speed and patient heart rate, and may be significantly lower than the ideal, or show no improvement at all (worst scenario). Hence, it is clinically relevant to investigate the behavior of the effective temporal resolution in cardiac imaging with multi-sector data acquisition. In this study, a 5-second cine scan of a porcine heart, spanning 6 consecutive cardiac cycles, is acquired. In addition to theoretical analysis and a motion phantom study, the clinical consequences of the variation in effective temporal resolution are evaluated qualitatively or quantitatively. By employing a 2-sector image reconstruction strategy, a total of 15 cases (the combinations C(6, 2)) between the best and worst scenarios are studied, providing informative guidance for the design and optimization of cardiac imaging in volumetric CT with multi-sector data acquisition.
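    The best/worst scenarios discussed above bound the effective temporal resolution of an N-sector reconstruction. A half-scan reconstruction needs roughly half a rotation of data; splitting that data over N cardiac cycles gives about R/(2N) at best, and no improvement over R/2 when gantry rotation and heart rate are synchronized. The sketch below computes only these two bounds and ignores the fan-angle overhead of a real half-scan, which is a simplifying assumption.

    ```python
    # Bounds on effective temporal resolution for N-sector cardiac CT
    # reconstruction (simplified: fan-angle overhead ignored). Best case:
    # half-scan data split evenly over N cycles -> R / (2N). Worst case:
    # gantry and heart in sync, so one cycle supplies everything -> R / 2.
    def effective_tr_bounds(rotation_time, n_sectors):
        best = rotation_time / (2 * n_sectors)
        worst = rotation_time / 2
        return best, worst

    print(effective_tr_bounds(0.35, 2))  # e.g. 0.35 s rotation, 2 sectors
    ```

    Where between these bounds a given scan lands depends on the ratio of rotation time to the R-R interval, which is exactly why the study enumerates all 15 two-cycle combinations of its 6 recorded heartbeats.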

  6. Implicit temporal tuning of working memory strategy during cognitive skill acquisition. (United States)

    Sohn, Myeong-Ho; Carlson, Richard A


    Complex cognitive tasks such as multiple-step arithmetic entail strategies for coordinating mental processes such as calculation with processes for managing working memory (WM). Such strategies must be sensitive to factors such as the time needed for calculation. In 2 experiments we tested whether people can learn the timing constraints on WM demands when those constraints are implicitly imposed. We varied the retention period for intermediate results using the well-known digit size effect: the larger the operands, the longer it takes to perform addition. During learning, participants practiced multiple-step arithmetic routines combined with large or small digits. At transfer, they performed both practiced and novel combinations. Practice performance was affected by digit size and WM demands. However, transfer performance was not fully explained by the digit size effect or the practice effect. We argue that participants acquired temporal tuning of the WM strategy to the implicit retention interval imposed by digit size and continued to apply this tuning to unpracticed data sets.

  7. MR-sialography: optimisation and evaluation of an ultra-fast sequence in parallel acquisition technique and different functional conditions of salivary glands; MR-Sialographie: Optimierung und Bewertung ultraschneller Sequenzen mit paralleler Bildgebung und oraler Stimulation

    Energy Technology Data Exchange (ETDEWEB)

    Habermann, C.R.; Cramer, M.C.; Aldefeld, D.; Weiss, F.; Kaul, M.G.; Adam, G. [Radiologisches Zentrum, Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie, Universitaetsklinikum Hamburg-Eppendorf (Germany); Graessner, J. [Siemens Medical Systems, Hamburg (Germany); Reitmeier, F.; Jaehne, M. [Kopf- und Hautzentrum, Klinik und Poliklinik fuer Hals-, Nasen- und Ohrenheilkunde, Universitaetsklinikum Hamburg-Eppendorf (Germany); Petersen, K.U. [Zentrum fuer Psychosoziale Medizin, Klinik und Poliklinik fuer Psychiatrie und Psychotherapie, Universitaetsklinikum Hamburg-Eppendorf (Germany)


    Purpose: To optimise a fast sequence for MR-sialography and to compare parallel and non-parallel acquisition techniques. Additionally, the effect of oral stimulation on image quality was evaluated. Materials and Methods: All examinations were performed on a 1.5-T superconducting system. After development of a suitable sequence for MR-sialography, a single-shot turbo-spin-echo sequence (ss-TSE) with an acquisition time of 2.8 s was used in transverse and oblique sagittal orientation in 27 healthy volunteers. All images were acquired with and without the parallel imaging technique. The ductal systems of the submandibular and parotid glands were assessed on a 1-to-5 visual scale for each side separately. Images were evaluated by four independent, experienced radiologists. For statistical evaluation, an ANOVA with post-hoc comparisons was used with an overall two-tailed significance level of P = .05. For evaluation of interobserver variability, an intraclass correlation was computed, with a correlation > .80 taken to indicate high agreement. Results: All parts of the salivary excretory ducts could be visualised in all volunteers, with an overall rating for all ducts of 2.26 (SD ± 1.09). A high correlation was obtained between the four observers, with an intraclass correlation of 0.9475. No significant influence of slice angulation was found (p = 0.74). In all healthy volunteers the visibility of the excretory ducts improved significantly after oral administration of a sialogogue (p < 0.001; η² = 0.049). The use of the parallel imaging technique did not improve visualisation, showing a significant loss of image quality compared with the acquisition technique without parallel imaging (p < 0.001; η² = 0.013). Conclusion: The optimised ss-TSE MR-sialography appears to be a fast and sufficient technique for visualising the excretory ducts of the main salivary glands, with no elaborate post-processing required.

  8. Multiband multislice GE-EPI at 7 tesla, with 16-fold acceleration using partial parallel imaging with application to high spatial and temporal whole-brain fMRI. (United States)

    Moeller, Steen; Yacoub, Essa; Olman, Cheryl A; Auerbach, Edward; Strupp, John; Harel, Noam; Uğurbil, Kâmil


    Parallel imaging in the form of multiband radiofrequency excitation, together with reduced k-space coverage in the phase-encode direction, was applied to human gradient echo functional MRI at 7 T for increased volumetric coverage and concurrent high spatial and temporal resolution. Echo planar imaging with simultaneous acquisition of four coronal slices separated by 44 mm and simultaneous 4-fold phase-encoding undersampling, resulting in 16-fold acceleration and up to 16-fold maximal aliasing, was investigated. Task/stimulus-induced signal changes and temporal signal behavior under basal conditions were comparable for multiband and standard single-band excitation with longer pulse repetition times. Robust, whole-brain functional mapping at 7 T, with 2 × 2 × 2 mm³ (pulse repetition time 1.25 s) and 1 × 1 × 2 mm³ (pulse repetition time 1.5 s) resolutions, covering fields of view of 256 × 256 × 176 mm³ and 192 × 172 × 176 mm³, respectively, was demonstrated with current gradient performance. (c) 2010 Wiley-Liss, Inc.
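
    The 16-fold figure in this record is simply the product of the two acceleration mechanisms it describes. A minimal sketch (illustrative arithmetic, not code from the paper) of how multiband excitation and phase-encode undersampling combine, and how simultaneous slices shorten the effective volume repetition time:

    ```python
    def total_acceleration(multiband_factor: int, undersampling_factor: int) -> int:
        """Combined acceleration: simultaneous-slice excitation times k-space undersampling."""
        return multiband_factor * undersampling_factor

    def volume_tr(single_band_tr_s: float, multiband_factor: int) -> float:
        """Approximate volume repetition time when multiband_factor slices share one excitation."""
        return single_band_tr_s / multiband_factor

    # Four simultaneous slices with 4-fold phase-encode undersampling, as in the study:
    print(total_acceleration(4, 4))  # 16
    ```

    The same product also bounds the maximal aliasing: up to 16 voxels can fold onto one measured sample, which the parallel reconstruction must disentangle.
    
    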

  9. Mouse model of enlarged vestibular aqueducts defines temporal requirement of Slc26a4 expression for hearing acquisition. (United States)

    Choi, Byung Yoon; Kim, Hyoung-Mi; Ito, Taku; Lee, Kyu-Yup; Li, Xiangming; Monahan, Kelly; Wen, Yaqing; Wilson, Elizabeth; Kurima, Kiyoto; Saunders, Thomas L; Petralia, Ronald S; Wangemann, Philine; Friedman, Thomas B; Griffith, Andrew J


    Mutations in human SLC26A4 are a common cause of hearing loss associated with enlarged vestibular aqueducts (EVA). SLC26A4 encodes pendrin, an anion-base exchanger expressed in inner ear epithelial cells that secretes HCO3- into endolymph. Studies of Slc26a4-null mice indicate that pendrin is essential for inner ear development, but have not revealed whether pendrin is specifically necessary for homeostasis. Slc26a4-null mice are profoundly deaf, with severe inner ear malformations and degenerative changes that do not model the less severe human phenotype. Here, we describe studies in which we generated a binary transgenic mouse line in which Slc26a4 expression could be induced with doxycycline. The transgenes were crossed onto the Slc26a4-null background so that all functional pendrin was derived from the transgenes. Varying the temporal expression of Slc26a4 revealed that E16.5 to P2 was the critical interval in which pendrin was required for acquisition of normal hearing. Lack of pendrin during this period led to endolymphatic acidification, loss of the endocochlear potential, and failure to acquire normal hearing. Doxycycline initiation at E18.5 or discontinuation at E17.5 resulted in partial hearing loss approximating the human EVA auditory phenotype. These data collectively provide mechanistic insight into hearing loss caused by SLC26A4 mutations and establish a model for further studies of EVA-associated hearing loss.

  10. Transputer-based parallel system for acquisition and on-line analysis of single-fiber electromyographic signals. (United States)

    Ayala, G F; Boscaino, R; Concas, G; Fornili, S L; Lapis, M


    We describe a transputer-based system suitable for accurate measurements of single-fiber electromyographic jitter. It consists of a conventional electromyograph, a home-made interface and a commercially available transputer-based board installed within a PC/AT compatible. Taking advantage of the concurrent operation of two transputer modules, the system features simultaneous data acquisition and statistical signal processing: while data are acquired and analyzed, a real-time visualization of the signal latency and its variability is provided. In the present configuration, the system can acquire and analyze up to 40,000 consecutive action potentials, which can be grouped into up to eight sets at different stimulation rates programmable up to 16 Hz. Since the determination of the electromyographic signal latency relies on least-squares smoothing and interpolation of the acquired data rather than on amplitude-threshold triggering, a low value (0.7 μs) of the so-called technical jitter is achieved. Computing power and memory can be easily extended by the addition of transputer-based modules. Typical results of data acquisition and on-line analysis are reported.
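
    The latency-estimation idea named above (least-squares interpolation instead of threshold triggering) can be sketched as follows. This is a hypothetical illustration, not the authors' code: a parabola is fitted through the three samples around the waveform peak, which locates the peak with sub-sample precision and so lowers the technical jitter of repeated latency measurements.

    ```python
    import numpy as np

    def subsample_peak(signal: np.ndarray, dt: float) -> float:
        """Peak time refined by least-squares quadratic interpolation around the argmax."""
        i = int(np.argmax(signal))
        if i == 0 or i == len(signal) - 1:
            return i * dt  # no neighbours to interpolate with
        y0, y1, y2 = signal[i - 1], signal[i], signal[i + 1]
        # Vertex of the parabola through the three points (offset in fractions of a sample).
        offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
        return (i + offset) * dt

    # A sampled Gaussian "action potential" whose true peak lies between samples:
    dt = 0.1
    t = np.arange(0, 10, dt)
    true_peak = 5.03
    pulse = np.exp(-((t - true_peak) ** 2) / 0.5)
    # The interpolated estimate beats the raw sample grid:
    print(subsample_peak(pulse, dt))
    ```

    With amplitude-threshold triggering, the estimated latency would jump in steps of one sample period; the interpolated estimate varies smoothly, which is the source of the reduced jitter.
    
    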

  11. MR angiography with parallel acquisition for assessment of the visceral arteries: comparison with conventional MR angiography and 64-detector-row computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Sutter, Reto [University Hospital Zurich, Institute of Diagnostic Radiology, Zurich (Switzerland); Cantonal Hospital Winterthur, Department of Radiology, Winterthur (Switzerland); Heilmaier, Christina [University Hospital Essen, Department of Diagnostic and Interventional Radiology and Neuroradiology, Essen (Germany); Lutz, Amelie M.; Willmann, Juergen K. [University Hospital Zurich, Institute of Diagnostic Radiology, Zurich (Switzerland); Stanford University School of Medicine, Department of Radiology, Stanford, CA (United States); Weishaupt, Dominik [University Hospital Zurich, Institute of Diagnostic Radiology, Zurich (Switzerland); Hospital Triemli, Department of Radiology, Zurich (Switzerland); Seifert, Burkhardt [University of Zurich, Biostatistics Unit, Institute of Social and Preventive Medicine, Zurich (Switzerland)


    The purpose of the study was to retrospectively compare three-dimensional gadolinium-enhanced magnetic resonance angiography (conventional MRA) with MRA accelerated by a parallel acquisition technique (fast MRA) for the assessment of visceral arteries, using 64-detector-row computed tomography angiography (MDCTA) as the reference standard. Eighteen patients underwent fast MRA (imaging time 17 s), conventional MRA (29 s) and MDCTA of the abdomen and pelvis. Two independent readers assessed subjective image quality and the presence of arterial stenosis. Data were analysed on per-patient and per-segment bases. Fast MRA yielded better subjective image quality in all segments compared with conventional MRA (P = 0.012 for reader 1, P = 0.055 for reader 2) because of fewer motion-induced artefacts. Sensitivity and specificity of fast MRA for the detection of arterial stenosis were 100% for both readers. Sensitivity of conventional MRA was 89% for both readers, and specificity was 100% (reader 1) and 99% (reader 2). Differences in sensitivity between the two types of MRA were not significant for either reader. Interobserver agreement for the detection of arterial stenosis was excellent for fast (κ = 1.00) and good for conventional MRA (κ = 0.76). Thus, subjective image quality of visceral arteries remains good on fast MRA compared with conventional MRA, and the two techniques do not differ substantially in the grading of arterial stenosis, despite the markedly reduced acquisition time of fast MRA. (orig.)
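
    The accuracy and agreement statistics quoted in this record follow standard definitions; a brief sketch (assumed formulas, not the study's analysis scripts) of sensitivity, specificity, and Cohen's kappa for binary stenosis calls against the MDCTA reference:

    ```python
    def sensitivity(tp: int, fn: int) -> float:
        """Fraction of reference-positive segments the reader detected."""
        return tp / (tp + fn)

    def specificity(tn: int, fp: int) -> float:
        """Fraction of reference-negative segments the reader correctly cleared."""
        return tn / (tn + fp)

    def cohens_kappa(both_pos: int, both_neg: int, only_a: int, only_b: int) -> float:
        """Chance-corrected agreement between two readers on binary calls."""
        n = both_pos + both_neg + only_a + only_b
        p_observed = (both_pos + both_neg) / n
        p_a = (both_pos + only_a) / n          # reader A positive rate
        p_b = (both_pos + only_b) / n          # reader B positive rate
        p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)
        return (p_observed - p_chance) / (1 - p_chance)

    # Hypothetical counts: 8 of 9 stenoses detected gives the ~89% of conventional MRA.
    print(round(sensitivity(tp=8, fn=1), 2))
    ```

    Perfect reader agreement (every segment called identically) yields κ = 1.00, the value reported for fast MRA; any discordant calls pull κ below 1, as for conventional MRA.
    
    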

  12. Temporally Resolved Ion Fluorescence Measurements of the Interaction of a Field-Parallel Laser Produced Plasma and an Ambient Magnetized Plasma (United States)

    Dorst, R. S.; Heuer, P. V.; Bondarenko, A. S.; Shaffer, D. B.; Contantin, G.; Vincena, S.; Tripathi, S.; Gekelman, W.; Weidl, M.; Winske, D.; Niemann, C.


    We present measurements of the collisionless coupling between an exploding laser-produced plasma (LPP) and a large, magnetized ambient plasma. The LPP was created by focusing the Raptor laser (400 J, 40 ns) on a planar plastic target embedded in the ambient Large Plasma Device (LAPD) plasma at the University of California, Los Angeles. The resulting ablated material moved parallel to the background magnetic field, interacting with the ambient plasma along the full 17 m length of the LAPD. A high temporal and spectral resolution monochromator measured fluorescence from debris and ambient ions to determine the debris velocity distribution by charge state and to study the fast electron precursor to the LPP. Measurements are compared to hybrid simulations of quasi-parallel shocks.

  13. Is High Temporal Resolution Achievable for Paediatric Cardiac Acquisitions during Several Heart Beats? Illustration with Cardiac Phase Contrast Cine-MRI.

    Directory of Open Access Journals (Sweden)

    Laurent Bonnemains

    During paediatric cardiac cine-MRI, data acquired during cycles of different lengths must be combined. Most of the time, Feinstein's model is used to project multiple cardiac cycles of variable lengths onto a mean cycle. Objective: to assess the effect of the Feinstein projection on the temporal resolution of cine-MRI. Methods: (1) the temporal errors introduced by Feinstein's projection were computed in 306 cardiac cycles fully characterized by tissue Doppler imaging with 6-phase analysis (from a population of 7 children and young adults); (2) the effects of these temporal errors on tissue velocities were assessed by simulating typical tissue phase mapping acquisitions and reconstructions; (3) myocardial velocity curves, extracted from high-resolution phase-contrast cine images, were compared for the 6 volunteers with the lowest and highest heart rate variability within a population of 36 young adults. Results: (1) the mean temporal misalignment was 30 ms over the cardiac cycle but reached 60 ms during early diastole; (2) during phase contrast MRI simulation, early diastole velocity peaks were diminished by 6.1 cm/s, leading to virtual disappearance of isovolumic relaxation peaks; (3) the smoothing and erasing of isovolumic relaxation peaks was confirmed on tissue phase mapping velocity curves between subjects with low and high heart rate variability (p = 0.05). Conclusion: the Feinstein cardiac model creates temporal misalignments that impair high temporal resolution phase contrast cine imaging when beat-to-beat heart rate is changing.
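
    The projection step this record critiques can be sketched with a simplified linear time normalization (an assumption for illustration; Feinstein's model is more specific than a uniform stretch): each variable-length cycle is mapped onto a common normalized time axis and the cycles are averaged. Because real cycle-length variation is concentrated in diastole, a global stretch misplaces early-diastolic events, which is exactly the misalignment the study quantifies.

    ```python
    import numpy as np

    def project_to_mean_cycle(cycles: list, n_phases: int) -> np.ndarray:
        """Linearly resample each variable-length cycle to n_phases points, then average."""
        resampled = []
        for cycle in cycles:
            src = np.linspace(0.0, 1.0, len(cycle))   # this cycle's normalized time axis
            dst = np.linspace(0.0, 1.0, n_phases)     # common phase grid
            resampled.append(np.interp(dst, src, cycle))
        return np.mean(resampled, axis=0)

    # Three beats of unequal length (80, 95, 110 samples) with the same underlying shape:
    cycles = [np.sin(np.linspace(0, np.pi, n)) for n in (80, 95, 110)]
    mean_cycle = project_to_mean_cycle(cycles, n_phases=100)
    print(mean_cycle.shape)  # (100,)
    ```

    Averaging sharp, short-lived features (such as the isovolumic relaxation peak) after imperfect alignment smooths or erases them, matching the 6.1 cm/s peak reduction reported above.
    
    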

  14. Spectral and temporal electroencephalography measures reveal distinct neural networks for the acquisition, consolidation, and interlimb transfer of motor skills in healthy young adults. (United States)

    Veldman, M P; Maurits, N M; Nijland, M A M; Wolters, N E; Mizelle, J C; Hortobágyi, T


    Plasticity of the central nervous system likely underlies motor learning. It is, however, unclear whether plasticity in cortical motor networks is motor learning stage-, activity-, or connectivity-dependent. From electroencephalography (EEG) data, we quantified effective connectivity by the phase slope index (PSI), neuronal activity by event-related desynchronization, and sensorimotor integration by N30 during the stages of visuomotor skill acquisition, consolidation, and interlimb transfer. Although N30 amplitudes and event-related desynchronization in parietal electrodes increased with skill acquisition, changes in PSI correlated most with motor performance in all stages of motor learning. Specifically, changes in PSI between the premotor, supplementary motor, and primary motor cortex (M1) electrodes correlated with skill acquisition, whereas changes in PSI between electrodes representing M1 and the parietal and primary sensory cortex (S1) correlated with skill consolidation. The magnitude of consolidated interlimb transfer correlated with PSI between bilateral M1s and between S1 and M1 in the non-practiced hemisphere. Spectral and temporal EEG measures, but especially PSI, correlated with improvements in complex motor behavior and revealed distinct neural networks in the acquisition, consolidation, and interlimb transfer of motor skills. A complete understanding of the neuronal mechanisms underlying motor learning can contribute to optimizing rehabilitation protocols. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
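
    The phase slope index used throughout this record can be sketched in a few lines (a simplified form of Nolte's definition, assumed here for illustration; the study's exact estimator may differ): PSI sums the imaginary part of products of complex coherency at adjacent frequencies, so a consistent phase slope across frequencies, i.e., a time lag from one channel to the other, produces a signed, directed value.

    ```python
    import numpy as np

    def phase_slope_index(x: np.ndarray, y: np.ndarray, n_epochs: int) -> float:
        """PSI ~ Im( sum_f conj(C(f)) * C(f+df) ), with coherency C estimated over epochs."""
        xs = np.array_split(x, n_epochs)
        ys = np.array_split(y, n_epochs)
        X = np.array([np.fft.rfft(e) for e in xs])
        Y = np.array([np.fft.rfft(e) for e in ys])
        sxy = np.mean(X * np.conj(Y), axis=0)          # cross-spectrum
        sxx = np.mean(np.abs(X) ** 2, axis=0)
        syy = np.mean(np.abs(Y) ** 2, axis=0)
        coh = sxy / np.sqrt(sxx * syy)                 # complex coherency
        return float(np.imag(np.sum(np.conj(coh[:-1]) * coh[1:])))

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4000)
    y = np.roll(x, 5) + 0.1 * rng.standard_normal(4000)   # y is a lagged, noisy copy of x
    print(phase_slope_index(x, y, n_epochs=20) > 0)        # x leading y gives a positive index
    ```

    By construction the measure is antisymmetric (swapping the channels flips the sign), which is what lets it be read as directed, effective connectivity rather than mere correlation.
    
    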

  15. Selection and integration of a network of parallel processors in the real time acquisition system of the 4π DIAMANT multidetector: modeling, realization and evaluation of the software installed on this network; Choix et integration d'un reseau de processeurs paralleles dans le systeme d'acquisition temps reel du multidetecteur 4π DIAMANT: modelisation, realisation et evaluation du logiciel implante sur ce reseau

    Energy Technology Data Exchange (ETDEWEB)

    Guirande, F. [Ecole Doctorale des Sciences Physiques et de l`Ingenieur, Bordeaux-1 Univ., 33 (France)


    The increase in sensitivity of 4π arrays such as EUROBALL or DIAMANT has led to an increase in the data flow rate into the data acquisition system. While at the electronic level the data flow has been distributed over several acquisition buses, the processing power of the data processing system must be increased accordingly. This work concerns the modelling and implementation of the software allocated to an architecture of parallel processors. Object-oriented analysis and formal methods were used; benchmarks and the future evolution of this architecture are presented. The thesis consists of two parts. Part A, devoted to 'Nuclear Spectroscopy with 4π multidetectors', contains a first chapter entitled 'The Physics of 4π multidetectors' and a second chapter entitled 'Integral architecture of 4π multidetectors'. Part B, devoted to 'Parallel acquisition system of DIAMANT', contains three chapters entitled 'Material architecture', 'Software architecture' and 'Validation and Performances'. Four appendices and a glossary of terms close this work. (author) 58 refs.

  16. Pregnancy-Related Group A Streptococcal Infections: Temporal Relationships Between Bacterial Acquisition, Infection Onset, Clinical Findings, and Outcome (United States)

    Hamilton, Stephanie M.; Stevens, Dennis L.; Bryant, Amy E.


    Puerperal sepsis caused by group A Streptococcus (GAS) remains an important cause of maternal and infant mortality worldwide, including in countries with modern antibiotic regimens, intensive care measures and infection control practices. To provide insights into the genesis of modern GAS puerperal sepsis, we reviewed the published cases and case series from 1974 to 2009, specifically seeking relationships between the likely source of pathogen acquisition, clinical signs and symptoms at infection onset, and patient outcomes that could provide clues for early diagnosis. Results suggest that the pathogenesis of pregnancy-related GAS infections in modern times is complex and not simply the result of exposure to GAS in the hospital setting. Additional research is needed to further explore the source of GAS, the specific M types involved, and the pathogenesis of these pregnancy-related infections to generate novel preventative and therapeutic strategies. PMID:23645851

  17. Aquisição de uma tarefa temporal (DRL) por ratos submetidos a lesão seletiva do giro denteado; The acquisition of a temporal task (DRL) by dentate gyrus-selective colchicine lesioned rats

    Directory of Open Access Journals (Sweden)

    José Lino Oliveira Bueno


    Previous studies have shown that dentate gyrus damage renders rats less efficient than sham-operated controls in the performance of a differential reinforcement of low rates of responding (DRL-20 s) task acquired prior to the lesion; even though the lesioned rats were able to postpone their responses after a previous bar press, they seem to underestimate time relative to sham-operated controls, which interferes with their performance. This study investigated the effects of multiple-site, intradentate, colchicine injections on the acquisition and performance of a DRL-20 s task in rats not exposed to preoperative training, i.e., trained after the lesion. Results showed that the lesioned rats improved along repetitive training in the DRL-20 s task; however, relative to the sham-operated controls, their acquisition rate was slower and the level of proficiency achieved was poorer, indicating that damage to the dentate gyrus interferes with temporal discrimination.
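
    The DRL-20 s contingency described above can be sketched as a toy simulation (hypothetical press times, not the study's data): a lever press is reinforced only if at least 20 s have elapsed since the previous press, and every press, reinforced or not, restarts the interval.

    ```python
    def drl_session(press_times: list, criterion_s: float = 20.0) -> int:
        """Count reinforced presses under a differential-reinforcement-of-low-rates schedule."""
        reinforced = 0
        last_press = None
        for t in press_times:
            if last_press is None or t - last_press >= criterion_s:
                reinforced += 1
            last_press = t  # each press resets the timing clock
        return reinforced

    # A rat that underestimates the interval and presses every 15 s earns almost nothing:
    print(drl_session([0, 15, 30, 45, 60]))  # 1
    # A well-timed rat pressing every 21 s is reinforced on every press:
    print(drl_session([0, 21, 42, 63, 84]))  # 5
    ```

    The contrast between the two simulated sessions shows why underestimating the interval, as the lesioned rats appear to do, directly lowers reinforcement efficiency.
    
    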

  18. IMAGO: a complete system for acquisition, processing, two/three-dimensional and temporal display of microscopic bio-images. (United States)

    Diaspro, A; Adami, M; Sartore, M; Nicolini, C


    This work describes IMAGO, an integrated bio-imaging system developed in our laboratory. The whole system consists of a personal computer, a commercially available frame grabber directly plugged into a personal computer, video input/output modules, specific hardware for z-axis movement and light shuttering, and a software package. IMAGO is user-friendly, menu driven and enables one to perform image acquisition with different methods: optical sectioning, flashing epifluorescence, transmitted and phase contrast microscopy. It makes various functions possible, including: image transfer, gray scale processing, conventional and advanced filtering, logical operations, look-up table management, three-dimensional (3D) editing, 3D representation and auto-correlation techniques. More than 100 image processing functions have been implemented and can be easily managed through IMAGO. Examples are given in the area of biophysical research, like 3D representation of nuclei and of electron microscopic images, in situ microscopy of living cells. IMAGO processes information in an x, y, z, t space.

  19. High Frequency Burst Firing of Granule Cells Ensures Transmission at the Parallel Fiber to Purkinje Cell Synapse at the Cost of Temporal Coding.

    Directory of Open Access Journals (Sweden)

    Boeke Job van Beugen


    Cerebellar granule cells (GrCs) convey information from mossy fibers (MFs) to Purkinje cells (PCs) via their parallel fibers (PFs). MF to GrC signaling allows transmission of frequencies up to 1 kHz and GrCs themselves can also fire bursts of action potentials with instantaneous frequencies up to 1 kHz. So far, no evidence has been published that these high-frequency bursts also exist in awake, behaving animals. Moreover, it remains to be shown whether such high-frequency bursts can transmit temporally coded information from MFs to PCs and/or whether these patterns of activity contribute to the spatiotemporal filtering properties of the granule cell layer. Here, we show that, upon sensory stimulation in both un-anesthetized rabbits and mice, GrCs can show bursts that consist of tens of spikes at instantaneous frequencies over 800 Hz. In vitro recordings from individual GrC-PC pairs following high-frequency stimulation revealed an overall low initial release probability of ~0.17. Nevertheless, high-frequency burst activity induced a short-lived facilitation to ensure signaling within the first few spikes, which was rapidly followed by a reduction in transmitter release to prevent immediate postsynaptic saturation. The facilitation rate among individual GrC-PC pairs was heterogeneously distributed and could be classified as either 'reluctant' or 'responsive' according to their release characteristics. Despite the variety of efficacy at individual connections, grouped activity in GrCs resulted in a linear relationship between PC response and PF burst duration at frequencies up to 300 Hz, allowing rate coding to persist at the network level. Together, these findings support the hypothesis that the cerebellar granular layer acts as a spatiotemporal filter between MF input and PC output (D'Angelo and De Zeeuw, 2009).

  20. Layer-parallel shortening across the Sevier fold-thrust belt and Laramide foreland of Wyoming: spatial and temporal evolution of a complex geodynamic system (United States)

    Weil, Arlo Brandon; Yonkee, W. Adolph


    Varying patterns of layer-parallel shortening (LPS) and vertical-axis rotations from the thin-skin Sevier fold-thrust belt to the thick-skin Laramide foreland of Wyoming are quantified from integrated structural, anisotropy of magnetic susceptibility (AMS), and paleomagnetic analyses. Within the Sevier belt, widespread early LPS was accommodated by spaced cleavage, fracture sets, minor folds, and minor faults. LPS directions are subperpendicular to structural trends of systematically curved thrust sheets of the Wyoming salient, reflecting a combination of primary dispersion and secondary rotation during thrusting. Within the Laramide foreland, limited LPS was accommodated mostly by minor faults with conjugate wedge and strike-slip geometries. LPS directions in gentler fold limbs vary from perpendicular to acute with structural trends of variably oriented, anastomosing basement-cored arches. Steep forelimbs display more complex relations, including younger fault sets that developed during evolving stress states and localized vertical-axis rotations. Although internal strain is limited, weak AMS lineations defined by kinked and rotated phyllosilicates are widely developed and consistently oriented perpendicular to measured LPS directions. Palinspastically restored LPS directions, corrected for paleomagnetically determined vertical-axis rotations, vary on average from W-E in the Sevier belt to WSW-ENE in the Laramide foreland. In detail, LPS directions display deflections related to primary sedimentary wedge geometry and basement fabrics. LPS in the Sevier belt is interpreted to partly reflect stress transmitted from the hinterland through the growing orogenic wedge and topographic stress along the front of the wedge. LPS in the Laramide foreland is interpreted to partly reflect basal traction during flat-slab subduction beneath thick cratonic lithosphere, with spatial-temporal variations in stress trajectories related to basement heterogeneities and evolving fault

  1. Pre-learning stress that is temporally removed from acquisition exerts sex-specific effects on long-term memory. (United States)

    Zoladz, Phillip R; Warnecke, Ashlee J; Woelke, Sarah A; Burke, Hanna M; Frigo, Rachael M; Pisansky, Julia M; Lyle, Sarah M; Talbot, Jeffery N


    We have examined the influence of sex and the perceived emotional nature of learned information on pre-learning stress-induced alterations of long-term memory. Participants submerged their dominant hand in ice cold (stress) or warm (no stress) water for 3 min. Thirty minutes later, they studied 30 words, rated the words for their levels of emotional valence and arousal and were then given an immediate free recall test. Twenty-four hours later, participants' memory for the word list was assessed via delayed free recall and recognition assessments. The resulting memory data were analyzed after categorizing the studied words (i.e., distributing them to "positive-arousing", "positive-non-arousing", "negative-arousing", etc. categories) according to participants' valence and arousal ratings of the words. The results revealed that participants exhibiting a robust cortisol response to stress exhibited significantly impaired recognition memory for neutral words. More interestingly, however, males displaying a robust cortisol response to stress demonstrated significantly impaired recall, overall, and a marginally significant impairment of overall recognition memory, while females exhibiting a blunted cortisol response to stress demonstrated a marginally significant impairment of overall recognition memory. These findings support the notion that a brief stressor that is temporally separated from learning can exert deleterious effects on long-term memory. However, they also suggest that such effects depend on the sex of the organism, the emotional salience of the learned information and the degree to which stress increases corticosteroid levels. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. High frequency burst firing of granule cells ensures transmission at the parallel fiber to purkinje cell synapse at the cost of temporal coding.

    NARCIS (Netherlands)

    B.J. van Beugen (Boeke); Z. Gao (Zhenyu); H.J. Boele (Henk-Jan); F.E. Hoebeek (Freek); C.I. de Zeeuw (Chris)


    Cerebellar granule cells (GrCs) convey information from mossy fibers (MFs) to Purkinje cells (PCs) via their parallel fibers (PFs). MF to GrC signaling allows transmission of frequencies up to 1 kHz and GrCs themselves can also fire bursts of action potentials with instantaneous

  3. Sequencing the hypervariable regions of human mitochondrial DNA using massively parallel sequencing: Enhanced data acquisition for DNA samples encountered in forensic testing. (United States)

    Davis, Carey; Peters, Dixie; Warshauer, David; King, Jonathan; Budowle, Bruce


    Mitochondrial DNA testing is a useful tool in the analysis of forensic biological evidence. In cases where nuclear DNA is damaged or limited in quantity, the higher copy number of mitochondrial genomes available in a sample can provide information about the source of a sample. Currently, Sanger-type sequencing (STS) is the primary method to develop mitochondrial DNA profiles. This method is laborious and time consuming. Massively parallel sequencing (MPS) can increase the amount of information obtained from mitochondrial DNA samples while improving turnaround time by decreasing the number of manipulations and, more so, by exploiting high throughput analyses to obtain interpretable results. In this study 18 buccal swabs, three different tissue samples from five individuals, and four bone samples from casework were sequenced at hypervariable regions I and II using STS and MPS. Sample enrichment for STS and MPS was PCR-based. Library preparation for MPS was performed using the Nextera® XT DNA Sample Preparation Kit and sequencing was performed on the MiSeq™ (Illumina, Inc.). MPS yielded full concordance of base calls with STS results, and the newer methodology was able to resolve length heteroplasmy in homopolymeric regions. This study demonstrates short amplicon MPS of mitochondrial DNA is feasible, can provide information not possible with STS, and lays the groundwork for development of a whole genome sequencing strategy for degraded samples. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. A Parallel Tracking Method for Acoustic Radiation Force Impulse Imaging (United States)

    Dahl, Jeremy J.; Pinton, Gianmarco F.; Mark, L; Agrawal, Vineet; Nightingale, Kathryn R.; Trahey, Gregg E.


    Radiation force-based techniques have been developed by several groups for imaging the mechanical properties of tissue. Acoustic Radiation Force Impulse (ARFI) imaging is one such method that uses commercially available scanners to generate localized radiation forces in tissue. The response of the tissue to the radiation force is determined using conventional B-mode imaging pulses to track micron-scale displacements in tissue. Current research in ARFI imaging is focused on producing real-time images of tissue displacements and related mechanical properties. Obstacles to producing a real-time ARFI imaging modality include data acquisition, processing power, data transfer rates, heating of the transducer, and patient safety concerns. We propose a parallel receive beamforming technique to reduce transducer heating and patient acoustic exposure, and to facilitate data acquisition for real-time ARFI imaging. Custom beam sequencing was used with a Siemens SONOLINE Antares™ scanner to track tissue displacements with parallel-receive beamforming in tissue-mimicking phantoms. Using simulations, the effects of material properties on parallel tracking are observed. Transducer and tissue heating for parallel tracking are compared to standard ARFI beam sequencing. The effects of tracking beam position and size of the tracked region are also discussed in relation to the size and temporal response of the region of applied force, and the impact on ARFI image contrast and signal-to-noise ratio are quantified. PMID:17328327
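
    The displacement-tracking step described above can be illustrated with a minimal sketch (not the scanner's beamforming code): a reference A-line is compared with a post-push A-line, and the lag maximizing their cross-correlation gives the displacement, here resolved only to the nearest sample for brevity (practical ARFI trackers interpolate to micron-scale sub-sample precision).

    ```python
    import numpy as np

    def estimate_shift(reference: np.ndarray, tracked: np.ndarray) -> int:
        """Integer-sample shift of `tracked` relative to `reference` via cross-correlation."""
        corr = np.correlate(tracked, reference, mode="full")
        # In "full" mode the zero-lag term sits at index len(reference) - 1.
        return int(np.argmax(corr)) - (len(reference) - 1)

    rng = np.random.default_rng(1)
    speckle = rng.standard_normal(512)          # stand-in for an RF speckle signal
    displaced = np.roll(speckle, 3)             # tissue moved by 3 samples after the push
    print(estimate_shift(speckle, displaced))   # 3
    ```

    Parallel receive beamforming applies this estimator to several laterally adjacent tracking lines reconstructed from a single transmit, which is what cuts the transmit count, and with it transducer heating and acoustic exposure.
    
    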

  5. Parallel computations

    CERN Document Server


    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  6. Uma interface lab-made para aquisição de sinais analógicos instrumentais via porta paralela do microcomputador A lab-made interface for acquisition of instrumental analog signals at the parallel port of a microcomputer

    Directory of Open Access Journals (Sweden)

    Edvaldo da Nóbrega Gaião


    A lab-made interface for acquisition of instrumental analog signals between 0 and 5 V at frequencies up to 670 kHz at the parallel port of a microcomputer is described. Since it uses few, small components, it was built into the connector of a printer parallel cable. Its performance was evaluated by monitoring the signals of four different instruments, and similar analytical curves were obtained with the interface and from readings of the instruments' displays. Because the components are cheap (~US$35.00) and easy to obtain, the proposed interface is a simple and economical alternative for data acquisition in small laboratories for routine work, research and teaching.

  7. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves


    "…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi…

  8. Retrospective Reconstruction of High Temporal Resolution Cine Images from Real-Time MRI using Iterative Motion Correction

    DEFF Research Database (Denmark)

    Hansen, Michael Schacht; Sørensen, Thomas Sangild; Arai, Andrew


    Cardiac function has traditionally been evaluated using breath-hold cine acquisitions. However, there is a great need for free-breathing techniques in patients who have difficulty in holding their breath. Real-time cardiac MRI is a valuable alternative to the traditional breath-hold imaging approach, but the real-time images are often inferior in spatial and temporal resolution. This article presents a general method for reconstruction of high spatial and temporal resolution cine images from a real-time acquisition acquired over multiple cardiac cycles. The method combines parallel imaging and motion correction based on nonrigid registration and can be applied to arbitrary k-space trajectories. The method is demonstrated with real-time Cartesian imaging and Golden Angle radial acquisitions, and the motion-corrected acquisitions are compared with raw real-time images and breath-hold cine…

  9. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model (United States)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie


    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and a partial separability (PS) model. In data acquisition, k-space data are moderately undersampled at the central k-space navigator locations, but highly undersampled in the outer k-space for each temporal frame. In reconstruction, the navigator data are recovered from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partial separability model is used to obtain partial k-t data. Then a parallel imaging method is used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.
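The two-step reconstruction outlined in this abstract (estimate a temporal basis from navigator data, then fit per-voxel coefficients) can be sketched in a simplified, noiseless form. All names here are illustrative: the real method operates on multi-coil k-space data and first recovers the navigators themselves via structured low-rank matrix completion, which this sketch omits.

```python
import numpy as np

def ps_reconstruct(navigators, samples, mask, L):
    """Partial-separability sketch: extract an L-dimensional temporal basis
    from fully sampled navigator rows (via SVD), then recover each spatial
    location's time course by least squares over its sampled time points."""
    _, _, Vh = np.linalg.svd(navigators, full_matrices=False)
    Phi = Vh[:L]                              # (L, Nt) temporal basis
    Nx, Nt = mask.shape
    C_hat = np.zeros((Nx, Nt))
    for x in range(Nx):
        idx = np.flatnonzero(mask[x])         # time points sampled at x
        coef, *_ = np.linalg.lstsq(Phi[:, idx].T, samples[x, idx], rcond=None)
        C_hat[x] = coef @ Phi                 # rank-L time course
    return C_hat
```

With a rank-L spatio-temporal (Casorati) matrix and at least L samples per row, the fit is exact in the noiseless case, which is the core of the PS model's acceleration.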

  10. Parallel R

    CERN Document Server

    McCallum, Ethan


    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  11. Temporal compressive sensing systems

    Energy Technology Data Exchange (ETDEWEB)

    Reed, Bryan W.


    Methods and systems for temporal compressive sensing are disclosed, where within each of one or more sensor array data acquisition periods, one or more sensor array measurement datasets comprising distinct linear combinations of time slice data are acquired, and where mathematical reconstruction allows for calculation of accurate representations of the individual time slice datasets.

  12. Retrospectively gated cardiac cine imaging with temporal and spatial acceleration. (United States)

    Madore, Bruno; Hoge, W Scott; Chao, Tzu-Cheng; Zientara, Gary P; Chu, Renxin


    Parallel imaging methods are routinely used to accelerate the image acquisition process in cardiac cine imaging. The addition of a temporal acceleration method, whereby k-space is sampled differently for different time frames, has been shown in prior work to improve image quality as compared to parallel imaging by itself. However, such temporal acceleration strategies prove difficult to combine with retrospectively gated cine imaging. The only currently published method to feature such combination, by Hansen et al. [Magn Reson Med 55 (2006) 85-91] tends to be associated with prohibitively long reconstruction times. The goal of the present work was to develop a retrospectively gated cardiac cine method that features both parallel imaging and temporal acceleration, capable of achieving significant acceleration factors on commonly available hardware and associated with reconstruction times short enough for practical use in a clinical context. Seven cardiac patients and a healthy volunteer were recruited and imaged, with acceleration factors of 3.5 or 4.5, using an eight-channel product cardiac array on a 1.5-T system. The prescribed FOV value proved slightly too small in three patients, and one of the patients had a bigeminy condition. Despite these additional challenges, good-quality results were obtained for all slices and all patients, with a reconstruction time of 0.98±0.07 s per frame, or about 20 s for a 20-frame slice, using a single processor on a single PC. As compared to using parallel imaging by itself, the addition of a temporal acceleration strategy provided considerable resistance to artifacts. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner


    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  14. Mergers & Acquisitions

    DEFF Research Database (Denmark)

    Fomcenco, Alex

    MERGERS & ACQUISITIONS: Counseling and Choice of Method describes and analyzes the current state of law in Europe in regard to some relevant selected elements related to mergers and acquisitions, and the adviser’s counsel in this regard. The focus is maintained on application...

  15. Mergers and acquisitions

    National Research Council Canada - National Science Library

    Rodgers Mavhiki


      Mergers and acquisitions (M&A) tend to be dominated by M&A specialists such as lawyers, valuations specialists and investment gurus, leaving the funding function for the Chief Financial Officer (CFO...

  16. Parallel-In-Time For Moving Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)


    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.
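XBraid implements multigrid-in-time; the simpler parareal iteration below conveys the same idea of breaking the serial time loop into slice-parallel fine solves plus a cheap serial coarse sweep. This is a toy scalar-ODE sketch for illustration only, not XBraid's algorithm or API.

```python
import math

def parareal(lam=1.0, T=1.0, N=10, n_fine=100, iters=5):
    """Parareal for y' = -lam*y, y(0) = 1, on N time slices."""
    dt = T / N

    def fine(y):                 # accurate propagator over one slice
        h = dt / n_fine
        for _ in range(n_fine):
            y += h * (-lam * y)  # many small forward-Euler steps
        return y

    def coarse(y):               # cheap propagator: one Euler step
        return y + dt * (-lam * y)

    y = [1.0]
    for n in range(N):           # initial serial coarse sweep
        y.append(coarse(y[n]))
    for _ in range(iters):
        # The N fine solves are independent -> run one per processor.
        F = [fine(y[n]) for n in range(N)]
        y_new = [1.0]
        for n in range(N):       # serial correction sweep (cheap)
            y_new.append(coarse(y_new[n]) + F[n] - coarse(y[n]))
        y = y_new
    return y[-1]
```

After a few iterations the parareal solution matches the serial fine solution, so the expensive fine work has effectively been parallelized across the temporal dimension.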

  17. SSC/BCD data acquisition system proposal

    Energy Technology Data Exchange (ETDEWEB)

    Barsotti, E.; Bowden, M.; Swoboda, C. [Fermilab, Batavia, IL (United States)


    The proposed new data acquisition system architecture takes event fragments off a detector over fiber optics to a parallel event-building switch. The parallel event-building switch concept, taken from the telephone communications industry, along with expected technology improvements in fiber-optic data transmission speeds over the next few years, should allow data acquisition system rates to increase dramatically and exceed those rates needed for the SSC. This report briefly describes the switch architecture and fiber optics for an SSC data acquisition system.

  18. Multiband multislice GE‐EPI at 7 tesla, with 16‐fold acceleration using partial parallel imaging with application to high spatial and temporal whole‐brain fMRI

    National Research Council Canada - National Science Library

    Moeller, Steen; Yacoub, Essa; Olman, Cheryl A; Auerbach, Edward; Strupp, John; Harel, Noam; Uğurbil, Kâmil


    ... ‐space coverage in the phase‐encode direction, was applied to human gradient echo functional MRI at 7 T for increased volumetric coverage and concurrent high spatial and temporal resolution...

  19. Mergers + acquisitions. (United States)

    Hoppszallern, Suzanna


    The hospital sector in 2001 led the health care field in mergers and acquisitions. Most deals involved a network augmenting its presence within a specific region or in a market adjacent to its primary service area. Analysts expect M&A activity to increase in 2002.

  20. Mergers & Acquisitions

    DEFF Research Database (Denmark)

    Fomcenco, Alex

    This dissertation is a legal dogmatic thesis, the goal of which is to describe and analyze the current state of law in Europe in regard to some relevant selected elements related to mergers and acquisitions, and the adviser’s counsel in this regard. Having regard to the topic of the dissertation...

  1. Nucleus accumbens core acetylcholine is preferentially activated during acquisition of drug- vs food-reinforced behavior. (United States)

    Crespo, Jose A; Stöckl, Petra; Zorn, Katja; Saria, Alois; Zernig, Gerald


    Acquisition of drug-reinforced behavior is accompanied by a systematic increase of release of the neurotransmitter acetylcholine (ACh) rather than dopamine, the expected prime reward neurotransmitter candidate, in the nucleus accumbens core (AcbC), with activation of both muscarinic and nicotinic ACh receptors in the AcbC by ACh volume transmission being necessary for the drug conditioning. The present findings suggest that the AcbC ACh system is preferentially activated by drug reinforcers, because (1) acquisition of food-reinforced behavior was not paralleled by activation of ACh release in the AcbC whereas acquisition of morphine-reinforced behavior, like that of cocaine or remifentanil (tested previously), was, and because (2) local intra-AcbC administration of muscarinic or nicotinic ACh receptor antagonists (atropine or mecamylamine, respectively) did not block the acquisition of food-reinforced behavior whereas acquisition of drug-reinforced behavior had been blocked. Interestingly, the speed with which a drug of abuse distributed into the AcbC and was eliminated from the AcbC determined the size of the AcbC ACh signal, with the temporally more sharply delineated drug stimulus producing a more pronounced AcbC ACh signal. The present findings suggest that muscarinic and nicotinic ACh receptors in the AcbC are preferentially involved during reward conditioning for drugs of abuse vs sweetened condensed milk as a food reinforcer.

  2. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen


    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  3. Fat-suppressed, three-dimensional T1-weighted imaging using high-acceleration parallel acquisition and a dual-echo Dixon technique for gadoxetic acid-enhanced liver MRI at 3 T. (United States)

    Yoon, Jeong Hee; Lee, Jeong Min; Yu, Mi Hye; Kim, Eun Ju; Han, Joon Koo; Choi, Byung Ihn


    Parallel imaging (PI) techniques are used to overcome limited spatial and temporal resolution in magnetic resonance imaging (MRI). There is a clinical need to mitigate the noise penalty that comes with decreased voxel size and with the signal-to-noise cost of a high acceleration factor (AF). To determine whether the combination of a modified Dixon three-dimensional (3D) T1-weighted (T1W) gradient echo technique (mDixon-3D-GRE) and high-acceleration ([HA], AF = 5) PI can provide breath-hold (BH) T1W imaging with better image quality than conventional fat-suppressed 3D-T1W-GRE (SPAIR-3D-GRE) for Gd-EOB-DTPA-enhanced liver MR. This retrospective study was approved by our institutional review board and informed consent was waived. A total of 138 patients underwent Gd-EOB-DTPA-enhanced liver MR at 3 T using either standard SPAIR-3D-GRE sequences with an AF of 2.6 (n = 68, Standard group) or mDixon-3D-GRE with an AF of 5 (n = 70, HA group). In the HA group, hepatobiliary phase was obtained three times using HA-mDixon-3D-GRE (AF = 5), HA-SPAIR-3D-GRE (AF = 5), and standard-SPAIR-3D-GRE (AF = 2.6). Image noise, quality, and anatomic depiction of dynamic phase were compared between standard and HA groups, and those of hepatobiliary phase were compared among the three image sets in the HA group. As for dynamic imaging, the HA-mDixon-3D-GRE images showed better anatomic details and overall image quality than the standard-SPAIR-3D-GRE sequence (arterial phase: 3.56 ± 0.63 vs. 2.66 ± 0.69, P …). High-acceleration PI provided better-quality BH-T1W imaging compared with conventional SPAIR-3D-GRE for Gd-EOB-DTPA-enhanced liver MRI. © The Foundation Acta Radiologica 2014.

  4. Ultrasound Vector Flow Imaging: Part II: Parallel Systems

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Nikolov, Svetoslav Ivanov; Yu, Alfred C. H.


    The paper gives a review of the current state-of-the-art in ultrasound parallel acquisition systems for flow imaging using spherical and plane-wave emissions. The imaging methods are explained along with the advantages of using these very fast and sensitive velocity estimators. These experimental … ultrasound imaging for studying brain function in animals. The paper explains the underlying acquisition and estimation methods for fast 2-D and 3-D velocity imaging and gives a number of examples. Future challenges and the potentials of parallel acquisition systems for flow imaging are also discussed…

  5. Development and application of efficient strategies for parallel magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Breuer, F.


    Virtually all existing MRI applications require both a high spatial and high temporal resolution for optimum detection and classification of the state of disease. The main strategy to meet the increasing demands of advanced diagnostic imaging applications has been the steady improvement of gradient systems, which provide increased gradient strengths and faster switching times. Rapid imaging techniques and the advances in gradient performance have significantly reduced acquisition times from about an hour to several minutes or seconds. In order to further increase imaging speed, much higher gradient strengths and much faster switching times are required which are technically challenging to provide. In addition to significant hardware costs, peripheral neuro-stimulation and the surpassing of admissible acoustic noise levels may occur. Today's whole-body gradient systems already operate just below the allowed safety levels. For these reasons, alternative strategies are needed to bypass these limitations. The greatest progress in further increasing imaging speed has been the development of multi-coil arrays and the advent of partially parallel acquisition (PPA) techniques in the late 1990s. Within the last years, parallel imaging methods have become commercially available, and are therefore ready for broad clinical use. The basic feature of parallel imaging is a scan time reduction, applicable to nearly any available MRI method, while maintaining the contrast behavior without requiring higher gradient system performance. PPA operates by allowing an array of receiver surface coils, positioned around the object under investigation, to partially replace time-consuming spatial encoding which normally is performed by switching magnetic field gradients. Using this strategy, spatial resolution can be improved given a specific imaging time, or scan times can be reduced at a given spatial resolution. Furthermore, in some cases, PPA can even be used to reduce image
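The core PPA idea described above — coil sensitivities replacing part of the gradient encoding — reduces, for a regular undersampling factor of 2, to a small per-pixel inverse problem (the SENSE formulation). A minimal 1-D sketch with hypothetical names follows, including the Tikhonov-style regularization term commonly added for conditioning with noisy data:

```python
import numpy as np

def sense_unfold(folded, sens, lam=0.0):
    """SENSE unaliasing for undersampling factor R = 2 in 1-D.
    folded: (C, Ny/2) aliased coil images; sens: (C, Ny) coil sensitivities.
    Each folded pixel mixes two true pixels; solve a 2x2 (Tikhonov-
    regularized) least-squares system per pixel pair."""
    C, half = folded.shape
    out = np.zeros(2 * half)
    for i in range(half):
        S = sens[:, [i, i + half]]          # (C, 2) encoding matrix
        rhs = S.T @ folded[:, i]
        x = np.linalg.solve(S.T @ S + lam * np.eye(2), rhs)
        out[i], out[i + half] = x
    return out
```

With noiseless data and `lam = 0` the unfolding is exact whenever the two sensitivity columns are linearly independent; in practice a small positive `lam` trades a little bias for noise robustness.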

  6. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G


    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as those for linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream multiple data stream) computer, in which the whole sequence to be sorted can fit in the
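The flavor of sorting on a linear processor array can be shown with odd-even transposition sort: within each phase, every compare-exchange touches a disjoint pair of elements, so a phase runs in O(1) parallel time with one processor per pair, and n phases suffice. The sequential simulation below is a sketch of the algorithm, not code from the book:

```python
def odd_even_transposition_sort(a):
    """Odd-even transposition sort. In phase p, pairs starting at index
    p % 2 are compared and swapped; all pairs within a phase are
    independent and could execute simultaneously on a linear array."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2
        for i in range(start, n - 1, 2):   # each pair -> one processor
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

The n-phase bound matches the lower bound for a linear array, since an element may need to travel the full length of the array.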

  7. Practical parallel computing

    CERN Document Server

    Morse, H Stephen


    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  8. High Spatiotemporal Resolution Dynamic Contrast-Enhanced MR Enterography in Crohn Disease Terminal Ileitis Using Continuous Golden-Angle Radial Sampling, Compressed Sensing, and Parallel Imaging. (United States)

    Ream, Justin M; Doshi, Ankur; Lala, Shailee V; Kim, Sooah; Rusinek, Henry; Chandarana, Hersh


    The purpose of this article was to assess the feasibility of golden-angle radial acquisition with compressed sensing reconstruction (Golden-angle RAdial Sparse Parallel [GRASP]) for acquiring high temporal resolution data for pharmacokinetic modeling while maintaining high image quality in patients with Crohn disease terminal ileitis. Fourteen patients with biopsy-proven Crohn terminal ileitis were scanned using both contrast-enhanced GRASP and Cartesian breath-hold (volume-interpolated breath-hold examination [VIBE]) acquisitions. GRASP data were reconstructed with 2.4-second temporal resolution and fitted to the generalized kinetic model using an individualized arterial input function to derive the volume transfer coefficient (K(trans)) and interstitial volume (v(e)). Reconstructions, including data from the entire GRASP acquisition and Cartesian VIBE acquisitions, were rated for image quality, artifact, and detection of typical Crohn ileitis features. Inflamed loops of ileum had significantly higher K(trans) (3.36 ± 2.49 vs 0.86 ± 0.49 min(-1), p < 0.005) and v(e) (0.53 ± 0.15 vs 0.20 ± 0.11, p < 0.005) compared with normal bowel loops. There were no significant differences between GRASP and Cartesian VIBE for overall image quality (p = 0.180) or detection of Crohn ileitis features, although streak artifact was worse with the GRASP acquisition (p = 0.001). High temporal resolution data for pharmacokinetic modeling and high spatial resolution data for morphologic image analysis can be achieved in the same acquisition using GRASP.

  9. Ordered k-space acquisition in contrast enhanced magnetic resonance angiography (CE-MRA) (United States)

    Wu, B.; Maclaren, J. R.; Millane, R. P.; Watts, R.; Bones, P. J.


    A new way of performing contrast enhanced magnetic resonance angiography (CE-MRA) is presented, in which the entire k-space is decomposed into interlaced subsets that are acquired sequentially. Based on a new parallel imaging technique, Generalized Unaliasing Incorporating object Support constraint and sensitivity Encoding (GUISE), reconstructions can be made using different subsets of k-space to reveal the level of contrast agent in the corresponding data acquisition time period. A proof-of-concept study using a custom made phantom was carried out to examine the utility of the new method. A quantity of contrast agent (copper sulfate solution) was injected into water flowing within a tube while data was acquired using an 8-coil receiver and the modified MRI sequence. A sequence of images was successfully reconstructed at high temporal resolution. This eliminated the need to precisely synchronize data acquisition with contrast arrival. Furthermore, subtraction of a pre-contrast data set prior to reconstruction, which eliminates the need for recovering the static background signal, has proven to be an effective way to improve the SNR and allow a higher temporal resolution to be achieved in recovering the dynamic signal containing contrast level change. Acceptably good reconstruction results were obtained at a temporal resolution equivalent to a 16-fold speed up compared to the time taken to fully sample k-space.

  10. Learning in Parallel Universes


    Berthold, Michael R.; Wiswedel, Bernd


    This abstract summarizes a brief, preliminary formalization of learning in parallel universes. It also attempts to highlight a few neighboring learning paradigms to illustrate how parallel learning fits into the greater picture.

  11. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C


    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  12. Parallel processing ITS

    Energy Technology Data Exchange (ETDEWEB)

    Fan, W.C.; Halbleib, J.A. Sr.


    This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.
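The master/slave structure with independent per-worker random streams can be sketched as follows. This is a generic illustration, not ITS code: threads stand in for ITS's message-passing slave processes, the batch of "histories" is a stand-in workload, and all names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def run_batch(seed, n_histories):
    """One slave's batch. A distinct RNG seed per worker is the simplest
    answer to the parallel random-number-generation issue: streams must
    not overlap across workers."""
    rng = random.Random(seed)
    tally = sum(rng.random() < 0.5 for _ in range(n_histories))
    return n_histories, tally

def master(total=20000, workers=4):
    """Master: split the histories into batches, farm them out, and
    combine the returned tallies (futures replace message passing here)."""
    per = total // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_batch, range(workers), [per] * workers))
    done = sum(n for n, _ in results)
    hits = sum(t for _, t in results)
    return done, hits / done
```

Static equal-sized batches keep the example short; a production master would hand out work dynamically for load balancing, as the report discusses.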

  13. Verbal and Visual Parallelism (United States)

    Fahnestock, Jeanne


    This study investigates the practice of presenting multiple supporting examples in parallel form. The elements of parallelism and its use in argument were first illustrated by Aristotle. Although real texts may depart from the ideal form for presenting multiple examples, rhetorical theory offers a rationale for minimal, parallel presentation. The…

  14. Parallel simulation today (United States)

    Nicol, David; Fujimoto, Richard


    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  15. Melting and dehydration within mantle plumes and the formation of sub-parallel volcanic trends at intra-plate hotspots: Analysis of physical properties on spatial and temporal evolution of viscous plug formation (United States)

    Kundargi, R.; Hall, P. S.


    Recent volcanism associated with the Hawaiian hotspot has long been recognized as occurring along two physically distinct, sub-parallel volcanic chains, known as the Loa and Kea trends [e.g., Jackson, 1972]. Recently, several additional intra-plate hotspots, including Samoa [Workman et al., 2004], Marquesas [Chauvel et al., 2009; Huang et al., 2011], and Societies [Payne et al., in press], have been shown to exhibit dual-chain volcanism similar to that at Hawaii. Despite the prevalence of this pattern of volcanism at hotspots, its cause is not well understood. Previous explanations for the presence of dual-chain volcanism at Hawaii focused on magma migration to explain the spatial distribution of volcanism. In particular, Hieronymus and Bercovici [1999] developed a model in which lithospheric flexure induced by loading from the growth of volcanic edifices alters magma migration pathways through the lithosphere over time. In this model, a perturbation to the magma supply, such as might be expected as the result of a change in plate motion, can result in the surface expression of magmatism being focused into two sub-parallel chains. Here, we investigate an alternative hypothesis for the formation of dual-chain volcanism, in which melting and dehydration of upwelling peridotite within a plume conduit leads to the creation of a plug of viscous, buoyant residuum that inhibits upward flow at the center of the plume conduit near the base of the lithosphere. This suppresses the rate of melt generation above the center of the conduit and results in a bifurcated distribution of melt production. We report on a series of 3-D numerical experiments in which mantle upwelling within a plume conduit impinges on the base of an overriding oceanic plate far from any plate boundaries. The experiments were conducted using CitcomCU. Melting and dehydration were modeled using a Lagrangian particle method, and a diffusion creep rheology that explicitly includes the effects of water on

  16. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert


    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of available processor cores compared to its sequential counterpart, thereby taking full advantage of multicore parallelism. The parallel buffer tree is a search tree data structure that supports the batched parallel processing of a sequence of N insertions, deletions, membership queries, and range queries…

  17. Ultrafast analysis of individual grain behavior during grain growth by parallel computing (United States)

    Kühbach, M.; Barrales-Mora, L. A.; Mießen, C.; Gottstein, G.


    The ability to characterize, in an automated way, the spatio-temporal evolution of individual grains and their properties is essential to the understanding of annealing phenomena. The development of advanced experimental techniques, computational models and tools facilitates the acquisition of real-time and real-space-resolved datasets. Whereas the reconstruction of 3D grain representatives from serial-sectioning or tomography datasets becomes more common and microstructure simulations on parallel computers become ever larger and longer lasting, few efforts have materialized in the development of tools that allow the continuous tracking of properties at the grain scale. In fact, such analyses are often left neglected in practice because the datasets exceed the available physical memory of a computer or the shared-memory cluster. We identified the key tasks that have to be solved in order to define suitable and lean data structures and computational methods to evaluate spatio-temporal grain property datasets on parallel computer architectures. This is exemplified with data from grain growth simulations.

  18. 3D Hyperpolarized C-13 EPI with Calibrationless Parallel Imaging

    DEFF Research Database (Denmark)

    Gordon, Jeremy W.; Hansen, Rie Beck; Shin, Peter J.


    … and temporal resolution. Calibrationless parallel imaging approaches are well-suited for this application because they eliminate the need to acquire coil profile maps or auto-calibration data. In this work, we explored the utility of a calibrationless parallel imaging method (SAKE) and corresponding sampling…

  19. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)



    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.
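The spatial decomposition reviewed above rests on cell lists: with cells at least one cutoff radius wide, each processor's cell only needs particle data from adjacent cells. The 2-D toy version below (hypothetical function name, periodic boundaries via the minimum-image convention) counts interacting pairs the way a per-cell processor would, and can be checked against a brute-force O(N²) count:

```python
import numpy as np
from collections import defaultdict

def pairs_within_cutoff(pos, box, rc):
    """Cell-list pair count in a periodic 2-D box. Cells have side >= rc,
    so any pair within the cutoff lies in the same or an adjacent cell;
    in a spatial decomposition each cell maps onto one processor's domain."""
    ncell = int(box // rc)
    cells = defaultdict(list)
    for idx, p in enumerate(pos):
        key = (int(p[0] / box * ncell) % ncell,
               int(p[1] / box * ncell) % ncell)
        cells[key].append(idx)

    def close(i, j):
        d = pos[i] - pos[j]
        d -= box * np.round(d / box)        # minimum-image convention
        return d @ d < rc * rc

    count = 0
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                neigh = cells.get(((cx + dx) % ncell, (cy + dy) % ncell), [])
                # i < j ensures each unordered pair is counted exactly once.
                count += sum(1 for i in members for j in neigh
                             if i < j and close(i, j))
    return count
```

The same binning underlies the parallel force computation: each processor evaluates forces for particles in its own cells after exchanging boundary ("ghost") particles with neighboring domains.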

  20. 2017 NAIP Acquisition Map (United States)

    Farm Service Agency, Department of Agriculture — Planned States for 2017 NAIP acquisition and acquisition status layer (updated daily). Updates to the acquisition seasons may be made during the season to...

  1. Project Temporalities

    DEFF Research Database (Denmark)

    Tryggestad, Kjell; Justesen, Lise; Mouritsen, Jan


    Purpose – The purpose of this paper is to explore how animals can become stakeholders in interaction with project management technologies and what happens with project temporalities when new and surprising stakeholders become part of a project and a recognized matter of concern to be taken...... into account. Design/methodology/approach – The paper is based on a qualitative case study of a project in the building industry. The authors use actor-network theory (ANT) to analyze the emergence of animal stakeholders, stakes and temporalities. Findings – The study shows how project temporalities can...... multiply in interaction with project management technologies and how conventional linear conceptions of project time may be contested with the emergence of new non-human stakeholders and temporalities. Research limitations/implications – The study draws on ANT to show how animals can become stakeholders...

  2. Parallel Acquisition of Awareness and Differential Delay Eyeblink Conditioning (United States)

    Weidemann, Gabrielle; Antees, Cassandra


    There is considerable debate about whether differential delay eyeblink conditioning can be acquired without awareness of the stimulus contingencies. Previous investigations of the relationship between differential-delay eyeblink conditioning and awareness of the stimulus contingencies have assessed awareness after the conditioning session was…

  3. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva


    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  4. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H


    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  5. Parallel digital forensics infrastructure.

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, Lorie M. (New Mexico Tech, Socorro, NM); Duggan, David Patrick


    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital Forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the implementation of the parallel digital forensics (PDF) infrastructure architecture and implementation.

  6. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
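Two of the patterns named above can be sketched in a few lines. The following illustrative Python (not from the presentation; all names are invented) shows a chunked parallel reduction and, for contrast, a serial inclusive prefix scan; a production-grade scan would use a tree-structured parallel algorithm.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
from operator import add

def parallel_reduce(data, op=add, workers=4):
    """Reduction pattern: each worker reduces one slice of the data,
    then the per-worker partial results are combined."""
    chunk = max(1, len(data) // workers)
    slices = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(lambda s: reduce(op, s), slices))
    return reduce(op, partials)

def prefix_scan(data, op=add):
    """Inclusive prefix scan, written serially for clarity; parallel
    implementations use an up-sweep/down-sweep tree instead."""
    out, acc = [], None
    for x in data:
        acc = x if acc is None else op(acc, x)
        out.append(acc)
    return out
```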

  7. The second language acquisition of French tense, aspect, mood and modality

    CERN Document Server

    Ayoun, Dalila


    Temporal-aspectual systems have great potential to inform our understanding of the developing competence of second language learners. So far, the vast majority of empirical studies investigating L2 acquisition have focused largely on past temporality, neglecting the acquisition of the expression of present and future temporalities with rare exceptions (aside from ESL learners), leaving unanswered the question of how the investigation of different types of temporality may inform our understanding of the acquisition of temporal, aspectual and mood systems as a whole. This monograph addr

  8. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.


    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  9. Patterns For Parallel Programming

    CERN Document Server

    Mattson, Timothy G; Massingill, Berna L


    From grids and clusters to next-generation game consoles, parallel computing is going mainstream. Innovations such as Hyper-Threading Technology, HyperTransport Technology, and multicore microprocessors from IBM, Intel, and Sun are accelerating the movement's growth. Only one thing is missing: programmers with the skills to meet the soaring demand for parallel software.

  10. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)


    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  11. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.


    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
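The paper's hypercube implementation is not reproduced here, but the core idea of a decomposed sieve can be sketched: the base primes up to the square root of n are found serially, and each worker then sieves one block of the remaining range. This is a blocked rather than scattered decomposition, all names are invented, and the thread pool merely stands in for an ensemble machine.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def sieve(limit):
    """Serial Sieve of Eratosthenes, used here only for the base primes."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(limit) + 1):
        if flags[p]:
            flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
    return [i for i, f in enumerate(flags) if f]

def sieve_segment(lo, hi, base_primes):
    """Mark composites in [lo, hi) using the shared base primes."""
    flags = bytearray([1]) * (hi - lo)
    for p in base_primes:
        start = max(p * p, (lo + p - 1) // p * p)
        for m in range(start, hi, p):
            flags[m - lo] = 0
    return [lo + i for i, f in enumerate(flags) if f]

def parallel_sieve(n, workers=4):
    """Blocked decomposition: one segment of [2, n] per worker."""
    base = sieve(math.isqrt(n))
    step = max(1, (n + workers) // workers)
    spans = [(lo, min(lo + step, n + 1)) for lo in range(2, n + 1, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda s: sieve_segment(s[0], s[1], base), spans)
    return [p for part in parts for p in part]
```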

  12. What is parallelism? (United States)

    Scotland, Robert W


    Although parallel and convergent evolution are discussed extensively in technical articles and textbooks, their meaning can be overlapping, imprecise, and contradictory. The meaning of parallel evolution in much of the evolutionary literature grapples with two separate hypotheses in relation to phenotype and genotype, but often these two hypotheses have been inferred from only one hypothesis, and a number of subsidiary but problematic criteria, in relation to the phenotype. However, examples of parallel evolution of genetic traits that underpin or are at least associated with convergent phenotypes are now emerging. Four criteria for distinguishing parallelism from convergence are reviewed. All are found to be incompatible with any single proposition of homoplasy. Therefore, all homoplasy is equivalent to a broad view of convergence. Based on this concept, all phenotypic homoplasy can be described as convergence and all genotypic homoplasy as parallelism, which can be viewed as the equivalent concept of convergence for molecular data. Parallel changes of molecular traits may or may not be associated with convergent phenotypes but if so describe homoplasy at two biological levels-genotype and phenotype. Parallelism is not an alternative to convergence, but rather it entails homoplastic genetics that can be associated with and potentially explain, at the molecular level, how convergent phenotypes evolve. © 2011 Wiley Periodicals, Inc.

  13. Compositional C++: Compositional Parallel Programming


    Chandy, K. Mani; Kesselman, Carl


    A compositional parallel program is a program constructed by composing component programs in parallel, where the composed program inherits properties of its components. In this paper, we describe a small extension of C++ called Compositional C++ or CC++ which is an object-oriented notation that supports compositional parallel programming. CC++ integrates different paradigms of parallel programming: data-parallel, task-parallel and object-parallel paradigms; imperative and declarative programm...

  14. Speed in Acquisitions

    DEFF Research Database (Denmark)

    Meglio, Olimpia; King, David R.; Risberg, Annette


    The advantage of speed is often invoked by academics and practitioners as an essential condition during post-acquisition integration, frequently without consideration of the impact earlier decisions have on acquisition speed. In this article, we examine the role speed plays in acquisitions across...... the acquisition process using research organized around characteristics that display complexity with respect to acquisition speed. We incorporate existing research with a process perspective of acquisitions in order to present trade-offs, and consider the influence of both stakeholders and the pre......-deal-completion context on acquisition speed, as well as the organization’s capabilities to facilitate that speed. Observed trade-offs suggest both that acquisition speed often requires longer planning time before an acquisition and that associated decisions require managerial judgement. A framework for improving...

  15. Acquisition of Oocyte Polarity. (United States)

    Clapp, Mara; Marlow, Florence L


    Acquisition of oocyte polarity involves complex translocation and aggregation of intracellular organelles, RNAs, and proteins, along with strict posttranscriptional regulation. While much is still unknown regarding the formation of the animal-vegetal axis, an early marker of polarity, animal models have contributed to our understanding of these early processes controlling normal oogenesis and embryo development. In recent years, it has become clear that proteins with self-assembling properties are involved in assembling discrete subcellular compartments or domains underlying subcellular asymmetries in the early mitotic and meiotic cells of the female germline. These include asymmetries in duplication of the centrioles and formation of centrosomes and assembly of the organelle and RNA-rich Balbiani body, which plays a critical role in oocyte polarity. Notably, at specific stages of germline development, these transient structures in oocytes are temporally coincident and align with asymmetries in the position and arrangement of nuclear components, such as the nuclear pore and the chromosomal bouquet and the centrioles and cytoskeleton in the cytoplasm. Formation of these critical, transient structures and arrangements involves microtubule pathways, intrinsically disordered proteins (proteins with domains that tend to be fluid or lack a rigid ordered three-dimensional structure ranging from random coils, globular domains, to completely unstructured proteins), and translational repressors and activators. This review aims to examine recent literature and key players in oocyte polarity.

  16. Parallel programming with PCN

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.


    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, a set of tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory at

  17. Parallel programming with PCN

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.


    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at (c.f. Appendix A).

  18. Parallelism viewpoint: An architecture viewpoint to model parallelism behaviour of parallelism-intensive software systems


    Muhammad, Naeem; Boucké, Nelis; Berbers, Yolande


    The use of parallelism enhances the performance of a software system. However, its excessive use can degrade the system performance. In this report we propose a parallelism viewpoint to optimize the use of parallelism by eliminating unnecessarily used parallelism in legacy systems. The parallelism viewpoint describes parallelism of the system in order to analyze multiple overheads associated with its threads. We use the proposed viewpoint to find parallelism specific performance overheads of ...

  19. Scalable parallel communications (United States)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.


    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCPs running in parallel provide high bandwidth
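The space-division-multiplexing idea can be mimicked in miniature: a message is striped round-robin across several queues (standing in for physical channels served by protocol processors) and reassembled by sequence number. This toy sketch, with invented names, ignores timeouts, flow control, and everything else a real TCP/IP or FDDI stack handles.

```python
import queue
import threading

def stripe_send(message, n_channels=4, chunk=8):
    """Split a message into numbered chunks and push each onto one of
    several 'channels' (queues standing in for physical links), round-robin."""
    channels = [queue.Queue() for _ in range(n_channels)]
    chunks = [message[i:i + chunk] for i in range(0, len(message), chunk)]
    for seq, part in enumerate(chunks):
        channels[seq % n_channels].put((seq, part))
    for ch in channels:
        ch.put(None)  # per-channel end-of-stream marker
    return channels

def stripe_receive(channels):
    """One receiver thread per channel; reassemble by sequence number."""
    received = {}
    lock = threading.Lock()

    def drain(ch):
        while True:
            item = ch.get()
            if item is None:
                return
            seq, part = item
            with lock:
                received[seq] = part

    threads = [threading.Thread(target=drain, args=(ch,)) for ch in channels]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return b"".join(received[i] for i in range(len(received)))
```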

  20. CLUVI Parallel Corpus


    Universidade de Vigo. Grupo de investigación TALG


    The CLUVI Corpus of the University of Vigo is an open collection of parallel text corpora developed under the direction of Xavier Gómez Guinovart (2003-2012) that covers specific areas of the contemporary Galician language. With 23 million words, the CLUVI Corpus comprises six main parallel corpora belonging to five specialised registers or domains (fiction, computing, popular science, law and administration) and involving five different language combinations (Galician-Spanish bilingual trans...

  1. DPS - Dynamic Parallel Schedules


    IEEE Press; Gerlach, S.; Hersch, R. D.


    Dynamic Parallel Schedules (DPS) is a high-level framework for developing parallel applications on distributed memory computers (e.g. clusters of PC). Its model relies on compositional customizable split-compute-merge graphs of operations (directed acyclic flow graphs). The graphs and the mapping of operations to processing nodes are specified dynamically at runtime. DPS applications are pipelined and multithreaded by construction, ensuring a maximal overlap of computations and communications...

  2. Parallel Genetic Algorithm System


    Nagaraju Sangepu; Vikram, K.


    Genetic Algorithm (GA) is a popular technique to find the optimum of transformation, because of its simple implementation procedure. In image processing GAs are used as a parameter-search-for procedure, this processing requires very high performance of the computer. Recently, parallel processing used to reduce the time by distributing the appropriate amount of work to each computer in the clustering system. The processing time reduces with the number of dedicated computers. Parallel implement...
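The clustered image-processing setup described above is not reproduced here, but the basic shape of a parallel GA, distributing the fitness evaluations while keeping selection and mutation on the master, can be sketched as follows (all names invented; a thread pool stands in for the cluster of dedicated computers):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def evolve(fitness, pop, generations=20, workers=4, seed=1):
    """Toy master-worker GA over bit-string individuals: each generation,
    fitness evaluations are farmed out in parallel; the master ranks the
    population, keeps the elite half, and mutates copies of it."""
    rng = random.Random(seed)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            scores = list(pool.map(fitness, pop))  # parallel evaluation
            ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
            elite = ranked[: len(pop) // 2]
            # Mutate: flip each gene with probability 0.1 (XOR with a bool).
            pop = elite + [[g ^ (rng.random() < 0.1) for g in p] for p in elite]
    return max(pop, key=fitness)
```

With `fitness=sum`, the GA simply learns to set bits; real image-processing fitness functions are what make the parallel evaluation worthwhile.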

  3. Language Acquisition without an Acquisition Device (United States)

    O'Grady, William


    Most explanatory work on first and second language learning assumes the primacy of the acquisition phenomenon itself, and a good deal of work has been devoted to the search for an "acquisition device" that is specific to humans, and perhaps even to language. I will consider the possibility that this strategy is misguided and that language…

  4. Temporal naturalism (United States)

    Smolin, Lee


    Two people may claim both to be naturalists, but have divergent conceptions of basic elements of the natural world which lead them to mean different things when they talk about laws of nature, or states, or the role of mathematics in physics. These disagreements do not much affect the ordinary practice of science which is about small subsystems of the universe, described or explained against a background, idealized to be fixed. But these issues become crucial when we consider including the whole universe within our system, for then there is no fixed background to reference observables to. I argue here that the key issue responsible for divergent versions of naturalism and divergent approaches to cosmology is the conception of time. One version, which I call temporal naturalism, holds that time, in the sense of the succession of present moments, is real, and that laws of nature evolve in that time. This is contrasted with timeless naturalism, which holds that laws are immutable and the present moment and its passage are illusions. I argue that temporal naturalism is empirically more adequate than the alternatives, because it offers testable explanations for puzzles its rivals cannot address, and is likely a better basis for solving major puzzles that presently face cosmology and physics. This essay also addresses the problem of qualia and experience within naturalism and argues that only temporal naturalism can make a place for qualia as intrinsic qualities of matter.

  5. Modelling live forensic acquisition

    CSIR Research Space (South Africa)

    Grobler, MM


    Full Text Available This paper discusses the development of a South African model for Live Forensic Acquisition - Liforac. The Liforac model is a comprehensive model that presents a range of aspects related to Live Forensic Acquisition. The model provides forensic...

  6. Managing acquisitions and mergers. (United States)

    Shorr, A S


    Acquisitions and mergers must be well managed if they are to capture market share and achieve profitability. Acquisition strategies must reflect sensitivity to legal, financial and community factors and to the internal strengths, needs and weaknesses of both organizations.

  7. Playing at Serial Acquisitions

    NARCIS (Netherlands)

    J.T.J. Smit (Han); T. Moraitis (Thras)


    Behavioral biases can result in suboptimal acquisition decisions, with the potential for errors exacerbated in consolidating industries, where consolidators design serial acquisition strategies and fight escalating takeover battles for platform companies that may determine their future

  8. Calo trigger acquisition system

    CERN Multimedia

    Franchini, Matteo


    Calo trigger acquisition system: evolution of the acquisition system from a multiple-board system (upper, orange cables) to a single-board one (below, light blue cables) in which all the channels are collected on a single board.

  9. Improving quality of arterial spin labeling MR imaging at 3 Tesla with a 32-channel coil and parallel imaging. (United States)

    Ferré, Jean-Christophe; Petr, Jan; Bannier, Elise; Barillot, Christian; Gauvrit, Jean-Yves


    To compare 12-channel and 32-channel phased-array coils and to determine the optimal parallel imaging (PI) technique and factor for brain perfusion imaging using Pulsed Arterial Spin Labeling (PASL) at 3 Tesla (T). Twenty-seven healthy volunteers underwent 10 different PASL perfusion PICORE Q2TIPS scans at 3T using 12-channel and 32-channel coils without PI and with GRAPPA or mSENSE using factor 2. PI with factor 3 and 4 were used only with the 32-channel coil. Visual quality was assessed using four parameters. Quantitative analyses were performed using temporal noise, contrast-to-noise and signal-to-noise ratios (CNR, SNR). Compared with 12-channel acquisition, the scores for 32-channel acquisition were significantly higher for overall visual quality, lower for noise and higher for SNR and CNR. With the 32-channel coil, the best artifact compromise was achieved with PI factor 2. Noise increased, and SNR and CNR decreased, with increasing PI factor. However, mSENSE 2 scores were not always significantly different from acquisition without PI. For PASL at 3T, the 32-channel coil provided better quality than the 12-channel coil. With the 32-channel coil, mSENSE 2 seemed to offer the best compromise for decreasing artifacts without significantly reducing SNR and CNR. Copyright © 2012 Wiley Periodicals, Inc.
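The quantitative metrics behind such a comparison can be stated compactly. The sketch below (not the authors' code; names invented, and exact definitions vary between studies) computes a voxel's temporal SNR as its mean over its standard deviation across time, and a simple CNR as the difference of condition means over baseline temporal noise.

```python
import statistics

def temporal_snr(timeseries):
    """Temporal SNR of one voxel: mean signal divided by its standard
    deviation across time, a common stability metric for comparing
    coil and parallel-imaging configurations."""
    mu = statistics.fmean(timeseries)
    sd = statistics.stdev(timeseries)
    return mu / sd if sd else float("inf")

def contrast_to_noise(active, baseline):
    """Simple CNR: difference of condition means over the temporal
    noise of the baseline condition."""
    return (statistics.fmean(active) - statistics.fmean(baseline)) / statistics.stdev(baseline)
```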

  10. Pattern recognition with parallel associative memory (United States)

    Toth, Charles K.; Schenk, Toni


    An examination is conducted of the feasibility of searching targets in aerial photographs by means of a parallel associative memory (PAM) that is based on the nearest-neighbor algorithm; the Hamming distance is used as a measure of closeness, in order to discriminate patterns. Attention has been given to targets typically used for ground-control points. The method developed sorts out approximate target positions where precise localizations are needed, in the course of the data-acquisition process. The majority of control points in different images were correctly identified.
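The nearest-neighbour recall rule with Hamming distance is easy to state in code. In the PAM every stored pattern is compared to the probe simultaneously in hardware; the sketch below (function names invented) does the comparisons serially but implements the same decision rule.

```python
def hamming(a, b):
    """Hamming distance between two equal-length patterns."""
    return sum(x != y for x, y in zip(a, b))

def pam_recall(memory, probe):
    """Nearest-neighbour recall as in a parallel associative memory:
    the stored pattern closest to the probe (by Hamming distance) wins."""
    return min(memory, key=lambda stored: hamming(stored, probe))
```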

  11. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.


    The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti-parallel...... chain, especially at the end of the TFO strand. On the other hand, the thermal stability of the anti-parallel triplex was dramatically decreased when the TFO strand was modified with the LNA monomer analog Z in the middle of the TFO strand (ΔTm = -9.1 °C). Also the thermal stability decreased...

  12. Massively parallel multicanonical simulations (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard


    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 104 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
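The parallelization idea, independent walkers whose statistics are merged at synchronization points, can be reduced to a toy sketch (all names invented). The walkers below explore a flat landscape with every move accepted, so there is no actual multicanonical weighting; the point is only the merge step that parallel multicanonical schemes perform when they synchronize weight updates.

```python
import random

def walker_histogram(steps, size, seed):
    """One independent walker on states 0..size-1; returns its visit
    histogram. (A real multicanonical run would accept or reject moves
    according to the current weight estimate, not unconditionally.)"""
    rng = random.Random(seed)
    hist = [0] * size
    state = rng.randrange(size)
    for _ in range(steps):
        state = (state + rng.choice((-1, 1))) % size  # flat toy landscape
        hist[state] += 1
    return hist

def combined_histogram(n_walkers, steps, size):
    """Merge the histograms of independent walkers, as parallel
    multicanonical schemes do when synchronizing weight updates; the
    walkers are independent, so they could run on separate GPUs/threads."""
    hists = [walker_histogram(steps, size, seed=s) for s in range(n_walkers)]
    return [sum(col) for col in zip(*hists)]
```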

  13. Acquisition: Acquisition of the Evolved SEASPARROW Missile

    National Research Council Canada - National Science Library


    .... The Evolved SEASPARROW Missile, a Navy Acquisition Category II program, is an improved version of the RIM-7P SEASPARROW missile that will intercept high-speed maneuvering, anti-ship cruise missiles...

  14. Adaptive parallel logic networks (United States)

    Martinez, Tony R.; Vidal, Jacques J.


    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  15. Parallel programming with Python

    CERN Document Server

    Palach, Jan


    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.
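The standard library's executor API, one of the tools such tutorials typically cover, looks like this in miniature. `ThreadPoolExecutor` is used so the example is self-contained; because of CPython's GIL, CPU-bound work like this only gains real speedup from `ProcessPoolExecutor`, whose interface is identical.

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    """A stand-in CPU-bound task: sum of squares below n."""
    return sum(i * i for i in range(n))

def run_parallel(jobs, workers=4):
    """Map tasks over a pool of workers; results come back in order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(work, jobs))
```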

  16. Parallel Virtual Machine

    Directory of Open Access Journals (Sweden)

    Zafer DEMİR


    Full Text Available In this study, Parallel Virtual Machine (PVM) is first reviewed. Since it is based on parallel processing, it resembles parallel systems in its architectural principles. PVM is neither an operating system nor a programming language; it is a software tool that supports heterogeneous parallel systems while giving users an interface close to that of a parallel machine. Because PVM lets tasks execute in parallel, there is an important similarity between PVM and both distributed systems and multiprocessors. In this study, these relations are examined using the master-slave programming technique. In conclusion, PVM is tested with a simple factorial computation on a distributed system to observe its adaptation to parallel architectures.
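The paper's PVM code is not reproduced here, but the master-slave factorial experiment it describes has this general shape, sketched with a Python thread pool standing in for PVM tasks on remote hosts (all names invented):

```python
from concurrent.futures import ThreadPoolExecutor
from math import prod

def slave(lo, hi):
    """Slave task: partial product over the integer range [lo, hi)."""
    return prod(range(lo, hi))

def master_factorial(n, slaves=4):
    """Master splits 1..n into ranges, farms them out to slaves, and
    multiplies the partial products, the same master-slave shape a PVM
    program uses with tasks spawned on remote hosts."""
    step = max(1, n // slaves)
    spans = [(lo, min(lo + step, n + 1)) for lo in range(1, n + 1, step)]
    with ThreadPoolExecutor(max_workers=slaves) as pool:
        partials = pool.map(lambda s: slave(*s), spans)
        return prod(partials)
```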

  17. Parallel Robots with Configurable Platforms

    NARCIS (Netherlands)

    Lambert, P.


    This thesis explores the fundamentals of a new class of parallel mechanisms called parallel mechanisms with configurable platforms as well as the design and analysis of parallel robots that are based on those mechanisms. Pure parallel robots are formed by two rigid links, the base and the

  18. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)


A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms cluster multidimensional data by attempting to minimize the mean distance between data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than was possible before.
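The D²-weighted seed selection described above can be sketched serially in Python; the distance computation in the inner loop is the part the GPU/OpenMP/XMT versions parallelize (a generic sketch, not the authors' C++ code):

```python
import random

def kmeans_pp_seeds(points, k, rng=None):
    """Choose k initial centers: the first uniformly at random, then each
    subsequent center with probability proportional to its squared
    distance (D^2) from the nearest center chosen so far."""
    rng = rng or random.Random(0)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # squared distance from each point to its nearest current center --
        # this is the data-parallel step a GPU or multicore version speeds up
        d2 = [min(sum((p - c) ** 2 for p, c in zip(x, ctr)) for ctr in centers)
              for x in points]
        total = sum(d2)
        if total == 0:                    # every point already is a center
            centers.append(rng.choice(points))
            continue
        r, acc = rng.random() * total, 0.0
        for x, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(x)
                break
    return centers
```

Because each point's distance to the current centers is independent of the others, the `d2` computation maps naturally onto all three target architectures.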

  19. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab


    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  20. Note on parallel universes


    Adams, Niall; Hand, David J.


    The parallel universes idea is an attempt to integrate several aspects of learning which share some common aspects. This is an interesting idea: if successful, insights could cross-fertilise, leading to advances in each area. The ‘multi-view’ perspective seems to us to have particular potential.

  1. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)


    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  2. Parallel universes beguile science

    CERN Multimedia


    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  3. Parallel Adams methods

    NARCIS (Netherlands)

    P.J. van der Houwen; E. Messina


    In the literature, various types of parallel methods for integrating nonstiff initial-value problems for first-order ordinary differential equations have been proposed. The greater part of them are based on an implicit multistage method in which the implicit relations are solved by the

  5. Practical parallel programming

    CERN Document Server

    Bauer, Barr E


    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  6. Parallel Splash Belief Propagation (United States)



  7. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)


    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  8. Parallel programming with PCN

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.


    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs. (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  9. Parallel computers and parallel algorithms for CFD: An introduction (United States)

    Roose, Dirk; Vandriessche, Rafael


    This text presents a tutorial on those aspects of parallel computing that are important for the development of efficient parallel algorithms and software for computational fluid dynamics. We first review the main architectural features of parallel computers and we briefly describe some parallel systems on the market today. We introduce some important concepts concerning the development and the performance evaluation of parallel algorithms. We discuss how work load imbalance and communication costs on distributed memory parallel computers can be minimized. We present performance results for some CFD test cases. We focus on applications using structured and block structured grids, but the concepts and techniques are also valid for unstructured grids.

  10. Mergers by Partial Acquisition


    Lindqvist, Tobias


    This paper evaluates partial acquisition strategies. The model allows for buying a share of a firm before the actual acquisition takes place. Holding a share in a competing firm before the acquisition of another firm (an outsider toehold) eliminates the insiders' dilemma, i.e., the problem that profitable mergers do not occur. This strategy may thus be more profitable for a buyer than acquiring entire firms outright. Furthermore, the insiders' dilemma arises from the assumption of a positive externality on the out...

  11. Ultrascalable petaflop parallel supercomputer (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY


    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.


    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu


    Full Text Available Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, the Stewart platforms are notable as the oldest such systems: fast, solid, and precise. The work outlines a few main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform are then traced using a rotation-matrix method. If a structural element consists of two moving parts that translate relative to each other, it is more convenient, for the drive train and especially for the dynamics, to represent the element as a single moving component. We thus have seven moving parts (the six motor elements, or legs, plus the mobile platform as the seventh) and one fixed part.
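The kinetic-energy computation mentioned above can be written, in a generic form (the standard rigid-body formulation, stated here as an assumption rather than taken from the paper), as a sum of translational and rotational terms over the seven moving bodies:

```latex
T \;=\; \sum_{i=1}^{7}\left(\tfrac{1}{2}\, m_i\, \mathbf{v}_i^{\top}\mathbf{v}_i
\;+\; \tfrac{1}{2}\, \boldsymbol{\omega}_i^{\top}\, \mathbf{I}_i\, \boldsymbol{\omega}_i\right)
```

Here \(m_i\), \(\mathbf{v}_i\), \(\boldsymbol{\omega}_i\) and \(\mathbf{I}_i\) denote the mass, linear velocity, angular velocity and inertia tensor of body \(i\) (the six legs, each treated as a single moving component, plus the mobile platform).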

  13. Parallel grid population (United States)

    Wald, Ingo; Ize, Santiago


    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
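The two-phase method described above (classify objects against grid portions, then populate each portion) can be sketched in Python for a 1D grid of interval objects; this is an illustrative sketch under those simplifying assumptions, not the patented implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def overlapped_cells(obj, n_cells, cell_w):
    """Indices of the 1D grid cells an interval object [lo, hi) touches."""
    lo, hi = obj
    first = max(0, int(lo // cell_w))
    last = min(n_cells - 1, int((hi - 1e-9) // cell_w))
    return range(first, last + 1)

def populate(objects, n, extent=100.0):
    cell_w = extent / n
    sets = [objects[i::n] for i in range(n)]          # n distinct object sets
    with ThreadPoolExecutor(n) as ex:
        # phase 1: each worker determines which grid portions bound its objects
        classified = list(ex.map(
            lambda objs: [(o, list(overlapped_cells(o, n, cell_w))) for o in objs],
            sets))
        # phase 2: each worker populates its own distinct grid portion
        def fill(cell):
            return [o for part in classified for o, cells in part if cell in cells]
        grid = list(ex.map(fill, range(n)))
    return grid
```

Note that an object overlapping two portions (such as an interval straddling a cell boundary) is correctly placed in both, which is why the classification phase precedes the population phase.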

  14. Seeing in parallel

    Energy Technology Data Exchange (ETDEWEB)

    Little, J.J.; Poggio, T.; Gamble, E.B. Jr.


    Computer algorithms have been developed for early vision processes that give separate cues to the distance from the viewer of three-dimensional surfaces, their shape, and their material properties. The MIT Vision Machine is a computer system that integrates several early vision modules to achieve high-performance recognition and navigation in unstructured environments. It is also an experimental environment for theoretical progress in early vision algorithms, their parallel implementation, and their integration. The Vision Machine consists of a movable, two-camera Eye-Head input device and an 8K Connection Machine. The authors have developed and implemented several parallel early vision algorithms that compute edge detection, stereopsis, motion, texture, and surface color in close to real time. The integration stage, based on coupled Markov random field models, leads to a cartoon-like map of the discontinuities in the scene, with partial labeling of the brightness edges in terms of their physical origin.

  15. Homology, convergence and parallelism. (United States)

    Ghiselin, Michael T


    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. © 2015 The Author(s).

  16. Parallel Anisotropic Tetrahedral Adaptation (United States)

    Park, Michael A.; Darmofal, David L.


    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described along with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  17. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.


    This document is a reference guide to the Xyce Parallel Electronic Simulator and a companion to the Xyce Users Guide. Its focus is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial; users who are new to circuit simulation are better served by the Xyce Users Guide.

  18. Stability of parallel flows

    CERN Document Server

    Betchov, R


    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  19. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B


    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  20. Acquisition Research Program Homepage



    Includes an image of the main page on this date and compressed file containing additional web pages. Established in 2003, Naval Postgraduate School’s (NPS) Acquisition Research Program provides leadership in innovation, creative problem solving and an ongoing dialogue, contributing to the evolution of Department of Defense acquisition strategies.

  1. Acquisition of teleological descriptions (United States)

    Franke, David W.


    Teleological descriptions capture the purpose of an entity, mechanism, or activity with which they are associated. These descriptions can be used in explanation, diagnosis, and design reuse. We describe a technique for acquiring teleological descriptions expressed in the teleology language TeD. Acquisition occurs during design by observing design modifications and design verification. We demonstrate the acquisition technique in an electronic circuit design.

  2. Robotization in Seismic Acquisition

    NARCIS (Netherlands)

    Blacquière, G.; Berkhout, A.J.


    The amount of sources and detectors in the seismic method follows "Moore’s Law of seismic data acquisition", i.e., it increases approximately by a factor of 10 every 10 years. Therefore automation is unavoidable, leading to robotization of seismic data acquisition. Recently, we introduced a new

  3. Mergers and Acquisitions

    DEFF Research Database (Denmark)

    Risberg, Annette

    Introduction to the study of mergers and acquisitions. This book provides an understanding of the mergers and acquisitions process, how and why they occur, and also the broader implications for organizations. It presents issues including motives and planning, partner selection, integration......, employee experiences and communication. Mergers and acquisitions remain one of the most common forms of growth, yet they present considerable challenges for the companies and management involved. The effects on stakeholders, including shareholders, managers and employees, must be considered as well...... by editorial commentaries and reflects the important organizational and behavioural aspects which have often been ignored in the past. By providing this in-depth understanding of the mergers and acquisitions process, the reader understands not only how and why mergers and acquisitions occur, but also...

  4. All-optical coaxial framing photography using parallel coherence shutters. (United States)

    Guanghua, Chen; Jianfeng, Li; Qixian, Peng; Shouxian, Liu; Jun, Liu


    An all-optical framing camera has been developed to obtain serial images of high temporal and spatial resolution with identical spatial benchmark, identical temporal benchmark, and identical chromatic benchmark in a single shot. A train of laser probe pulses with identical wavelength coaxially illuminate the target and form sequentially timed images by means of parallel coherence shutters. A coherence shutter selects only one of the probe pulses to form a nonmultiplexing hologram; the other probe pulses superpose incoherently on the hologram as background. By this method, each hologram is entirely separated from the others in both the spatial and temporal domains. Two kinds of ultrafast physical process experiments, involving laser-driven air and laser-driven aluminum foil, were performed to verify the feasibility of the parallel coherence shutters.

  5. Parallel Architectures and Bioinspired Algorithms

    CERN Document Server

    Pérez, José; Lanchares, Juan


    This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map of the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value both to specialists in bioinspired algorithms and in parallel and distributed computing, and to computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.

  6. Parallel Eclipse Project Checkout (United States)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.


    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (XML file) and automatically checks out all of the plug-ins listed in the feature, which resolves the issue of manually checking out each plug-in required to work on the project. To minimize the time needed, the program performs the plug-in checkouts in parallel: after parsing the feature, a checkout request is created for each plug-in, and these requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. The Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As a further optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any
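The thread-pool pattern described above can be illustrated with Python's concurrent.futures; the plug-in names and the checkout body are hypothetical placeholders, not PEPC code:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# hypothetical plug-in list, as would be parsed from a feature's XML description
PLUGINS = ["core.ui", "core.net", "tools.viz"]

def checkout(plugin):
    """Stand-in for one SCM checkout; the work is network-bound, so threads overlap well."""
    time.sleep(0.05)                 # simulate network latency
    return f"{plugin}: checked out"

# a thread pool with a configurable number of workers handles the checkout requests
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(checkout, PLUGINS))
```

Because each checkout spends most of its time waiting on the network, overlapping the requests in a thread pool is what lets the overall transfer saturate the available bandwidth.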

  7. Temporal Planning for Compilation of Quantum Approximate Optimization Algorithm Circuits (United States)

    Venturelli, Davide; Do, Minh Binh; Rieffel, Eleanor Gilbert; Frank, Jeremy David


    We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus our initial experiments on Quantum Approximate Optimization Algorithm (QAOA) circuits, which have few ordering constraints and allow highly parallel plans. We report on experiments using several temporal planners to compile circuits of various sizes to realistic hardware. This early empirical evaluation suggests that temporal planning is a viable approach to quantum circuit compilation.

  8. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack


    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  9. Massively Parallel QCD

    Energy Technology Data Exchange (ETDEWEB)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G


    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  10. Theory of Parallel Mechanisms

    CERN Document Server

    Huang, Zhen; Ding, Huafeng


    This book contains mechanism analysis and synthesis. In mechanism analysis, a mobility methodology is first systematically presented. This methodology, based on the author's screw theory proposed in 1997, whose generality and validity were only proved recently, addresses mobility, a very complex issue researched by various scientists over the last 150 years. The principle of kinematic influence coefficients and its latest developments are described. This principle is suitable for kinematic analysis of various 6-DOF and lower-mobility parallel manipulators. The singularities are classified from a new point of view, and progress in position-singularity and orientation-singularity analysis is stated. In addition, the concept of over-determinate input is proposed and a new method of force analysis based on screw theory is presented. In mechanism synthesis, the synthesis of spatial parallel mechanisms is discussed, along with the synthesis method for difficult 4-DOF and 5-DOF symmetric mechanisms, which was first put forward by the a...

  11. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva


    On-line processing of large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items in one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...
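The data-level parallelism the talk exploits (packing several items into one register) is the same idea NumPy exposes in Python: one vector expression over all tracks instead of a per-track loop. This is an illustrative sketch of the vectorization principle, not the actual KF track-fit code:

```python
import numpy as np

def propagate(x, slope, dz):
    """Apply one straight-line propagation step to every track at once;
    a single vector expression lets the CPU's SIMD unit process several
    floats per instruction instead of iterating track by track."""
    return x + slope * dz

x = np.zeros(4)                          # track positions at the current z-plane
slope = np.array([0.5, 1.0, -0.5, 2.0])  # track slopes dx/dz
x_new = propagate(x, slope, 2.0)         # all tracks updated in one call
print(x_new)                             # [ 1.  2. -1.  4.]
```

Grouping many tracks into contiguous arrays is also the memory-layout optimization that makes such vectorization effective, on SIMD units and GPUs alike.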

  12. Simultaneous acquisition of three NMR spectra in a single ...

    Indian Academy of Sciences (India)

    form (GFT) NMR spectroscopy, parallel data acquisition and non-uniform sampling. The following spectra are acquired ... experiments take minutes to hours to acquire, whereas 3D experiments can take up to a few days ... ing of 902 peaks reported to be present in the blood serum. The chemical shift values were ...

  13. Unified dataflow model for the analysis of data and pipeline parallelism, and buffer sizing

    NARCIS (Netherlands)

    Hausmans, J.P.H.M.; Geuns, S.J.; Wiggers, M.H.; Bekooij, Marco Jan Gerrit


    Real-time stream processing applications such as software defined radios are usually executed concurrently on multiprocessor systems. Exploiting coarse-grained data parallelism by duplicating tasks is often required, besides pipeline parallelism, to meet the temporal constraints of the applications.

  14. Interactive knowledge acquisition tools (United States)

    Dudziak, Martin J.; Feinstein, Jerald L.


    The problems of designing practical tools to aid the knowledge engineer and general applications used in performing knowledge acquisition tasks are discussed. A particular approach was developed for the class of knowledge acquisition problem characterized by situations where acquisition and transformation of domain expertise are often bottlenecks in systems development. An explanation is given on how the tool and underlying software engineering principles can be extended to provide a flexible set of tools that allow the application specialist to build highly customized knowledge-based applications.

  15. Indexing mergers and acquisitions


    Gang, Jianhua; Guo, Jie (Michael); Hu, Nan; Li, Xi


    We measure the efficiency of mergers and acquisitions by putting forward an index (the ‘M&A Index’) based on stochastic frontier analysis. The M&A Index is calculated for each takeover deal and is standardized between 0 and 1. An acquisition with a higher index encompasses higher efficiency. We find that takeover bids with higher M&A Indices are more likely to succeed. Moreover, the M&A Index shows a strong and positive relation with the acquirers’ post-acquisition stock perfo...


    Directory of Open Access Journals (Sweden)

    D. Rasshyvalov


    Full Text Available One-third of worldwide mergers and acquisitions involve firms from different countries, making M&A one of the key drivers of internationalization. Over the past five years, cross-border insurance merger and acquisition activity has paralleled the deep global financial crisis.

  17. C++ and Massively Parallel Computers

    Directory of Open Access Journals (Sweden)

    Daniel J. Lickly


    Full Text Available Our goal is to apply the software engineering advantages of object-oriented programming to the raw power of massively parallel architectures. To do this we have constructed a hierarchy of C++ classes to support the data-parallel paradigm. Feasibility studies and initial coding can be supported by any serial machine that has a C++ compiler. Parallel execution requires an extended Cfront, which understands the data-parallel classes and generates C* code. (C* is a data-parallel superset of ANSI C developed by Thinking Machines Corporation.) This approach provides potential portability across parallel architectures and leverages the existing compiler technology for translating data-parallel programs onto both SIMD and MIMD hardware.

  18. Computer Assisted Parallel Program Generation

    CERN Document Server

    Kawata, Shigeo


    Parallel computation is widely employed in scientific research, engineering activities and product development. Writing parallel programs is not always a simple task, depending on the problem being solved. Large-scale scientific computing, huge data analyses and precise visualizations, for example, require parallel computation, and parallel computing needs parallelization techniques. In this chapter, parallel program generation support is discussed, and a computer-assisted parallel program generation system, P-NCAS, is introduced. Computer-assisted problem solving is one of the key methods to promote innovation in science and engineering, and contributes to enriching our society and our lives toward a programming-free environment in computing science. Problem solving environment (PSE) research activities started in the 1970s to enhance programming power. The P-NCAS is one of the PSEs; the PSE concept provides an integrated human-friendly computational software and hardware system to solve a target ...

  19. Parallelism in the brain's visual form system. (United States)

    Shigihara, Yoshihito; Zeki, Semir


    We used magnetoencephalography (MEG) to determine whether increasingly complex forms constituted from the same elements (lines) activate visual cortex with the same or different latencies. Twenty right-handed healthy adult volunteers viewed two different forms, lines and rhomboids, representing two levels of complexity. Our results showed that the earliest responses produced by lines and rhomboids in both striate and prestriate cortex had similar peak latencies (40 ms) although lines produced stronger responses than rhomboids. Dynamic causal modeling (DCM) showed that a parallel multiple input model to striate and prestriate cortex accounts best for the MEG response data. These results lead us to conclude that the perceptual hierarchy between lines and rhomboids is not mirrored by a temporal hierarchy in latency of activation and thus that a strategy of parallel processing appears to be used to construct forms, without implying that a hierarchical strategy may not be used in separate visual areas, in parallel. © 2013 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Acquisition Workforce Annual Report 2006 (United States)

    General Services Administration — This is the Federal Acquisition Institute's (FAI's) Annual demographic report on the Federal acquisition workforce, showing trends by occupational series, employment...

  1. Acquisition Workforce Annual Report 2008 (United States)

    General Services Administration — This is the Federal Acquisition Institute's (FAI's) Annual demographic report on the Federal acquisition workforce, showing trends by occupational series, employment...

  2. Developmental parallelism in primates. (United States)

    Sikorska-Piwowska, Z M; Dawidowicz, A L


    The authors examined a large random sample of skulls from two species of macaques: rhesus monkeys and cynomolgus monkeys. The skulls were measured, divided into age and sex groups and thoroughly analysed using statistical methods. The analysis shows that skulls of young rhesuses are considerably more domed, i.e. have better-developed neurocrania, than their adult counterparts. Male and female skulls, on the other hand, were found to be very similar, which means that sexual dimorphism of the rhesus macaque was suppressed. Both of these patterns are known from the human evolutionary pattern. No such parallelism to the development of Homo sapiens was found in the cynomolgus monkeys. The authors conclude that mosaic hominisation trends may have featured in the evolution of all primates. This would mean that apes were not a necessary step on the evolutionary path leading to the development of Homo sapiens, who may have begun to evolve at an earlier, monkey stage.

  3. Parallel Polarization State Generation

    CERN Document Server

    She, Alan


    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristi...

  4. FWS Approved Acquisition Boundaries (United States)

    US Fish and Wildlife Service, Department of the Interior — This data layer depicts the external boundaries of lands and waters that are approved for acquisition by the U.S. Fish and Wildlife Service (USFWS) in North America,...

  5. Documentation and knowledge acquisition (United States)

    Rochowiak, Daniel; Moseley, Warren


    Traditional approaches to knowledge acquisition have focused on interviews. An alternative focuses on the documentation associated with a domain. Adopting a documentation approach provides some advantages during familiarization. A knowledge management tool was constructed to gain these advantages.

  6. Acquisition IT Integration

    DEFF Research Database (Denmark)

    Henningsson, Stefan; Øhrgaard, Christian


    The IT integration of acquisitions constitutes an important challenge for many acquiring organizations. Complementing existing research, this paper searches for explanations of differences in acquirers' abilities for acquisition IT integration in the acquirer's external environment, through a study of the use...... of temporary agency workers. Following an analytic induction approach, theoretically grounded in the resource-based view of the firm, we identify the complementary and supplementary roles consultants can assume in acquisition IT integration. Through case studies of three acquirers, we investigate how...... the acquirers appropriate the use of agency workers as part of their acquisition strategy. For the investigated acquirers, assigning roles to agency workers is contingent on balancing the needs of knowledge induction and knowledge retention, as well as experience richness and in-depth understanding. Composition...

  7. Updating representations of temporal intervals. (United States)

    Danckert, James; Anderson, Britt


    Effectively engaging with the world depends on accurate representations of the regularities that make up that world, what we call mental models. The success of any mental model depends on the ability to adapt to changes, that is, to 'update' the model. In prior work, we have shown that damage to the right hemisphere of the brain impairs the ability to update mental models across a range of tasks. Given the disparate nature of the tasks we have employed in this prior work (i.e. statistical learning, language acquisition, position priming, perceptual ambiguity, strategic game play), we propose that a cognitive module important for updating mental representations should be generic, in the sense that it is invoked across multiple cognitive and perceptual domains. To date, the majority of our tasks have been visual in nature. Given the ubiquity and import of temporal information in sensory experience, we examined the ability to build and update mental models of time. We had healthy individuals complete a temporal prediction task in which intervals were initially drawn from one temporal range before an unannounced switch to a different range of intervals. Separate groups had the second range of intervals switch to one that contained either longer or shorter intervals than the first range. Both groups showed significant positive correlations between perceptual and prediction accuracy. While each group updated mental models of temporal intervals, those exposed to shorter intervals did so more efficiently. Our results support the notion of a generic capacity to update regularities in the environment, in this instance based on temporal information. The task developed here is well suited to investigations in neurological patients and in neuroimaging settings.

  8. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN


    Full Text Available In recent years, efforts have been made to delineate a stable and unified framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not at the level of the effort invested. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  9. Topic 7: parallel computer architecture and instruction level parallelism


    Ayguadé, Eduard; Karl, Wolfgang; De Bosschere, Koen; Collard, Jean-François


    We welcome you to the two Parallel Computer Architecture and Instruction Level Parallelism sessions of Euro-Par 2006 conference being held in Dresden, Germany. The call for papers for this Euro-Par topic area sought papers on all hardware/software aspects of parallel computer architecture, processor architecture and microarchitecture. This year 12 papers were submitted to this topic area. Among the submissions, 5 papers were accepted as full papers for the conference (41% acceptance rate). ...

  10. Towards General Temporal Aggregation

    DEFF Research Database (Denmark)

    Boehlen, Michael H.; Gamper, Johann; Jensen, Christian Søndergaard


    Most database applications manage time-referenced, or temporal, data. Temporal data management is difficult when using conventional database technology, and many contributions have been made for how to better model, store, and query temporal data. Temporal aggregation illustrates well the problem...

  11. The Spacing Effect and Its Relevance to Second Language Acquisition (United States)

    Rogers, John


    This commentary discusses some theoretical and methodological issues related to research on the spacing effect in second language acquisition research (SLA). There has been a growing interest in SLA in how the temporal distribution of input might impact language development. SLA research in this area has frequently drawn upon the rich field of…

  12. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis


    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  13. Learned Attention in Adult Language Acquisition: A Replication and Generalization Study and Meta-Analysis (United States)

    Ellis, Nick C.; Sagarra, Nuria


    This study investigates associative learning explanations of the limited attainment of adult compared to child language acquisition in terms of learned attention to cues. It replicates and extends Ellis and Sagarra (2010) in demonstrating short- and long-term learned attention in the acquisition of temporal reference in Latin. In Experiment 1,…

  14. Post-Acquisition IT Integration

    DEFF Research Database (Denmark)

    Henningsson, Stefan; Yetton, Philip


    The extant research on post-acquisition IT integration analyzes how acquirers realize IT-based value in individual acquisitions. However, serial acquirers make 60% of acquisitions. These acquisitions are not isolated events, but are components in growth-by-acquisition programs. To explain how...... serial acquirers realize IT-based value, we develop three propositions on the sequential effects on post-acquisition IT integration in acquisition programs. Their combined explanation is that serial acquirers must have a growth-by-acquisition strategy that includes the capability to improve...... IT integration capabilities, to sustain high alignment across acquisitions and to maintain a scalable IT infrastructure with a flat or decreasing cost structure. We begin the process of validating the three propositions by investigating a longitudinal case study of a growth-by-acquisition program....

  15. Three-dimensional functional MRI with parallel acceleration: balanced SSFP versus PRESTO. (United States)

    Vallée, Emilie; Håberg, Asta K; Kristoffersen, Anders


    To compare the sensitivity and specificity of three-dimensional (3D) principles of echo shifting using a train of observations (PRESTO) and passband balanced steady-state free precession (SSFP) functional MRI (fMRI) sequences combined with integrated parallel acquisition techniques (iPAT) at 3 Tesla (T). The 3D fMRI was performed using PRESTO and passband balanced SSFP with 3 mm and 1.9 mm isotropic voxels combined with iPAT, while volunteers underwent visual stimulation. From whole-brain activation maps and predefined regions of interest in the visual cortex, Z-score distributions, percentage of fMRI signal change, and the fMRI signals' temporal profile were compared between the sequences for the two spatial resolutions to estimate sensitivity and specificity. For PRESTO, the Z-score distributions had higher mean and maximum Z-values for both resolutions than for SSFP, and the activated voxels were localized to the visual cortex with high sensitivity. For SSFP, the activation was more scattered, and voxels with the highest sensitivity were found in the draining veins. The good functional contrast in PRESTO allowed for separation of voxels with early (5 s) signal change; thus, the signal from larger draining veins could be removed. The ensuing PRESTO activation maps represent the early fMRI signal, probably more closely related to neuronal activity. The 3D fMRI PRESTO sequence demonstrated better sensitivity and specificity than SSFP. Copyright © 2013 Wiley Periodicals, Inc.

  16. Acquisition of Romance languages. Introduction


    Gavarró, Anna


    Generative grammar addressed, for the first time, acquisition as a central issue in the study of grammar. This perspective has given rise over the years to a considerable body of work, mainly on first language acquisition, but also on second language acquisition, bilingual acquisition, and acquisition by children affected by SLI. If we assume continuity, i.e. that all stages in acquisition reflect possible natural languages, we must posit a mutual dependency between grammatical theory and th...

  17. Parallel Computational Protein Design. (United States)

    Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang


    Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees finding the global minimum energy conformation (GMEC) is to combine both dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computational bottleneck of a large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013) to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedup in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the global optimal solution could not be computed previously. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.
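
    The DEE/A* framework described above searches over one rotamer choice per residue position, guided by an admissible lower bound on the remaining energy. The sketch below is a minimal serial illustration on hypothetical toy energies, not the OSPREY/gOSPREY implementation: it finds the global minimum energy conformation (GMEC) of a sum of self and pairwise rotamer energies.

    ```python
    import heapq
    import itertools

    # Toy energy tables (hypothetical numbers): self_e[i][r] is the self
    # energy of rotamer r at position i; pair_e[(i, j)][r][s] the pairwise
    # energy between rotamer r at i and rotamer s at j (i < j).
    self_e = {0: [1.0, 0.2], 1: [0.5, 0.4, 0.1], 2: [0.3, 0.6]}
    pair_e = {(0, 1): [[0.0, 0.3, 0.1], [0.2, 0.0, 0.4]],
              (0, 2): [[0.1, 0.0], [0.3, 0.2]],
              (1, 2): [[0.0, 0.2], [0.1, 0.0], [0.2, 0.1]]}
    n_pos = len(self_e)

    def g(assign):
        """Exact energy of the rotamers assigned so far."""
        e = sum(self_e[i][r] for i, r in enumerate(assign))
        e += sum(pair_e[(i, j)][assign[i]][assign[j]]
                 for i, j in itertools.combinations(range(len(assign)), 2))
        return e

    def h(assign):
        """Admissible lower bound on the energy still to come: best rotamer
        per unassigned position against the assigned ones; pairs among
        unassigned positions are dropped (all energies here are >= 0)."""
        n = len(assign)
        return sum(min(self_e[j][s]
                       + sum(pair_e[(i, j)][assign[i]][s] for i in range(n))
                       for s in range(len(self_e[j])))
                   for j in range(n, n_pos))

    def astar():
        """Expand partial assignments in best-first order of f = g + h;
        the first complete assignment popped is the GMEC."""
        heap = [(h(()), ())]
        while heap:
            f, assign = heapq.heappop(heap)
            if len(assign) == n_pos:
                return f, assign
            j = len(assign)
            for s in range(len(self_e[j])):
                nxt = assign + (s,)
                heapq.heappush(heap, (g(nxt) + h(nxt), nxt))

    best_e, best_conf = astar()
    ```

    The GPU version in the paper parallelizes the heuristic evaluation and node expansion; the admissibility argument (h never overestimates) is what preserves the optimality guarantee in either setting.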

  18. Parallel Adaptive Mesh Refinement

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A


    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the
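
    The core AMR idea, refine only where the solution varies strongly, can be sketched in one dimension. This is an illustrative toy (a simple jump-based refinement criterion on a 1D interval, with made-up tolerance and depth limits), not the structured-AMR machinery the abstract describes: cells whose endpoint values differ by more than a tolerance are recursively bisected, so resolution concentrates around a steep front.

    ```python
    import numpy as np

    def refine(f, x0, x1, depth, max_depth=6, tol=0.1):
        """Return cell edges on [x0, x1], bisecting cells where f jumps."""
        mid = 0.5 * (x0 + x1)
        # Refinement criterion: the solution varies strongly across the cell.
        if depth < max_depth and abs(f(x1) - f(x0)) > tol:
            left = refine(f, x0, mid, depth + 1, max_depth, tol)
            right = refine(f, mid, x1, depth + 1, max_depth, tol)
            return left + right[1:]          # drop the duplicated midpoint
        return [x0, x1]

    # A steep front near x = 0.5 attracts most of the resolution.
    edges = refine(lambda x: np.tanh(50 * (x - 0.5)), 0.0, 1.0, depth=0)
    widths = np.diff(edges)
    ```

    The resulting grid keeps coarse 0.25-wide cells in the smooth regions and 1/64-wide cells at the front, the same resource-focusing effect the abstract attributes to AMR, here in its simplest possible form.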

  19. Assessment of cardiac time intervals using high temporal resolution real-time spiral phase contrast with UNFOLDed-SENSE. (United States)

    Kowalik, Grzegorz T; Knight, Daniel S; Steeden, Jennifer A; Tann, Oliver; Odille, Freddy; Atkinson, David; Taylor, Andrew; Muthurangu, Vivek


    To develop a real-time phase contrast MR sequence with high enough temporal resolution to assess cardiac time intervals. The sequence utilized spiral trajectories with an acquisition strategy that allowed a combination of temporal encoding (Unaliasing by fourier-encoding the overlaps using the temporal dimension; UNFOLD) and parallel imaging (Sensitivity encoding; SENSE) to be used (UNFOLDed-SENSE). An in silico experiment was performed to determine the optimum UNFOLD filter. In vitro experiments were carried out to validate the accuracy of time intervals calculation and peak mean velocity quantification. In addition, 15 healthy volunteers were imaged with the new sequence, and cardiac time intervals were compared to reference standard Doppler echocardiography measures. For comparison, in silico, in vitro, and in vivo experiments were also carried out using sliding window reconstructions. The in vitro experiments demonstrated good agreement between real-time spiral UNFOLDed-SENSE phase contrast MR and the reference standard measurements of velocity and time intervals. The protocol was successfully performed in all volunteers. Subsequent measurement of time intervals produced values in keeping with literature values and good agreement with the gold standard echocardiography. Importantly, the proposed UNFOLDed-SENSE sequence outperformed the sliding window reconstructions. Cardiac time intervals can be successfully assessed with UNFOLDed-SENSE real-time spiral phase contrast. Real-time MR assessment of cardiac time intervals may be beneficial in assessment of patients with cardiac conditions such as diastolic dysfunction. © 2014 Wiley Periodicals, Inc.

  20. Knowledge Transfers following Acquisition

    DEFF Research Database (Denmark)

    Gammelgaard, Jens


    Prior relations between the acquiring firm and the target company pave the way for knowledge transfers subsequent to the acquisitions. One major reason is that through the market-based relations the two actors build up mutual trust and simultaneously they learn how to communicate. An empirical study of 54 Danish acquisitions taking place abroad from 1994 to 1998 demonstrated that when there was a high level of trust between the acquiring firm and the target firm before the take-over, then medium and strong tie-binding knowledge transfer mechanisms, such as project groups and job rotation......


    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова


    Full Text Available The problem of parallel import is currently an urgent question. The legalization of parallel import in Russia is expedient. This statement is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimize them.

  2. Parallel context-free languages

    DEFF Research Database (Denmark)

    Skyum, Sven


    The relation between the family of context-free languages and the family of parallel context-free languages is examined in this paper. It is proved that the families are incomparable. Finally we prove that the family of languages of finite index is contained in the family of parallel context...

  3. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo


    adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas...... a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...... cerebellum during the symmetric movements. These findings suggest the presence of different error-monitoring mechanisms for symmetric and parallel movements. The results indicate that separate areas within PMd and SMA are responsible for both perception and performance of ongoing movements...

  4. Compressive Temporal Summation in Human Visual Cortex. (United States)

    Zhou, Jingyang; Benson, Noah C; Kay, Kendrick N; Winawer, Jonathan


    Combining sensory inputs over space and time is fundamental to vision. Population receptive field models have been successful in characterizing spatial encoding throughout the human visual pathways. A parallel question, how visual areas in the human brain process information distributed over time, has received less attention. One challenge is that the most widely used neuroimaging method, fMRI, has coarse temporal resolution compared with the time-scale of neural dynamics. Here, via carefully controlled temporally modulated stimuli, we show that information about temporal processing can be readily derived from fMRI signal amplitudes in male and female subjects. We find that all visual areas exhibit subadditive summation, whereby responses to longer stimuli are less than the linear prediction from briefer stimuli. We also find fMRI evidence that the neural response to two stimuli is reduced for brief interstimulus intervals (indicating adaptation). These effects are more pronounced in visual areas anterior to V1-V3. Finally, we develop a general model that shows how these effects can be captured with two simple operations: temporal summation followed by a compressive nonlinearity. This model operates for arbitrary temporal stimulation patterns and provides a simple and interpretable set of computations that can be used to characterize neural response properties across the visual hierarchy. Importantly, compressive temporal summation directly parallels earlier findings of compressive spatial summation in visual cortex describing responses to stimuli distributed across space. This indicates that, for space and time, cortex uses a similar processing strategy to achieve higher-level and increasingly invariant representations of the visual world. SIGNIFICANCE STATEMENT Combining sensory inputs over time is fundamental to seeing. 
Two important temporal phenomena are summation, the accumulation of sensory inputs over time, and adaptation, a response reduction for repeated
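
    The model in the abstract, linear temporal summation followed by a compressive nonlinearity, can be written down in a few lines. The sketch below uses illustrative parameters (a unit-contrast pulse and a square-root compression), not the fitted values from the study, to show the subadditivity the authors report: doubling stimulus duration yields less than double the response.

    ```python
    import numpy as np

    def response(stimulus, dt=0.001, exponent=0.5):
        """Two-stage model: temporal summation (integrate the stimulus
        over time), then a compressive static nonlinearity (power < 1)."""
        summed = stimulus.sum() * dt
        return summed ** exponent

    dt = 0.001
    brief = np.ones(100)    # 100 ms pulse of unit contrast
    long_ = np.ones(200)    # 200 ms pulse: twice the summed input

    r_brief, r_long = response(brief, dt), response(long_, dt)
    # Subadditive: r_long is sqrt(2) * r_brief, i.e. less than 2 * r_brief.
    ```

    With exponent 1 the model would be exactly additive; any exponent below 1 reproduces the "responses to longer stimuli are less than the linear prediction" behavior measured across the visual areas.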

  5. Semantics of Temporal Models with Multiple Temporal Dimensions

    DEFF Research Database (Denmark)

    Kraft, Peter; Sørensen, Jens Otto

    Semantics of temporal models with multiple temporal dimensions are examined, progressing from non-temporal models to uni-temporal, and further to bi- and tri-temporal models. An example of a uni-temporal model is the valid time model; an example of a bi-temporal model is the valid time/transactio...

  6. Temporal Coding of Volumetric Imagery (United States)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively.
The CACTI camera's ability to embed video volumes into images leads to exploration
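The coded-snapshot idea above can be sketched with a toy forward model. Everything here is illustrative (array sizes, a random binary aperture, and a per-frame mask shift standing in for physical translation), not the dissertation's actual parameters: each frame is multiplied elementwise by a shifted copy of the aperture code, and the coded frames are summed into a single spatial measurement.

```python
import numpy as np

def cacti_forward(frames, masks):
    """Collapse T video frames into one coded snapshot: y = sum_t C_t * x_t."""
    assert frames.shape == masks.shape
    return (frames * masks).sum(axis=0)

rng = np.random.default_rng(0)
T, H, W = 8, 4, 4
frames = rng.random((T, H, W))                 # video volume sampled over (t, x, y)
base = (rng.random((H, W)) > 0.5).astype(float)  # one binary coded aperture
# model physical translation of the aperture: shift the mask for each frame
masks = np.stack([np.roll(base, t, axis=1) for t in range(T)])
snapshot = cacti_forward(frames, masks)        # one (H, W) image encodes T frames
```

Recovering the T frames from `snapshot` is then a compressive-sensing inverse problem, which is where the reconstruction machinery of such systems does its work.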

  7. Autonomous Robot Skill Acquisition (United States)


    single demonstration using either a learned Hidden Markov Model (HMM) (Pook and Ballard, 1993; Hovland et al., 1996; Dixon and Khosla, 2004) or...of the Nineteenth International Conference on Machine Learning, pages 243– 250. Hovland , G., Sikka, P., and McCarragher, B. (1996). Skill acquisition

  8. Leading Acquisition Reform (United States)


    Army and Undersecretary of Defense for Acquisition, for a reduction of the Excalibur and Accelerated Precision Mortar initiative rounds.25...Can’t Dance? Leading a Great Enterprise Through Dramatic Change, (New York: HarperCollins Publishers, 2003), 235. 13 John P. Kotter , Leading Change

  9. Merger and acquisition medicine. (United States)

    Powell, G S


    This discussion of the ramifications of corporate mergers and acquisitions for employees recognizes that employee adaptation to the change can be a long and complex process. The author describes a role the occupational physician can take in helping to minimize the potential adverse health impact of major organizational change.

  10. Acquisition reconfiguration capability

    NARCIS (Netherlands)

    Amiryany Araghy, N.; Huysman, M.H.; de Man, A.P.; Cloodt, M.


    Purpose: Acquiring knowledge-intensive firms in order to gain access to their knowledge to innovate is not a strategy to achieve easily. Knowledge acquisitions demand that organizations integrate various dispersed knowledge-based resources and thus share knowledge to innovate. However, despite the

  11. Competencies: requirements and acquisition

    NARCIS (Netherlands)

    Kuenn, A.C.; Meng, C.M.; Peters, Z.; Verhagen, A.M.C.


    Higher education is given the key task to prepare the highly talented among the young to fulfil highly qualified roles in the labour market. Successful labour market performance of graduates is generally associated with the acquisition of the correct competencies. Education as an individual

  12. Surviving mergers & acquisitions. (United States)

    Dixon, Diane L


    Mergers and acquisitions are never easy to implement. The health care landscape is a minefield of failed mergers and uneasy alliances generating great turmoil and pain. But some mergers have been successful, creating health systems that benefit the communities they serve. Five prominent leaders offer their advice on minimizing the difficulties of M&As.

  13. Parallel integer sorting with medium and fine-scale parallelism (United States)

    Dagum, Leonardo


    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.

  14. Template based parallel checkpointing in a massively parallel computer system (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN


    A method and apparatus for a template based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
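The block-comparison idea can be sketched as follows. This is a minimal single-node illustration, not the patented system: `delta_checkpoint` and `restore` are hypothetical helpers, and the MD5 checksums, 64-byte blocks, and zlib compression are stand-ins for whatever the real implementation uses over its high-speed interconnect.

```python
import hashlib
import zlib

BLOCK = 64  # bytes per block (illustrative size)

def blocks(data):
    """Split a byte string into fixed-size blocks."""
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def checksums(data):
    return [hashlib.md5(b).hexdigest() for b in blocks(data)]

def delta_checkpoint(node_data, template_data):
    """Keep only blocks whose checksum differs from the template, compressed."""
    tmpl = checksums(template_data)
    delta = {}
    for i, b in enumerate(blocks(node_data)):
        if i >= len(tmpl) or hashlib.md5(b).hexdigest() != tmpl[i]:
            delta[i] = zlib.compress(b)  # non-lossy compression, as in the abstract
    return delta

def restore(template_data, delta):
    """Rebuild a node's checkpoint from the template plus its delta blocks."""
    bs = blocks(template_data)
    for i in sorted(delta):
        b = zlib.decompress(delta[i])
        if i < len(bs):
            bs[i] = b
        else:
            bs.append(b)
    return b"".join(bs)
```

Only the differing blocks need to be transmitted and stored per node, which is the source of the claimed savings when most nodes' state stays close to the template.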


    Energy Technology Data Exchange (ETDEWEB)



    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

  16. Data acquisition system for a proton imaging apparatus

    CERN Document Server

    Sipala, V; Bruzzi, M; Bucciolini, M; Candiano, G; Capineri, L; Cirrone, G A P; Civinini, C; Cuttone, G; Lo Presti, D; Marrazzo, L; Mazzaglia, E; Menichelli, D; Randazzo, N; Talamonti, C; Tesi, M; Valentini, S


    New developments in the proton-therapy field for cancer treatments led Italian physics researchers to build a proton imaging apparatus consisting of a silicon microstrip tracker to reconstruct the proton trajectories and a calorimeter to measure their residual energy. For clinical requirements, the detectors used and the data acquisition system should be able to sustain about a 1 MHz proton rate. The tracker read-out, using ASICs developed by the collaboration, acquires the detector signals and sends data in parallel to an FPGA. The YAG:Ce calorimeter also generates the global trigger. The data acquisition system and the results obtained in the calibration phase are presented and discussed.

  17. Temporal subtraction contrast-enhanced dedicated breast CT (United States)

    Gazi, Peymon M.; Aminololama-Shakeri, Shadi; Yang, Kai; Boone, John M.


    implemented using a parallel processing architecture resulting in rapid execution time for the iterative segmentation and intensity-adaptive registration techniques. Characterization of contrast-enhanced lesions is improved using temporal subtraction contrast-enhanced dedicated breast CT. Adaptation of Demons registration forces as a function of contrast-enhancement levels provided a means to accurately align breast tissue in pre- and post-contrast image acquisitions, improving subtraction results. Spatial subtraction of the aligned images yields useful diagnostic information with respect to enhanced lesion morphology and uptake.

  18. Cloud parallel processing of tandem mass spectrometry based proteomics data. (United States)

    Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus


    Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, although not necessarily as needed to face the challenges of acquired big data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
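The data-decomposition strategy can be sketched as below. This is a schematic, not the published workflows: `search_chunk` is a placeholder for an unmodified search engine run (e.g. X!Tandem on one mzXML part), and a local thread pool stands in for the cloud workers and workflow engine.

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(spectra, n_parts):
    """Split a list of spectra into n_parts roughly equal chunks."""
    k, r = divmod(len(spectra), n_parts)
    chunks, start = [], 0
    for i in range(n_parts):
        end = start + k + (1 if i < r else 0)
        chunks.append(spectra[start:end])
        start = end
    return chunks

def search_chunk(chunk):
    # placeholder: the real step runs an unchanged search engine on one file part
    return [(s, f"id-{s}") for s in chunk]

def parallel_search(spectra, n_parts=4):
    """Search the chunks in parallel and recompose results in input order."""
    with ThreadPoolExecutor(max_workers=n_parts) as pool:
        parts = pool.map(search_chunk, decompose(spectra, n_parts))
    return [hit for part in parts for hit in part]
```

The key property, as in the paper, is that the search engine itself is untouched: parallelism is wrapped around it by decomposing inputs and recomposing outputs.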

  19. OpenMP parallelization of a gridded SWAT (SWATG) (United States)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin


    Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates grid modeling scheme with different spatial representations also presents such problems. The time-consuming problem affects applications of very high resolution large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling based on the HRU level. Such parallel implementation takes better advantage of the computational power of a shared memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km2 watershed with 1 CPU and a 15 thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computations of environmental models are beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management in addition to offering data fusion and model coupling ability.
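The paper's parallelization is an OpenMP loop over HRUs; the same pattern can be sketched in Python, with a thread pool standing in for `#pragma omp parallel for` and `route_hru` as a purely illustrative stand-in for the per-unit hydrologic computation.

```python
from concurrent.futures import ThreadPoolExecutor

def route_hru(hru):
    """Placeholder for the per-HRU hydrologic computation (runoff, routing, ...)."""
    return sum(cell * 0.5 for cell in hru)  # stand-in water-balance arithmetic

def simulate(hrus, n_threads=15):
    """Analogue of an OpenMP parallel-for over independent HRUs."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(route_hru, hrus))
```

The speedup in SWATGP comes from exactly this structure: HRU computations within a time step are independent, so they can be distributed across threads on a shared-memory machine.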

  20. Numerical Algorithms for Parallel Computers (United States)


    Report by Loyce M. Adams. At the Third SIAM Conference on Applied Linear Algebra, Loyce Adams presented the minisymposium talk "Preconditioners on Parallel Computers", Madison, WI, May 1988.

  1. Parallel education: what is it?


    Amos, Michelle Peta


    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  2. Parallel Event Analysis Under Unix (United States)

    Looney, S.; Nilsson, B. S.; Oest, T.; Pettersson, T.; Ranjard, F.; Thibonnier, J.-P.

    The ALEPH experiment at LEP, the CERN CN division and Digital Equipment Corp. have, in a joint project, developed a parallel event analysis system. The parallel physics code is identical to ALEPH's standard analysis code, ALPHA, only the organisation of input/output is changed. The user may switch between sequential and parallel processing by simply changing one input "card". The initial implementation runs on an 8-node DEC 3000/400 farm, using the PVM software, and exhibits a near-perfect speed-up linearity, reducing the turn-around time by a factor of 8.

  3. IOPA: I/O-aware parallelism adaption for parallel programs. (United States)

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei


    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads.
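The parallelism-control idea can be sketched as a simple hill-climb over thread counts. The throughput model below (linear scaling up to a saturation point, then mild degradation) and the stopping rule are illustrative assumptions for the sketch, not IOPA's actual mechanism.

```python
def measured_throughput(n_threads, io_capacity=4):
    """Toy model: throughput grows with threads until the I/O sub-system saturates,
    then degrades slightly from contention."""
    return min(n_threads, io_capacity) - 0.1 * max(0, n_threads - io_capacity)

def adapt_parallelism(max_threads=16):
    """Increase the I/O thread count while measured throughput keeps improving."""
    best_n, best_t = 1, measured_throughput(1)
    for n in range(2, max_threads + 1):
        t = measured_throughput(n)
        if t > best_t:
            best_n, best_t = n, t
        else:
            break  # throughput stopped improving: the I/O sub-system is the bottleneck
    return best_n
```

With the toy model saturating at four concurrent I/O streams, the search settles on four threads, balancing computing resources against I/O bandwidth as the abstract describes.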

  4. RF merger and acquisition market


    Evgeniya V. Vidishcheva; Merab S. Sichinava; Marina V. Kogan


    The article gives a brief characterization of the merger and acquisition market in Russia and provides an overview of deals in 2009–2010. The Russian merger and acquisition market is at an early stage of development.

  5. RF merger and acquisition market

    Directory of Open Access Journals (Sweden)

    Evgeniya V. Vidishcheva


    Full Text Available The article gives a brief characterization of the merger and acquisition market in Russia and provides an overview of deals in 2009–2010. The Russian merger and acquisition market is at an early stage of development.

  6. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo


    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  7. A Parallel Compact Hash Table

    NARCIS (Netherlands)

    van der Vegt, Steven; Laarman, Alfons; Vojnar, Tomas


    We present the first parallel compact hash table algorithm. It delivers high performance and scalability due to its dynamic region-based locking scheme with only a fraction of the memory requirements of a regular hash table.
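The region-based locking idea can be sketched as follows: an open-addressing table guarded by one lock per fixed-size region of buckets, so concurrent inserts into different regions never contend. This sketch omits the compact encoding that gives the cited algorithm its memory savings; region size and probing scheme are illustrative.

```python
import threading

class RegionLockedTable:
    """Open-addressing hash table with one lock per region of buckets (a sketch
    of region-based locking, not the compact table of the cited work)."""

    def __init__(self, n_buckets=1024, region_bits=4):
        self.buckets = [None] * n_buckets
        self.shift = region_bits  # 2**region_bits buckets share one lock
        self.locks = [threading.Lock()
                      for _ in range((n_buckets >> region_bits) + 1)]

    def insert(self, key):
        """Insert key; return True if newly added, False if already present."""
        i = hash(key) % len(self.buckets)
        while True:
            with self.locks[i >> self.shift]:  # lock only the probed region
                if self.buckets[i] is None:
                    self.buckets[i] = key
                    return True
                if self.buckets[i] == key:
                    return False
            i = (i + 1) % len(self.buckets)    # linear probing into the next slot
```

Because a thread holds only the lock of the region it is currently probing, the scheme scales with the number of regions rather than serializing the whole table.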

  8. The STAPL Parallel Graph Library

    KAUST Repository



    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  9. Fuzzy Clustering in Parallel Universes


    Wiswedel, Bernd; Berthold, Michael R.


    We propose a modified fuzzy c-means algorithm that operates on different feature spaces, so-called parallel universes, simultaneously. The method assigns membership values of patterns to different universes, which are then adopted throughout the training. This leads to better clustering results since patterns not contributing to clustering in a universe are (completely or partially) ignored. The outcome of the algorithm is a set of clusters distributed over different parallel universes, each modeling...
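The per-universe computation can be sketched with the standard fuzzy c-means membership update applied independently in each universe. The modification that makes the cited method universe-aware (learning per-pattern universe memberships and down-weighting non-contributing patterns) is deliberately omitted; this shows only the base update the method builds on.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Standard fuzzy c-means membership update for one universe (feature space).
    u_ik is inversely related to the distance from pattern i to center k."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def universe_memberships(universes, centers_per_universe):
    """One membership matrix per parallel universe; a full implementation would
    additionally learn per-pattern universe weights and iterate to convergence."""
    return [fcm_memberships(X, C) for X, C in zip(universes, centers_per_universe)]
```

Each universe thus gets its own membership matrix, and the full algorithm decides, per pattern, which universes' clusters it actually contributes to.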

  10. Data acquisition instruments: Psychopharmacology

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, D.S. III


    This report contains the results of a Direct Assistance Project performed by Lockheed Martin Energy Systems, Inc., for Dr. K. O. Jobson. The purpose of the project was to perform preliminary analysis of the data acquisition instruments used in the field of psychiatry, with the goal of identifying commonalities of data and strategies for handling and using the data in the most advantageous fashion. Data acquisition instruments from 12 sources were provided by Dr. Jobson. Several commonalities were identified and a potentially useful data strategy is reported here. Analysis of the information collected for utility in performing diagnoses is recommended. In addition, further work is recommended to refine the commonalities into a directly useful computer systems structure.

  11. Amplitudes, acquisition and imaging

    Energy Technology Data Exchange (ETDEWEB)

    Bloor, Robert


    Accurate seismic amplitude information is important for the successful evaluation of many prospects and the importance of such amplitude information is increasing with the advent of time lapse seismic techniques. It is now widely accepted that the proper treatment of amplitudes requires seismic imaging in the form of either time or depth migration. A key factor in seismic imaging is the spatial sampling of the data and its relationship to the imaging algorithms. This presentation demonstrates that acquisition caused spatial sampling irregularity can affect the seismic imaging and perturb amplitudes. Equalization helps to balance the amplitudes, and the dealiasing strategy improves the imaging further when there are azimuth variations. Equalization and dealiasing can also help with the acquisition irregularities caused by shot and receiver dislocation or missing traces. 2 refs., 2 figs.

  12. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva


    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  13. The NICMOS Parallel Observing Program (United States)

    McCarthy, Patrick


    We propose to manage the default set of pure parallels with NICMOS. Our experience with both our GO NICMOS parallel program and the public parallel NICMOS programs in cycle 7 prepared us to make optimal use of the parallel opportunities. The NICMOS G141 grism remains the most powerful survey tool for HAlpha emission-line galaxies at cosmologically interesting redshifts. It is particularly well suited to addressing two key uncertainties regarding the global history of star formation: the peak rate of star formation in the relatively unexplored but critical 1extinction. Our proposed deep G141 exposures will increase the sample of known HAlpha emission- line objects at z ~ 1.3 by roughly an order of magnitude. We will also obtain a mix of F110W and F160W images along random sight-lines to examine the space density and morphologies of the reddest galaxies. The nature of the extremely red galaxies remains unclear and our program of imaging and grism spectroscopy provides unique information regarding both the incidence of obscured star bursts and the build up of stellar mass at intermediate redshifts. In addition to carrying out the parallel program we will populate a public database with calibrated spectra and images, and provide limited ground- based optical and near-IR data for the deepest parallel fields.

  14. Conjoined Constraints and Phonological Acquisition

    Directory of Open Access Journals (Sweden)

    Giovana Bonilha


    Full Text Available Since the start of Optimality Theory (Prince & Smolensky, 1993), research on phonological acquisition has explored the explanatory potential of constraint theories. This study, also based on Optimality Theory, attempts to analyze the acquisition of CVVC syllable structure by Brazilian Portuguese children and addresses the issue of Local Conjunction (Smolensky, 1995, 1997) in research that deals with problems of phonological acquisition.

  15. First Language Acquisition and Teaching (United States)

    Cruz-Ferreira, Madalena


    "First language acquisition" commonly means the acquisition of a single language in childhood, regardless of the number of languages in a child's natural environment. Language acquisition is variously viewed as predetermined, wondrous, a source of concern, and as developing through formal processes. "First language teaching" concerns schooling in…

  16. Internationalize Mergers and Acquisitions


    Zhou, Lili


    As globalization processes, an increasing number of companies use mergers and acquisitions as a tool to achieve company growth in the international business world. The purpose of this thesis is to investigate the process of an international M&A and analyze the factors leading to success. The research started with reviewing different academic theory. The important aspects in both pre-M&A phase and post-M&A phase have been studied in depth. Because of the complexity in international...

  17. Competencies: requirements and acquisition


    Kuenn, A.C.; Meng, C.M.; Peters, Z.; Verhagen, A.M.C.


    Higher education is given the key task to prepare the highly talented among the young to fulfil highly qualified roles in the labour market. Successful labour market performance of graduates is generally associated with the acquisition of the correct competencies. Education as an individual investment in human capital is a viewpoint dating back to the 17th century and the writings of Sir William Petty (1662), and includes later work by Adam Smith (1776). The idea was formalized and brought in...

  18. Second language acquisition. (United States)

    Juffs, Alan


    Second language acquisition (SLA) is a field that investigates child and adult SLA from a variety of theoretical perspectives. This article provides a survey of some key areas of concern including formal generative theory and emergentist theory in the areas of morpho-syntax and phonology. The review details the theoretical stance of the two different approaches to the nature of language: generative linguistics and general cognitive approaches. Some results of key acquisition studies from the two theoretical frameworks are discussed. From a generative perspective, constraints on wh-movement, feature geometry and syllable structure, and morphological development are highlighted. From a general cognitive point of view, the emergence of tense and aspect marking from a prototype account of inherent lexical aspect is reviewed. Reference is made to general cognitive learning theories and to sociocultural theory. The article also reviews individual differences research, specifically debate on the critical period in adult language acquisition, motivation, and memory. Finally, the article discusses the relationship between SLA research and second language pedagogy. Suggestions for further reading from recent handbooks on SLA are provided. WIREs Cogn Sci 2011 2 277-286 DOI: 10.1002/wcs.106 For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.

  19. Complexity in language acquisition. (United States)

    Clark, Alexander; Lappin, Shalom


    Learning theory has frequently been applied to language acquisition, but discussion has largely focused on information theoretic problems, in particular on the absence of direct negative evidence. Such arguments typically neglect the probabilistic nature of cognition and learning in general. We argue first that these arguments, and analyses based on them, suffer from a major flaw: they systematically conflate the hypothesis class and the learnable concept class. As a result, they do not allow one to draw significant conclusions about the learner. Second, we claim that the real problem for language learning is the computational complexity of constructing a hypothesis from input data. Studying this problem allows for a more direct approach to the object of study, the language acquisition device, rather than the learnable class of languages, which is epiphenomenal and possibly hard to characterize. The learnability results informed by complexity studies are much more insightful. They strongly suggest that target grammars need to be objective, in the sense that the primitive elements of these grammars are based on objectively definable properties of the language itself. These considerations support the view that language acquisition proceeds primarily through data-driven learning of some form. Copyright © 2013 Cognitive Science Society, Inc.

  20. Frames of reference in spatial language acquisition. (United States)

    Shusterman, Anna; Li, Peggy


    Languages differ in how they encode spatial frames of reference. It is unknown how children acquire the particular frame-of-reference terms in their language (e.g., left/right, north/south). The present paper uses a word-learning paradigm to investigate 4-year-old English-speaking children's acquisition of such terms. In Part I, with five experiments, we contrasted children's acquisition of novel word pairs meaning left-right and north-south to examine their initial hypotheses and the relative ease of learning the meanings of these terms. Children interpreted ambiguous spatial terms as having environment-based meanings akin to north and south, and they readily learned and generalized north-south meanings. These studies provide the first direct evidence that children invoke geocentric representations in spatial language acquisition. However, the studies leave unanswered how children ultimately acquire "left" and "right." In Part II, with three more experiments, we investigated why children struggle to master body-based frame-of-reference words. Children successfully learned "left" and "right" when the novel words were systematically introduced on their own bodies and extended these words to novel (intrinsic and relative) uses; however, they had difficulty learning to talk about the left and right sides of a doll. This difficulty was paralleled in identifying the left and right sides of the doll in a non-linguistic memory task. In contrast, children had no difficulties learning to label the front and back sides of a doll. These studies begin to paint a detailed account of the acquisition of spatial terms in English, and provide insights into the origins of diverse spatial reference frames in the world's languages. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  1. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.


    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  2. Compilation Techniques for Embedded Data Parallel Languages


    Catanzaro, Bryan Christopher


    Contemporary parallel microprocessors exploit Chip Multiprocessing along with Single Instruction, Multiple Data parallelism to deliver high performance on applications that expose substantial fine-grained data parallelism. Although data parallelism is widely available in many computations, implementing data parallel algorithms in low-level efficiency languages such as C++ is often a difficult task, since the programmer is burdened with mapping data parallelism from an application onto the ha...

  3. Non-Cartesian Parallel Imaging Reconstruction of Undersampled IDEAL Spiral 13C CSI Data

    DEFF Research Database (Denmark)

    Hansen, Rie Beck; Hanson, Lars G.; Ardenkjær-Larsen, Jan Henrik

    scan times based on spatial information inherent to each coil element. In this work, we explored the combination of non-cartesian parallel imaging reconstruction and spatially undersampled IDEAL spiral CSI1 acquisition for efficient encoding of multiple chemical shifts within a large FOV with high...

  4. Parallel imaging enhanced MR colonography using a phantom model.

    LENUS (Irish Health Repository)

    Morrin, Martina M


    To compare various Array Spatial and Sensitivity Encoding Technique (ASSET)-enhanced T2W SSFSE (single shot fast spin echo) and T1-weighted (T1W) 3D SPGR (spoiled gradient recalled echo) sequences for polyp detection and image quality at MR colonography (MRC) in a phantom model. Limitations of MRC using standard 3D SPGR T1W imaging include the long breath-hold required to cover the entire colon within one acquisition and the relatively low spatial resolution due to the long acquisition time. Parallel imaging using ASSET-enhanced T2W SSFSE and 3D T1W SPGR imaging results in much shorter imaging times, which allows for increased spatial resolution.


    Directory of Open Access Journals (Sweden)

    S. A. Arustamov


    The article deals with the implementation of a scalable parallel algorithm for structure learning of a Bayesian network. A comparative analysis of the sequential and parallel algorithms is done.

  6. Data acquisition and real-time bolometer tomography using LabVIEW RT

    Energy Technology Data Exchange (ETDEWEB)

    Giannone, L., E-mail: [Max-Planck-Institute for Plasma Physics, EURATOM-IPP Association, D-85748 Garching (Germany); Eich, T.; Fuchs, J.C. [Max-Planck-Institute for Plasma Physics, EURATOM-IPP Association, D-85748 Garching (Germany); Ravindran, M.; Ruan, Q.; Wenzel, L.; Cerna, M.; Concezzi, S. [National Instruments, Austin, TX 78759-3504 (United States)


    The currently available multi-core PCI Express systems running LabVIEW RT (real-time), equipped with FPGA cards for data acquisition and real-time parallel signal processing, greatly shorten the design and implementation cycles of large-scale, real-time data acquisition and control systems. This paper details a data acquisition and real-time tomography system using LabVIEW RT for the bolometer diagnostic on the ASDEX Upgrade tokamak (Max Planck Institute for Plasma Physics, Garching, Germany). The transformation matrix for tomography is pre-computed based on the geometry of distributed radiation sources and sensors. A parallelized iterative algorithm is adapted to solve a constrained linear system for the reconstruction of the radiated power density. Real-time bolometer tomography is performed with LabVIEW RT. Using multi-core machines to execute the parallelized algorithm, a cycle time well below 1 ms is reached.
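The reconstruction step described above, solving a constrained linear system built from a pre-computed geometry matrix with a parallelizable iterative method, can be sketched as a projected Landweber iteration. This is a generic illustration under assumed toy geometry, not the ASDEX Upgrade implementation; the matrix and signal values are hypothetical.

```python
import numpy as np

def projected_landweber(T, s, n_iter=200, step=None):
    """Solve T @ p ~ s for a nonnegative p (radiated power density).

    T : (n_channels, n_pixels) pre-computed geometry matrix
    s : (n_channels,) measured bolometer signals
    """
    if step is None:
        # Convergent step size: below 2 / ||T||_2^2 (spectral norm squared)
        step = 1.0 / np.linalg.norm(T, 2) ** 2
    p = np.zeros(T.shape[1])
    for _ in range(n_iter):
        p += step * T.T @ (s - T @ p)   # gradient step on ||T p - s||^2
        np.maximum(p, 0.0, out=p)       # project onto the nonnegativity constraint
    return p

# Toy example: 3 lines of sight viewing 4 pixels
T = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
p_true = np.array([0.5, 1.0, 0.2, 0.8])
s = T @ p_true
p_est = projected_landweber(T, s, n_iter=5000)
```

In a real-time setting the per-iteration matrix-vector products are what gets parallelized across cores; the fixed geometry matrix is computed once, offline.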

  7. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler


    This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  8. Parallel plasma fluid turbulence calculations (United States)

    Leboeuf, J. N.; Carreras, B. A.; Charlton, L. A.; Drake, J. B.; Lynch, V. E.; Newman, D. E.; Sidikman, K. L.; Spong, D. A.

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.

  9. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers


    The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.

  10. Inductive Temporal Logic Programming


    Kolter, Robert


    We study the extension of techniques from Inductive Logic Programming (ILP) to temporal logic programming languages. To this end, we present two temporal logic programming languages and analyse the learnability of programs in these languages from finite sets of examples. For first-order temporal logic the following topics are analysed: How can we characterize the denotational semantics of programs? Which proof techniques are best suited? How complex is the learning task? In propositional ...

  11. 4D Wavelet-Based Regularization for Parallel MRI Reconstruction: Impact on Subject and Group-Levels Statistical Sensitivity in fMRI

    CERN Document Server

    Chaari, Lotfi; Badillo, Solveig; Pesquet, Jean-Christophe; Ciuciu, Philippe


    Parallel MRI is a fast imaging technique that enables the acquisition of highly resolved images in space. It relies on k-space undersampling and multiple receiver coils with complementary sensitivity profiles in order to reconstruct a full Field-Of-View (FOV) image. The performance of parallel imaging mainly depends on the reconstruction algorithm, which can proceed either in the original k-space (GRAPPA, SMASH) or in the image domain (SENSE-like methods). To improve the performance of the widely used SENSE algorithm, 2D- or slice-specific regularization in the wavelet domain has been efficiently investigated. In this paper, we extend this approach using 3D wavelet representations in order to handle all slices together and address reconstruction artifacts which propagate across adjacent slices. The extension also accounts for temporal correlations that exist between successive scans in functional MRI (fMRI). The proposed 4D reconstruction scheme is fully unsupervised in the sense that all regulariz...

  12. Indeterministic Temporal Logic

    Directory of Open Access Journals (Sweden)

    Trzęsicki Kazimierz


    The questions of determinism, causality, and freedom have been among the main philosophical problems debated since the beginning of temporal logic. The issue of the logical value of sentences about the future was stated by Aristotle in the famous passage on tomorrow's sea battle. The question inspired Łukasiewicz's idea of many-valued logics and was a motive of A. N. Prior's considerations about the logic of tenses. In the scheme of temporal logic there are different solutions to the problem. In the paper we consider indeterministic temporal logic based on the idea of temporal worlds and the relation of accessibility between them.

  13. The NUSTAR data acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Loeher, B.; Toernqvist, H.T. [TU Darmstadt (Germany); GSI (Germany); Agramunt, J. [IFIC, CSIC (Spain); Bendel, M.; Gernhaeuser, R.; Le Bleis, T.; Winkel, M. [TU Muenchen (Germany); Charpy, A.; Heinz, A.; Johansson, H.T. [Chalmers University of Technology (Sweden); Coleman-Smith, P.; Lazarus, I.H.; Pucknell, V.F.E. [STFC Daresbury (United Kingdom); Czermak, A. [IFJ (Poland); Kurz, N.; Nociforo, C.; Pietri, S.; Schaffner, H.; Simon, H. [GSI (Germany); Scheit, H. [TU Darmstadt (Germany); Taieb, J. [CEA (France)


    The NUSTAR (NUclear STructure, Astrophysics and Reactions) collaboration represents one of the four pillars motivating the construction of the international FAIR facility. The diversity of upcoming NUSTAR experiments, including experiments in storage rings, reactions at relativistic energies, and high-precision spectroscopy, is reflected in the diversity of the required detection systems. A challenging task is to incorporate the different needs of individual detectors and components under the umbrella of the unified NUSTAR Data AcQuisition (NDAQ) infrastructure. NDAQ takes up this challenge by providing a high degree of availability via continuously running systems, high flexibility via experiment-specific configuration files for data streams and trigger logic, and distributed timestamps and trigger information over km distances, all built on the solid basis of the GSI Multi-Branch System (MBS). NDAQ ensures interoperability between individual NUSTAR detectors and allows merging of formerly separate data streams according to the needs of all experiments, increasing reliability in NUSTAR data acquisition. An overview of the NDAQ infrastructure and the current progress is presented.

  14. Interlanguage Development by Two Korean Speakers of English with a Focus on Temporality. (United States)

    Lee, Eun-Joo


    Investigates the acquisition of temporality in English by Korean speakers over a period of 24 months. Temporality is examined from two perspectives: the expression of past-time events, and semantic aspect and verb morphology. Results are discussed. (Author/VWL)

  15. First language acquisition. (United States)

    Goodluck, Helen


    This article reviews current approaches to first language acquisition, arguing in favor of the theory that attributes to the child an innate knowledge of universal grammar. Such knowledge can accommodate the systematic nature of children's non-adult linguistic behaviors. The relationships between performance devices (mechanisms for comprehension and production of speech), non-linguistic aspects of cognition, and child grammars are also discussed. WIREs Cogn Sci 2011 2 47-54 DOI: 10.1002/wcs.95

  16. Brain maps and parallel computers. (United States)

    Nelson, M E; Bower, J M


    It is well known that neural responses in many brain regions are organized in characteristic spatial patterns referred to as brain maps. It is likely that these patterns in some way reflect aspects of the neural computations being performed, but to date there are no general guiding principles for relating the structure of a brain map to the properties of the associated computation. In the field of parallel computing, maps similar to brain maps arise when computations are distributed across the multiple processors of a parallel computer. In this case, the relationship between maps and computations is well understood and general principles for optimally mapping computations onto parallel computers have been developed. In this paper we discuss how these principles may help illuminate the relationship between maps and computations in the nervous system.

  17. Fast data parallel polygon rendering

    Energy Technology Data Exchange (ETDEWEB)

    Ortega, F.A.; Hansen, C.D.


    This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted for applications which require very fast polygon rendering for extremely large sets of polygons, such as is found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

  18. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine


    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  19. WFIRST: Science from the Guest Investigator and Parallel Observation Programs (United States)

    Postman, Marc; Nataf, David; Furlanetto, Steve; Milam, Stephanie; Robertson, Brant; Williams, Ben; Teplitz, Harry; Moustakas, Leonidas; Geha, Marla; Gilbert, Karoline; Dickinson, Mark; Scolnic, Daniel; Ravindranath, Swara; Strolger, Louis; Peek, Joshua


    The Wide Field InfraRed Survey Telescope (WFIRST) mission will provide an extremely rich archival dataset that will enable a broad range of scientific investigations beyond the initial objectives of the proposed key survey programs. The scientific impact of WFIRST will thus be significantly expanded by a robust Guest Investigator (GI) archival research program. We will present examples of GI research opportunities ranging from studies of the properties of a variety of Solar System objects, surveys of the outer Milky Way halo, and comprehensive studies of cluster galaxies, to unique and new constraints on the epoch of cosmic re-ionization and the assembly of galaxies in the early universe. WFIRST will also support the acquisition of deep wide-field imaging and slitless spectroscopic data obtained in parallel during campaigns with the coronagraphic instrument (CGI). These parallel wide-field imager (WFI) datasets can provide deep imaging data covering several square degrees with no impact on the scheduling of the CGI program. A competitively selected program of well-designed parallel WFI observation programs will, like the GI science above, maximize the overall scientific impact of WFIRST. We will give two examples of parallel observations that could be conducted during a proposed CGI program centered on a dozen nearby stars.

  20. Regularization of parallel MRI reconstruction using in vivo coil sensitivities (United States)

    Duan, Qi; Otazo, Ricardo; Xu, Jian; Sodickson, Daniel K.


    Parallel MRI can achieve increased spatiotemporal resolution in MRI by simultaneously sampling reduced k-space data with multiple receiver coils. One requirement that different parallel MRI techniques have in common is the need to determine spatial sensitivity information for the coil array. This is often done by smoothing the raw sensitivities obtained from low-resolution calibration images, for example via polynomial fitting. However, this sensitivity post-processing can be both time-consuming and error-prone. Another important factor in parallel MRI is noise amplification in the reconstruction, which is due to non-unity transformations in the image reconstruction associated with spatially correlated coil sensitivity profiles. Generally, regularization approaches, such as Tikhonov and SVD-based methods, are applied to reduce SNR loss, at the price of introducing residual aliasing. In this work, we present a regularization approach using in vivo coil sensitivities in parallel MRI to avoid introducing these potential errors into the reconstruction. The mathematical background of the proposed method is explained, and the technique is demonstrated with phantom images. The effectiveness of the proposed method is then illustrated clinically in a whole-heart 3D cardiac MR acquisition within a single breath-hold. The proposed method can not only overcome the sensitivity calibration problem, but also suppress a substantial portion of reconstruction-related noise without noticeable introduction of residual aliasing artifacts.
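The core operation shared by SENSE-type methods, unfolding each set of aliased pixels by solving a small regularized least-squares system built from the coil sensitivities, can be sketched as follows. This is a generic Tikhonov-regularized SENSE unfold with hypothetical random sensitivities, not the in vivo calibration method proposed in the paper.

```python
import numpy as np

def sense_unfold_tikhonov(S, a, lam=0.01):
    """Unfold one aliased pixel with Tikhonov-regularized SENSE.

    S   : (n_coils, R) complex coil sensitivities at the R overlapping pixels
    a   : (n_coils,) complex aliased coil measurements
    lam : regularization weight; trades residual aliasing for noise suppression
    """
    # Regularized normal equations: (S^H S + lam I) x = S^H a
    ShS = S.conj().T @ S
    return np.linalg.solve(ShS + lam * np.eye(S.shape[1]), S.conj().T @ a)

# Toy example: 4 coils, reduction factor R = 2 (hypothetical sensitivities)
rng = np.random.default_rng(0)
S = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
x_true = np.array([1.0 + 0.5j, 0.3 - 0.2j])
a = S @ x_true                      # noiseless folded measurement
x_est = sense_unfold_tikhonov(S, a, lam=1e-6)
```

Larger `lam` damps noise amplification where the sensitivity columns are nearly collinear, at the cost of a small bias (residual aliasing), which is the trade-off the abstract describes.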

  1. Innovative Uses of Parallel Computers (United States)


    by the title of our proposal to AFOSR: "Innovative Uses of Parallel Computers." It aims to use advanced computers in innovative ways that bypass both... (Remainder of record is illegible OCR of the report documentation page; recoverable details: Final Report, 1 Nov 1988 to 31 Oct 1989; grant AFOSR-89-0119; Plasma Fusion Center.)

  2. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J


    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  3. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari


    In this paper, we study parallel I/O-efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest common ancestors, tree contraction and expression tree evaluation. We also study the problems of computing the connected and biconnected components of a graph, the minimum spanning tree of a connected graph and the ear decomposition of a biconnected graph. All our solutions on a P-processor PEM model provide...
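List ranking, the problem named above, asks for each node's distance to the end of a linked list; the classic parallel approach is pointer jumping, which doubles the distance covered by every pointer each round. A minimal vectorized sketch of that idea (not the PEM-specific algorithm from the paper):

```python
import numpy as np

def list_rank(succ):
    """Rank each node of a linked list by pointer jumping.

    succ[i] is the successor of node i; the tail points to itself.
    Returns rank[i] = number of links from node i to the tail.
    In a parallel (e.g. PEM) setting each jumping round runs
    concurrently over all nodes; here the rounds are vectorized.
    """
    succ = np.array(succ)
    rank = (succ != np.arange(len(succ))).astype(int)  # 1 unless tail
    # O(log n) rounds: each round doubles the span of every pointer
    for _ in range(max(1, int(np.ceil(np.log2(len(succ)))))):
        rank = rank + rank[succ]   # accumulate the jumped-over distance
        succ = succ[succ]          # jump: point to the successor's successor
    return rank

# List 0 -> 1 -> 2 -> 3 (tail)
print(list_rank([1, 2, 3, 3]))  # → [3 2 1 0]
```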

  4. Parallelism in practice: approaches to parallelism in bioassays. (United States)

    Fleetwood, Kelly; Bursa, Francis; Yellowlees, Ann


    Relative potency bioassays are used to estimate the potency of a test biological product relative to a standard or reference product. It is established practice to assess the parallelism of the dose-response curves of the products prior to calculating relative potency. This paper provides a review of parallelism testing for bioassays. In particular, three common methods for parallelism testing are reviewed: two significance tests (the F-test and the χ²-test) and an equivalence test. Simulation is used to compare these methods. We compare sensitivity, specificity, and receiver operating characteristic curves, and find that both the χ²-test and the equivalence test outperform the F-test on average, unless the assay-to-assay variation is considerable. No single method is optimal in all situations. We describe how bioassay scientists and statisticians can work together to determine the best approach for each bioassay, taking into account its properties and the context in which it is applied. Bioassays are experiments that use living organisms, tissues, or cells to measure the concentration of a pharmaceutical. Typically, the response of the living matter to a test sample with an unknown concentration of a pharmaceutical is compared to the response to a standard reference sample with a known concentration. An important step in the analysis of bioassays is checking that the test sample is responding like a diluted copy of the reference sample; this is known as testing for parallelism. There are three statistical methods commonly used to test for parallelism: the F-test, the χ²-test, and the equivalence test. This paper compares the three methods using computer simulations. We conclude that different methods are best in different situations, and we provide guidelines to help bioassay scientists and statisticians decide which method to use. © PDA, Inc. 2015.
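The F-test for parallelism mentioned above compares a full model with separate slopes for test and reference against a reduced model constrained to a common slope. A minimal sketch for straight-line log-dose/response fits, with hypothetical assay data:

```python
import numpy as np

def parallelism_f_test(log_dose, resp, is_test):
    """F-test for parallelism of two linear log-dose/response lines.

    Full model: separate intercept and slope per sample.
    Reduced model: separate intercepts, common slope (parallel lines).
    Returns (F, df1, df2); compare F against an F(df1, df2) critical value.
    """
    x, y, g = map(np.asarray, (log_dose, resp, is_test))
    n = len(y)
    # Design matrices: per-group intercept columns, then slope column(s)
    X_full = np.column_stack([1 - g, g, (1 - g) * x, g * x])   # two slopes
    X_red = np.column_stack([1 - g, g, x])                     # common slope
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    rss_full, rss_red = rss(X_full), rss(X_red)
    df1, df2 = 1, n - X_full.shape[1]
    F = ((rss_red - rss_full) / df1) / (rss_full / df2)
    return F, df1, df2

# Hypothetical assay: test sample behaves as a diluted copy (parallel lines)
rng = np.random.default_rng(1)
x = np.tile(np.log10([1, 2, 4, 8, 16]), 2)
g = np.repeat([0, 1], 5)                  # 0 = reference, 1 = test
y = 2.0 + 1.5 * x - 0.4 * g + rng.normal(0, 0.05, 10)
F, df1, df2 = parallelism_f_test(x, y, g)
```

A large F (relative to the F(df1, df2) critical value) rejects parallelism; the χ²-test and equivalence test discussed in the paper replace this significance-test framing with different decision rules.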

  5. Parallel multiscale simulations of a brain aneurysm (United States)

    Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em


    Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multi-scale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future

  6. Parallel multiscale simulations of a brain aneurysm (United States)

    Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em


    Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multiscale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NɛκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NɛκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300 K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future

  7. Parallel multiscale simulations of a brain aneurysm

    Energy Technology Data Exchange (ETDEWEB)

    Grinberg, Leopold [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States); Fedosov, Dmitry A. [Institute of Complex Systems and Institute for Advanced Simulation, Forschungszentrum Jülich, Jülich 52425 (Germany); Karniadakis, George Em, E-mail: [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)


    Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multiscale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier–Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300 K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in

  8. About Certain Semantic Annotation in Parallel Corpora

    Directory of Open Access Journals (Sweden)

    Violetta Koseska-Toszewa


    The semantic notation analyzed in this work is contained in the second stream of semantic theories presented here, the direct-approach semantics. We used this stream in our work on the Bulgarian-Polish Contrastive Grammar. Our semantic notation distinguishes quantificational meanings of names and predicates, and indicates aspectual and temporal meanings of verbs. It relies on logical scope-based quantification and on the contemporary theory of processes known as "Petri nets". Thanks to it, we can distinguish precisely between a language form and its contents; e.g., a perfective verb form has two meanings: an event, or a sequence of events and states finally ended with an event. An imperfective verb form also has two meanings: a state, or a sequence of states and events finally ended with a state. In turn, names are quantified universally or existentially when they are "undefined", and uniquely (using the iota operator) when they are "defined". A fact worth emphasizing is the possibility of quantifying not only names but also the predicate, in which case quantification concerns time and aspect. This is a novum in elaborating sentence-level semantics in parallel corpora. For this reason, our semantic notation is manual. We hope that it will raise the interest of computer scientists working on automatic methods for processing the given natural languages. Semantic annotation defined as in this work will facilitate contrastive studies of natural languages; this in turn will verify the results of those studies and will certainly facilitate human and machine translation.

  9. Continued Data Acquisition Development

    Energy Technology Data Exchange (ETDEWEB)

    Schwellenbach, David [National Security Technologies, LLC. (NSTec), Mercury, NV (United States)


    This task focused on improving techniques for integrating data acquisition of secondary particles correlated in time with detected cosmic-ray muons. Scintillation detectors with Pulse Shape Discrimination (PSD) capability show the most promise as a detector technology, based on work in FY13. Typically, PSD parameters are determined prior to an experiment and the results are based on these parameters. By saving data in list mode, including the fully digitized waveform, any experiment can effectively be replayed to adjust PSD and other parameters for the best data capture. List mode requires time synchronization of two independent data acquisition (DAQ) systems: the muon tracker and the particle detector system. Techniques to synchronize these systems were studied. Two basic techniques were identified: real-time mode and sequential mode. Real-time mode is the preferred approach but has proven to be a significant challenge, since two FPGA systems with different clocking parameters must be synchronized. Sequential processing is expected to work with virtually any DAQ but requires more post-processing to extract the data.
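As a concrete illustration of why list-mode replay is useful: a common charge-comparison form of PSD reduces each digitized waveform to a tail-to-total integral ratio, and re-running the analysis with different integration windows is exactly what replaying list-mode data allows. The window parameters and synthetic pulses below are hypothetical, not the values from this work:

```python
import numpy as np

def psd_ratio(waveform, baseline_n=20, tail_offset=8):
    """Charge-comparison PSD on one digitized pulse.

    Ratio of the integral starting `tail_offset` samples past the peak
    to the total integral; larger tails indicate neutron-like pulses.
    Window parameters are illustrative and would be tuned by replaying
    list-mode data with different settings.
    """
    w = np.asarray(waveform, dtype=float)
    w = w - w[:baseline_n].mean()          # baseline subtraction
    peak = int(np.argmax(w))
    total = w[peak:].sum()
    tail = w[peak + tail_offset:].sum()
    return tail / total

# Synthetic pulses: fast (gamma-like) vs slow (neutron-like) decay
t = np.arange(200)
def pulse(decay):
    p = np.zeros(200)
    p[30:] = np.exp(-(t[30:] - 30) / decay)   # exponential tail after sample 30
    return p + 0.01                           # small constant baseline

gamma_like = pulse(5.0)
neutron_like = pulse(25.0)
print(psd_ratio(gamma_like) < psd_ratio(neutron_like))  # → True
```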

  10. Unsupervised Language Acquisition (United States)

    de Marcken, Carl


    This thesis presents a computational theory of unsupervised language acquisition, precisely defining procedures for learning language from ordinary spoken or written utterances, with no explicit help from a teacher. The theory is based heavily on concepts borrowed from machine learning and statistical estimation. In particular, learning takes place by fitting a stochastic, generative model of language to the evidence. Much of the thesis is devoted to explaining conditions that must hold for this general learning strategy to arrive at linguistically desirable grammars. The thesis introduces a variety of technical innovations, among them a common representation for evidence and grammars, and a learning strategy that separates the "content" of linguistic parameters from their representation. Algorithms based on it suffer from few of the search problems that have plagued other computational approaches to language acquisition. The theory has been tested on problems of learning vocabularies and grammars from unsegmented text and continuous speech, and mappings between sound and representations of meaning. It performs extremely well on various objective criteria, acquiring knowledge that causes it to assign almost exactly the same structure to utterances as humans do. This work has application to data compression, language modeling, speech recognition, machine translation, information retrieval, and other tasks that rely on either structural or stochastic descriptions of language.

  11. Temporal properties of stereopsis

    NARCIS (Netherlands)

    Gheorghiu, E.


    The goal of the research presented in this thesis was to investigate temporal properties of disparity processing and depth perception in human subjects, in response to dynamic stimuli. The results presented in various chapters, reporting findings about different temporal aspects of disparity

  12. Temporal Linear System Structure

    NARCIS (Netherlands)

    Willigenburg, van L.G.; Koning, de W.L.


    Piecewise constant rank systems and the differential Kalman decomposition are introduced in this note. Together these enable the detection of temporal uncontrollability/unreconstructability of linear continuous-time systems. These temporal properties are not detected by any of the four conventional

  13. Temporal Photon Differentials

    DEFF Research Database (Denmark)

    Schjøth, Lars; Frisvad, Jeppe Revall; Erleben, Kenny


    The finite frame rate also used in computer-animated films is a cause of adverse temporal aliasing effects. Most noticeable of these is a stroboscopic effect that is seen as intermittent movement of fast moving illumination. This effect can be mitigated using non-zero shutter times, effectively constituting a temporal smoothing of rapidly changing illumination. In global illumination, temporal smoothing can be achieved with distribution ray tracing (Cook et al., 1984). Unfortunately, this and resembling methods require a high temporal resolution, as samples have to be drawn from in-between frames. We present a novel method which is able to produce high quality temporal smoothing for indirect illumination without using in-between frames. Our method is based on ray differentials (Igehy, 1999) as extended in (Sporring et al., 2009). Light rays are traced as bundles, creating footprints, which...

  14. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter


    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  15. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens


    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized, and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  16. Scientific computing on bulk synchronous parallel architectures

    NARCIS (Netherlands)

    Bisseling, R.H.; McColl, W.F.


    Bulk synchronous parallel architectures offer the prospect of achieving both scalable parallel performance and architecture-independent parallel software. They provide a robust model on which to base the future development of general purpose parallel computing systems. In this paper, we theoretically

  17. Parallel distributed computing using Python (United States)

    Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro


    This work presents two software components aimed to relieve the costs of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state-of-the-art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated with PETSc-FEM, an MPI- and PETSc-based parallel, multiphysics, finite element code developed at the CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.
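
    The message-passing pattern that MPI for Python exposes can be illustrated with a stdlib-only stand-in: real mpi4py code needs an MPI runtime, so this sketch mimics the scatter/compute/gather structure with threads and a queue. The function names and the dot-product example are our own assumptions, not part of the package's API.

```python
import queue
import threading

def worker(x_chunk, y_chunk, outbox):
    # each "rank" computes a partial dot product and sends it back
    outbox.put(sum(a * b for a, b in zip(x_chunk, y_chunk)))

def parallel_dot(x, y, nranks=2):
    outbox = queue.Queue()
    chunk = len(x) // nranks
    threads = []
    for r in range(nranks):
        # scatter: give each rank a contiguous slice of the vectors
        lo, hi = r * chunk, (len(x) if r == nranks - 1 else (r + 1) * chunk)
        t = threading.Thread(target=worker, args=(x[lo:hi], y[lo:hi], outbox))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    # gather and reduce the partial results, as "rank 0" would in MPI
    return sum(outbox.get() for _ in threads)

print(parallel_dot(list(range(8)), [1.0] * 8))  # → 28.0
```

    In actual MPI for Python code the slices would live on separate processes and the reduction would be a collective call; the data flow, however, is the same.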

  18. Reflections on parallel functional languages

    NARCIS (Netherlands)

    Vrancken, J.L.M.

    Are parallel functional languages feasible? The large majority of the current projects investigating this question are based on MIMD machines and the current set of implementation methods for functional languages, namely graph rewriting and combinators. We regret that we have to come to a

  19. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these classes can be used...
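
    The report's C++ classes are not reproduced here, but the kernel such a routine parallelizes — a sparse matrix-vector product over compressed sparse row (CSR) storage, whose independent row loop is the natural place for MPI or OpenMP partitioning — can be sketched in Python. The CSR layout is standard; the function name is our own.

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for A stored in CSR form; rows are independent,
    so the outer loop is what MPI/OpenMP versions partition."""
    y = []
    for r in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

# A = [[1, 0, 2],
#      [0, 3, 0]]
print(csr_matvec([1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3], [1.0, 1.0, 1.0]))
# → [3.0, 3.0]
```

    A hybrid MPI-OpenMP version would distribute blocks of rows across nodes and thread the row loop within each node.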

  20. [Falsified medicines in parallel trade]. (United States)

    Muckenfuß, Heide


    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e. g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  1. Elongation Cutoff Technique: Parallel Performance

    Directory of Open Access Journals (Sweden)

    Jacek Korchowiec


    Full Text Available It is demonstrated that the elongation cutoff technique (ECT) substantially speeds up the quantum-chemical calculation at the Hartree-Fock (HF) level of theory and is especially well suited for parallel performance. A comparison of ECT timings for water chains with the reference HF calculations is given. The analysis includes the overall CPU (central processing unit) time and its most time consuming steps.

  2. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.


    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

  3. Parallel and Distributed Databases: Introduction

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Kemper, Alfons; Prieto, Manuel; Szalay, Alex

    Euro-Par Topic 5 addresses data management issues in parallel and distributed computing. Advances in data management (storage, access, querying, retrieval, mining) are inherent to current and future information systems. Today, accessing large volumes of information is a reality: Data-intensive

  4. Lightweight Specifications for Parallel Correctness (United States)


    The reads and writes occur inside the constructor of a temporary object created in each iteration.) To a naïve, traditional conflict...pp. 207–227. [10] Krste Asanovic et al. The Parallel Computing Laboratory at U.C. Berkeley: A Research Agenda Based on the Berkeley View. Tech. rep

  5. Matpar: Parallel Extensions for MATLAB (United States)

    Springer, P. L.


    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  6. Target Acquisition Methodology Enhancement (TAME) (United States)


    acquisition probability from COMINT, PCOM, against all communications type targets, is determined offline by a stochastic model for subsequent...of PN over all replications. F-4 f. Computes overall acquisition probability, PCOM, against all communications type targets as C: PCOM = (1. - PN...NNET where NNET denotes the number of nets which the target is in. F-6. TOTAL ACQUISITION PROBABILITY. With PNCj and PCOM computed as above, the

  7. Collection assessment and acquisitions budgets

    CERN Document Server

    Lee, Sul H


    This invaluable new book contains timely information about the assessment of academic library collections and the relationship of collection assessment to acquisition budgets. The rising cost of information significantly influences academic libraries' abilities to acquire the necessary materials for students and faculty, and public libraries' abilities to acquire material for their clientele. Collection Assessment and Acquisitions Budgets examines different aspects of the relationship between the assessment of academic library collections and the management of library acquisition budgets. Librar

  8. A parallelism viewpoint to analyze performance bottlenecks of parallelism-intensive software systems


    Muhammad, Naeem; Boucké, Nelis; Berbers, Yolande


    The use of parallelism enhances the performance of a software system. However, its excessive use can degrade the system performance. In this paper we propose a parallelism viewpoint to optimize the use of parallelism by eliminating unnecessarily used parallelism in legacy systems. The parallelism viewpoint describes parallelism of the system in order to analyze multiple overheads associated with its threads. We use the proposed viewpoint to find parallelism specific performance overheads of a...

  9. Temporal properties of stereopsis (United States)

    Gheorghiu, E.


    The goal of the research presented in this thesis was to investigate temporal properties of disparity processing and depth perception in human subjects, in response to dynamic stimuli. The results presented in various chapters, reporting findings about different temporal aspects of disparity processing, are based on psychophysical experiments and computational model analysis. In chapter 1 we investigated which processes of binocular depth perception in dynamic random-dot stereograms (DRS), i.e., tolerance for interocular delays and temporal integration of correlation, are responsible for the temporal flexibility of the stereoscopic system. Our results demonstrate that (i) disparities from simultaneous monocular inputs dominate those from interocular delayed inputs; (ii) stereopsis is limited by temporal properties of monocular luminance mechanisms; (iii) depth perception in DRS results from cross-correlation-like operation on two simultaneous monocular inputs that represent the retinal images after having been subjected to a process of monocular temporal integration of luminance. In chapter 2 we examined what temporal information is exploited by the mechanisms underlying stereoscopic motion in depth. We investigated systematically the influence of temporal frequency on binocular depth perception in temporally correlated and temporally uncorrelated DRS. Our results show that disparity-defined depth is judged differently in temporally correlated and uncorrelated DRS above a temporal frequency of about 3 Hz. The results and simulations indicate that: (i) above about 20 Hz, the complete absence of stereomotion is caused by temporal integration of luminance; (ii) the difference in perceived depth in temporally correlated and temporally uncorrelated DRS for temporal frequencies between 20 and 3 Hz, is caused by temporal integration of disparity. In chapter 3 we investigated temporal properties of stereopsis at different spatial scales in response to sustained and

  10. Ultrafast 3D spin-echo acquisition improves Gadolinium-enhanced MRI signal contrast enhancement (United States)

    Han, S. H.; Cho, F. H.; Song, Y. K.; Paulsen, J.; Song, Y. Q.; Kim, Y. R.; Kim, J. K.; Cho, G.; Cho, H.


    Long scan times of 3D volumetric MR acquisitions usually necessitate ultrafast in vivo gradient-echo acquisitions, which are intrinsically susceptible to magnetic field inhomogeneities. This is especially problematic for contrast-enhanced (CE)-MRI applications, where non-negligible T2* effect of contrast agent deteriorates the positive signal contrast and limits the available range of MR acquisition parameters and injection doses. To overcome these shortcomings without degrading temporal resolution, ultrafast spin-echo acquisitions were implemented. Specifically, a multiplicative acceleration factor from multiple spin echoes (×32) and compressed sensing (CS) sampling (×8) allowed highly-accelerated 3D Multiple-Modulation-Multiple-Echo (MMME) acquisition. At the same time, the CE-MRI of kidney with Gd-DOTA showed significantly improved signal enhancement for CS-MMME acquisitions (×7) over that of corresponding FLASH acquisitions (×2). Increased positive contrast enhancement and highly accelerated acquisition of extended volume with reduced RF irradiations will be beneficial for oncological and nephrological applications, in which the accurate in vivo 3D quantification of contrast agent concentration is necessary with high temporal resolution. PMID:24863102

  11. Data Acquisition Systems course

    CERN Multimedia

    CERN. Geneva HR-RFA


    We will review the main physics and operational requirements on the Trigger and Data Acquisition (DAQ) systems of the LHC experiments. A description of the architecture of the various systems, the motivation of each alternative and the conceptual design of each filtering stage will be discussed. We will then turn to a description of the major elements of the three distinct sub-systems, namely the Level-1 trigger, the DAQ with its event-building and overall control and monitor, and finally the High-Level trigger system and the online processor farms. The thrust of the two lectures will be to provide a "broad brush" picture of the functionality of these systems.

  12. Prestack Parallel Modeling of Dispersive and Attenuative Medium

    Directory of Open Access Journals (Sweden)

    How-Wei Chen


    Full Text Available This study presents an efficient parallelized staggered grid pseudospectral method for 2-D viscoacoustic seismic waveform modeling that runs on a high-performance multi-processor computer and an in-house developed PC cluster. Parallel simulation permits several processors to be used for solving a single large problem with a high computation to communication ratio. Thus, parallelizing the serial scheme effectively reduces the computation time. Computational results indicate a reasonably consistent parallel performance when using different FFTs in pseudospectral computations. Meanwhile, a virtually perfect linear speedup can be achieved in a distributed-memory multi-processor environment. Effectiveness of the proposed algorithm is demonstrated using synthetic examples by simulating multiple shot gathers consistent with field coordinates. For dispersive and attenuating media, the propagating wavefield possesses observable differences in waveform, amplitude and travel-times. The resulting effects on seismic signals, such as the decreased amplitude because of intrinsic Q and temporal shift because of physical dispersion phenomena, can be analyzed quantitatively. Anelastic effects become more visible owing to cumulative propagation effects. Field data application is presented in simulating OBS wide-angle seismic marine data for deep crustal structure study. The fine details of deep crustal velocity and attenuation structures in the survey area can be resolved by comparing simulated waveforms with observed seismograms recorded at various distances. Parallel performance is analyzed through speedup and efficiency for a variety of computing platforms. Effective parallel implementation requires numerous independent CPU intensive sub-jobs with low latency and high bandwidth inter-processor communication.
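
    The speedup and efficiency metrics used above to analyze parallel performance follow the standard definitions S_p = T_1/T_p and E_p = S_p/p; a minimal sketch (the function names are ours, not from the paper):

```python
def speedup(t_serial, t_parallel):
    # S_p = T_1 / T_p: how many times faster the parallel run is
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    # E_p = S_p / p; 1.0 corresponds to the "virtually perfect linear speedup" case
    return speedup(t_serial, t_parallel) / p

print(speedup(120.0, 30.0), efficiency(120.0, 30.0, 4))  # → 4.0 1.0
```

    Efficiency below 1.0 signals communication or load-balance overhead, which is why the abstract stresses a high computation-to-communication ratio.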

  13. Theories of language acquisition. (United States)

    Vetter, H J; Howell, R W


    Prior to the advent of generative grammar, theoretical approaches to language development relied heavily upon the concepts of differential reinforcement and imitation. Current studies of linguistic acquisition are largely dominated by the hypothesis that the child constructs his language on the basis of a primitive grammar which gradually evolves into a more complex grammar. This approach presupposes that the investigator does not impose his own grammatical rules on the utterances of the child; that the sound system of the child and the rules he employs to form sentences are to be described in their own terms, independently of the model provided by the adult linguistic community; and that there is a series of steps or stages through which the child passes on his way toward mastery of the adult grammar in his linguistic environment. This paper attempts to trace the development of human vocalization through prelinguistic stages to the development of what can be clearly recognized as language behavior, and then progresses to transitional phases in which the language of the child begins to approximate that of the adult model. In the view of the authors, the most challenging problems which confront theories of linguistic acquisition arise in seeking to account for structure of sound sequences, in the rules that enable the speaker to go from meaning to sound and which enable the listener to go from sound to meaning. The principal area of concern for the investigator, according to the authors, is the discovery of those rules at various stages of the learning process. The paper concludes with a return to the question of what constitutes an adequate theory of language ontogenesis. It is suggested that such a theory will have to be keyed to theories of cognitive development and will have to include and go beyond a theory which accounts for adult language competence and performance, since these represent only the terminal stage of linguistic ontogenesis.

  14. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi


    Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have broadly enhanced parallelism. Compilers are being updated to address the resulting synchronization and threading challenges. Appropriate program and algorithm classification greatly helps software engineers identify opportunities for effective parallelization. In the present work we investigated current species-based classification of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms is chosen that matches the structure of different issues and performs given tasks. We tested these algorithms utilizing existing automatic species extraction tools along with the Bones compiler. We added functionalities to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented these extensions in the tool, enabling automatic characterization of program code.

  15. Updated NGNP Fuel Acquisition Strategy

    Energy Technology Data Exchange (ETDEWEB)

    David Petti; Tim Abram; Richard Hobbins; Jim Kendall


    A Next Generation Nuclear Plant (NGNP) fuel acquisition strategy was first established in 2007. In that report, a detailed technical assessment of potential fuel vendors for the first core of NGNP was conducted by an independent group of international experts based on input from the three major reactor vendor teams. Part of the assessment included an evaluation of the credibility of each option, along with a cost and schedule to implement each strategy compared with the schedule and throughput needs of the NGNP project. While credible options were identified based on the conditions in place at the time, many changes in the assumptions underlying the strategy and in externalities have occurred in the interim, requiring that the options be re-evaluated. This document presents an update to that strategy based on current capabilities for fuel fabrication as well as fuel performance and qualification testing worldwide. In light of the recent Pebble Bed Modular Reactor (PBMR) project closure, the Advanced Gas Reactor (AGR) fuel development and qualification program needs to support both pebble and prismatic options under the NGNP project. A number of assumptions were established that formed a context for the evaluation. Of these, the most important are: • Based on logistics associated with the on-going engineering design activities, vendor teams would start preliminary design in October 2012 and complete in May 2014. A decision on reactor type will be made following preliminary design, with the decision process assumed to be completed in January 2015. Thus, no fuel decision (pebble or prismatic) will be made in the near term. • Activities necessary for both pebble and prismatic fuel qualification will be conducted in parallel until a fuel form selection is made. As such, process development, fuel fabrication, irradiation, and testing for pebble and prismatic options should not negatively influence each other during the period prior to a decision on reactor type

  16. Parallel deterioration to language processing in a bilingual speaker. (United States)

    Druks, Judit; Weekes, Brendan Stuart


    The convergence hypothesis [Green, D. W. (2003). The neural basis of the lexicon and the grammar in L2 acquisition: The convergence hypothesis. In R. van Hout, A. Hulk, F. Kuiken, & R. Towell (Eds.), The interface between syntax and the lexicon in second language acquisition (pp. 197-218). Amsterdam: John Benjamins] assumes that the neural substrates of language representations are shared between the languages of a bilingual speaker. One prediction of this hypothesis is that neurodegenerative disease should produce parallel deterioration to lexical and grammatical processing in bilingual aphasia. We tested this prediction with a late bilingual Hungarian (first language, L1)-English (second language, L2) speaker J.B. who had nonfluent progressive aphasia (NFPA). J.B. had acquired L2 in adolescence but was premorbidly proficient and used English as his dominant language throughout adult life. Our investigations showed comparable deterioration to lexical and grammatical knowledge in both languages during a one-year period. Parallel deterioration to language processing in a bilingual speaker with NFPA challenges the assumption that L1 and L2 rely on different brain mechanisms as assumed in some theories of bilingual language processing [Ullman, M. T. (2001). The neural basis of lexicon and grammar in first and second language: The declarative/procedural model. Bilingualism: Language and Cognition, 4(1), 105-122].

  17. Implicit temporal expectation attenuates auditory attentional blink.

    Directory of Open Access Journals (Sweden)

    Dawei Shen

    Full Text Available Attentional blink (AB) describes a phenomenon whereby correct identification of a first target impairs the processing of a second target (i.e., probe) nearby in time. Evidence suggests that explicit attention orienting in the time domain can attenuate the AB. Here, we used scalp-recorded event-related potentials to examine whether auditory AB is also sensitive to implicit temporal attention orienting. Expectations were set up implicitly by varying the probability (i.e., 80% or 20%) that the probe would occur at the +2 or +8 position following target presentation. Participants showed a significant AB, which was reduced with the increased probe probability at the +2 position. The probe probability effect was paralleled by an increase in P3b amplitude elicited by the probe. The results suggest that implicit temporal attention orienting can facilitate short-term consolidation of the probe and attenuate auditory AB.

  18. Towards Temporal Graph Databases


    Campos, Alexander; Mozzino, Jorge; Vaisman, Alejandro


    In spite of the extensive literature on graph databases (GDBs), temporal GDBs have not received much attention so far. Temporal GDBs can capture, for example, the evolution of social networks across time, a relevant topic in data analysis nowadays. In this paper we propose a data model and query language (denoted TEG-QL) for temporal GDBs, based on the notion of attribute graphs. This allows a straightforward translation to Neo4J, a well-known GDB. We present extensive examples of the use...

  19. A reusable knowledge acquisition shell: KASH (United States)

    Westphal, Christopher; Williams, Stephen; Keech, Virginia


    KASH (Knowledge Acquisition SHell) is proposed to assist a knowledge engineer by providing a set of utilities for constructing knowledge acquisition sessions based on interviewing techniques. The information elicited from domain experts during the sessions is guided by a question dependency graph (QDG). The QDG, defined by the knowledge engineer, consists of a series of control questions about the domain that are used to organize the knowledge of an expert. The content information supplied by the expert, in response to the questions, is represented in the form of a concept map. These maps can be constructed in a top-down or bottom-up manner by the QDG and used by KASH to generate the rules for a large class of expert system domains. Additionally, the concept maps can support the representation of temporal knowledge. The high degree of reusability encountered in the QDG and concept maps can vastly reduce the development times and costs associated with producing intelligent decision aids, training programs, and process control functions.

  20. A parallel Fast Fourier transform

    CERN Document Server

    Morante, S; Salina, G


    In this paper we discuss the general problem of implementing the multidimensional Fast Fourier Transform algorithm on parallel computers. We show that, on a machine with P processors and fully parallel node communications, the optimal asymptotic scaling behavior of the total computational time with the number of data points, N, given in d dimensions by the formula aN/P·log(N/P) + bN/P^((d-1)/d), can actually be achieved on realistic platforms. As a concrete realization of our strategy, we have produced codes efficiently running on machines of the APE family and on the Cray T3E. On the former, for asymptotic values of N, our codes attain the above optimal result. (16 refs).
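
    Reading the abstract's scaling formula as T(N, P) = aN/P·log(N/P) + bN/P^((d-1)/d) — the first term per-processor FFT work, the second inter-node communication — it can be evaluated numerically. The constants a and b and the function name below are illustrative assumptions, not values from the paper.

```python
import math

def fft_time_model(N, P, d, a=1.0, b=1.0):
    # a*(N/P)*log(N/P): per-processor FFT work
    # b*N/P**((d-1)/d): communication cost of redistributing the d-dim data
    return a * (N / P) * math.log(N / P) + b * N / P ** ((d - 1) / d)

# for fixed N and d, more processors should predict a smaller total time
print(fft_time_model(2**20, 64, 3) < fft_time_model(2**20, 16, 3))  # → True
```

    Both terms shrink as P grows, but the communication term decays more slowly (as P^((d-1)/d)), which is why it eventually dominates the achievable speedup.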

  1. Merlin - Massively parallel heterogeneous computing (United States)

    Wittie, Larry; Maples, Creve


    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  2. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  3. GPU Parallel Bundle Block Adjustment

    Directory of Open Access Journals (Sweden)

    ZHENG Maoteng


    To deal with massive data in photogrammetry, we introduce GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton methods are also applied to reduce the number of iterations when solving the normal equation. A brand-new bundle adjustment workflow is developed to exploit GPU parallel computing. Our method avoids storing and inverting the large normal matrix by computing it in real time. The proposed method not only greatly reduces the memory requirement of the normal matrix, but also greatly improves the efficiency of bundle adjustment, while achieving the same accuracy as the conventional method. Preliminary experimental results show that bundle adjustment of a dataset with about 4500 images and 9 million image points can be completed in only 1.5 minutes while achieving sub-pixel accuracy.
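
    The preconditioned conjugate gradient step mentioned above can be sketched as follows. This is a generic Jacobi-preconditioned CG on a tiny made-up symmetric positive-definite system, not the paper's actual normal matrix or preconditioner.

```python
# Jacobi-preconditioned conjugate gradient for A x = b, with A symmetric
# positive-definite. Illustrative stand-in for the normal equations of
# bundle adjustment; the 3x3 system below is made up for demonstration.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                    # residual r = b - A x (x = 0)
    M_inv = [1.0 / A[i][i] for i in range(n)]   # Jacobi (diagonal) preconditioner
    z = [mi * ri for mi, ri in zip(M_inv, r)]
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(M_inv, r)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
residual = max(abs(bi - axi) for bi, axi in zip(b, matvec(A, x)))
```

    For an SPD system of size n, CG converges in at most n steps in exact arithmetic; preconditioning matters when the normal matrix is large and ill-conditioned, as in bundle adjustment.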

  4. Biological Constraints on Literacy Acquisition. (United States)

    Cossu, Giuseppe


    Investigates some of the biological constraints that shape the process of literacy acquisition. Explores the possibility of isolating processing components of reading which correspond to computational units of equivalent size in the neural architecture. Suggests that the process of literacy acquisition is largely constrained by a specific…

  5. Foreign Acquisition, Wages and Productivity

    DEFF Research Database (Denmark)

    Bandick, Roger


    This paper studies the effect of foreign acquisition on wages and total factor productivity (TFP) in the years following a takeover by using unique detailed firm-level data for Sweden for the period 1993-2002. The paper takes particular account of the potential endogeneity of the acquisition...

  7. Defense Acquisition Performance Assessment Report (United States)


    Development – “Acquisition Reform and Interoperability” BARTLETT, ROSALIND, Analyst, Office of the Deputy Assistant Secretary of the Navy for Acquisition...Managers Performance Review” FRANKLIN, CHARLES, Vice-President, Raytheon Company Evaluation Team – “Corporate Perspective” FRANKLIN, RUTH W., Director

  8. Parallel Spatiotemporal Spectral Clustering with Massive Trajectory Data (United States)

    Gu, Y. Z.; Qin, K.; Chen, Y. X.; Yue, M. X.; Guo, T.


    Massive trajectory data contains a wealth of useful information and knowledge. Spectral clustering, which has been shown to be effective in finding clusters, has become an important clustering approach in trajectory data mining. However, traditional spectral clustering lacks a temporal extension and is limited in its applicability to large-scale problems due to its high computational complexity. This paper presents a parallel spatiotemporal spectral clustering method based on multiple acceleration techniques to make the algorithm more effective and efficient; its performance is demonstrated by experiments on a massive taxi trajectory dataset from Wuhan, China.

  9. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy…, and the limited memory in these architectures, severely constrains the data sets that can be processed. Moreover, the language-integrated cost semantics for nested data parallelism pioneered by NESL depends on a parallelism-flattening execution strategy that only exacerbates the problem. This is because… by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequentially, fully in parallel, or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target…

  10. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman


    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  11. Fuzzy clustering in parallel universes


    Wiswedel, Bernd; Berthold, Michael R.


    We present an extension of the fuzzy c-Means algorithm that operates simultaneously on different feature spaces, so-called parallel universes, and also incorporates noise detection. The method assigns membership values of patterns to different universes, which are then adapted throughout the training. This leads to better clustering results, since patterns not contributing to clustering in a universe are (completely or partially) ignored. The method also uses an auxiliary universe to capt...
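
    The membership update at the core of fuzzy c-Means, which the parallel-universes extension builds on, can be sketched as follows. This is plain single-universe FCM on made-up 1-D data, without the paper's noise cluster or multiple feature spaces; the data, initial centers, and fuzzifier m = 2 are illustrative.

```python
# One fuzzy c-Means pass on 1-D data: update memberships, then centers.
# Standard FCM only -- the paper's parallel-universe / noise extensions
# are not reproduced here. Data, centers, and fuzzifier m are illustrative.

def update_memberships(data, centers, m=2.0):
    exp = 2.0 / (m - 1.0)
    memberships = []
    for x in data:
        dists = [abs(x - c) + 1e-12 for c in centers]  # avoid division by zero
        # u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1)); rows sum to 1 by construction
        memberships.append([1.0 / sum((d / dk) ** exp for dk in dists)
                            for d in dists])
    return memberships

def update_centers(data, memberships, n_clusters, m=2.0):
    centers = []
    for j in range(n_clusters):
        num = sum((u[j] ** m) * x for u, x in zip(memberships, data))
        den = sum(u[j] ** m for u in memberships)
        centers.append(num / den)
    return centers

data = [0.0, 0.2, 0.4, 5.0, 5.2, 5.4]
centers = [0.5, 4.5]
for _ in range(20):
    U = update_memberships(data, centers)
    centers = update_centers(data, U, len(centers))
row_sums = [sum(u) for u in U]
```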

  12. Northeast Parallel Architectures Center (NPAC) (United States)


    networks using parallel algorithms for detection of lineaments in remotely sensed LandSat data of the Canadian Shield and for detection of abnormalities in...University Ashok K. Joshi, Syracuse University, 211 Link Hall, Syracuse NY 13244 Abstract of Research Remotely sensed data from satellite...are not readily visible in the imageries. Photo-interpretation of these satellite images is the more commonly used technique and not much emphasis

  14. Temporal Lobe Seizure (United States)

    ... functions, including having odd feelings — such as euphoria, deja vu or fear. During a temporal lobe seizure, you ... include: A sudden sense of unprovoked fear A deja vu experience — a feeling that what's happening has happened ...

  15. Multisensory temporal numerosity judgment

    NARCIS (Netherlands)

    Philippi, T.; Erp, J.B.F. van; Werkhoven, P.J.


    In temporal numerosity judgment, observers systematically underestimate the number of pulses. The strongest underestimations occur when stimuli are presented with a short interstimulus interval (ISI) and are stronger for vision than for audition and touch. We investigated if multisensory

  16. Neocortical Temporal Lobe Epilepsy (United States)

    Bercovici, Eduard; Kumar, Balagobal Santosh; Mirsattari, Seyed M.


    Complex partial seizures (CPSs) can present with various semiologies. While mesial temporal lobe epilepsy (mTLE) is a well-recognized cause of CPSs, neocortical temporal lobe epilepsy (nTLE), albeit less common, is increasingly recognized as a separate disease entity. Differentiating the two remains a challenge for epileptologists, as many symptoms overlap due to reciprocal connections between the neocortical and mesial temporal regions. Various studies have attempted to correctly localize the seizure focus in nTLE, as patients with this disorder may benefit from surgery. While earlier work predicted poor outcomes in this population, recent work challenges those ideas, yielding good outcomes in part due to better localization using improved anatomical and functional techniques. This paper provides a comprehensive review of the diagnostic workup, particularly the application of recent advances in electroencephalography and functional brain imaging, in neocortical temporal lobe epilepsy. PMID:22953057

  17. Massive temporal lobe cholesteatoma

    National Research Council Canada - National Science Library

    Waidyasekara, Pasan; Dowthwaite, Samuel A; Stephenson, Ellison; Bhuta, Sandeep; McMonagle, Brent


    .... There had been no relevant symptoms in the interim until 6 weeks prior to this presentation. Imaging demonstrated a large right temporal lobe mass contiguous with the middle ear and mastoid cavity with features consistent with cholesteatoma...

  19. Parallel circuit simulation on supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Saleh, R.A.; Gallivan, K.A. (Illinois Univ., Urbana, IL (USA). Center for Supercomputing Research and Development); Chang, M.C. (Texas Instruments, Inc., Dallas, TX (USA)); Hajj, I.N.; Trick, T.N. (Illinois Univ., Urbana, IL (USA). Coordinated Science Lab.); Smart, D. (Semiconductor Div., Analog Devices, Wilmington, MA (US))


    Circuit simulation is a very time-consuming and numerically intensive application, especially when the problem size is large as in the case of VLSI circuits. To improve the performance of circuit simulators without sacrificing accuracy, a variety of parallel processing algorithms have been investigated due to the recent availability of a number of commercial multiprocessor machines. In this paper, research in the field of parallel circuit simulation is surveyed and the ongoing research in this area at the University of Illinois is described. Both standard and relaxation-based approaches are considered. In particular, the forms of parallelism available within the direct method approach, used in programs such as SPICE2 and SLATE, and within the relaxation-based approaches, such as waveform relaxation, iterated timing analysis, and waveform-relaxation-Newton, are described. The specific implementation issues addressed here are primarily related to general-purpose multiprocessors with a shared-memory architecture having a limited number of processors, although many of the comments also apply to a number of other architectures.

  20. Applications of Temporal Reasoning to Intensive Care Units

    Directory of Open Access Journals (Sweden)

    J. M. Juarez


    Intensive Care Units (ICUs) are hospital departments that focus on the evolution of patients. In this scenario, the temporal dimension plays an essential role in understanding the state of patients from their temporal information. The development of methods for the acquisition, modelling, reasoning, and knowledge discovery of temporal information is therefore useful for exploiting the large amount of temporal data recorded daily in the ICU. During the past decades, some subfields of Artificial Intelligence have been devoted to the study of temporal models and techniques to solve generic problems and to their practical applications in the medical domain. The main goal of this paper is to present our view of some practical problems of temporal reasoning in the ICU field, and to describe our practical experience in the field over the last decade. This paper provides a non-exhaustive review of some of the efforts made in the field and our particular contributions to the development of temporal reasoning methods that partially solve some of these problems. The results are a set of software tools that help physicians better understand a patient's temporal evolution.

  1. Developing Acquisition IS Integration Capabilities

    DEFF Research Database (Denmark)

    Wynne, Peter J.


    An under-researched yet critical challenge of Mergers and Acquisitions (M&A) is what to do with the two organisations’ information systems (IS) post-acquisition. Commonly referred to as acquisition IS integration, existing theory suggests that to integrate the information systems successfully…, an acquiring company must leverage two high-level capabilities: diagnosis and integration execution. Through a case study, this paper identifies how a novice acquirer develops these capabilities in anticipation of an acquisition by examining its use of learning processes. The study finds the novice acquirer… applies trial-and-error, experimental, and vicarious learning processes, while actively avoiding improvisational learning. The results of the study contribute to the acquisition IS integration literature specifically by exploring it from a new perspective: the learning processes used by novice acquirers…

  2. Fast two-snapshot structured illumination for temporal focusing microscopy with enhanced axial resolution. (United States)

    Meng, Yunlong; Lin, Wei; Li, Chenglin; Chen, Shih-Chi


    We present a new two-snapshot structured light illumination (SLI) reconstruction algorithm for fast image acquisition. The new algorithm, which requires only two mutually π phase-shifted raw structured images, is implemented on a custom-built temporal focusing fluorescence microscope (TFFM) to enhance its axial resolution via a digital micromirror device (DMD). First, the orientation of the modulated sinusoidal fringe patterns is automatically identified via spatial frequency vector detection. Subsequently, the modulated in-focal-plane images are obtained via rotation and subtraction. Lastly, a parallel amplitude demodulation method, derived from the Hilbert transform, is applied to complete the decoding process. To demonstrate the new SLI algorithm, a TFFM was custom-constructed in which a DMD replaces the generic blazed grating and simultaneously functions as a diffraction grating and a programmable binary mask, generating arbitrary fringe patterns. The experimental results show promising depth-discrimination capability, with an axial resolution enhancement factor of 1.25, which matches well with the theoretical estimate of 1.27. Imaging experiments on pollen grain and mouse kidney samples were performed. The results indicate that the two-snapshot algorithm offers contrast reconstruction and optical cross-sectioning capability comparable to the conventional root-mean-square (RMS) reconstruction method. The two-snapshot method can be readily applied to any sinusoidally modulated illumination system to realize high-speed 3D imaging, as fewer frames are required for each in-focal-plane image restoration; i.e., the image acquisition speed is improved by 2.5 times for any two-photon system.
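
    The two-snapshot idea can be illustrated numerically in 1-D: two raw signals whose fringe terms are mutually π phase-shifted are subtracted, which cancels the unmodulated (out-of-focus) background and leaves the fringe term; its envelope is then recovered via a Hilbert-transform (analytic-signal) demodulation. The signal model and numbers below are synthetic, not the paper's 2-D pipeline.

```python
import cmath
import math

# 1-D sketch of two-snapshot structured-illumination demodulation.
# Subtraction of two pi phase-shifted raw signals cancels the background;
# an analytic-signal (Hilbert) step recovers the fringe envelope.

N = 64
k = 5                      # fringe frequency (cycles per window)
background = 3.0           # unmodulated (out-of-focus) component
amplitude = 1.0            # in-focus, fringe-modulated component

xs = [2 * math.pi * n / N for n in range(N)]
raw_a = [background + amplitude * math.sin(k * x) for x in xs]
raw_b = [background + amplitude * math.sin(k * x + math.pi) for x in xs]

# Subtraction: background cancels exactly, modulated term doubles.
diff = [(a - b) / 2.0 for a, b in zip(raw_a, raw_b)]

def dft(s):
    n = len(s)
    return [sum(s[t] * cmath.exp(-2j * math.pi * f * t / n)
                for t in range(n)) for f in range(n)]

def idft(S):
    n = len(S)
    return [sum(S[f] * cmath.exp(2j * math.pi * f * t / n)
                for f in range(n)) / n for t in range(n)]

# Analytic signal: zero the negative frequencies, double the positive ones;
# its magnitude is the demodulated fringe envelope.
S = dft(diff)
for f in range(N):
    if 0 < f < N // 2:
        S[f] *= 2.0
    elif f > N // 2:
        S[f] = 0.0
envelope = [abs(v) for v in idft(S)]
```

    For this pure sinusoid the recovered envelope is flat at the fringe amplitude, confirming that the background term has been removed by the two-snapshot subtraction.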

  3. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF). (United States)

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S


    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel-independent; that is, the operation on each pixel is the same, and the operation on one pixel does not depend upon the result of the operation on another, allowing the entire image to be processed in parallel. GPU hardware was developed for exactly this kind of massively parallel processing; thus, for an algorithm with a high degree of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat-field correction, temporal filtering, image subtraction, roadmap mask generation, and display windowing and leveling. A comparison between the previous and the upgraded version of CAPIDS is presented to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements in timing and frame rate have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure, and automatic image windowing and leveling during each frame.
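
    The pixel-independence described above is what makes the GPU mapping natural: each output pixel is a pure function of the corresponding pixels in the raw, dark, and flat frames. A sketch of flat-field correction in that style follows, run serially here for clarity; a GPU kernel would evaluate the same per-pixel function for all pixels concurrently. The frame values are made up for illustration.

```python
# Flat-field correction as a pixel-independent operation: each output pixel
# depends only on the same pixel in the raw, dark and flat frames, so the
# whole image can be processed in parallel (e.g. one GPU thread per pixel).
# The 2x3 frames below are illustrative.

def flat_field_pixel(raw, dark, flat, flat_mean):
    # Classic correction: (raw - dark) / (flat - dark) * mean(flat - dark)
    denom = flat - dark
    return (raw - dark) / denom * flat_mean if denom else 0.0

raw  = [[110.0, 120.0, 130.0],
        [140.0, 150.0, 160.0]]
dark = [[10.0] * 3, [10.0] * 3]
flat = [[210.0, 110.0, 210.0],
        [110.0, 210.0, 110.0]]

gains = [f - d for frow, drow in zip(flat, dark) for f, d in zip(frow, drow)]
flat_mean = sum(gains) / len(gains)

corrected = [[flat_field_pixel(r, d, f, flat_mean)
              for r, d, f in zip(rrow, drow, frow)]
             for rrow, drow, frow in zip(raw, dark, flat)]
```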

  4. 48 CFR 873.105 - Acquisition planning. (United States)


    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Acquisition planning. 873.105 Section 873.105 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS DEPARTMENT... planning. (a) Acquisition planning is an indispensable component of the total acquisition process. (b) For...

  5. 48 CFR 234.004 - Acquisition strategy. (United States)


    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Acquisition strategy. 234..., DEPARTMENT OF DEFENSE SPECIAL CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION 234.004 Acquisition strategy. (1) See 209.570 for policy applicable to acquisition strategies that consider the use of lead system...

  6. 48 CFR 34.004 - Acquisition strategy. (United States)


    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Acquisition strategy. 34... CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION General 34.004 Acquisition strategy. The program manager, as specified in agency procedures, shall develop an acquisition strategy tailored to the particular...

  7. 48 CFR 3034.004 - Acquisition strategy. (United States)


    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Acquisition strategy. 3034.004 Section 3034.004 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Acquisition strategy. See (HSAR) 48 CFR 3009.570 for policy applicable to acquisition strategies that consider...

  8. Automatic parallelization of nested loop programs for non-manifest real-time stream processing applications

    NARCIS (Netherlands)

    Bijlsma, T.


    This thesis is concerned with the automatic parallelization of real-time stream processing applications, such that they can be executed on embedded multiprocessor systems. Stream processing applications can be found in the video and channel decoding domain. These applications often have temporal


  9. The CMS Storage Manager System

    CERN Multimedia

    Gerry Bauer

    The tail-end of the CMS Data Acquisition System is the Storage Manager (SM), which collects output from the HLT and stages the data at Cessy for transfer to its ultimate home in the Tier-0 center. An SM system has been used by CMS for several years with steadily evolving software within the XDAQ framework, but until relatively recently only with provisional hardware. The SM is well known to much of the collaboration through the ‘MiniDAQ’ system, which served as the central DAQ system in 2007 and lives on in 2008 for dedicated sub-detector commissioning. Since March 2008 a first phase of the final hardware has been commissioned and used in CMS Global Runs. The system originally planned for 2008 aimed at recording ~1MB events at a few hundred Hz. The building blocks to achieve this are based on Nexsan's SATABeast storage array - a device housing up to 40 disks of 1TB each, and possessing two controllers each capable of almost 200 MB/sec throughput....

  10. Trigger and data acquisition

    CERN Multimedia

    CERN. Geneva; Gaspar, C


    Past LEP experiments generated data at 0.5 MByte/s from particle detectors with over a quarter of a million readout channels. The process of reading out the electronic channels, treating them, and storing the data produced by each collision for further analysis by the physicists is called "Data Acquisition". Not all beam crossings produce interesting physics "events"; picking the interesting ones is the task of the "Trigger" system. In order to make sure that the data is collected in good conditions, the experiment's operation has to be constantly verified. In all, at LEP experiments over 100,000 parameters were monitored, controlled, and synchronized by the "Monitoring and Control" system. In the future, LHC experiments will produce as much data in a single day as a LEP detector did in a full year's running, with a raw data rate of 10-100 MBytes/s, and will have to cope with some 800 million proton-proton collisions a second; of these collisions, only one in 100 million million is interesting for new particle se...

  11. Triple arterial phase MR imaging with gadoxetic acid using a combination of contrast enhanced time robust angiography, keyhole, and viewsharing techniques and two-dimensional parallel imaging in comparison with conventional single arterial phase

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jeong Hee; Lee, Jeong Min; Han, Joon Koo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of); Yu, Mi Hye [Dept. of Radiology, Konkuk University Medical Center, Seoul (Korea, Republic of); Kim, Eun Ju [Philips Healthcare Korea, Seoul (Korea, Republic of)


    To determine whether triple arterial phase acquisition via a combination of Contrast Enhanced Time Robust Angiography, keyhole, temporal viewsharing and parallel imaging can improve arterial phase acquisition with higher spatial resolution than single arterial phase gadoxetic-acid enhanced magnetic resonance imaging (MRI). Informed consent was waived for this retrospective study by our Institutional Review Board. In 752 consecutive patients who underwent gadoxetic acid-enhanced liver MRI, either single (n = 587) or triple (n = 165) arterial phases was obtained in a single breath-hold under MR fluoroscopy guidance. Arterial phase timing was assessed, and the degree of motion was rated on a four-point scale. The percentage of patients achieving the late arterial phase without significant motion was compared between the two methods using the χ2 test. The late arterial phase was captured at least once in 96.4% (159/165) of the triple arterial phase group and in 84.2% (494/587) of the single arterial phase group (p < 0.001). Significant motion artifacts (score ≤ 2) were observed in 13.3% (22/165), 1.2% (2/165), 4.8% (8/165) on 1st, 2nd, and 3rd scans of triple arterial phase acquisitions and 6.0% (35/587) of single phase acquisitions. Thus, the late arterial phase without significant motion artifacts was captured in 96.4% (159/165) of the triple arterial phase group and in 79.9% (469/587) of the single arterial phase group (p < 0.001). Triple arterial phase imaging may reliably provide adequate arterial phase imaging for gadoxetic acid-enhanced liver MRI.

  13. Distributed parallel computing in stochastic modeling of groundwater systems. (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen


    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
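
    The batching idea is generic: Monte Carlo realizations are mutually independent, so they map directly onto a worker pool. Below is a stdlib sketch with a hypothetical stand-in for a model run; the paper uses the Java Parallel Processing Framework with MODFLOW, and `run_realization` here is illustrative only.

```python
# Monte Carlo realizations are mutually independent, so they parallelize
# trivially over a worker pool -- the same batching idea the paper applies
# with the Java Parallel Processing Framework. run_realization is a
# hypothetical stand-in for an actual MODFLOW model run.
from concurrent.futures import ThreadPoolExecutor
import random

def run_realization(seed):
    # Toy "stochastic model": estimate the mean of a random field.
    rng = random.Random(seed)
    values = [rng.gauss(5.0, 1.0) for _ in range(10_000)]
    return sum(values) / len(values)

seeds = range(50)  # 50 independent realizations
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(run_realization, seeds))

ensemble_mean = sum(results) / len(results)
```

    A process pool (or a distributed framework, as in the paper) would be the usual choice for CPU-bound model runs; the pool-of-independent-tasks structure is the same either way.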

  14. Cross-Linguistic Evidence for the Nature of Age Effects in Second Language Acquisition (United States)

    Dekeyser, Robert; Alfi-Shabtay, Iris; Ravid, Dorit


    Few researchers would doubt that ultimate attainment in second language grammar is negatively correlated with age of acquisition, but considerable controversy remains about the nature of this relationship: the exact shape of the age-attainment function and its interpretation. This article presents two parallel studies with native speakers of…

  15. Threading libraries performance when applied to image acquisition and processing in a forensic application


    Bermúdez, Carlos


    Based on concerns arising in ballistics identification, a new system for ballistics image acquisition and data processing is proposed. Since image processing imposes high CPU loads, a comparison of three different threading libraries is presented, concluding that parallel processing enhances ballistics identification speed.

  16. Expanded Understanding of IS/IT Related Challenges in Mergers and Acquisitions

    DEFF Research Database (Denmark)

    Toppenberg, Gustav


    Organizational Mergers and Acquisitions (M&As) occur at an increasingly frequent pace in today’s business life. Paralleling this development, M&As have increasingly attracted attention from the Information Systems (IS) domain. This emerging line of research has started from an understanding...

  17. A simple low cost speed log interface for oceanographic data acquisition system

    Digital Repository Service at National Institute of Oceanography (India)

    Khedekar, V.D.; Phadte, G.M.

    A speed log interface is designed with parallel Binary Coded Decimal output. This design was mainly required for the oceanographic data acquisition system as an interface between the speed log and the computer. However, this can also be used as a...
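
    Parallel Binary Coded Decimal output means one 4-bit nibble per decimal digit, presented simultaneously on parallel lines. A sketch of packing and unpacking such a reading follows; the digit count and the tenths-of-a-knot scaling are assumptions for illustration, not the actual interface specification.

```python
# Parallel BCD encode/decode sketch: a speed reading in tenths of a knot is
# presented as one 4-bit BCD nibble per decimal digit, which a data
# acquisition computer can read back from the parallel lines.
# Digit count and scaling are illustrative, not the interface's real spec.

def to_bcd(value, digits):
    """Pack a non-negative integer into `digits` BCD nibbles (MSD first)."""
    word = 0
    for ch in str(value).zfill(digits):
        word = (word << 4) | int(ch)
    return word

def from_bcd(word, digits):
    """Unpack `digits` BCD nibbles back into an integer."""
    value = 0
    for shift in range(4 * (digits - 1), -1, -4):
        value = value * 10 + ((word >> shift) & 0xF)
    return value

speed_knots = 12.7
raw = round(speed_knots * 10)   # 127 -> three BCD digits, i.e. 0x127
bcd_word = to_bcd(raw, 3)
decoded = from_bcd(bcd_word, 3) / 10.0
```

    The appeal of BCD here is that each nibble maps to one decimal digit, so the host needs no binary-to-decimal conversion beyond reading the nibbles in order.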

  18. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  19. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun


    Parallel Kinematics: Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) and provides a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, also called parallel kinematics machines, has been an emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, a singularity model taking into account motion and force transmissibility, and others. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  20. Data acquisition techniques using PC

    CERN Document Server

    Austerlitz, Howard


    Data Acquisition Techniques Using Personal Computers contains all the information required by a technical professional (engineer, scientist, technician) to implement a PC-based acquisition system. Including both basic tutorial information as well as some advanced topics, this work is suitable as a reference book for engineers or as a supplemental text for engineering students. It gives the reader enough understanding of the topics to implement a data acquisition system based on commercial products. A reader can alternatively learn how to custom build hardware or write his or her own software.

  1. The Acquisition Experiences of Kazoil

    DEFF Research Database (Denmark)

    Minbaeva, Dana; Muratbekova-Touron, Maral


    This case describes two diverging post-acquisition experiences of KazOil, an oil drilling company in Kazakhstan, in the years after the dissolution of the Soviet Union. When the company was bought by the Canadian corporation Hydrocarbons Ltd in 1996, exposed to new human resource strategies… among students that cultural distance is not the main determinant for the success of social integration mechanisms in post-acquisition situations. On the contrary, the relationship between integration instrument and integration success is also governed by contextual factors such as the attractiveness… of the acquisition target or the state of development of HRM in the target country…

  2. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....
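
    The abstract's main claim can be restated in symbols (notation assumed here, not taken from the paper: $V_2$ denotes planar volume (area), $\oplus$ the Minkowski sum, $B^2$ the unit disc, so $K \oplus rB^2$ is the parallel body of $K$ at distance $r$):

```latex
\lim_{r \to \infty} \Bigl( V_2\bigl(\operatorname{conv}(K) \oplus rB^2\bigr)
                         - V_2\bigl(K \oplus rB^2\bigr) \Bigr) = 0
```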

  4. Parallel machine architecture and compiler design facilities (United States)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex


    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility for rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  5. Parallel Graph Transformation based on Merged Approach

    Directory of Open Access Journals (Sweden)

    Asmaa Aouat


    Graph transformation is one of the key concepts in graph grammar. In order to accelerate graph transformation, the concept of parallel graph transformation has been proposed by different tools, such as the AGG tool. The theory of parallel graph transformation used by AGG only clarifies the concepts of conflict and dependency between transformation rules. This work proposes an approach to parallel graph transformation which enables dependent transformation rules to be executed in parallel.
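
    The conflict/dependency notion that AGG-style tools clarify reduces, in its simplest form, to a disjointness check on what each rule match reads and rewrites. A schematic Python sketch (hypothetical match sets, not AGG's API or the merged approach of this paper):

```python
def parallel_independent(match1, match2):
    """Two rule matches can be applied in parallel if neither rewrites
    graph elements the other reads or rewrites (the classic
    parallel-independence condition from graph transformation theory)."""
    return (match1["writes"].isdisjoint(match2["writes"])
            and match1["writes"].isdisjoint(match2["reads"])
            and match2["writes"].isdisjoint(match1["reads"]))

# Hypothetical matches of three rules on a shared host graph:
# nodes n1..n4, edges e12/e34 identified by name.
r1 = {"reads": {"n1", "n2"}, "writes": {"e12"}}
r2 = {"reads": {"n3", "n4"}, "writes": {"e34"}}
r3 = {"reads": {"n2", "n3"}, "writes": {"e12"}}  # rewrites what r1 rewrites

print(parallel_independent(r1, r2))  # True  -> safe to run concurrently
print(parallel_independent(r1, r3))  # False -> dependent rules
```

    Dependent pairs such as r1/r3 are exactly the cases the merged approach of the paper targets; the check above only identifies them.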

  6. Bootstrapping language acquisition. (United States)

    Abend, Omri; Kwiatkowski, Tom; Smith, Nathaniel J; Goldwater, Sharon; Steedman, Mark


    The semantic bootstrapping hypothesis proposes that children acquire their native language through exposure to sentences of the language paired with structured representations of their meaning, whose component substructures can be associated with words and syntactic structures used to express these concepts. The child's task is then to learn a language-specific grammar and lexicon based on (probably contextually ambiguous, possibly somewhat noisy) pairs of sentences and their meaning representations (logical forms). Starting from these assumptions, we develop a Bayesian probabilistic account of semantically bootstrapped first-language acquisition in the child, based on techniques from computational parsing and interpretation of unrestricted text. Our learner jointly models (a) word learning: the mapping between components of the given sentential meaning and lexical words (or phrases) of the language, and (b) syntax learning: the projection of lexical elements onto sentences by universal construction-free syntactic rules. Using an incremental learning algorithm, we apply the model to a dataset of real syntactically complex child-directed utterances and (pseudo) logical forms, the latter including contextually plausible but irrelevant distractors. Taking the Eve section of the CHILDES corpus as input, the model simulates several well-documented phenomena from the developmental literature. In particular, the model exhibits syntactic bootstrapping effects (in which previously learned constructions facilitate the learning of novel words), sudden jumps in learning without explicit parameter setting, acceleration of word-learning (the "vocabulary spurt"), an initial bias favoring the learning of nouns over verbs, and one-shot learning of words and their meanings. The learner thus demonstrates how statistical learning over structured representations can provide a unified account for these seemingly disparate phenomena. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz


    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  8. Scalable Parallel Algebraic Multigrid Solvers

    Energy Technology Data Exchange (ETDEWEB)

    Bank, R; Lu, S; Tong, C; Vassilevski, P


    The authors propose a parallel algebraic multigrid algorithm (AMG), which has the novel feature that the subproblem residing in each processor is defined over the entire partition domain, although the vast majority of unknowns for each subproblem are associated with the partition owned by the corresponding processor. This feature ensures that a global coarse description of the problem is contained within each of the subproblems. The advantages of this approach are that interprocessor communication is minimized in the solution process while an optimal order of convergence rate is preserved, and that the speed of local subproblem solvers can be maximized using the best existing sequential algebraic solvers.

  9. Parallelism in G. V. Mona's UVulindlela

    African Journals Online (AJOL)


    Jan 2, 2010 ... tiful structural patterns and some musical effect in the poetry. The concept of parallelism. Parallelism is a stylistic device of repetition. ... ing of "phrases or sentences of similar construction and meaning placed side by side, balancing each other". Myers and Simms (1985: 223) define parallelism as:

  10. Identifying, Quantifying, Extracting and Enhancing Implicit Parallelism (United States)

    Agarwal, Mayank


    The shift of the microprocessor industry towards multicore architectures has placed a huge burden on the programmers by requiring explicit parallelization for performance. Implicit Parallelization is an alternative that could ease the burden on programmers by parallelizing applications "under the covers" while maintaining sequential semantics…

  11. Parallel line scanning ophthalmoscope for retinal imaging

    NARCIS (Netherlands)

    Vienola, K.V.; Damodaran, M.; Braaf, B.; Vermeer, K.A.; de Boer, J.F.


    A parallel line scanning ophthalmoscope (PLSO) is presented using a digital micromirror device (DMD) for parallel confocal line imaging of the retina. The posterior part of the eye is illuminated using up to seven parallel lines, which were projected at 100 Hz. The DMD offers a high degree of

  12. Inductive Information Retrieval Using Parallel Distributed Computation. (United States)

    Mozer, Michael C.

    This paper reports on an application of parallel models to the area of information retrieval and argues that massively parallel, distributed models of computation, called connectionist, or parallel distributed processing (PDP) models, offer a new approach to the representation and manipulation of knowledge. Although this document focuses on…

  13. Second derivative parallel block backward differentiation type ...

    African Journals Online (AJOL)

    A class of second derivative parallel block backward differentiation type formulas is developed; the methods are inherently parallel and can be distributed over parallel processors. They are L-stable for block size k ≤ 6 with small error constants when compared to the conventional sequential linear multistep methods of ...

  14. Comparison of Parallel Viscosity with Neoclassical Theory


    Ida, K.; Nakajima, N.


    Toroidal rotation profiles are measured with charge exchange spectroscopy for the plasma heated with tangential NBI in the CHS heliotron/torsatron device to estimate parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s).

  15. Temporal network epidemiology

    CERN Document Server

    Holme, Petter


    This book covers recent developments in epidemic process models and related data on temporally varying networks. It is widely recognized that contact networks are indispensable for describing, understanding, and intervening to stop the spread of infectious diseases in human and animal populations; “network epidemiology” is an umbrella term to describe this research field. More recently, contact networks have been recognized as being highly dynamic. This observation, also supported by an increasing amount of new data, has led to research on temporal networks, a rapidly growing area. Changes in network structure are often informed by epidemic (or other) dynamics, in which case they are referred to as adaptive networks. This volume gathers contributions by prominent authors working in temporal and adaptive network epidemiology, a field essential to understanding infectious diseases in real society.
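
    Why contact timing matters for epidemics can be shown with a minimal susceptible-infected sketch (perfect transmission, hypothetical contacts; far simpler than the models this volume covers): the same contacts in a different temporal order give a different outbreak, because infection only travels along time-respecting paths.

```python
def spread(contacts, seed):
    """Infected set after processing time-stamped contacts (t, u, v) in
    temporal order; SI dynamics with transmission probability 1."""
    infected = {seed}
    for t, u, v in sorted(contacts):
        if u in infected or v in infected:
            infected |= {u, v}
    return infected

# A-B at t=1, then B-C at t=2: the infection can reach C via B.
print(sorted(spread([(1, "A", "B"), (2, "B", "C")], "A")))  # ['A', 'B', 'C']

# Same edges, but B-C happens at t=1, before B is infected at t=2.
print(sorted(spread([(2, "A", "B"), (1, "B", "C")], "A")))  # ['A', 'B']
```

    Aggregating both contact lists into a static graph would erase this distinction, which is the core argument for temporal network epidemiology.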

  16. Schizophrenia and second language acquisition. (United States)

    Bersudsky, Yuly; Fine, Jonathan; Gorjaltsan, Igor; Chen, Osnat; Walters, Joel


    Language acquisition involves brain processes that can be affected by lesions or dysfunctions in several brain systems and second language acquisition may depend on different brain substrates than first language acquisition in childhood. A total of 16 Russian immigrants to Israel, 8 diagnosed schizophrenics and 8 healthy immigrants, were compared. The primary data for this study were collected via sociolinguistic interviews. The two groups use language and learn language in very much the same way. Only exophoric reference and blocking revealed meaningful differences between the schizophrenics and healthy counterparts. This does not mean of course that schizophrenia does not induce language abnormalities. Our study focuses on those aspects of language that are typically difficult to acquire in second language acquisition. Despite the cognitive compromises in schizophrenia and the manifest atypicalities in language of speakers with schizophrenia, the process of acquiring a second language seems relatively unaffected by schizophrenia.

  17. Platform attitude data acquisition system

    Digital Repository Service at National Institute of Oceanography (India)

    Afzulpurkar, S.

    A system for automatic acquisition of underwater platform attitude data has been designed, developed and tested in the laboratory. This is a micro controller based system interfacing dual axis inclinometer, high-resolution digital compass...

  18. Portable Data Acquisition System Project (United States)

    National Aeronautics and Space Administration — Armstrong researchers have developed a portable data acquisition system (PDAT) that can be easily transported and set up at remote locations to display and archive...

  19. Temporal abstraction and temporal Bayesian networks in clinical domains: a survey. (United States)

    Orphanou, Kalia; Stassopoulou, Athena; Keravnou, Elpida


    Temporal abstraction (TA) of clinical data aims to abstract and interpret clinical data into meaningful higher-level interval concepts. Abstracted concepts are used for diagnostic, prediction and therapy planning purposes. On the other hand, temporal Bayesian networks (TBNs) are temporal extensions of the known probabilistic graphical models, Bayesian networks. TBNs can represent temporal relationships between events and their state changes, or the evolution of a process, through time. This paper offers a survey on techniques/methods from these two areas that were used independently in many clinical domains (e.g. diabetes, hepatitis, cancer) for various clinical tasks (e.g. diagnosis, prognosis). A main objective of this survey, in addition to presenting the key aspects of TA and TBNs, is to point out important benefits from a potential integration of TA and TBNs in medical domains and tasks. The motivation for integrating these two areas is their complementary function: TA provides clinicians with high level views of data while TBNs serve as a knowledge representation and reasoning tool under uncertainty, which is inherent in all clinical tasks. Key publications from these two areas of relevance to clinical systems, mainly circumscribed to the latest two decades, are reviewed and classified. TA techniques are compared on the basis of: (a) knowledge acquisition and representation for deriving TA concepts and (b) methodology for deriving basic and complex temporal abstractions. TBNs are compared on the basis of: (a) representation of time, (b) knowledge representation and acquisition, (c) inference methods and the computational demands of the network, and (d) their applications in medicine. The survey performs an extensive comparative analysis to illustrate the separate merits and limitations of various TA and TBN techniques used in clinical systems with the purpose of anticipating potential gains through an integration of the two techniques, thus leading to a
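
    A basic state-abstraction step of the kind TA performs can be sketched in a few lines (hypothetical glucose thresholds, not drawn from any cited system): time-stamped values are mapped to qualitative states, and consecutive readings in the same state are merged into higher-level interval concepts.

```python
def state_abstraction(readings, low=70, high=180):
    """Map time-stamped values to LOW/NORMAL/HIGH states, then merge
    consecutive same-state readings into (start, end, state) intervals."""
    def state(v):
        return "LOW" if v < low else "HIGH" if v > high else "NORMAL"

    intervals = []
    for t, v in readings:
        s = state(v)
        if intervals and intervals[-1][2] == s:
            intervals[-1] = (intervals[-1][0], t, s)  # extend current interval
        else:
            intervals.append((t, t, s))               # open a new interval
    return intervals

# Hourly glucose readings (hour, mg/dL), illustrative values only.
readings = [(0, 90), (1, 110), (2, 200), (3, 210), (4, 100)]
print(state_abstraction(readings))
# [(0, 1, 'NORMAL'), (2, 3, 'HIGH'), (4, 4, 'NORMAL')]
```

    Intervals like these are what a TBN could then reason over under uncertainty, which is the integration the survey anticipates.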

  20. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar


    The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance, both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: in the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
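
    The quicksort example from the abstract is easy to make concrete. A minimal Python sketch of the randomized-pivot variant (list-based for clarity, not the in-place Hoare partition): choosing the pivot at random makes the expected running time O(n log n) on every input, so no adversarial input order forces the O(n²) worst case.

```python
import random

def randomized_quicksort(a):
    """Quicksort with a uniformly random pivot: expected O(n log n)
    comparisons on any input; O(n^2) occurs only with tiny probability."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]
```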

  1. Xyce parallel electronic simulator design.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.


    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  2. Performance limitations of parallel simulations

    Directory of Open Access Journals (Sweden)

    Liang Chen


    This study shows how the performance of a parallel simulation may be affected by the structure of the system being simulated. We consider a wide class of “linearly synchronous” simulations consisting of asynchronous and synchronous parallel simulations (or other distributed-processing systems) with conservative or optimistic protocols, in which the differences in the virtual times of the logical processes being simulated in real time t are of the order o(t) as t tends to infinity. Using a random time transformation idea, we show how a simulation's processing rate in real time is related to the throughput rates in virtual time of the system being simulated. This relation is the basis for establishing upper bounds on simulation processing rates. The bounds for the rates are tight and are close to the actual rates, as numerical experiments indicate. We use the bounds to determine the maximum number of processors that a simulation can effectively use. The bounds also give insight into efficient assignment of processors to the logical processes in a simulation.

  3. Altered structural connectome in temporal lobe epilepsy. (United States)

    DeSalvo, Matthew N; Douw, Linda; Tanaka, Naoaki; Reinsberger, Claus; Stufflebeam, Steven M


    To study differences in the whole-brain structural connectomes of patients with left temporal lobe epilepsy (TLE) and healthy control subjects. This study was approved by the institutional review board, and all individuals gave signed informed consent. Sixty-direction diffusion-tensor imaging and magnetization-prepared rapid acquisition gradient-echo (MP-RAGE) magnetic resonance imaging volumes were analyzed in 24 patients with left TLE and in 24 healthy control subjects. MP-RAGE volumes were segmented into 1015 regions of interest (ROIs) spanning the entire brain. Deterministic white matter tractography was performed after voxelwise tensor calculation. Weighted structural connectivity matrices were generated by using the pairwise density of connecting fibers between ROIs. Graph theoretical measures of connectivity networks were compared between groups by using linear models with permutation testing. Patients with TLE had 22%-45% reduced (P < .01) distant connectivity in the medial orbitofrontal cortex, temporal cortex, posterior cingulate cortex, and precuneus, compared with that in healthy subjects. However, local connectivity, as measured by means of network efficiency, was increased by 85%-270% (P < .01) in the medial and lateral frontal cortices, insular cortex, posterior cingulate cortex, precuneus, and occipital cortex in patients with TLE compared with healthy subjects. This study suggests that TLE involves altered structural connectivity in a network that reaches beyond the temporal lobe, especially in the default mode network.
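
    The "network efficiency" measure used in such connectome comparisons can be illustrated with a small pure-Python sketch (a toy 4-node graph, not the 1015-ROI networks of the study): global efficiency is the average inverse shortest-path length over node pairs, so adding connections that shorten paths raises it.

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src in an unweighted graph (adjacency dict)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    """Average of 1/d(u, v) over all ordered node pairs; unreachable
    pairs contribute 0 (the standard graph-theoretic definition)."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for u in nodes:
        dist = bfs_distances(adj, u)
        for v in nodes:
            if v != u and v in dist:
                total += 1.0 / dist[v]
    return total / (n * (n - 1))

# A 4-node path versus a 4-node ring: the extra edge shortens paths.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
ring = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}
print(global_efficiency(path) < global_efficiency(ring))  # True
```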

  4. Vivienda temporal para refugiados


    Amonarraiz Gutiérrez, Ana


    The project focuses on the design and development of a space intended as temporary housing for people who have lost their homes. This type of housing is fundamental to the post-disaster recovery process, since the immediate construction of permanent housing is utopian. The main objective is the construction of a temporary dwelling made of prefabricated elements, thus achieving faster assembly. This will also allow any component...

  5. Temporal dynamics of ocular aberrations: monocular vs binocular vision. (United States)

    Mira-Agudelo, A; Lundström, L; Artal, P


    The temporal dynamics of ocular aberrations are important for the evaluation of, e.g. the accuracy of aberration estimates, the correlation to visual performance, and the requirements for real-time correction with adaptive optics. Traditionally, studies on the eye's dynamic behavior have been performed monocularly, which might have affected the results. In this study we measured aberrations and their temporal dynamics both monocularly and binocularly in the relaxed and accommodated state for six healthy subjects. Temporal frequencies up to 100 Hz were measured with a fast-acquisition Hartmann-Shack wavefront sensor having an open field-of-view configuration which allowed fixation to real targets. Wavefront aberrations were collected in temporal series of 5 s duration during binocular and monocular vision with fixation targets at 5 m and 25 cm distance. As expected, a larger temporal variability was found in the root-mean-square wavefront error when the eye accommodated, mainly for frequencies lower than 30 Hz. A statistically-significant difference in temporal behavior between monocular and binocular viewing conditions was found. However, on average it was too small to be of practical importance, although some subjects showed a notably higher variability for the monocular case during near vision. We did find differences in pupil size with mono- and binocular vision but the pupil size temporal dynamics did not behave in the same way as the aberrations' dynamics.

  6. Challenges in Mergers and Acquisitions


    Phyllis J. Rivera


    Since the advent of globalization and tough competitive market, there has been a drastic shift in the global business environment towards the acquisitions and mergers. The consolidation and acquisitions are undertaken for the purpose of increasing capabilities and gaining larger market share, but many high profile mergers become unsuccessful to meet the organizational objectives because of inadequate role of Human resources and the issues revolving in it (Love, 2000).The mergers and acquisiti...

  7. Prerequisites for acquisition of inheritance


    Večeřová, Marta


    Title: Prerequisites for Acquisition of Inheritance Keywords: deceased, inheritance, heir Type of paper: Thesis Author: Mgr. Daniela Bendová Supervisor: prof. JUDr. Jan Dvořák, CSc. Faculty of Law of Charles University Department of Civil Law The thesis addresses rudimentary prerequisites for acquisition of inheritance in the Czech Republic. These prerequisites include death of a person, that is necessary for application of inheritance rights, existence of inheritance, in particular ownership...

  8. A role for the developing lexicon in phonetic category acquisition. (United States)

    Feldman, Naomi H; Griffiths, Thomas L; Goldwater, Sharon; Morgan, James L


    Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning. PsycINFO Database Record (c) 2013 APA, all rights reserved.
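
    The role of word-level feedback can be sketched with a deliberately simplified 1-D Bayes' rule example (hypothetical category means and priors, not the paper's actual model): an acoustically ambiguous token between two overlapping vowel categories is resolved when lexical context shifts the prior.

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior(x, cats, prior):
    """P(category | acoustic value x) via Bayes' rule with a category prior."""
    scores = {c: prior[c] * gauss(x, mu, sd) for c, (mu, sd) in cats.items()}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

# Two hypothetical overlapping vowel categories along F1 (arbitrary units).
cats = {"i": (300.0, 60.0), "I": (400.0, 60.0)}
token = 350.0  # exactly between the means: acoustically ambiguous

flat = posterior(token, cats, {"i": 0.5, "I": 0.5})
# Word-level feedback: the segmented word favours a lexical item with /i/.
lexical = posterior(token, cats, {"i": 0.8, "I": 0.2})
print(round(flat["i"], 2), round(lexical["i"], 2))  # 0.5 0.8
```

    With equal likelihoods, the posterior simply follows the prior; this is the sense in which the developing lexicon can disambiguate overlapping categories.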

  9. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren


    PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements High Performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
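
    As a rough analogue of the array-syntax parallelism PDDP compiles (the real model is Fortran with HPF-style directives, not Python), FORALL corresponds to an elementwise map over index space and WHERE to a masked assignment:

```python
# Fortran 90 / PDDP style:            Python analogue:
#   FORALL (i = 1:n) b(i) = 2*a(i)    elementwise map
#   WHERE (a > 0) c = a               masked assignment (else keep 0)
a = [-2, 5, 0, 7, -1]

b = [2 * x for x in a]             # FORALL-style elementwise update
c = [x if x > 0 else 0 for x in a] # WHERE-style masked assignment

print(b)  # [-4, 10, 0, 14, -2]
print(c)  # [0, 5, 0, 7, 0]
```

    Each iteration is independent of the others, which is exactly what lets a preprocessor distribute the updates across processors.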

  10. Design, Implementation and Evaluation of Parallel Pipelined STAP on Parallel Computers (United States)


    parallel computers. In particular, the paper describes the issues involved in parallelization, our approach to parallelization, and performance results on an Intel Paragon. The paper also discusses the process of developing software for such an application on parallel computers when latency and

  11. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.


    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  12. Parallel Implementation of a Frozen Flow Based Wavefront Reconstructor (United States)

    Nagy, J.; Kelly, K.


    Obtaining high resolution images of space objects from ground based telescopes is challenging, often requiring the use of a multi-frame blind deconvolution (MFBD) algorithm to remove blur caused by atmospheric turbulence. In order for an MFBD algorithm to be effective, it is necessary to obtain a good initial estimate of the wavefront phase. Although wavefront sensors work well in low turbulence situations, they are less effective in high turbulence, such as when imaging in daylight, or when imaging objects that are close to the Earth's horizon. One promising approach, which has been shown to work very well in high turbulence settings, uses a frozen flow assumption on the atmosphere to capture the inherent temporal correlations present in consecutive frames of wavefront data. Exploiting these correlations can lead to more accurate estimation of the wavefront phase, and the associated PSF, which leads to more effective MFBD algorithms. However, with the current serial implementation, the approach can be prohibitively expensive in situations when it is necessary to use a large number of frames. In this poster we describe a parallel implementation that overcomes this constraint. The parallel implementation exploits sparse matrix computations, and uses the Trilinos package developed at Sandia National Laboratories. Trilinos provides a variety of core mathematical software for parallel architectures that have been designed using high quality software engineering practices. The package is open source, and portable to a variety of high-performance computing architectures.
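
    The frozen flow assumption itself is simple to state in code. A toy 1-D sketch (hypothetical phase values and a whole-pixel wind shift, nothing from the poster's implementation): turbulence is treated as a fixed phase screen carried across the aperture by the wind, so consecutive wavefront frames are shifted copies of one another.

```python
def shift(screen, pixels):
    """Translate a 1-D phase screen by a whole number of pixels (wrap-around)."""
    return screen[-pixels:] + screen[:-pixels] if pixels else list(screen)

# A hypothetical phase screen sampled across the aperture (radians).
screen = [0.1, -0.3, 0.7, 0.2, -0.5, 0.4]
wind_pixels_per_frame = 1
frames = [shift(screen, t * wind_pixels_per_frame) for t in range(3)]

# The temporal correlation a frozen-flow reconstructor exploits:
# frame t+1 equals frame t shifted by the wind displacement.
print(frames[1] == shift(frames[0], wind_pixels_per_frame))  # True
```

    In the full problem the shift operators become large sparse matrices linking phase values across frames, which is where parallel sparse linear algebra pays off.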

  13. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Palamidessi, Catuscia; Valencia, Frank Dan


    The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied...

  14. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Valencia, Frank Dan

    Concurrent constraint programming (ccp) is a formalism for concurrency in which agents interact with one another by telling (adding) and asking (reading) information in a shared medium. Temporal ccp extends ccp by allowing agents to be constrained by time conditions. This dissertation studies...

  15. Mesial temporal sclerosis

    African Journals Online (AJOL)



    Jul 29, 2005 ... Introduction. Mesial temporal sclerosis is the commonest cause of partial complex seizures. The aetiology of this condition is controversial, but it is postulated that both acquired and developmental processes may be involved. Familial cases have also been reported. Magnetic resonance imaging (MRI) ...

  16. Temporal bone imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lemmerling, Marc [Algemeen Ziekenhuis Sint-Lucas, Gent (Belgium). Dept. of Radiology; Foer, Bert de (ed.) [Sint-Augustinus Ziekenhuis, Wilrijk (Belgium). Dept. of Radiology


    Complete overview of imaging of normal and diseased temporal bone. Straightforward structure to facilitate learning. Detailed consideration of newer imaging techniques, including the hot topic of diffusion-weighted imaging. Includes a chapter on anatomy that will be of great help to the novice interpreter of imaging findings. Excellent illustrations throughout. This book provides a complete overview of imaging of normal and diseased temporal bone. After description of indications for imaging and the cross-sectional imaging anatomy of the area, subsequent chapters address the various diseases and conditions that affect the temporal bone and are likely to be encountered regularly in clinical practice. The classic imaging methods are described and discussed in detail, and individual chapters are included on newer techniques such as functional imaging and diffusion-weighted imaging. There is also a strong focus on postoperative imaging. Throughout, imaging findings are documented with the aid of numerous informative, high-quality illustrations. Temporal Bone Imaging, with its straightforward structure based essentially on topography, will prove of immense value in daily practice.

  17. Acquisition of Reading and Written Spelling in a Transparent Orthography: Two Non-Parallel Processes? (United States)

    Cossu, Giuseppe; And Others


    States that reading and written spelling skills for words and nonwords of varying length and orthographic complexity were investigated in normal Italian first and second graders. Finds that reading and written spelling are nonparallel processes and that developmental asynchrony reflects a partial structural independence of the two systems. (PA)

  18. Chinese-English bilinguals processing temporal-spatial metaphor. (United States)

    Xue, Jin; Yang, Jie; Zhao, Qian


    The conceptual projection of time onto the domain of space constitutes one of the most challenging issues in the cognitive embodied theories. In Chinese, spatial order (e.g., /da shu qian/, in front of a tree) shares the same terms with temporal sequence (e.g., /san yue qian/, before March). In comparison, English natives use different sets of prepositions to describe spatial and temporal relationships, i.e., "before" to express temporal sequencing and "in front of" to express spatial order. The linguistic variations regarding the specific lexical encodings indicate that some flexibility might be available in how space-time parallelisms are formulated across different languages. In the present study, ERP (event-related potentials) data were collected when Chinese-English bilinguals processed temporal ordering and spatial sequencing in both their first language (L1) Chinese (Experiment 1) and their second language (L2) English (Experiment 2). It was found that, despite the different lexical encodings, early sensorimotor simulation plays a role in temporal sequencing processing in both L1 Chinese and L2 English. The findings well support the embodied theory that conceptual knowledge is grounded in sensory-motor systems (Gallese and Lakoff, Cogn Neuropsychol 22:455-479, 2005). Additionally, in both languages, neural representations during comprehending temporal sequencing and spatial ordering are different. The time-space relationship is asymmetric, in that the space schema can be imported into temporal sequence processing but not vice versa. These findings support the weak view of the Metaphoric Mapping Theory.

  19. Communication, Technology, Temporality

    Directory of Open Access Journals (Sweden)

    Mark A. Martinez


    This paper proposes a media studies that foregrounds technological objects as communicative and historical agents. Specifically, I take the digital computer as a powerful catalyst of crises in communication theories and in certain key features of modernity. Finally, the computer is the motor of “New Media,” which is at once a set of technologies, a historical epoch, and a field of knowledge. As such the computer shapes “the new” and “the future” as History pushes its origins further into the past and its convergent quality pushes its future as a predominant medium. As treatments of information and interface suggest, communication theories observe computers, and technologies generally, for the mediated languages they either afford or foreclose to us. My project describes the figures of information and interface for the different ways they can be thought of as aspects of communication. I treat information not as semantic meaning, formal or discursive language, but rather as a physical organism. Similarly, an interface is not a relationship between a screen and a human visual intelligence, but is instead a reciprocal, affective and physical process of contact. I illustrate that historically there have been conceptions of information and interface complementary to mine, fleeting as they have been in the face of a dominant temporality of mediation. I begin with a theoretically informed approach to media history, and extend it to a new theory of communication. In doing so I discuss a model of time common to popular, scientific, and critical conceptions of media technologies, especially in theories of computer technology. This is a predominant model with particular rules of temporal change and causality for thinking about mediation, and it limits the conditions of possibility for knowledge production about communication. I suggest a new model of time as integral to any event of observation and analysis, and that human mediation does not exhaust the

  20. Temporal Attention as a Scaffold for Language Development

    Directory of Open Access Journals (Sweden)

    Ruth De Diego-Balaguer


    Language is one of the most fascinating abilities that humans possess. Infants demonstrate an amazing repertoire of linguistic abilities from very early on and reach an adult-like form incredibly fast. However, language is not acquired all at once but in an incremental fashion. In this article we propose that the attentional system may be one of the sources for this developmental trajectory in language acquisition. At birth, infants are endowed with an attentional system fully driven by salient stimuli in their environment, such as prosodic information (e.g., rhythm or pitch). Early stages of language acquisition could benefit from this readily available, stimulus-driven attention to simplify the complex speech input and allow word segmentation. At later stages of development, infants are progressively able to selectively attend to specific elements while disregarding others. This attentional ability could allow them to learn distant non-adjacent rules needed for morphosyntactic acquisition. Because non-adjacent dependencies occur at distant moments in time, learning these dependencies may require correctly orienting attention in the temporal domain. Here, we gather evidence uncovering the intimate relationship between the development of attention and language. We aim to provide a novel approach to human development, bridging together temporal attention and language acquisition.

  1. Parallel computing and quantum chromodynamics

    CERN Document Server

    Bowler, K C


    The study of Quantum Chromodynamics (QCD) remains one of the most challenging topics in elementary particle physics. The lattice formulation of QCD, in which space-time is treated as a four-dimensional hypercubic grid of points, provides the means for a numerical solution from first principles but makes extreme demands upon computational performance. High Performance Computing (HPC) offers us the tantalising prospect of a verification of QCD through the precise reproduction of the known masses of the strongly interacting particles. It is also leading to the development of a phenomenological tool capable of disentangling strong interaction effects from weak interaction effects in the decays of one kind of quark into another, crucial for determining parameters of the standard model of particle physics. The 1980s saw the first attempts to apply parallel architecture computers to lattice QCD. The SIMD and MIMD machines used in these pioneering efforts were the ICL DAP and the Cosmic Cube, respectively. These wer...

  2. Parallel Performance Characterization of Columbia (United States)

    Biswas, Rupak


    Using a collection of benchmark problems of increasing levels of realism and computational effort, we will characterize the strengths and limitations of the 10,240 processor Columbia system to deliver supercomputing value to application scientists. Scientists need to be able to determine if and how they can utilize Columbia to carry out extreme workloads, either in terms of ultra-large applications that cannot be run otherwise (capability), or in terms of very large ensembles of medium-scale applications to populate response matrices (capacity). We select existing application benchmarks that scale from a small number of processors to the entire machine, and that highlight different issues in running supercomputing-class applications, such as the various types of memory access, file I/O, inter- and intra-node communications and parallelization paradigms.

  3. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne


    ...Or they can be organized based on the student’s (social and vocational) competences, on which research-based knowledge is built. University courses are traditionally organized according to the first principle. But in a lifelong learning perspective the last principle will provide the greatest opportunity...... for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...... preconditions and some challenges in implementing the program. It describes the difficulties associated with measuring prior learning and the pedagogical problems related to combining vocational experiences with formal school-based knowledge. But experiences show that the difficulties can be overcome....

  4. Self-testing in parallel (United States)

    McKague, Matthew


    Self-testing allows us to determine, through classical interaction only, whether some players in a non-local game share particular quantum states. Most work on self-testing has concentrated on developing tests for small states like one pair of maximally entangled qubits, or on tests where there is a separate player for each qubit, as in a graph state. Here we consider the case of testing many maximally entangled pairs of qubits shared between two players. Previously, such a test was known only for the sequential case, in which one pair is tested at a time. Here we consider the parallel case where all pairs are tested simultaneously, giving considerably more power to dishonest players. We derive sufficient conditions for a self-test for many maximally entangled pairs of qubits shared between two players and also two constructions for self-tests where all pairs are tested simultaneously.

  5. Massively parallel diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Sandusky, John V.; Pitts, Todd A.


    Diffuse optical tomography systems and methods are described herein. In a general embodiment, the diffuse optical tomography system comprises a plurality of sensor heads, the plurality of sensor heads comprising respective optical emitter systems and respective sensor systems. A sensor head in the plurality of sensor heads is caused to act as an illuminator, such that its optical emitter system transmits a transillumination beam towards a portion of a sample. Other sensor heads in the plurality of sensor heads act as observers, detecting portions of the transillumination beam that radiate from the sample in the fields of view of the respective sensor systems of the other sensor heads. Thus, sensor heads in the plurality of sensor heads generate sensor data in parallel.

  6. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.


    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  7. Perirhinal and Postrhinal, but Not Lateral Entorhinal, Cortices Are Essential for Acquisition of Trace Eyeblink Conditioning (United States)

    Suter, Eugenie E.; Weiss, Craig; Disterhoft, John F.


    The acquisition of temporal associative tasks such as trace eyeblink conditioning is hippocampus-dependent, while consolidated performance is not. The parahippocampal region mediates much of the input and output of the hippocampus, and perirhinal (PER) and entorhinal (EC) cortices support persistent spiking, a possible mediator of temporal…

  8. 48 CFR 434.004 - Acquisition strategy. (United States)


    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Acquisition strategy. 434... CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION General 434.004 Acquisition strategy. (a) The program... Systems Executive, a written charter outlining the authority, responsibility, accountability, and budget...

  9. Computer-Aided Parallelizer and Optimizer (United States)

    Jin, Haoqiang


    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  10. New multilevel parallelism management for multimedia processors (United States)

    Verians, Xavier; Legat, Jean-Didier; Macq, Benoit M. M.; Quisquater, Jean-Jacques


    This paper presents a new parallelism manager for multimedia multiprocessors. An analysis of recent multimedia applications shows that the available parallelism moves from the data-level to the control-level. New architectures are required to be able to extract this kind of dynamic parallelism. Our proposed parallelism management describes the parallelism with a topological description of the task dependence graph. It can represent various complex parallelism patterns. This parallelism description is separated from the program code to allow the task manager to decode it in parallel with the task execution. The task manager is based on a queue bank that stores the task graph. Control commands are inserted in the task dependence graph to allow a dynamic modification of this graph, depending on the processed data. Simulations on classical multiprocessing benchmarks show that in the case of simple parallelism, performance is similar to that of classical systems. However, performance on complex applications improves by up to 12%. Multimedia applications have also been simulated. The results show that our task manager can efficiently handle complex dynamic parallelism structures.
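
    The core idea behind such a manager, dispatching tasks from a dependence graph as soon as their predecessors finish, can be sketched in a few lines. The sketch below is illustrative only, not the paper's hardware task manager: the `deps`, `work`, and `run_graph` names are invented, and Python's `concurrent.futures` stands in for the queue bank.

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

# Hypothetical task dependence graph: each task maps to its prerequisites.
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
work = {name: (lambda n=name: n.lower()) for name in deps}  # dummy payloads

def run_graph(deps, work, max_workers=4):
    """Dispatch tasks as soon as their dependencies complete (acyclic graph)."""
    remaining = {t: set(d) for t, d in deps.items()}
    results, running = {}, {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while remaining or running:
            # Submit every task whose dependency set has been drained.
            ready = [t for t, d in remaining.items() if not d]
            for t in ready:
                del remaining[t]
                running[pool.submit(work[t])] = t
            # Wait for at least one task to finish, then release dependents.
            done, _ = wait(running, return_when=FIRST_COMPLETED)
            for fut in done:
                t = running.pop(fut)
                results[t] = fut.result()
                for d in remaining.values():
                    d.discard(t)
    return results
```

    Dynamic graph modification, as in the paper, would amount to mutating `remaining` while the loop runs.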

  11. Cardiac cine parallel imaging on a 0.7T open system. (United States)

    Takizawa, Masahiro; Goto, Tomohiro; Mochizuki, Hiroyuki; Nonaka, Masayuki; Nagai, Shizuka; Takeuchi, Hiroyuki; Taniguchi, Yo; Ochi, Hisaaki; Takahashi, Tetsuhiko


    Parallel imaging can be applied to cardiac imaging with a cylindrical MRI (magnetic resonance imaging) apparatus. Studies of open MRI, however, are few. This study sought to achieve cardiac cine parallel imaging (or RAPID, for "rapid acquisition through parallel imaging design") with an open 0.7T MRI apparatus. Imaging time was shortened in all slice directions with the use of a dedicated four-channel RF receiving coil comprising solenoid coils and butterfly coils. Coil shape was designed through an RF-coil simulation that considered biological load. The auto-calibration of a 0.7T open MRI apparatus incorporated a modified image-domain reconstruction algorithm. Cine images were obtained with a BASG, or balanced SARGE (steady-state acquisition with rewound gradient echo), sequence. Image quality was evaluated with cylindrical phantoms and five healthy volunteers. Multi-slice phantom images showed no visible artifacts. Cine images taken under breath-hold with an acceleration factor of two were evaluated carefully. With auto-calibration, the images revealed no visible unfolded artifacts or motion artifacts. RAPID thus improved the acquisition speed, time resolution, and spatial resolution of short-axis, long-axis, and four-chamber images. The use of a dedicated RF coil enabled cardiac cine RAPID to be performed with an open MRI apparatus.

  12. Parallelization of Rocket Engine Simulator Software (PRESS) (United States)

    Cezzar, Ruknet


    We have outlined our work in the last half of the funding period. We have shown how a demo package for RESSAP using MPI can be done. However, we also mentioned the difficulties with the UNIX platform. We have reiterated some of the suggestions made during the presentation of the progress at the Fourth Annual HBCU Conference. Although we have discussed, in some detail, how TURBDES/PUMPDES software can be run in parallel using MPI, at present, we are unable to experiment any further with either MPI or PVM. Due to X windows not being implemented, we are also not able to experiment further with XPVM, which, it will be recalled, has a nice GUI interface. There are also some concerns, on our part, about MPI being an appropriate tool. The best thing about MPI is that it is in the public domain. Although plenty of documentation exists for the intricacies of using MPI, little information is available on its actual implementations. Other than very typical, somewhat contrived examples, such as the Jacobi algorithm for solving Laplace's equation, there are few examples which can readily be applied to real situations, such as in our case. In effect, the review of literature on both MPI and PVM, and there is a lot, indicates something similar to the enormous effort which was spent on LISP and LISP-like languages as tools for artificial intelligence research. During the development of a book on programming languages [12], when we searched the literature for very simple examples like taking averages, reading and writing records, multiplying matrices, etc., we could hardly find any! Yet, so much was said and done on that topic in academic circles. It appears that we faced the same problem with MPI, where despite significant documentation, we could not find even a simple example which supports coarse-grain parallelism involving only a few processes.
From the foregoing, it appears that a new direction may be required for more productive research during the extension period (10/19/98 - 10
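
    The complaint above, that the literature lacks even a simple coarse-grain example, is easy to remedy with a sketch. The following is not MPI: Python's standard library stands in for it, and `chunk_mean`/`parallel_mean` are invented names, but the structure (a few workers, each reducing its own chunk, followed by a cheap combine step) is exactly the coarse-grain pattern the report was looking for.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_mean(chunk):
    """Each worker reduces its own chunk independently (coarse grain)."""
    return sum(chunk), len(chunk)

def parallel_mean(records, workers=4):
    # Split records into one contiguous chunk per worker.
    size = max(1, len(records) // workers)
    chunks = [records[i:i + size] for i in range(0, len(records), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(chunk_mean, chunks))
    # Combine step: one global reduction over the per-worker partial sums.
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    return total / count
```

    In MPI terms, `pool.map` plays the role of scatter plus local compute, and the combine step plays the role of a reduce.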

  13. High density event-related potential data acquisition in cognitive neuroscience. (United States)

    Slotnick, Scott D


    Functional magnetic resonance imaging (fMRI) is currently the standard method of evaluating brain function in the field of Cognitive Neuroscience, in part because fMRI data acquisition and analysis techniques are readily available. Because fMRI has excellent spatial resolution but poor temporal resolution, this method can only be used to identify the spatial location of brain activity associated with a given cognitive process (and reveals virtually nothing about the time course of brain activity). By contrast, event-related potential (ERP) recording, a method that is used much less frequently than fMRI, has excellent temporal resolution and thus can track rapid temporal modulations in neural activity. Unfortunately, ERPs are underutilized in Cognitive Neuroscience because data acquisition techniques are not readily available and low density ERP recording has poor spatial resolution. In an effort to foster the increased use of ERPs in Cognitive Neuroscience, the present article details key techniques involved in high density ERP data acquisition. Critically, high density ERPs offer the promise of excellent temporal resolution and good spatial resolution (or excellent spatial resolution if coupled with fMRI), which is necessary to capture the spatial-temporal dynamics of human brain function.

  14. General-purpose optimization methods for parallelization of digital terrain analysis based on cellular automata (United States)

    Cheng, Guo; Liu, Lu; Jing, Ning; Chen, Luo; Xiong, Wei


    Solving traditional spatial analysis problems benefits from high performance geo-computation powered by parallel computing. Digital Terrain Analysis (DTA) is a typical example of data and computationally intensive spatial analysis problems and can be improved by parallelization technologies. Previous work on this topic has mainly focused on applying optimization schemes for specific DTA case studies. The task addressed in this paper, in contrast, is to find optimization methods that are generally applicable to the parallelization of DTA. By modeling a complex DTA problem with Cellular Automata (CA), we developed a temporal model that can describe the time cost of the solution. Three methods for optimizing different components in the temporal model are proposed: (1) a parallel loading/writing method that can improve the IO efficiency; (2) a best cell division method that can minimize the communication time among processes; and (3) a communication evolution overlapping method that can reduce the total time of evolutions and communications. The feasibilities and practical efficiencies of the proposed methods have been verified by comparative experiments conducted on an elevation dataset from North America using the Slope of Aspect (SOA) as an example of a general DTA problem. The results showed that the parallel performance of the SOA can be improved by applying the proposed methods individually or in an integrated fashion.
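
    The "best cell division" idea, splitting the raster among processes so that inter-process boundaries (and hence communication) are minimized, can be illustrated with a small sketch. This is a common heuristic rather than the authors' exact method, and the `best_division` name is invented: enumerate the factorizations of the process count and score each by total interior boundary length.

```python
def best_division(width, height, nprocs):
    """Pick a px-by-py process grid (px * py == nprocs) that minimizes
    the total interior boundary length, a proxy for communication cost."""
    best = None
    for px in range(1, nprocs + 1):
        if nprocs % px:
            continue
        py = nprocs // px
        # px - 1 vertical cuts of length `height`, py - 1 horizontal
        # cuts of length `width` separate neighbouring subdomains.
        cost = (px - 1) * height + (py - 1) * width
        if best is None or cost < best[0]:
            best = (cost, px, py)
    return best[1], best[2]
```

    For a square raster this favours square process grids; for an elongated raster it cuts preferentially across the short dimension.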

  15. ADHD and temporality

    DEFF Research Database (Denmark)

    Nielsen, Mikka

    According to the official diagnostic manual, ADHD is defined by symptoms of inattention, hyperactivity, and impulsivity and patterns of behaviour are characterized as failure to pay attention to details, excessive talking, fidgeting, or inability to remain seated in appropriate situations (DSM-5......). In this paper, however, I will ask if we can understand what we call ADHD in a different way than through the symptom descriptions and will advocate for a complementary, phenomenological understanding of ADHD as a certain being in the world – more specifically as a matter of a phenomenological difference...... in temporal experience and/or rhythm. Inspired by both psychiatry’s experiments with people diagnosed with ADHD and their assessment of time and phenomenological perspectives on mental disorders and temporal disorientation I explore the experience of ADHD as a disruption in the phenomenological experience...

  16. Temporal lobe epilepsy semiology. (United States)

    Blair, Robert D G


    Epilepsy represents a multifaceted group of disorders divided into two broad categories, partial and generalized, based on the seizure onset zone. The identification of the neuroanatomic site of seizure onset depends on delineation of seizure semiology by a careful history together with video-EEG, and a variety of neuroimaging technologies such as MRI, fMRI, FDG-PET, MEG, or invasive intracranial EEG recording. Temporal lobe epilepsy (TLE) is the commonest form of focal epilepsy and represents almost 2/3 of cases of intractable epilepsy managed surgically. A history of febrile seizures (especially complex febrile seizures) is common in TLE and is frequently associated with mesial temporal sclerosis (the commonest form of TLE). Seizure auras occur in many TLE patients and often exhibit features that are relatively specific for TLE but few are of lateralizing value. Automatisms, however, often have lateralizing significance. Careful study of seizure semiology remains invaluable in addressing the search for the seizure onset zone.

  17. 75 FR 77721 - Federal Acquisition Regulation; Federal Acquisition Circular 2005-47; Introduction (United States)


    ... ADMINISTRATION 48 CFR Chapter 1 Federal Acquisition Regulation; Federal Acquisition Circular 2005-47... Council and the Defense Acquisition Regulations Council (Councils) in this Federal Acquisition Circular... interagency acquisitions, not just those made under the Economy Act authority. A new subsection 17.502-1 is...

  18. Parallel closure theory for toroidally confined plasmas (United States)

    Ji, Jeong-Young; Held, Eric D.


    We solve a system of general moment equations to obtain parallel closures for electrons and ions in an axisymmetric toroidal magnetic field. Magnetic field gradient terms are kept and treated using the Fourier series method. Assuming lowest order density (pressure) and temperature to be flux labels, the parallel heat flow, friction, and viscosity are expressed in terms of radial gradients of the lowest-order temperature and pressure, parallel gradients of temperature and parallel flow, and the relative electron-ion parallel flow velocity. Convergence of closure quantities is demonstrated as the number of moments and Fourier modes is increased. Properties of the moment equations in the collisionless limit are also discussed. Combining closures with fluid equations, parallel mass flow and electric current are also obtained. Work in collaboration with the PSI Center and supported by the U.S. DOE under Grant Nos. DE-SC0014033, DE-SC0016256, and DE-FG02-04ER54746.

  19. Parallelism, deep homology, and evo-devo. (United States)

    Hall, Brian K


    Parallelism has been the subject of a number of recent studies that have resulted in reassessment of the term and the process. In these reassessments, parallelism has variously been aligned with homology (leaving convergence as the only case of homoplasy), regarded as a transition between homologous and convergent characters, and defined as the independent evolution of genetic traits. Another study advocates abolishing the term parallelism and treating all cases of the independent evolution of characters as convergence. With the sophistication of modern genomics and genetic analysis, parallelism of characters of the phenotype is being discovered to reflect parallel genetic evolution. Approaching parallelism from developmental and genetic perspectives enables us to tease out the degree to which the reuse of pathways represents deep homology; this is a major task for evolutionary developmental biology in the coming decades. © 2012 Wiley Periodicals, Inc.

  20. The Same-Source Parallel MM5

    Directory of Open Access Journals (Sweden)

    John Michalakes


    Beginning with the March 1998 release of the Penn State University/NCAR Mesoscale Model (MM5), and continuing through eight subsequent releases up to the present, the official version has run on distributed-memory (DM) parallel computers. Source translation and runtime library support minimize the impact of parallelization on the original model source code, with the result that the majority of code is line-for-line identical with the original version. Parallel performance and scaling are equivalent to earlier, hand-parallelized versions; the modifications have no effect when the code is compiled and run without the DM option. Supported computers include the IBM SP, Cray T3E, Fujitsu VPP, Compaq Alpha clusters, and clusters of PCs (so-called Beowulf clusters). The approach also is compatible with shared-memory parallel directives, allowing distributed-memory/shared-memory hybrid parallelization on distributed-memory clusters of symmetric multiprocessors.

  1. Design considerations for parallel graphics libraries (United States)

    Crockett, Thomas W.


    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.
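
    One classic algorithm behind such libraries, sort-last compositing, is easy to sketch: each node renders its share of the polygons into full-size color and depth buffers, and the buffers are then merged per pixel by depth. The sketch below is a generic illustration, not PGL's actual interface; buffers are flat lists for brevity, and the `composite` name is invented.

```python
def composite(partials):
    """Sort-last image compositing: each renderer contributes a full-size
    (depth buffer, color buffer) pair for its share of the polygons; the
    final image keeps, per pixel, the fragment nearest the viewer."""
    depth = [float("inf")] * len(partials[0][0])
    color = [None] * len(depth)
    for zbuf, cbuf in partials:
        for i, (z, c) in enumerate(zip(zbuf, cbuf)):
            if z < depth[i]:          # smaller depth means closer
                depth[i], color[i] = z, c
    return color
```

    On a message-passing system the per-pixel merge would itself be parallelized, e.g. with a binary-swap reduction over the nodes.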

  2. Easy and Effective Parallel Programmable ETL

    DEFF Research Database (Denmark)

    Thomsen, Christian; Pedersen, Torben Bach


    Extract–Transform–Load (ETL) programs are used to load data into data warehouses (DWs). An ETL program must extract data from sources, apply different transformations to it, and use the DW to look up/insert the data. It is both time consuming to develop and to run an ETL program. It is, however......, typically the case that the ETL program can exploit both task parallelism and data parallelism to run faster. This, on the other hand, makes the development time longer as it is complex to create a parallel ETL program. To remedy this situation, we propose efficient ways to parallelize typical ETL tasks...... and we implement these new constructs in an ETL framework. The constructs are easy to apply and require only a few modifications to an ETL program to parallelize it. They support both task and data parallelism and give the programmer different possibilities to choose from. An experimental evaluation...
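
    Data parallelism of the kind described, running the same transformation concurrently over partitions of the extracted rows, can be sketched as follows. This is a generic illustration, not the authors' framework; `transform` and `parallel_etl` are invented names, and threads stand in for whatever worker model the framework actually uses.

```python
from concurrent.futures import ThreadPoolExecutor

def transform(row):
    """A typical row-wise ETL transformation (illustrative only)."""
    return {"id": row["id"], "name": row["name"].strip().title()}

def parallel_etl(rows, partitions=4):
    # Data parallelism: split the extracted rows into partitions and
    # apply the same transformation to each partition concurrently.
    size = max(1, len(rows) // partitions)
    parts = [rows[i:i + size] for i in range(0, len(rows), size)]
    with ThreadPoolExecutor(max_workers=partitions) as pool:
        transformed = pool.map(lambda p: [transform(r) for r in p], parts)
    # The flattened result would then be handed to the load step.
    return [row for part in transformed for row in part]
```

    Task parallelism would additionally run independent pipeline steps (e.g. two unrelated dimension loads) concurrently.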

  3. ParCAT: A Parallel Climate Analysis Toolkit (United States)

    Haugen, B.; Smith, B.; Steed, C.; Ricciuto, D. M.; Thornton, P. E.; Shipman, G.


    Climate science has employed increasingly complex models and simulations to analyze the past and predict the future of our climate. The size and dimensionality of climate simulation data has been growing with the complexity of the models. This growth in data is creating a widening gap between the data being produced and the tools necessary to analyze large, high dimensional data sets. With single run data sets increasing into 10's, 100's and even 1000's of gigabytes, parallel computing tools are becoming a necessity in order to analyze and compare climate simulation data. The Parallel Climate Analysis Toolkit (ParCAT) provides basic tools that efficiently use parallel computing techniques to narrow the gap between data set size and analysis tools. ParCAT was created as a collaborative effort between climate scientists and computer scientists in order to provide efficient parallel implementations of the computing tools that are of use to climate scientists. Some of the basic functionalities included in the toolkit are the ability to compute spatio-temporal means and variances, differences between two runs and histograms of the values in a data set. ParCAT is designed to facilitate the "heavy lifting" that is required for large, multidimensional data sets. The toolkit does not focus on performing the final visualizations and presentation of results but rather, reducing large data sets to smaller, more manageable summaries. The output from ParCAT is provided in commonly used file formats (NetCDF, CSV, ASCII) to allow for simple integration with other tools. The toolkit is currently implemented as a command line utility, but will likely also provide a C library for developers interested in tighter software integration. Elements of the toolkit are already being incorporated into projects such as UV-CDAT and CMDX. There is also an effort underway to implement portions of the CCSM Land Model Diagnostics package using ParCAT in conjunction with Python and gnuplot. Par
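
    Two of the basic functionalities listed, spatio-temporal means and histograms, are simple enough to sketch in a streaming style that mirrors ParCAT's goal of reducing large data sets to smaller summaries. The code below is an illustration, not ParCAT's API; it processes one time step at a time so the full run never needs to fit in memory, and the function names are invented.

```python
def temporal_mean(timesteps):
    """Spatio-temporal mean: average each grid cell over all time steps,
    visiting one step at a time (streaming reduction)."""
    total, count = None, 0
    for grid in timesteps:            # each grid is a flat list of cell values
        if total is None:
            total = [0.0] * len(grid)
        for i, v in enumerate(grid):
            total[i] += v
        count += 1
    return [t / count for t in total]

def histogram(values, nbins, lo, hi):
    """Fixed-width histogram of the values in a data set."""
    bins = [0] * nbins
    width = (hi - lo) / nbins
    for v in values:
        k = min(nbins - 1, int((v - lo) / width))  # clamp top edge
        bins[k] += 1
    return bins
```

    In a parallel setting, each process would run these reductions over its own chunk of time steps and the partial sums/bins would be combined at the end.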

  4. Computational models of syntactic acquisition. (United States)

    Yang, Charles


    The computational approach to syntactic acquisition can be fruitfully pursued by integrating results and perspectives from computer science, linguistics, and developmental psychology. In this article, we first review some key results in computational learning theory and their implications for language acquisition. We then turn to examine specific learning models, some of which exploit distributional information in the input while others rely on a constrained space of hypotheses, yet both approaches share a common set of characteristics to overcome the learning problem. We conclude with a discussion of how computational models connect with the empirical study of child grammar, making the case for computationally tractable, psychologically plausible and developmentally realistic models of acquisition. WIREs Cogn Sci 2012, 3:205-213. doi: 10.1002/wcs.1154 For further resources related to this article, please visit the WIREs website. Copyright © 2011 John Wiley & Sons, Ltd.

  5. Unusual ictal foreign language automatisms in temporal lobe epilepsy. (United States)

    Soe, Naing Ko; Lee, Sang Kun


    Distinct brain regions can be specifically involved in different languages, and brain activation differs depending on language proficiency and on the age of language acquisition. Speech disturbances are observed in the majority of temporal lobe complex motor seizures. Ictal verbalization has significant lateralization value: 90% of patients with this manifestation had a seizure focus in the non-dominant temporal lobe. Although ictal speech automatisms are usually uttered in the patient's native language, ictal foreign language speech automatisms are unusual presentations of non-dominant temporal lobe epilepsy. The release of an isolated foreign language area could be possible depending on the pattern of ictal spreading in the non-dominant hemisphere. Most of the case reports of ictal foreign language speech automatisms concerned men. In this case report, we observed ictal foreign language automatisms in a middle-aged Korean woman.

  6. Image-based temporal alignment of echocardiographic sequences (United States)

    Danudibroto, Adriyana; Bersvendsen, Jørn; Mirea, Oana; Gerard, Olivier; D'hooge, Jan; Samset, Eigil


    Temporal alignment of echocardiographic sequences enables fair comparisons of multiple cardiac sequences by showing corresponding frames at given time points in the cardiac cycle. It is also essential for spatial registration of echo volumes, where several acquisitions are combined to enhance image quality or form a larger field of view. In this study, three different image-based temporal alignment methods were investigated: first, a method based on dynamic time warping (DTW); second, a spline-based method that optimized the similarity between temporal characteristic curves of the cardiac cycle using 1D cubic B-spline interpolation; and third, a piecewise modification of the spline-based method. These methods were tested on in-vivo data sets of 19 echo sequences. For each sequence, the mitral valve opening (MVO) time was manually annotated. The results showed that the average MVO timing error for all methods is well under the time resolution of the sequences.
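    Of the three methods, DTW is the most self-contained to illustrate. Below is a minimal sketch of the classic algorithm aligning two 1-D feature curves (e.g. one intensity value per frame), assuming an absolute-difference cost; the paper's actual feature curves and cost function are not specified here:

    ```python
    import numpy as np

    def dtw_path(a, b):
        """Classic dynamic time warping between two 1-D sequences,
        returning the alignment cost and the frame correspondence path."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        # Backtrack from the corner to recover which frames were matched.
        path, i, j = [], n, m
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        return D[n, m], path[::-1]
    ```

    The recovered path maps each frame of one cardiac cycle to its counterpart in the other, which is exactly the correspondence needed before spatial registration.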

  7. Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor

    Directory of Open Access Journals (Sweden)

    Yanzhi Zhao


    Full Text Available In the development of multi-dimensional force sensors, dimension coupling is the ubiquitous factor restricting the improvement of measurement accuracy. To effectively reduce the influence of dimension coupling on the parallel multi-dimensional force sensor, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, and the influence of friction on dimension coupling is effectively reduced by replacing sliding friction with rolling friction. In this paper, the mathematical model is established from the structural model of the parallel three-dimensional force sensor, and the modeling and analysis of mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the calculation results show that the mechanically decoupled parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. A load calibration and data acquisition experimental system was built, and calibration experiments were performed. According to the calibration experiments, the measurement error is less than 2.86% and the coupling error is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for applied research on parallel multi-dimensional force sensors.

  8. Basic parallel and distributed computing curriculum


    Tadonki, Claude


    International audience; With the advent of multi-core processors and their fast expansion, it is quite clear that parallel computing is now a genuine requirement in Computer Science and Engineering (and related) curricula. In addition to the pervasiveness of parallel computing devices, we should take into account the fact that there is a lot of existing software implemented in the sequential mode that thus needs to be adapted for parallel execution. Therefore, it is required to the...

  9. 07181 Introduction -- Parallel Universes and Local Patterns


    Berthold, Michael R.; Morik, Katharina; Siebes, Arno


    Learning in parallel universes and the mining for local patterns are both relatively new fields of research. Local pattern detection addresses the problem of identifying (small) deviations from an overall distribution of some underlying data in some feature space. Learning in parallel universes, on the other hand, deals with the analysis of objects which are given in different feature spaces, i.e. parallel universes; the aim is to find groups of objects which show ...

  10. Parallel processing for artificial intelligence 1

    CERN Document Server

    Kanal, LN; Kumar, V; Suttner, CB


    Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering and data mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discus

  11. Automatic Multilevel Parallelization Using OpenMP

    Directory of Open Access Journals (Sweden)

    Haoqiang Jin


    Full Text Available In this paper we describe the extension of the CAPO parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.

  12. Parallel processing from applications to systems

    CERN Document Server

    Moldovan, Dan I


    This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla

  13. Data Acquisition with GPUs: The DAQ for the Muon $g$-$2$ Experiment at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Gohn, W. [Kentucky U.


    Graphical Processing Units (GPUs) have recently become a valuable computing tool for the acquisition of data at high rates and for a relatively low cost. The devices work by parallelizing the code into thousands of threads, each executing a simple process, such as identifying pulses from a waveform digitizer. The CUDA programming library can be used to effectively write code to parallelize such tasks on Nvidia GPUs, providing a significant upgrade in performance over CPU-based acquisition systems. The muon $g$-$2$ experiment at Fermilab relies heavily on GPUs to process its data. The data acquisition system for this experiment must have the ability to create deadtime-free records from 700 $\\mu$s muon spills at a raw data rate of 18 GB per second. Data will be collected using 1296 channels of $\\mu$TCA-based 800 MSPS, 12-bit waveform digitizers and processed in a layered array of networked commodity processors with 24 GPUs working in parallel to perform a fast recording of the muon decays during the spill. The described data acquisition system is currently being constructed, and will be fully operational before the start of the experiment in 2017.
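    The per-sample independence described above is what makes pulse identification map so well onto GPU threads. The sketch below shows the same data-parallel pattern on the CPU, using NumPy's bulk evaluation as a stand-in for a one-thread-per-sample CUDA kernel; the threshold rule and waveform are invented for illustration, not the experiment's actual pulse criteria:

    ```python
    import numpy as np

    def find_pulse_starts(waveform, threshold):
        """Flag samples where the signal crosses the threshold upward.
        Every comparison is independent of its neighbours' results, so on a
        GPU each sample could be handled by its own thread; here NumPy
        evaluates all comparisons in bulk."""
        w = np.asarray(waveform)
        above = w > threshold
        # A pulse starts where a sample is above threshold but its
        # predecessor was not.
        return np.flatnonzero(above[1:] & ~above[:-1]) + 1

    samples = np.array([0, 1, 5, 6, 2, 0, 7, 7, 1])
    starts = find_pulse_starts(samples, threshold=3)  # indices of pulse onsets
    ```

    In a real digitizer pipeline this kernel would run once per channel per spill, with the resulting pulse indices (rather than raw waveforms) shipped off the GPU.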

  14. Independent slab-phase modulation combined with parallel imaging in bilateral breast MRI. (United States)

    Han, Misung; Beatty, Philip J; Daniel, Bruce L; Hargreaves, Brian A


    Independent slab-phase modulation allows three-dimensional imaging of multiple volumes without encoding the space between volumes, thus reducing scan time. Parallel imaging further accelerates data acquisition by exploiting coil sensitivity differences between volumes. This work compared bilateral breast image quality from self-calibrated parallel imaging reconstruction methods, namely modified sensitivity encoding (mSENSE), generalized autocalibrating partially parallel acquisitions (GRAPPA), and autocalibrated reconstruction for Cartesian sampling (ARC), for data with and without slab-phase modulation. The study showed an improvement in image quality when slab-phase modulation was incorporated. Geometry factors measured from phantom images were more homogeneous and lower on average when slab-phase modulation was used for both mSENSE and GRAPPA reconstructions. The resulting improvement in signal-to-noise ratio (SNR) was validated for in vivo images as well, using ARC instead of GRAPPA, illustrating average SNR efficiency increases of 5% with mSENSE and 8% with ARC based on region-of-interest analysis. Furthermore, aliasing artifacts from mSENSE reconstruction were reduced when slab-phase modulation was used. Overall, slab-phase modulation with parallel imaging improved image quality and efficiency for 3D bilateral breast imaging. (c) 2009 Wiley-Liss, Inc.

  15. Parallel Picture-Naming Tests: Development and Psychometric Properties for Farsi-Speaking Adults. (United States)

    Tahanzadeh, Behnoosh; Soleymani, Zahra; Jalaie, Shohre


    The present study describes the development and validation of two parallel picture-naming tests (PPNTs) as neuropsychological tools for evaluating word retrieval disorders in Farsi-speaking adults with and without aphasia. The development phase used the distributions of psycholinguistic variables (word frequency and age of acquisition) to select test items. Each parallel test consists of 109 line-drawings assigned to concrete nouns, arranged in order of increasing difficulty. Assessment of content validity indicated that all items were quite or highly relevant and clear. The psychometric features were tested on 30 normal adults and 10 matched individuals with aphasia. The results showed appropriate criterion validity. The parallel tests discriminated between subjects with and without naming difficulties. The tests were internally consistent. Each test form showed reasonable test-retest reliability. The correlation between the scores from both test forms indicated good parallel reliability. The cut-off point at which the tests reached the highest level of sensitivity and specificity was observed to be 86 correct responses. The percentage of correct responses for each item correlated strongly with frequency, age of acquisition, and name agreement. The overall findings support the validity and reliability of the PPNTs and suggest that these tests are appropriate for use in research and for clinical purposes.

  16. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven


    parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  17. Mergers and Acquisitions - Case Study


    Vedele, Sebastiano


    The thesis discusses mergers & acquisitions in general, covering definitions, differences, and the reasons behind an M&A. I have analyzed what a merger is and what an acquisition is, why companies combine through an M&A, and what the advantages and disadvantages of an M&A are. The work is then followed by a case study, which focuses on Fiat and Chrysler. With regard to this point, the case touches all the steps of the agreement between the two automakers, providing numbers, perce...

  18. Parallel reconstruction in accelerated multivoxel MR spectroscopy

    NARCIS (Netherlands)

    Boer, V. O.; Klomp, D. W. J.|info:eu-repo/dai/nl/298206382; Laterra, J.; Barker, P. B.

    Purpose: To develop the simultaneous acquisition of multiple voxels in localized MR spectroscopy (MRS) using sensitivity encoding, allowing reduced total scan time compared to conventional sequential single voxel (SV) acquisition methods. Methods: Dual volume localization was used to simultaneously

  19. Parallel Algorithms for Online Track Finding for the \\bar{{\\rm{P}}}ANDA Experiment at FAIR (United States)

    Bianchi, L.; Herten, A.; Ritman, J.; Stockmanns, T.; PANDA Collaboration


    \\bar{{\\rm{P}}}ANDA is a future hadron and nuclear physics experiment at the FAIR facility under construction in Darmstadt, Germany. Unlike the majority of current experiments, \\bar{{\\rm{P}}}ANDA’s strategy for data acquisition is based on online event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports on the status of the development of algorithms for the reconstruction of charged particle tracks, targeted towards online data processing applications, designed for execution on data-parallel processors such as GPUs (Graphic Processing Units). Two parallel algorithms for track finding, derived from the Circle Hough algorithm, are being developed to extend the parallelism to all stages of the algorithm. The concepts of the algorithms are described, along with preliminary results and considerations about their implementations and performance.

  20. Lossless Three-Dimensional Parallelization in Digitally Scanned Light-Sheet Fluorescence Microscopy. (United States)

    Dean, Kevin M; Fiolka, Reto


    We introduce a concept that enables parallelized three-dimensional imaging throughout large volumes with isotropic 300-350 nm resolution. By staggering high aspect ratio illumination beams laterally and axially within the depth of focus of a digitally scanned light-sheet fluorescence microscope (LSFM), multiple image planes can be simultaneously imaged with minimal cross-talk and light loss. We present a first demonstration of this concept for parallelized imaging by synthesizing two light-sheets with nonlinear Bessel beams and perform volumetric imaging of fluorescent beads and invasive breast cancer cells. This work demonstrates that in principle any digitally scanned LSFM can be parallelized in a lossless manner, enabling drastically faster volumetric image acquisition rates for a given sample brightness and detector technology.

  1. Self-mixing flow sensor using a monolithic VCSEL array with parallel readout. (United States)

    Lim, Yah Leng; Kliese, Russell; Bertling, Karl; Tanimizu, Katsuyoshi; Jacobs, P A; Rakić, Aleksandar D


    The self-mixing sensing technique is a compact, interferometric sensing technique that can be used for measuring fluid flows. In this work, we demonstrate a parallel readout self-mixing flow velocity sensing system based on a monolithic Vertical-Cavity Surface-Emitting Laser (VCSEL) array. The parallel sensing scheme enables high-resolution full-field imaging systems employing electronic scanning with faster acquisition rates than mechanical scanning systems. The self-mixing signal is acquired from the variation in VCSEL junction voltage, thus markedly reducing the system complexity. The system was validated by measuring velocity distribution of fluid in a custom built diverging-converging planar flow channel. The results obtained agree well with simulation and demonstrate the feasibility of high frame-rate and resolution parallel self-mixing sensors.

  2. Parallel Simulation of Chip-Multiprocessor Architectures

    National Research Council Canada - National Science Library

    Chidester, Matthew C; George, Alan D


    Chip-multiprocessor (CMP) architectures present a challenge for efficient simulation, combining the requirements of a detailed microprocessor simulator with that of a tightly-coupled parallel system...

  3. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster


    Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  4. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille


    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
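    The report's engines are implemented in C++ against the VTK API, but the underlying statistic is simple enough to sketch independently. A minimal sample autocorrelation for lags 0..max_lag, shown here in Python purely to illustrate what an auto-correlative engine computes (this is not the VTK interface):

    ```python
    import numpy as np

    def autocorrelation(x, max_lag):
        """Sample autocorrelation:
        r_k = sum((x_t - m)(x_{t+k} - m)) / sum((x_t - m)^2),
        where m is the sample mean. r_0 is always 1."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        denom = np.dot(x, x)
        return np.array([np.dot(x[: len(x) - k], x[k:]) / denom
                         for k in range(max_lag + 1)])
    ```

    The lagged dot products decompose into per-block partial sums, which is what makes a distributed, scalable version of this engine possible.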

  5. Parallel QR Decomposition for Electromagnetic Scattering Problems

    National Research Council Canada - National Science Library

    Boleng, Jeff


    This report introduces a new parallel QR decomposition algorithm. Test results are presented for several problem sizes, numbers of processors, and data from the electromagnetic scattering problem domain...

  6. High performance parallel I/O

    CERN Document Server



    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  7. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch


    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  8. Parallel Algorithms for the Exascale Era

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory


    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
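    The reproducibility issue mentioned above arises because floating-point addition is not associative: partial sums taken in a different order (for example, on a different number of cores) can differ in the last bits. One standard remedy is to fix the reduction order independently of the worker count; the sketch below illustrates that idea and is not LANL's actual algorithm:

    ```python
    def tree_sum(values):
        """Pairwise reduction in an order fixed by global element index, so
        the result is bit-identical no matter how many workers produced the
        leaves, as long as the element order itself is fixed."""
        vals = list(values)
        while len(vals) > 1:
            vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                    for i in range(0, len(vals), 2)]
        return vals[0]

    # Floating-point addition is not associative:
    a, b, c = 0.1, 0.2, 0.3
    assert (a + b) + c != a + (b + c)   # two "parallel" groupings disagree
    # tree_sum always uses the same grouping, hence the same bits every run.
    ```

    Reproducible global sums matter in practice because regression testing and debugging of parallel codes depend on runs with different core counts producing identical answers.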

  9. Parallel thermal radiation transport in two dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R.P.; Ball, S.R. [AWE Aldermaston (United Kingdom)


    This paper describes the distributed-memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state-of-the-art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 Tera-Flop massively parallel processing (MPP) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  10. Parallel transmission techniques in magnetic resonance imaging: experimental realization, applications and perspectives; Parallele Sendetechniken in der Magnetresonanztomographie: experimentelle Realisierung, Anwendungen und Perspektiven

    Energy Technology Data Exchange (ETDEWEB)

    Ullmann, P.


    and parallel reception to further reduce the acquisition time. (orig.)

  11. [First language acquisition research and theories of language acquisition]. (United States)

    Miller, S; Jungheim, M; Ptok, M


    In principle, a child can seemingly easily acquire any given language. First language acquisition follows a certain pattern which to some extent is found to be language independent. Since time immemorial, it has been of interest why children are able to acquire language so easily. Different disciplinary and methodological orientations addressing this question can be identified. A selective literature search in PubMed and Scopus was carried out and relevant monographs were considered. Different, partially overlapping phases can be distinguished in language acquisition research: whereas in ancient times, deprivation experiments were carried out to discover the "original human language", the era of diary studies began in the mid-19th century. From the mid-1920s onwards, behaviouristic paradigms dominated this field of research; interests were focussed on the determination of normal, average language acquisition. The subsequent linguistic period was strongly influenced by the nativist view of Chomsky and the constructivist concepts of Piaget. Speech comprehension, the role of speech input and the relevance of genetic disposition became the centre of attention. The interactionist concept led to a revival of the convergence theory according to Stern. Each of these four major theories--behaviourism, cognitivism, interactionism and nativism--has given valuable and unique impulses, but no single theory is universally accepted to provide an explanation of all aspects of language acquisition. Moreover, it can be critically questioned whether clinicians consciously refer to one of these theories in daily routine work and whether therapies are then based on this concept. It remains to be seen whether or not new theories of grammar, such as the so-called construction grammar (CxG), will eventually change the general concept of language acquisition.

  12. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano


    Full Text Available In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphic processor units. It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, analysis and monitoring of the movement. These features allow the construction of a robust representation of the environment and interpret the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment by receiving images from multiple acquisition devices at video frequency. Offering relevant information to higher level systems, monitoring and making decisions in real time, it must accomplish a set of requirements, such as: time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  13. Instrument Variables for Reducing Noise in Parallel MRI Reconstruction

    Directory of Open Access Journals (Sweden)

    Yuchou Chang


    Full Text Available Generalized autocalibrating partially parallel acquisition (GRAPPA) has been a widely used parallel MRI technique. However, noise deteriorates the reconstructed image as the reduction factor increases, or even at low reduction factors for some noisy datasets. Noise, initially generated in the scanner, propagates noise-related errors through the fitting and interpolation procedures of GRAPPA and distorts the final reconstructed image quality. The basic idea we propose to improve GRAPPA is to remove noise from a system identification perspective. In this paper, we first analyze the GRAPPA noise problem from a noisy input-output system perspective; then, a new framework based on the errors-in-variables (EIV) model is developed for analyzing the noise generation mechanism in GRAPPA and designing a concrete method, instrumental variables (IV) GRAPPA, to remove noise. The proposed EIV framework opens the possibility that noiseless GRAPPA reconstruction could be achieved by existing methods that solve the EIV problem other than the IV method. Experimental results show that the proposed reconstruction algorithm removes noise better than conventional GRAPPA, as validated with both phantom and in vivo brain data.
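    The errors-in-variables problem and the instrumental-variables fix can be illustrated outside MRI. In this generic sketch on synthetic data (not the paper's GRAPPA formulation), ordinary least squares is biased toward zero when the regressor itself is measured with noise, while an instrument correlated with the true signal but carrying independent noise recovers the coefficient:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, beta = 50_000, 2.0
    x_true = rng.normal(size=n)
    x_obs = x_true + rng.normal(scale=0.5, size=n)  # noisy regressor (EIV)
    z = x_true + rng.normal(scale=0.5, size=n)      # instrument: independent noise
    y = beta * x_true + rng.normal(scale=0.1, size=n)

    # OLS on the noisy regressor is attenuated: roughly beta / (1 + 0.25).
    beta_ols = (x_obs @ y) / (x_obs @ x_obs)
    # The IV estimator replaces x with z in the cross terms and is consistent.
    beta_iv = (z @ y) / (z @ x_obs)
    ```

    In the paper's setting, the "regressor" is the noisy measured k-space data used to fit the GRAPPA interpolation weights; the IV idea is the same substitution applied there.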

  14. BOLD sensitivity and SNR characteristics of parallel imaging-accelerated single-shot multi-echo EPI for fMRI. (United States)

    Bhavsar, Saurabh; Zvyagintsev, Mikhail; Mathiak, Klaus


    Echo-planar imaging (EPI) is a standard procedure in functional magnetic resonance imaging (fMRI) for measuring changes in the blood oxygen level-dependent (BOLD) signal associated with neuronal activity. The images obtained from fMRI with EPI, however, exhibit signal dropouts and geometric distortions. Parallel imaging (PI), due to its short readout, accelerates image acquisition and might reduce dephasing in phase-encoding direction. The concomitant loss of signal-to-noise ratio (SNR) might be compensated through single-shot multi-echo EPI (mEPI). We systematically compared the temporal SNR and BOLD sensitivity of single echoes (TE=15, 45, and 75ms) and contrast-optimized mEPI with and without PI and mEPI-based denoising. Audio-visual stimulation under natural viewing conditions activated distributed neural networks. Heterogeneous SNR, noise gain, and sensitivity maps emerged. In single echoes, SNR and BOLD sensitivity followed the predicted dependency on echo time (TE) and were reduced under PI. However, the combination of echoes with mEPI recovered the quality parameters and increased BOLD signal changes at circumscribed fronto-polar and deep brain structures. We suggest applying PI only in combination with mEPI to reduce imaging artifacts and conserve BOLD sensitivity. © 2013.

  15. Temporal expectancies driven by self- and externally generated rhythms. (United States)

    Jones, Alexander; Hsu, Yi-Fang; Granjon, Lionel; Waszak, Florian


    The dynamic attending theory proposes that rhythms entrain periodic fluctuations of attention which modulate the gain of sensory input. However, temporal expectancies can also be driven by the mere passage of time (foreperiod effect). It is currently unknown how these two types of temporal expectancy relate to each other, i.e. whether they work in parallel and have distinguishable neural signatures. The current research addresses this issue. Participants either tapped a 1Hz rhythm (active task) or were passively presented with the same rhythm using tactile stimulators (passive task). Based on this rhythm an auditory target was then presented early, in synchrony, or late. Behavioural results were in line with the dynamic attending theory as RTs were faster for in- compared to out-of-synchrony targets. Electrophysiological results suggested self-generated and externally induced rhythms to entrain neural oscillations in the delta frequency band. Auditory ERPs showed evidence of two distinct temporal expectancy processes. Both tasks demonstrated a pattern which followed a linear foreperiod effect. In the active task, however, we also observed an ERP effect consistent with the dynamic attending theory. This study shows that temporal expectancies generated by a rhythm and expectancy generated by the mere passage of time can work in parallel and sheds light on how these mechanisms are implemented in the brain. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Step-parallel algorithms for stiff initial value problems

    NARCIS (Netherlands)

    W.A. van der Veen


    For the parallel integration of stiff initial value problems, three types of parallelism can be employed: 'parallelism across the problem', 'parallelism across the method' and 'parallelism across the steps'. Recently, methods based on Runge-Kutta schemes that use parallelism across the

  17. Suffix Knowledge: Acquisition and Applications (United States)

    Ward, Jeremy; Chuenjundaeng, Jitlada


    The purpose of this study is to investigate L2 learners' knowledge of complex word part analysis ("word-building"), with particular reference to two issues: suffix acquisition and to the use of word families as a counting tool. Subjects were two groups of EAP students in a Thai university. Results suggest that (1) the use of word…

  18. Instructional Model for Concept Acquisition. (United States)

    Tennyson, Robert D.

    The purpose of this paper is to demonstrate the feasibility of applying research variables for concept acquisition into a generalized instructional model for teaching concepts. This paper does not present the methodology for the decision/selection stages in designing the actual instruction task, but offers references to other sources which give…

  19. [Language acquisition and statistical learning]. (United States)

    Breitenstein, C; Knecht, S


    Statistical learning is a basic mechanism of information processing in the human brain. The purpose lies in the extraction of probabilistic regularities from the multitude of sensory inputs. Principles of statistical learning contribute significantly to language acquisition and presumably also to language recovery following stroke. The empirical database presented in this manuscript demonstrates that the process of word segmentation, acquisition of a lexicon, and acquisition of simple grammatical rules can be entirely explained through statistical learning. Statistical learning is mediated by changes in synaptic weights in neuronal networks. The concept therefore stands at the transition to molecular biology and pharmacology of the neuronal synapse. It still remains to be shown whether all aspects of language acquisition can be explained through statistical learning and which regions of the brain are involved in or capable of statistical learning. Principles of effective language training are obvious already. Most important is the massive, repeated interactive exposure. Conscious processing of the stimulus material may not be essential. The crucial principle is a high co-occurrence of language and corresponding sensory processes. This requires a more intense training frequency than traditional aphasia treatment programs provide.
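    The word-segmentation result mentioned above rests on a very small statistic: the transitional probability between adjacent syllables, which dips at word boundaries. A toy sketch in the style of such experiments (the two-"word" stream below is invented for illustration):

    ```python
    from collections import Counter

    def transitional_probs(syllables):
        """TP(a -> b) = count(pair a,b) / count(a), over adjacent syllables."""
        pairs = Counter(zip(syllables, syllables[1:]))
        firsts = Counter(syllables[:-1])
        return {ab: n / firsts[ab[0]] for ab, n in pairs.items()}

    # Two invented "words", ba-di and ku-pa, in a continuous syllable stream:
    stream = "ba di ku pa ba di ba di ku pa".split()
    tps = transitional_probs(stream)
    # Within-word transitions stay high (TP(ba->di) = 1.0) while cross-word
    # transitions are lower (TP(di->ku) = 2/3), so dips in TP mark candidate
    # word boundaries even without pauses in the stream.
    ```

    This is the computation behind the claim that word segmentation can emerge from statistics alone: the learner needs only to track pair frequencies.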

  20. Profiling Vocabulary Acquisition in Irish (United States)

    O'Toole, Ciara; Fletcher, Paul


    Investigations into early vocabulary development, including the timing of the acquisition of nouns, verbs and closed-class words, have produced conflicting results, both within and across languages. Studying vocabulary development in Irish can contribute to this area, as it has potentially informative features such as a VSO word order, and…

  1. Analog Input Data Acquisition Software (United States)

    Arens, Ellen


    DAQ Master Software allows users to easily set up a system to monitor up to five analog input channels and save the data after acquisition. This program was written in LabVIEW 8.0, and requires the LabVIEW runtime engine 8.0 to run the executable.

  2. Essays on mergers and acquisitions

    NARCIS (Netherlands)

    Faelten, A.I.


    "Essays on Mergers and Acquisitions" tackles some of the most prominent business challenges related to M&A activity. The Introduction examines the reasons why deals fail through well-known case studies; Chapter 1 presents a new index measuring countries' M&A maturity worldwide; Chapter 2 focuses on the…

  3. Behavioural View of Language Acquisition

    Indian Academy of Sciences (India)

    Rajeev Sangal. Behavioural View of Language Acquisition. Book Review. Resonance – Journal of Science Education, Volume 13, Issue 5, May 2008, pp. 487-489.

  4. Language Acquisition and Language Revitalization (United States)

    O'Grady, William; Hattori, Ryoko


    Intergenerational transmission, the ultimate goal of language revitalization efforts, can only be achieved by (re)establishing the conditions under which an imperiled language can be acquired by the community's children. This paper presents a tutorial survey of several key points relating to language acquisition and maintenance in children,…

  5. The SINQ data acquisition environment

    Energy Technology Data Exchange (ETDEWEB)

    Maden, D. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)


    The data acquisition environment for the neutron scattering instruments supported by LNS at SINQ is described. The intention is to provide future users with the necessary background to the computing facilities on site rather than to present a user manual for the on-line system. (author) 5 figs., 6 refs.

  6. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34 (United States)

    von Davier, Matthias


    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  7. Spectrotemporal CT data acquisition and reconstruction at low dose

    Energy Technology Data Exchange (ETDEWEB)

    Clark, Darin P.; Badea, Cristian T., E-mail: [Department of Radiology, Center for In Vivo Microscopy, Duke University Medical Center, Durham, North Carolina 27710 (United States); Lee, Chang-Lung [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Kirsch, David G. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 and Department of Pharmacology and Cancer Biology, Duke University Medical Center, Durham, North Carolina 27710 (United States)


    Purpose: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. Methods: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time and energy averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time and energy averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image domain filtration approach, which the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time and energy averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling.
The authors solved the 5D reconstruction
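    The low-rank + sparse split at the heart of this rank-sparsity framework can be illustrated on an ordinary 2D matrix. The sketch below is a generic robust-PCA-style decomposition via an inexact augmented Lagrangian scheme, not the authors' 5D reconstruction; the function name and parameter defaults are illustrative assumptions:

    ```python
    import numpy as np

    def shrink(X, tau):
        """Soft-threshold entries of X toward zero (sparse proximal step)."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svd_threshold(X, tau):
        """Soft-threshold singular values (nuclear-norm proximal step)."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def rpca_ialm(M, lam=None, tol=1e-7, max_iter=200):
        """Split M into low-rank L plus sparse S (inexact ALM; illustrative)."""
        if lam is None:
            lam = 1.0 / np.sqrt(max(M.shape))   # standard default weight
        norm_two = np.linalg.norm(M, 2)
        Y = M / max(norm_two, np.abs(M).max() / lam)  # dual-variable init
        mu, rho = 1.25 / norm_two, 1.5
        S = np.zeros_like(M)
        for _ in range(max_iter):
            L = svd_threshold(M - S + Y / mu, 1.0 / mu)
            S = shrink(M - L + Y / mu, lam / mu)
            residual = M - L - S
            Y = Y + mu * residual
            mu *= rho
            if np.linalg.norm(residual) <= tol * np.linalg.norm(M):
                break
        return L, S
    ```

    The 5D method shares projections between the spectral and temporal subproblems on top of such a decomposition; this sketch only shows the matrix-level mechanics.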

  8. Buffer gas acquisition and storage (United States)

    Parrish, Clyde F.; Lueck, Dale E.; Jennings, Paul A.


    The acquisition and storage of buffer gases (primarily argon and nitrogen) from the Mars atmosphere provides a valuable resource for blanketing and pressurizing fuel tanks and as a buffer gas for breathing air for manned missions. During the acquisition of carbon dioxide (CO2), whether by sorption bed or cryo-freezer, the accompanying buffer gases build up in the carbon dioxide acquisition system, reduce the flow of CO2 to the bed, and lower system efficiency. It is this build-up of buffer gases, which must be removed for efficient capture of CO2, that provides a convenient source. Removal of this buffer gas barrier greatly improves the charging rate of the CO2 acquisition bed and, thereby, maintains the fuel production rates required for a successful mission. Consequently, the acquisition, purification, and storage of these buffer gases are important goals of ISRU plans. Purity of the buffer gases is a concern; e.g., if the CO2 freezer operates at 140 K, the composition of the inert gas would be approximately 21 percent CO2, 50 percent nitrogen, and 29 percent argon. Although there are several approaches that could be used, this effort focused on a hollow-fiber membrane (HFM) separation method. This study measured the permeation rates of CO2, nitrogen (N2), and argon (Ar) through a multiple-membrane system and the individual membranes from room temperature to 193 K and 10 kPa to 300 kPa. Concentrations were measured with a gas chromatograph. The end result was data necessary to design a system that could separate CO2, N2, and Ar.

  9. Buffer Gas Acquisition and Storage (United States)

    Parrish, Clyde F.; Lueck, Dale E.; Jennings, Paul A.; Callahan, Richard A.; Delgado, H. (Technical Monitor)


    The acquisition and storage of buffer gases (primarily argon and nitrogen) from the Mars atmosphere provides a valuable resource for blanketing and pressurizing fuel tanks and as a buffer gas for breathing air for manned missions. During the acquisition of carbon dioxide (CO2), whether by sorption bed or cryo-freezer, the accompanying buffer gases build up in the carbon dioxide acquisition system, reduce the flow of CO2 to the bed, and lower system efficiency. It is this build-up of buffer gases, which must be removed for efficient capture of CO2, that provides a convenient source. Removal of this buffer gas barrier greatly improves the charging rate of the CO2 acquisition bed and, thereby, maintains the fuel production rates required for a successful mission. Consequently, the acquisition, purification, and storage of these buffer gases are important goals of ISRU plans. Purity of the buffer gases is a concern; e.g., if the CO2 freezer operates at 140 K, the composition of the inert gas would be approximately 21 percent CO2, 50 percent nitrogen, and 29 percent argon. Although there are several approaches that could be used, this effort focused on a hollow-fiber membrane (HFM) separation method. This study measured the permeation rates of CO2, nitrogen (N2), and argon (Ar) through a multiple-membrane system and the individual membranes from room temperature to 193 K and 10 kPa to 300 kPa. Concentrations were measured with a gas chromatograph that used a thermal conductivity (TCD) detector with helium (He) as the carrier gas. The general trend as the temperature was lowered was for the membranes to become more selective. In addition, the relative permeation rates between the three gases changed with temperature. The end result was to provide design parameters that could be used to separate CO2 from N2 and Ar.

  10. Evalueringsrapport: Projekt Parallel Pædagogik

    DEFF Research Database (Denmark)

    Andreasen, Karen Egedal; Hviid, Marianne Kemeny


    Evaluation of development work on parallel pedagogy ("parallel pædagogik") at VUC Sønderjylland and VUC FYN & FYN's HF-kursus.

  11. Parallel approach in RDF query processing (United States)

    Vajgl, Marek; Parenica, Jan


    The parallel approach is nowadays a very cheap way to increase computational power, owing to the availability of multithreaded computational units, which have become a standard part of personal computers and notebooks and are in widespread use. This contribution presents experiments on how the evaluation of a computationally complex inference algorithm over RDF data can be parallelized on graphics cards to decrease computation time.

  12. Parallel Computing Strategies for Irregular Algorithms (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)


    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  13. Serial Order: A Parallel Distributed Processing Approach. (United States)

    Jordan, Michael I.

    Human behavior shows a variety of serially ordered action sequences. This paper presents a theory of serial order which describes how sequences of actions might be learned and performed. In this theory, parallel interactions across time (coarticulation) and parallel interactions across space (dual-task interference) are viewed as two aspects of a…

  14. Parallel Evaluation of Multi-join Queries

    NARCIS (Netherlands)

    Wilschut, A.N.; Flokstra, Jan; Apers, Peter M.G.

    A number of execution strategies for parallel evaluation of multi-join queries have been proposed in the literature. In this paper we give a comparative performance evaluation of four execution strategies by implementing all of them on the same parallel database system, PRISMA/DB. Experiments have

  15. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...

  16. Parallel Narrative Structure in Paul Harding's "Tinkers" (United States)

    Çirakli, Mustafa Zeki


    The present paper explores the implications of parallel narrative structure in Paul Harding's "Tinkers" (2009). Besides primarily recounting the two sets of parallel narratives, "Tinkers" also comprises seemingly unrelated fragments such as excerpts from clock repair manuals and diaries. The main stories, however, told…

  17. Customizable Memory Schemes for Data Parallel Architectures

    NARCIS (Netherlands)

    Gou, C.


    Memory system efficiency is crucial for any processor to achieve high performance, especially in the case of data parallel machines. Processing capabilities of parallel lanes will be wasted when data requests are not accomplished in a sustainable and timely manner. Irregular vector memory accesses

  18. Parallel transposition of sparse data structures

    DEFF Research Database (Denmark)

    Wang, Hao; Liu, Weifeng; Hou, Kaixi


    Many applications in computational sciences and social sciences exploit sparsity and connectivity of acquired data. Even though many parallel sparse primitives such as sparse matrix-vector (SpMV) multiplication have been extensively studied, some other important building blocks, e.g., parallel...

  19. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad


    This report reflects my work on parallelization of TMVA machine learning algorithms, integrated into the ROOT Data Analysis Framework, during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing was applied, the parallelization techniques, and the resulting changes in execution time as the number of workers varies.

  20. Task Parallelism and Data Distribution: An Overview of Explicit Parallel Programming Languages


    Khaldi, Dounia; Jouvelot, Pierre,; Ancourt, Corinne; Irigoin, François


    Programming parallel machines as effectively as sequential ones would ideally require a language that provides high-level programming constructs to avoid the programming errors frequent when expressing parallelism. Since task parallelism is considered more error-prone than data parallelism, we survey six popular and efficient parallel language designs that tackle this difficult issue: Cilk, Chapel, X10, Habanero-Java, OpenMP and OpenCL. Using as single running...

  1. Data acquisition system for the MuLan muon lifetime experiment

    Energy Technology Data Exchange (ETDEWEB)

    Tishchenko, V.; Battu, S.; Cheekatmalla, S. [Department of Physics and Astronomy, University of Kentucky, Lexington, KY 40506 (United States); Chitwood, D.B. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States); Dhamija, S. [Department of Physics and Astronomy, University of Kentucky, Lexington, KY 40506 (United States); Gorringe, T.P. [Department of Physics and Astronomy, University of Kentucky, Lexington, KY 40506 (United States)], E-mail:; Gray, F. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States); Department of Physics, University of California, Berkeley, CA 94720 (United States); Lynch, K.R.; Logashenko, I. [Department of Physics, Boston University, Boston, MA 02215 (United States); Rath, S. [Department of Physics and Astronomy, University of Kentucky, Lexington, KY 40506 (United States); Webber, D.M. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States)


    We describe the data acquisition system for the MuLan muon lifetime experiment at Paul Scherrer Institute. The system was designed to record muon decays at rates up to 1 MHz and acquire data at rates up to 60 MB/s. The system employed a parallel network of dual-processor machines and repeating acquisition cycles of deadtime-free time segments in order to reach the design goals. The system incorporated a versatile scheme for control and diagnostics and a custom web interface for monitoring experimental conditions.

  2. Monitoring and Acquisition Real-time System (MARS) (United States)

    Holland, Corbin


    MARS is a graphical user interface (GUI) written in MATLAB and Java, allowing the user to configure and control the Scalable Parallel Architecture for Real-Time Acquisition and Analysis (SPARTAA) data acquisition system. SPARTAA not only acquires data, but also allows for complex algorithms to be applied to the acquired data in real time. The MARS client allows the user to set up and configure all settings regarding the data channels attached to the system, as well as have complete control over starting and stopping data acquisition. It provides a unique "Test" programming environment, allowing the user to create tests consisting of a series of alarms, each of which contains any number of data channels. Each alarm is configured with a particular algorithm, determining the type of processing that will be applied on each data channel and tested against a defined threshold. Tests can be uploaded to SPARTAA, thereby teaching it how to process the data. The uniqueness of MARS is in its capability to be easily adapted to many test configurations. MARS sends and receives protocols via TCP/IP, which allows for quick integration into almost any test environment. The use of MATLAB and Java as the programming languages allows for developers to integrate the software across multiple operating platforms.
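    The test/alarm/channel hierarchy described above can be sketched as plain data structures. All names and fields below (Alarm, AcqTest, the RMS algorithm) are hypothetical illustrations of the concept, not MARS's actual schema or protocol:

    ```python
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List
    import math

    def rms(samples: List[float]) -> float:
        """Root-mean-square of a channel's samples (one possible alarm algorithm)."""
        return math.sqrt(sum(x * x for x in samples) / len(samples))

    @dataclass
    class Alarm:
        name: str
        channels: List[str]                      # data channels this alarm watches
        algorithm: Callable[[List[float]], float]  # processing applied per channel
        threshold: float

        def evaluate(self, samples: Dict[str, List[float]]) -> bool:
            """Trip if any monitored channel's processed value exceeds the threshold."""
            return any(self.algorithm(samples[ch]) > self.threshold
                       for ch in self.channels)

    @dataclass
    class AcqTest:
        name: str
        alarms: List[Alarm] = field(default_factory=list)

        def run(self, samples: Dict[str, List[float]]) -> Dict[str, bool]:
            """Evaluate every alarm in the test against one acquisition frame."""
            return {alarm.name: alarm.evaluate(samples) for alarm in self.alarms}
    ```

    In the real system such a test definition would be serialized and uploaded over TCP/IP; here the structure is only evaluated in-process.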

  3. Spatio-Temporal Rule Mining

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach


    Recent advances in communication and information technology, such as the increasing accuracy of GPS technology and the miniaturization of wireless communication devices pave the road for Location-Based Services (LBS). To achieve high quality for such services, spatio-temporal data mining techniques...... are needed. In this paper, we describe experiences with spatio-temporal rule mining in a Danish data mining company. First, a number of real world spatio-temporal data sets are described, leading to a taxonomy of spatio-temporal data. Second, the paper describes a general methodology that transforms...... the spatio-temporal rule mining task to the traditional market basket analysis task and applies it to the described data sets, enabling traditional association rule mining methods to discover spatio-temporal rules for LBS. Finally, unique issues in spatio-temporal rule mining are identified and discussed....
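    The transformation mentioned above, recasting spatio-temporal data so that traditional market basket analysis applies, can be sketched as follows. The `location@time-slot` item encoding is an illustrative assumption, not the paper's exact scheme:

    ```python
    from collections import Counter
    from itertools import combinations

    def to_baskets(records, slot_minutes=60):
        """Pivot (user, location, timestamp-in-minutes) triples into per-user
        baskets of 'location@slot' items, so ordinary association rule mining
        can run on spatio-temporal data."""
        baskets = {}
        for user, loc, t in records:
            baskets.setdefault(user, set()).add(f"{loc}@{t // slot_minutes}")
        return list(baskets.values())

    def frequent_pairs(baskets, min_support=2):
        """Toy stand-in for Apriori: frequent 2-itemsets by absolute support."""
        counts = Counter()
        for basket in baskets:
            for pair in combinations(sorted(basket), 2):
                counts[pair] += 1
        return {p: c for p, c in counts.items() if c >= min_support}
    ```

    A frequent pair such as ("home@8", "office@9") is exactly the kind of spatio-temporal rule an LBS provider could exploit.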

  4. Parallel acceleration for modeling of calcium dynamics in cardiac myocytes. (United States)

    Liu, Ke; Yao, Guangming; Yu, Zeyun


    Spatial-temporal calcium dynamics due to calcium release, buffering, and re-uptaking plays a central role in studying excitation-contraction (E-C) coupling in both healthy and defected cardiac myocytes. In our previous work, partial differential equations (PDEs) had been used to simulate calcium dynamics with realistic geometries extracted from electron microscopic imaging data. However, the computational costs of such simulations are very high on a single processor. To alleviate this problem, we have accelerated the numerical simulations of calcium dynamics by using graphics processing units (GPUs). Computational performance and simulation accuracy are compared with those based on a single CPU and another popular parallel computing technique, OpenMP.

  5. Differences Between Distributed and Parallel Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Maccabe, A.B.; Rissen, R.


    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  6. Parallel tempering for the traveling salesman problem

    Energy Technology Data Exchange (ETDEWEB)

    Percus, Allon [Los Alamos National Laboratory; Wang, Richard [UCLA MATH DEPT; Hyman, Jeffrey [UCLA MATH DEPT; Caflisch, Russel [UCLA MATH DEPT


    We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximation to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
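    The scheme described above can be sketched for a toy TSP instance: independent Metropolis chains at different temperatures, with occasional replica exchanges between neighbouring temperatures. A minimal illustrative implementation (parameter choices are assumptions, not the paper's):

    ```python
    import math, random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                   for i in range(len(tour)))

    def two_opt(tour, rng):
        """Propose a 2-opt move: reverse a randomly chosen segment."""
        i, j = sorted(rng.sample(range(len(tour)), 2))
        return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

    def parallel_tempering_tsp(dist, temps, sweeps=2000, seed=0):
        """One replica per temperature (temps ascending); Metropolis moves within
        each replica, plus occasional swaps between neighbouring temperatures."""
        rng = random.Random(seed)
        n = len(dist)
        replicas = [list(range(n)) for _ in temps]
        energies = [tour_length(t, dist) for t in replicas]
        for _ in range(sweeps):
            for k, T in enumerate(temps):            # within-replica moves
                cand = two_opt(replicas[k], rng)
                e = tour_length(cand, dist)
                if e <= energies[k] or rng.random() < math.exp((energies[k] - e) / T):
                    replicas[k], energies[k] = cand, e
            k = rng.randrange(len(temps) - 1)        # attempt a swap k <-> k+1
            d_beta = 1.0 / temps[k] - 1.0 / temps[k + 1]
            if rng.random() < math.exp(min(0.0, d_beta * (energies[k] - energies[k + 1]))):
                replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]
                energies[k], energies[k + 1] = energies[k + 1], energies[k]
        best = min(range(len(temps)), key=energies.__getitem__)
        return replicas[best], energies[best]
    ```

    Hot replicas explore freely while cold replicas refine, and the exchange step lets good configurations migrate toward low temperature.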

  7. Broadcasting a message in a parallel computer (United States)

    Berg, Jeremy E [Rochester, MN; Faraj, Ahmad A [Rochester, MN


    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
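    One way to picture a Hamiltonian-path broadcast is on a small 2D mesh: build a serpentine path that visits every node once, then forward the root's message hop by hop along it. The sketch below is illustrative only (a plane of a mesh, not the patented network):

    ```python
    def hamiltonian_path(rows, cols):
        """Serpentine (boustrophedon) path visiting every node of a rows x cols
        mesh exactly once; consecutive nodes are always mesh neighbours."""
        path = []
        for r in range(rows):
            line = [(r, c) for c in range(cols)]
            path.extend(line if r % 2 == 0 else line[::-1])
        return path

    def broadcast(path, message):
        """Forward the root's message hop by hop along the established path."""
        received = {path[0]: message}          # path[0] plays the logical root
        for src, dst in zip(path, path[1:]):
            received[dst] = received[src]      # each node relays to its successor
        return received
    ```

    Every node receives the message in rows*cols - 1 point-to-point hops, which is why such a path suits a network optimized for point-to-point transfers.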

  8. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    Energy Technology Data Exchange (ETDEWEB)

    Yang, U M


    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.

  9. Parallel programming with Easy Java Simulations (United States)

    Esquembre, F.; Christian, W.; Belloni, M.


    Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.

  10. Parallel expression of synaptophysin and evoked neurotransmitter release during development of cultured neurons

    DEFF Research Database (Denmark)

    Ehrhart-Bornstein, M; Treiman, M; Hansen, Gert Helge


    and neurotransmitter release were measured in each of the culture types as a function of development for up to 8 days in vitro, using the same batch of cells for both sets of measurements to obtain optimal comparisons. The content and the distribution of synaptophysin in the developing cells were assessed...... by quantitative immunoblotting and light microscope immunocytochemistry, respectively. In both cell types, a close parallelism was found between the temporal pattern of development in synaptophysin expression and neurotransmitter release. This temporal pattern differed between the two types of neurons....... The cerebral cortex neurons showed a biphasic time course of increase in synaptophysin content, paralleled by a biphasic pattern of development in their ability to release [3H]GABA in response to depolarization by glutamate or elevated K+ concentrations. In contrast, a monophasic, approximately linear increase...

  11. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir


    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  12. Parallelization of the molecular dynamics code GROMOS87 for distributed memory parallel architectures

    NARCIS (Netherlands)

    Green, DG; Meacham, KE; vanHoesel, F; Hertzberger, B; Serazzi, G


    This paper describes the techniques and methodologies employed during parallelization of the Molecular Dynamics (MD) code GROMOS87, with the specific requirement that the program run efficiently on a range of distributed-memory parallel platforms. We discuss the preliminary results of our parallel

  13. 48 CFR 307.104-70 - Acquisition strategy. (United States)


    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Acquisition strategy. 307... AND ACQUISITION PLANNING ACQUISITION PLANNING Acquisition Planning 307.104-70 Acquisition strategy... designated by the HHS CIO, DASFMP, the CAO, or the cognizant HCA) shall prepare an acquisition strategy using...

  14. Temporal lobe epilepsy, depression, and hippocampal volume. (United States)

    Shamim, Sadat; Hasler, Gregor; Liew, Clarissa; Sato, Susumu; Theodore, William H


    To evaluate the relationship between hippocampal volume loss, depression, and epilepsy. There is a significantly increased incidence of depression and suicide in patients with epilepsy. Both epilepsy and depression are associated with reduced hippocampal volumes, but it is uncertain whether patients with both conditions have greater atrophy than those with epilepsy alone. Previous studies used depression measures strongly weighted to current state, and did not necessarily assess the influence of chronic major depressive disorder ("trait"), which could have a greater impact on hippocampal volume. Fifty-five epilepsy patients with complex partial seizures (CPS) confirmed by electroencephalography (EEG) had three-dimensional (3D)-spoiled gradient recall (SPGR) acquisition magnetic resonance imaging (MRI) scans for hippocampal volumetric analysis. Depression screening was performed with the Beck Depression Inventory (BDI, 51 patients) and with the structured clinical inventory for DSM-IV (SCID, 34 patients). For the BDI, a score above 10 was considered mild to moderate, above 20 moderate to severe, and above 30 severe depression. MRI and clinical analysis were performed blinded to other data. Statistical analysis was performed with Systat using Student's t test and analysis of variance (ANOVA). There was a significant interaction between depression detected on SCID, side of focus, and left hippocampal volume. Patients with a diagnosis of depression and a right temporal seizure focus had significantly lower left hippocampal volume. A similar trend for an effect of depression on right hippocampal volume in patients with a right temporal focus did not reach statistical significance. Our results suggest that patients with right temporal lobe epilepsy and depression have hippocampal atrophy that cannot be explained by epilepsy alone.

  15. Discovering metric temporal constraint networks on temporal databases. (United States)

    Álvarez, Miguel R; Félix, Paulo; Cariñena, Purificación


    Health organisations demand, for multiple areas of activity, new computational tools that will obtain new knowledge from huge collections of data. Temporal data mining has arisen as an active research field that provides new algorithms for discovering new temporal knowledge. An important point in defining different proposals is the expressiveness of the resulting temporal knowledge, which is commonly found in the bibliography in a qualitative form. In this paper, we propose the ASTPminer algorithm for mining collections of time-stamped sequences to discover frequent temporal patterns, as represented in the simple temporal problem (STP) formalism: a representation of temporal knowledge as a set of event types and a set of metric temporal constraints among them. To focus the mining process, some initial knowledge can be provided by the user, also expressed as an STP, that acts as a seed pattern for the searching procedure. In this manner, the mining algorithm will search for those frequent temporal patterns consistent with the initial knowledge. ASTPminer develops an Apriori-like strategy in an iterative algorithm where, as a result of each iteration i, a set of frequent temporal patterns of size i is found. It incorporates three distinctive mechanisms: (1) use of a clustering procedure over distributions of temporal distances between events to recognise similar occurrences as temporal patterns; (2) consistency checking of every combination of temporal patterns, which ensures the soundness of the resultant patterns; and (3) use of seed patterns to allow the user to drive the mining process. To validate our proposal, several experiments were conducted over a database of time-stamped sequences obtained from polysomnography tests in patients with sleep apnea-hypopnea syndrome. ASTPminer was able to extract well-known temporal patterns corresponding to different manifestations of the syndrome. Furthermore, the use of seed patterns resulted in a reduction in the size of...
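    The Apriori-style counting step such miners build on can be sketched as follows. This is an illustrative reconstruction, not ASTPminer's actual code; the names (`frequent_temporal_pairs`, `max_gap`, `min_support`) are hypothetical. It finds pairs of event types that co-occur within a metric temporal bound in enough sequences, collecting the observed gaps that a full miner would then cluster into metric temporal constraints:

```python
from collections import defaultdict
from itertools import combinations

def frequent_temporal_pairs(sequences, max_gap, min_support):
    """Count pairs of event types that co-occur within max_gap time units.

    sequences: list of sequences, each a time-sorted list of
    (timestamp, event_type) tuples. Returns, for every pair whose support
    (number of supporting sequences) reaches min_support, the support count
    and the observed temporal distances.
    """
    support = defaultdict(set)   # (type_a, type_b) -> ids of supporting sequences
    gaps = defaultdict(list)     # (type_a, type_b) -> observed temporal distances
    for sid, seq in enumerate(sequences):
        for (t1, e1), (t2, e2) in combinations(seq, 2):
            if 0 < t2 - t1 <= max_gap:
                support[(e1, e2)].add(sid)
                gaps[(e1, e2)].append(t2 - t1)
    return {pair: (len(ids), gaps[pair])
            for pair, ids in support.items() if len(ids) >= min_support}
```

A full STP miner would extend such frequent pairs iteratively to larger patterns, checking the consistency of the combined constraints at each step.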

  16. Mergers and acquisitions processes in Ukraine




    The paper deals with the characteristics of the mergers and acquisitions market in Ukraine. The study describes the main directions and motives of integration processes at domestic enterprises. Efficient ways are suggested to improve mergers and acquisitions processes in Ukraine.

  17. Technological Integration of Acquisitions in Digital Industries

    DEFF Research Database (Denmark)

    Henningsson, Stefan; Toppenberg, Gustav


    Acquisitions have become essential tools to retain the technological edge in digital industries. This paper analyses the technological integration challenges in such acquisitions. Acquirers in digital industries are typically platform leaders in platform markets. They acquire (a) other platform...

  18. Acquisition management of the Global Transportation Network (United States)


    This report discusses the acquisition management of the Global Transportation Network by the U.S. Transportation Command. This report is one in a series of audit reports addressing DoD acquisition management of information technology systems. The Glo...

  19. Efficient multi-objective calibration of a computationally intensive hydrologic model with parallel computing software in Python (United States)

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...
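    A minimal sketch of such parallel multi-objective evaluation in Python, using the standard library's `multiprocessing` pool; the model, the objectives, and all names here are placeholders, not the actual calibration software:

```python
from multiprocessing import Pool

def evaluate(params):
    """Placeholder for one watershed-model run: returns two objective
    values (e.g. volume error and peak-flow error) for a parameter set."""
    a, b = params
    return ((a - 0.3) ** 2, (b - 0.7) ** 2)

def pareto_front(scored):
    """Keep the parameter sets whose objective pairs are non-dominated."""
    front = []
    for i, (params, s) in enumerate(scored):
        dominated = any(o[0] <= s[0] and o[1] <= s[1] and o != s
                        for j, (_, o) in enumerate(scored) if j != i)
        if not dominated:
            front.append(params)
    return front

def calibrate(candidates, workers=4):
    """Score candidate parameter sets in parallel: each model run is
    independent, so the runs parallelize trivially across processes."""
    with Pool(workers) as pool:
        objectives = pool.map(evaluate, candidates)
    return pareto_front(list(zip(candidates, objectives)))
```

In a real calibration, `evaluate` would launch a full distributed-model simulation, which is exactly why spreading the runs across processes pays off.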

  20. Optimal task mapping in safety-critical real-time parallel systems; Placement optimal de taches pour les systemes paralleles temps-reel critiques

    Energy Technology Data Exchange (ETDEWEB)

    Aussagues, Ch


    This PhD thesis deals with the correct design of safety-critical real-time parallel systems. Such systems constitute a fundamental part of high-performance command-and-control systems, found in the nuclear domain and more generally in parallel embedded systems. The verification of their temporal correctness is the core of this thesis. Our contribution lies mainly in the following three points: the analysis and extension of a programming model for such real-time parallel systems; the proposal of an original method based on a new operator of synchronized product of state-machine task graphs; and the validation of the approach by its implementation and evaluation. The work particularly addresses the central problem of optimal task mapping onto a parallel architecture, such that the temporal constraints are globally guaranteed, i.e. the timeliness property holds. The results also incorporate optimality criteria for the sizing and correct dimensioning of a parallel system, for instance in the number of processing elements. These criteria are connected with operational constraints of the application domain. Our approach is based on the off-line analysis of the feasibility of the deadline-driven dynamic scheduling used to schedule tasks inside one processor. From the synchronized product, a system of linear constraints is automatically generated, which allows the maximum load of a group of tasks to be calculated and their timeliness constraints to be verified. The communications, their timeliness verification, and their incorporation into the mapping problem form the second main contribution of this thesis. Finally, the global solving technique dealing with both task and communication aspects has been implemented and evaluated in the framework of the OASIS project at the LETI research center at CEA/Saclay. (author) 96 refs.
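    The off-line feasibility analysis of deadline-driven (earliest-deadline-first) scheduling on a single processor rests, in the simplest case, on the classic utilization bound. The sketch below illustrates that standard test and how it feeds a mapping decision; it is not the OASIS toolchain, and the helper names are hypothetical:

```python
def edf_feasible(tasks):
    """Classic single-processor feasibility test for earliest-deadline-first
    scheduling of periodic tasks whose deadlines equal their periods:
    the task set is schedulable iff total utilization sum(C_i / T_i) <= 1.

    tasks: list of (wcet, period) pairs.
    """
    return sum(c / t for c, t in tasks) <= 1.0

def fits_on_processor(existing, new_task):
    """Off-line mapping check: can new_task join a processor's current
    task group without violating the group's timeliness?"""
    return edf_feasible(existing + [new_task])
```

An optimal mapper would use such a check inside its search, rejecting any placement whose processor-local task group fails the feasibility test.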

  1. A Reconfigurable FPGA System for Parallel Independent Component Analysis

    Directory of Open Access Journals (Sweden)

    Du Hongtao


    Full Text Available A run-time reconfigurable field programmable gate array (FPGA) system is presented for the implementation of the parallel independent component analysis (ICA) algorithm. In this work, we investigate design challenges caused by the capacity constraints of a single FPGA. Using the reconfigurability of the FPGA, we show how to manipulate the FPGA-based system and execute processes for the parallel ICA (pICA) algorithm. During the implementation procedure, pICA is first partitioned into three temporally independent function blocks, each of which is synthesized by using several ICA-related reconfigurable components (RCs) that are developed for reuse and retargeting purposes. All blocks are then integrated into a design and development environment for performing tasks such as FPGA optimization, placement, and routing. With partitioning and reconfiguration, the proposed reconfigurable FPGA system overcomes the capacity constraints for the pICA implementation on embedded systems. We demonstrate the effectiveness of this implementation on real images with large throughput for dimensionality reduction in hyperspectral image (HSI) analysis.

  2. A Reconfigurable FPGA System for Parallel Independent Component Analysis

    Directory of Open Access Journals (Sweden)

    Hairong Qi


    Full Text Available A run-time reconfigurable field programmable gate array (FPGA) system is presented for the implementation of the parallel independent component analysis (ICA) algorithm. In this work, we investigate design challenges caused by the capacity constraints of a single FPGA. Using the reconfigurability of the FPGA, we show how to manipulate the FPGA-based system and execute processes for the parallel ICA (pICA) algorithm. During the implementation procedure, pICA is first partitioned into three temporally independent function blocks, each of which is synthesized by using several ICA-related reconfigurable components (RCs) that are developed for reuse and retargeting purposes. All blocks are then integrated into a design and development environment for performing tasks such as FPGA optimization, placement, and routing. With partitioning and reconfiguration, the proposed reconfigurable FPGA system overcomes the capacity constraints for the pICA implementation on embedded systems. We demonstrate the effectiveness of this implementation on real images with large throughput for dimensionality reduction in hyperspectral image (HSI) analysis.

  3. Fluoxetine Restores Spatial Learning but Not Accelerated Forgetting in Mesial Temporal Lobe Epilepsy (United States)

    Barkas, Lisa; Redhead, Edward; Taylor, Matthew; Shtaya, Anan; Hamilton, Derek A.; Gray, William P.


    Learning and memory dysfunction is the most common neuropsychological effect of mesial temporal lobe epilepsy, and because the underlying neurobiology is poorly understood, there are no pharmacological strategies to help restore memory function in these patients. We have demonstrated impairments in the acquisition of an allocentric spatial task,…

  4. Parallel optical control of spatiotemporal neuronal spike activity using high-frequency digital light processing technology

    Directory of Open Access Journals (Sweden)

    Jason Jerome


    Full Text Available Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited due to the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses Digital Light Processing (DLP) technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 µm) and temporal (>13 kHz) resolution. Light is projected through the quartz-glass bottom of the perfusion chamber, providing access to a large area (2.76 × 2.07 mm²) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales.

  5. Parallel optical control of spatiotemporal neuronal spike activity using high-speed digital light processing. (United States)

    Jerome, Jason; Foehring, Robert C; Armstrong, William E; Spain, William J; Heck, Detlef H


    Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited due to the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses digital light processing technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 μm) and temporal (>13 kHz) resolution. Light is projected through the quartz-glass bottom of the perfusion chamber providing access to a large area (2.76 mm × 2.07 mm) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales.

  6. Applications of Parallel Processing in Configuration Analyses (United States)

    Sundaram, Pichuraman; Hager, James O.; Biedron, Robert T.


    The paper presents the recent progress made towards developing an efficient and user-friendly parallel environment for routine analysis of large CFD problems. The coarse-grain parallel version of the CFL3D Euler/Navier-Stokes analysis code, CFL3Dhp, has been ported onto most available parallel platforms. The CFL3Dhp solution accuracy on these parallel platforms has been verified against the CFL3D sequential analyses. User-friendly pre- and post-processing tools that enable a seamless transfer from sequential to parallel processing have been written. A static load balancing tool for CFL3Dhp analysis has also been implemented for achieving good parallel efficiency. For large problems, load balancing efficiency as high as 95% can be achieved even when a large number of processors is used. Linear scalability of the CFL3Dhp code with increasing number of processors has also been shown using a large installed transonic nozzle boattail analysis. To highlight the fast turn-around time of parallel processing, the Navier-Stokes drag polar of the TCA full configuration in sideslip at supersonic cruise has been obtained in a day. CFL3Dhp is currently being used as a production analysis tool.
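    Static load balancing of the kind described can be sketched as greedy longest-processing-time-first assignment of grid blocks to processors. This is an illustrative analogue, not the actual CFL3Dhp tool, and all names are hypothetical:

```python
import heapq

def balance_blocks(block_sizes, n_procs):
    """Greedy longest-processing-time-first assignment: repeatedly hand the
    largest remaining grid block to the least-loaded processor. Returns
    (assignment, efficiency), where efficiency = average load / maximum load,
    so 1.0 means a perfectly even split."""
    loads = [(0, p) for p in range(n_procs)]       # min-heap of (load, proc id)
    heapq.heapify(loads)
    assignment = {p: [] for p in range(n_procs)}
    for size in sorted(block_sizes, reverse=True):
        load, p = heapq.heappop(loads)             # least-loaded processor
        assignment[p].append(size)
        heapq.heappush(loads, (load + size, p))
    per_proc = [sum(blocks) for blocks in assignment.values()]
    efficiency = (sum(per_proc) / len(per_proc)) / max(per_proc)
    return assignment, efficiency
```

Because the balancing is static, this assignment is computed once before the solver starts; the efficiency figure corresponds to the load-balance metric quoted in the abstract.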

  7. Code Parallelization with CAPO: A User Manual (United States)

    Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)


    A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools, developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. It is an interactive toolkit that transforms a serial Fortran application code into an equivalent parallel version of the software in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using in-depth interprocedural analysis. The use of the toolkit on a number of application codes, ranging from benchmarks to real-world applications, is then presented. This demonstrates the great potential of using the toolkit to quickly parallelize serial programs as well as the good performance achievable on a large number of processors. The second part of the document gives references for the parameters and the graphical user interface implemented in the toolkit. Finally, a set of tutorials is included for hands-on experience with this toolkit.

  8. Parallel contributions of distinct human memory systems during probabilistic learning. (United States)

    Dickerson, Kathryn C; Li, Jian; Delgado, Mauricio R


    Regions within the medial temporal lobe and basal ganglia are thought to subserve distinct memory systems underlying declarative and nondeclarative processes, respectively. One question of interest is how these multiple memory systems interact during learning to contribute to goal-directed behavior. While some hypotheses suggest that regions such as the striatum and the hippocampus interact in a competitive manner, alternative views posit that these structures may operate in a parallel manner to facilitate learning. In the current experiment, we probed the functional connectivity between regions in the striatum and hippocampus in the human brain during an event-related probabilistic learning task that varied with respect to type of difficulty (easy or hard cues) and type of learning (via feedback or observation). We hypothesized that the hippocampus and striatum would interact in a parallel manner during learning. We identified regions of interest (ROI) in the striatum and hippocampus that showed an effect of cue difficulty during learning and found that such ROIs displayed a similar pattern of blood oxygen level dependent (BOLD) responses, irrespective of learning type, and were functionally correlated as assessed by a Granger causality analysis. Given the connectivity of both structures with dopaminergic midbrain centers, we further applied a reinforcement learning algorithm often used to highlight the role of dopamine in human reward-related learning paradigms. Activity in both the striatum and hippocampus positively correlated with a prediction error signal during feedback learning. These results suggest that distinct human memory systems operate in parallel during probabilistic learning, and may act synergistically, particularly when a violation of expectation occurs, to jointly contribute to learning and decision making. Copyright © 2010 Elsevier Inc. All rights reserved.
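    Reinforcement learning models of the kind applied here typically reduce to a delta-rule update, in which a prediction error drives learning. The sketch below is a generic illustration of that signal, not the authors' exact model; the function name and parameters are hypothetical:

```python
def prediction_error_learning(outcomes, alpha=0.1, v0=0.5):
    """Delta-rule value learning: on each trial the expected value V is
    nudged toward the observed outcome by alpha times the prediction error
    delta = outcome - V. The trial-by-trial prediction errors are the
    quantity typically regressed against striatal/hippocampal BOLD."""
    v = v0
    deltas = []
    for r in outcomes:
        delta = r - v          # violation of expectation on this trial
        deltas.append(delta)
        v += alpha * delta     # learning-rate-weighted update
    return deltas

# Over repeated rewarded trials, prediction errors shrink as V approaches 1.
errors = prediction_error_learning([1] * 50)
```

The point relevant to the abstract is that the error is largest when expectation is most strongly violated, which is when both structures are proposed to respond jointly.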

  9. Acquisition of adverbs and pronominal adverbs in a Czech child

    Directory of Open Access Journals (Sweden)

    Chejnová Pavla


    Full Text Available This article addresses the acquisition of adverbs in a Czech monolingual boy from speech onset until age 6. This study is based on an analysis of authentic material: a corpus of transcriptions of audio recordings of the child in verbal interaction with adults, and diary records. The acquisition sequence and adverbial type frequency were analysed. The boy acquired adverbs of place first; although direction was acquired before location, location became more frequent. Adverbs relating to time were developed second, and location in time and repetition were the most frequent temporal aspects. Adverbs of degree followed; adverbs related to high degree were more frequently expressed than adverbs related to low degree. Adverbs of manner appeared last, in which adverbs related to a broader evaluative sense prevailed. Adverbs related to cause were manifested mostly by the interrogative proč / why. Interrelation between the acquisition of nominal and verbal categories was found: when the child acquired the category of nominal case and was able to use the construction of preposition + noun, the percentage of adverbs of place decreased. When he acquired the verbal aspect, the percentage of adverbs related to time decreased.

  10. COLDEX New Data Acquisition Framework

    CERN Document Server

    Grech, Christian


    COLDEX (COLD bore EXperiment) is an experiment of the TE-VSC group installed in the Super Proton Synchrotron (SPS) which mimics an LHC-type cryogenic vacuum system. In the framework of the High Luminosity upgrade of the LHC (HL-LHC project), COLDEX was recommissioned in 2014 in order to validate carbon coating performance at cryogenic temperature with LHC-type beams. To achieve this mission, a data acquisition system is needed to retrieve and store information from the different experiment’s systems (vacuum, cryogenics, controls, safety) and perform specific calculations. This work aimed to completely redesign, implement, test and operate a brand-new data acquisition framework based on communication with the experiment’s PLCs for the devices available over the network. The communication protocol to the PLCs is based on data retrieval both from CERN middleware infrastructures (CMW, JAPC) and on a novel open source Simatic S7 data exchange package over TCP/IP (libnodave).

  11. Music and early language acquisition. (United States)

    Brandt, Anthony; Gebrian, Molly; Slevc, L Robert


    Language is typically viewed as fundamental to human intelligence. Music, while recognized as a human universal, is often treated as an ancillary ability - one dependent on or derivative of language. In contrast, we argue that it is more productive from a developmental perspective to describe spoken language as a special type of music. A review of existing studies presents a compelling case that musical hearing and ability is essential to language acquisition. In addition, we challenge the prevailing view that music cognition matures more slowly than language and is more difficult; instead, we argue that music learning matches the speed and effort of language acquisition. We conclude that music merits a central place in our understanding of human development.

  12. Music and Early Language Acquisition

    Directory of Open Access Journals (Sweden)

    Anthony K. Brandt


    Full Text Available Language is typically viewed as fundamental to human intelligence. Music, while recognized as a human universal, is often treated as an ancillary ability—one dependent on or derivative of language. In contrast, we argue that it is more productive from a developmental perspective to describe spoken language as a special type of music. A review of existing studies presents a compelling case that musical hearing and ability is essential to language acquisition. In addition, the authors challenge the prevailing view that music cognition matures more slowly than language and is more difficult; instead, the authors present evidence that music learning matches the speed and effort of language acquisition. We conclude that music merits a central place in our understanding of human development.

  13. The ALICE data acquisition system

    CERN Document Server

    Carena, F; Chapeland, S; Chibante Barroso, V; Costa, F; Dénes, E; Divià, R; Fuchs, U; Grigore, A; Kiss, T; Simonetti, G; Soós, C; Telesca, A; Vande Vyvre, P; Von Haller, B


    In this paper we describe the design, the construction, the commissioning and the operation of the Data Acquisition (DAQ) and Experiment Control Systems (ECS) of the ALICE experiment at the CERN Large Hadron Collider (LHC). The DAQ and the ECS are the systems used respectively for the acquisition of all physics data and for the overall control of the experiment. They are two computing systems made of hundreds of PCs and data storage units interconnected via two networks. The collection of experimental data from the detectors is performed by several hundred high-speed optical links. We describe in detail the design considerations for these systems handling the extreme data throughput resulting from central lead-ion collisions at LHC energy. The implementation of the resulting requirements into hardware (custom optical links and commercial computing equipment), infrastructure (racks, cooling, power distribution, control room), and software led to many innovative solutions which are described together with ...

  14. Music and Early Language Acquisition (United States)

    Brandt, Anthony; Gebrian, Molly; Slevc, L. Robert


    Language is typically viewed as fundamental to human intelligence. Music, while recognized as a human universal, is often treated as an ancillary ability – one dependent on or derivative of language. In contrast, we argue that it is more productive from a developmental perspective to describe spoken language as a special type of music. A review of existing studies presents a compelling case that musical hearing and ability is essential to language acquisition. In addition, we challenge the prevailing view that music cognition matures more slowly than language and is more difficult; instead, we argue that music learning matches the speed and effort of language acquisition. We conclude that music merits a central place in our understanding of human development. PMID:22973254

  15. GRAVITY acquisition camera: characterization results (United States)

    Anugu, Narsireddy; Garcia, Paulo; Amorim, Antonio; Wiezorrek, Erich; Wieprecht, Ekkehard; Eisenhauer, Frank; Ott, Thomas; Pfuhl, Oliver; Gordo, Paulo; Perrin, Guy; Brandner, Wolfgang; Straubmeier, Christian; Perraut, Karine


    The GRAVITY acquisition camera implements four optical functions to track multiple beams of the Very Large Telescope Interferometer (VLTI): a) pupil tracker: a 2×2 lenslet images four pupil reference lasers mounted on the spiders of the telescope's secondary mirror; b) field tracker: images the science object; c) pupil imager: reimages the telescope pupil; d) aberration tracker: images a Shack-Hartmann sensor. The estimation of beam stabilization parameters from the acquisition camera detector image is carried out, every 0.7 s, with dedicated data reduction software. The measured parameters are used in: a) alignment of GRAVITY with the VLTI; b) active pupil and field stabilization; c) defocus correction and engineering purposes. The instrument is now successfully operational on-sky in closed loop. The relevant data reduction and on-sky characterization results are reported.

  16. Temporal Proximity Promotes Integration of Overlapping Events. (United States)

    Zeithamova, Dagmar; Preston, Alison R


    Events with overlapping elements can be encoded as two separate representations or linked into an integrated representation, yet we know little about the conditions that promote one form of representation over the other. Here, we tested the hypothesis that the proximity of overlapping events would increase the probability of integration. Participants first established memories for house-object and face-object pairs; half of the pairs were learned 24 hr before an fMRI session, and the other half 30 min before the session. During scanning, participants encoded object-object pairs that overlapped with the initial pairs acquired on the same or prior day. Participants were also scanned as they made inference judgments about the relationships among overlapping pairs learned on the same or different day. Participants were more accurate and faster when inferring relationships among memories learned on the same day relative to those acquired across days, suggesting that temporal proximity promotes integration. Evidence for reactivation of existing memories, as measured by a visual content classifier, was equivalent during encoding of overlapping pairs from the two temporal conditions. In contrast, evidence for integration, as measured by a mnemonic strategy classifier from an independent study [Richter, F. R., Chanales, A. J. H., & Kuhl, B. A. Predicting the integration of overlapping memories by decoding mnemonic processing states during learning. Neuroimage, 124, 323-335, 2016], was greater for same-day overlapping events, paralleling the behavioral results. During inference itself, activation patterns further differentiated when participants were making inferences about events acquired on the same day versus across days. These findings indicate that temporal proximity of events promotes integration and further influences the neural mechanisms engaged during inference.

  17. Utilizing Information Technology to Facilitate Rapid Acquisition (United States)


    This thesis examines the use of electronic ordering systems to facilitate streamlined commercial item acquisitions that reap the benefits of improved efficiency, reduced overall costs, and timeliness. Subject terms: Rapid Acquisition, eCommerce, eProcurement, Information Technology, Contracting, Global Information Network...

  18. 32 CFR 644.7 - Acquisition lines. (United States)


    ... HANDBOOK Project Planning Civil Works § 644.7 Acquisition lines. (a) Tentative acquisition lines. As..., in accordance with sound real estate practices. Accordingly, fringe tracts will not be acquired until the final acquisition lines are approved by the Division Engineer. (b) Submission. As soon as possible...

  19. Sustaining an Acquisition-based Growth Strategy

    DEFF Research Database (Denmark)

    Henningsson, Stefan; Toppenberg, Gustav; Shanks, Graeme

    Value creating acquisitions are a major challenge for many firms. Our case study of Cisco Systems shows that an advanced Enterprise Architecture (EA) capability can contribute to the acquisition process through a) preparing the acquirer to become ‘acquisition ready’, b) identifying resource......-based growth strategy over time....

  20. The acquisition of speech and language. (United States)

    Woodfield, T A


    There are many theories behind speech and language acquisition. The role of parents in social interaction with their infant to facilitate speech and language acquisition is of paramount importance. Several pathological influences may hinder speech and language acquisition. Children's nurses need knowledge and understanding of how speech and language are acquired.