WorldWideScience

Sample records for sequential parallel comparison

  1. From sequential to parallel programming with patterns

    CERN Document Server

    CERN. Geneva

    2018-01-01

    To increase both performance and efficiency, our programming models need to adapt to better exploit modern processors. The classic idioms and patterns for programming, such as loops, branches or recursion, are the pillars of almost every code and are well known among all programmers. These patterns all have in common that they are sequential in nature. Embracing parallel programming patterns, which allow us to program for multi- and many-core hardware in a natural way, greatly simplifies the task of designing a program that scales and performs on modern hardware, independently of the programming language used, and in a generic way.
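
    As a minimal illustration of the shift described here (not taken from the talk itself), the same per-item computation can be written as a classic sequential loop or as a parallel map; the work function and sizes below are arbitrary stand-ins (Python):

      from concurrent.futures import ProcessPoolExecutor

      def work(x):
          # Stand-in for any pure, per-item computation.
          return x * x

      if __name__ == "__main__":
          items = list(range(100_000))
          # Classic sequential idiom: an explicit loop.
          sequential = [work(x) for x in items]
          # Parallel pattern: the same loop expressed as a map over
          # independent items; the runtime distributes chunks across cores.
          with ProcessPoolExecutor() as pool:
              parallel = list(pool.map(work, items, chunksize=1000))
          assert parallel == sequential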

  2. A sequential/parallel track selector

    CERN Document Server

    Bertolino, F; Bressani, Tullio; Chiavassa, E; Costa, S; Dellacasa, G; Gallio, M; Musso, A

    1980-01-01

    A medium-speed (approximately 1 μs) hardware pre-analyzer for the selection of events detected in four planes of drift chambers in the magnetic field of the Omicron Spectrometer at the CERN SC is described. Specific geometrical criteria determine patterns of hits in the four planes of vertical wires that have to be recognized and that are stored as patterns of '1's in random access memories. Pairs of good hits are found sequentially, then the RAMs are used as look-up tables. (6 refs).
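
    A rough software analogue of the RAM look-up idea, with all geometry and criteria invented toy values rather than the Omicron ones: accepted hit patterns are written once as '1's, and each event then costs a single indexed read.

      # Toy software analogue of the hardware RAMs (invented geometry).
      N_WIRES = 8                           # wires per plane (assumption)

      def encode(w1, w2, w3, w4):
          # Pack one hit wire number per plane into a single table index.
          return ((w1 * N_WIRES + w2) * N_WIRES + w3) * N_WIRES + w4

      ram = bytearray(N_WIRES ** 4)         # the "RAM" of pattern bits
      for w1 in range(N_WIRES):
          for w2 in range(N_WIRES):
              for w3 in range(N_WIRES):
                  for w4 in range(N_WIRES):
                      d = w2 - w1           # toy straight-track criterion
                      if abs(w3 - w2 - d) <= 1 and abs(w4 - w3 - d) <= 1:
                          ram[encode(w1, w2, w3, w4)] = 1

      def select(hits):
          # One look-up decides acceptance, as with the hardware tables.
          return ram[encode(*hits)] == 1

      print(select((2, 3, 4, 5)), select((0, 7, 0, 7)))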

  3. A solution for automatic parallelization of sequential assembly code

    Directory of Open Access Journals (Sweden)

    Kovačević Đorđe

    2013-01-01

    Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for parallelization of sequential assembly code. The main goal of this paper is to develop a parallelizer that reads sequential assembler code and outputs parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembler input file into program objects suitable for further processing. After that, static single assignment form is constructed. Based on the data-flow graph, the parallelization algorithm distributes instructions across the cores. Once the sequential code has been parallelized, registers are allocated with a linear allocation algorithm, and the end result is distributed assembler code for each of the cores. In the paper we evaluate the speedup on a matrix multiplication example processed by the parallelizer. The result is an almost linear speedup of code execution, which increases with the number of cores: the speedup on two cores is 1.99, and on 16 cores it is 13.88.
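
    A hedged sketch of the scheduling step: once instructions form a data-flow graph, a greedy list scheduler can distribute ready instructions across cores. This is a simplified stand-in for the paper's algorithm; the unit latency and the toy graph are assumptions.

      # Greedy list scheduling over a data-flow graph (simplified stand-in).
      def schedule(deps, n_cores):
          """deps: instruction -> set of instructions it reads from."""
          done, core_time = set(), [0] * n_cores
          finish, placement = {}, {}
          remaining = dict(deps)
          while remaining:
              # Instructions whose operands are all computed are ready.
              ready = [i for i, d in remaining.items() if d <= done]
              for instr in ready:
                  core = min(range(n_cores), key=core_time.__getitem__)
                  start = max([core_time[core]] + [finish[d] for d in deps[instr]])
                  finish[instr] = start + 1        # unit latency (assumption)
                  core_time[core] = finish[instr]
                  placement[instr] = core
                  done.add(instr)
                  del remaining[instr]
          return placement

      # Toy graph: i3/i4 depend on i1/i2 and can run on different cores.
      g = {"i1": set(), "i2": set(), "i3": {"i1"}, "i4": {"i2"}, "i5": {"i3", "i4"}}
      print(schedule(g, n_cores=2))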

  4. Research on parallel algorithm for sequential pattern mining

    Science.gov (United States)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences, related to time or other orders, from a sequence database. Its initial motivation was to discover regularities in customer purchasing over a time window by finding frequent sequences. In recent years, sequential pattern mining has become an important direction in data mining, and its application field is no longer confined to business databases: it has extended to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data in sequential pattern mining are characterized by massive volume and distributed storage, and most existing sequential pattern mining algorithms have not considered these characteristics jointly. Taking these traits into account and drawing on parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and uses a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying the frequency concept and search-space partitioning; the second is to build frequent sequences with depth-first search at each processor. The algorithm needs to access the database only twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and several designed data structures, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm achieves excellent speedup and efficiency.
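
    To make the divide-and-conquer idea concrete, here is a toy sketch (not the SPP implementation): frequent single items partition the search space, and each worker grows its own prefixes depth-first. The database, support threshold and alphabet are invented.

      # Toy sketch: partitioned, depth-first sequential pattern mining.
      from concurrent.futures import ProcessPoolExecutor

      DB = [list("abcb"), list("abc"), list("acb"), list("bca")]
      MINSUP, ITEMS = 3, "abc"

      def support(pattern):
          def contains(seq):
              it = iter(seq)                 # items must occur in order
              return all(x in it for x in pattern)
          return sum(contains(s) for s in DB)

      def dfs(prefix):
          # Depth-first extension of one prefix at one processor.
          out = []
          for x in ITEMS:
              cand = prefix + [x]
              if support(cand) >= MINSUP:
                  out.append(cand)
                  out.extend(dfs(cand))
          return out

      def mine(item):
          # One partition of the search space: patterns starting with 'item'.
          return [[item]] + dfs([item]) if support([item]) >= MINSUP else []

      if __name__ == "__main__":
          with ProcessPoolExecutor() as pool:
              for patterns in pool.map(mine, ITEMS):
                  print(patterns)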

  5. A path-level exact parallelization strategy for sequential simulation

    Science.gov (United States)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or for classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize the SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.
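
    The path-level idea can be sketched as level scheduling: keep the original random path, but bump each node into the first batch after all of its conflicting predecessors, so that each batch holds mutually independent nodes and can be simulated in parallel while reproducing the serial realization. A 1-D grid with a fixed neighbourhood radius is assumed here purely for illustration.

      # Level scheduling of a fixed random path (1-D grid, toy radius).
      import random

      N, RADIUS = 20, 2
      path = random.Random(7).sample(range(N), N)    # the simulation path

      def conflicts(a, b):
          return abs(a - b) <= RADIUS                # overlapping neighbourhoods

      placed, batches = {}, []
      for node in path:                              # stage 1: re-arrange the path
          b = 1 + max((placed[p] for p in placed if conflicts(node, p)),
                      default=-1)
          if b == len(batches):
              batches.append([])
          batches[b].append(node)
          placed[node] = b

      # Stage 2: nodes inside one batch are mutually independent, so each
      # batch can be simulated in parallel, batch by batch, and still
      # reproduce the exact serial realization.
      for i, batch in enumerate(batches):
          print(f"level {i}: {sorted(batch)}")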

  6. Sequential and parallel image restoration: neural network implementations.

    Science.gov (United States)

    Figueiredo, M T; Leitao, J N

    1994-01-01

    Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high-dimensional convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm based on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
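
    The shared objective can be written as J(x) = ||y - Hx||² + λ||Dx||². The sketch below minimizes it by plain gradient descent on toy 1-D data; the paper instead realizes such iterations as modified Hopfield networks, and H, D and all sizes here are illustrative assumptions.

      # Gradient descent on the convex MAP/regularization objective
      #   J(x) = ||y - Hx||^2 + lam * ||Dx||^2   (toy 1-D "image").
      import numpy as np

      rng = np.random.default_rng(0)
      n = 64
      H = 0.8 * np.eye(n) + 0.2 * np.eye(n, k=1)     # toy linear blur
      D = np.eye(n) - np.eye(n, k=1)                 # finite differences
      x_true = np.cumsum(rng.standard_normal(n))     # smooth-ish signal
      y = H @ x_true + 0.1 * rng.standard_normal(n)  # blurred + Gaussian noise

      lam, step, x = 0.5, 0.05, np.zeros(n)
      for _ in range(500):
          grad = 2 * H.T @ (H @ x - y) + 2 * lam * D.T @ (D @ x)
          x -= step * grad                           # fully parallel update
      print("data residual:", float(np.linalg.norm(H @ x - y)))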

  7. Efficient sequential and parallel algorithms for record linkage.

    Science.gov (United States)

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either long running times or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and to find its connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and on synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)), while achieving the same accuracy. The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes).
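
    A toy sketch of the pipeline shape (not the authors' code): deduplicate exact copies first, then link records within a small edit distance and report connected components with union-find. The records and the threshold are invented.

      # Toy record-linkage pipeline: dedup, link, connected components.
      records = ["john smith 1970", "jon smith 1970", "john smith 1970",
                 "mary jones 1980", "mary jone 1980", "pete low 1990"]
      uniq = sorted(set(records))            # stands in for radix-sort dedup

      def edit(a, b):
          # Classic dynamic-programming edit distance.
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1, cur[-1] + 1,
                                 prev[j - 1] + (ca != cb)))
              prev = cur
          return prev[-1]

      parent = list(range(len(uniq)))
      def find(i):
          while parent[i] != i:
              parent[i] = parent[parent[i]]  # path halving
              i = parent[i]
          return i

      for i in range(len(uniq)):             # link similar records...
          for j in range(i + 1, len(uniq)):
              if edit(uniq[i], uniq[j]) <= 2:
                  parent[find(i)] = find(j)  # ...and merge their clusters

      clusters = {}
      for i, rec in enumerate(uniq):
          clusters.setdefault(find(i), []).append(rec)
      print(list(clusters.values()))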

  8. Efficient sequential and parallel algorithms for planted motif search.

    Science.gov (United States)

    Nicolae, Marius; Rajasekaran, Sanguthevar

    2014-01-31

    Motif searching is an important step in the detection of rare events occurring in a set of DNA or protein sequences. One formulation of the problem is known as (l,d)-motif search or Planted Motif Search (PMS). In PMS we are given two integers l and d and n biological sequences. We want to find all sequences of length l that appear in each of the input sequences with at most d mismatches. The PMS problem is NP-complete. PMS algorithms are typically evaluated on certain instances considered challenging. Despite ample research in the area, a considerable performance gap exists because many state-of-the-art algorithms have large runtimes even for moderately challenging instances. This paper presents a fast exact parallel PMS algorithm called PMS8. PMS8 is the first algorithm to solve the challenging (l,d) instances (25,10) and (26,11). PMS8 is also efficient on instances with larger l and d such as (50,21). We include a comparison of PMS8 with several state-of-the-art algorithms on multiple problem instances. This paper also presents necessary and sufficient conditions for 3 l-mers to have a common d-neighbor. The program is freely available at http://engr.uconn.edu/~man09004/PMS8/. In summary, we present PMS8, an efficient exact algorithm for Planted Motif Search that introduces novel ideas for generating common neighborhoods; we have also implemented a parallel version of the algorithm, and PMS8 can solve instances not solved by any previous algorithm.
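
    For reference, the (l,d) problem itself is easy to state in code. The brute-force enumeration below pins down the definition that PMS8 accelerates: it explores the d-neighborhood of every l-mer of the first sequence and is exponentially slower than PMS8 (toy inputs).

      # Brute-force reference for the (l,d) Planted Motif Search definition.
      from itertools import combinations, product

      def neighbours(lmer, d):
          # All strings within Hamming distance <= d of lmer.
          for pos in combinations(range(len(lmer)), d):
              for subs in product("ACGT", repeat=d):
                  cand = list(lmer)
                  for p, c in zip(pos, subs):
                      cand[p] = c
                  yield "".join(cand)

      def occurs(motif, seq, d):
          l = len(motif)
          return any(sum(a != b for a, b in zip(motif, seq[i:i + l])) <= d
                     for i in range(len(seq) - l + 1))

      def pms(seqs, l, d):
          found, first = set(), seqs[0]
          for i in range(len(first) - l + 1):
              for cand in neighbours(first[i:i + l], d):
                  if all(occurs(cand, s, d) for s in seqs[1:]):
                      found.add(cand)
          return found

      print(pms(["ACGTTGCA", "CCGTAGCA", "ACGAAGCT"], l=4, d=1))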

  9. Comparison of Sequential and Variational Data Assimilation

    Science.gov (United States)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential for using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e. they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to the lack of comparisons between the two techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates for an HBV model. The results are computed for a hindcast period and assessed using lead-time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
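
    As a concrete anchor for the sequential side, here is a textbook stochastic Ensemble Kalman Filter analysis step (not the study's implementation; the observation operator, covariances and sizes are toy assumptions):

      # Minimal stochastic EnKF analysis step (textbook form, toy sizes).
      import numpy as np

      rng = np.random.default_rng(1)
      n_state, n_obs, n_ens = 10, 3, 50
      X = rng.standard_normal((n_state, n_ens))          # forecast ensemble
      H = np.zeros((n_obs, n_state)); H[[0, 1, 2], [0, 4, 9]] = 1.0
      R = 0.1 * np.eye(n_obs)                            # obs error covariance
      y = rng.standard_normal(n_obs)                     # observations

      A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
      P = A @ A.T / (n_ens - 1)                          # sample covariance
      K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
      Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
      X_analysis = X + K @ (Y - H @ X)                   # updated ensemble
      print(X_analysis.mean(axis=1))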

  10. Sequential and Parallel Algorithms for Finding a Maximum Convex Polygon

    DEFF Research Database (Denmark)

    Fischer, Paul

    1997-01-01

    This paper investigates the problem where one is given a finite set of n points in the plane, each of which is labeled either 'positive' or 'negative'. We consider bounded convex polygons, the vertices of which are positive points and which do not contain any negative point. It is shown how such a polygon which is maximal with respect to area can be found in time O(n³ log n). With the same running time one can also find such a polygon which contains a maximum number of positive points. If, in addition, the number of vertices of the polygon is restricted to be at most M, then the running time becomes O(M n³ log n). It is also shown how to find a maximum convex polygon which contains a given point in time O(n³ log n). Two parallel algorithms for the basic problem are also presented. The first one runs in time O(n log n) using O(n²) processors, the second one has polylogarithmic time but needs O…

  11. OPTIMIZATION OF AGGREGATION AND SEQUENTIAL-PARALLEL EXECUTION MODES OF INTERSECTING OPERATION SETS

    Directory of Open Access Journals (Sweden)

    G. M. Levin

    2016-01-01

    A mathematical model and a method for the problem of optimizing the aggregation and the sequential-parallel execution modes of intersecting operation sets are proposed. The proposed method is based on a two-level decomposition scheme. At the top level the variant of aggregation for groups of operations is selected, and at the lower level the execution modes of operations are optimized for a fixed version of aggregation.

  12. The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging

    Science.gov (United States)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-06-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and which are sensitive to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing a high-intensity ultrasonic beam is formed in the specimen at the focal point. In sequential focusing, by contrast, only low-intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, the parallel and sequential images are expected to be identical. Here we measure the difference between these images and use it to characterise the nonlinearity of small closed fatigue cracks. In particular we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest that the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.
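
    The underlying superposition argument is easy to demonstrate numerically: for a linear target, the parallel (simultaneous) transmission equals the sum of the sequential single-element responses, so the subtraction cancels, while a saturating response does not. All signals and parameters below are synthetic stand-ins:

      # Synthetic demonstration of the field subtraction idea.
      import numpy as np

      t = np.linspace(0.0, 1e-5, 2000)
      f0 = 2e6                                   # centre frequency (assumption)

      def element_response(delay):
          # Linear echo from one firing: a delayed Gaussian tone burst.
          env = np.exp(-(((t - delay - 2e-6) / 5e-7) ** 2))
          return env * np.sin(2 * np.pi * f0 * (t - delay))

      delays = [1e-7 * k for k in range(8)]      # focal delay law, 8 elements
      sequential_sum = sum(element_response(d) for d in delays)

      # Linear target: the parallel firing is just the superposition.
      parallel_linear = sum(element_response(d) for d in delays)
      print("linear residual:   ",
            float(np.abs(parallel_linear - sequential_sum).max()))

      # Toy nonlinearity: the high-intensity parallel field saturates,
      # so subtraction no longer cancels (a "closed crack" signature).
      parallel_crack = np.tanh(1.5 * parallel_linear) / 1.5
      print("nonlinear residual:",
            float(np.abs(parallel_crack - sequential_sum).max()))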

  13. Efficient sequential and parallel algorithms for finding edit distance based motifs.

    Science.gov (United States)

    Pal, Soumitra; Xiao, Peng; Rajasekaran, Sanguthevar

    2016-08-18

    Motif search is an important step in extracting meaningful patterns from biological data. The general problem of motif search is intractable, and there is a pressing need to develop efficient exact and approximation algorithms to solve it. In this paper, we present several novel, exact, sequential and parallel algorithms for solving the (l,d) Edit-distance-based Motif Search (EMS) problem: given two integers l, d and n biological strings, find all strings of length l that appear in each input string with at most d errors of the types substitution, insertion and deletion. One popular technique is to explore, for each input string, the set of all possible l-mers that belong to the d-neighborhood of any substring of the input string, and to output those which are common to all input strings. We introduce a novel and provably efficient neighborhood exploration technique. We show that it is enough to consider the candidates in the neighborhood which are at a distance of exactly d. We compactly represent these candidate motifs using wildcard characters and efficiently explore them with very few repetitions. Our sequential algorithm uses a trie-based data structure to efficiently store and sort the candidate motifs. Our parallel algorithm, in a multi-core shared-memory setting, uses arrays for storing and a novel modification of radix sort for sorting the candidate motifs. Algorithms for EMS are customarily evaluated on several challenging instances such as (8,1), (12,2), (16,3), (20,4), and so on. The best previously known algorithm, EMS1, is sequential and solves instances up to (16,3) in an estimated 3 days. Our sequential algorithms are more than 20 times faster on (16,3). On other hard instances such as (9,2), (11,3) and (13,4), our algorithms are much faster still. Our parallel algorithm has more than 600% scaling performance while using 16 threads. Our algorithms have pushed up the state of the art of EMS solvers, and we believe that the techniques introduced in …

  14. Comparison of parallel viscosity with neoclassical theory

    International Nuclear Information System (INIS)

    Ida, K.; Nakajima, N.

    1996-04-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for plasmas heated with tangential NBI in the CHS heliotron/torsatron device in order to estimate the parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (μ⊥ = 2 m²/s). (author)

  15. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects in order to become a relevant and sufficiently general model for large-scale MAS, so that the generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate the failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
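
    The failure of interleaving to reproduce synchrony is easy to exhibit: below, one totalistic rule is iterated under a perfectly synchronous (parallel) update and under a left-to-right sequential sweep, and the orbits diverge. The rule, lattice size and initial state are arbitrary choices, not those of the paper.

      # One totalistic rule, two update semantics (toy 1-D ring).
      def rule(left, me, right):
          # Totalistic: next state depends only on the neighbourhood sum.
          return 1 if (left + me + right) in (1, 2) else 0

      def step_parallel(cells):
          # All cells read the old configuration (perfect synchrony).
          n = len(cells)
          return [rule(cells[i - 1], cells[i], cells[(i + 1) % n])
                  for i in range(n)]

      def step_sequential(cells):
          # Fixed left-to-right sweep: later cells see fresh values.
          cells = cells[:]
          n = len(cells)
          for i in range(n):
              cells[i] = rule(cells[i - 1], cells[i], cells[(i + 1) % n])
          return cells

      p = s = [0, 0, 1, 0, 0, 1, 0, 0]
      for _ in range(4):
          p, s = step_parallel(p), step_sequential(s)
      print("parallel:  ", p)
      print("sequential:", s)   # generally differs: interleaving != synchrony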

  16. Sequential data access with Oracle and Hadoop: a performance comparison

    International Nuclear Information System (INIS)

    Baranowski, Zbigniew; Canali, Luca; Grancher, Eric

    2014-01-01

    The Hadoop framework has proven to be an effective and popular approach for dealing with 'Big Data' and, thanks to its scaling ability and optimised storage access, Hadoop Distributed File System-based projects such as MapReduce or HBase are seen as candidates to replace traditional relational database management systems whenever scalable speed of data processing is a priority. But do these projects deliver in practice? Does migrating to Hadoop's 'shared nothing' architecture really improve data access throughput? And, if so, at what cost? The authors answer these questions, addressing cost/performance as well as raw performance, based on a performance comparison between an Oracle-based relational database and Hadoop's distributed solutions like MapReduce or HBase for sequential data access. A key feature of our approach is the use of an unbiased data model, as certain data models can significantly favour one of the technologies tested.

  17. Comparison of ablation centration after bilateral sequential versus simultaneous LASIK.

    Science.gov (United States)

    Lin, Jane-Ming; Tsai, Yi-Yu

    2005-01-01

    To compare ablation centration after bilateral sequential and simultaneous myopic LASIK. A retrospective randomized case series was performed of 670 eyes of 335 consecutive patients who had undergone either bilateral sequential (group 1) or simultaneous (group 2) myopic LASIK between July 2000 and July 2001 at the China Medical University Hospital, Taichung, Taiwan. The ablation centrations of the first and second eyes in the two groups were compared 3 months postoperatively. Of 670 eyes, 274 eyes (137 patients) comprised the sequential group and 396 eyes (198 patients) comprised the simultaneous group. Three months postoperatively, 220 eyes of 110 patients (80%) in the sequential group and 236 eyes of 118 patients (60%) in the simultaneous group provided topographic data for centration analysis. For the first eyes, mean decentration was 0.39 +/- 0.26 mm in the sequential group and 0.41 +/- 0.19 mm in the simultaneous group (P = .30). For the second eyes, mean decentration was 0.28 +/- 0.23 mm in the sequential group and 0.30 +/- 0.21 mm in the simultaneous group (P = .36). Decentration in the second eyes significantly improved in both groups (group 1, P = .02; group 2, P …) … in the sequential group and 0.32 +/- 0.18 mm in the simultaneous group (P = .33). The difference of ablation center angles between the first and second eyes was 43.2 … in the sequential group and 45.1 +/- 50.8 degrees in the simultaneous group (P = .42). Simultaneous bilateral LASIK is comparable to sequential surgery in ablation centration.

  18. Performance of a Sequential and Parallel Computational Fluid Dynamic (CFD) Solver on a Missile Body Configuration

    National Research Council Canada - National Science Library

    Hisley, Dixie

    1999-01-01

    The goals of this report are: (1) to investigate the performance of message passing and loop-level parallelization techniques, as they were implemented in the computational fluid dynamics (CFD) …

  19. Air-side performance of a parallel-flow parallel-fin (PF²) heat exchanger in sequential frosting

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Ping [Zhejiang Vocational College of Commerce, Hangzhou, Binwen Road 470 (China); Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, 1206 West Green Street, Urbana, IL 61801 (United States); Hrnjak, P.S. [Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, 1206 West Green Street, Urbana, IL 61801 (United States)

    2010-09-15

    The thermal-hydraulic performance under periodic frosting conditions is experimentally studied for the parallel-flow parallel-fin heat exchanger, henceforth referred to as a PF² heat exchanger, a new style of heat exchanger that uses louvered bent fins on flat tubes to enhance water drainage when the flat tubes are horizontal. Typically, it takes a few frosting/defrosting cycles to reach repeatable conditions. The criterion for the initiation of defrost and a sufficiently long defrost period are determined for the test PF² heat exchanger and test condition. The effects of blower operation on the pressure drop, frost accumulation, water retention, and capacity over time are compared under the conditions of 15 sequential frosting cycles. The pressure drop across the heat exchanger and the overall heat transfer coefficient are quantified under frosting conditions as functions of the air humidity and air face velocity. The performances of two types of flat-tube heat exchangers, the PF² heat exchanger and the conventional parallel-flow serpentine-fin (PFSF) heat exchanger, are compared and the results obtained are presented. (author)

  20. Comparisons of memory for nonverbal auditory and visual sequential stimuli.

    Science.gov (United States)

    McFarland, D J; Cacace, A T

    1995-01-01

    Properties of auditory and visual sensory memory were compared by examining subjects' recognition performance of randomly generated binary auditory sequential frequency patterns and binary visual sequential color patterns within a forced-choice paradigm. Experiment 1 demonstrated serial-position effects in the auditory and visual modalities consisting of both primacy and recency effects. Experiment 2 found that retention of auditory and visual information was remarkably similar when assessed across a 10-s interval. Experiments 3 and 4, taken together, showed that the recency effect in sensory memory is affected more by the type of response required (recognition vs. reproduction) than by the sensory modality employed. These studies suggest that the auditory and visual sensory memory stores for nonverbal stimuli share similar properties with respect to serial-position effects and persistence over time.

  1. Parallel Sequential Monte Carlo for Efficient Density Combination: The Deco Matlab Toolbox

    DEFF Research Database (Denmark)

    Casarin, Roberto; Grassi, Stefano; Ravazzolo, Francesco

    This paper presents the Matlab package DeCo (Density Combination), which is based on the paper by Billio et al. (2013) where a constructive Bayesian approach is presented for combining predictive densities originating from different models or other sources of information. The combination weights … for standard CPU computing and for Graphics Processing Unit (GPU) parallel computing. For the GPU implementation we use the Matlab parallel computing toolbox and show how to use general-purpose GPU computing almost effortlessly. This GPU implementation comes with a speed-up of the execution time of up to seventy times compared to a standard CPU Matlab implementation on a multicore CPU. We show the use of the package and the computational gain of the GPU version through some simulation experiments and empirical applications.

  2. Sequential decisions: a computational comparison of observational and reinforcement accounts.

    Directory of Open Access Journals (Sweden)

    Nazanin Mohammadi Sepahvand

    Right brain damaged patients show impairments in sequential decision making tasks for which healthy people do not show any difficulty. We hypothesized that this difficulty could be due to the failure of right brain damaged patients to develop well-matched models of the world. Our motivation is the idea that, to navigate uncertainty, humans use models of the world to direct the decisions they make when interacting with their environment. The better the model is, the better their decisions are. To explore the model building and updating process in humans and the basis for impairment after brain injury, we used a computational model of non-stationary sequence learning. RELPH (Reinforcement and Entropy Learned Pruned Hypothesis space) was able to qualitatively and quantitatively reproduce the results of left and right brain damaged patient groups and healthy controls playing a sequential version of Rock, Paper, Scissors. Our results suggest that, in general, humans employ a sub-optimal reinforcement-based learning method rather than an objectively better statistical learning approach, and that differences between right brain damaged and healthy control groups can be explained by different exploration policies rather than qualitatively different learning mechanisms.

  3. Sequential combination of k-t principal component analysis (PCA) and partial parallel imaging: k-t PCA GROWL.

    Science.gov (United States)

    Qi, Haikun; Huang, Feng; Zhou, Hongmei; Chen, Huijun

    2017-03-01

    k-t principal component analysis (k-t PCA) is a distinguished method for high spatiotemporal resolution dynamic MRI. To further improve the accuracy of k-t PCA, a combination with partial parallel imaging (PPI), k-t PCA/SENSE, has been tested. However, k-t PCA/SENSE suffers from long reconstruction times and limited improvement. This study aims to improve the combination of k-t PCA and PPI in both reconstruction speed and accuracy. A sequential combination scheme called k-t PCA GROWL (GRAPPA operator for wider readout line) was proposed. The GRAPPA operator was performed before k-t PCA to extend each readout line into a wider band, which improved the conditioning of the encoding matrix in the subsequent k-t PCA reconstruction. k-t PCA GROWL was tested and compared with k-t PCA and k-t PCA/SENSE on cardiac imaging. k-t PCA GROWL consistently resulted in better image quality compared with k-t PCA/SENSE at high acceleration factors, for both retrospectively and prospectively undersampled cardiac imaging, at a much lower computation cost. The improvement in image quality became greater with increasing acceleration factor. By sequentially combining the GRAPPA operator and k-t PCA, the proposed k-t PCA GROWL method outperformed k-t PCA/SENSE in both reconstruction speed and accuracy, suggesting that k-t PCA GROWL is a better combination scheme than k-t PCA/SENSE. Magn Reson Med 77:1058-1067, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  4. Parameter sampling capabilities of sequential and simultaneous data assimilation: I. Analytical comparison

    International Nuclear Information System (INIS)

    Fossum, Kristian; Mannseth, Trond

    2014-01-01

    We assess the parameter sampling capabilities of some Bayesian, ensemble-based, joint state-parameter (JS) estimation methods. The forward model is assumed to be non-chaotic and to have nonlinear components, and the emphasis is on results obtained for the parameters in the state-parameter vector. A variety of approximate sampling methods exist, and a number of numerical comparisons between such methods have been performed. Often, more than one of the defining characteristics vary from one method to another, so it can be difficult to point out which characteristic of the more successful method in such a comparison was decisive. In this study, we single out one defining characteristic for comparison: whether data are assimilated sequentially or simultaneously. The current paper is concerned with analytical investigations into this issue. We carefully select one sequential and one simultaneous JS method for the comparison. We also design a corresponding pair of pure parameter estimation methods, and we show how the JS methods and the parameter estimation methods are pairwise related. It is shown that the sequential and the simultaneous parameter estimation methods are equivalent for one particular combination of observations with different degrees of nonlinearity. Strong indications are presented for why one may expect the sequential parameter estimation method to outperform the simultaneous parameter estimation method for all other combinations of observations. Finally, the conditions under which similar relations can be expected to hold between the corresponding JS methods are discussed. A companion paper, part II (Fossum and Mannseth 2014 Inverse Problems 30 114003), is concerned with statistical analysis of results from a range of numerical experiments involving sequential and simultaneous JS estimation, where the design of the numerical investigation is motivated by our findings in the current paper. (paper)

  5. Breast Conserving Treatment for Breast Cancer: Dosimetric Comparison of Sequential versus Simultaneous Integrated Photon Boost

    Directory of Open Access Journals (Sweden)

    Hilde Van Parijs

    2014-01-01

    Background. Breast conserving surgery followed by whole breast irradiation is widely accepted as the standard of care for early breast cancer. Addition of a boost dose to the initial tumor area further reduces local recurrences. We investigated the dosimetric benefits of a simultaneously integrated boost (SIB) compared to a sequential boost to hypofractionate the boost volume, while maintaining normofractionation on the breast. Methods. For 10 patients 4 treatment plans were deployed, 1 with a sequential photon boost, and 3 with different SIB techniques: on a conventional linear accelerator, helical TomoTherapy, and static TomoDirect. Dosimetric comparison was performed. Results. PTV coverage was good in all techniques. Conformity was better with all SIB techniques compared to the sequential boost (P = 0.0001). There was less dose spilling to the ipsilateral breast outside the PTVboost (P = 0.04). The dose to the organs at risk (OAR) was not influenced by SIB compared to the sequential boost. Helical TomoTherapy showed a higher mean dose to the contralateral breast, but less than 5 Gy for each patient. Conclusions. SIB showed less dose spilling within the breast and equal dose to OAR compared to the sequential boost. Both helical TomoTherapy and the conventional technique delivered acceptable dosimetry. SIB seems a safe alternative and can be implemented in clinical routine.

  6. Breast conserving treatment for breast cancer: dosimetric comparison of sequential versus simultaneous integrated photon boost.

    Science.gov (United States)

    Van Parijs, Hilde; Reynders, Truus; Heuninckx, Karina; Verellen, Dirk; Storme, Guy; De Ridder, Mark

    2014-01-01

    Breast conserving surgery followed by whole breast irradiation is widely accepted as standard of care for early breast cancer. Addition of a boost dose to the initial tumor area further reduces local recurrences. We investigated the dosimetric benefits of a simultaneously integrated boost (SIB) compared to a sequential boost to hypofractionate the boost volume, while maintaining normofractionation on the breast. For 10 patients 4 treatment plans were deployed, 1 with a sequential photon boost, and 3 with different SIB techniques: on a conventional linear accelerator, helical TomoTherapy, and static TomoDirect. Dosimetric comparison was performed. PTV-coverage was good in all techniques. Conformity was better with all SIB techniques compared to sequential boost (P = 0.0001). There was less dose spilling to the ipsilateral breast outside the PTVboost (P = 0.04). The dose to the organs at risk (OAR) was not influenced by SIB compared to sequential boost. Helical TomoTherapy showed a higher mean dose to the contralateral breast, but less than 5 Gy for each patient. SIB showed less dose spilling within the breast and equal dose to OAR compared to sequential boost. Both helical TomoTherapy and the conventional technique delivered acceptable dosimetry. SIB seems a safe alternative and can be implemented in clinical routine.

  7. A Comparison of Sequential and GPU Implementations of Iterative Methods to Compute Reachability Probabilities

    Directory of Open Access Journals (Sweden)

    Elise Cormie-Bowins

    2012-10-01

    We consider the problem of computing reachability probabilities: given a Markov chain, an initial state of the Markov chain, and a set of goal states of the Markov chain, what is the probability of reaching any of the goal states from the initial state? This problem can be reduced to solving a linear equation Ax = b for x, where A is a matrix and b is a vector. We consider two iterative methods to solve the linear equation: the Jacobi method and the biconjugate gradient stabilized (BiCGStab) method. For both methods, a sequential and a parallel version have been implemented. The parallel versions have been implemented on the compute unified device architecture (CUDA), so that they can be run on an NVIDIA graphics processing unit (GPU). From our experiments we conclude that, as the size of the matrix increases, the CUDA implementations outperform the sequential implementations. Furthermore, the BiCGStab method performs better than the Jacobi method for dense matrices, whereas the Jacobi method does better for sparse ones. Since the reachability probabilities problem plays a key role in probabilistic model checking, we also compared the implementations for matrices obtained from a probabilistic model checker. Our experiments support the conjecture by Bosnacki et al. that the Jacobi method is superior to Krylov subspace methods, a class to which the BiCGStab method belongs, for probabilistic model checking.
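
    The Jacobi method referred to here amounts to iterating x ← Ax + b for the system restricted to non-goal states; every component of a sweep is independent of the others, which is what maps well to a GPU. A toy three-state sketch (chain and probabilities invented):

      # Jacobi-style sweeps for reachability probabilities x = Ax + b.
      import numpy as np

      A = np.array([[0.5, 0.3, 0.0],
                    [0.2, 0.0, 0.5],
                    [0.0, 0.4, 0.1]])        # transient-to-transient probs
      b = np.array([0.1, 0.3, 0.2])          # one-step jump into goal states

      x = np.zeros(3)
      for _ in range(200):
          x_new = A @ x + b                  # every component independent
          if np.max(np.abs(x_new - x)) < 1e-12:
              break
          x = x_new
      print(x)                               # reachability probability per state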

  8. Comparison of likelihood testing procedures for parallel systems with covariances

    International Nuclear Information System (INIS)

    Ayman Baklizi; Isa Daud; Noor Akma Ibrahim

    1998-01-01

    In this paper we investigate and compare the behavior of the likelihood ratio, Rao's, and Wald's statistics for testing hypotheses on the parameters of the simple linear regression model based on parallel systems with covariances. These statistics are asymptotically equivalent (Barndorff-Nielsen and Cox, 1994); however, their relative performances in finite samples are not generally known. A Monte Carlo experiment is conducted to simulate the sizes and the powers of these statistics for complete samples and in the presence of time censoring. Comparisons of the statistics are made according to the attainment of the assumed size of the test and their powers at various points in the parameter space. The results show that the likelihood ratio statistic appears to have the best performance in terms of the attainment of the assumed size of the test. Power comparisons show that the Rao statistic has some advantage over the Wald statistic in almost all of the space of alternatives, while the likelihood ratio statistic occupies either the first or the last position in terms of power. Overall, the likelihood ratio statistic appears to be more appropriate for the model under study, especially for small sample sizes.

  9. Comparison of Sequential Regimen and Standard Therapy for Helicobacter pylori Eradication in Patients with Dyspepsia

    Directory of Open Access Journals (Sweden)

    Gh. Roshanaei

    2013-10-01

    Introduction & Objective: Some studies have reported successful eradication rates using sequential therapy, but more recent studies performed in Asia did not find a similar benefit. Because of the inconsistent results of previous comparisons of standard triple-drug therapy and the sequential regimen, we decided to compare these treatments in Persian patients. Materials & Methods: This study is a randomized clinical trial performed in one hundred and forty patients suffering from dyspepsia with an indication for H. pylori eradication, between November 2010 and March 2012. Patients were randomized into two equal groups. The patients in the first group (standard) were treated with omeprazole capsule 20 mg BID, amoxicillin capsule 1 g BID and clarithromycin tablet 500 mg BID for 14 days, while the patients in the second group (sequential) were treated with omeprazole capsule 20 mg for 10 days, amoxicillin capsule 1 g BID for 5 days, then clarithromycin tablet 500 mg and tinidazole tablet 500 mg BID for the other 5 days. 4-6 weeks after the treatment, we compared the eradication of H. pylori between the two groups by the C14 urea breath test. Results: H. pylori infection was successfully cured in 57/70 patients (81.43%) with the 10-day sequential therapy and in 60/70 patients (85.71%) with the standard fourteen-day triple therapy. Conclusion: We detected no significant differences between the 10-day sequential eradication therapy for H. pylori and the 14-day standard triple treatment among the patients. (Sci J Hamadan Univ Med Sci 2013; 20(3): 184-193)

  10. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    Science.gov (United States)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-01-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS-based personal computer (PC). An overview is presented of RTOD/E capabilities, together with the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and GTDS was used to perform the batch least-squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistency of results obtained by the batch and sequential methods. Comparisons were made between the forward-filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

  11. Comparison of some parallelization strategies of thermalhydraulic codes on GPUs

    International Nuclear Information System (INIS)

    Jendoubi, T.; Bergeaud, V.; Geay, A.

    2013-01-01

    Modern supercomputer architectures are now often based on hybrid concepts combining distributed-memory parallelism, shared-memory parallelism, and GPUs (Graphics Processing Units). In this work, we propose a new approach to take advantage of these graphics cards in thermal-hydraulic algorithms. (authors)

  12. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Science.gov (United States)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm consisting of the minimization of a cost function, achieved by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, it is of the parallel Jacobi class with alternating minimizations. This strategy is known as the chessboard type, where red pixels can be updated in parallel within the same iteration since they are independent; similarly, black pixels can be updated in parallel in an alternating iteration. We present parallel implementations of our algorithm for different parallel architectures: multicore CPU, Xeon Phi coprocessor, and Nvidia graphics processing unit. In all cases, our parallel algorithm achieves superior performance compared with the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
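
    The chessboard decomposition is straightforward to express with array masks: red cells depend only on black neighbours and vice versa, so each half-sweep is fully data-parallel. The sketch below applies the idea to a generic Laplace smoothing update, not to the accumulation-of-residual-maps cost itself:

      # Red-black ("chessboard") half-sweeps with numpy masks.
      import numpy as np

      u = np.random.default_rng(0).standard_normal((64, 64))
      u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0          # fixed boundary

      red = np.zeros_like(u, dtype=bool)
      red[1:-1, 1:-1] = (np.add.outer(np.arange(62), np.arange(62)) % 2 == 0)
      black = np.zeros_like(u, dtype=bool)
      black[1:-1, 1:-1] = ~red[1:-1, 1:-1]

      def half_sweep(u, mask):
          # Average of the four neighbours; all masked cells are
          # independent, so this assignment is one parallel update.
          avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                        np.roll(u, 1, 1) + np.roll(u, -1, 1))
          u[mask] = avg[mask]

      for _ in range(100):
          half_sweep(u, red)                 # red reads current black values
          half_sweep(u, black)               # black then reads fresh red values
      print("residual energy:", float((np.diff(u, axis=0) ** 2).sum()))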

  13. Parallelizing the spectral transform method: A comparison of alternative parallel algorithms

    International Nuclear Information System (INIS)

    Foster, I.; Worley, P.H.

    1993-01-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on the sphere and is widely used in global climate modeling. In this paper, we outline different approaches to parallelizing the method and describe experiments that we are conducting to evaluate the efficiency of these approaches on parallel computers. The experiments are conducted using a testbed code that solves the nonlinear shallow water equations on a sphere, but are designed to permit evaluation in the context of a global model. They allow us to evaluate the relative merits of the approaches as a function of problem size and number of processors. The results of this study are guiding ongoing work on PCCM2, a parallel implementation of the Community Climate Model developed at the National Center for Atmospheric Research.

  14. Comparison Of Hybrid Sorting Algorithms Implemented On Different Parallel Hardware Platforms

    Directory of Open Access Journals (Sweden)

    Dominik Zurek

    2013-01-01

    Sorting is a common problem in computer science. There are many well-known sorting algorithms created for sequential execution on a single processor. Recent hardware platforms, however, enable wide parallel algorithms: standard processors consist of multiple cores, and hardware accelerators like the GPU provide further parallelism. Graphics cards, with their parallel architecture, give new possibilities for speeding up many algorithms. In this paper we describe the results of implementing a few different sorting algorithms on GPU cards and multicore processors. A hybrid algorithm is then presented, consisting of parts executed on both platforms, standard CPU and GPU.
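
    The hybrid shape described here can be sketched in a few lines: sort independent chunks in parallel (standing in for the GPU stage) and merge the sorted runs on the CPU. The chunk count and data are arbitrary:

      # Hybrid sort sketch: parallel chunk sort, then a k-way CPU merge.
      from concurrent.futures import ProcessPoolExecutor
      from heapq import merge
      import random

      def sort_chunk(chunk):
          return sorted(chunk)               # placeholder for a GPU kernel

      if __name__ == "__main__":
          data = [random.random() for _ in range(100_000)]
          n_chunks = 8
          chunks = [data[i::n_chunks] for i in range(n_chunks)]
          with ProcessPoolExecutor() as pool:
              runs = list(pool.map(sort_chunk, chunks))
          result = list(merge(*runs))        # sequential merge stage
          assert result == sorted(data)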

  15. BLAST in Gid (BiG): A Grid-Enabled Software Architecture and Implementation of Parallel and Sequential BLAST

    International Nuclear Information System (INIS)

    Aparicio, G.; Blanquer, I.; Hernandez, V.; Segrelles, D.

    2007-01-01

    The integration of high-performance computing tools is a key issue in biomedical research. Many computer-based applications, such as BLAST, have been migrated to high-performance computers to deal with their computing and storage needs. However, the use of clusters and computing farms presents problems of scalability. The use of a higher layer of parallelism, which splits the task into highly independent long jobs that can be executed in parallel, can improve performance while maintaining efficiency. Grid technologies combined with parallel computing resources are an important enabling technology here. This work presents a software architecture for executing BLAST on an international Grid infrastructure that guarantees security, scalability and fault tolerance. The software architecture is modular and adaptable to many other high-throughput applications, both inside and outside the field of biocomputing. (Author)

  16. PLAST: parallel local alignment search tool for database comparison

    Directory of Open Access Journals (Sweden)

    Lavenier Dominique

    2009-10-01

    Background: Sequence similarity searching is an important and challenging task in molecular biology, and next-generation sequencing should further strengthen the need for faster algorithms to process such vast amounts of data. At the same time, the internal architecture of current microprocessors is tending towards more parallelism, leading to the use of chips with two, four and more cores integrated on the same die. The main purpose of this work was to design an effective algorithm to fit the parallel capabilities of modern microprocessors. Results: A parallel algorithm for comparing large genomic banks and targeting middle-range computers has been developed and implemented in the PLAST software. The algorithm exploits two key parallel features of existing and future microprocessors: the SIMD programming model (SSE instruction set) and the multithreading concept (multicore). Compared to multithreaded BLAST software, tests performed on an 8-processor server have shown speedups ranging from 3 to 6 with a similar level of accuracy. Conclusion: A parallel algorithmic approach driven by knowledge of the internal microprocessor architecture allows significant speedups to be obtained while preserving standard sensitivity for similarity search problems.

  17. Comparison of two percutaneous tracheostomy techniques, guide wire dilating forceps and Ciaglia Blue Rhino: a sequential cohort study.

    NARCIS (Netherlands)

    Fikkers, B.G.; Staatsen, M; Lardenoije, S.G.; Hoogen, F.J.A. van den; Hoeven, J.G. van der

    2004-01-01

    INTRODUCTION: To evaluate and compare the peri-operative and postoperative complications of the two most frequently used percutaneous tracheostomy techniques, namely guide wire dilating forceps (GWDF) and Ciaglia Blue Rhino (CBR). METHODS: A sequential cohort study with comparison of short-term and …

  18. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    Science.gov (United States)

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker-positive and marker-negative subgroups and the prevalence of marker-positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker-negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.

  19. Comparison of Software Technologies for Vectorization and Parallelization

    CERN Document Server

    Lazzaro, Alfio; Nowak, Andrzej; Valsan, Liviu

    2012-01-01

    This paper demonstrates how modern software development methodologies can be used to give an existing sequential application a considerable performance speed-up on modern x86 server systems. Whereas, in the past, speed-up was directly linked to the increase in clock frequency when moving to a more modern system, current x86 servers present a plethora of “performance dimensions” that need to be harnessed with great care. The application we used is a real-life data analysis example in C++ analyzing High Energy Physics data. The key software methods used are OpenMP, Intel Threading Building Blocks (TBB), Intel Cilk Plus, and the auto-vectorization capability of the Intel compiler (Composer XE). Somewhat surprisingly, the Message Passing Interface (MPI) is successfully added, although our focus is on single-node rather than multi-node performance optimization. The paper underlines the importance of algorithmic redesign in order to optimize each performance dimension and links this to close control of the memo...

  1. A comparison of an algorithm for automated sequential beam orientation selection (Cycle) with simulated annealing

    International Nuclear Information System (INIS)

    Woudstra, Evert; Heijmen, Ben J M; Storchi, Pascal R M

    2008-01-01

    Some time ago we developed and published a new deterministic algorithm (called Cycle) for automatic selection of beam orientations in radiotherapy. This algorithm is a plan generation process aiming at the prescribed PTV dose within hard dose and dose-volume constraints. The algorithm allows a large number of input orientations to be used and selects only the most efficient orientations, i.e. those surviving the selection process. Efficiency is determined by a score function and is more or less equal to the extent of uninhibited access to the PTV for a specific beam during the selection process. In this paper we compare the capabilities of fast simulated annealing (FSA) and Cycle for cases where local optima are supposed to be present. Five pancreas and five oesophagus cases previously treated in our institute were selected for this comparison. Plans were generated for FSA and Cycle using the same hard dose and dose-volume constraints, and the largest achievable PTV doses as obtained from these algorithms were compared. The largest achieved PTV dose values were generally very similar for the two algorithms. In some cases FSA resulted in a slightly higher PTV dose than Cycle, at the cost of switching on substantially more beam orientations than Cycle. In other cases, when Cycle generated the solution with the highest PTV dose using only a limited number of non-zero-weight beams, FSA seemed to have some difficulty in switching off the unfavourable directions. Cycle was faster than FSA, especially for large-dimensional feasible spaces. In conclusion, for the cases studied in this paper, we have found that despite the inherent drawback of the sequential search used by Cycle (which could in principle get trapped in a local optimum), Cycle is nevertheless able to find comparable or sometimes slightly better treatment plans than FSA (which in theory finds the global optimum), especially in large-dimensional beam weight spaces.

  2. Microwave Ablation: Comparison of Simultaneous and Sequential Activation of Multiple Antennas in Liver Model Systems.

    Science.gov (United States)

    Harari, Colin M; Magagna, Michelle; Bedoya, Mariajose; Lee, Fred T; Lubner, Meghan G; Hinshaw, J Louis; Ziemlewicz, Timothy; Brace, Christopher L

    2016-01-01

    To compare microwave ablation zones created by using sequential or simultaneous power delivery in ex vivo and in vivo liver tissue. All procedures were approved by the institutional animal care and use committee. Microwave ablations were performed in both ex vivo and in vivo liver models with a 2.45-GHz system capable of powering up to three antennas simultaneously. Two- and three-antenna arrays were evaluated in each model. Sequential and simultaneous ablations were created by delivering power (50 W ex vivo, 65 W in vivo) for 5 minutes per antenna (10 and 15 minutes total ablation time for sequential ablations, 5 minutes for simultaneous ablations). Thirty-two ablations were performed in ex vivo bovine livers (eight per group) and 28 in the livers of eight swine in vivo (seven per group). Ablation zone size and circularity metrics were determined from ablations excised postmortem. Mixed effects modeling was used to evaluate the influence of power delivery, number of antennas, and tissue type. On average, ablations created by using the simultaneous power delivery technique were larger than those created with the sequential technique. Simultaneous ablations were also more circular than sequential ablations (P = .0001). Larger and more circular ablations were achieved with three antennas compared with two antennas. Simultaneous power delivery creates larger, more confluent ablations with greater temperatures than those created with sequential power delivery. © RSNA, 2015.

  3. Sequential vs simultaneous encoding of spatial information: a comparison between the blind and the sighted.

    Science.gov (United States)

    Ruotolo, Francesco; Ruggiero, Gennaro; Vinciguerra, Michela; Iachini, Tina

    2012-02-01

    The aim of this research is to assess whether the crucial factor in determining the characteristics of blind people's spatial mental images is the visual impairment per se or the processing style imposed by the dominant perceptual modalities used to acquire spatial information, i.e. simultaneous (vision) vs sequential (kinaesthesis). Participants were asked to learn six positions in a large parking area via movement alone (congenitally blind, adventitiously blind, blindfolded sighted) or with vision plus movement (simultaneous sighted, sequential sighted), and then to mentally scan between positions on the path. The crucial manipulation concerned the sequential sighted group: their visual exploration was made sequential by placing visual obstacles within the pathway in such a way that they could not see the positions along the pathway simultaneously. The results revealed a significant time/distance linear relation in all tested groups. However, the linear component was lower in sequential sighted and blind participants, especially congenitally blind ones. Sequential sighted and congenitally blind participants showed an almost overlapping performance. Differences between groups became evident when mentally scanning farther distances (more than 5 m). This threshold effect may reveal processing limitations due to the need to integrate and update spatial information. Overall, the results suggest that the characteristics of the processing style, rather than the visual impairment per se, affect blind people's spatial mental images. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Stiffness analysis and comparison of a Biglide parallel grinder with alternative spatial modular parallelograms

    DEFF Research Database (Denmark)

    Wu, Guanglei; Zou, Ping

    2017-01-01

    This paper deals with the stiffness modeling, analysis and comparison of a Biglide parallel grinder with two alternative modular parallelograms. It turns out that the Cartesian stiffness matrix of the manipulator has the property that it can be decoupled into two homogeneous matrices, corresponding…

  5. Parallel-Sequential Texture Analysis

    NARCIS (Netherlands)

    van den Broek, Egon; Singh, Sameer; Singh, Maneesha; van Rikxoort, Eva M.; Apte, Chid; Perner, Petra

    2005-01-01

    Color induced texture analysis is explored, using two texture analysis techniques: the co-occurrence matrix and the color correlogram as well as color histograms. Several quantization schemes for six color spaces and the human-based 11 color quantization scheme have been applied. The VisTex texture

  6. A comparison of energetic ions in the plasma depletion layer and the quasi-parallel magnetosheath

    Science.gov (United States)

    Fuselier, Stephen A.

    1994-01-01

    Energetic ion spectra measured by the Active Magnetospheric Particle Tracer Explorers/Charge Composition Explorer (AMPTE/CCE) downstream from the Earth's quasi-parallel bow shock (in the quasi-parallel magnetosheath) and in the plasma depletion layer are compared. In the latter region, energetic ions are from a single source, leakage of magnetospheric ions across the magnetopause and into the plasma depletion layer. In the former region, both the magnetospheric source and shock acceleration of the thermal solar wind population at the quasi-parallel shock can contribute to the energetic ion spectra. The relative strengths of these two energetic ion sources are determined through the comparison of spectra from the two regions. It is found that magnetospheric leakage can provide an upper limit of 35% of the total energetic H(+) population in the quasi-parallel magnetosheath near the magnetopause in the energy range from approximately 10 to approximately 80 keV/e and substantially less than this limit for the energetic He(2+) population. The rest of the energetic H(+) population and nearly all of the energetic He(2+) population are accelerated out of the thermal solar wind population through shock acceleration processes. By comparing the energetic and thermal He(2+) and H(+) populations in the quasi-parallel magnetosheath, it is found that the quasi-parallel bow shock is 2 to 3 times more efficient at accelerating He(2+) than H(+). This result is consistent with previous estimates from shock acceleration theory and simulations.

  7. Comparison of three sequential extraction procedures to describe metal fractionation in anaerobic granular sludges

    NARCIS (Netherlands)

    Hullebusch, van E.D.; Sudarno, S.; Zandvoort, M.H.; Lens, P.N.L.

    2005-01-01

    In the last few decades, several sequential extraction procedures have been developed to quantify the chemical status of metals in the solid phase. In this study, three extraction techniques (modified [A. Tessier, P.G.C. Campbell, M. Bisson, Anal. Chem. 51 (1979) 844]; [R.C. Stover, L.E. Sommers,

  8. Comparison of simultaneous and sequential SPECT imaging for discrimination tasks in assessment of cardiac defects.

    Science.gov (United States)

    Trott, C M; Ouyang, J; El Fakhri, G

    2010-11-21

    Simultaneous rest perfusion/fatty-acid metabolism studies have the potential to replace sequential rest/stress perfusion studies for the assessment of cardiac function. Simultaneous acquisition has the benefits of increased signal and lack of need for patient stress, but is complicated by cross-talk between the two radionuclide signals. We consider a simultaneous rest (99m)Tc-sestamibi/(123)I-BMIPP imaging protocol in place of the commonly used sequential rest/stress (99m)Tc-sestamibi protocol. The theoretical precision with which the severity of a cardiac defect and the transmural extent of infarct can be measured is computed for simultaneous and sequential SPECT imaging, and their performance is compared for discriminating (1) degrees of defect severity and (2) sub-endocardial from transmural defects. We consider cardiac infarcts for which reduced perfusion and metabolism are observed. From an information perspective, simultaneous imaging is found to yield comparable or improved performance compared with sequential imaging for discriminating both severity of defect and transmural extent of infarct, for three defects of differing location and size.

  9. Comparison of simultaneous and sequential SPECT imaging for discrimination tasks in assessment of cardiac defects

    International Nuclear Information System (INIS)

    Trott, C M; Ouyang, J; El Fakhri, G

    2010-01-01

    Simultaneous rest perfusion/fatty-acid metabolism studies have the potential to replace sequential rest/stress perfusion studies for the assessment of cardiac function. Simultaneous acquisition has the benefits of increased signal and lack of need for patient stress, but is complicated by cross-talk between the two radionuclide signals. We consider a simultaneous rest 99mTc-sestamibi/123I-BMIPP imaging protocol in place of the commonly used sequential rest/stress 99mTc-sestamibi protocol. The theoretical precision with which the severity of a cardiac defect and the transmural extent of infarct can be measured is computed for simultaneous and sequential SPECT imaging, and their performance is compared for discriminating (1) degrees of defect severity and (2) sub-endocardial from transmural defects. We consider cardiac infarcts for which reduced perfusion and metabolism are observed. From an information perspective, simultaneous imaging is found to yield comparable or improved performance compared with sequential imaging for discriminating both severity of defect and transmural extent of infarct, for three defects of differing location and size.

  10. A Comparison of Ultimate Loads from Fully and Sequentially Coupled Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wendt, Fabian F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Damiani, Rick R [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-11-14

    This poster summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between two modeling approaches (fully coupled and sequentially coupled) through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.

  11. A multithreaded parallel implementation of a dynamic programming algorithm for sequence comparison.

    Science.gov (United States)

    Martins, W S; Del Cuvillo, J B; Useche, F J; Theobald, K B; Gao, G R

    2001-01-01

    This paper discusses the issues involved in implementing a dynamic programming algorithm for biological sequence comparison on a general-purpose parallel computing platform based on a fine-grain event-driven multithreaded program execution model. Fine-grain multithreading permits efficient parallelism exploitation in this application both by taking advantage of asynchronous point-to-point synchronizations and communication with low overheads and by effectively tolerating latency through the overlapping of computation and communication. We have implemented our scheme on EARTH, a fine-grain event-driven multithreaded execution and architecture model which has been ported to a number of parallel machines with off-the-shelf processors. Our experimental results show that the dynamic programming algorithm can be efficiently implemented on EARTH systems with high performance (e.g., speedup of 90 on 120 nodes), good programmability and reasonable cost.
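
    The parallelism exploited here comes from the anti-diagonal structure of the dynamic-programming recurrence. Below is a minimal sketch (a plain Smith-Waterman scoring pass, not the EARTH multithreaded code): every cell on an anti-diagonal depends only on the two preceding diagonals, so all cells of one diagonal can be computed concurrently. numpy vectorization stands in for the fine-grain threads, and the scoring parameters are arbitrary.

      import numpy as np

      def smith_waterman_scores(a, b, match=2, mismatch=-1, gap=-1):
          n, m = len(a), len(b)
          av = np.frombuffer(a.encode(), np.uint8)
          bv = np.frombuffer(b.encode(), np.uint8)
          H = np.zeros((n + 1, m + 1), dtype=int)
          for d in range(2, n + m + 1):             # anti-diagonal: i + j == d
              i = np.arange(max(1, d - m), min(n, d - 1) + 1)
              j = d - i
              sub = np.where(av[i - 1] == bv[j - 1], match, mismatch)
              H[i, j] = np.maximum.reduce([         # whole diagonal d at once
                  np.zeros_like(i),                 # local-alignment floor
                  H[i - 1, j - 1] + sub,            # match/mismatch (diagonal d-2)
                  H[i - 1, j] + gap,                # deletion       (diagonal d-1)
                  H[i, j - 1] + gap,                # insertion      (diagonal d-1)
              ])
          return H

      print(smith_waterman_scores("GATTACA", "GCATGCU").max())  # best local score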

  12. Sequential antimicrobial therapy: comparison of the views of microbiologists and pharmacists.

    Science.gov (United States)

    Smyth, E T; Tillotson, G S

    1998-07-01

    Sequential antimicrobial therapy (SAT) is arousing keen interest among microbiologists and pharmacists. In an attempt to obtain information from these groups regarding the use of SAT in hospitals, an anonymized postal survey was carried out. A SAT questionnaire was circulated to consultant medical microbiologists, clinical microbiologists, and heads of pharmacy departments within the British Isles. Four hundred and forty-seven microbiologists and pharmacists returned completed questionnaires, giving a response rate of 29%. Just over half of medical microbiologists (MM) and pharmacists (PH) indicated that SAT was used in their institution in respiratory medicine, geriatrics, surgery and, significantly, to a lesser degree in paediatrics. The most common infections treated were pneumonia, bronchitis and wound infection. However, there were significant differences between MM and PH, with MM favouring greater use of SAT in peritonitis (P=0.03), septicaemia and urinary tract infection (UTI) (P<0.01), and PH favouring use in bronchitis (P<0.01). The ability to take oral fluids or a recognition of no potential absorption problems were key criteria in the decision process leading to the institution of SAT by MM and PH. Significantly more MM favoured employing criteria such as temperature <38 degrees C (P<0.01), no requirement for high tissue concentrations (P=0.02) and evidence of response to i.v. antimicrobial therapy (P<0.01) than PH. The most frequently "switched" antimicrobials were metronidazole, ciprofloxacin and co-amoxiclav. More than five times as many MM as PH reported the use of clindamycin (P<0.01), whereas nearly twice as many PH cited use of cefuroxime (P<0.01). Of those hospitals not employing SAT, most MM and PH concurred that the commonest reason to institute SAT was financial, followed by convenience to patients and staff. However, more PH than MM indicated that protocols (P<0.01) and a reduction in i.v. complications (P<0.01) were important to them. In promoting SAT, MM

  13. Distortion product otoacoustic emissions: comparison of sequential vs. simultaneous presentation of primary tones.

    Science.gov (United States)

    Kumar, U Ajith; Maruthy, Sandeep; Chandrakant, Vishwakarma

    2009-03-01

    Distortion product otoacoustic emissions (DPOAEs) are one form of evoked otoacoustic emissions. DPOAEs provide frequency-specific information about hearing status in the mid- and high-frequency regions. But in most screening protocols TEOAEs are preferred, as they require less time compared with DPOAEs. This is because, in DPOAE testing, each stimulus is presented one after the other and the responses are analyzed. The Grason-Stadler Incorporated 60 (GSI-60) system offers simultaneous presentation of four sets of primary tones at a time and checks for the DPOAE. In this mode of presentation, all the pairs are presented at once and the responses are then extracted separately, whereas in sequential mode the primaries are presented in orderly fashion, one pair after the other. In this article, simultaneous and sequential protocols were used to compare distortion product otoacoustic emission amplitude, noise floor, and administration time in individuals with normal hearing and mild sensorineural (SN) hearing loss. In the simultaneous protocol four sets of primary tones (i.e., 8 tones) were presented together, whereas in the sequential presentation mode one set of primary tones was presented each time. The simultaneous protocol was completed in less than half the time required for the sequential protocol. The two techniques yielded similar results at frequencies above 1000 Hz only in the normal hearing group. In the SN hearing loss group, simultaneous presentation yielded significantly higher noise floors and distortion product amplitudes. This result challenges the use of the simultaneous presentation technique in neonatal hearing screening programmes and in other pathologies. The discrepancy between the two protocols may be due to changes in biomechanical processes in the cochlea and/or higher distortion/noise produced by the system during simultaneous presentation.

  14. Eyewitness accuracy rates in sequential and simultaneous lineup presentations: a meta-analytic comparison.

    Science.gov (United States)

    Steblay, N; Dysart, J; Fulero, S; Lindsay, R C

    2001-10-01

    Most police lineups use a simultaneous presentation technique in which eyewitnesses view all lineup members at the same time. Lindsay and Wells (R. C. L. Lindsay & G. L. Wells, 1985) devised an alternative procedure, the sequential lineup, in which witnesses view one lineup member at a time and decide whether or not that person is the perpetrator prior to viewing the next lineup member. The present work uses the technique of meta-analysis to compare the accuracy rates of these presentation styles. Twenty-three papers were located (9 published and 14 unpublished), providing 30 tests of the hypothesis and including 4,145 participants. Results showed that identification of perpetrators from target-present lineups occurs at a higher rate from simultaneous than from sequential lineups. However, this difference largely disappears when moderator variables approximating real world conditions are considered. Also, correct rejection rates were significantly higher for sequential than simultaneous lineups and this difference is maintained or increased by greater approximation to real world conditions. Implications of these findings are discussed.

  15. Comparison of human embryo morphokinetic parameters in sequential or global culture media.

    Science.gov (United States)

    Kazdar, Nadia; Brugnon, Florence; Bouche, Cyril; Jouve, Guilhem; Veau, Ségolène; Drapier, Hortense; Rousseau, Chloé; Pimentel, Céline; Viard, Patricia; Belaud-Rotureau, Marc-Antoine; Ravel, Célia

    2017-08-01

    A prospective study on randomized patients was conducted to determine how morphokinetic parameters differ between embryos grown in sequential versus global culture media. Eleven morphokinetic parameters of 160 single transferred embryos were analyzed by time-lapse imaging at two university-affiliated in vitro fertilization (IVF) centers. We found that the fading of the two pronuclei occurred earlier in global (22.56±2.15 hpi) versus sequential media (23.63±2.71 hpi; p=0.0297). Likewise, the first cleavage started earlier, at 24.52±2.33 hpi vs 25.76±2.95 hpi (p=0.0158). Also, the first cytokinesis was shorter in global medium, lasting 18±10.2 minutes in global versus 36±37.8 minutes in sequential culture medium. Our study highlights the need to adapt morphokinetic analysis to the type of media used to best support human early embryo development.

  16. Using Hadoop MapReduce for Parallel Genetic Algorithms: A Comparison of the Global, Grid and Island Models.

    Science.gov (United States)

    Ferrucci, Filomena; Salza, Pasquale; Sarro, Federica

    2017-06-29

    The need to improve the scalability of Genetic Algorithms (GAs) has motivated the research on Parallel Genetic Algorithms (PGAs), and different technologies and approaches have been used. Hadoop MapReduce represents one of the most mature technologies to develop parallel algorithms. Based on the fact that parallel algorithms introduce communication overhead, the aim of the present work is to understand if, and possibly when, the parallel GAs solutions using Hadoop MapReduce show better performance than sequential versions in terms of execution time. Moreover, we are interested in understanding which PGA model can be most effective among the global, grid, and island models. We empirically assessed the performance of these three parallel models with respect to a sequential GA on a software engineering problem, evaluating the execution time and the achieved speedup. We also analysed the behaviour of the parallel models in relation to the overhead produced by the use of Hadoop MapReduce and the GAs' computational effort, which gives a more machine-independent measure of these algorithms. We exploited three problem instances to differentiate the computation load and three cluster configurations based on 2, 4, and 8 parallel nodes. Moreover, we estimated the costs of the execution of the experimentation on a potential cloud infrastructure, based on the pricing of the major commercial cloud providers. The empirical study revealed that the use of PGA based on the island model outperforms the other parallel models and the sequential GA for all the considered instances and clusters. Using 2, 4, and 8 nodes, the island model achieves an average speedup over the three datasets of 1.8, 3.4, and 7.0 times, respectively. Hadoop MapReduce has a set of different constraints that need to be considered during the design and the implementation of parallel algorithms. The overhead of data store (i.e., HDFS) accesses, communication, and latency requires solutions that reduce data store
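
    The island model that wins in this study is straightforward to sketch outside Hadoop. The toy below evolves independent sub-populations in parallel processes and migrates the best individuals around a ring; the OneMax fitness, population sizes, and rates are made-up illustrations, not the paper's GA or its MapReduce implementation.

      import random
      from multiprocessing import Pool

      GENES, POP, GENS, MIGRANTS = 64, 40, 30, 4

      def fitness(ind):                   # OneMax: count the 1-bits
          return sum(ind)

      def evolve_island(args):            # one island evolves independently
          seed, pop = args
          rnd = random.Random(seed)
          for _ in range(GENS):
              pop.sort(key=fitness, reverse=True)
              elite, children = pop[:POP // 2], []
              while len(elite) + len(children) < POP:
                  a, b = rnd.sample(elite, 2)
                  cut = rnd.randrange(GENES)
                  child = a[:cut] + b[cut:]           # one-point crossover
                  if rnd.random() < 0.1:              # bit-flip mutation
                      k = rnd.randrange(GENES)
                      child[k] ^= 1
                  children.append(child)
              pop = elite + children
          return pop

      def ring_migration(islands):
          # Each island replaces its worst individuals with copies of the
          # previous island's best ones.
          bests = [[list(ind) for ind in sorted(p, key=fitness)[-MIGRANTS:]]
                   for p in islands]
          for k, pop in enumerate(islands):
              pop.sort(key=fitness)
              pop[:MIGRANTS] = bests[(k - 1) % len(islands)]
          return islands

      if __name__ == "__main__":
          rnd = random.Random(0)
          islands = [[[rnd.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
                     for _ in range(4)]
          with Pool(4) as pool:                       # 4 processes ~ 4 "nodes"
              for epoch in range(3):                  # evolve, then migrate
                  islands = pool.map(evolve_island,
                                     [(epoch * 4 + k, p) for k, p in enumerate(islands)])
                  islands = ring_migration(islands)
          print(max(fitness(ind) for p in islands for ind in p))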

  17. Evaluation of degree of readsorption of radionuclides during sequential extraction in soil: comparison between batch and dynamic extraction systems

    DEFF Research Database (Denmark)

    Petersen, Roongrat; Hansen, Elo Harald; Hou, Xiaolin

    Sequential extraction techniques have been widely used to fractionate metals in solid samples (soils, sediments, solid wastes, etc.) according to their leachability. The results are useful for obtaining information about bioavailability, potential mobility and transport of elements in natural environments. However, the techniques have an important problem with redistribution as a result of readsorption of dissolved analytes onto the remaining solid phases during extraction. Many authors have demonstrated the readsorption problem and the inaccuracy resulting from it. In our previous work, a dynamic extraction system developed in our laboratory for heavy metal fractionation has been shown to reduce the readsorption problem in comparison with the batch techniques. Moreover, the system shows many advantages over the batch system, such as speed of extraction, a simple procedure, full automation, and less risk of contamination…

  18. Comparison between state graphs and fault trees for sequential and repairable systems

    International Nuclear Information System (INIS)

    Soussan, D.; Saignes, P.

    1996-01-01

    In the French PSA 1300 (the Probabilistic Safety Assessment for the 1300 MWe PWR plants) carried out by EDF, sequential and repairable systems are modelled with state graphs. This method is particularly convenient for modelling dynamic systems with long-term missions but results in poor traceability and understandability of the models. With the objective of providing elements for rewriting PSA 1300 with only Boolean models, EDF asked CEA to participate in a methodological study. The aim is to carry out a feasibility study of transposing state-graph models into fault trees for the Component Cooling System and Essential Service Water System (CCS/ESWS) and to draw up a methodological guide for the transposition. The study on CCS/ESWS involves two main axes: quantification of cold source loss (as an accident sequence initiating event, called H1), and quantification of the CCS/ESWS missions in accident sequences. The purpose of this article is to show that this transformation is applicable with minimal distortion of the results and to determine the hypotheses, conditions and limits of application of this conversion.

  19. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    Science.gov (United States)

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.
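
    Linda's programming model is a shared tuple space from which workers withdraw task tuples and into which they deposit results. Below is a minimal sketch of that master/worker pattern, with multiprocessing queues standing in for the tuple space's out()/in() operations and a trivial pairwise score replacing the real sequence-comparison kernel:

      from multiprocessing import Process, Queue

      def worker(tasks: Queue, results: Queue):
          while True:
              pair = tasks.get()           # ~ in("task", ?pair): blocks for a tuple
              if pair is None:             # poison pill ends the worker
                  break
              a, b = pair
              score = sum(x == y for x, y in zip(a, b))   # placeholder comparison
              results.put((a, b, score))   # ~ out("result", a, b, score)

      if __name__ == "__main__":
          tasks, results = Queue(), Queue()
          workers = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
          for w in workers:
              w.start()
          pairs = [("GATTACA", "GCATGCU"), ("ACGT", "ACGA")]
          for p in pairs:
              tasks.put(p)                 # master drops task tuples into the space
          for _ in workers:
              tasks.put(None)
          for _ in pairs:
              print(results.get())         # master withdraws result tuples
          for w in workers:
              w.join()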

  20. Comparison of capacitive and radio frequency resonator sensors for monitoring parallelized droplet microfluidic production

    KAUST Repository

    Conchouso Gonzalez, David

    2016-06-28

    Scaled-up production of microfluidic droplets, through the parallelization of hundreds of droplet generators, has received a lot of attention to bring novel multiphase microfluidics research to industrial applications. However, apart from droplet generation, other significant challenges relevant to this goal have never been discussed. Examples include monitoring systems, high-throughput processing of droplets and quality control procedures among others. In this paper, we present and compare capacitive and radio frequency (RF) resonator sensors as two candidates that can measure the dielectric properties of emulsions in microfluidic channels. By placing several of these sensors in a parallelization device, the stability of the droplet generation at different locations can be compared, and potential malfunctions can be detected. This strategy enables for the first time the monitoring of scaled-up microfluidic droplet production. Both sensors were prototyped and characterized using emulsions with droplets of 100-150 μm in diameter, which were generated in parallelization devices at water-in-oil volume fractions (φ) between 11.1% and 33.3%. Using these sensors, we were able to measure accurately increments as small as 2.4% in the water volume fraction of the emulsions. Although both methods rely on the dielectric properties of the emulsions, the main advantage of the RF resonator sensors is the fact that they can be designed to resonate at multiple frequencies of the broadband transmission line. Consequently with careful design, two or more sensors can be parallelized and read out by a single signal. Finally, a comparison between these sensors based on their sensitivity, readout cost and simplicity, and design flexibility is also discussed. © 2016 The Royal Society of Chemistry.

  1. Analysis of Parallel Burn Without Crossfeed TSTO RLV Architectures and Comparison to Parallel Burn With Crossfeed and Series Burn Architectures

    Science.gov (United States)

    Smith, Garrett; Phillips, Alan

    2002-01-01

    There are currently three dominant TSTO class architectures. These are Series Burn (SB), Parallel Burn with crossfeed (PBw/cf), and Parallel Burn without crossfeed (PBncf). The goal of this study was to determine what factors uniquely affect PBncf architectures, how each of these factors interact, and to determine from a performance perspective whether a PBncf vehicle could be competitive with a PBw/cf or SB vehicle using equivalent technology and assumptions. In all cases, performance was evaluated on a relative basis for a fixed payload and mission by comparing gross and dry vehicle masses of a closed vehicle. Propellant combinations studied were LOX:LH2 propelled orbiter and booster (HH) and LOX:kerosene booster with LOX:LH2 orbiter (KH). The study conclusions were: 1) a PBncf orbiter should be throttled as deeply as possible after launch until the staging point; 2) a detailed structural model is essential to accurate architecture analysis and evaluation; 3) a PBncf TSTO architecture is feasible for systems that stage at Mach 7; 3a) HH architectures can achieve a mass growth relative to PBw/cf of … and to the position of the orbiter required to align the nozzle heights at liftoff; 5) thrust-to-weight ratios of 1.3 at liftoff and between 1.0 and 0.9 when staging at Mach 7 appear to be close to ideal for PBncf vehicles; 6) performance for all vehicles studied is better when staged at Mach 7 instead of Mach 5. The study showed that a Series Burn architecture has the lowest gross mass for HH cases, and has the lowest dry mass for KH cases. The potential disadvantages of SB are the required use of an air-start for the orbiter engines and potential CG control issues. A Parallel Burn with crossfeed architecture solves both these problems, but the mechanics of a large bipropellant crossfeed system pose significant technical difficulties. Parallel Burn without crossfeed vehicles start both booster and orbiter engines on the ground and thus avoid both the risk of

  2. Speeding Up the String Comparison of the IDS Snort using Parallel Programming: A Systematic Literature Review on the Parallelized Aho-Corasick Algorithm

    Directory of Open Access Journals (Sweden)

    SILVA JUNIOR, J. B.

    2016-12-01

    The Intrusion Detection System (IDS) needs to compare the contents of all packets arriving at the network interface with a set of signatures indicating possible attacks, a task that consumes much CPU processing time. In order to alleviate this problem, some researchers have tried to parallelize the IDS's comparison engine, transferring execution from the CPU to the GPU. This paper identifies and maps the parallelization features of the Aho-Corasick algorithm, which is used in Snort to compare patterns, in order to show this algorithm's implementation and execution issues, as well as optimization techniques for the Aho-Corasick machine. We found 147 papers in important computer science publication databases and mapped them; we selected 22 and analyzed them in order to obtain our results. Our analysis of the papers showed, among other results, that parallelization of the AC algorithm is a new task and that authors have focused on the State Transition Table as the most common way to implement the algorithm on the GPU. Furthermore, we found that techniques which speed up the algorithm and reduce the required storage space are heavily used, such as running the automaton in the fastest memories and mechanisms for reducing the number of nodes and bit mapping.
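
    For orientation, a minimal sequential Aho-Corasick automaton is sketched below (a generic reference implementation, not Snort's engine). The GPU ports surveyed typically flatten the goto and failure functions into a State Transition Table and let each thread scan one chunk of the input against it.

      from collections import deque

      def build_automaton(patterns):
          goto, out = [{}], [set()]             # goto[s][ch] -> next state
          for pat in patterns:                  # phase 1: trie of the patterns
              s = 0
              for ch in pat:
                  if ch not in goto[s]:
                      goto.append({})
                      out.append(set())
                      goto[s][ch] = len(goto) - 1
                  s = goto[s][ch]
              out[s].add(pat)
          fail = [0] * len(goto)                # phase 2: failure links via BFS
          queue = deque(goto[0].values())
          while queue:
              s = queue.popleft()
              for ch, t in goto[s].items():
                  queue.append(t)
                  f = fail[s]
                  while f and ch not in goto[f]:
                      f = fail[f]
                  fail[t] = goto[f].get(ch, 0)
                  out[t] |= out[fail[t]]        # inherit matches ending here
          return goto, fail, out

      def search(text, goto, fail, out):
          s, hits = 0, []
          for i, ch in enumerate(text):         # single pass over the input
              while s and ch not in goto[s]:
                  s = fail[s]
              s = goto[s].get(ch, 0)
              hits += [(i - len(p) + 1, p) for p in out[s]]
          return hits

      automaton = build_automaton(["he", "she", "his", "hers"])
      print(sorted(search("ushers", *automaton)))  # [(1, 'she'), (2, 'he'), (2, 'hers')]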

  3. Numerical investigation of two interacting parallel thruster-plumes and comparison to experiment

    Science.gov (United States)

    Grabe, Martin; Holz, André; Ziegenhagen, Stefan; Hannemann, Klaus

    2014-12-01

    Clusters of orbital thrusters are an attractive option to achieve graduated thrust levels and increased redundancy with available hardware, but the heavily under-expanded plumes of chemical attitude control thrusters placed in close proximity will interact, leading to a local amplification of downstream fluxes and of back-flow onto the spacecraft. The interaction of two similar, parallel, axi-symmetric cold-gas model thrusters has recently been studied in the DLR High-Vacuum Plume Test Facility STG under space-like vacuum conditions, employing a Patterson-type impact pressure probe with slot orifice. We reproduce a selection of these experiments numerically, and emphasise that a comparison of numerical results to the measured data is not straight-forward. The signal of the probe used in the experiments must be interpreted according to the degree of rarefaction and local flow Mach number, and both vary dramatically throughout the flow-field. We present a procedure to reconstruct the probe signal by post-processing the numerically obtained flow-field data and show that agreement to the experimental results is then improved. Features of the investigated cold-gas thruster plume interaction are discussed on the basis of the numerical results.

  4. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both achieved coding and decoding time and the effectiveness of parallelization.

  5. Costs of achieving live birth from assisted reproductive technology: a comparison of sequential single and double embryo transfer approaches.

    Science.gov (United States)

    Crawford, Sara; Boulet, Sheree L; Mneimneh, Allison S; Perkins, Kiran M; Jamieson, Denise J; Zhang, Yujia; Kissin, Dmitry M

    2016-02-01

    To assess treatment and pregnancy/infant-associated medical costs and birth outcomes for assisted reproductive technology (ART) cycles in a subset of patients using elective double embryo transfer (ET) and to project the difference in costs and outcomes had the cycles instead been sequential single ETs (fresh followed by frozen if the fresh ET did not result in live birth). Retrospective cohort study using 2012 and 2013 data from the National ART Surveillance System. Infertility treatment centers. Fresh, autologous double ETs performed in 2012 among ART patients younger than 35 years of age with no prior ART use who cryopreserved at least one embryo. Sequential single and double ETs. Actual live birth rates and estimated ART treatment and pregnancy/infant-associated medical costs for double ET cycles started in 2012, and projected ART treatment and pregnancy/infant-associated medical costs if the double ET cycles had been performed as sequential single ETs. The estimated total ART treatment and pregnancy/infant-associated medical costs were $580.9 million for 10,001 double ETs started in 2012. If performed as sequential single ETs, estimated costs would have decreased by $195.0 million to $386.0 million, and live birth rates would have increased from 57.7% to 68.0%. Sequential single ETs, when clinically appropriate, can reduce total ART treatment and pregnancy/infant-associated medical costs by reducing multiple births without lowering live birth rates. Published by Elsevier Inc.

  6. Comparison of PET/CT with Sequential PET/MRI Using an MR-Compatible Mobile PET System.

    Science.gov (United States)

    Nakamoto, Ryusuke; Nakamoto, Yuji; Ishimori, Takayoshi; Fushimi, Yasutaka; Kido, Aki; Togashi, Kaori

    2018-05-01

    The current study tested a newly developed flexible PET (fxPET) scanner prototype. This fxPET system involves dual arc-shaped detectors based on silicon photomultipliers that are designed to fit existing MRI devices, allowing us to obtain fused PET and MR images by sequential PET and MR scanning. This prospective study sought to evaluate the image quality, lesion detection rate, and quantitative values of fxPET in comparison with conventional whole-body (WB) PET and to assess the accuracy of registration. Methods: Seventeen patients with suspected or known malignant tumors were analyzed. Approximately 1 h after intravenous injection of 18F-FDG, WB PET/CT was performed, followed by fxPET and MRI. For reconstruction of fxPET images, MRI-based attenuation correction was applied. The quality of fxPET images was visually assessed, and the number of detected lesions was compared between the 2 imaging methods. SUVmax and the maximum average SUV within a 1 cm3 spheric volume (SUVpeak) of lesions were also compared. In addition, the magnitude of misregistration between fxPET and MR images was evaluated. Results: The image quality of fxPET was acceptable for diagnosis of malignant tumors. There was no significant difference in detectability of malignant lesions between fxPET and WB PET (P > 0.05). However, the fxPET system did not exhibit superior performance to the WB PET system. There were strong positive correlations between the 2 imaging modalities in SUVmax (ρ = 0.88) and SUVpeak (ρ = 0.81). SUVmax and SUVpeak measured with fxPET were approximately 1.1-fold greater than measured with WB PET. The average misregistration between fxPET and MR images was 5.5 ± 3.4 mm. Conclusion: Our preliminary data indicate that running an fxPET scanner near an existing MRI system provides visually and quantitatively acceptable fused PET/MR images for diagnosis of malignant lesions. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.

  7. In vivo comparison of simultaneous versus sequential injection technique for thermochemical ablation in a porcine model.

    Science.gov (United States)

    Cressman, Erik N K; Shenoi, Mithun M; Edelman, Theresa L; Geeslin, Matthew G; Hennings, Leah J; Zhang, Yan; Iaizzo, Paul A; Bischof, John C

    2012-01-01

    To investigate simultaneous and sequential injection thermochemical ablation in a porcine model, and compare them to sham and acid-only ablation. This IACUC-approved study involved 11 pigs in an acute setting. Ultrasound was used to guide placement of a thermocouple probe and coaxial device designed for thermochemical ablation. Solutions of 10 M acetic acid and NaOH were used in the study. Four injections per pig were performed in identical order at a total rate of 4 mL/min: saline sham, simultaneous, sequential, and acid only. Volume and sphericity of zones of coagulation were measured. Fixed specimens were examined by H&E stain. Average coagulation volumes were 11.2 mL (simultaneous), 19.0 mL (sequential) and 4.4 mL (acid). The highest temperature, 81.3°C, was obtained with simultaneous injection. Average temperatures were 61.1°C (simultaneous), 47.7°C (sequential) and 39.5°C (acid only). Sphericity coefficients (0.83-0.89) had no statistically significant difference among conditions. Thermochemical ablation produced substantial volumes of coagulated tissues relative to the amounts of reagents injected, considerably greater than acid alone in either technique employed. The largest volumes were obtained with sequential injection, yet this came at a price in one case of cardiac arrest. Simultaneous injection yielded the highest recorded temperatures and may be tolerated as well as or better than acid injection alone. Although this pilot study did not show a clear advantage for either sequential or simultaneous methods, the results indicate that thermochemical ablation is attractive for further investigation with regard to both safety and efficacy.

  8. Data-parallel tomographic reconstruction : A comparison of filtered backprojection and direct Fourier reconstruction

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Westenberg, M.A

    1998-01-01

    We consider the parallelization of two standard 2D reconstruction algorithms, filtered backprojection and direct Fourier reconstruction, using the data-parallel programming style. The algorithms are implemented on a Connection Machine CM-5 with 16 processors and a peak performance of 2 Gflop/s.
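
    The natural data-parallel axis in both algorithms is the projection angle: each angle's contribution to the reconstruction is independent of the others. Below is a toy numpy sketch of that structure (unfiltered backprojection of a synthetic disc with a nearest-bin projector); the sizes and phantom are made up, and the CM-5 specifics are not modelled.

      import numpy as np

      N, n_angles = 64, 90
      angles = np.deg2rad(np.arange(n_angles) * 2.0)
      xs = np.arange(N) - N / 2
      X, Y = np.meshgrid(xs, xs)
      phantom = (X**2 + Y**2 < (N / 4) ** 2).astype(float)   # disc phantom

      sino = np.zeros((n_angles, N))          # forward projection (sinogram)
      for a, th in enumerate(angles):         # each angle is independent work
          t = np.rint(X * np.cos(th) + Y * np.sin(th) + N / 2).astype(int)
          np.add.at(sino[a], t.clip(0, N - 1).ravel(), phantom.ravel())

      recon = np.zeros((N, N))                # unfiltered backprojection
      for a, th in enumerate(angles):         # the data-parallel loop again
          t = np.rint(X * np.cos(th) + Y * np.sin(th) + N / 2).astype(int)
          recon += sino[a][t.clip(0, N - 1)]
      print(recon.shape, round(recon[N // 2, N // 2] / recon.max(), 2))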

  9. Comparison of sequential and single extraction in order to estimate environmental impact of metals from fly ash

    Directory of Open Access Journals (Sweden)

    Tasić Aleksandra M.

    2016-01-01

    The aim of this paper was to simulate the leaching of metals from fly ash under different environmental conditions using ultrasound- and microwave-assisted extraction techniques. Single-agent extraction and sequential extraction procedures were used to determine the levels of leaching for different metals. The concentrations of metals (Al, Fe, Mn, Cd, Co, Cr, Ni, Pb, Cu, As, Be) in fly ash extracts were measured by Inductively Coupled Plasma-Atomic Emission Spectrometry. Single-agent extractions of metals were conducted with sonication times of 10, 20, 30, 40 and 50 min. Single-agent extraction with deionized water was also undertaken by exposing samples to microwave radiation at a temperature of 50°C. The sequential extraction was undertaken according to the BCR procedure, which was modified and applied to study the partitioning of metals in coal fly ash. The microwave-assisted sequential extraction was performed at different extraction temperatures: 50, 100 and 150°C. The partitioning of metals between the individual fractions was investigated and discussed, and the efficiency of the extraction process for each step was examined. In addition, the results of the microwave-assisted sequential extraction are compared to the results obtained by the standard ASTM method. The mobility of most elements contained in fly ash is markedly pH sensitive. [Project of the Ministry of Science of the Republic of Serbia, nos. 172030, 176006 and III43009]

  10. Aging in Movement Representations for Sequential Finger Movements: A Comparison between Young-, Middle-Aged, and Older Adults

    Science.gov (United States)

    Cacola, Priscila; Roberson, Jerroed; Gabbard, Carl

    2013-01-01

    Studies show that as we enter older adulthood (greater than 64 years), our ability to mentally represent action in the form of using motor imagery declines. Using a chronometry paradigm to compare the movement duration of imagined and executed movements, we tested young-, middle-aged, and older adults on their ability to perform sequential finger…

  11. Stiffness Analysis and Comparison of 3-PPR Planar Parallel Manipulators with Actuation Compliance

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2012-01-01

    In this paper, the stiffness of the 3-PPR planar parallel manipulator (PPM) is analyzed with consideration of nonlinear actuation compliance. The characteristics of the stiffness matrix pertaining to planar parallel manipulators are analyzed and discussed. A graphic representation of the stiffness characteristics by means of translational and rotational stiffness mapping is developed. The developed method is illustrated with an unsymmetrical 3-PPR PPM, being compared with its structure-symmetrical counterpart.

  12. Comparison of Pre-Analytical FFPE Sample Preparation Methods and Their Impact on Massively Parallel Sequencing in Routine Diagnostics

    Science.gov (United States)

    Heydt, Carina; Fassunke, Jana; Künstlinger, Helen; Ihle, Michaela Angelika; König, Katharina; Heukamp, Lukas Carl; Schildhaus, Hans-Ulrich; Odenthal, Margarete; Büttner, Reinhard; Merkelbach-Bruse, Sabine

    2014-01-01

    Over the last years, massively parallel sequencing has rapidly evolved and has now transitioned into molecular pathology routine laboratories. It is an attractive platform for analysing multiple genes at the same time with very little input material. Therefore, the need for high quality DNA obtained from automated DNA extraction systems has increased, especially for laboratories dealing with formalin-fixed paraffin-embedded (FFPE) material and high sample throughput. This study evaluated five automated FFPE DNA extraction systems as well as five DNA quantification systems using the three most common techniques, UV spectrophotometry, fluorescent dye-based quantification and quantitative PCR, on 26 FFPE tissue samples. Additionally, the effects on downstream applications were analysed to find the most suitable pre-analytical methods for massively parallel sequencing in routine diagnostics. The results revealed that the Maxwell 16 from Promega (Mannheim, Germany) seems to be the superior system for DNA extraction from FFPE material. The extracts had a 1.3–24.6-fold higher DNA concentration in comparison to the other extraction systems, a higher quality and were most suitable for downstream applications. The comparison of the five quantification methods showed intermethod variations but all methods could be used to estimate the right amount for PCR amplification and for massively parallel sequencing. Interestingly, the best results in massively parallel sequencing were obtained with a DNA input of 15 ng determined by the NanoDrop 2000c spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). No difference could be detected in mutation analysis based on the results of the quantification methods. These findings emphasise that it is particularly important to choose the most reliable and constant DNA extraction system, especially when using small biopsies and low elution volumes, and that all common DNA quantification techniques can be used for

  13. Comparison of pre-analytical FFPE sample preparation methods and their impact on massively parallel sequencing in routine diagnostics.

    Directory of Open Access Journals (Sweden)

    Carina Heydt

    Over the last years, massively parallel sequencing has rapidly evolved and has now transitioned into molecular pathology routine laboratories. It is an attractive platform for analysing multiple genes at the same time with very little input material. Therefore, the need for high quality DNA obtained from automated DNA extraction systems has increased, especially for laboratories dealing with formalin-fixed paraffin-embedded (FFPE) material and high sample throughput. This study evaluated five automated FFPE DNA extraction systems as well as five DNA quantification systems using the three most common techniques, UV spectrophotometry, fluorescent dye-based quantification and quantitative PCR, on 26 FFPE tissue samples. Additionally, the effects on downstream applications were analysed to find the most suitable pre-analytical methods for massively parallel sequencing in routine diagnostics. The results revealed that the Maxwell 16 from Promega (Mannheim, Germany) seems to be the superior system for DNA extraction from FFPE material. The extracts had a 1.3-24.6-fold higher DNA concentration in comparison to the other extraction systems, a higher quality and were most suitable for downstream applications. The comparison of the five quantification methods showed intermethod variations but all methods could be used to estimate the right amount for PCR amplification and for massively parallel sequencing. Interestingly, the best results in massively parallel sequencing were obtained with a DNA input of 15 ng determined by the NanoDrop 2000c spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). No difference could be detected in mutation analysis based on the results of the quantification methods. These findings emphasise that it is particularly important to choose the most reliable and constant DNA extraction system, especially when using small biopsies and low elution volumes, and that all common DNA quantification techniques can

  14. Accuracy of respiratory motion measurement of 4D-MRI: A comparison between cine and sequential acquisition.

    Science.gov (United States)

    Liu, Yilin; Yin, Fang-Fang; Rhee, DongJoo; Cai, Jing

    2016-01-01

    The authors have recently developed a cine-mode T2*/T1-weighted 4D-MRI technique and a sequential-mode T2-weighted 4D-MRI technique for imaging respiratory motion. This study aims at investigating which 4D-MRI image acquisition mode, cine or sequential, provides more accurate measurement of organ motion during respiration. A 4D digital extended cardiac-torso (XCAT) human phantom with a hypothesized tumor was used to simulate the image acquisition and the 4D-MRI reconstruction. The respiratory motion was controlled by the given breathing signal profiles. The tumor was manipulated to move continuously with the surrounding tissue. The motion trajectories were measured from both sequential- and cine-mode 4D-MRI images. The measured trajectories were compared with the average trajectory calculated from the input profiles, which was used as the reference. The error in the 4D-MRI tumor motion trajectory (E) was determined. In addition, the corresponding respiratory motion amplitudes of all the selected 2D images for 4D reconstruction were recorded. Each amplitude was compared with the amplitude of its associated bin on the average breathing curve. The mean differences from the average breathing curve across all slice positions (D) were calculated. A total of 500 simulated respiratory profiles with a wide range of irregularity (Ir) were used to investigate the relationship between D and Ir. Furthermore, statistical analysis of E and D using XCAT controlled by 20 cancer patients' breathing profiles was conducted. A Wilcoxon signed-rank test was conducted to compare the two modes. D increased faster for cine-mode (D = 1.17 × Ir + 0.23) than sequential-mode (D = 0.47 × Ir + 0.23) as irregularity increased. For the XCAT study using 20 cancer patients' breathing profiles, the median E values were significantly different: 0.12 and 0.10 cm for cine- and sequential-modes, respectively, with a p-value of 0.02. The median D values were significantly different: 0.47 and 0.24 cm for cine- and sequential-modes, respectively.

  15. Comparison of three-stage sequential extraction and toxicity characteristic leaching tests to evaluate metal mobility in mining wastes

    International Nuclear Information System (INIS)

    Margui, E.; Salvado, V.; Queralt, I.; Hidalgo, M.

    2004-01-01

    Abandoned mining sites contain residues from ore processing operations that are characterised by high concentrations of heavy metals. The form in which a metal exists strongly influences its mobility and, thus, its effects on the environment. Operational methods of speciation analysis, such as the use of sequential extraction procedures, are commonly applied. In this work, the modified three-stage sequential extraction procedure proposed by the BCR (now the Standards, Measurements and Testing Programme) was applied for the fractionation of Ni, Zn, Pb and Cd in mining wastes from old Pb-Zn mining areas located in the Val d'Aran (NE Spain) and Cartagena (SE Spain). Analyses of the extracts were performed by inductively coupled plasma atomic emission spectrometry and electrothermal atomic absorption spectrometry. The procedure was evaluated by using a certified reference material, BCR-701. The results of the partitioning study indicate that more easily mobilised forms (acid exchangeable) were predominant for Cd and Zn, particularly in the sample from Cartagena. In contrast, the largest amount of lead was associated with the iron and manganese oxide fractions. On the other hand, the applicability of lixiviation tests commonly used to evaluate the leaching of toxic species from landfill disposal (US-EPA Toxicity Characteristic Leaching Procedure and DIN 38414-S4) to mining wastes was also investigated, and the obtained results were compared with the information on metal mobility derivable from the application of the three-stage sequential extraction procedure.

  16. Robustness of the Sequential Lineup Advantage

    Science.gov (United States)

    Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.

    2009-01-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…

  17. Comparison of electrorheological characteristics obtained for two geometries: parallel plates and concentric cylinders

    Czech Academy of Sciences Publication Activity Database

    Peer, Petra; Filip, Petr; Stěnička, M.; Pavlínek, V.

    2014-01-01

    Vol. 59, No. 3 (2014), pp. 221-235. ISSN 0001-7043. R&D Projects: GA ČR (CZ) GAP105/11/2342. Institutional support: RVO:67985874. Keywords: electrorheology; parallel plates; concentric cylinders; silicone oil; PANI powders. Subject RIV: BK - Fluid Dynamics

  18. Functional efficiency comparison between split- and parallel-hybrid using advanced energy flow analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Guttenberg, Philipp; Lin, Mengyan [Romax Technology, Nottingham (United Kingdom)

    2009-07-01

    The following paper presents a comparative efficiency analysis of the Toyota Prius versus the Honda Insight using advanced Energy Flow Analysis methods. The sample study shows that even very different hybrid concepts, such as a split-hybrid and a parallel-hybrid, can be compared at a high level of detail, and demonstrates the benefit with exemplary results.

  19. Comparison of capacitive and radio frequency resonator sensors for monitoring parallelized droplet microfluidic production

    KAUST Repository

    Conchouso Gonzalez, David; McKerricher, Garret; Carreno, Armando Arpys Arevalo; Castro, David; Shamim, Atif; Foulds, Ian G.

    2016-01-01

    Both sensors were prototyped and characterized using emulsions with droplets of 100-150 μm in diameter, which were generated in parallelization devices at water-in-oil volume fractions (φ) between 11.1% and 33.3%. Using these sensors, we were able to measure accurately increments as small as 2.4% in the water volume fraction of the emulsions.

  20. Limited angle tomographic breast imaging: A comparison of parallel beam and pinhole collimation

    International Nuclear Information System (INIS)

    Wessell, D.E.; Kadrmas, D.J.; Frey, E.C.

    1996-01-01

    Results from clinical trials have suggested no improvement in lesion detection with parallel hole SPECT scintimammography (SM) with Tc-99m over parallel hole planar SM. In this initial investigation, we have elucidated some of the unique requirements of SPECT SM. With these requirements in mind, we have begun to develop practical data acquisition and reconstruction strategies that can reduce image artifacts and improve image quality. In this paper we investigate limited angle orbits for both parallel hole and pinhole SPECT SM. Singular Value Decomposition (SVD) is used to analyze the artifacts associated with the limited angle orbits. Maximum likelihood expectation maximization (MLEM) reconstructions are then used to examine the effects of attenuation compensation on the quality of the reconstructed image. All simulations are performed using the 3D-MCAT breast phantom. The results of these simulation studies demonstrate that limited angle SPECT SM is feasible, that attenuation correction is needed for accurate reconstructions, and that pinhole SPECT SM may have an advantage over parallel hole SPECT SM in terms of improved image quality and reduced image artifacts.
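
    For reference, the MLEM reconstruction used in these simulations iterates the multiplicative update x <- x * (A^T (y / (A x))) / (A^T 1). The sketch below applies it to a small random system matrix standing in for the SPECT projector; the dimensions, counts, and noise level are purely illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.random((40, 25))                 # system matrix: pixels -> detector bins
      x_true = 50.0 * rng.random(25)           # activity in arbitrary count units
      y = rng.poisson(A @ x_true)              # noisy measured projections

      x = np.ones(25)                          # uniform initial estimate
      sens = A.T @ np.ones(40)                 # sensitivity image, A^T 1
      for _ in range(100):
          ratio = y / np.maximum(A @ x, 1e-12)     # data / forward projection
          x *= (A.T @ ratio) / sens                # multiplicative MLEM update
      print(round(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)), 3))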

  1. Comparison of the deflated preconditioned conjugate gradient method and parallel direct solver for composite materials

    NARCIS (Netherlands)

    Jönsthövel, T.B.; Van Gijzen, M.B.; MacLachlan, S.; Vuik, C.; Scarpas, A.

    2011-01-01

    The demand for large FE meshes increases as parallel computing becomes the standard in FE simulations. Direct and iterative solution methods are used to solve the resulting linear systems. Many applications concern composite materials, which are characterized by large discontinuities in the material

  2. Sequential Banking.

    OpenAIRE

    Bizer, David S; DeMarzo, Peter M

    1992-01-01

    The authors study environments in which agents may borrow sequentially from more than one lender. Although debt is prioritized, additional lending imposes an externality on prior debt because, with moral hazard, the probability of repayment of prior loans decreases. Equilibrium interest rates are higher than they would be if borrowers could commit to borrow from at most one bank. Even though the loan terms are less favorable than they would be under commitment, the indebtedness of borrowers i...

  3. Refinement of Parallel and Reactive Programs

    OpenAIRE

    Back, R. J. R.

    1992-01-01

    We show how to apply the refinement calculus to stepwise refinement of parallel and reactive programs. We use action systems as our basic program model. Action systems are sequential programs which can be implemented in a parallel fashion. Hence refinement calculus methods, originally developed for sequential programs, carry over to the derivation of parallel programs. Refinement of reactive programs is handled by data refinement techniques originally developed for the sequential refinement c...

  4. Comparison of concurrent chemoradiotherapy versus sequential radiochemotherapy in patients with completely resected non-small cell lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hwan Ik; Noh, O Kyu; Oh, Young Taek; Chun, Mi Son; Kim, Sang Won; Cho, O Yeon; Heo, Jae Sung [Ajou University School of Medicine, Suwon (Korea, Republic of)

    2016-09-15

    Our institution has implemented two different adjuvant protocols in treating patients with non-small cell lung cancer (NSCLC): chemotherapy followed by concurrent chemoradiotherapy (CT-CCRT) and sequential postoperative radiotherapy (PORT) followed by postoperative chemotherapy (POCT). We aimed to compare the clinical outcomes between the two adjuvant protocols. From March 1997 to October 2012, 68 patients were treated with CT-CCRT (n = 25) or sequential PORT followed by POCT (RT-CT; n = 43). The CT-CCRT protocol consisted of 2 cycles of cisplatin-based POCT followed by PORT given concurrently with 2 cycles of POCT. The RT-CT protocol consisted of PORT followed by 4 cycles of cisplatin-based POCT. PORT was administered using conventional fractionation with a dose of 50.4–60 Gy. We compared the outcomes between the two adjuvant protocols and analyzed the clinical factors affecting survival. Median follow-up time was 43.9 months (range, 3.2 to 74.0 months), and the 5-year overall survival (OS), locoregional recurrence-free survival (LRFS), and distant metastasis-free survival (DMFS) were 53.9%, 68.2%, and 51.0%, respectively. There were no significant differences in OS (p = 0.074), LRFS (p = 0.094), and DMFS (p = 0.490) between the two protocols. In multivariable analyses, adjuvant protocol remained a significant prognostic factor for LRFS, favouring CT-CCRT (hazard ratio [HR] = 3.506, p = 0.046) over RT-CT, but not for OS (HR = 0.647, p = 0.229). The CT-CCRT protocol improved LRFS more than the RT-CT protocol in patients with completely resected NSCLC, but did not improve OS. Further studies are warranted to evaluate the benefit of the CCRT strategy compared with the sequential strategy.

  5. A comparison of sequential and information-based methods for determining the co-integration rank in heteroskedastic VAR MODELS

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Angelis, Luca De; Rahbek, Anders

    2015-01-01

    In this article, we investigate the behaviour of a number of methods for estimating the co-integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as the Akaike Information Criterion... The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms... -based method to over-estimate the co-integration rank in relatively small sample sizes.
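
    As a rough sketch of the sequential testing route discussed above (an illustration only, assuming the statsmodels package; the deterministic-term and lag settings are placeholder choices, not those of the article, and the bootstrap variant studied there resamples the test statistic instead of using the asymptotic critical values below), the trace test is applied for r = 0, 1, ... and the selected rank is the first r at which the test fails to reject:

      import numpy as np
      from statsmodels.tsa.vector_ar.vecm import coint_johansen

      def sequential_rank(y, k_ar_diff=1, col=1):
          """Sequential trace-test selection of the co-integration rank.

          y: (T x n) array of levels; col=1 selects the 95% critical
          values (the columns of cvt are the 90/95/99% quantiles).
          """
          res = coint_johansen(y, det_order=0, k_ar_diff=k_ar_diff)
          for r in range(y.shape[1]):
              if res.lr1[r] < res.cvt[r, col]:  # fail to reject rank <= r
                  return r
          return y.shape[1]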

  6. Sequential and Parallel Attack Tree Modelling

    NARCIS (Netherlands)

    Arnold, Florian; Guck, Dennis; Kumar, Rajesh; Stoelinga, Mariëlle Ida Antoinette; Koornneef, Floor; van Gulijk, Coen

    The intricacy of socio-technical systems requires a careful planning and utilisation of security resources to ensure uninterrupted, secure and reliable services. Even though many studies have been conducted to understand and model the behaviour of a potential attacker, the detection of crucial

  7. A comparison of the interactions between sequential Ga-P and Ga-As diffusions in silicon

    International Nuclear Information System (INIS)

    Jones, C.L.; Willoughby, A.F.W.

    1976-01-01

    Investigations of the interactions between sequential gallium-phosphorus and gallium-arsenic diffusions have been made using radiotracer profiling techniques. Gallium diffusions were first carried out using the isotope ⁶⁷Ga diffused from a solid gallium oxide source, and subsequently phosphorus or arsenic was diffused into the same surface. Phosphorus diffusion of high surface concentration was found to produce a large enhancement (up to a factor of 100) in the diffusion coefficient of the tail of the gallium profile, while a similar arsenic diffusion produced either a small enhancement or a retardation, depending on the conditions used. In addition, the diffusion of both phosphorus and arsenic produced a pronounced dip in the gallium profiles, which is discussed in terms of the built-in electric field produced during the emitter diffusions. The differences between the positions of the dips produced by phosphorus and arsenic are explained by the differences in their profile shape and hence in the electric field distribution. In the case of arsenic, the dip is located at the steeply falling front of the arsenic profile, which resolves discrepancies in previous studies of boron-arsenic sequential diffusions. (author)

  8. Comparison of ³²P therapy and sequential hemibody irradiation (HBI) for bony metastases as methods of whole body irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, H.; Choi, K.; Sohn, C.; Yaes, R.; Rotman, M.

    1986-06-01

    We report a retrospective study of 15 patients with prostate carcinoma and diffuse bone metastases treated with sodium ³²P for palliation of pain at Downstate Medical Center and Kings County Hospital from 1973 to 1978. The response rates, duration of response, and toxicities are compared with those of other series of patients treated with ³²P and with sequential hemibody irradiation. The response rates and duration of response are similar with both modalities, ranging from 58 to 95% with a duration of 3.3 to 6 months with ³²P and from 75 to 86% with a median duration of 5.5 months with hemibody irradiation. There are significant differences in the patterns of response and in the toxicities of the two treatment methods. Both methods cause significant bone marrow depression. Acute radiation syndrome, radiation pneumonitis, and alopecia are seen with sequential hemibody irradiation and not with ³²P, but their incidence can be reduced by careful treatment planning. Hemibody irradiation can provide pain relief within 24 to 48 h, while ³²P may produce an initial exacerbation of pain. Lower hemibody irradiation alone is less toxic than either upper hemibody irradiation or ³²P treatment.

  9. Compiling Scientific Programs for Scalable Parallel Systems

    National Research Council Canada - National Science Library

    Kennedy, Ken

    2001-01-01

    ...). The research performed in this project included new techniques for recognizing implicit parallelism in sequential programs, a powerful and precise set-based framework for analysis and transformation...

  10. Algorithm comparison and benchmarking using a parallel spectral transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  11. COMPARISON BETWEEN TEST METHODS TO DETERMINE WOOD EMBEDMENT STRENGTH PARALLEL TO THE GRAIN

    Directory of Open Access Journals (Sweden)

    Diego Henrique de Almeida

    This study compares the test methods of the ABNT NBR 7190:1997, EN 383:2007, ASTM D5764:2007, EUROCODE 5:2004 and NDS:2001 standards in order to provide support for establishing a new test method for determining the embedment strength of wood parallel to the grain. Parallel-to-grain tests were carried out for six wood species (Schizolobium amazonicum; Pinus elliottii; Pinus oocarpa; Hymenaea spp.; Lyptus®, a hybrid of Eucalyptus grandis and Eucalyptus urophylla; and Goupia glabra) using four diameters (8 mm, 10 mm, 12 mm and 16 mm) for the metal pin fasteners (bolts). The experimental results obtained according to the EN 383:2007 standard were closer to the specific values for the metal-dowel connection design used by ABNT NBR 7190:1997, which considers the embedment strength equal to the compression strength parallel to the grain. The use of the maximum embedment force, or the force causing a displacement of 5 mm between the bolt and the test piece, as the criterion for determining embedment strength in EN 383:2007 appears to be more appropriate than the criteria used by the Brazilian and American standards.

  12. Comparison of phase-constrained parallel MRI approaches: Analogies and differences.

    Science.gov (United States)

    Blaimer, Martin; Heim, Marius; Neumann, Daniel; Jakob, Peter M; Kannengiesser, Stephan; Breuer, Felix A

    2016-03-01

    Phase-constrained parallel MRI approaches have the potential for significantly improving the image quality of accelerated MRI scans. The purpose of this study was to investigate the properties of two different phase-constrained parallel MRI formulations, namely the standard phase-constrained approach and the virtual conjugate coil (VCC) concept utilizing conjugate k-space symmetry. Both formulations were combined with image-domain algorithms (SENSE) and a mathematical analysis was performed. Furthermore, the VCC concept was combined with k-space algorithms (GRAPPA and ESPIRiT) for image reconstruction. In vivo experiments were conducted to illustrate analogies and differences between the individual methods. Furthermore, a simple method of improving the signal-to-noise ratio by modifying the sampling scheme was implemented. For SENSE, the VCC concept was mathematically equivalent to the standard phase-constrained formulation and therefore yielded identical results. In conjunction with k-space algorithms, the VCC concept provided more robust results when only a limited amount of calibration data were available. Additionally, VCC-GRAPPA reconstructed images provided spatial phase information with full resolution. Although both phase-constrained parallel MRI formulations are very similar conceptually, there exist important differences between image-domain and k-space domain reconstructions regarding the calibration robustness and the availability of high-resolution phase information. © 2015 Wiley Periodicals, Inc.
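
    The VCC construction itself is compact: each virtual coil is the complex conjugate of the measured k-space reflected through the origin. A minimal NumPy sketch (my illustration, not the authors' code; it assumes k-space is centred on the array and ignores the one-sample shift that even-sized grids introduce):

      import numpy as np

      def add_virtual_conjugate_coils(kspace):
          """Append virtual conjugate coils: s_virt(k) = conj(s(-k)).

          kspace: complex array (ncoils, ky, kx) with k = 0 at the centre.
          The virtual coils encode the object phase as extra effective
          coil sensitivities for the phase-constrained reconstruction.
          """
          flipped = kspace[:, ::-1, ::-1]  # map k to -k
          return np.concatenate([kspace, np.conj(flipped)], axis=0)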

  13. COMPARISON OF PARALLEL AND SERIES HYBRID POWERTRAINS FOR TRANSIT BUS APPLICATION

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Zhiming [ORNL; Daw, C Stuart [ORNL; Smith, David E [ORNL; Jones, Perry T [ORNL; LaClair, Tim J [ORNL; Parks, II, James E [ORNL

    2016-01-01

    The fuel economy and emissions of both conventional and hybrid buses equipped with emissions aftertreatment were evaluated via computational simulation for six representative city bus drive cycles. Both series and parallel configurations for the hybrid case were studied. The simulation results indicate that series hybrid buses have the greatest overall advantage in fuel economy. The series and parallel hybrid buses were predicted to produce similar CO and HC tailpipe emissions but were also predicted to have reduced NOx tailpipe emissions compared to the conventional bus in higher speed cycles. For the New York bus cycle (NYBC), which has the lowest average speed among the cycles evaluated, the series bus tailpipe emissions were somewhat higher than they were for the conventional bus, while the parallel hybrid bus had significantly lower tailpipe emissions. All three bus powertrains were found to require periodic active DPF regeneration to maintain PM control. Plug-in operation of series hybrid buses appears to offer significant fuel economy benefits and is easily employed due to the relatively large battery capacity that is typical of the series hybrid configuration.

  14. A comparison of high-order explicit Runge–Kutta, extrapolation, and deferred correction methods in serial and parallel

    KAUST Repository

    Ketcheson, David I.

    2014-06-13

    We compare the three main types of high-order one-step initial value solvers: extrapolation, spectral deferred correction, and embedded Runge–Kutta pairs. We consider orders four through twelve, including both serial and parallel implementations. We cast extrapolation and deferred correction methods as fixed-order Runge–Kutta methods, providing a natural framework for the comparison. The stability and accuracy properties of the methods are analyzed by theoretical measures, and these are compared with the results of numerical tests. In serial, the eighth-order pair of Prince and Dormand (DOP8) is most efficient. But other high-order methods can be more efficient than DOP8 when implemented in parallel. This is demonstrated by comparing a parallelized version of the well-known ODEX code with the (serial) DOP853 code. For an N-body problem with N = 400, the experimental extrapolation code is as fast as the tuned Runge–Kutta pair at loose tolerances, and is up to two times as fast at tight tolerances.
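
    SciPy's DOP853 integrator implements the same eighth-order Dormand-Prince pair used as the serial baseline above, so the serial reference is easy to reproduce in outline (the toy problem below is my own stand-in, not the paper's N-body test):

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y):  # planar two-body (Kepler) toy problem
          pos, vel = y[:2], y[2:]
          return np.concatenate([vel, -pos / np.linalg.norm(pos) ** 3])

      y0 = np.array([1.0, 0.0, 0.0, 1.0])
      sol = solve_ivp(rhs, (0.0, 20.0), y0, method="DOP853",
                      rtol=1e-10, atol=1e-10)  # tight-tolerance run
      print(sol.t.size, "accepted steps")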

  15. Development of a flow method for the determination of phosphate in estuarine and freshwaters-Comparison of flow cells in spectrophotometric sequential injection analysis

    International Nuclear Information System (INIS)

    Mesquita, Raquel B.R.; Ferreira, M. Teresa S.O.B.; Toth, Ildiko V.; Bordalo, Adriano A.; McKelvie, Ian D.; Rangel, Antonio O.S.S.

    2011-01-01

    Highlights: → Sequential injection determination of phosphate in estuarine and freshwaters. → Alternative spectrophotometric flow cells are compared. → Minimization of the schlieren effect was assessed. → Proposed method can cope with wide salinity ranges. → Multi-reflective cell shows clear advantages. - Abstract: A sequential injection system with a dual analytical line was developed and applied in the comparison of two different detection systems, viz. a conventional spectrophotometer with a commercial flow cell, and a multi-reflective flow cell coupled with a photometric detector, under the same experimental conditions. The study was based on the spectrophotometric determination of phosphate using the molybdenum-blue chemistry. The two alternative flow cells were compared in terms of their response to variation of sample salinity, susceptibility to interferences and to refractive index changes. The developed method was applied to the determination of phosphate in natural waters (estuarine, river, well and ground waters). The achieved detection limit (0.007 μM PO₄³⁻) is consistent with the requirements of the target water samples, and a wide quantification range (0.024–9.5 μM) was achieved using both detection systems.

  16. Development of a flow method for the determination of phosphate in estuarine and freshwaters-Comparison of flow cells in spectrophotometric sequential injection analysis

    Energy Technology Data Exchange (ETDEWEB)

    Mesquita, Raquel B.R. [CBQF/Escola Superior de Biotecnologia, Universidade Catolica Portuguesa, R. Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal); Laboratory of Hydrobiology, Institute of Biomedical Sciences Abel Salazar (ICBAS) and Institute of Marine Research (CIIMAR), Universidade do Porto, Lg. Abel Salazar 2, 4099-003 Porto (Portugal); Ferreira, M. Teresa S.O.B. [CBQF/Escola Superior de Biotecnologia, Universidade Catolica Portuguesa, R. Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal); Toth, Ildiko V. [REQUIMTE, Departamento de Quimica, Faculdade de Farmacia, Universidade de Porto, Rua Anibal Cunha, 164, 4050-047 Porto (Portugal); Bordalo, Adriano A. [Laboratory of Hydrobiology, Institute of Biomedical Sciences Abel Salazar (ICBAS) and Institute of Marine Research (CIIMAR), Universidade do Porto, Lg. Abel Salazar 2, 4099-003 Porto (Portugal); McKelvie, Ian D. [School of Chemistry, University of Melbourne, Victoria 3010 (Australia); Rangel, Antonio O.S.S., E-mail: aorangel@esb.ucp.pt [CBQF/Escola Superior de Biotecnologia, Universidade Catolica Portuguesa, R. Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal)

    2011-09-02

    Highlights: → Sequential injection determination of phosphate in estuarine and freshwaters. → Alternative spectrophotometric flow cells are compared. → Minimization of the schlieren effect was assessed. → Proposed method can cope with wide salinity ranges. → Multi-reflective cell shows clear advantages. - Abstract: A sequential injection system with a dual analytical line was developed and applied in the comparison of two different detection systems, viz. a conventional spectrophotometer with a commercial flow cell, and a multi-reflective flow cell coupled with a photometric detector, under the same experimental conditions. The study was based on the spectrophotometric determination of phosphate using the molybdenum-blue chemistry. The two alternative flow cells were compared in terms of their response to variation of sample salinity, susceptibility to interferences and to refractive index changes. The developed method was applied to the determination of phosphate in natural waters (estuarine, river, well and ground waters). The achieved detection limit (0.007 μM PO₄³⁻) is consistent with the requirements of the target water samples, and a wide quantification range (0.024–9.5 μM) was achieved using both detection systems.

  17. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of... in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.

  18. Comparison of simultaneous and sequential administration of fentanyl-propofol for surgical abortion: a randomized single-blinded controlled trial.

    Science.gov (United States)

    Gao, Wei; Sha, Baoyong; Zhao, Yuan; Fan, Zhe; Liu, Lin; Shen, Xin

    2017-08-01

    Propofol lipid emulsion (PLE) is a nanosized sedative, and it is used in combination with the salted analgesic prodrug fentanyl citrate (FC). To illustrate the synergistic effect of mixing, we compared the sedation/analgesia resulting from simultaneous and sequential administration in surgically induced abortion (No. ChiCTR-IPC-15006153). The simultaneous group showed a lower bispectral index, blood pressure, and heart rate when the cannula was inserted into the uterus. It also showed a lower frequency of hypertension, sinus tachycardia, movement, pain at the injection site, and additional FC. Therefore, premixing of PLE and FC enhanced the sedation and analgesia; stabilized the hemodynamics; lessened the incidence of movement and injection pain; and reduced the requirement for additional drugs.

  19. Enhancing Application Performance Using Mini-Apps: Comparison of Hybrid Parallel Programming Paradigms

    Science.gov (United States)

    Lawson, Gary; Sosonkina, Masha; Baurle, Robert; Hammond, Dana

    2017-01-01

    In many fields, real-world applications for High Performance Computing have already been developed. For these applications to stay up-to-date, new parallel strategies must be explored to yield the best performance; however, restructuring or modifying a real-world application may be daunting depending on the size of the code. In this case, a mini-app may be employed to quickly explore such options without modifying the entire code. In this work, several mini-apps have been created to enhance a real-world application performance, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23 was measured for MPI+SMPI, but only 11 was measured for MPI+OpenMP.
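
    The hybrid pattern under comparison is easy to sketch in miniature. The toy below (an illustration only; VULCAN itself is not Python, and the SMPI variant is not shown) uses mpi4py for the distributed-memory level and a thread pool for the shared-memory level, standing in for MPI+OpenMP; it would be launched with something like "mpiexec -n 4 python hybrid.py":

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Distributed level: each rank owns one slab of the domain.
      cells = np.random.rand(1_000_000)

      def kernel(chunk):
          # Stand-in for a per-cell flow computation; NumPy releases
          # the GIL inside such kernels, so threads can overlap.
          return float(np.sum(np.sqrt(chunk)))

      # Shared-memory level: threads split the rank-local slab.
      with ThreadPoolExecutor(max_workers=4) as pool:
          local = sum(pool.map(kernel, np.array_split(cells, 4)))

      # Combine per-rank partial results across the machine.
      total = comm.allreduce(local, op=MPI.SUM)
      if rank == 0:
          print("global sum:", total)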

  20. MRI of degenerative lumbar spine disease: comparison of non-accelerated and parallel imaging

    International Nuclear Information System (INIS)

    Noelte, Ingo; Gerigk, Lars; Brockmann, Marc A.; Kemmling, Andre; Groden, Christoph

    2008-01-01

    Parallel imaging techniques such as GRAPPA have been introduced to optimize image quality and acquisition time. For spinal imaging in a clinical setting no data exist on the equivalency of conventional and parallel imaging techniques. The purpose of this study was to determine whether T1- and T2-weighted GRAPPA sequences are equivalent to conventional sequences for the evaluation of degenerative lumbar spine disease in terms of image quality and artefacts. In patients with clinically suspected degenerative lumbar spine disease two neuroradiologists independently compared sagittal GRAPPA (acceleration factor 2, time reduction approximately 50%) and non-GRAPPA images (25 patients) and transverse GRAPPA (acceleration factor 2, time reduction approximately 50%) and non-GRAPPA images (23 lumbar segments in six patients). Comparative analyses included the minimal diameter of the spinal canal, disc abnormalities, foraminal stenosis, facet joint degeneration, lateral recess, nerve root compression and osteochondrotic vertebral and endplate changes. Image inhomogeneity was evaluated by comparing the nonuniformity in the two techniques. Image quality was assessed by grading the delineation of pathoanatomical structures. Motion and aliasing artefacts were classified from grade 1 (severe) to grade 5 (absent). There was no significant difference between GRAPPA and non-accelerated MRI in the evaluation of degenerative lumbar spine disease (P > 0.05), and there was no difference in the delineation of pathoanatomical structures. For inhomogeneity there was a trend in favour of the conventional sequences. No significant artefacts were observed with either technique. The GRAPPA technique can be used effectively to reduce scanning time in patients with degenerative lumbar spine disease while preserving image quality. (orig.)

  1. Comparison of first pass bolus AIFs extracted from sequential 18F-FDG PET and DSC-MRI of mice

    International Nuclear Information System (INIS)

    Evans, Eleanor; Sawiak, Stephen J.; Ward, Alexander O.; Buonincontri, Guido; Hawkes, Robert C.; Adrian Carpenter, T.

    2014-01-01

    Accurate kinetic modelling of in vivo physiological function using positron emission tomography (PET) requires determination of the tracer time–activity curve in plasma, known as the arterial input function (AIF). The AIF is usually determined by invasive blood sampling methods, which are prohibitive in murine studies due to low total blood volumes. Extracting AIFs from PET images is also challenging due to large partial volume effects (PVE). We hypothesise that in combined PET with magnetic resonance imaging (PET/MR), a co-injected bolus of MR contrast agent and PET ligand can be tracked using fast MR acquisitions. This protocol would allow extraction of an MR AIF from MR contrast agent concentration–time curves, at higher spatial and temporal resolution than an image-derived PET AIF. A conversion factor could then be applied to the MR AIF for use in PET kinetic analysis. This work has compared AIFs obtained from sequential DSC-MRI and PET with separate injections of gadolinium contrast agent and ¹⁸F-FDG respectively to ascertain the technique's validity. An automated voxel selection algorithm was employed to improve MR AIF reproducibility. We found that MR and PET AIFs displayed similar character in the first pass, confirmed by gamma variate fits (p<0.02). MR AIFs displayed reduced PVE compared to PET AIFs, indicating their potential use in PET/MR studies
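
    The gamma variate fits used to compare the first-pass curves are a small nonlinear regression; a sketch with SciPy (the functional form is the standard first-pass bolus model, but the starting values are generic guesses of mine, not the authors' settings):

      import numpy as np
      from scipy.optimize import curve_fit

      def gamma_variate(t, A, t0, alpha, beta):
          """First-pass bolus model; zero before the arrival time t0."""
          dt = np.clip(t - t0, 0.0, None)
          return A * dt ** alpha * np.exp(-dt / beta)

      def fit_aif(t, c):
          """Fit a measured AIF curve c(t); returns (A, t0, alpha, beta)."""
          p0 = (c.max(), 0.5 * t[np.argmax(c)], 2.0, 5.0)  # rough guesses
          popt, _ = curve_fit(gamma_variate, t, c, p0=p0, maxfev=10000)
          return popt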

  2. Quantum dots as chemiluminescence enhancers tested by sequential injection technique: Comparison of flow and flow-batch conditions

    Energy Technology Data Exchange (ETDEWEB)

    Sklenářová, Hana, E-mail: sklenarova@faf.cuni.cz [Charles University in Prague, Faculty of Pharmacy in Hradec Králové, Department of Analytical Chemistry, Hradec Králové (Czech Republic); Voráčová, Ivona [Institute of Analytical Chemistry of the CAS, v. v. i., Brno (Czech Republic); Chocholouš, Petr; Polášek, Miroslav [Charles University in Prague, Faculty of Pharmacy in Hradec Králové, Department of Analytical Chemistry, Hradec Králové (Czech Republic)

    2017-04-15

    The effect of 0.01–100 µmol L⁻¹ Quantum Dots (QDs) with different emission wavelengths (520–640 nm) and different surface modifications (mercaptopropionic, mercaptoundecanoic and thioglycolic acids, and mercaptoethylamine) on permanganate-induced and luminol–hydrogen peroxide chemiluminescence (CL) was studied in detail by a sequential injection technique using a spiral detection flow cell and a flow-batch detection cell operated in flow and stop-flow modes. In the permanganate CL system no significant enhancement of the CL signal was observed, while for the luminol–hydrogen peroxide CL a substantial increase (>100% and >90% with the spiral detection cell in flow and stop-flow modes, respectively) was attained for CdTe QDs. Enhancement exceeding 120% was observed for QDs with emissions at 520, 575 and 603 nm (sizes of 2.8 nm, 3.3 nm and 3.6 nm) using the flow-batch detection cell in the stop-flow mode. A pronounced effect of surface modification was noted, with mercaptoethylamine being the most efficient in CL enhancement compared to mercaptopropionic acid, the most commonly applied coating. The significant difference between results obtained under flow and flow-batch conditions, which reflects the entire kinetics of the extremely fast CL reaction, is discussed. The increase of the CL signal was always accompanied by a reduced lifetime of the CL emission; thus application of QDs in flow techniques should always be coupled with a study of the CL lifetime.

  3. Comparison of 250 MHz R10K Origin 2000 and 400 MHz Origin 2000 Using NAS Parallel Benchmarks

    Science.gov (United States)

    Turney, Raymond D.; Thigpen, William W. (Technical Monitor)

    2001-01-01

    This report describes results of benchmark tests on Steger, a 250 MHz Origin 2000 system with R10K processors, currently installed at the NASA Ames National Advanced Supercomputing (NAS) facility. For comparison purposes, the tests were also run on Lomax, a 400 MHz Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to measure system performance. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since NAS runs both message-passing (MPI) codes and shared-memory, compiler-directive codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN3b2, a beta version that is in the process of being released. NPB 2.3 and PBN3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  4. A comparison of temporal, spatial and parallel phase shifting algorithms for digital image plane holography

    International Nuclear Information System (INIS)

    Arroyo, M P; Lobera, J

    2008-01-01

    This paper investigates the performance of several phase shifting (PS) techniques when using digital image plane holography (DIPH) as a fluid velocimetry technique. The main focus is on increasing the recording system aperture in order to overcome the limitation imposed by the little light available in fluid applications. Some experiments with small rotations of a fluid-like solid object have been used to test the ability of PS-DIPH to faithfully reconstruct the object complex amplitude. Holograms for several apertures and for different defocusing distances have been recorded using spatial phase shifting (SPS) or temporal phase shifting (TPS) techniques. The parallel phase shifted holograms (H_PPS) have been generated from the TPS holograms (H_TPS). The data obtained from TPS-DIPH have been taken as the true object complex amplitude, which is used to benchmark that recovered using the other techniques. The findings of this work show that SPS and PPS are very similar indeed, and suggest that both can work for bigger apertures yet retain phase information.

  5. Paralleled comparison of vectors for the generation of CAR-T cells.

    Science.gov (United States)

    Qin, Di-Yuan; Huang, Yong; Li, Dan; Wang, Yong-Sheng; Wang, Wei; Wei, Yu-Quan

    2016-09-01

    T-lymphocytes genetically engineered with the chimeric antigen receptor (CAR-T) have shown great therapeutic potential in cancer treatment. A variety of preclinical studies and clinical trials of CAR-T therapy have been carried out to lay the foundation for future clinical application. In these studies, several gene-transfer methods were used to deliver CARs or other genes into T-lymphocytes, equipping CAR-modified T cells with the ability to recognize and attack antigen-expressing tumor cells in a major histocompatibility complex-independent manner. Here, we summarize the gene-transfer vectors commonly used in the generation of CAR-T cells, including retrovirus vectors, lentivirus vectors, the transposon/transposase system, the plasmid-based system, and the messenger RNA electroporation system. The following aspects were compared in parallel: efficiency of gene transfer, the integration methods in the modified T cells, prospects for scale-up production, and application and development in clinical trials. These aspects should be taken into account to generate the optimal CAR-gene vector that may be suitable for future clinical application.

  6. Comparison of microbial community shifts in two parallel multi-step drinking water treatment processes.

    Science.gov (United States)

    Xu, Jiajiong; Tang, Wei; Ma, Jun; Wang, Hong

    2017-07-01

    Drinking water treatment processes remove undesirable chemicals and microorganisms from source water, which is vital to public health protection. The purpose of this study was to investigate the effects of treatment processes and configuration on the microbiome by comparing microbial community shifts in two series of different treatment processes operated in parallel within a full-scale drinking water treatment plant (DWTP) in Southeast China. Illumina sequencing of 16S rRNA genes of water samples demonstrated little effect of the coagulation/sedimentation and pre-oxidation steps on bacterial communities, in contrast to dramatic and concurrent microbial community shifts during ozonation, granular activated carbon treatment, sand filtration, and disinfection for both series. A large number of unique operational taxonomic units (OTUs) at these four treatment steps further illustrated their strong shaping power over the drinking water microbial communities. Interestingly, multidimensional scaling analysis revealed tight clustering of biofilm samples collected from different treatment steps, with Nitrospira, the nitrite-oxidizing bacteria, noted at higher relative abundances in biofilm compared to water samples. Overall, this study provides a snapshot of step-to-step microbial evolution in multi-step drinking water treatment systems, and the results provide insight into the control and manipulation of the drinking water microbiome via optimization of DWTP design and operation.

  7. Comparison of parallel temperature measurements from conventional and automatic weather stations at Fabra Observatory (Barcelona).

    Science.gov (United States)

    Aguilar, Enric; Gilabert, Alba; Prohom, Marc

    2013-04-01

    Fabra Observatory, located on a promontory at 411 meters above sea level on the outskirts of Barcelona, has hosted a continuous climate record since 1913. Additionally, since 1996 it has been recording simultaneous temperature and precipitation data with conventional instruments and automated systems. The automatization of recording sites employed for climatological purposes is happening elsewhere in the country and across the globe. Unfortunately, in most cases long-lasting parallel measurements are not kept. Therefore, this site offers an excellent opportunity to study the impact of the introduction of Automatic Weather Stations (AWS). The conventional station (CON) is equipped with a liquid-in-glass thermometer located inside a standard Stevenson screen. The automatic measurements (AWS) were taken using MCV-STA sensors sheltered in a small plate-like ventilated MCV screen between 1996 and the end of July 2007. For our analysis, this MCV period is split in two (T1, T2) due to an obvious jump in the AWS-CON differences in October 2002, produced by unknown reasons. From August 2007 to the present (T3), a Vaisala HMP45AL sensor was placed inside a Stevenson screen and used for automatic measurements. For daily maximum temperatures, the median differences reach 3.2°C in T1, 1.1°C in T2 and merely -0.1°C in T3. In this latter period, 94% of the differences fall within a ±0.5°C range, compared to 23% in T2 and only 6% in T1. It is interesting to note how the overheating of the MCV screen dominates the difference series, as 85% of the AWS values taken in T1 and T2 are warmer than the conventional measurements, contrasting with only 27% of cases during T3, when the automated measurements were taken inside a Stevenson screen. These differences are highly temperature dependent: low (high) AWS temperatures are associated with small (large) differences with the CON series. This effect is also evident if temperatures are analyzed by seasons: summer differences are much

  8. Social complexity parallels vocal complexity: a comparison of three non-human primate species.

    Science.gov (United States)

    Bouchet, Hélène; Blois-Heulin, Catherine; Lemasson, Alban

    2013-01-01

    Social factors play a key role in the structuring of vocal repertoires at the individual level, notably in non-human primates. Some authors suggested that, at the species level too, social life may have driven the evolution of communicative complexity, but this has rarely been empirically tested. Here, we use a comparative approach to address this issue. We investigated vocal variability, at both the call type and the repertoire levels, in three forest-dwelling species of Cercopithecinae presenting striking differences in their social systems, in terms of social organization as well as social structure. We collected female call recordings from twelve De Brazza's monkeys (Cercopithecus neglectus), six Campbell's monkeys (Cercopithecus campbelli) and seven red-capped mangabeys (Cercocebus torquatus) housed in similar conditions. First, we noted that the level of acoustic variability and individual distinctiveness found in several call types was related to their importance in social functioning. Contact calls, essential to intra-group cohesion, were the most individually distinctive regardless of the species, while threat calls were more structurally variable in mangabeys, the most "despotic" of our three species. Second, we found a parallel between the degree of complexity of the species' social structure and the size, diversity, and usage of its vocal repertoire. Mangabeys (most complex social structure) called twice as often as guenons and displayed the largest and most complex repertoire. De Brazza's monkeys (simplest social structure) displayed the smallest and simplest repertoire. Campbell's monkeys displayed an intermediate pattern. Providing evidence of higher levels of vocal variability in species presenting a more complex social system, our results are in line with the theory of a social-vocal coevolution of communicative abilities, opening new perspectives for comparative research on the evolution of communication systems in different animal taxa.

  9. Comparison of least squares and exponential sine sweep methods for Parallel Hammerstein Models estimation

    Science.gov (United States)

    Rebillat, Marc; Schoukens, Maarten

    2018-05-01

    Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret and easy to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters and thus that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means to estimate PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but that it is also less accurate. Furthermore, the LS method needs parameters to be set in advance whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
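
    Linearity in the parameters means that, for polynomial branch nonlinearities, LS estimation of a PHM reduces to building a regression matrix of delayed input powers. A bare-bones sketch (my own construction; the branch nonlinearities u**k, the FIR memory length and the ridge penalty are illustrative choices, not the article's regularizer):

      import numpy as np

      def fit_phm_ls(u, y, order=3, mem=32, lam=1e-3):
          """Ridge-regularized LS fit of a Parallel Hammerstein Model.

          Branch k applies the static nonlinearity u**k followed by an
          FIR filter of length mem; returns the (order x mem) array of
          branch impulse responses."""
          N = len(u)
          cols = []
          for k in range(1, order + 1):
              x = u ** k                        # branch nonlinearity
              for d in range(mem):              # delayed regressors
                  cols.append(np.r_[np.zeros(d), x[:N - d]])
          X = np.stack(cols, axis=1)
          theta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                                  X.T @ y)
          return theta.reshape(order, mem)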

  10. Social complexity parallels vocal complexity: a comparison of three nonhuman primate species

    Directory of Open Access Journals (Sweden)

    Hélène Bouchet

    2013-07-01

    Social factors play a key role in the structuring of vocal repertoires at the individual level, notably in nonhuman primates. Some authors suggested that, at the species level too, social life may have driven the evolution of communicative complexity, but this has rarely been empirically tested. Here, we use a comparative approach to address this issue. We investigated vocal variability, at both the call type and the repertoire levels, in three forest-dwelling species of Cercopithecinae presenting striking differences in their social systems, in terms of social organization as well as social structure. We collected female call recordings from twelve De Brazza's monkeys (Cercopithecus neglectus), six Campbell's monkeys (Cercopithecus campbelli) and seven red-capped mangabeys (Cercocebus torquatus) housed in similar conditions. First, we noted that the level of acoustic variability and individual distinctiveness found in several call types was related to their importance in social functioning. Contact calls, essential to intra-group cohesion, were the most individually distinctive regardless of the species, while threat calls were more structurally variable in mangabeys, the most 'despotic' of our three species. Second, we found a parallel between the degree of complexity of the species' social structure and the size, diversity, and usage of its vocal repertoire. Mangabeys (most complex social structure) called twice as often as guenons and displayed the largest and most complex repertoire. De Brazza's monkeys (simplest social structure) displayed the smallest and simplest repertoire. Campbell's monkeys displayed an intermediate pattern. Providing evidence of higher levels of vocal variability in species presenting a more complex social system, our results are in line with the theory of a social-vocal coevolution of communicative abilities, opening new perspectives for comparative research on the evolution of communication systems in different animal taxa.

  11. Primary radiotherapy of stage IIA/B-IIIB cervical carcinoma. A comparison of continuous versus sequential regimens

    International Nuclear Information System (INIS)

    Mayer, A.; Nemeskeri, C.; Petnehazi, C.; Varga, S.; Naszaly, A.; Borgulya, G.

    2004-01-01

    Background: the comprehensive literature on cervical cancer demonstrates, even today, the need for optimization of the timing of external-beam radiotherapy (EBRT) and high-dose-rate brachytherapy (HDR-BT) in the treatment of stage IIA/B-IIIB cervical carcinoma. Patients and methods: 210 patients with carcinoma of the cervix were treated in the Municipal Center of Oncoradiology between January 1991 and December 1996 (FIGO IIA: n = 10, FIGO IIB: n = 113, and FIGO IIIB: n = 87). Two regimens were compared: sequential radiation therapy (SRT), with 4 x 8 Gy HDR-BT to point A followed by EBRT, and continuous radiation therapy (CRT), in which 5 x 6 Gy HDR-BT to point A, one session per week, was integrated into the EBRT. A total dose of 68-70 Gy to point A and 52-54 Gy to point B was given. In EBRT with SRT, five fractions per week were applied; four fractions per week were applied in CRT, i.e., no EBRT was performed on the day of HDR-BT. Total doses to points A and B were identical in both regimens. Overall treatment time (OTT) amounted to 56 days for SRT and 35 days for CRT. Median follow-up time was 3.4 (2.5-4.2) years. Results: progression-free 5-year survival (PFS) was 71% in the CRT and 56% in the SRT group. Nevertheless, this difference was not statistically significant (p = 1.00), and the same was found in a subgroup analysis of the different tumor stages, showing, however, an unequivocal trend. Late bladder and rectal injuries occurred in 13% and 25%, respectively. Late rectal injuries were significantly more frequent with SRT than CRT (35 patients in the SRT and 18 patients in the CRT group; p = 0.037). This was due to the higher doses per fraction of HDR-BT in the SRT group. No difference was found regarding late bladder injuries (p = 0.837). Conclusion: for the patients included in this study, no advantage has been found so far in using CRT, i.e., shortening the OTT by weekly integration of HDR-BT into EBRT. Nevertheless, an obvious trend exists. The dose of 8 Gy per

  12. Comparison of proximally versus distally placed spatially distributed sequential stimulation electrodes in a dynamic knee extension task

    Directory of Open Access Journals (Sweden)

    Marco Laubacher

    2016-06-01

    Spatially distributed sequential stimulation (SDSS) has demonstrated substantial power output and fatigue benefits compared to single electrode stimulation (SES) in the application of functional electrical stimulation (FES). This asymmetric electrode setup brings new possibilities but also new questions, since precise placement of the electrodes is one critical factor for good muscle activation. The aim of this study was to compare the power output, fatigue and activation properties of proximally versus distally placed SDSS electrodes in an isokinetic knee extension task simulating knee movement during recumbent cycling. M. vastus lateralis and medialis of seven able-bodied subjects were stimulated with rectangular bi-phasic pulses of constant amplitude of 40 mA and at an SDSS frequency of 35 Hz for 6 min on both legs with both setups (i.e. n = 14). Torque was measured during knee-extension movement by a dynamometer at an angular velocity of 110 deg/s. Mean power, peak power and activation time were calculated and compared for the initial and final stimulation phases, together with an overall fatigue index. Power output values (Pmean, Ppeak) were scaled to a standardised reference input pulse width of 100 μs (Pmean,s, Ppeak,s). The initial evaluation phase showed no significant differences between the two setups for all outcome measures. Ppeak and Ppeak,s were both significantly higher in the final phase for the distal setup (25.4 ± 8.1 W vs. 28.2 ± 6.2 W, p = 0.0062, and 34.8 ± 9.5 W vs. 38.9 ± 6.7 W, p = 0.021, respectively). With distal SDSS, there was modest evidence of higher Pmean and Pmean,s (p = 0.071, p = 0.14, respectively) but of longer activation time (p = 0.096). The rate of fatigue was similar for both setups. For practical FES applications, distal placement of the SDSS electrodes is preferable.

  13. Intra-individual diagnostic image quality and organ-specific-radiation dose comparison between spiral cCT with iterative image reconstruction and z-axis automated tube current modulation and sequential cCT

    International Nuclear Information System (INIS)

    Wenz, Holger; Maros, Máté E.; Meyer, Mathias; Gawlitza, Joshua; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O.; Groden, Christoph; Henzler, Thomas

    2016-01-01

    Highlights: • Superiority of spiral versus sequential cCT in image quality and organ-specific radiation dose. • Spiral cCT: lower organ-specific radiation dose in the eye lens compared to tilted sequential cCT. • State-of-the-art IR spiral cCT techniques have significant advantages over sequential cCT techniques. - Abstract: To prospectively evaluate image quality and organ-specific radiation dose of spiral cranial CT (cCT) combined with automated tube current modulation (ATCM) and iterative image reconstruction (IR) in comparison to sequential tilted cCT reconstructed with filtered back projection (FBP) without ATCM. 31 patients with a previously performed tilted non-contrast-enhanced sequential cCT acquisition on a 4-slice CT system with only FBP reconstruction and no ATCM were prospectively enrolled in this study for a clinically indicated cCT scan. All spiral cCT examinations were performed on a 3rd generation dual-source CT system using ATCM in the z-axis direction. Images were reconstructed using both FBP and IR (levels 1–5). A Monte-Carlo-simulation-based analysis was used to compare organ-specific radiation dose. Subjective image quality for various anatomic structures was evaluated using a 4-point Likert scale and objective image quality was evaluated by comparing signal-to-noise ratios (SNR). Spiral cCT led to a significantly lower (p < 0.05) organ-specific radiation dose in all targets including the eye lens. Subjective image quality of spiral cCT datasets with an IR reconstruction level of 5 was rated significantly higher compared to the sequential cCT acquisitions (p < 0.0001). Consecutive mean SNR was significantly higher in all spiral datasets (FBP, IR 1–5) when compared to sequential cCT with a mean

  14. Parallel search engine optimisation and pay-per-click campaigns: A comparison of cost per acquisition

    Directory of Open Access Journals (Sweden)

    Wouter T. Kritzinger

    2017-07-01

    Background: It is imperative that commercial websites rank highly in search engine result pages, because these provide the main entry point for paying customers. There are two main methods to achieve high rankings: search engine optimisation (SEO) and pay-per-click (PPC) systems. Both require a financial investment – SEO mainly at the beginning, and PPC spread over time in regular amounts. If marketing budgets are applied in the wrong area, this could lead to losses and possibly financial ruin. Objectives: The objective of this research was to investigate, using three real-world case studies, the actual expenditure on and income from both SEO and PPC systems. These figures were then compared, and specifically, the cost per acquisition (CPA) was used to decide which system yielded the best results. Methodology: Three diverse websites were chosen, and analytics data for all three were compared over a 3-month period. Calculations were performed to reduce the figures to single ratios, to make comparisons between them possible. Results: Some of the resultant ratios varied widely between websites. However, the CPA was shown to be on average 52.1 times lower for SEO than for PPC systems. Conclusion: It was concluded that SEO should be the marketing system of preference for e-commerce-based websites. However, there are cases where PPC would yield better results – when instant traffic is required, and when a large initial expenditure is not possible.
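
    The metric driving the conclusion is simply CPA = marketing spend divided by the number of acquisitions, computed per channel; a toy calculation with invented figures (not the study's data):

      # Hypothetical 3-month figures for one website.
      seo_cost, seo_sales = 12_000.0, 480   # mostly up-front spend
      ppc_cost, ppc_sales = 30_000.0, 150   # ongoing click charges

      cpa_seo = seo_cost / seo_sales        # cost per acquisition
      cpa_ppc = ppc_cost / ppc_sales
      print(f"SEO CPA = {cpa_seo:.2f}, PPC CPA = {cpa_ppc:.2f}, "
            f"PPC/SEO ratio = {cpa_ppc / cpa_seo:.1f}x")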

  15. On synchronous parallel computations with independent probabilistic choice

    International Nuclear Information System (INIS)

    Reif, J.H.

    1984-01-01

    This paper introduces probabilistic choice to synchronous parallel machine models, in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time-, space-, and processor-bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time-bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity.
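
    As a toy illustration of the paper's theme (my example, far simpler than the algorithms it treats), a probabilistic sequential computation whose random choices are independent distributes naturally over processors, since each one can make its choices without coordination:

      import random
      from multiprocessing import Pool

      def trials(n):
          """Sequential probabilistic kernel: random hits in the unit circle."""
          rng = random.Random()
          return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                     for _ in range(n))

      if __name__ == "__main__":
          n, p = 1_000_000, 8               # p plays the PRAM processors
          with Pool(p) as pool:
              hits = sum(pool.map(trials, [n // p] * p))
          print("pi is roughly", 4 * hits / n)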

  16. Comparison of combined use of fluconazole and clotrimazole with the sequential dose of fluconazole in the treatment of recurrent Candida vaginitis

    Directory of Open Access Journals (Sweden)

    Tayebeh Gharibi

    2009-09-01

    Background: Fluconazole is a systemic anti-fungal agent, and clotrimazole vaginal cream is a topical agent against Candida albicans. In this study, the two regimens (fluconazole with and without vaginal clotrimazole) were compared in the treatment of recurrent Candida vaginitis. Methods: A double-blind randomized clinical trial was carried out on 80 married women (20-45 years old) having chronic vaginal candidiasis. The patients were divided into two groups (40 in each). The first group received two doses of fluconazole at two time points (zero and 72 hours) along with clotrimazole vaginal cream 1% (for 7 days). The second group received only two doses of fluconazole (at zero time and 72 hours later). The patients were then examined at 2 and 6 weeks after the treatment. Results: The signs and symptoms of disease (itching, erythema, excoriation, edema and fissure) in both groups were significantly decreased after two weeks of the treatment (P = 0.00). The final examination of both groups also showed that the treatment was more effective in the first group compared to the second group. The difference was statistically significant (P < 0.05). Conclusion: The data show that adding topical clotrimazole to the treatment of patients with recurrent Candida vaginitis is more effective.

  17. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding times and the effectiveness of parallelization.
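
    In fractal coding each range block is matched independently against a pool of domain blocks, which is what makes the coding phase embarrassingly parallel; the sketch below (my illustration; it omits the isometries and adaptive partitioning a real coder uses, and assumes image dimensions are multiples of the block size) parallelizes over range blocks:

      import numpy as np
      from multiprocessing import Pool

      def best_domain(args):
          """For one range block, find the domain block minimising the
          affine-fit error ||s*D + o - R||^2."""
          r_blk, domains = args
          r = r_blk.ravel()
          best = None
          for idx, dom in enumerate(domains):
              d = dom.ravel()
              s, o = np.polyfit(d, r, 1)    # contrast s, brightness o
              err = float(np.sum((s * d + o - r) ** 2))
              if best is None or err < best[0]:
                  best = (err, idx, s, o)
          return best

      def encode(img, bs=8):
          ranges = [img[i:i + bs, j:j + bs]
                    for i in range(0, img.shape[0], bs)
                    for j in range(0, img.shape[1], bs)]
          small = img[::2, ::2]             # 2x-contracted domain pool
          domains = [small[i:i + bs, j:j + bs]
                     for i in range(0, small.shape[0] - bs + 1, bs)
                     for j in range(0, small.shape[1] - bs + 1, bs)]
          with Pool() as pool:              # range blocks are independent
              return pool.map(best_domain, [(r, domains) for r in ranges])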

  18. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  19. Dosimetric comparison of standard three-dimensional conformal radiotherapy followed by intensity-modulated radiotherapy boost schedule (sequential IMRT plan) with simultaneous integrated boost-IMRT (SIB IMRT) treatment plan in patients with localized carcinoma prostate.

    Science.gov (United States)

    Bansal, A; Kapoor, R; Singh, S K; Kumar, N; Oinam, A S; Sharma, S C

    2012-07-01

    Dosimetric and radiobiological comparison of two radiation schedules in localized carcinoma prostate: standard three-dimensional conformal radiotherapy (3DCRT) followed by an intensity-modulated radiotherapy (IMRT) boost (sequential-IMRT) versus simultaneous integrated boost IMRT (SIB-IMRT). Thirty patients were enrolled. In all, the target consisted of PTV P + SV (prostate and seminal vesicles) and PTV LN (lymph nodes), where PTV refers to planning target volume, and the critical structures included the bladder, rectum and small bowel. All patients were treated with the sequential-IMRT plan, but for dosimetric comparison an SIB-IMRT plan was also created. The prescription dose to PTV P + SV was 74 Gy in both strategies but with different doses per fraction; however, the dose to PTV LN was 50 Gy delivered in 25 fractions over 5 weeks for sequential-IMRT and 54 Gy delivered in 27 fractions over 5.5 weeks for SIB-IMRT. The treatment plans were compared in terms of dose-volume histograms. Also, the Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) obtained with the two plans were compared. The volume of rectum receiving 70 Gy or more (V > 70 Gy) was reduced to 18.23% with SIB-IMRT from 22.81% with sequential-IMRT. SIB-IMRT reduced the mean doses to both bladder and rectum by 13% and 17%, respectively, as compared to sequential-IMRT. NTCP of 0.86 ± 0.75% and 0.01 ± 0.02% for the bladder, 5.87 ± 2.58% and 4.31 ± 2.61% for the rectum, and 8.83 ± 7.08% and 8.25 ± 7.98% for the bowel was seen with the sequential-IMRT and SIB-IMRT plans, respectively. For equal PTV coverage, SIB-IMRT markedly reduced doses to critical structures and should therefore be considered as the strategy for dose escalation. SIB-IMRT achieves a lower NTCP than sequential-IMRT.

  20. A Comparison Study on Motion/Force Transmissibility of Two Typical 3-DOF Parallel Manipulators: The Sprint Z3 and A3 Tool Heads

    Directory of Open Access Journals (Sweden)

    Xiang Chen

    2014-01-01

    Full Text Available This paper presents a comparison study of two important three-degree-of-freedom (DOF parallel manipulators, the Sprint Z3 head and the A3 head, both commonly used in industry. As an initial step, the inverse kinematics are derived and an analysis of two classes of limbs is carried out via screw theory. For comparison, three transmission indices are then defined to describe their motion/force transmission performance. Based on the same main parameters, the compared results reveal some distinct characteristics in addition to the similarities between the two parallel manipulators. To a certain extent, the A3 head outperforms the common Sprint Z3 head, providing a new and satisfactory option for a machine tool head in industry.

  1. Setting the renormalization scale in pQCD: Comparisons of the principle of maximum conformality with the sequential extended Brodsky-Lepage-Mackenzie approach

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Hong-Hao [Chongqing Univ., Chongqing (People's Republic of China); Wu, Xing-Gang [Chongqing Univ., Chongqing (People's Republic of China); Ma, Yang [Chongqing Univ., Chongqing (People's Republic of China); Brodsky, Stanley J. [Stanford Univ., Stanford, CA (United States); Mojaza, Matin [KTH Royal Inst. of Technology and Stockholm Univ., Stockholm (Sweden)

    2015-05-26

    A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent of the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which has been primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. As a result, we then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R

  2. An Evaluation of Parallel Synchronous and Conservative Asynchronous Logic-Level Simulations

    Directory of Open Access Journals (Sweden)

    Ausif Mahmood

    1996-01-01

    …a circuit remain fixed during the entire simulation. We remove this limitation and, by extending the analyses to multi-input, multi-output circuits with an arbitrary number of input events, show that the conservative asynchronous simulation extracts more parallelism and executes faster than synchronous simulation in general. Our conclusions are supported by a comparison of the idealized execution times of synchronous and conservative asynchronous algorithms on ISCAS combinational and sequential benchmark circuits.

  3. Exact parallel maximum clique algorithm for general and protein graphs.

    Science.gov (United States)

    Depolli, Matjaž; Konc, Janez; Rozman, Kati; Trobec, Roman; Janežič, Dušanka

    2013-09-23

    A new exact parallel maximum clique algorithm MaxCliquePara, which finds the maximum clique (the largest fully connected subgraph) in undirected general and protein graphs, is presented. First, a new branch-and-bound algorithm for finding a maximum clique on a single computer core, which builds on ideas presented in two published state-of-the-art sequential algorithms, is implemented. The new sequential MaxCliqueSeq algorithm is faster than the reference algorithms both on DIMACS benchmark graphs and on protein-derived product graphs used for protein structural comparisons. Next, the MaxCliqueSeq algorithm is parallelized by splitting the branch-and-bound search tree across multiple cores, resulting in the MaxCliquePara algorithm. The ability to exploit all cores efficiently makes the new parallel MaxCliquePara algorithm markedly superior to other tested algorithms. On a 12-core computer, the parallelization provides up to 2 orders of magnitude faster execution on the large DIMACS benchmark graphs and up to an order of magnitude faster execution on protein product graphs. The algorithms are freely accessible at http://commsys.ijs.si/~matjaz/maxclique.
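
    As a rough illustration of the parallelization strategy described above (and not the authors' actual MaxCliquePara code), the sketch below farms the top-level branches of a branch-and-bound maximum-clique search out to separate worker processes; the adjacency-set graph encoding and helper names are illustrative assumptions.

    ```python
    # Illustrative sketch only: top-level branches of a branch-and-bound
    # maximum-clique search are independent, so they can be distributed
    # across worker processes.
    from multiprocessing import Pool

    def expand(graph, clique, candidates, best):
        """Sequential branch and bound; returns the largest clique found."""
        for i, v in enumerate(candidates):
            # Bound: even adding every remaining candidate cannot beat `best`.
            if len(clique) + len(candidates) - i <= len(best):
                return best
            new_cand = [u for u in candidates[i + 1:] if u in graph[v]]
            extended = clique + [v]
            if len(extended) > len(best):
                best = list(extended)
            best = expand(graph, extended, new_cand, best)
        return best

    def solve_branch(args):
        graph, v, candidates = args
        return expand(graph, [v], candidates, [])

    def max_clique_parallel(graph, workers=4):
        vertices = list(graph)
        # Each vertex roots an independent subtree over its later neighbors.
        tasks = [(graph, v, [u for u in vertices[i + 1:] if u in graph[v]])
                 for i, v in enumerate(vertices)]
        with Pool(workers) as pool:
            return max(pool.map(solve_branch, tasks), key=len)

    if __name__ == "__main__":
        # Adjacency sets of a small undirected graph; {1, 2, 3} is the answer.
        g = {0: {1}, 1: {0, 2, 3}, 2: {1, 3}, 3: {1, 2}}
        print(max_clique_parallel(g))
    ```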

  4. Mixing modes in a population-based interview survey: comparison of a sequential and a concurrent mixed-mode design for public health research.

    Science.gov (United States)

    Mauz, Elvira; von der Lippe, Elena; Allen, Jennifer; Schilling, Ralph; Müters, Stephan; Hoebel, Jens; Schmich, Patrick; Wetzstein, Matthias; Kamtsiuris, Panagiotis; Lange, Cornelia

    2018-01-01

    Population-based surveys currently face the problem of decreasing response rates. Mixed-mode designs are now being implemented more often to account for this, to improve sample composition and to reduce overall costs. This study examines whether a concurrent or sequential mixed-mode design achieves better results on a number of indicators of survey quality. Data were obtained from a population-based health interview survey of adults in Germany that was conducted as a methodological pilot study as part of the German Health Update (GEDA). Participants were randomly allocated to one of two surveys; each of the surveys had a different design. In the concurrent mixed-mode design (n = 617) two types of self-administered questionnaires (SAQ-Web and SAQ-Paper) and computer-assisted telephone interviewing were offered simultaneously to the respondents along with the invitation to participate. In the sequential mixed-mode design (n = 561), SAQ-Web was initially provided, followed by SAQ-Paper, with an option for a telephone interview being sent out together with the reminders at a later date. Finally, this study compared the response rates, sample composition, health indicators, item non-response, the scope of fieldwork and the costs of both designs. No systematic differences were identified between the two mixed-mode designs in terms of response rates, the socio-demographic characteristics of the achieved samples, or the prevalence rates of the health indicators under study. The sequential design gained a higher rate of online respondents. Very few telephone interviews were conducted for either design. With regard to data quality, the sequential design (which had more online respondents) showed less item non-response. There were minor differences between the designs in terms of their costs. Postage and printing costs were lower in the concurrent design, but labour costs were lower in the sequential design. No differences in health indicators were found between

  5. Work-Efficient Parallel Skyline Computation for the GPU

    DEFF Research Database (Denmark)

    Bøgh, Kenneth Sejdenfaden; Chester, Sean; Assent, Ira

    2015-01-01

    …offers the potential for parallelizing skyline computation across thousands of cores. However, attempts to port skyline algorithms to the GPU have prioritized throughput and failed to outperform sequential algorithms. In this paper, we introduce a new skyline algorithm, designed for the GPU, that uses … a global, static partitioning scheme. With the partitioning, we can permit controlled branching to exploit transitive relationships and avoid most point-to-point comparisons. The result is a non-traditional GPU algorithm, SkyAlign, that prioritizes work-efficiency and respectable throughput, rather than …
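
    For readers unfamiliar with the problem, a minimal sketch of the dominance test and a naive skyline computation follows (assuming smaller is better in every dimension); SkyAlign's contribution lies in organizing these tests on the GPU so that the static global partitioning avoids most of the point-to-point comparisons this naive version performs.

    ```python
    # Illustrative sketch only: the pairwise dominance test underlying
    # skyline computation, assuming smaller is better in every dimension.
    def dominates(p, q):
        """p dominates q: no worse everywhere, strictly better somewhere."""
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))

    def skyline(points):
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

    pts = [(1, 4), (2, 2), (4, 1), (3, 3)]   # (3, 3) is dominated by (2, 2)
    print(skyline(pts))                       # -> [(1, 4), (2, 2), (4, 1)]
    ```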

  6. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    Full Text Available This paper describes an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system we are describing is to have a re-usable framework for running several sequential algorithms in a parallel environment. The algorithms that the framework can be used with have several things in common: they have to run in cycles and it must be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
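
    A minimal sketch (hypothetical, not the paper's framework) of the master-slave cycle structure described above: in each cycle the master scatters the work across the processing units and gathers the partial results.

    ```python
    # Hypothetical sketch of the master-slave cycle structure (not the
    # paper's code): each cycle, the master scatters work to the slaves
    # and gathers their partial results.
    from multiprocessing import Pool

    def process_chunk(chunk):
        # Stand-in for one slave's share of a cycle, e.g. some ants' tours.
        return sum(x * x for x in chunk)

    def run_cycles(data, cycles=3, workers=4):
        results = []
        with Pool(workers) as pool:
            for _ in range(cycles):                         # algorithms run in cycles
                chunks = [data[i::workers] for i in range(workers)]
                partials = pool.map(process_chunk, chunks)  # scatter to slaves
                results.append(sum(partials))               # master combines
        return results

    if __name__ == "__main__":
        print(run_cycles(list(range(1000))))
    ```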

  7. Time-resolved echo-shared parallel MRA of the lung: observer preference study of image quality in comparison with non-echo-shared sequences

    International Nuclear Information System (INIS)

    Fink, C.; Puderbach, M.; Zaporozhan, J.; Plathow, C.; Kauczor, H.-U.; Ley, S.

    2005-01-01

    The aim of this study was to evaluate the image quality of time-resolved echo-shared parallel MRA of the lung. The pulmonary vasculature of nine patients (seven females, two males; median age: 44 years) with pulmonary disease was examined using a time-resolved MRA sequence combining echo sharing with parallel imaging (time-resolved echo-shared angiography technique, or TREAT). The sharpness of the vessel borders, conspicuousness of peripheral lung vessels, artifact level, and overall image quality of TREAT were assessed independently by four readers in a side-by-side comparison with non-echo-shared time-resolved parallel MRA data (pMRA) previously acquired in the same patients. Furthermore, the SNR of pulmonary arteries (PA) and veins (PV) achieved with both pulse sequences was compared. The mean voxel size of TREAT MRA was decreased by 24% compared with the non-echo-shared MRA. Regarding the sharpness of the vessel borders, conspicuousness of peripheral lung vessels, and overall image quality, the TREAT sequence was rated superior in 75-76% of all cases. If the TREAT images were preferred over the pMRA images, the advantage was rated as major in 61-71% of all cases. The level of artifacts was not increased with the TREAT sequence. The mean interobserver agreement for all categories ranged between fair (artifact level) and good (overall image quality). The maximum SNR of TREAT did not differ from non-echo-shared parallel MRA (PA: TREAT: 273±45; pMRA: 280±71; PV: TREAT: 273±33; pMRA: 258±62). TREAT achieves a higher spatial resolution than non-echo-shared parallel MRA, which is also perceived as an improved image quality. (orig.)

  8. Prospectively ECG-triggered sequential dual-source coronary CT angiography in patients with atrial fibrillation: comparison with retrospectively ECG-gated helical CT

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Lei; Yang, Lin; Zhang, Zhaoqi [Capital Medical University, Department of Radiology, Beijing Anzhen Hospital, Beijing (China); Wang, Yining; Jin, Zhengyu [Chinese Academy of Medical Sciences, Department of Radiology, Peking Union Medical College Hospital, Beijing (China); Zhang, Longjiang; Lu, Guangming [Nanjing University, Department of Medical Imaging, Jinling Hospital, Clinical School of Medical College, Nanjing, Jiangsu (China)

    2013-07-15

    To investigate the feasibility of applying prospectively ECG-triggered sequential coronary CT angiography (CCTA) to patients with atrial fibrillation (AF) and evaluate the image quality and radiation dose compared with a retrospectively ECG-gated helical protocol. 100 patients with persistent AF were enrolled. Fifty patients were randomly assigned to a prospective protocol and the other patients to a retrospective protocol using a second-generation dual-source CT (DS-CT). Image quality was evaluated using a four-point grading scale (1 = excellent, 2 = good, 3 = moderate, 4 = poor) by two reviewers on a per-segment basis. Coronary artery segments with a quality score of 4 were considered non-diagnostic. The radiation dose was evaluated. The diagnostic segment rate in the prospective group was 99.4 % (642/646 segments), while that in the retrospective group was 96.5 % (604/626 segments) (P < 0.001). Effective dose was 4.29 ± 1.86 and 11.95 ± 5.34 mSv for the two protocols, respectively (P < 0.001), corresponding to a 64 % reduction in radiation dose for prospective sequential imaging compared with retrospective helical imaging. In AF patients, prospectively ECG-triggered sequential CCTA is feasible using second-generation DS-CT and can reduce radiation exposure by more than 60 % compared with retrospectively ECG-gated helical imaging while improving diagnostic image quality. (orig.)

  9. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated, and the parallel algorithm and the parallelization of programs on parallel computers with both shared memory and distributed memory are discussed. By analyzing the inherent structure of the mathematical and physical model of photon transport in light of the architecture of parallel computers, using a divide-and-conquer strategy, adjusting the algorithm structure of the program, dissolving the data dependences, identifying components suitable for parallel execution and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computation. The program was run on various high-performance parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP), and very good parallel speedup was obtained.

  10. Comparison of Coregistration Accuracy of Pelvic Structures Between Sequential and Simultaneous Imaging During Hybrid PET/MRI in Patients with Bladder Cancer.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Balar, Arjun V; Huang, William C; Jackson, Kimberly; Friedman, Kent P

    2015-08-01

    The aim of this study was to compare coregistration of the bladder wall, bladder masses, and pelvic lymph nodes between sequential and simultaneous PET and MRI acquisitions obtained during hybrid (18)F-FDG PET/MRI performed using a diuresis protocol in bladder cancer patients. Six bladder cancer patients underwent (18)F-FDG hybrid PET/MRI, including IV Lasix administration and oral hydration, before imaging to achieve bladder clearance. Axial T2-weighted imaging (T2WI) was obtained approximately 40 minutes before PET ("sequential") and concurrently with PET ("simultaneous"). Three-dimensional spatial coordinates of the bladder wall, bladder masses, and pelvic lymph nodes were recorded for PET and T2WI. Distances between these locations on PET and T2WI sequences were computed and used to compare in-plane (x-y plane) and through-plane (z-axis) misregistration relative to PET between T2WI acquisitions. The bladder increased in volume between T2WI acquisitions (sequential, 176 [139] mL; simultaneous, 255 [146] mL). Four patients exhibited a bladder mass, all with increased activity (SUV, 9.5-38.4). Seven pelvic lymph nodes in 4 patients showed increased activity (SUV, 2.2-9.9). The bladder wall exhibited substantially less misregistration relative to PET for simultaneous, compared with sequential, acquisitions in in-plane (2.8 [3.1] mm vs 7.4 [9.1] mm) and through-plane (1.7 [2.2] mm vs 5.7 [9.6] mm) dimensions. Bladder masses exhibited slightly decreased misregistration for simultaneous, compared with sequential, acquisitions in in-plane (2.2 [1.4] mm vs 2.6 [1.9] mm) and through-plane (0.0 [0.0] mm vs 0.3 [0.8] mm) dimensions. FDG-avid lymph nodes exhibited slightly decreased in-plane misregistration (1.1 [0.8] mm vs 2.5 [0.6] mm), although identical through-plane misregistration (4.0 [1.9] mm vs 4.0 [2.8] mm). Using hybrid PET/MRI, simultaneous imaging substantially improved bladder wall coregistration and slightly improved coregistration of bladder masses and

  11. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
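
    A minimal sketch of the general idea (not Montry's hypercube implementation): the base primes up to √n are computed once, and each worker then sieves its own block of the range independently.

    ```python
    # Illustrative sketch only: base primes up to sqrt(n) are computed
    # once, then each worker sieves its own block of the range.
    from math import isqrt
    from multiprocessing import Pool

    def base_primes(limit):
        flags = [True] * (limit + 1)
        flags[0:2] = [False, False]
        for p in range(2, isqrt(limit) + 1):
            if flags[p]:
                flags[p * p::p] = [False] * len(flags[p * p::p])
        return [i for i, f in enumerate(flags) if f]

    def sieve_block(args):
        lo, hi, primes = args
        flags = [True] * (hi - lo)
        for p in primes:
            start = max(p * p, ((lo + p - 1) // p) * p)  # first multiple >= lo
            for m in range(start, hi, p):
                flags[m - lo] = False
        return [lo + i for i, f in enumerate(flags) if f]

    def parallel_sieve(n, workers=4):
        primes = base_primes(isqrt(n))
        step = (n - 1) // workers + 1
        blocks = [(lo, min(lo + step, n + 1), primes)
                  for lo in range(2, n + 1, step)]
        with Pool(workers) as pool:
            return sorted(sum(pool.map(sieve_block, blocks), []))

    if __name__ == "__main__":
        print(parallel_sieve(50))   # [2, 3, 5, 7, ..., 47]
    ```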

  12. A comparison of disinfection by-products formation during sequential or simultaneous disinfection of surface waters with chlorine dioxide and chlor(am)ine.

    Science.gov (United States)

    Shi, Yanwei; Ling, Wencui; Qiang, Zhimin

    2013-01-01

    The effect of chlorine dioxide (ClO2) oxidation on the formation of disinfection by-products (DBPs) during sequential (ClO2 pre-oxidation for 30 min) and simultaneous disinfection processes with free chlorine (FC) or monochloramine (MCA) was investigated. The formation of DBPs from synthetic humic acid (HA) water and three natural surface waters containing low bromide levels (11-27 μg/L) was comparatively examined in the FC-based (single FC, sequential ClO2-FC, and simultaneous ClO2/FC) and MCA-based (single MCA, ClO2-MCA, and ClO2/MCA) disinfection processes. The results showed that far more DBPs were formed from the synthetic HA water than from the three natural surface waters with comparable levels of dissolved organic carbon. In the FC-based processes, ClO2 oxidation could reduce trihalomethanes (THMs) by 27-35% and haloacetic acids (HAAs) by 14-22% in the three natural surface waters, but increased THMs by 19% and HAAs by 31% in the synthetic HA water after an FC contact time of 48 h. In the MCA-based processes, similar trends were observed although DBPs were produced at a much lower level. There was an insignificant difference in DBP formation between the sequential and simultaneous processes. The presence of a high level of bromide (320 μg/L) markedly promoted DBP formation in the FC-based processes. Therefore, the simultaneous disinfection process of ClO2/MCA is recommended, particularly for waters with a high bromide level.

  13. A real-time comparison between direct control, sequential pattern recognition control and simultaneous pattern recognition control using a Fitts' law style assessment procedure.

    Science.gov (United States)

    Wurth, Sophie M; Hargrove, Levi J

    2014-05-30

    Pattern recognition (PR) based strategies for the control of myoelectric upper limb prostheses are generally evaluated through offline classification accuracy, which is an admittedly useful metric, but insufficient to discuss functional performance in real time. Existing functional tests take extensive effort to set up and most fail to provide a challenging, objective framework to assess the strategy performance in real time. Nine able-bodied and two amputee subjects gave informed consent and participated in the local Institutional Review Board approved study. We designed a two-dimensional target acquisition task, based on the principles of Fitts' law for human motor control. Subjects were prompted to steer a cursor from the screen center into a series of successively appearing targets of different difficulties. Three cursor control systems were tested, corresponding to three electromyography-based prosthetic control strategies: 1) amplitude-based direct control (the clinical standard of care), 2) sequential PR control, and 3) simultaneous PR control, allowing for concurrent activation of two degrees of freedom (DOF). We computed throughput (bits/second), path efficiency (%), reaction time (seconds), and overshoot (%), and used general linear models to assess significant differences between the strategies for each metric. We validated the proposed methodology by achieving very high coefficients of determination for Fitts' law. Both PR strategies significantly outperformed direct control in two-DOF targets and were more intuitive to operate. In one-DOF targets, the simultaneous approach was the least precise. The direct control was efficient in one-DOF targets but cumbersome to operate in two-DOF targets through a switch-dependent sequential cursor control. We designed a test capable of comprehensively describing prosthetic control strategies in real time. When implemented on control subjects, the test was able to capture statistically significant differences (p
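
    For reference, the Fitts' law quantities such a test is built on, in the commonly used Shannon formulation (assumed here; the paper may use a variant): each target has an index of difficulty ID = log2(D/W + 1), and throughput is ID divided by movement time.

    ```python
    # The Shannon formulation of Fitts' law (assumed here; papers vary):
    # harder targets (far away and/or small) carry more bits per selection.
    from math import log2

    def index_of_difficulty(distance, width):
        return log2(distance / width + 1)               # bits

    def throughput(distance, width, movement_time):
        return index_of_difficulty(distance, width) / movement_time  # bits/s

    print(index_of_difficulty(200, 20))   # ~3.46 bits
    print(throughput(200, 20, 1.5))       # ~2.31 bits/s
    ```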

  14. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  15. Comparison of Different Analytic Solutions to Axisymmetric Squeezing Fluid Flow between Two Infinite Parallel Plates with Slip Boundary Conditions

    Directory of Open Access Journals (Sweden)

    Hamid Khan

    2012-01-01

    Full Text Available We investigate squeezing flow between two large parallel plates by transforming the basic governing equations of the first-grade fluid to an ordinary nonlinear differential equation using the stream functions u_r(r,z,t) = (1/r)(∂ψ/∂z) and u_z(r,z,t) = −(1/r)(∂ψ/∂r) and a transformation ψ(r,z) = r²F(z). The velocity profiles are investigated through various analytical techniques like the Adomian decomposition method, the new iterative method, the homotopy perturbation method, the optimal homotopy asymptotic method, and the differential transform method.

  16. Multi-Stage Recognition of Speech Emotion Using Sequential Forward Feature Selection

    Directory of Open Access Journals (Sweden)

    Liogienė Tatjana

    2016-07-01

    Full Text Available The intensive research of speech emotion recognition has introduced a huge collection of speech emotion features. Large feature sets complicate the speech emotion recognition task. Among various feature selection and transformation techniques for one-stage classification, multiple classifier systems have been proposed. The main idea of multiple classifiers is to arrange the emotion classification process in stages. Besides parallel and serial cases, the hierarchical arrangement of multi-stage classification is most widely used for speech emotion recognition. In this paper, we present a sequential-forward-feature-selection-based multi-stage classification scheme. The Sequential Forward Selection (SFS) and Sequential Floating Forward Selection (SFFS) techniques were employed for every stage of the multi-stage classification scheme. Experimental testing of the proposed scheme was performed using the German and Lithuanian emotional speech datasets. Sequential-feature-selection-based multi-stage classification outperformed the single-stage scheme by 12–42 % for different emotion sets. The multi-stage scheme has shown higher robustness to the growth of the emotion set: the decrease in recognition rate with increasing emotion set size was 10–20 % lower for the multi-stage scheme than for the single-stage case. Differences between SFS and SFFS for feature selection were negligible.
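
    A minimal sketch of plain Sequential Forward Selection, assuming a caller-supplied score function (for example, cross-validated classification accuracy); SFFS additionally performs conditional backward steps after each inclusion.

    ```python
    # Illustrative sketch of Sequential Forward Selection: greedily add
    # the feature that most improves a caller-supplied score function.
    def sfs(all_features, score, k):
        selected = []
        while len(selected) < k:
            remaining = [f for f in all_features if f not in selected]
            best = max(remaining, key=lambda f: score(selected + [f]))
            selected.append(best)
        return selected

    # Toy score: prefer low-numbered features, penalize larger sets.
    print(sfs([0, 1, 2, 3], lambda s: -sum(s) - len(s), 2))   # -> [0, 1]
    ```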

  17. A parallel approach to the stable marriage problem

    DEFF Research Database (Denmark)

    Larsen, Jesper

    1997-01-01

    This paper describes two parallel algorithms for the stable marriage problem implemented on a MIMD parallel computer. The algorithms are tested against sequential algorithms on randomly generated and worst-case instances. The results clearly show that the combination of a very simple problem … and a commercial MIMD system results in parallel algorithms which are not competitive with sequential algorithms with respect to practical performance.
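
    For context, a minimal sketch of the classical sequential Gale-Shapley proposal algorithm that parallel stable-marriage algorithms are typically measured against (toy preference lists assumed):

    ```python
    # The classical sequential Gale-Shapley proposal algorithm, with toy
    # preference lists; parallel variants are benchmarked against this.
    def gale_shapley(men_prefs, women_prefs):
        free = list(men_prefs)                      # men still proposing
        engaged = {}                                # woman -> man
        next_choice = {m: 0 for m in men_prefs}
        rank = {w: {m: i for i, m in enumerate(p)}
                for w, p in women_prefs.items()}
        while free:
            m = free.pop()
            w = men_prefs[m][next_choice[m]]
            next_choice[m] += 1
            if w not in engaged:
                engaged[w] = m
            elif rank[w][m] < rank[w][engaged[w]]:  # w prefers m
                free.append(engaged[w])
                engaged[w] = m
            else:
                free.append(m)                      # m stays free
        return engaged

    men = {"a": ["x", "y"], "b": ["y", "x"]}
    women = {"x": ["a", "b"], "y": ["b", "a"]}
    print(gale_shapley(men, women))   # -> {'y': 'b', 'x': 'a'}
    ```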

  18. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  19. Fast Evaluation of Segmentation Quality with Parallel Computing

    Directory of Open Access Journals (Sweden)

    Henry Cruz

    2017-01-01

    Full Text Available In digital image processing and computer vision, a fairly frequent task is the performance comparison of different algorithms on enormous image databases. This task is usually time-consuming and tedious, so any kind of tool to simplify this work is welcome. To achieve efficient and more practical handling of a normally tedious evaluation, we implemented an automatic detection system with the help of MATLAB®’s Parallel Computing Toolbox™. The key parts of the system have been parallelized to achieve simultaneous execution and analysis of segmentation algorithms on the one hand, and the evaluation of detection accuracy for nonforested regions, as a study case, on the other hand. As a positive side effect, CPU usage was reduced and processing time was significantly decreased, by 68.54% compared to sequential processing (i.e., executing the system with each algorithm one by one).
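
    A rough Python analog of the approach (the paper itself uses MATLAB's Parallel Computing Toolbox): all algorithm-image evaluations run in a worker pool instead of one by one. Algorithm and file names here are hypothetical.

    ```python
    # Rough Python analog of running every algorithm-image evaluation in
    # a worker pool instead of one by one; names are hypothetical.
    from itertools import product
    from multiprocessing import Pool

    def evaluate(task):
        algorithm, image = task
        # Stand-in for segmenting `image` with `algorithm` and scoring it.
        return algorithm, image, hash((algorithm, image)) % 100

    if __name__ == "__main__":
        algorithms = ["otsu", "watershed", "kmeans"]
        images = [f"img_{i:03d}.png" for i in range(10)]
        with Pool() as pool:
            scores = pool.map(evaluate, product(algorithms, images))
        print(len(scores), "evaluations completed in parallel")
    ```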

  20. MO-C-17A-10: Comparison of Dose Deformable Accumulation by Using Parallel and Serial Approaches

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Z; Li, M; Wong, J [Morristown Medical Center, Morristown, NJ (United States)

    2014-06-15

    Purpose: The uncertainty of dose accumulation over multiple CT datasets with deformable fusion may have significant impact on clinical decisions. In this study, we investigate the difference between two dose summation approaches involving deformable fusion. Methods: Five patients, four external beam (prostate, pelvis, lung, and head and neck) and one brachytherapy (BT), were chosen for the study. The BT patient was treated with CT-based HDR. The CT image sets acquired in the image-guidance process (8-11 CTs/patient) were used to determine the dose delivered to the four external beam patients. For the HDR patient (cervix), five CT image sets and the corresponding BT plans were used. In total, 44 CT datasets and RT dose/plans were imported into the image fusion software MiM (6.0.4) for analysis. For each of the five clinical cases, the dose from each fraction was accumulated onto the primary CT dataset using both the parallel and serial approaches. The dose-volume histograms (DVH) for CTV and selected organs-at-risk (OAR) were generated. The D95(CTV), OAR(mean) and OAR(max) for the four external beam cases, and the D90(CTV) and the max doses to bladder and rectum for the BT case, were compared. Results: For the four external beam patients, the difference in D95(CTV) was <1.2% PD between the parallel and the serial approaches. The differences in OAR(mean) and OAR(max) ranged from 0 to 3.7% and <1% PD, respectively. For the HDR patient, the dose difference for D90 was 11% PD, while the max doses to bladder and rectum differed by 11.5% and 23.3%, respectively. Conclusion: For external beam treatments, the parallel and serial approaches differ by <5%, probably because the tumor volume and OAR change little from fraction to fraction. For the brachytherapy case, a >10% dose difference between the two approaches was observed, as significant volume changes of tumor and OAR occurred among treatment fractions.

  1. Parallel computing for homogeneous diffusion and transport equations in neutronics

    International Nuclear Information System (INIS)

    Pinchedez, K.

    1999-06-01

    Parallel computing meets the ever-increasing requirements for neutronic computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with the PN simplified transport equations by a mixed finite element method. Several parallel algorithms have been developed for distributed memory machines. The performance of the parallel algorithms has been studied experimentally by implementation on a Cray T3D and theoretically by complexity models. A comparison of various parallel algorithms has confirmed the chosen implementations. We next applied a domain sub-division technique to the two-group diffusion eigenproblem. In the modal-synthesis-based method, the global spectrum is determined from the partial spectra associated with the sub-domains. The eigenproblem is then expanded on a family composed, on the one hand, of eigenfunctions associated with the sub-domains and, on the other hand, of functions corresponding to the contribution from the interfaces between the sub-domains. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)

  2. Parallelism in computations in quantum and statistical mechanics

    International Nuclear Information System (INIS)

    Clementi, E.; Corongiu, G.; Detrich, J.H.

    1985-01-01

    Often very fundamental biochemical and biophysical problems defy simulation because of limitations in today's computers. We present and discuss a distributed system composed of two IBM 4341s and/or an IBM 4381 as front-end processors and ten FPS-164 attached array processors. This parallel system - called LCAP - presently has a peak performance of about 110 Mflops; extensions to higher performance are discussed. Presently, the system applications use a modified version of VM/SP as the operating system; a description of the modifications is given. Three application programs have been migrated from sequential to parallel: a molecular quantum mechanics, a Metropolis Monte Carlo and a molecular dynamics program. The parallel codes are briefly outlined. Use of these parallel codes has already opened up new capabilities for our research. The very positive performance comparisons with today's supercomputers allow us to conclude that parallel computers and programming, of the type we have considered, represent a pragmatic answer to many computationally intensive problems. (orig.)

  3. Sequential charged particle reaction

    International Nuclear Information System (INIS)

    Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo

    2004-01-01

    The effective cross sections for producing the sequential reaction products in F82H, pure vanadium and LiF with respect to 14.9-MeV neutrons were obtained and compared with estimated values. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections depend on the target nuclei and the material composition. The effective cross sections were also estimated by using the EAF libraries and compared with the experimental ones. There were large discrepancies between the estimated and experimental values. Additionally, we show the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. From the present study, it has been clarified that the sequential reactions are of great importance for evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)

  4. Comparison between AAPM TG-51 and IAEA TRS-398 for plane parallel ionization chambers irradiated by clinical electron beams

    International Nuclear Information System (INIS)

    Mahmoud, M.A.

    2005-01-01

    We compared the results of absorbed dose determined at reference conditions according to AAPM TG-51 and IAEA TRS-398 using plane-parallel ionization chambers. The study showed agreement between the two protocols for the Holt, Exradin P11, NACP, Attix RMI 449 and Roos ionization chambers. For Markus ionization chambers, the absorbed dose calculated using AAPM TG-51 is higher than that calculated using IAEA TRS-398 by 1.8 % for R50 = 2 cm, decreasing with increasing R50 to reach 1.2 % for R50 = 20 cm. For Capintec PS-033 ionization chambers, the absorbed dose calculated using AAPM TG-51 is consistently higher than that calculated by IAEA TRS-398 by 1.5 %. A theoretical explanation of these results is given.

  5. Homogeneous and Stratified Liquid-Liquid Flow Effect of a Viscosity Reducer: I. Comparison in parallel plates for heavy crude

    Directory of Open Access Journals (Sweden)

    E. J. Suarez-Dominguez

    2016-12-01

    Full Text Available Production of heavy crude oil in Mexico, and worldwide, is increasing, which has led to the application of different methods to reduce viscosity or to enhance transport through stratified flow so that the existing infrastructure can continue to be used. In this context, injecting a viscosity improver that does not mix completely with the crude establishes a liquid-liquid stratified flow. On the basis of a parallel-plates model, the flow increase obtained in the single-phase case, which assumes complete mixing between the crude and the viscosity improver, was compared against the stratified liquid-liquid case, in which the oil and the improver do not mix. It was found that in both cases there is a flow increase for the same pressure drop, with a maximum for the case in which the flow improver lies between the plates and the crude.

  6. Comparison of metformin and insulin versus insulin alone for type 2 diabetes: systematic review of randomised clinical trials with meta-analyses and trial sequential analyses.

    Science.gov (United States)

    Hemmingsen, Bianca; Christensen, Louise Lundby; Wetterslev, Jørn; Vaag, Allan; Gluud, Christian; Lund, Søren S; Almdal, Thomas

    2012-04-19

    To compare the benefits and harms of metformin and insulin versus insulin alone as reported in randomised clinical trials of patients with type 2 diabetes. Systematic review of randomised clinical trials with meta-analyses and trial sequential analyses. The Cochrane Library, Medline, Embase, Science Citation Index Expanded, Latin American Caribbean Health Sciences Literature, and Cumulative Index to Nursing and Allied Health Literature until March 2011. We also searched abstracts presented at the American Diabetes Association and European Association for the Study of Diabetes Congresses, contacted relevant trial authors and pharmaceutical companies, hand searched reference lists of included trials, and searched the US Food and Drug Administration website. Two authors independently screened titles and abstracts for randomised clinical trials comparing metformin and insulin versus insulin alone (with or without placebo) in patients with type 2 diabetes, older than 18 years, and with an intervention period of at least 12 weeks. We included trials irrespective of language, publication status, predefined outcomes, antidiabetic interventions used before randomisation, and reported outcomes. We included 26 randomised trials with 2286 participants, of which 23 trials with 2117 participants could provide data. All trials had high risk of bias. Data were sparse for outcomes relevant to patients. Metformin and insulin versus insulin alone did not significantly affect all cause mortality (relative risk 1.30, 95% confidence interval 0.57 to 2.99) or cardiovascular mortality (1.70, 0.35 to 8.30). Trial sequential analyses showed that more trials were needed before reliable conclusions could be drawn regarding these outcomes. In a fixed effect model, but not in a random effects model, severe hypoglycaemia was significantly more frequent with metformin and insulin than with insulin alone (2.83, 1.17 to 6.86). In a random effects model, metformin and insulin resulted in reduced Hb

  7. Comparison of sequential vs same-day simultaneous collagen cross-linking and topography-guided PRK for treatment of keratoconus.

    Science.gov (United States)

    Kanellopoulos, Anastasios John

    2009-09-01

    The safety and efficacy of corneal collagen cross-linking (CXL) and topography-guided photorefractive keratectomy (PRK) using a different sequence and timing were evaluated in consecutive keratoconus cases. This study included a total of 325 eyes with keratoconus. Eyes were divided into two groups. The first group (n=127 eyes) underwent CXL with subsequent topography-guided PRK performed 6 months later (sequential group) and the second group (n=198 eyes) underwent CXL and PRK in a combined procedure on the same day (simultaneous group). Statistical differences were examined for pre- to postoperative changes in uncorrected (UCVA, logMAR) and best-spectacle-corrected visual acuity (BSCVA, logMAR), manifest refraction spherical equivalent (MRSE), keratometry (K), topography, central corneal thickness, endothelial cell count, corneal haze, and ectatic progression. Mean follow-up was 36±18 months (range: 24 to 68 months). At last follow-up in the sequential group, the mean UCVA improved from 0.9±0.3 logMAR to 0.49±0.25 logMAR, and mean BSCVA from 0.41±0.25 logMAR to 0.16±0.22 logMAR. Mean reduction in spherical equivalent refraction was 2.50±1.20 diopters (D), mean haze score was 1.2±0.5, and mean reduction in K was 2.75±1.30 D. In the simultaneous group, mean UCVA improved from 0.96±0.2 logMAR to 0.3±0.2 logMAR, and mean BSCVA from 0.39±0.3 logMAR to 0.11±0.16 logMAR. Mean reduction in spherical equivalent refraction was 3.20±1.40 D, mean haze score was 0.5±0.3, and mean reduction in K was 3.50±1.3 D. Endothelial cell count was unchanged from preoperatively to last follow-up. Same-day simultaneous PRK and CXL appears to be superior to sequential CXL with later PRK in the visual rehabilitation of progressing keratoconus. Copyright 2009, SLACK Incorporated.

  8. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    Science.gov (United States)

    Massey, J. L.

    1976-01-01

    Virtually all previously suggested rate-1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given, and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code and for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.

  9. Comparison of the bronchodilatation produced by inhalation of ipratropium bromide and salbutamol sequentially and in fixed dose combination in stable bronchial asthma patients

    Directory of Open Access Journals (Sweden)

    Mohan A

    2006-01-01

    Full Text Available Objectives: The combination of a β2-agonist and an anticholinergic agent is often used to manage bronchial asthma. However, it is unclear whether these drugs should be given separately in sequence or in a fixed-dose combination for maximum effect. Methods: 27 patients with stable bronchial asthma were given the two drugs in two separate sessions one week apart. In one session they were given the two drugs as a fixed-dose combination, and in the other session they were given sequentially, with salbutamol following ipratropium after 30 minutes. Spirometry was performed at baseline and 15, 30 and 60 minutes after inhaling the second drug. Results: Both groups showed significant improvement in forced vital capacity (FVC), forced expiratory volume in one second (FEV1), peak expiratory flow rate (PEFR) and forced expiratory flow (FEF25-75) from baseline up to one hour. FVC increased initially and then stabilized; however, the increase was more sustained in the group receiving combination treatment. This group also showed a higher rise in FEV1 (p=0.02). Both FEV1 and FEF25-75 decreased after 30 minutes in the group that received sequential therapy. PEFR increased continuously until 60 minutes in both groups and there was no significant difference between them (p=0.98). Interpretation and Conclusion: Both methods of drug dosing produce equivalent bronchodilation. Fixed-dose combinations produced a more sustained rise in FVC and a higher increase in FEV1. Hence fixed-dose combinations are more effective short-term bronchodilators and give the added advantage of reducing the number of inhalers required, thus improving compliance.

  10. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  11. A comparison of the electrochemical recovery of palladium using a parallel flat plate flow-by reactor and a rotating cylinder electrode reactor

    International Nuclear Information System (INIS)

    Terrazas-Rodriguez, J.E.; Gutierrez-Granados, S.; Alatorre-Ordaz, M.A.; Ponce de Leon, C.; Walsh, F.C.

    2011-01-01

    The production of catalytic converters generates large amounts of waste water containing Pd²⁺, Rh³⁺ and Nd³⁺ ions. The electrochemical treatment of these solutions offers an economical and effective alternative for recovering the precious metals in comparison with other traditional metal recovery technologies. The separation of palladium from this mixture of metal ions by catalytic deposition was carried out using a rotating cylinder electrode reactor (RCER) and a parallel plate reactor (FM01-LC) with the same cathode area (64 cm²) and electrolyte volume (300 cm³). The study was carried out over a range of mean linear flow velocities; the current efficiency for the recovery of 97% of the Pd²⁺ ions was 35% in the parallel plate electrode reactor and 62% in the RCER. The volumetric energy consumption during the electrolysis was 0.56 kW h m⁻³ and 2.1 kW h m⁻³ for the RCER and the FM01-LC reactors, respectively. Using a three-dimensional stainless steel electrode in the FM01-LC laboratory reactor, 99% of the palladium ions were recovered after 30 min of electrolysis, while in the RCER 120 min were necessary.

  12. Comparisons of Energy Management Methods for a Parallel Plug-In Hybrid Electric Vehicle between the Convex Optimization and Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Renxin Xiao

    2018-01-01

    Full Text Available This paper presents a comparison study of energy management methods for a parallel plug-in hybrid electric vehicle (PHEV). Based on a detailed analysis of the vehicle driveline, quadratic convex functions are presented to describe the nonlinear relationship between engine fuel rate and battery charging power at different vehicle speeds and driveline power demands. The engine-on power threshold is estimated by the simulated annealing (SA) algorithm, and the battery power command is obtained by convex optimization with the target of improving fuel economy, compared with the dynamic programming (DP) based method and the charge-depleting/charge-sustaining (CD/CS) method. In addition, the proposed control methods are discussed at different initial battery state of charge (SOC) values to extend the application. Simulation results validate that the proposed strategy based on convex optimization saves fuel and markedly reduces the computational burden.
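
    A minimal sketch of what a convex formulation of this kind can look like, with an entirely hypothetical quadratic fuel-rate model and toy SOC dynamics (not the paper's vehicle model), using the cvxpy modeling library:

    ```python
    # Entirely hypothetical toy model (not the paper's): choose the
    # battery power split per time step to minimize a quadratic fuel-rate
    # surrogate subject to power and state-of-charge limits.
    import cvxpy as cp
    import numpy as np

    T = 10
    demand = np.full(T, 20.0)              # driveline power demand [kW]
    p_batt = cp.Variable(T)                # battery discharge power [kW]
    p_eng = demand - p_batt                # engine supplies the remainder
    fuel = cp.sum(0.08 * cp.square(p_eng) + 0.5 * p_eng)  # convex surrogate
    soc = 0.8 - 0.01 * cp.cumsum(p_batt)   # toy SOC dynamics per step

    problem = cp.Problem(cp.Minimize(fuel),
                         [p_batt >= -15, p_batt <= 15,
                          soc >= 0.3, soc <= 0.9])
    problem.solve()
    print(round(problem.value, 2), p_batt.value.round(2))
    ```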

  13. Comparison of Efficacy and Threat Perception Processes in Predicting Smoking among University Students Based on Extended Parallel Process Model

    Directory of Open Access Journals (Sweden)

    S. Bashirian

    2014-04-01

    Full Text Available Introduction & Objective: The survey of smoking, as the most toxic, common and cheapest addiction, and of its psychological and demographic variables, especially among the youth, who are the efficient and constructive individuals of society, is of great importance. This study was performed to compare efficacy and threat perception in predicting cigarette smoking among university students based on the Extended Parallel Process Model (EPPM). Material & Methods: This cross-sectional descriptive study was carried out on 700 college students of Hamadan recruited with a stratified sampling method. The participants completed a self-administered questionnaire including demographic characteristics, smoking status and the EPPM. Data analysis was done with the SPSS software (version 16), using t-test, one-way ANOVA, Pearson correlation and logistic regression methods. Results: The average scores of threat and efficacy perception were 39.7 and 38.6, respectively. The prevalence of cigarette smoking among participants was 27.1 percent. Also, there were significant differences between the average score of efficacy perception and age, gender, history of drug abuse and dwelling of students (P<0.05). Efficacy and threat perception both predicted student cigarette smoking. Conclusions: The cognitive mediating process of threat perception was a more powerful predictor of cigarette smoking as an unsafe behavior. Therefore, increasing the self-efficacy and response efficacy of university students, aimed at facilitating the acceptance of safe behavior, could be noteworthy as a principle in education. (Sci J Hamadan Univ Med Sci 2014; 21(1): 58-65)

  14. Comparison study of exhaust plume impingement effects of small mono- and bipropellant thrusters using parallelized DSMC method.

    Directory of Open Access Journals (Sweden)

    Kyun Ho Lee

    Full Text Available A space propulsion system is important for the normal mission operation of a spacecraft, adjusting its attitude and performing maneuvers. Generally, mono- and bipropellant thrusters have mainly been used as low-thrust liquid rocket engines. But as the plume gas expelled from these small thrusters diffuses freely in vacuum along all directions, unwanted effects due to plume collisions with the spacecraft surfaces can dramatically degrade the function and performance of the spacecraft. Thus, the aim of the present study is to investigate and compare quantitatively the major plume gas impingement effects of small mono- and bipropellant thrusters using computational fluid dynamics (CFD). For efficiency of the numerical calculations, the whole calculation domain is divided into two different flow regimes depending on the flow characteristics, and the Navier-Stokes equations and a parallelized Direct Simulation Monte Carlo (DSMC) method are adopted for each flow regime. From the present analysis, the thermal and mass influences of the plume gas impingement on the spacecraft were analyzed for the mono- and bipropellant thrusters. As a result, it is concluded that a careful understanding of the plume impingement effects depending on the chemical characteristics of the different propellants is necessary for the efficient design of a spacecraft.

  15. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities … for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler’s ability to generate loop-parallel code. We use this compilation system to modify two sequential … benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should …
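
    A minimal illustration of the kind of refactoring such a feedback system guides: a loop whose iterations are independent can be rewritten from a sequential accumulation into a parallel map.

    ```python
    # Illustrative refactoring: when loop iterations are independent, a
    # sequential accumulation can become a parallel map.
    from multiprocessing import Pool

    def work(x):
        return x * x + 1            # independent per-iteration computation

    def sequential(data):
        out = []
        for x in data:              # correct, but single-core
            out.append(work(x))
        return out

    def parallelized(data, workers=4):
        with Pool(workers) as pool:
            return pool.map(work, data)   # iterations spread over cores

    if __name__ == "__main__":
        assert sequential(range(100)) == parallelized(range(100))
    ```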

  16. Dosimetric comparison of the related parameters between simultaneous integrated boost intensity-modulated radiotherapy and sequential boost conformal radiotherapy for postoperative malignant glioma of the brain

    International Nuclear Information System (INIS)

    Shao Qian; Lu Jie; Li Jianbin; Sun Tao; Bai Tong; Liu Tonghai; Yin Yong

    2011-01-01

    Objective: To compare the dosimetry of different parameters of simultaneous integrated boost intensity-modulated radiotherapy (SIB-IMRT) and sequential boost conformal radiotherapy (SB-CRT) for postoperative malignant glioma of the brain. Methods: Ten patients with malignant glioma of the brain were selected for the study. Each patient was simulated with both CT and MRI, and the CT and MRI images were transferred to the Pinnacle 3 planning system, where MR-CT image fusion was performed. The target volume was delineated and defined based on MRI. The postoperative residual lesion and resection cavity were defined as the gross tumor volume (GTV), and the GTV expanded by a margin was defined as the clinical target volume (CTV): the GTV expanded by 10 mm and 25 mm defined CTV1 and CTV2, respectively. CTV1 and CTV2, each enlarged by 5 mm, were defined as PTV1 and PTV2, respectively. SIB-IMRT and SB-CRT plans were designed for each patient using the Pinnacle 3 planning system and the dosimetry of the different parameters was compared. The prescribed dose of SIB-IMRT was PTV1: 62.5 Gy/25 f, PTV2: 50.0 Gy/25 f; and of SB-CRT was PTV1: 66.0 Gy/33 f, PTV2: 50.0 Gy/25 f. The dosimetric parameters of SIB-IMRT and SB-CRT were compared using a paired-samples t-test. Results: The maximum and mean doses of PTV1, PTV2, and brainstem were significantly different (P < 0.05). Conclusion: The SIB-IMRT plan is better than the SB-CRT plan. The CI and HI of SIB-IMRT are superior to those of SB-CRT. At the same time, it can spare important organs such as the brainstem and reduce the mean dose to the whole brain. It can also shorten the total treatment time. (authors)

  17. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, indicating under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementation. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time that a given parallel implementation can be expected to yield.
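
    For reference, the standard inverse-Hessian update of the Davidon-Fletcher-Powell method (textbook form, not reproduced from the paper); the outer products and matrix-vector products it contains are the natural candidates for parallel evaluation.

    ```latex
    % Standard DFP inverse-Hessian update (textbook form):
    H_{k+1} = H_k + \frac{s_k s_k^{\top}}{s_k^{\top} y_k}
                  - \frac{H_k y_k y_k^{\top} H_k}{y_k^{\top} H_k y_k},
    \qquad s_k = x_{k+1} - x_k, \quad
    y_k = \nabla f(x_{k+1}) - \nabla f(x_k)
    ```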

  18. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  19. Experimental Comparison of Knife-Edge and Multi-Parallel Slit Collimators for Prompt Gamma Imaging of Proton Pencil Beams

    Science.gov (United States)

    Smeets, Julien; Roellinghoff, Frauke; Janssens, Guillaume; Perali, Irene; Celani, Andrea; Fiorini, Carlo; Freud, Nicolas; Testa, Etienne; Prieels, Damien

    2016-01-01

    More and more camera concepts are being investigated to try to seize the opportunity for instantaneous range verification of proton therapy treatments offered by the prompt gammas emitted along the proton tracks. Focusing on one-dimensional imaging with a passive collimator, the present study experimentally compared, in combination with the first clinically compatible dedicated camera device, the performance of instances of the two main options: a knife-edge slit (KES) and a multi-parallel slit (MPS) design. These two options were experimentally assessed in this specific context, as they were previously demonstrated through analytical and numerical studies to offer similar performance in terms of Bragg peak retrieval precision and spatial resolution in a general context. Both collimators were prototyped according to the conclusions of Monte Carlo optimization studies under constraints of equal weight (40 mm tungsten alloy equivalent thickness) and of the specificities of the camera device under consideration (in particular, 4 mm segmentation along the beam axis and no time-of-flight discrimination, both of which are less favorable to the MPS performance than to the KES one). Acquisitions of proton pencil beams of 100, 160, and 230 MeV in a PMMA target revealed that, in order to reach a given level of statistical precision on Bragg peak depth retrieval, the KES collimator requires only half the dose the present MPS collimator needs, making the KES collimator the preferred option for a compact camera device aimed at imaging only the Bragg peak position. On the other hand, the present MPS collimator proves more effective at retrieving the entrance of the beam into the target in the context of an extended camera device aimed at imaging the whole proton track within the patient. PMID:27446802

  1. Comparison of radiofrequency body coils for MRI at 3 Tesla: a simulation study using parallel transmission on various anatomical targets

    Science.gov (United States)

    Wu, Xiaoping; Zhang, Xiaotong; Tian, Jinfeng; Schmitter, Sebastian; Hanna, Brian; Strupp, John; Pfeuffer, Josef; Hamm, Michael; Wang, Dingxin; Nistler, Juergen; He, Bin; Vaughan, J. Thomas; Ugurbil, Kamil; Van de Moortele, Pierre-Francois

    2015-01-01

    The performance of multichannel transmit coil layouts and parallel transmission (pTx) radiofrequency (RF) pulse design was evaluated with respect to transmit B1 (B1+) homogeneity and Specific Absorption Rate (SAR) at 3 Tesla for a whole-body coil. Five specific coils were modeled and compared: a 32-rung birdcage body coil (driven either in a fixed quadrature mode or a two-channel transmit mode), two single-ring stripline arrays (with either 8 or 16 elements), and two multi-ring stripline arrays (with 2 or 3 identical rings, stacked along the z-axis and each comprising eight azimuthally distributed elements). Three anatomical targets were considered, each defined by a 3D volume representative of a meaningful region of interest (ROI) in routine clinical applications. For a given anatomical target, global or local SAR-controlled pTx pulses were designed to homogenize RF excitation within the ROI. At the B1+ homogeneity achieved by the quadrature-driven birdcage design, pTx pulses with multichannel transmit coils achieved up to an ~8-fold reduction in local and global SAR. When used for imaging head and cervical spine or imaging thoracic spine, the double-ring array outperformed all coils, including the single-ring arrays. While the advantage of the double-ring array became much less pronounced for pelvic imaging with a substantially larger ROI, the pTx approach still provided significant gains over the quadrature birdcage coil. For all design scenarios, using the 3-ring array did not necessarily improve the RF performance. Our results suggest that pTx pulses with multichannel transmit coils can reduce local and global SAR substantially for body coils while attaining improved B1+ homogeneity, particularly for a “z-stacked” double-ring design with coil elements arranged on two transaxial rings. PMID:26332290

  2. New parallel SOR method by domain partitioning

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Dexuan [Courant Inst. of Mathematical Sciences New York Univ., NY (United States)

    1996-12-31

    In this paper, we propose and analyze a new parallel SOR method, the PSOR method, formulated by using domain partitioning together with an interprocessor data-communication technique. For the 5-point approximation to the Poisson equation on a square, we show that the ordering of the PSOR based on the strip partition leads to a consistently ordered matrix, and hence the PSOR and the SOR using the row-wise ordering have the same convergence rate. However, in general, the ordering used in PSOR may not be 'consistently ordered'. So, there is a need to analyze the convergence of PSOR directly. In this paper, we present a PSOR theory, and show that the PSOR method can have the same asymptotic rate of convergence as the corresponding sequential SOR method for a wide class of linear systems in which the matrix is 'consistently ordered'. Finally, we demonstrate the parallel performance of the PSOR method on four different message-passing multiprocessors (a KSR1, the Intel Delta, an Intel Paragon and an IBM SP2), along with a comparison with the point Red-Black and four-color SOR methods.
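
    To make the family of methods concrete, the following is a minimal Python sketch of a red-black ordered SOR sweep for the 5-point Poisson stencil, the classic parallelizable ordering that PSOR is benchmarked against above; it is a generic illustration (grid size, relaxation factor and sweep count are arbitrary), not the paper's PSOR algorithm.

      import numpy as np

      def sor_red_black(u, f, h, omega=1.5, sweeps=100):
          # Red-black SOR for the 5-point Poisson stencil -laplace(u) = f.
          # Points of one color depend only on the other color, so each
          # half-sweep can update all of its points independently; that is
          # what makes orderings like this attractive on parallel machines.
          for _ in range(sweeps):
              for parity in (0, 1):                 # red sweep, then black sweep
                  for i in range(1, u.shape[0] - 1):
                      start = 2 - (i + parity) % 2  # first column of this color
                      j = np.arange(start, u.shape[1] - 1, 2)
                      gauss = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                      + u[i, j - 1] + u[i, j + 1]
                                      + h * h * f[i, j])
                      u[i, j] = (1 - omega) * u[i, j] + omega * gauss
          return u

      n = 33                                        # grid points per side
      u = np.zeros((n, n))                          # zero Dirichlet boundary
      f = np.ones((n, n))                           # unit source term
      print(sor_red_black(u, f, h=1.0 / (n - 1))[n // 2, n // 2])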

  3. Lexical diversity and omission errors as predictors of language ability in the narratives of sequential Spanish-English bilinguals: a cross-language comparison.

    Science.gov (United States)

    Jacobson, Peggy F; Walden, Patrick R

    2013-08-01

    This study explored the utility of language sample analysis for evaluating language ability in school-age Spanish-English sequential bilingual children. Specifically, the relative potential of lexical diversity and word/morpheme omission as predictors of typical or atypical language status was evaluated. Narrative samples were obtained from 48 bilingual children in both of their languages using the suggested narrative retell protocol and coding conventions as per Systematic Analysis of Language Transcripts (SALT; Miller & Iglesias, 2008) software. An additional lexical diversity measure, VocD, was also calculated. A series of logistic hierarchical regressions explored the utility of the number of different words, the VocD statistic, and word and morpheme omissions in each language for predicting language status. Omission errors turned out to be the best predictors of bilingual language impairment at all ages, and this held true across languages. Although lexical diversity measures did not predict typical or atypical language status, the measures were significantly related to oral language proficiency in English and Spanish. The results underscore the significance of omission errors in bilingual language impairment while simultaneously revealing the limitations of lexical diversity measures as indicators of impairment. The relationship between lexical diversity and oral language proficiency highlights the importance of considering relative language proficiency in bilingual assessment.

  4. Comparison of first pass bolus AIFs extracted from sequential 18F-FDG PET and DSC-MRI of mice

    Energy Technology Data Exchange (ETDEWEB)

    Evans, Eleanor, E-mail: ee244@cam.ac.uk [Wolfson Brain Imaging Centre, Department of Clinical Neurosciences, School of Clinical Medicine, University of Cambridge, Cambridge Biomedical Campus, Cambridge, CB2 0QQ (United Kingdom); Sawiak, Stephen J. [Wolfson Brain Imaging Centre, Department of Clinical Neurosciences, School of Clinical Medicine, University of Cambridge, Cambridge Biomedical Campus, Cambridge, CB2 0QQ (United Kingdom); Behavioural and Clinical Neuroscience Institute, Department of Experimental Psychology, University of Cambridge, Cambridge, CB2 3EB (United Kingdom); Ward, Alexander O.; Buonincontri, Guido; Hawkes, Robert C.; Adrian Carpenter, T. [Wolfson Brain Imaging Centre, Department of Clinical Neurosciences, School of Clinical Medicine, University of Cambridge, Cambridge Biomedical Campus, Cambridge, CB2 0QQ (United Kingdom)

    2014-01-11

    Accurate kinetic modelling of in vivo physiological function using positron emission tomography (PET) requires determination of the tracer time–activity curve in plasma, known as the arterial input function (AIF). The AIF is usually determined by invasive blood sampling methods, which are prohibitive in murine studies due to low total blood volumes. Extracting AIFs from PET images is also challenging due to large partial volume effects (PVE). We hypothesise that in combined PET with magnetic resonance imaging (PET/MR), a co-injected bolus of MR contrast agent and PET ligand can be tracked using fast MR acquisitions. This protocol would allow extraction of a MR AIF from MR contrast agent concentration–time curves, at higher spatial and temporal resolution than an image-derived PET AIF. A conversion factor could then be applied to the MR AIF for use in PET kinetic analysis. This work has compared AIFs obtained from sequential DSC-MRI and PET with separate injections of gadolinium contrast agent and 18F-FDG respectively to ascertain the technique's validity. An automated voxel selection algorithm was employed to improve MR AIF reproducibility. We found that MR and PET AIFs displayed similar character in the first pass, confirmed by gamma variate fits (p<0.02). MR AIFs displayed reduced PVE compared to PET AIFs, indicating their potential use in PET/MR studies.
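
    Since the abstract leans on gamma variate fits of the first-pass bolus, here is a hedged Python sketch of that standard fit; the functional form C(t) = A (t - t0)^alpha exp(-(t - t0)/beta) is the classic gamma-variate model, while all numbers below are synthetic stand-ins for measured AIF data.

      import numpy as np
      from scipy.optimize import curve_fit

      def gamma_variate(t, A, t0, alpha, beta):
          # Classic first-pass bolus model: C(t) = A (t-t0)^alpha exp(-(t-t0)/beta)
          dt = np.clip(t - t0, 0.0, None)      # no signal before bolus arrival
          return A * dt**alpha * np.exp(-dt / beta)

      t = np.linspace(0, 60, 121)              # seconds
      truth = gamma_variate(t, A=5.0, t0=8.0, alpha=2.5, beta=3.0)
      noisy = truth + np.random.default_rng(0).normal(0, 0.05, t.size)

      popt, _ = curve_fit(gamma_variate, t, noisy,
                          p0=(1.0, 5.0, 2.0, 2.0), bounds=(0, np.inf))
      print("fitted (A, t0, alpha, beta):", np.round(popt, 2))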

  5. A comparison of a modified sequential oral sensory approach to an applied behavior-analytic approach in the treatment of food selectivity in children with autism spectrum disorder.

    Science.gov (United States)

    Peterson, Kathryn M; Piazza, Cathleen C; Volkert, Valerie M

    2016-09-01

    Treatments of pediatric feeding disorders based on applied behavior analysis (ABA) have the most empirical support in the research literature (Volkert & Piazza, 2012); however, professionals often recommend, and caregivers often use, treatments that have limited empirical support. In the current investigation, we compared a modified sequential oral sensory approach (M-SOS; Benson, Parke, Gannon, & Muñoz, 2013) to an ABA approach for the treatment of the food selectivity of 6 children with autism. We randomly assigned 3 children to ABA and 3 children to M-SOS and compared the effects of treatment in a multiple baseline design across novel, healthy target foods. We used a multielement design to assess treatment generalization. Consumption of target foods increased for children who received ABA, but not for children who received M-SOS. We subsequently implemented ABA with the children for whom M-SOS was not effective and observed a potential treatment generalization effect during ABA when M-SOS preceded ABA. © 2016 Society for the Experimental Analysis of Behavior.

  6. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-parameter martingales.
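
    As a pointer to the book's central object, the finite-horizon optimal stopping problem is solved by the classical backward-induction recursion; this is standard textbook material, not a formula quoted from the book:

      V_N = g_N, \qquad V_n = \max\bigl(g_n,\ \mathbb{E}[V_{n+1} \mid \mathcal{F}_n]\bigr), \quad n = N-1, \dots, 0,

    where g_n is the reward for stopping at time n, and it is optimal to stop at the first n with V_n = g_n.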

  7. Parallel genetic algorithms with migration for the hybrid flow shop scheduling problem

    Directory of Open Access Journals (Sweden)

    K. Belkadi

    2006-01-01

    This paper addresses scheduling problems in hybrid flow shop-like systems with a migration parallel genetic algorithm (PGA_MIG). This parallel genetic algorithm model allows genetic diversity by the application of selection and reproduction mechanisms nearer to nature. The space structure of the population is modified by dividing it into disjoined subpopulations. From time to time, individuals are exchanged between the different subpopulations (migration). The influence of parameters and dedicated strategies is studied. These parameters are the number of independent subpopulations, the interconnection topology between subpopulations, the choice/replacement strategy of the migrant individuals, and the migration frequency. A comparison between the sequential and parallel versions of the genetic algorithm (GA) is provided. This comparison relates to the quality of the solution and the execution time of the two versions. The efficiency of the parallel model highly depends on the parameters and especially on the migration frequency. In the same way, this parallel model gives a significant improvement of computational time if it is implemented on a parallel architecture which offers an acceptable number of processors (as many processors as subpopulations).
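
    A minimal island-model sketch of the migration mechanism described above, in Python; the fitness function, population sizes and migration settings are toy choices, not the PGA_MIG configuration from the paper.

      import random

      def evolve(pop, fitness, n_keep=10):
          # One generation: truncation selection plus Gaussian mutation.
          pop.sort(key=fitness, reverse=True)
          parents = pop[:n_keep]
          children = [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
                      for _ in range(len(pop) - n_keep)]
          return parents + children

      def migrate(islands, fitness, n_migrants=2):
          # Ring topology: each island sends copies of its best individuals
          # to the next island, overwriting the tail of that population.
          bests = [sorted(isl, key=fitness, reverse=True)[:n_migrants]
                   for isl in islands]
          for k, isl in enumerate(islands):
              isl[-n_migrants:] = [list(ind) for ind in bests[(k - 1) % len(islands)]]

      fitness = lambda ind: -sum(x * x for x in ind)   # toy objective
      islands = [[[random.uniform(-5, 5) for _ in range(3)] for _ in range(30)]
                 for _ in range(4)]                    # 4 disjoint subpopulations

      for gen in range(1, 101):
          islands = [evolve(isl, fitness) for isl in islands]
          if gen % 10 == 0:                            # migration frequency
              migrate(islands, fitness)

      print(max(fitness(ind) for isl in islands for ind in isl))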

  8. Sequential 123I-iododexetimide scans in temporal lobe epilepsy: comparison with neuroimaging scans (MR imaging and 18F-FDG PET imaging)

    Energy Technology Data Exchange (ETDEWEB)

    Mohamed, Armin [Royal Prince Alfred Hospital, Department of PET and Nuclear Medicine, Camperdown, NSW (Australia); Royal Prince Alfred Hospital, Comprehensive Epilepsy Service, Camperdown, NSW (Australia); University of Sydney, Faculty of Medicine, Sydney, NSW (Australia); Eberl, Stefan; Henderson, David; Beveridge, Scott; Constable, Chris [Royal Prince Alfred Hospital, Department of PET and Nuclear Medicine, Camperdown, NSW (Australia); Fulham, Michael J. [Royal Prince Alfred Hospital, Department of PET and Nuclear Medicine, Camperdown, NSW (Australia); Kassiou, Michael [Royal Prince Alfred Hospital, Department of PET and Nuclear Medicine, Camperdown, NSW (Australia); University of Sydney, Department of Pharmacology, Sydney, NSW (Australia); Zaman, Aysha [University of Sydney, Faculty of Medicine, Sydney, NSW (Australia); Lo, Sing Kai [University of Sydney, Institute of International Health, Sydney, NSW (Australia)

    2005-02-01

    Muscarinic acetylcholine receptors (mAChRs) play an important role in the generation of seizures. Single-photon emission computed tomography (SPECT) with 123I-iododexetimide (IDEX) depicts tracer uptake by mAChRs. Our aims were to: (a) determine the optimum time for interictal IDEX SPECT imaging; (b) determine the accuracy of IDEX scans in the localisation of seizure foci when compared with video EEG and MR imaging in patients with temporal lobe epilepsy (TLE); (c) characterise the distribution of IDEX binding in the temporal lobes and (d) compare IDEX SPECT and 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) in identifying seizure foci. We performed sequential scans using IDEX SPECT imaging at 0, 3, 6 and 24 h in 12 consecutive patients with refractory TLE undergoing assessment for epilepsy surgery. Visual and region of interest analyses of the mesial, lateral and polar regions of the temporal lobes were used to compare IDEX SPECT, FDG PET and MR imaging in seizure onset localisation. The 6-h IDEX scan (92%; κ=0.83, p=0.003) was superior to the 0-h (36%; κ=0.01, p>0.05), 3-h (55%; κ=0.13, p>0.05) and 24-h IDEX scans in identifying the temporal lobe of seizure origin. The 6-h IDEX scan correctly predicted the temporal lobe of seizure origin in two patients who required intracranial EEG recordings to define the seizure onset. Reduced ligand binding was most marked at the temporal pole and mesial temporal structures. IDEX SPECT was superior to interictal FDG PET (75%; κ=0.66, p=0.023) in seizure onset localisation. MR imaging was non-localising in two patients in whom it was normal and in another patient in whom there was bilateral symmetrical hippocampal atrophy. The 6-h IDEX SPECT scan is a viable alternative to FDG PET imaging in seizure onset localisation in TLE. (orig.)

  9. Comparison of peripapillary retinal nerve fiber layer loss and visual outcome in fellow eyes following sequential bilateral non-arteritic anterior ischemic optic neuropathy.

    Science.gov (United States)

    Dotan, Gad; Kesler, Anat; Naftaliev, Elvira; Skarf, Barry

    2015-05-01

    To report on the correlation of structural damage to the axons of the optic nerve and visual outcome following bilateral non-arteritic anterior ischemic optic neuropathy. A retrospective review of the medical records of 25 patients with bilateral sequential non-arteritic anterior ischemic optic neuropathy was performed. Outcome measures were peripapillary retinal nerve fiber layer thickness measured with the Stratus optical coherence tomography scanner, visual acuity and visual field loss. Median peripapillary retinal nerve fiber layer (RNFL) thickness, mean deviation (MD) of visual field, and visual acuity of initially involved NAION eyes (54.00 µm, -17.77 decibels (dB), 0.4, respectively) were comparable to the same parameters measured following development of second NAION event in the other eye (53.70 µm, p = 0.740; -16.83 dB, p = 0.692; 0.4, p = 0.942, respectively). In patients with bilateral NAION, there was a significant correlation of peripapillary RNFL thickness (r = 0.583, p = 0.002) and MD of the visual field (r = 0.457, p = 0.042) for the pairs of affected eyes, whereas a poor correlation was found in visual acuity of these eyes (r = 0.279, p = 0.176). Peripapillary RNFL thickness following NAION was positively correlated with MD of visual field (r = 0.312, p = 0.043) and negatively correlated with logMAR visual acuity (r = -0.365, p = 0.009). In patients who experience bilateral NAION, the magnitude of RNFL loss is similar in each eye. There is a greater similarity in visual field loss than in visual acuity between the two affected eyes with NAION of the same individual.

  10. Sequential 123I-iododexetimide scans in temporal lobe epilepsy: comparison with neuroimaging scans (MR imaging and 18F-FDG PET imaging)

    International Nuclear Information System (INIS)

    Mohamed, Armin; Eberl, Stefan; Henderson, David; Beveridge, Scott; Constable, Chris; Fulham, Michael J.; Kassiou, Michael; Zaman, Aysha; Lo, Sing Kai

    2005-01-01

    Muscarinic acetylcholine receptors (mAChRs) play an important role in the generation of seizures. Single-photon emission computed tomography (SPECT) with 123I-iododexetimide (IDEX) depicts tracer uptake by mAChRs. Our aims were to: (a) determine the optimum time for interictal IDEX SPECT imaging; (b) determine the accuracy of IDEX scans in the localisation of seizure foci when compared with video EEG and MR imaging in patients with temporal lobe epilepsy (TLE); (c) characterise the distribution of IDEX binding in the temporal lobes and (d) compare IDEX SPECT and 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) in identifying seizure foci. We performed sequential scans using IDEX SPECT imaging at 0, 3, 6 and 24 h in 12 consecutive patients with refractory TLE undergoing assessment for epilepsy surgery. Visual and region of interest analyses of the mesial, lateral and polar regions of the temporal lobes were used to compare IDEX SPECT, FDG PET and MR imaging in seizure onset localisation. The 6-h IDEX scan (92%; κ=0.83, p=0.003) was superior to the 0-h (36%; κ=0.01, p>0.05), 3-h (55%; κ=0.13, p>0.05) and 24-h IDEX scans in identifying the temporal lobe of seizure origin. The 6-h IDEX scan correctly predicted the temporal lobe of seizure origin in two patients who required intracranial EEG recordings to define the seizure onset. Reduced ligand binding was most marked at the temporal pole and mesial temporal structures. IDEX SPECT was superior to interictal FDG PET (75%; κ=0.66, p=0.023) in seizure onset localisation. MR imaging was non-localising in two patients in whom it was normal and in another patient in whom there was bilateral symmetrical hippocampal atrophy. The 6-h IDEX SPECT scan is a viable alternative to FDG PET imaging in seizure onset localisation in TLE. (orig.)

  11. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    Science.gov (United States)

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches: approximate Bayesian computation via sequential Monte Carlo (ABC-SMC), to compute credible intervals for the parameters, and profile likelihood estimation (PLE), to improve the calculation of confidence intervals for the same parameters, the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) has the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the number of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.
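
    For readers unfamiliar with the ABC family, the sketch below shows plain rejection ABC, its simplest member, on a toy model; ABC-SMC refines this scheme with a sequence of shrinking tolerances and importance weights. Everything here (model, prior, tolerance) is an illustrative stand-in, not the paper's metabolic network.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate(theta, n=50):
          # Toy stand-in for the ODE model: data ~ Normal(theta, 1).
          return rng.normal(theta, 1.0, n)

      observed = simulate(2.0)

      def abc_rejection(prior_draw, distance, eps, n_accept=500):
          # Keep parameter draws whose simulated data land within eps of
          # the observations; the kept draws approximate the posterior.
          accepted = []
          while len(accepted) < n_accept:
              theta = prior_draw()
              if distance(simulate(theta), observed) < eps:
                  accepted.append(theta)
          return np.array(accepted)

      posterior = abc_rejection(
          prior_draw=lambda: rng.uniform(-5, 5),
          distance=lambda sim, obs: abs(sim.mean() - obs.mean()),
          eps=0.1,
      )
      print("approximate posterior mean:", posterior.mean().round(2))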

  12. Sequential memory: Binding dynamics

    Science.gov (United States)

    Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

    2015-10-01

    Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study the robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified network, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection, staying in a small neighborhood of it. We also show that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
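
    For reference, models of this family are built on the generalized Lotka-Volterra form (a standard statement of the equations, not the paper's exact coupling structure):

      \dot{x}_i = x_i \Bigl( \sigma_i - \sum_{j=1}^{N} \rho_{ij}\, x_j \Bigr), \qquad i = 1, \dots, N,

    where x_i >= 0 is the activity of mode i, sigma_i its growth rate, and the asymmetric inhibition matrix rho_{ij} is chosen so that the saddle equilibria and the heteroclinic trajectories joining them form the binding network.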

  13. Sequential Dependencies in Driving

    Science.gov (United States)

    Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.

    2012-01-01

    The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…

  14. Mining compressing sequential patterns

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and
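
    A toy illustration of the minimum description length idea in Python: encode the database by replacing occurrences of a candidate pattern with a single pointer symbol and charge for storing the pattern itself. The scoring is invented for illustration and far simpler than the encoding used in the paper.

      def description_length(db, pattern):
          # Toy MDL score: symbols remaining after replacing non-overlapping
          # occurrences of `pattern` with one pointer symbol, plus the cost
          # of storing the pattern itself (the "model").
          p = len(pattern)
          total = p                            # model cost
          for seq in db:
              i = 0
              while i < len(seq):
                  if seq[i:i + p] == pattern:
                      total += 1               # one pointer symbol
                      i += p
                  else:
                      total += 1               # literal symbol
                      i += 1
          return total

      db = [list("abcabcxy"), list("zabcz"), list("abcabc")]
      for cand in ("abc", "ab", "xy"):
          print(cand, description_length(db, list(cand)))
      # the candidate that compresses the database best gets the lowest score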

  15. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy with flattening necessitates all sub-computations to materialize at the same time. For example, naive n-by-n matrix multiplication requires n^3 space in NESL because the algorithm contains n^3 independent scalar multiplications; for large values of n, this is completely unacceptable. We address the problem by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequentially, fully in parallel, or anything in between. By a dataflow, piecewise-parallel execution strategy, the runtime system can adjust to any target.
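
    The space argument is easy to reproduce in miniature: below, the same bulk operation is evaluated once fully materialized and once as a stream; this is a hypothetical Python analogy, not NESL or Accelerate code.

      import itertools

      a = list(range(1000))
      b = list(range(1000))

      # Fully materializing strategy: all n^2 intermediate products exist at once.
      products = [x * y for x in a for y in b]
      total_eager = sum(products)

      # Streaming strategy: the same bulk operation consumed piecewise,
      # needing only O(1) extra space beyond the inputs.
      total_lazy = sum(x * y for x, y in itertools.product(a, b))

      assert total_eager == total_lazy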

  16. Parallel versus Sequential Processing in Print and Braille Reading

    Science.gov (United States)

    Veispak, Anneli; Boets, Bart; Ghesquiere, Pol

    2012-01-01

    In the current study we investigated word, pseudoword and story reading in Dutch-speaking braille and print readers. To examine developmental patterns, these reading skills were assessed in both children and adults. The results reveal that braille readers read less accurately and less fast than print readers. While item length has no impact on word…

  17. Fast parallel computation of polynomials using few processors

    DEFF Research Database (Denmark)

    Valiant, Leslie; Skyum, Sven

    1981-01-01

    It is shown that any multivariate polynomial that can be computed sequentially in C steps and has degree d can be computed in parallel in O((log d)(log C + log d)) steps using only (Cd)^O(1) processors.

  18. Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo

    Science.gov (United States)

    Khosravi, Ebrahim

    1998-12-01

    This dissertation solves a fundamental problem of isolating the real roots of nonlinear systems of equations by Monte Carlo, following an algorithm published by Bush Jones. This algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that the system is limited to a very small set of variables, and this makes it unfeasible for large systems of equations. A computational technique was also needed for investigating a methodology for preventing the algorithm structure from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm for this technique was corrected and a parallel algorithm is presented. This parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of the parallel processing of the program in comparison to sequential processing are discussed. The message-passing model was used for this parallel processing, and it is presented and implemented on an Intel i860 MIMD architecture. The parallel processing proposed in this research has been implemented in an ongoing high-energy physics experiment: this algorithm has been used to track neutrinos in a Super-K detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.
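
    Since the record gives only the flavor of the method, the sketch below is a generic Monte Carlo root-isolation toy (function values only): sample a box, keep points with near-zero residuals, and merge nearby hits into disjoint candidate regions. It is not Jones's algorithm nor the dissertation's parallel version, and the example system and tolerances are arbitrary.

      import math, random

      def F(x, y):
          # Example system: a circle and an exponential curve (two real roots).
          return (x * x + y * y - 4.0, y - math.exp(x) + 1.0)

      def mc_root_regions(f, lo, hi, tol=0.1, samples=300_000, merge=0.2):
          hits = []
          for _ in range(samples):
              x, y = random.uniform(lo, hi), random.uniform(lo, hi)
              if max(abs(v) for v in f(x, y)) < tol:    # near-zero residual
                  hits.append((x, y))
          regions = []                                   # greedy clustering
          for p in hits:
              for r in regions:
                  if math.dist(p, r[0]) < merge:
                      r.append(p)
                      break
              else:
                  regions.append([p])
          return [tuple(sum(c) / len(c) for c in zip(*r)) for r in regions]

      random.seed(0)
      print(mc_root_regions(F, -3.0, 3.0))   # two clusters, one per real root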

  19. Deep evolutionary comparison of gene expression identifies parallel recruitment of trans-factors in two independent origins of C4 photosynthesis.

    Directory of Open Access Journals (Sweden)

    Sylvain Aubry

    2014-06-01

    With at least 60 independent origins spanning monocotyledons and dicotyledons, the C4 photosynthetic pathway represents one of the most remarkable examples of convergent evolution. The recurrent evolution of this highly complex trait involving alterations to leaf anatomy, cell biology and biochemistry allows an increase in productivity by ∼ 50% in tropical and subtropical areas. The extent to which separate lineages of C4 plants use the same genetic networks to maintain C4 photosynthesis is unknown. We developed a new informatics framework to enable deep evolutionary comparison of gene expression in species lacking reference genomes. We exploited this to compare gene expression in species representing two independent C4 lineages (Cleome gynandra and Zea mays whose last common ancestor diverged ∼ 140 million years ago. We define a cohort of 3,335 genes that represent conserved components of leaf and photosynthetic development in these species. Furthermore, we show that genes encoding proteins of the C4 cycle are recruited into networks defined by photosynthesis-related genes. Despite the wide evolutionary separation and independent origins of the C4 phenotype, we report that these species use homologous transcription factors to both induce C4 photosynthesis and to maintain the cell specific gene expression required for the pathway to operate. We define a core molecular signature associated with leaf and photosynthetic maturation that is likely shared by angiosperm species derived from the last common ancestor of the monocotyledons and dicotyledons. We show that deep evolutionary comparisons of gene expression can reveal novel insight into the molecular convergence of highly complex phenotypes and that parallel evolution of trans-factors underpins the repeated appearance of C4 photosynthesis. Thus, exploitation of extant natural variation associated with complex traits can be used to identify regulators. Moreover, the transcription factors

  20. Deep evolutionary comparison of gene expression identifies parallel recruitment of trans-factors in two independent origins of C4 photosynthesis.

    Science.gov (United States)

    Aubry, Sylvain; Kelly, Steven; Kümpers, Britta M C; Smith-Unna, Richard D; Hibberd, Julian M

    2014-06-01

    With at least 60 independent origins spanning monocotyledons and dicotyledons, the C4 photosynthetic pathway represents one of the most remarkable examples of convergent evolution. The recurrent evolution of this highly complex trait involving alterations to leaf anatomy, cell biology and biochemistry allows an increase in productivity by ∼ 50% in tropical and subtropical areas. The extent to which separate lineages of C4 plants use the same genetic networks to maintain C4 photosynthesis is unknown. We developed a new informatics framework to enable deep evolutionary comparison of gene expression in species lacking reference genomes. We exploited this to compare gene expression in species representing two independent C4 lineages (Cleome gynandra and Zea mays) whose last common ancestor diverged ∼ 140 million years ago. We define a cohort of 3,335 genes that represent conserved components of leaf and photosynthetic development in these species. Furthermore, we show that genes encoding proteins of the C4 cycle are recruited into networks defined by photosynthesis-related genes. Despite the wide evolutionary separation and independent origins of the C4 phenotype, we report that these species use homologous transcription factors to both induce C4 photosynthesis and to maintain the cell specific gene expression required for the pathway to operate. We define a core molecular signature associated with leaf and photosynthetic maturation that is likely shared by angiosperm species derived from the last common ancestor of the monocotyledons and dicotyledons. We show that deep evolutionary comparisons of gene expression can reveal novel insight into the molecular convergence of highly complex phenotypes and that parallel evolution of trans-factors underpins the repeated appearance of C4 photosynthesis. Thus, exploitation of extant natural variation associated with complex traits can be used to identify regulators. Moreover, the transcription factors that are shared by

  1. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters.

  2. Sequential Power-Dependence Theory

    NARCIS (Netherlands)

    Buskens, Vincent; Rijt, Arnout van de

    2008-01-01

    Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike

  3. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  4. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme decoding with good performance is possible as low as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported by simulation results.

  5. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    The work in the field of parallel processing has developed around research activities using several numerical Monte Carlo simulations related to basic or applied current problems of nuclear and particle physics. For applications utilizing the GEANT code, development and improvement work was done on the parts simulating low-energy physical phenomena such as radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other work in this field refers to simulations for nuclear medicine applications such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness), and methods for dosimetric calculations. In particular, these calculations are suited for a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other work in the same field refers to simulation of electron channelling in crystals and simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment.

  6. Parallelizing More Loops with Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    2012-01-01

    We present an interactive compilation feedback system that guides programmers in iteratively modifying their application source code. This helps leverage the compiler’s ability to generate loop-parallel code. We employ our system to modify two sequential benchmarks dealing with image processing and edge detection...

  7. Learning and Parallelization Boost Constraint Search

    Science.gov (United States)

    Yun, Xi

    2013-01-01

    Constraint satisfaction problems are a powerful way to abstract and represent academic and real-world problems from both artificial intelligence and operations research. A constraint satisfaction problem is typically addressed by a sequential constraint solver running on a single processor. Rather than construct a new, parallel solver, this work…

  8. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  9. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  10. Sequential decay of Reggeons

    International Nuclear Information System (INIS)

    Yoshida, Toshihiro

    1981-01-01

    Probabilities of meson production in the sequential decay of Reggeons, which are formed from the projectile and the target in hadron-hadron to Reggeon-Reggeon processes, are investigated. It is assumed that pair creation of heavy quarks and simultaneous creation of two antiquark-quark pairs are negligible. The leading-order terms with respect to the ratio of creation probabilities of anti-ss to anti-uu (anti-dd) pairs are calculated. The production cross sections in the target fragmentation region are given in terms of probabilities in the initial decay of the Reggeons and an effect of many-particle production. (author)

  11. Leveraging Parallel Data Processing Frameworks with Verified Lifting

    Directory of Open Access Journals (Sweden)

    Maaz Bin Safeer Ahmad

    2016-11-01

    Many parallel data frameworks have been proposed in recent years that let sequential programs access parallel processing. To capitalize on the benefits of such frameworks, existing code must often be rewritten to the domain-specific languages that each framework supports. This rewriting, tedious and error-prone, also requires developers to choose the framework that best optimizes performance given a specific workload. This paper describes Casper, a novel compiler that automatically retargets sequential Java code for execution on Hadoop, a parallel data processing framework that implements the MapReduce paradigm. Given a sequential code fragment, Casper uses verified lifting to infer a high-level summary expressed in our program specification language that is then compiled for execution on Hadoop. We demonstrate that Casper automatically translates Java benchmarks into Hadoop. The translated results execute on average 3.3x faster than the sequential implementations and scale better, as well, to larger datasets.
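
    To illustrate the kind of retargeting Casper performs, here is a hand-written toy in Python (not Casper's output, its specification language, or Hadoop code): a sequential aggregation loop next to its map-reduce counterpart, whose associative reduction is what makes parallel execution legal.

      from functools import reduce
      from multiprocessing import Pool

      def square(x):                  # the 'map' phase: pure and element-wise
          return x * x

      if __name__ == "__main__":
          data = list(range(1_000_000))

          # Sequential fragment a tool like Casper would start from:
          total = 0
          for x in data:
              total += x * x

          # The same computation in the MapReduce paradigm:
          with Pool(4) as pool:
              mapped = pool.map(square, data, chunksize=10_000)
          total_mr = reduce(lambda a, b: a + b, mapped, 0)

          assert total == total_mr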

  12. A comparison of simple and realistic eye models for calculation of fluence to dose conversion coefficients in a broad parallel beam incident of protons

    International Nuclear Information System (INIS)

    Sakhaee, Mahmoud; Vejdani-Noghreiyan, Alireza; Ebrahimi-Khankook, Atiyeh

    2015-01-01

    Radiation-induced cataract has been demonstrated among people who are exposed to ionizing radiation. To evaluate the deterministic effects of ionizing radiation on the eye lens, several papers dealing with the eye lens dose have been published. ICRP Publication 103 states that the lens of the eye may be more radiosensitive than previously considered. Detailed investigation of the response of the lens showed that there are strong differences in sensitivity to ionizing radiation exposure with respect to cataract induction among the tissues of the lens of the eye. This motivated several groups to look deeper into the issue of the dose to a sensitive cell population within the lens, especially for radiations with low energy penetrability that have steep dose gradients inside the lens. Two sophisticated mathematical models of the eye, including its inner structure, have been designed for accurate dose estimation in recent years. This study focuses on the calculation of the absorbed doses of different parts of the eye using the stylized models located in the UF-ORNL phantom and a comparison with the data calculated with the reference computational phantom for a broad parallel beam of incident protons with energies between 20 MeV and 10 GeV. The obtained results indicate that the total lens absorbed doses of the reference phantom comply well with those of the more sensitive regions of the stylized models. However, the total eye absorbed doses of these models differ greatly from each other at lower energies. - Highlights: • The validation of reference data for the eye was studied for proton exposures. • Two realistic mathematical models of the eye were imported into the UF-ORNL phantom. • Fluence to dose conversion coefficients were calculated for different eye sections. • Obtained results were compared with those assessed by the ICRP adult male phantom

  13. Study on convective mixing for thermal striping phenomena. Thermal-hydraulic analyses on mixing process in parallel triple-jet and comparisons between numerical methods

    International Nuclear Information System (INIS)

    Kimura, Nobuyuki; Nishimura, Motohiko; Kamide, Hideki

    2000-03-01

    A quantitative evaluation of thermal striping, in which temperature fluctuation due to convective mixing among jets imposes thermal fatigue on structural components, is of importance for reactor safety. In the present study, a water experiment was performed on a parallel triple-jet: a cold jet at the center and hot jets on both sides. Three kinds of numerical analyses based on the finite difference method were carried out to compare their similarity with the experiment, each using a different treatment of turbulence: a k-ε two-equation turbulence model (k-ε Model), a low Reynolds number stress and heat flux equation model (LRSFM) and a direct numerical simulation (DNS). In the experiment, the jets were mainly mixed due to the coherent oscillation. The numerical result using the k-ε Model could not reproduce the coherent oscillating motion of the jets due to rolling-up fluid. The oscillations of the jets predicted by LRSFM and DNS were in good agreement with the experiment. The comparison between the coherent and random components in the experimental temperature fluctuation, obtained by phase-averaging, shows that the k-ε Model and LRSFM overestimated the random component and the coherent component, respectively. The ratios of coherent to random components in the total temperature fluctuation obtained from DNS were in good agreement with the experiment. The numerical analysis using DNS can reproduce the coherent oscillation of the jets and the coherent/random components in temperature fluctuation. (author)

  14. Pharmacokinetics and bioavailability of plant lignan 7-hydroxymatairesinol and effects on serum enterolactone and clinical symptoms in postmenopausal women: a single-blinded, parallel, dose-comparison study.

    Science.gov (United States)

    Udani, Jay K; Brown, Donald J; Tan, Maria Olivia C; Hardy, Mary

    2013-01-01

    7-Hydroxymatairesinol (7-HMR) is a naturally occurring plant lignan found in whole grains and the Norway spruce (Piciea abies). The purpose of this study was to evaluate the bioavailability of a proprietary 7-HMR product (HMRlignan, Linnea SA, Locarno, Switzerland) through measurement of lignan metabolites and metabolic precursors. A single-blind, parallel, pharmacokinetic and dose-comparison study was conducted on 22 postmenopausal females not receiving hormone replacement therapy. Subjects were enrolled in either a 36 mg/d (low-dose) or 72 mg/d (high-dose) regimen for 8 weeks. Primary measured outcomes included plasma levels of 7-HMR and enterolactone (ENL), and single-dose pharmacokinetic analysis was performed on a subset of subjects in the low-dose group. Safety data and adverse event reports were collected as well as data on hot flash frequency and severity. Pharmacokinetic studies demonstrated 7-HMR Cmax = 757.08 ng/ml at 1 hour and ENL Cmax = 4.8 ng/ml at 24 hours. From baseline to week 8, plasma 7-HMR levels increased by 191% in the low-dose group (p < 0.01) and by 1238% in the high-dose group (p < 0.05). Plasma ENL levels consistently increased as much as 157% from baseline in the low-dose group and 137% in the high-dose group. Additionally, the mean number of weekly hot flashes decreased by 50%, from 28.0/week to 14.3/week (p < 0.05) in the high-dose group. No significant safety issues were identified in this study. The results demonstrate that HMRlignan is quickly absorbed into the plasma and is metabolized to ENL in healthy postmenopausal women. Clinically, the data demonstrate a statistically significant improvement in hot flash frequency. Doses up to 72 mg/d HMRlignan for 8 weeks were safe and well tolerated in this population.

  15. Parallel computing for homogeneous diffusion and transport equations in neutronics; Calcul parallele pour les equations de diffusion et de transport homogenes en neutronique

    Energy Technology Data Exchange (ETDEWEB)

    Pinchedez, K

    1999-06-01

    Parallel computing meets the ever-increasing requirements for neutronic computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with the simplified PN transport equations by a mixed finite element method. Several parallel algorithms have been developed on distributed memory machines. The performances of the parallel algorithms have been studied experimentally by implementation on a Cray T3D and theoretically by complexity models. A comparison of various parallel algorithms has confirmed the chosen implementations. We next applied a domain sub-division technique to the two-group diffusion eigenvalue problem. In the modal synthesis-based method, the global spectrum is determined from the partial spectra associated with the sub-domains. The eigenvalue problem is then expanded on a family composed, on the one hand, of eigenfunctions associated with the sub-domains and, on the other hand, of functions corresponding to the contribution from the interface between the sub-domains. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)

  16. Synthetic Aperture Sequential Beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke

    2008-01-01

    A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB) suitable for 2D and 3D imaging is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data rather than on channel data. The objective is to improve and obtain a more range-independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First, a set of B-mode image lines using a single focal point in both transmit and receive is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF.
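
    A highly simplified sketch of the first-stage, fixed-focus beamforming step in Python: every channel gets one constant delay computed for the single focal point, and the delayed channels are summed into a pre-beamformed line. Geometry, sampling parameters and data are illustrative; the real SASB delay profile and its second stage are more involved.

      import numpy as np

      def first_stage_fixed_focus(rf, elem_x, z_f, c=1540.0, fs=40e6):
          # rf: channel data of shape (n_elements, n_samples);
          # elem_x: lateral element positions (m); z_f: focal depth (m).
          n_elem, n_samp = rf.shape
          # one-way geometric delay from the focal point to each element,
          # referenced to the centre of the aperture
          tau = (np.hypot(elem_x, z_f) - z_f) / c
          shifts = np.round(tau * fs).astype(int)
          line = np.zeros(n_samp)
          for k in range(n_elem):
              s = shifts[k]
              line[:n_samp - s] += rf[k, s:]   # advance channel k by s samples
          return line

      rng = np.random.default_rng(0)
      rf = rng.standard_normal((64, 2048))      # synthetic channel data
      elem_x = (np.arange(64) - 31.5) * 0.3e-3  # 64 elements, 0.3 mm pitch
      print(first_stage_fixed_focus(rf, elem_x, z_f=0.04)[:5])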

  17. Parallel SN transport calculations on a transputer network

    International Nuclear Information System (INIS)

    Kim, Yong Hee; Cho, Nam Zin

    1994-01-01

    A parallel computing algorithm for the neutron transport problems has been implemented on a transputer network and two reactor benchmark problems (a fixed-source problem and an eigenvalue problem) are solved. We have shown that the parallel calculations provided significant reduction in execution time over the sequential calculations

  18. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  19. Simulation Study of Real Time 3-D Synthetic Aperture Sequential Beamforming for Ultrasound Imaging

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian; Rasmussen, Morten Fischer; Stuart, Matthias Bo

    2014-01-01

    This paper presents a new beamforming method for real-time three-dimensional (3-D) ultrasound imaging using a 2-D matrix transducer. To obtain images with sufficient resolution and contrast, several thousand elements are needed. The proposed method reduces the required channel count from the transducer to the main system. The real-time imaging capability is achieved using a synthetic aperture beamforming technique, utilizing the transmit events to generate a set of virtual elements that in combination can generate an image. The two core capabilities in combination are named Synthetic Aperture Sequential Beamforming (SASB). Simulations are performed to evaluate the image quality of the presented method in comparison to parallel beamforming utilizing 16 receive beamformers. As indicators for image quality, the detail resolution and cystic resolution are determined for a set of scatterers at a depth of 90 mm...

  20. Parallel-In-Time For Moving Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.
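
    As a miniature of the idea, the Python sketch below runs a parareal iteration, the simplest relative of the multigrid-in-time scheme XBraid implements; the ODE, the propagators and the iteration count are toy choices. All fine propagations within one iteration are independent of each other, which is exactly where the temporal parallelism comes from.

      import numpy as np

      f = lambda y: -y                     # toy ODE: y' = -y, y(0) = 1

      def coarse(y, t0, t1):
          # Cheap propagator: a single forward-Euler step.
          return y + (t1 - t0) * f(y)

      def fine(y, t0, t1, substeps=100):
          # Expensive propagator: many small forward-Euler steps.
          h = (t1 - t0) / substeps
          for _ in range(substeps):
              y = y + h * f(y)
          return y

      T, n = 2.0, 8                        # time domain split into n chunks
      ts = np.linspace(0.0, T, n + 1)

      u = [1.0]                            # initial guess: coarse solve only
      for k in range(n):
          u.append(coarse(u[k], ts[k], ts[k + 1]))

      for _ in range(5):                   # parareal iterations
          Fk = [fine(u[k], ts[k], ts[k + 1]) for k in range(n)]   # parallel part
          Gk = [coarse(u[k], ts[k], ts[k + 1]) for k in range(n)]
          new = [1.0]
          for k in range(n):               # cheap sequential correction sweep
              new.append(coarse(new[k], ts[k], ts[k + 1]) + Fk[k] - Gk[k])
          u = new

      print("parareal:", round(u[-1], 6), " exact:", round(float(np.exp(-T)), 6))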

  1. Magnetic Field Emission Comparison for Series-Parallel and Series-Series Wireless Power Transfer to Vehicles – PART 1/2

    DEFF Research Database (Denmark)

    Batra, Tushar; Schaltz, Erik

    2014-01-01

    Resonant circuits of a wireless power transfer system can be designed in four possible ways by placing the primary and secondary capacitor in a series or parallel order with respect to the corresponding inductor. The two topologies under investigation, series-parallel and series-series, have already been compared in terms of their output behavior (current or voltage source) and the reflection of the secondary impedance on the primary side. In this paper it is shown that for the same power rating the series-parallel topology emits weaker magnetic fields to the surroundings than its series-series counterpart.

  2. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with explicit communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly ease parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming technique.
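
    For flavor, here is a tiny SPMD fragment with the explicit message passing that such precompilers aim to generate automatically; it uses mpi4py purely for illustration, since neither SVOPP's input language nor its generated code is shown in the record.

      # run with: mpiexec -n 4 python spmd_sum.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Each process owns one slice of a conceptually shared array; a
      # shared-variable precompiler would emit the reduction call below.
      local = range(rank * 100, (rank + 1) * 100)
      partial = sum(local)

      total = comm.allreduce(partial, op=MPI.SUM)   # explicit communication
      if rank == 0:
          print("global sum:", total)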

  3. Adaptive sequential controller

    Energy Technology Data Exchange (ETDEWEB)

    El-Sharkawi, Mohamed A. (Renton, WA); Xing, Jian (Seattle, WA); Butler, Nicholas G. (Newberg, OR); Rodriguez, Alonso (Pasadena, CA)

    1994-01-01

    An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.
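
    A toy of the adaptive compensation loop described above, in Python: the controller commands the breaker early by its current compensation, measures the residual offset of the transient from the zero crossing, and nudges the compensation toward the observed delay. The numbers and the exponential-smoothing update are illustrative stand-ins for whatever the patented adjustment circuit (102) implements.

      def run_switching_cycles(true_delay_ms=8.3, alpha=0.2, cycles=10):
          compensation = 5.0                 # initial guess of breaker delay (ms)
          for k in range(cycles):
              # residual offset: positive means the breaker still closed too late
              error = true_delay_ms - compensation
              compensation += alpha * error  # exponential-smoothing update
              print(f"cycle {k}: compensation = {compensation:.2f} ms, "
                    f"residual offset = {error:.2f} ms")

      run_switching_cycles()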

  4. Adaptive sequential controller

    Science.gov (United States)

    El-Sharkawi, Mohamed A.; Xing, Jian; Butler, Nicholas G.; Rodriguez, Alonso

    1994-01-01

    An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.
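    The timing principle of this controller lends itself to a compact software illustration. The following Python sketch is purely illustrative (the patent describes a hardware implementation; the class name, gain and timings are assumptions): the close command is issued early by the estimated response time, and that estimate is corrected from the measured closing error on each operation.

```python
# Minimal sketch of the adaptive timing idea behind the sequential
# controller: command the breaker early by its estimated response time,
# then correct that estimate from the observed closing error.
# All names and the smoothing constant are illustrative assumptions.

class AdaptiveBreakerTimer:
    def __init__(self, initial_response_s=0.008, gain=0.3):
        self.response_s = initial_response_s  # estimated mechanical delay
        self.gain = gain                      # smoothing factor for updates

    def command_time(self, next_zero_crossing_s):
        # Fire the close command early so contacts meet at the zero crossing.
        return next_zero_crossing_s - self.response_s

    def update(self, actual_contact_s, target_zero_s):
        # Error between intended and observed closing instant (in hardware
        # this comes from the phase-shift comparator circuit).
        error_s = actual_contact_s - target_zero_s
        # Adapt the response-time estimate to track aging and temperature.
        self.response_s += self.gain * error_s


timer = AdaptiveBreakerTimer()
t_cmd = timer.command_time(next_zero_crossing_s=1.000)
# Suppose the breaker actually closed 0.5 ms late on this operation:
timer.update(actual_contact_s=1.0005, target_zero_s=1.000)
print(f"updated response-time estimate: {timer.response_s * 1e3:.2f} ms")
```

    The proportional update plays the role of the adaptive adjustment circuit described above, slowly compensating for aging and ambient effects from one operation to the next.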

  5. X-ray computed tomography comparison of individual and parallel assembled commercial lithium iron phosphate batteries at end of life after high rate cycling

    Science.gov (United States)

    Carter, Rachel; Huhman, Brett; Love, Corey T.; Zenyuk, Iryna V.

    2018-03-01

    X-ray computed tomography (X-ray CT) across multiple length scales is utilized for the first time to investigate the physical abuse of high C-rate pulsed discharge on cells wired individually and in parallel. Manufactured lithium iron phosphate cells boasting high rate capability were pulse power tested in both wiring conditions with high discharge currents of 10C for a high number of cycles (up to 1200) until end of life. Degradation not apparent to state of health (SOH) monitoring methods is diagnosed using CT by rendering the interior current collector without harm or alteration to the active materials. Correlation of CT observations to the electrochemical pulse data from the parallel-wired cells reveals the risk of parallel wiring during high C-rate pulse discharge.

  6. Comparison of two commercial embryo culture media (SAGE-1 step single medium vs. G1-PLUS™/G2-PLUS™ sequential media): Influence on in vitro fertilization outcomes and human embryo quality.

    Science.gov (United States)

    López-Pelayo, Iratxe; Gutiérrez-Romero, Javier María; Armada, Ana Isabel Mangano; Calero-Ruiz, María Mercedes; Acevedo-Yagüe, Pablo Javier Moreno de

    2018-04-26

    To compare embryo quality, fertilization, implantation, miscarriage and clinical pregnancy rates for embryos cultured in two different commercial culture media until D-2 or D-3. In this retrospective study, we analyzed 189 cycles performed in 2016. Metaphase II oocytes were microinjected and allocated into single medium (SAGE 1-STEP, Origio) until transferred, frozen or discarded; or, if sequential media were used, the oocytes were cultured in G1-PLUS™ (Vitrolife) up to D-2 or D-3 and in G2-PLUS™ (Vitrolife) to transfer. On the following day, the oocytes were checked for normal fertilization and on D-2 and D-3 for morphological classification. Statistical analysis was performed using the chi-square and Mann-Whitney tests in PASW Statistics 18.0. The fertilization rates were 70.07% for single and 69.11% for sequential media (p=0.736). The mean number of embryos with high morphological quality (class A/B) was higher in the single medium than in the sequential media on both D-2 (class A: 190 vs. 107) and D-3. More embryos cultured in single medium were frozen: 197 (21.00%) vs. 102 (11.00%) in sequential media. Embryo culture in single medium yields greater efficiency per cycle than in sequential media. Higher embryo quality and quantity were achieved, resulting in more frozen embryos. There were no differences in clinical pregnancy rates.

  7. Image Quality of 3rd Generation Spiral Cranial Dual-Source CT in Combination with an Advanced Model Iterative Reconstruction Technique: A Prospective Intra-Individual Comparison Study to Standard Sequential Cranial CT Using Identical Radiation Dose.

    Science.gov (United States)

    Wenz, Holger; Maros, Máté E; Meyer, Mathias; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O; Flohr, Thomas; Leidecker, Christianne; Groden, Christoph; Scharf, Johann; Henzler, Thomas

    2015-01-01

    To prospectively intra-individually compare image quality of a 3rd generation Dual-Source-CT (DSCT) spiral cranial CT (cCT) to a sequential 4-slice Multi-Slice-CT (MSCT) while maintaining identical intra-individual radiation dose levels. 35 patients, who had a non-contrast enhanced sequential cCT examination on a 4-slice MDCT within the past 12 months, underwent a spiral cCT scan on a 3rd generation DSCT. CTDIvol identical to the initial 4-slice MDCT was applied. Data was reconstructed using filtered backward projection (FBP) and a 3rd-generation iterative reconstruction (IR) algorithm at 5 different IR strength levels. Two neuroradiologists independently evaluated subjective image quality using a 4-point Likert scale, and objective image quality was assessed in white matter and nucleus caudatus with signal-to-noise ratios (SNR) being subsequently calculated. Subjective image quality of all spiral cCT datasets was rated significantly higher compared to the 4-slice MDCT sequential acquisitions. SNR was significantly higher for spiral compared to sequential cCT datasets, with a mean SNR improvement of 61.65% (p*Bonferroni < 0.05). Spiral cCT with an advanced model IR technique significantly improves subjective and objective image quality compared to a standard sequential cCT acquisition acquired at identical dose levels.

  8. Magnetic Field Emission Comparison for Series-Parallel and Series-Series Wireless Power Transfer to Vehicles – PART 2/2

    DEFF Research Database (Denmark)

    Batra, Tushar; Schaltz, Erik

    2014-01-01

    Series-series and series-parallel topologies are the most favored topologies for the design of wireless power transfer systems for vehicle applications. The series-series topology has the advantage of reflecting only the resistive part on the primary side. On the other hand, the current source output characteristics of the series-parallel topology are more suited for the battery of the vehicle. This paper compares the two topologies in terms of magnetic emissions to the surroundings for the same input power, primary current, quality factor and inductors. Theoretical and simulation results show that the series...

  9. A soft sensor for bioprocess control based on sequential filtering of metabolic heat signals.

    Science.gov (United States)

    Paulsson, Dan; Gustavsson, Robert; Mandenius, Carl-Fredrik

    2014-09-26

    Soft sensors are the combination of robust on-line sensor signals with mathematical models for deriving additional process information. Here, we apply this principle to a microbial recombinant protein production process in a bioreactor by exploiting bio-calorimetric methodology. Temperature sensor signals from the cooling system of the bioreactor were used for estimating the metabolic heat of the microbial culture and from that the specific growth rate and active biomass concentration were derived. By applying sequential digital signal filtering, the soft sensor was made more robust for industrial practice with cultures generating low metabolic heat in environments with high noise level. The estimated specific growth rate signal obtained from the three stage sequential filter allowed controlled feeding of substrate during the fed-batch phase of the production process. The biomass and growth rate estimates from the soft sensor were also compared with an alternative sensor probe and a capacitance on-line sensor, for the same variables. The comparison showed similar or better sensitivity and lower variability for the metabolic heat soft sensor suggesting that using permanent temperature sensors of a bioreactor is a realistic and inexpensive alternative for monitoring and control. However, both alternatives are easy to implement in a soft sensor, alone or in parallel.

  10. A Soft Sensor for Bioprocess Control Based on Sequential Filtering of Metabolic Heat Signals

    Directory of Open Access Journals (Sweden)

    Dan Paulsson

    2014-09-01

    Full Text Available Soft sensors are the combination of robust on-line sensor signals with mathematical models for deriving additional process information. Here, we apply this principle to a microbial recombinant protein production process in a bioreactor by exploiting bio-calorimetric methodology. Temperature sensor signals from the cooling system of the bioreactor were used for estimating the metabolic heat of the microbial culture and from that the specific growth rate and active biomass concentration were derived. By applying sequential digital signal filtering, the soft sensor was made more robust for industrial practice with cultures generating low metabolic heat in environments with high noise level. The estimated specific growth rate signal obtained from the three stage sequential filter allowed controlled feeding of substrate during the fed-batch phase of the production process. The biomass and growth rate estimates from the soft sensor were also compared with an alternative sensor probe and a capacitance on-line sensor, for the same variables. The comparison showed similar or better sensitivity and lower variability for the metabolic heat soft sensor suggesting that using permanent temperature sensors of a bioreactor is a realistic and inexpensive alternative for monitoring and control. However, both alternatives are easy to implement in a soft sensor, alone or in parallel.
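    The three-stage sequential filtering idea described in these two records can be illustrated compactly. The sketch below is an assumption-laden illustration (stage order, window sizes and the smoothing constant are invented for the example, not taken from the paper): a median filter, a moving average and an exponential smoother are chained over a noisy synthetic heat signal, and the specific growth rate is then estimated from the smoothed signal.

```python
import numpy as np

# Illustrative three-stage sequential filter for a noisy metabolic heat
# signal, followed by a specific-growth-rate estimate. Stage order,
# window sizes and the smoothing constant are assumptions for this
# sketch; the paper's actual filter design may differ.

def median_filter(x, k=5):
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

def moving_average(x, k=9):
    return np.convolve(x, np.ones(k) / k, mode="same")

def exp_smooth(x, alpha=0.1):
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * x[i] + (1 - alpha) * y[i - 1]
    return y

# Synthetic exponential-growth heat signal (W) with heavy sensor noise.
t = np.linspace(0, 10, 500)                      # time, h
q_true = 0.5 * np.exp(0.25 * t)                  # metabolic heat
q_meas = q_true + np.random.normal(0, 0.3, t.size)

q_filt = exp_smooth(moving_average(median_filter(q_meas)))

# Under exponential growth, heat is proportional to biomass, so
# mu ~ d ln(q)/dt; clip to avoid logs of noisy non-positive values.
mu = np.gradient(np.log(np.clip(q_filt, 1e-6, None)), t)
print(f"estimated growth rate (late phase): {mu[-50:].mean():.3f} 1/h")
```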

  11. Quantum Inequalities and Sequential Measurements

    International Nuclear Information System (INIS)

    Candelpergher, B.; Grandouz, T.; Rubinx, J.L.

    2011-01-01

    In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples of Heisenberg and Bell inequalities: Results are found at some interesting variance with customary textbook materials, where the context of initial state re-initialization is described. A key-point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated to quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) do commute two by two. (authors)

  12. Data parallel sorting for particle simulation

    Science.gov (United States)

    Dagum, Leonardo

    1992-01-01

    Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O (N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
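    The O(N) sequential bound mentioned above is attained by counting sort over integer keys such as particle cell indices. A minimal sketch of that sequential core (a data-parallel version would sort blocks independently and then merge them, as the record describes):

```python
# Counting sort of particles by integer cell index -- the O(N) sequential
# bound cited above. A data-parallel version would sort blocks like this
# independently and then merge; this sketch shows the sequential core.

def counting_sort_by_cell(particles, num_cells):
    # particles: list of (cell_index, payload) tuples
    counts = [0] * num_cells
    for cell, _ in particles:
        counts[cell] += 1
    # Prefix sums give the first output slot for each cell.
    starts = [0] * num_cells
    for c in range(1, num_cells):
        starts[c] = starts[c - 1] + counts[c - 1]
    out = [None] * len(particles)
    for p in particles:                # stable placement, O(N) total
        out[starts[p[0]]] = p
        starts[p[0]] += 1
    return out

parts = [(3, "a"), (0, "b"), (3, "c"), (1, "d")]
print(counting_sort_by_cell(parts, num_cells=4))
# [(0, 'b'), (1, 'd'), (3, 'a'), (3, 'c')]
```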

  13. A comparison of parallel dust and fibre measurements of airborne chrysotile asbestos in a large mine and processing factories in the Russian Federation

    NARCIS (Netherlands)

    Feletto, Eleonora; Schonfeld, Sara J; Kovalevskiy, Evgeny V; Bukhtiyarov, Igor V; Kashanskiy, Sergey V; Moissonnier, Monika; Straif, Kurt; Kromhout, Hans

    2017-01-01

    INTRODUCTION: Historic dust concentrations are available in a large-scale cohort study of workers in a chrysotile mine and processing factories in Asbest, Russian Federation. Parallel dust (gravimetric) and fibre (phase-contrast optical microscopy) concentrations collected in 1995, 2007 and 2013/14

  14. Program For Parallel Discrete-Event Simulation

    Science.gov (United States)

    Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.

    1991-01-01

    User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.

  15. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

    An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python

  16. Sequentially pulsed traveling wave accelerator

    Science.gov (United States)

    Caporaso, George J [Livermore, CA; Nelson, Scott D [Patterson, CA; Poole, Brian R [Tracy, CA

    2009-08-18

    A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.
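    The synchronism condition is simple to state: each pulse forming line must fire as the beam front reaches its section, i.e. at t_i = z_i / v for section position z_i and beam velocity v. A tiny illustrative calculation follows (section spacing and beam velocity are assumed numbers, not values from the patent):

```python
# Trigger schedule for a sequentially pulsed accelerator: each switch
# fires when the beam front reaches its section, t_i = z_i / v.
# Section positions and beam velocity are illustrative assumptions.

C = 2.998e8                                 # speed of light, m/s
beam_velocity = 0.5 * C                     # assumed beam speed
section_positions = [0.0, 0.2, 0.4, 0.6]    # m along the beam tube

for i, z in enumerate(section_positions):
    t_ns = z / beam_velocity * 1e9
    print(f"switch {i}: fire at t = {t_ns:.3f} ns")
```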

  17. Parallel direct numerical simulation of turbulent flows in rotor-stator cavities. Comparison with k-ε modeling

    Energy Technology Data Exchange (ETDEWEB)

    Jacques, R.; Le Quere, P.; Daube, O. [Centre National de la Recherche Scientifique (CNRS), 91 - Orsay (France)

    1997-12-31

    Turbulent flows between a fixed disc and a rotating disc are encountered in various applications such as turbomachinery or the torque converters of automatic gear boxes. These flows are characterised by particular physical phenomena mainly due to the effects of rotation (Coriolis and inertia forces), and thus classical k-ε-type modeling gives approximate results. The aim of this work is to study these flows using direct numerical simulation in order to provide precise information about the statistical turbulent quantities and to improve the k-ε modeling in the industrial MATHILDA code of ONERA, used by the SNECMA company (aerospace industry). The results presented are restricted to the comparison between results obtained with direct simulation and results obtained with the MATHILDA code in the same configuration. (J.S.) 8 refs.

  18. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  19. Parallelization and automatic data distribution for nuclear reactor simulations

    International Nuclear Information System (INIS)

    Liebrock, L.M.

    1997-01-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed
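    As a toy illustration of the data parallel model emphasized in these two records, the sketch below partitions a 1-D diffusion grid across worker processes, each updating its own chunk with ghost values from its neighbours. This is an assumption-level illustration of domain decomposition, not code from the report; real reactor simulations involve far more physics and communication.

```python
import numpy as np
from multiprocessing import Pool

# Toy data-parallel update of a 1-D diffusion grid: each worker updates
# one chunk, receiving ghost values from its neighbours. Illustrative
# only -- grid size, step count and alpha are arbitrary assumptions.

def update_chunk(args):
    left_ghost, chunk, right_ghost, alpha = args
    padded = np.concatenate(([left_ghost], chunk, [right_ghost]))
    # Explicit finite-difference diffusion step on this chunk.
    return chunk + alpha * (padded[:-2] - 2 * chunk + padded[2:])

def parallel_step(grid, alpha=0.1, nworkers=4):
    chunks = np.array_split(grid, nworkers)
    tasks = []
    for i, c in enumerate(chunks):
        left = chunks[i - 1][-1] if i > 0 else c[0]
        right = chunks[i + 1][0] if i < nworkers - 1 else c[-1]
        tasks.append((left, c, right, alpha))
    with Pool(nworkers) as pool:
        return np.concatenate(pool.map(update_chunk, tasks))

if __name__ == "__main__":
    grid = np.zeros(64)
    grid[32] = 1.0                    # initial hot spot
    for _ in range(10):
        grid = parallel_step(grid)
    print(f"peak after 10 steps: {grid.max():.4f}")
```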

  20. In Vivo Evaluation of Synthetic Aperture Sequential Beamforming

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian; Hansen, Peter Møller; Lange, Theis

    2012-01-01

    Ultrasound in vivo imaging using synthetic aperture sequential beamformation (SASB) is compared with conventional imaging in a double blinded study using side-by-side comparisons. The objective is to evaluate if the image quality in terms of penetration depth, spatial resolution, contrast...

  1. Intra-individual diagnostic image quality and organ-specific radiation dose comparison between spiral cCT with iterative image reconstruction and z-axis automated tube current modulation and sequential cCT

    Directory of Open Access Journals (Sweden)

    Holger Wenz

    2016-01-01

    Conclusions: Spiral cCT combined with ATCM and IR allows for significant radiation dose reduction, including a reduced eye lens organ dose, when compared to a tilted sequential cCT, while improving subjective and objective image quality.

  2. Evaluation of parathyroid imaging methods with 99mTc-MIBI. The comparison of planar images obtained using a pinhole collimator and a parallel-hole collimator

    International Nuclear Information System (INIS)

    Fujii, Hirofumi; Iwasaki, Ryuichiro; Hashimoto, Jun; Nakamura, Kayoko; Kunieda, Etsuo; Sanmiya, Toshikazu; Kubo, Atsushi; Ogawa, Koichi; Inagaki, Kazutoshi

    1999-01-01

    Parathyroid scintigraphy with 99mTc-MIBI was performed using two kinds of collimators, namely a pinhole one and a parallel-hole one, to evaluate which was more suitable for the detection of hyperfunctioning parathyroid lesions. In studies using a 99mTc source, the pinhole collimator showed better efficiency and spatial resolution at the distances where parathyroid scans are actually performed. In the phantom study, the nodular activities modeling parathyroid lesions were visualized better on the images obtained using the pinhole collimator. In clinical studies of 30 patients with suspected hyperparathyroidism, hyperfunctioning parathyroid nodules were better detected when the pinhole collimator was used. In conclusion, the pinhole collimator was thought to be more suitable for parathyroid scintigraphy with 99mTc-MIBI than the parallel-hole collimator. (author)

  3. Evaluation of parathyroid imaging methods with {sup 99m}Tc-MIBI. The comparison of planar images obtained using a pinhole collimator and a parallel-hole collimator

    Energy Technology Data Exchange (ETDEWEB)

    Fujii, Hirofumi; Iwasaki, Ryuichiro; Hashimoto, Jun; Nakamura, Kayoko; Kunieda, Etsuo; Sanmiya, Toshikazu; Kubo, Atsushi [Keio Univ., Tokyo (Japan). School of Medicine; Ogawa, Koichi; Inagaki, Kazutoshi

    1999-07-01

    Parathyroid scintigraphy with {sup 99m}Tc-MIBI was performed using two kinds of collimators, namely a pinhole one and a parallel-hole one, to evaluate which was more suitable for the detection of hyperfunctioning parathyroid lesions. In studies using a {sup 99m}Tc source, the pinhole collimator showed better efficiency and spatial resolution at the distances where parathyroid scans are actually performed. In the phantom study, the nodular activities modeling parathyroid lesions were visualized better on the images obtained using the pinhole collimator. In clinical studies of 30 patients with suspected hyperparathyroidism, hyperfunctioning parathyroid nodules were better detected when the pinhole collimator was used. In conclusion, the pinhole collimator was thought to be more suitable for parathyroid scintigraphy with {sup 99m}Tc-MIBI than the parallel-hole collimator. (author)

  4. Automatic synthesis of sequential control schemes

    International Nuclear Information System (INIS)

    Klein, I.

    1993-01-01

    Of all hard- and software developed for industrial control purposes, the majority is devoted to sequential, or binary valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shut-down to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually without much theoretical support to obtain a systematic approach. We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simple problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough. This
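    The notion of a plan as a set of actions plus a partial order, executed in a maximally parallel fashion, can be made concrete with a small sketch (action names and structure are illustrative, not taken from the thesis): actions whose prerequisites are all satisfied may run concurrently in the same round.

```python
# Sketch: represent a plan as actions plus a partial order (dependency
# edges), then extract maximally parallel execution levels -- actions
# in the same level have all prerequisites met and can run concurrently.

def parallel_levels(actions, depends_on):
    remaining = set(actions)
    done = set()
    levels = []
    while remaining:
        ready = {a for a in remaining
                 if depends_on.get(a, set()) <= done}
        if not ready:
            raise ValueError("cycle in partial order")
        levels.append(sorted(ready))
        done |= ready
        remaining -= ready
    return levels

# Hypothetical start-up sequence where valves V1/V2 are independent.
deps = {"open_V1": set(), "open_V2": set(),
        "start_pump": {"open_V1", "open_V2"},
        "start_heater": {"start_pump"}}
print(parallel_levels(deps.keys(), deps))
# [['open_V1', 'open_V2'], ['start_pump'], ['start_heater']]
```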

  5. Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.

    Science.gov (United States)

    O'Connor, B P

    1999-11-01

    This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
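    The core statistics these programs compute are straightforward; a minimal Python sketch for lag-1 transitional frequencies and conditional probabilities from a stream of codes (the published SAS/SPSS programs compute many further statistics, such as adjusted residuals, Yule's Q and the kappas):

```python
from collections import Counter

# Lag-1 transitional frequencies and probabilities from a code stream --
# the core counts behind the lag-sequential statistics described above.

def lag1_transitions(codes):
    pairs = Counter(zip(codes, codes[1:]))
    given = Counter(codes[:-1])    # occurrences as the "given" code
    freqs = dict(pairs)
    probs = {(a, b): n / given[a] for (a, b), n in pairs.items()}
    return freqs, probs

stream = list("ABABBACAB")
freqs, probs = lag1_transitions(stream)
print(freqs[("A", "B")])             # how often B follows A -> 3
print(round(probs[("A", "B")], 3))   # P(B at t+1 | A at t) -> 0.75
```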

  6. Parallel programming practical aspects, models and current limitations

    CERN Document Server

    Tarkov, Mikhail S

    2014-01-01

    Parallel programming is designed for the use of parallel computer systems for solving time-consuming problems that cannot be solved on a sequential computer in a reasonable time. These problems can be divided into two classes: (1) processing large data arrays (including processing images and signals in real time), and (2) simulation of complex physical processes and chemical reactions. For each of these classes, prospective methods are designed for solving problems. For data processing, one of the most promising technologies is the use of artificial neural networks. The particle-in-cell method and cellular automata are very useful for simulation. Problems of scalability of parallel algorithms and the transfer of existing parallel programs to future parallel computers are very acute now. An important task is to optimize the use of the equipment (including the CPU cache) of parallel computers. Along with parallelizing information processing, it is essential to ensure the processing reliability by the relevant organization ...

  7. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  8. Remarks on sequential designs in risk assessment

    International Nuclear Information System (INIS)

    Seidenfeld, T.

    1982-01-01

    The special merits of sequential designs are reviewed in light of particular challenges that attend risk assessment for human populations. The kinds of "statistical inference" are distinguished, and the design problem pursued is the clash between the Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed, and Neyman-Pearson versus Bayesian sequential designs are probed in particular. Finally, caveats concerning sequential designs are considered, especially in relation to utilitarianism
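    A standard example of a Neyman-Pearson-style sequential design is Wald's sequential probability ratio test. The sketch below is illustrative only (hypotheses, error rates and data are invented, not taken from the paper):

```python
import math

# Wald's sequential probability ratio test (SPRT) for a binomial risk
# parameter -- an illustrative example of the sequential designs
# discussed above. All numbers are assumptions for the sketch.

def sprt(observations, p0=0.05, p1=0.15, alpha=0.05, beta=0.2):
    upper = math.log((1 - beta) / alpha)   # boundary: reject H0
    lower = math.log(beta / (1 - alpha))   # boundary: accept H0
    llr = 0.0
    for n, x in enumerate(observations, start=1):  # x = 1: adverse event
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return f"stop at n={n}: reject H0 (risk elevated)"
        if llr <= lower:
            return f"stop at n={n}: accept H0 (risk acceptable)"
    return "continue sampling"

print(sprt([0, 0, 1, 0, 1, 1, 0, 1, 1, 1]))
```

    The appeal for risk assessment is visible in the early-stopping rule: sampling ends as soon as the accumulated evidence crosses either boundary, typically well before a fixed-sample design would conclude.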

  9. Event-shape analysis: Sequential versus simultaneous multifragment emission

    International Nuclear Information System (INIS)

    Cebra, D.A.; Howden, S.; Karn, J.; Nadasen, A.; Ogilvie, C.A.; Vander Molen, A.; Westfall, G.D.; Wilson, W.K.; Winfield, J.S.; Norbeck, E.

    1990-01-01

    The Michigan State University 4π array has been used to select central-impact-parameter events from the reaction 40Ar + 51V at incident energies from 35 to 85 MeV/nucleon. The event shape in momentum space is an observable which is shown to be sensitive to the dynamics of the fragmentation process. A comparison of the experimental event-shape distribution to sequential- and simultaneous-decay predictions suggests that a transition in the breakup process may have occurred. At 35 MeV/nucleon, a sequential-decay simulation reproduces the data. For the higher energies, the experimental distributions fall between the two contrasting predictions

  10. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  11. Left Ventricular Function Evaluation on a 3T MR Scanner with Parallel RF Transmission Technique: Prospective Comparison of Cine Sequences Acquired before and after Gadolinium Injection.

    Science.gov (United States)

    Caspar, Thibault; Schultz, Anthony; Schaeffer, Mickaël; Labani, Aïssam; Jeung, Mi-Young; Jurgens, Paul Thomas; El Ghannudi, Soraya; Roy, Catherine; Ohana, Mickaël

    To compare cine MR b-TFE sequences acquired before and after gadolinium injection on a 3T scanner with a parallel RF transmission technique, in order to potentially improve scanning time efficiency when evaluating LV function. 25 consecutive patients scheduled for a cardiac MRI were prospectively included and had their b-TFE cine sequences acquired before and right after gadobutrol injection. Images were assessed qualitatively (overall image quality, LV edge sharpness, artifacts and LV wall motion) and quantitatively with measurement of LVEF, LV mass, end-diastolic volume and contrast-to-noise ratio (CNR) between the myocardium and the cardiac chamber. Statistical analysis was conducted using a Bayesian paradigm. No difference was found before or after injection for the LVEF, LV mass and end-diastolic volume evaluations. Overall image quality and CNR were significantly lower after injection (estimated coefficients cine after > cine before gadolinium: -1.75, CI = [-3.78; -0.0305], prob(coef>0) = 0%, and -0.23, CI = [-0.49; 0.04], prob(coef>0) = 4%, respectively), but this decrease did not affect the visual assessment of LV wall motion (cine after > cine before gadolinium: -1.46, CI = [-4.72; 1.13], prob(coef>0) = 15%). In 3T cardiac MRI acquired with a parallel RF transmission technique, qualitative and quantitative assessment of LV function can reliably be performed with cine sequences acquired after gadolinium injection, despite a significant decrease in the CNR and the overall image quality.

  12. Comparison of performance between a parallel and a series solar-heat pump system

    Energy Technology Data Exchange (ETDEWEB)

    Kanayama, K; Zhao, J; Baba, H; Endo, N [Kitami Institute of Technology, Hokkaido (Japan)

    1997-11-25

    In a solar heat pump system, a single-tank system was fabricated, in which a heat pump is installed in series between a heat collecting tank and a heat storage tank. At the same time, a double-tank system was also fabricated, in which two tanks are assembled into one to which a solar system and a heat pump are connected in parallel. Performance of both systems was analyzed by using measured values and estimated values. Heat collecting efficiency in the double-tank system is higher by about 13 points than in the single-tank system. Nevertheless, the coefficient of performance for the single-tank system is 1.03 to 1.51 times greater than that of the double-tank system. Dependency of the single-tank system on natural energy is higher by 0.3 to 3 points than the double-tank system. Putting the above facts together, it may be said that the single-tank system connecting the solar system and the heat pump in parallel is superior in performance to the double-tank system of the series connection. 3 refs., 5 figs., 2 tabs.

  13. Sequential versus simultaneous market delineation

    DEFF Research Database (Denmark)

    Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus

    2005-01-01

    Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential: first the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension will generally affect both the product and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that, compared to a sequential approach, conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: Relevant market, econometric delineation

  14. Sequential logic analysis and synthesis

    CERN Document Server

    Cavanagh, Joseph

    2007-01-01

    Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean algebra
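    The synchronous state-machine design flow the book covers can be previewed with a tiny example: a Moore-style machine that detects the input sequence 1-0-1, allowing overlaps. The sketch is illustrative and not drawn from the book:

```python
# A small synchronous sequential machine: Moore-style detector for the
# input sequence 1-0-1 (overlapping occurrences allowed). State names
# and encoding are illustrative.

TRANS = {  # (state, input_bit) -> next state
    ("S0", 0): "S0", ("S0", 1): "S1",
    ("S1", 0): "S2", ("S1", 1): "S1",
    ("S2", 0): "S0", ("S2", 1): "S3",
    ("S3", 0): "S2", ("S3", 1): "S1",
}
OUTPUT = {"S0": 0, "S1": 0, "S2": 0, "S3": 1}  # Moore output per state

def run(bits):
    state, outs = "S0", []
    for b in bits:
        state = TRANS[(state, b)]
        outs.append(OUTPUT[state])
    return outs

print(run([1, 0, 1, 0, 1, 1, 0, 1]))  # 1 marks each detected "101"
```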

  15. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The results of the comparison of parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead

  16. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The results of the comparison of parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead
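    Parallel performance models of the kind described in these two records typically decompose run time into a serial component, a parallelizable component divided by the processor count, and an overhead term that grows with the number of processors. A generic Amdahl-style sketch (all coefficients are invented; the actual TORT model is not reproduced in the abstract):

```python
# Generic parallel performance model of the kind described above:
# run time = serial part + parallel part / p + per-processor overhead.
# Coefficients are illustrative assumptions, not the TORT model.

def predicted_time(p, t_serial=2.0, t_parallel=98.0, t_comm=0.05):
    return t_serial + t_parallel / p + t_comm * p

for p in (1, 4, 16, 64):
    t = predicted_time(p)
    print(f"p={p:3d}  time={t:7.2f}  speedup={predicted_time(1) / t:5.2f}")
```

    A model of this shape makes the dominant overhead contributor visible: once the communication term overtakes the shrinking parallel term, adding processors stops paying off, which is exactly the kind of diagnosis the records describe.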

  17. A double blind parallel group placebo controlled comparison of sedative and mnesic effects of etifoxine and lorazepam in healthy subjects [corrected].

    Science.gov (United States)

    Micallef, J; Soubrouillard, C; Guet, F; Le Guern, M E; Alquier, C; Bruguerolle, B; Blin, O

    2001-06-01

    This paper describes the psychomotor and mnesic effects of single oral doses of etifoxine (50 and 100 mg) and lorazepam (2 mg) in healthy subjects. Forty-eight healthy subjects were included in this randomized double blind, placebo controlled parallel group study [corrected]. The effects of drugs were assessed by using a battery of subjective and objective tests that explored mood and vigilance (Visual Analog Scale), attention (Barrage test), psychomotor performance (Choice Reaction Time) and memory (digit span, immediate and delayed free recall of a word list). Whereas vigilance, psychomotor performance and free recall were significantly impaired by lorazepam, neither dosage of etifoxine (50 and 100 mg) produced such effects. These results suggest that 50 and 100 mg single dose of etifoxine do not induce amnesia and sedation as compared to lorazepam.

  18. A comparison of long-term parallel measurements of sunshine duration obtained with a Campbell-Stokes sunshine recorder and two automated sunshine sensors

    Science.gov (United States)

    Baumgartner, D. J.; Pötzi, W.; Freislich, H.; Strutzmann, H.; Veronig, A. M.; Foelsche, U.; Rieder, H. E.

    2017-06-01

    In recent decades, automated sensors for sunshine duration (SD) measurements have been introduced in meteorological networks, thereby replacing traditional instruments, most prominently the Campbell-Stokes (CS) sunshine recorder. Parallel records of automated and traditional SD recording systems are rare. Nevertheless, such records are important to understand the differences/similarities in SD totals obtained with different instruments and how changes in monitoring device type affect the homogeneity of SD records. This study investigates the differences/similarities in parallel SD records obtained with a CS and two automated SD sensors between 2007 and 2016 at the Kanzelhöhe Observatory, Austria. Comparing individual records of daily SD totals, we find differences of both positive and negative sign, with smallest differences between the automated sensors. The larger differences between CS-derived SD totals and those from automated sensors can be attributed (largely) to the higher sensitivity threshold of the CS instrument. Correspondingly, the closest agreement among all sensors is found during summer, the time of year when sensitivity thresholds are least critical. Furthermore, we investigate the performance of various models to create the so-called sensor-type-equivalent (STE) SD records. Our analysis shows that regression models including all available data on daily (or monthly) time scale perform better than simple three- (or four-) point regression models. Despite general good performance, none of the considered regression models (of linear or quadratic form) emerges as the "optimal" model. Although STEs prove useful for relating SD records of individual sensors on daily/monthly time scales, this does not ensure that STE (or joint) records can be used for trend analysis.

  19. Late gadolinium enhancement cardiac imaging on a 3T scanner with parallel RF transmission technique: prospective comparison of 3D-PSIR and 3D-IR

    International Nuclear Information System (INIS)

    Schultz, Anthony; Caspar, Thibault; Schaeffer, Mickael; Labani, Aissam; Jeung, Mi-Young; El Ghannudi, Soraya; Roy, Catherine; Ohana, Mickael

    2016-01-01

    To qualitatively and quantitatively compare different late gadolinium enhancement (LGE) sequences acquired at 3T with a parallel RF transmission technique. One hundred and sixty participants prospectively enrolled underwent a 3T cardiac MRI with 3 different LGE sequences: 3D Phase-Sensitive Inversion-Recovery (3D-PSIR) acquired 5 minutes after injection, 3D Inversion-Recovery (3D-IR) at 9 minutes and 3D-PSIR at 13 minutes. All LGE-positive patients were qualitatively evaluated both independently and blindly by two radiologists using a 4-level scale, and quantitatively assessed with measurement of contrast-to-noise ratio and LGE maximal surface. Statistical analyses were calculated under a Bayesian paradigm using MCMC methods. Fifty patients (70 % men, 56yo ± 19) exhibited LGE (62 % were post-ischemic, 30 % related to cardiomyopathy and 8 % post-myocarditis). Early and late 3D-PSIR were superior to 3D-IR sequences (global quality, estimated coefficient IR > early-PSIR: -2.37 CI = [-3.46; -1.38], prob(coef > 0) = 0 % and late-PSIR > IR: 3.12 CI = [0.62; 4.41], prob(coef > 0) = 100 %), LGE surface estimated coefficient IR > early-PSIR: -0.09 CI = [-1.11; -0.74], prob(coef > 0) = 0 % and late-PSIR > IR: 0.96 CI = [0.77; 1.15], prob(coef > 0) = 100 %. Probabilities for late PSIR being superior to early PSIR concerning global quality and CNR were over 90 %, regardless of the aetiological subgroup. In 3T cardiac MRI acquired with parallel RF transmission technique, 3D-PSIR is qualitatively and quantitatively superior to 3D-IR. (orig.)

  20. Evaluation Using Sequential Trials Methods.

    Science.gov (United States)

    Cohen, Mark E.; Ralls, Stephen A.

    1986-01-01

    Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)

  1. Attack Trees with Sequential Conjunction

    NARCIS (Netherlands)

    Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando

    2015-01-01

    We provide the first formal foundation of SAND attack trees, which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of
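    The added expressivity of SAND can be made concrete: OR succeeds if any child succeeds, AND if all children succeed in any order, while SAND requires its children to succeed in the stated sequence. The toy interpreter below gives one executable reading of this (an illustration only; the paper develops a formal semantics, not code):

```python
# Toy interpreter for attack trees with OR, AND and sequential AND
# (SAND). Leaves carry the time at which the basic attack step succeeds
# (None = step failed); SAND additionally requires its children to
# succeed in order. This executable reading is an assumption of the
# sketch, not the paper's formal semantics.

def evaluate(node):
    kind = node[0]
    if kind == "leaf":
        return node[1]                       # success time or None
    times = [evaluate(child) for child in node[1:]]
    if kind == "OR":
        ok = [t for t in times if t is not None]
        return min(ok) if ok else None
    if any(t is None for t in times):
        return None                          # AND and SAND need all children
    if kind == "AND":
        return max(times)
    if kind == "SAND":                       # children must occur in order
        return times[-1] if times == sorted(times) else None

# SAND: obtain credentials *before* logging in.
tree = ("SAND", ("leaf", 2.0), ("OR", ("leaf", 5.0), ("leaf", None)))
print(evaluate(tree))   # 5.0 -> the attack succeeds in sequence
```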

  2. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups from around the world to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high-performance multicomputers that have emerged over the last decade, which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers being widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  3. Simultaneous optimization of sequential IMRT plans

    International Nuclear Information System (INIS)

    Popple, Richard A.; Prellop, Perri B.; Spencer, Sharon A.; Santos, Jennifer F. de los; Duan, Jun; Fiveash, John B.; Brezovich, Ivan A.

    2005-01-01

    plans was equivalent to the independently optimized plans actually used for treatment. Tolerance doses of the critical structures were respected for the plan sum; however, the dose to critical structures for the individual initial and boost plans was different between the simultaneously optimized and the independently optimized plans. In conclusion, we have demonstrated a method for optimization of initial and boost plans that treat volume reductions using the same dose per fraction. The method is efficient, as it avoids the iterative approach necessitated by currently available TPSs, and is generalizable to more than two treatment phases. Comparison with clinical plans developed independently suggests that current manual techniques for planning sequential treatments may be suboptimal

  4. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  5. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  6. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
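    Hoare's quicksort with a random pivot is the canonical example: randomization makes the expected O(n log n) bound hold for every input, rather than only on average over inputs. A minimal sketch:

```python
import random

# Randomized quicksort -- the classic example cited above: a random
# pivot makes the expected O(n log n) running time hold for every
# input, instead of only on average over an assumed input distribution.

def rquicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return rquicksort(less) + equal + rquicksort(greater)

print(rquicksort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]
```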

  7. Comparing and Optimising Parallel Haskell Implementations for Multicore Machines

    DEFF Research Database (Denmark)

    Berthold, Jost; Marlow, Simon; Hammond, Kevin

    2009-01-01

    In this paper, we investigate the differences and tradeoffs imposed by two parallel Haskell dialects running on multicore machines. GpH and Eden are both constructed using the highly-optimising sequential GHC compiler, and share thread scheduling, and other elements, from a common code base. The ...

  8. Fast Parallel Computation of Polynomials Using Few Processors

    DEFF Research Database (Denmark)

    Valiant, Leslie G.; Skyum, Sven; Berkowitz, S.

    1983-01-01

    It is shown that any multivariate polynomial of degree $d$ that can be computed sequentially in $C$ steps can be computed in parallel in $O((\log d)(\log C + \log d))$ steps using only $(Cd)^{O(1)}$ processors.

  9. Computation of watersheds based on parallel graph algorithms

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.; Maragos, P; Schafer, RW; Butt, MA

    1996-01-01

    In this paper the implementation of a parallel watershed algorithm is described. The algorithm has been implemented on a Cray J932, which is a shared memory architecture with 32 processors. The watershed transform has generally been considered to be inherently sequential, but recently a few research

  10. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    Reliability-based design of structural systems is considered, especially systems where the reliability model is a series system of parallel systems. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization

  11. The BLAZE language - A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, Piyush; Van Rosendale, John

    1987-01-01

    A Pascal-like scientific programming language, BLAZE, is described. BLAZE contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus BLAZE should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of BLAZE is portability across a broad range of parallel architectures. The multiple levels of parallelism present in BLAZE code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of BLAZE are described and it is shown how this language would be used in typical scientific programming.
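    BLAZE itself is not generally available, but the flavor of array arithmetic with forall-style parallelism and accumulation operators can be suggested by analogy with NumPy, where whole-array expressions stand in for fine-grained parallel loops (an analogy only, not BLAZE syntax):

```python
import numpy as np

# Analogy only: whole-array expressions express the same fine-grained,
# conceptually sequential parallelism as BLAZE's array arithmetic and
# forall loops. This is NumPy, not BLAZE syntax.

a = np.arange(10.0)
b = np.ones(10)

c = a * 2.0 + b            # "forall i: c[i] = a[i]*2 + b[i]"
total = c.sum()            # APL-style accumulation (a reduction)
print(c[:4], total)
```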

  12. The BLAZE language: A parallel language for scientific programming

    Science.gov (United States)

    Mehrotra, P.; Vanrosendale, J.

    1985-01-01

    A Pascal-like scientific programming language, Blaze, is described. Blaze contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus Blaze should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of Blaze is portability across a broad range of parallel architectures. The multiple levels of parallelism present in Blaze code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of Blaze are described, and it is shown how this language would be used in typical scientific programming.

  13. A comparison of two treatments for childhood apraxia of speech: methods and treatment protocol for a parallel group randomised control trial

    Directory of Open Access Journals (Sweden)

    Murray Elizabeth

    2012-08-01

    Full Text Available Abstract. Background: Childhood Apraxia of Speech is an impairment of speech motor planning that manifests as difficulty producing the sounds (articulation) and melody (prosody) of speech. These difficulties may persist through life and are detrimental to academic, social, and vocational development. A number of published single subject and case series studies of speech treatments are available. There are currently no randomised control trials or other well designed group trials available to guide clinical practice. Methods/Design: A parallel group, fixed size randomised control trial will be conducted in Sydney, Australia to determine the efficacy of two treatments for Childhood Apraxia of Speech: (1) Rapid Syllable Transition Treatment and (2) the Nuffield Dyspraxia Programme – Third Edition. Eligible children will be English speaking, aged 4–12 years with a diagnosis of suspected CAS, normal or adjusted hearing and vision, and no comprehension difficulties or other developmental diagnoses. At least 20 children will be randomised to receive one of the two treatments in parallel. Treatments will be delivered by trained and supervised speech pathology clinicians using operationalised manuals. Treatment will be administered in 1-hour sessions, 4 times per week for 3 weeks. The primary outcomes are speech sound and prosodic accuracy on a customised 292 item probe and the Diagnostic Evaluation of Articulation and Phonology inconsistency subtest administered prior to treatment and 1 week, 1 month and 4 months post-treatment. All post assessments will be completed by blinded assessors. Our hypotheses are: (1) treatment effects at 1 week post will be similar for both treatments, (2) maintenance of treatment effects at 1 and 4 months post will be greater for Rapid Syllable Transition Treatment than Nuffield Dyspraxia Programme treatment, and (3) generalisation of treatment effects to untrained related speech behaviours will be greater for Rapid

  14. Multicenter, double-blind, parallel group study investigating the non-inferiority of efficacy and safety of a 2% miconazole nitrate shampoo in comparison with a 2% ketoconazole shampoo in the treatment of seborrhoeic dermatitis of the scalp.

    Science.gov (United States)

    Buechner, Stanislaw A

    2014-06-01

    This study investigated the non-inferiority of efficacy and tolerance of 2% miconazole nitrate shampoo in comparison with 2% ketoconazole shampoo in the treatment of scalp seborrheic dermatitis. A randomized, double-blind, comparative, parallel group, multicenter study was done. A total of 274 patients (145 miconazole, 129 ketoconazole) were enrolled. Treatment was twice-weekly for 4 weeks. Safety and efficacy assessments were made at baseline and at weeks 2 and 4. Assessments included symptoms of erythema, itching, scaling ['Symptom Scale of Seborrhoeic Dermatitis' (SSSD)], disease severity and global change [Clinical Global Impressions (CGIs) and Patient Global Impressions (PGIs)]. Miconazole shampoo is at least as effective and safe as ketoconazole shampoo in treating seborrheic dermatitis of the scalp.

  15. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  16. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  17. The distribution of deformation in parallel fault-related folds with migrating axial surfaces: comparison between fault-propagation and fault-bend folding

    Science.gov (United States)

    Salvini, Francesco; Storti, Fabrizio

    2001-01-01

    In fault-related folds that form by axial surface migration, rocks undergo deformation as they pass through axial surfaces. The distribution and intensity of deformation in these structures have been impacted by the history of axial surface migration. Upon fold initiation, unique dip panels develop, each with a characteristic deformation intensity, depending on their history. During fold growth, rocks that pass through axial surfaces are transported between dip panels and accumulate additional deformation. By tracking the pattern of axial surface migration in model folds, we predict the distribution of relative deformation intensity in simple-step, parallel fault-bend and fault-propagation anticlines. In both cases the deformation is partitioned into unique domains we call deformation panels. For a given rheology of the folded multilayer, deformation intensity will be homogeneously distributed in each deformation panel. Fold limbs are always deformed. The flat crests of fault-propagation anticlines are always undeformed. Two asymmetric deformation panels develop in fault-propagation folds above ramp angles exceeding 29°. For lower ramp angles, an additional, more intensely deformed panel develops at the transition between the crest and the forelimb. Deformation in the flat crests of fault-bend anticlines occurs when fault displacement exceeds the length of the footwall ramp, but is never found immediately hinterland of the crest-to-forelimb transition. In environments dominated by brittle deformation, our models may serve as a first-order approximation of the distribution of fractures in fault-related folds.

  18. Comparison of pacing algorithms to avoid unnecessary ventricular pacing in patients with sick sinus node syndrome: a single-centre, observational, parallel study.

    Science.gov (United States)

    Poghosyan, Hermine R; Jamalyan, Smbat V

    2012-10-01

    Reduction of unnecessary ventricular pacing (uVP) is an essential component in the treatment strategy in any pacing population in general. The aim of this study was to evaluate the efficacy of different algorithms to reduce uVP in an adult population with sick sinus syndrome (SSS) treated outside of clinical trials. Evaluation of the relationship between different types of pacing algorithms and clinical outcomes is also provided. This was a single-centre, observational, parallel study, based on retrospective analysis of the Arrhythmology Cardiology Center of Armenia electronic clinical database. This study evaluated atrial pacing percentage (AP%), ventricular pacing percentage (VP%), and the incidence of atrial high rate episodes in 56 patients with SSS using three different pacing strategies: managed VP, search atrioventricular (AV), and fixed long AV. We did not find statistically significant differences in the amount of VP between the groups. Although the atrial high rate percentage (AHR%) tended to be higher in the fixed long AV group, this difference was not statistically significant. Mean VP% and AP% were similar in all three groups. In our study, all three programmed strategies produced the same mean AP% and VP%, and were equally efficient in uVP reduction. There was no relationship between chosen algorithms and the incidence of pacemaker syndrome, hospitalizations, or change in New York Heart Association class. The percentage of AHR was not associated with pacing strategy or co-morbidities but showed borderline correlation with left atrial size.

  19. [Comparison of benazepril monotherapy to amlodipine plus benazepril in the treatment of patients with mild and moderate hypertension: a multicentre, randomized, double-blind, parallel-controlled study].

    Science.gov (United States)

    Fan, Chao-mei; Yan, Li-rong; Tao, Yong-kang; Wang, Li; Li, Yu-qing; Gao, Ming-ming; Wang, Yan-ni; Li, Cheng-xiang; Wang, Xiao-wan; Lu, Xiao-lei; Pang, Hui-min; Li, Yi-shi

    2011-01-01

    To evaluate the efficacy and tolerability of the fixed combination of amlodipine 5 mg/benazepril 10 mg once-daily therapy, compared with benazepril 10 mg monotherapy, in patients with mild and moderate hypertension, and to evaluate the 24 h antihypertensive efficacy and the duration of action by ambulatory blood pressure monitoring. In a multicenter, randomized, double-blind, parallel controlled trial, 356 hypertensive patients underwent a 2-week wash-out and were then given 4 weeks of benazepril 10 mg monotherapy; 220 patients whose mean seated diastolic blood pressure (SeDBP) remained ≥ 90 mm Hg (1 mm Hg = 0.133 kPa) were randomly divided into a benazepril 10 mg/amlodipine 5 mg (BZ10/AML5) fixed-dose combination therapy group (once a day, n = 113) and a benazepril monotherapy group (daily 20 mg, n = 107). In the two groups, the patients with SeDBP ≥ 90 mm Hg at the end of the 4-week treatment had the dosage of the initial regimen doubled for an additional 4 weeks, and the patients with SeDBP < 90 mm Hg continued on the initial regimen. The response rates of benazepril/amlodipine (10 mg/5 mg) and benazepril (20 mg) alone were 83.1%/76.0% and 85.8%/79.5%, respectively. Adverse event rates were 16.8% in the combination therapy group and 35.5% in the monotherapy group (P < 0.05). The fixed combination of benazepril/amlodipine was superior to benazepril monotherapy and was well tolerated in patients with essential hypertension, allowing a satisfactory BP control for 24 hours.

  20. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well

  1. Parallel Tempering of Dark Matter from the Ebola Virus Proteome: Comparison of CHARMM36m and CHARMM22 Force Fields with Implicit Solvent.

    Science.gov (United States)

    Olson, Mark A

    2018-01-22

    Intrinsically disordered proteins are characterized by their large manifold of thermally accessible conformations and their related statistical weights, making them an interesting target of simulation studies. To assess the development of a computational framework for modeling this distinct class of proteins, this work examines temperature-based replica-exchange simulations to generate a conformational ensemble of a 28-residue peptide from the Ebola virus protein VP35. Starting from a prefolded helix-β-turn-helix topology observed in a crystallographic assembly, the simulation strategy tested is the recently refined CHARMM36m force field combined with a generalized Born solvent model. A comparison of two replica-exchange methods is provided, where one is a traditional approach with a fixed set of temperatures and the other is an adaptive scheme in which the thermal windows are allowed to move in temperature space. The assessment is further extended to include a comparison with equivalent CHARMM22 simulation data sets. The analysis finds CHARMM36m to shift the minimum in the potential of mean force (PMF) to a lower fractional helicity compared with CHARMM22, while the latter showed greater conformational plasticity along the helix-forming reaction coordinate. Among the simulation models, only the adaptive tempering method with CHARMM36m found an ensemble of conformational heterogeneity consisting of transitions between α-helix-β-hairpin folds and unstructured states that produced a PMF of fractional fold propensity in qualitative agreement with circular dichroism experiments reporting a disordered peptide.
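
    The replica-exchange machinery referred to above reduces to a simple acceptance rule. The sketch below is a minimal illustration, not the CHARMM workflow used in the study: it applies the standard Metropolis swap criterion between neighbouring temperature replicas, with the energies and the temperature ladder as placeholder assumptions.

        import math, random

        def attempt_swaps(energies, temps):
            """One sweep of neighbour swaps in parallel tempering (k_B = 1).

            energies[i] is the current potential energy of the replica at
            temps[i]; swapping energies here stands for swapping whole
            configurations. A swap of replicas i and j = i+1 is accepted
            with probability min(1, exp((1/T_i - 1/T_j) * (E_i - E_j))).
            """
            for i in range(len(temps) - 1):
                delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * \
                        (energies[i] - energies[i + 1])
                if delta >= 0 or random.random() < math.exp(delta):
                    energies[i], energies[i + 1] = energies[i + 1], energies[i]
            return energies

        # toy usage: four replicas on a fixed temperature ladder
        print(attempt_swaps([12.0, 10.5, 9.1, 8.0], [1.0, 1.5, 2.2, 3.3]))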

  2. Sequential Probability Ratio Tests : Conservative and Robust

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Shi, Wen

    2017-01-01

    In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output
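
    Wald's classic SPRT, the procedure investigated here, is short enough to state in code. The sketch below is a minimal illustration assuming normally distributed outputs with known variance (an assumption of this example, not necessarily of the paper): it accumulates the log-likelihood ratio for two hypothesized means and stops at thresholds derived from the target error rates.

        import math

        def sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
            """Wald's SPRT for H0: mean = mu0 versus H1: mean = mu1.

            samples: iterable of observations, assumed N(mean, sigma^2).
            Returns ('H0' | 'H1' | 'undecided', observations consumed).
            """
            upper = math.log((1 - beta) / alpha)   # cross above: accept H1
            lower = math.log(beta / (1 - alpha))   # cross below: accept H0
            llr, n = 0.0, 0
            for x in samples:
                n += 1
                # log-likelihood-ratio increment log f1(x) - log f0(x)
                llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
                if llr >= upper:
                    return 'H1', n
                if llr <= lower:
                    return 'H0', n
            return 'undecided', n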

  3. Comparison of Generated Parallel Capillary Arrays to Three-Dimensional Reconstructed Capillary Networks in Modeling Oxygen Transport in Discrete Microvascular Volumes

    Science.gov (United States)

    Fraser, Graham M.; Goldman, Daniel; Ellis, Christopher G.

    2013-01-01

    Objective We compare Reconstructed Microvascular Networks (RMN) to Parallel Capillary Arrays (PCA) under several simulated physiological conditions to determine how the use of different vascular geometry affects oxygen transport solutions. Methods Three discrete networks were reconstructed from intravital video microscopy of rat skeletal muscle (84×168×342 μm, 70×157×268 μm and 65×240×571 μm) and hemodynamic measurements were made in individual capillaries. PCAs were created based on statistical measurements from RMNs. Blood flow and O2 transport models were applied and the resulting solutions for RMN and PCA models were compared under 4 conditions (rest, exercise, ischemia and hypoxia). Results Predicted tissue PO2 was consistently lower in all RMN simulations compared to the paired PCA. PO2 for 3D reconstructions at rest was 28.2±4.8, 28.1±3.5, and 33.0±4.5 mmHg for networks I, II, and III compared to the PCA mean values of 31.2±4.5, 30.6±3.4, and 33.8±4.6 mmHg. Simulated exercise yielded mean tissue PO2 in the RMN of 10.1±5.4, 12.6±5.7, and 19.7±5.7 mmHg compared to 15.3±7.3, 18.8±5.3, and 21.7±6.0 in PCA. Conclusions These findings suggest that volume-matched PCA yield different results compared to reconstructed microvascular geometries when applied to O2 transport modeling; the predominant characteristic of this difference is an overestimate of mean tissue PO2. Despite this limitation, PCA models remain important for theoretical studies as they produce PO2 distributions with similar shape and parameter dependence as RMN. PMID:23841679

  4. Structural comparison of anodic nanoporous-titania fabricated from single-step and three-step of anodization using two paralleled-electrodes anodizing cell

    Directory of Open Access Journals (Sweden)

    Mallika Thabuot

    2016-02-01

    Full Text Available Anodization of Ti sheet in the ethylene glycol electrolyte containing 0.38wt% NH4F with the addition of 1.79wt% H2O at room temperature was studied. Applied potentials of 10-60 V and anodizing times of 1-3 h were used for single-step and three-step anodization within the two paralleled-electrodes anodizing cell. The structural and textural properties were investigated by X-ray diffraction (XRD) and scanning electron microscopy (SEM). After annealing at 600°C in the air furnace for 3 h, the TiO2 nanotubes were transformed to a higher proportion of the anatase crystal phase. Crystallization of the anatase phase was also enhanced as the duration of the final anodization step increased. With single-step anodization, the pore texture of the oxide film started to appear at an applied potential of 30 V. A more orderly arrangement of the TiO2-nanotube array with larger pore size was obtained with the increase of applied potential. An applied potential of 60 V was selected for the three-step anodization with an anodizing time of 1-3 h. Results showed that smooth surface coverage with a higher density of porous TiO2 was achieved by prolonging the time of the first and second steps; however, tubes discontinuous in length were produced instead of long vertical tubes. The layer thickness of the anodic oxide film depended on the anodizing time of the last anodization step. A more ordered arrangement of nanostructured TiO2 was produced using three-step anodization under 60 V with 3 h for each step.

  5. Spectroscopic study of the reaction between Br2 and dimethyl sulfide (DMS), and comparison with a parallel study made on Cl2 + DMS: possible atmospheric implications.

    Science.gov (United States)

    Beccaceci, Sonya; Ogden, J Steven; Dyke, John M

    2010-03-07

    The reaction between molecular bromine and dimethyl sulfide (DMS) has been studied both as a co-condensation reaction in low temperature matrices by infrared (IR) matrix isolation spectroscopy and in the gas-phase at low pressures by UV photoelectron spectroscopy (PES). The co-condensation reaction leads to the formation of the molecular van der Waals adduct DMS-Br2. This was identified by IR spectroscopy supported by results of electronic structure calculations. Calculation of the minimum energy structures in important regions of the reaction surface and computed IR spectra of these structures, which could be compared with the experimental spectra, allowed the structure of the adduct (Cs symmetry) to be determined. The low pressure (ca. 10^-5 mbar) gas-phase reaction was studied by UV-PES, but did not yield any observable products, indicating that a third body is necessary for the adduct to be stabilised. These results are compared with parallel co-condensation and gas-phase reactions between DMS and Cl2. For this reaction, a similar van der Waals adduct DMS-Cl2 is observed by IR spectroscopy in the co-condensation reactions, but in the gas-phase this adduct converts to a covalently bound structure Me2SCl2, observed in PES studies, which ultimately decomposes to monochlorodimethylsulfide and HCl. For these DMS + X2 reactions, computed relative energies of minima and transition states on the potential energy surfaces are presented which provide an interpretation for the products observed from the two reactions studied. The implications of the results obtained to atmospheric chemistry are discussed.

  6. Cosmic shear analysis of archival HST/ACS data. I. Comparison of early ACS pure parallel data to the HST/GEMS survey

    Science.gov (United States)

    Schrabback, T.; Erben, T.; Simon, P.; Miralles, J.-M.; Schneider, P.; Heymans, C.; Eifler, T.; Fosbury, R. A. E.; Freudling, W.; Hetterscheidt, M.; Hildebrandt, H.; Pirzkal, N.

    2007-06-01

    Context: This is the first paper of a series describing our measurement of weak lensing by large-scale structure, also termed “cosmic shear”, using archival observations from the Advanced Camera for Surveys (ACS) on board the Hubble Space Telescope (HST). Aims: In this work we present results from a pilot study testing the capabilities of the ACS for cosmic shear measurements with early parallel observations and presenting a re-analysis of HST/ACS data from the GEMS survey and the GOODS observations of the Chandra Deep Field South (CDFS). Methods: We describe the data reduction and, in particular, a new correction scheme for the time-dependent ACS point-spread-function (PSF) based on observations of stellar fields. This is currently the only technique which takes the full time variation of the PSF between individual ACS exposures into account, and we estimate that it reduces the systematic contribution of PSF distortions to the shear correlation functions. Using the MUSIC sample, we determine a local single field estimate for the mass power spectrum normalisation σ8,CDFS = 0.52 (+0.11/-0.15) (stat) ± 0.07 (sys) (68% confidence assuming Gaussian cosmic variance) at a fixed matter density Ωm = 0.3 for a ΛCDM cosmology, marginalising over the uncertainty of the Hubble parameter and the redshift distribution. We interpret this exceptionally low estimate to be due to a local under-density of the foreground structures in the CDFS. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archives at the Space Telescope European Coordinating Facility and the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  7. Random sequential adsorption of cubes

    Science.gov (United States)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
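
    Random sequential adsorption itself is conceptually simple: propose a randomly placed object, accept it only if it intersects nothing already placed, and repeat until placements stop succeeding. A minimal 2-D sketch with axis-aligned squares (a simplification of the oriented 3-D cubes studied above):

        import random

        def rsa_squares(box=1.0, side=0.05, attempts=100000, seed=0):
            """Random sequential adsorption of axis-aligned squares in a box.

            Propose uniformly random centres; keep a square only if it
            overlaps nothing already placed. Returns the accepted centres.
            """
            rng = random.Random(seed)
            placed = []
            for _ in range(attempts):
                x = rng.uniform(side / 2, box - side / 2)
                y = rng.uniform(side / 2, box - side / 2)
                # axis-aligned squares overlap iff both centre offsets < side
                if all(abs(x - px) >= side or abs(y - py) >= side
                       for px, py in placed):
                    placed.append((x, y))
            return placed

        squares = rsa_squares()
        # coverage slowly approaches the known saturation value (about 0.56)
        print(len(squares) * 0.05 ** 2)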

  8. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.

  9. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
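
    The three molecular dynamics decompositions named above differ in what each processor owns. As a minimal illustrative sketch (not taken from the reviewed algorithms), a spatial decomposition can be reduced to binning particles into slab subdomains so that each processor computes forces only for its own region:

        def spatial_decomposition(positions, box, nproc):
            """Assign each particle to one of nproc slab subdomains.

            positions: list of (x, y, z); box: edge length of the cubic box.
            Returns one particle list per processor; each processor then
            computes forces for its own slab, exchanging only boundary
            particles with its neighbours.
            """
            slabs = [[] for _ in range(nproc)]
            width = box / nproc
            for p in positions:
                idx = min(int(p[0] / width), nproc - 1)   # clamp x == box
                slabs[idx].append(p)
            return slabs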

  10. Synthetic Aperture Sequential Beamforming implemented on multi-core platforms

    DEFF Research Database (Denmark)

    Kjeldsen, Thomas; Lassen, Lee; Hemmsen, Martin Christian

    2014-01-01

    This paper compares several computational approaches to Synthetic Aperture Sequential Beamforming (SASB) targeting consumer level parallel processors such as multi-core CPUs and GPUs. The proposed implementations demonstrate that ultrasound imaging using SASB can be executed in real-time with ... per second) on an Intel Core i7 2600 CPU with an AMD HD7850 and a NVIDIA GTX680 GPU. The fastest CPU and GPU implementations use 14% and 1.3% of the real-time budget of 62 ms/frame, respectively. The maximum achieved processing rate is 1265 frames/s.

  11. Mathematical Methods and Algorithms of Mobile Parallel Computing on the Base of Multi-core Processors

    Directory of Open Access Journals (Sweden)

    Alexander B. Bakulev

    2012-11-01

    Full Text Available This article deals with mathematical models and algorithms that provide mobility of parallel representations of sequential programs in a high-level language. It presents a formal model of operation-environment process management, based on the proposed model of parallel program representation, which describes the computation process on multi-core processors.

  12. An Alternative Algorithm for Computing Watersheds on Shared Memory Parallel Computers

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.

    1995-01-01

    In this paper a parallel implementation of a watershed algorithm is proposed. The algorithm can easily be implemented on shared memory parallel computers. The watershed transform is generally considered to be inherently sequential since the discrete watershed of an image is defined using recursion.

  13. A highly scalable massively parallel fast marching method for the Eikonal equation

    Science.gov (United States)

    Yang, Jianming; Stern, Frederick

    2017-03-01

    The fast marching method is a widely used numerical method for solving the Eikonal equation arising from a variety of scientific and engineering fields. It has long been deemed inherently sequential, and an efficient parallel algorithm applicable to large-scale practical applications has not been available in the literature. In this study, we present a highly scalable massively parallel implementation of the fast marching method using a domain decomposition approach. Central to this algorithm is a novel restarted narrow band approach that coordinates the frequency of communications and the amount of computations extra to a sequential run for achieving an unprecedented parallel performance. Within each restart, the narrow band fast marching method is executed; simple synchronous local exchanges and global reductions are adopted for communicating updated data in the overlapping regions between neighboring subdomains and getting the latest front status, respectively. The independence of front characteristics is exploited through special data structures and augmented status tags to extract the masked parallelism within the fast marching method. The efficiency, flexibility, and applicability of the parallel algorithm are demonstrated through several examples. These problems are extensively tested on six grids with up to 1 billion points using different numbers of processes ranging from 1 to 65536. Remarkable parallel speedups are achieved using tens of thousands of processes. Detailed pseudo-codes for both the sequential and parallel algorithms are provided to illustrate the simplicity of the parallel implementation and its similarity to the sequential narrow band fast marching algorithm.
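
    For reference, the sequential narrow band fast marching baseline that the parallel algorithm restarts can be written compactly. The sketch below is a minimal 2-D solver for |grad T| = 1 with unit speed and a heap-ordered narrow band; it illustrates the accepted/narrow-band/far status handling mentioned above, not the restarted domain-decomposition scheme itself.

        import heapq, math

        def fast_marching(n, sources, h=1.0):
            """First-order fast marching on an n x n grid with unit speed.

            sources: list of (i, j) seed points with arrival time 0.
            Returns a dict of arrival times T solving |grad T| = 1.
            """
            INF = math.inf
            T = {(i, j): INF for i in range(n) for j in range(n)}
            heap = []
            for s in sources:
                T[s] = 0.0
                heapq.heappush(heap, (0.0, s))
            accepted = set()
            while heap:
                t, (i, j) = heapq.heappop(heap)
                if (i, j) in accepted:
                    continue                       # stale heap entry
                accepted.add((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if not (0 <= ni < n and 0 <= nj < n) or (ni, nj) in accepted:
                        continue
                    # upwind neighbour values along each axis
                    tx = min(T.get((ni - 1, nj), INF), T.get((ni + 1, nj), INF))
                    ty = min(T.get((ni, nj - 1), INF), T.get((ni, nj + 1), INF))
                    a, b = sorted((tx, ty))
                    # solve (T-a)^2 + (T-b)^2 = h^2, or fall back to 1-D update
                    if b - a >= h:
                        t_new = a + h
                    else:
                        t_new = 0.5 * (a + b + math.sqrt(2 * h * h - (a - b) ** 2))
                    if t_new < T[(ni, nj)]:
                        T[(ni, nj)] = t_new
                        heapq.heappush(heap, (t_new, (ni, nj)))
            return T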

  14. Foreword to Special Issue on "The Difference between Concurrent and Sequential Computation'' of Mathematical Structures

    DEFF Research Database (Denmark)

    Aceto, Luca; Longo, Giuseppe; Victor, Björn

    2003-01-01

    tarpit’, and argued that some of the most crucial distinctions in computing methodology, such as sequential versus parallel, deterministic versus non-deterministic, local versus distributed disappear if all one sees in computation is pure symbol pushing. How can we express formally the difference between...

  15. Process Creation and Full Sequential Composition in a Name-Passing Calculus

    NARCIS (Netherlands)

    Gehrke, Thomas; Rensink, Arend

    This paper presents a first attempt to formulate a process calculus featuring process creation and sequential composition, instead of the more usual parallel composition and action prefixing, in a setting where mobility is achieved by communicating channel names. We discuss the questions of scope

  16. On Coding the States of Sequential Machines with the Use of Partition Pairs

    DEFF Research Database (Denmark)

    Zahle, Torben U.

    1966-01-01

    This article introduces a new technique of making state assignment for sequential machines. The technique is in line with the approach used by Hartmanis [l], Stearns and Hartmanis [3], and Curtis [4]. It parallels the work of Dolotta and McCluskey [7], although it was developed independently...

  17. Parallel community climate model: Description and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others]

    1996-07-15

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.

  18. An Automatic Instruction-Level Parallelization of Machine Code

    Directory of Open Access Journals (Sweden)

    MARINKOVIC, V.

    2018-02-01

    Full Text Available Prevailing multicores and novel manycores have created a great challenge of the modern day: parallelization of embedded software that is still written as sequential code. In this paper, automatic code parallelization is considered, focusing on developing a parallelization tool at the binary level as well as on the validation of this approach. A novel instruction-level parallelization algorithm for assembly code is developed; it uses the register names after SSA conversion to find independent blocks of code and then schedules the independent blocks using METIS to achieve good load balance. Sequential consistency is verified and validation is done by measuring the program execution time on the target architecture. Great speedup, taken as the performance measure in the validation process, and optimal load balancing are achieved for multicore RISC processors with 2 to 16 cores (e.g. MIPS, MicroBlaze, etc.). In particular, for 16 cores, the average speedup is 7.92x, while in some cases it reaches 14x. The approach to automatic parallelization provided by this paper is useful to researchers and developers in the area of parallelization as the basis for further optimizations, as the back-end of a compiler, or as the code parallelization tool for an embedded system.
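
    The core of such an approach, finding independent instructions through def-use information and grouping them for concurrent execution, can be illustrated without a real assembler. The sketch below is a simplified stand-in: it assumes SSA form (each destination defined once, which is what register renaming provides) and levelizes a straight-line instruction sequence by true data dependences; real scheduling with METIS for load balance is beyond this fragment.

        def levelize_ssa(instructions):
            """Group SSA-form instructions into dependence levels.

            instructions: list of (dest, sources) pairs, e.g. ("t3", ["t1", "t2"]).
            In SSA form every destination is defined exactly once, so only true
            (read-after-write) dependences remain; instructions in the same
            level are independent and could be scheduled on different cores.
            """
            def_level = {}
            levels = []
            for dest, sources in instructions:
                # one level below the deepest definition we depend on
                lvl = 1 + max((def_level[s] for s in sources if s in def_level),
                              default=-1)
                def_level[dest] = lvl
                while len(levels) <= lvl:
                    levels.append([])
                levels[lvl].append(dest)
            return levels

        prog = [("t1", ["a"]), ("t2", ["b"]), ("t3", ["t1", "t2"]), ("t4", ["a"])]
        print(levelize_ssa(prog))   # [['t1', 't2', 't4'], ['t3']]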

  19. Performance Analysis of Parallel Mathematical Subroutine library PARCEL

    International Nuclear Information System (INIS)

    Yamada, Susumu; Shimizu, Futoshi; Kobayashi, Kenichi; Kaburaki, Hideo; Kishida, Norio

    2000-01-01

    The parallel mathematical subroutine library PARCEL (Parallel Computing Elements) has been developed by the Japan Atomic Energy Research Institute for easy use of typical parallelized mathematical codes in application problems on distributed parallel computers. PARCEL includes routines for linear equations, eigenvalue problems, pseudo-random number generation, and fast Fourier transforms. It is shown that the performance results for the linear equation routines exhibit good parallelization efficiency on vector, as well as scalar, parallel computers. A comparison of the efficiency results with the PETSc (Portable Extensible Toolkit for Scientific Computations) library is also reported. (author)

  20. Applications of the parallel computing system using network

    International Nuclear Information System (INIS)

    Ido, Shunji; Hasebe, Hiroki

    1994-01-01

    Parallel programming is applied to multiple processors connected via Ethernet. Data exchanges between tasks located in each processing element are realized in two ways. One uses sockets, a standard library on recent UNIX operating systems. The other uses Parallel Virtual Machine (PVM), free network-connecting software developed by ORNL that allows many workstations connected to a network to be used as a parallel computer. This paper discusses the feasibility of parallel computing using a network of UNIX workstations and a comparison with specialized parallel systems (Transputer and iPSC/860) in a Monte Carlo simulation, which generally shows a high parallelization ratio. (author)

  1. Implementations of BLAST for parallel computers.

    Science.gov (United States)

    Jülich, A

    1995-02-01

    The BLAST sequence comparison programs have been ported to a variety of parallel computers: the shared-memory machine Cray Y-MP 8/864 and the distributed-memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited for parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799 residue protein query sequence and the protein database PIR were used.
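
    Database search of this kind is a natural fit for data parallelism: split the database into chunks, score each chunk independently, and merge the hits. The sketch below is a generic illustration of that pattern with Python's multiprocessing; the scoring function is a crude placeholder, not the BLAST heuristic.

        from multiprocessing import Pool

        def score(args):
            """Placeholder scorer standing in for a BLAST comparison."""
            query, (name, seq) = args
            # count shared 3-mers as a crude similarity proxy
            kmers = {query[i:i+3] for i in range(len(query) - 2)}
            return name, sum(seq[i:i+3] in kmers for i in range(len(seq) - 2))

        def parallel_search(query, database, workers=4):
            """Score every database entry in parallel; return sorted hits."""
            with Pool(workers) as pool:
                hits = pool.map(score, [(query, entry) for entry in database])
            return sorted(hits, key=lambda h: -h[1])

        if __name__ == "__main__":
            db = [("seq%d" % i, "ACGTACGTGA" * 10) for i in range(100)]
            print(parallel_search("ACGTACGT", db)[:3])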

  2. A parallel model for SQL astronomical databases based on solid state storage. Application to the Gaia Archive PostgreSQL database

    Science.gov (United States)

    González-Núñez, J.; Gutiérrez-Sánchez, R.; Salgado, J.; Segovia, J. C.; Merín, B.; Aguado-Agelet, F.

    2017-07-01

    Query planning and optimisation algorithms in most popular relational databases were developed at a time when hard disk drives were the only storage technology available. The advent of devices with higher parallel random-access capacity, such as solid state disks, opens up the way for intra-machine parallel computing over large datasets. We describe a two-phase parallel model for the implementation of heavy analytical processes in single-instance PostgreSQL astronomical databases. This model is particularised to two frequent astronomical problems, density maps and crossmatch computation with Quad Tree Cube (Q3C) indexes. They are implemented as part of the relational database infrastructure for the Gaia Archive and performance is assessed. An improvement of a factor of 28.40 in comparison to sequential execution is observed in the reference implementation for a histogram computation. Speedup ratios of 3.7 and 4.0 are attained for the reference positional crossmatches considered. We observe large performance enhancements over sequential execution for both CPU and disk access intensive computations, suggesting these methods might be useful with the growing data volumes in Astronomy.
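
    The two-phase pattern described, parallel workers each computing a partial result over their slice of the table followed by a cheap merge, has the same shape whether the workers are PostgreSQL backends or OS processes. A minimal database-free sketch for the density-map (histogram) case, with bin counts and coordinate ranges chosen purely for illustration:

        from multiprocessing import Pool
        import numpy as np

        def partial_histogram(chunk):
            """Phase 1: each worker bins its own slice of the coordinates."""
            ra, dec = chunk
            hist, _, _ = np.histogram2d(ra, dec, bins=64,
                                        range=[[0, 360], [-90, 90]])
            return hist

        def density_map(ra, dec, workers=4):
            """Phase 2: merge the per-worker histograms by summation."""
            chunks = list(zip(np.array_split(ra, workers),
                              np.array_split(dec, workers)))
            with Pool(workers) as pool:
                partials = pool.map(partial_histogram, chunks)
            return sum(partials)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            ra = rng.uniform(0, 360, 10**6)
            dec = rng.uniform(-90, 90, 10**6)
            print(density_map(ra, dec).sum())   # == 10**6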

  3. Characterizing and Mitigating Work Time Inflation in Task Parallel Programs

    Directory of Open Access Journals (Sweden)

    Stephen L. Olivier

    2013-01-01

    Full Text Available Task parallelism raises the level of abstraction in shared memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation – additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance up to 3X compared to the Intel OpenMP task scheduler.

  4. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  5. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  6. (Nearly) portable PIC code for parallel computers

    International Nuclear Information System (INIS)

    Decyk, V.K.

    1993-01-01

    As part of the Numerical Tokamak Project, the author has developed a (nearly) portable, one dimensional version of the GCPIC algorithm for particle-in-cell codes on parallel computers. This algorithm uses a spatial domain decomposition for the fields, and passes particles from one domain to another as the particles move spatially. With only minor changes, the code has been run in parallel on the Intel Delta, the Cray C-90, the IBM ES/9000 and a cluster of workstations. After a line by line translation into CM Fortran, the code was also run on the CM-200. Impressive speeds have been achieved, both on the Intel Delta and the Cray C-90, around 30 nanoseconds per particle per time step. In addition, the author was able to isolate the data management modules, so that the physics modules were not changed much from their sequential version, and the data management modules can be used as "black boxes".

  7. Objective and subjective measures of simultaneous vs sequential bilateral cochlear implants in adults : A randomized clinical trial

    NARCIS (Netherlands)

    Kraaijenga, Véronique J.C.; Ramakers, Geerte G.J.; Smulders, Yvette E.; Van Zon, Alice; Stegeman, Inge; Smit, Adriana L.; Stokroos, Robert J.; Hendrice, Nadia; Free, Rolien H.; Maat, Bert; Frijns, Johan H M; Briaire, Jeroen J; Mylanus, Emmanuel A M; Huinck, Wendy J.; van Zanten, Gijsbert A.; Grolman, Wilko

    2017-01-01

    IMPORTANCE: To date, no randomized clinical trial on the comparison between simultaneous and sequential bilateral cochlear implants (BiCIs) has been performed.  OBJECTIVE: To investigate the hearing capabilities and the self-reported benefits of simultaneous BiCIs compared with those of sequential

  8. Objective and Subjective Measures of Simultaneous vs Sequential Bilateral Cochlear Implants in Adults: A Randomized Clinical Trial

    NARCIS (Netherlands)

    Kraaijenga, V.J.; Ramakers, G.G.; Smulders, Y.E.; Zon, A. van; Stegeman, I.; Smit, A.L.; Stokroos, R.J.; Hendrice, N.; Free, R.H.; Maat, B.; Frijns, J.H.; Briaire, J.J.; Mylanus, E.A.M.; Huinck, W.J.; Zanten, G.A.; Grolman, W.

    2017-01-01

    Importance: To date, no randomized clinical trial on the comparison between simultaneous and sequential bilateral cochlear implants (BiCIs) has been performed. Objective: To investigate the hearing capabilities and the self-reported benefits of simultaneous BiCIs compared with those of sequential

  9. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.

  10. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
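
    Of the patterns listed, the prefix scan is the least obvious because it looks inherently sequential. The sketch below is a simple illustration of the classic Hillis-Steele scan (a standard formulation, not taken from the presentation): the running sum is computed in O(log n) steps, each of which is an element-wise, independently parallelizable addition.

        import numpy as np

        def inclusive_scan(x):
            """Hillis-Steele inclusive prefix sum in O(log n) parallel steps.

            Each iteration performs one vectorized (conceptually parallel)
            shift-and-add; a sequential loop would need n dependent additions.
            """
            x = np.asarray(x).copy()
            offset = 1
            while offset < len(x):
                shifted = np.concatenate([np.zeros(offset, dtype=x.dtype),
                                          x[:-offset]])
                x = x + shifted          # all elements updated independently
                offset *= 2
            return x

        assert (inclusive_scan([1, 2, 3, 4]) == np.array([1, 3, 6, 10])).all()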

  11. Application Portable Parallel Library

    Science.gov (United States)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" here also includes heterogeneous collections of networked computers.) Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  12. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    Science.gov (United States)

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.

  13. Morphological and ecological parallels between sublittoral and abyssal foraminiferal species in the NE Atlantic: a comparison of Stainforthia fusiformis and Stainforthia sp.

    Science.gov (United States)

    Gooday, Andrew J.; Alve, Elisabeth

    Dead specimens of a minute fusiform rotaliid foraminifer are common in the 28-63 μm fraction of multiple corer samples from a 4850 m-deep site on the Porcupine Abyssal Plain (PAP). Their test morphology is remarkably similar to small specimens of Stainforthia fusiformis ( Williamson, 1858), a species which is well known from coastal settings (intertidal to outer shelf) around NW Europe and North America. A detailed comparison of the PAP form with typical individuals of S. fusiformis from Norwegian waters (55-203 m depth), however, reveals slight but consistent morphological differences. The PAP specimens are smaller (test length 40-140 μm) than those from Norway (test length 80-380 μm), the chambers tend to be rather less elongate, the density of pores in the test wall is much lower, and there are differences in apertural features. We therefore conclude that the diminutive abyssal form is a distinct species, here referred to as Stainforthia sp. This interpretation is consistent with increasing evidence for genetic differentiation in deep-sea organisms, particularly along bathymetric gradients. Stainforthia sp. was previously illustrated by Pawlowski as Fursenkoina sp. and appears to be widespread and abundant in the abyssal North Atlantic (>4000 m depth). Stainforthia fusiformis, on the other hand, is most abundant in continental shelf and coastal settings. It extends onto the continental slope in the North Atlantic but has not been reported reliably from depths greater than about 2500 m. We suggest that the striking morphological convergence between these two species reflects the adoption of similar ecological strategies in widely separated habitats. Both are enrichment opportunists, a life-style which may explain the rather broad bathymetric range of Stainforthia fusiformis. This is a dominant species in organically-enriched and sometimes extremely oxygen-depleted environments on the continental shelf, and is a rapid coloniser of formerly azoic habitats. Live

  14. A Parallel Algorithm for Connected Component Labelling of Gray-scale Images on Homogeneous Multicore Architectures

    International Nuclear Information System (INIS)

    Niknam, Mehdi; Thulasiraman, Parimala; Camorlinga, Sergio

    2010-01-01

    Connected component labelling is an essential step in image processing. We provide a parallel version of Suzuki's sequential connected component algorithm in order to speed up the labelling process. We also modify the algorithm to enable labelling of gray-scale images. Due to the data dependencies in the algorithm, we used a pipeline-like method to exploit parallelism. The parallel algorithm achieved a speedup of 2.5 for an image size of 256 x 256 pixels using 4 processing threads.
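
    For contrast with the pipelined parallel scheme described above, it helps to see the sequential baseline. The sketch below is the classic two-pass connected-component labelling with union-find for binary images, a standard textbook formulation rather than Suzuki's exact algorithm or the gray-scale extension used in the paper.

        def label_components(img):
            """Two-pass 4-connected labelling of a binary image.

            img: list of lists of 0/1 values. Returns a matrix of component
            labels, with 0 kept for background pixels.
            """
            parent = {}

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]   # path halving
                    x = parent[x]
                return x

            h, w = len(img), len(img[0])
            labels = [[0] * w for _ in range(h)]
            next_label = 1
            for i in range(h):                      # pass 1: provisional labels
                for j in range(w):
                    if not img[i][j]:
                        continue
                    up = labels[i - 1][j] if i else 0
                    left = labels[i][j - 1] if j else 0
                    if not up and not left:
                        parent[next_label] = next_label
                        labels[i][j] = next_label
                        next_label += 1
                    else:
                        labels[i][j] = min(l for l in (up, left) if l)
                        if up and left and up != left:
                            parent[find(up)] = find(left)   # record equivalence
            for i in range(h):                      # pass 2: resolve equivalences
                for j in range(w):
                    if labels[i][j]:
                        labels[i][j] = find(labels[i][j])
            return labels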

  15. Polarization control of direct (non-sequential) two-photon double ionization of He

    International Nuclear Information System (INIS)

    Pronin, E A; Manakov, N L; Marmo, S I; Starace, Anthony F

    2007-01-01

    An ab initio parametrization of the doubly-differential cross section (DDCS) for two-photon double ionization (TPDI) from an s² subshell of an atom in a ¹S₀ state is presented. Analysis of the elliptic dichroism (ED) effect in the DDCS for TPDI of He and its comparison with the same effect in the concurrent process of sequential double ionization shows their qualitative and quantitative differences, thus providing a means to control and to distinguish sequential and non-sequential processes by measuring the relative ED parameter

  16. A parallel implementation of 3-d CT image reconstruction on a hypercube multiprocessor

    International Nuclear Information System (INIS)

    Chen, C.M.; Lee, S.Y.; Cho, Z.H.

    1990-01-01

    In this paper, the authors describe how image reconstruction in computerized tomography (CT) can be parallelized on a message-passing multiprocessor. In particular, the results obtained from parallel implementation of 3-D CT image reconstruction for parallel beam geometries on the Intel hypercube, iPSC/2, are presented. A two-stage pipelining approach is employed for filtering (convolution) and backprojection. The conventional sequential convolution algorithm is modified such that the symmetry of the filter kernel is fully utilized for parallelization. In the backprojection stage, the 3-D incremental algorithm, the authors' recently developed backprojection scheme, which is shown to be faster than the conventional algorithm, is parallelized
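
    The two-stage pipelining idea, one stage filtering projections while the other backprojects previously filtered ones, is independent of the hypercube hardware. A minimal process-pipeline sketch follows; it is a generic illustration, not the iPSC/2 code, and both stages are placeholders.

        from multiprocessing import Process, Queue

        def filter_stage(projections, out_q):
            for p in projections:
                out_q.put([2 * v for v in p])   # placeholder for convolution
            out_q.put(None)                     # end-of-stream marker

        def backproject_stage(in_q, result_q):
            image = 0.0
            while (p := in_q.get()) is not None:
                image += sum(p)                 # placeholder for backprojection
            result_q.put(image)

        if __name__ == "__main__":
            q, r = Queue(), Queue()
            projs = [[float(i + j) for j in range(4)] for i in range(8)]
            Process(target=backproject_stage, args=(q, r)).start()
            filter_stage(projs, q)              # the two stages overlap in time
            print(r.get())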

  17. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors has come to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks along with Graphical Processing Units have empowered parallelism broadly. Several compilers have been updated to address the emerging challenges of synchronization and threading. Appropriate program and algorithm classification can greatly help software engineers identify opportunities for effective parallelization. In the present work we investigate current species for the classification of algorithms; related work on classification is discussed along with a comparison of the issues that challenge classification. A set of algorithms is chosen whose structures match different issues and perform a given task. We have tested these algorithms utilizing existing automatic species extraction tools along with the Bones compiler. We have added functionalities to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented these new theories into the tool, enabling automatic characterization of program code.

  18. Sequential series for nuclear reactions

    International Nuclear Information System (INIS)

    Izumo, Ko

    1975-01-01

    A new time-dependent treatment of nuclear reactions is given, in which the wave function of the compound nucleus is expanded by a sequential series of the reaction processes. The wave functions of the sequential series form another complete set of the compound nucleus at the limit Δt→0. It is pointed out that the wave function is characterized by the quantities: the number of degrees of freedom of motion n, the period of the motion (Poincaré cycle) t_n, the delay time t_nμ and the relaxation time τ_n to the equilibrium of the compound nucleus, instead of the usual quantum number λ, the energy eigenvalue E_λ and the total width Γ_λ of resonance levels, respectively. The transition matrix elements and the yields of nuclear reactions also become functions of time, given by the Fourier transform of the usual ones. The Poincaré cycles of compound nuclei are compared with the observed correlations among resonance levels, which are about 10^-17 to 10^-16 sec for medium and heavy nuclei and about 10^-20 sec for the intermediate resonances. (auth.)

  19. Coronary artery stent imaging with 128-slice dual-source CT using high-pitch spiral acquisition in a cardiac phantom: comparison with the sequential and low-pitch spiral mode

    International Nuclear Information System (INIS)

    Wolf, Florian; Loewe, Christian; Plank, Christina; Schernthaner, Ruediger; Bercaczy, Dominik; Lammer, Johannes; Leschka, Sebastian; Goetti, Robert; Marincek, Borut; Alkadhi, Hatem; Homolka, Peter; Friedrich, Guy; Feuchtner, Gudrun

    2010-01-01

    To evaluate coronary stents in vitro using 128-slice dual-source computed tomography (CT). Twelve different coronary stents placed in a non-moving cardiac/chest phantom were examined by 128-slice dual-source CT using three CT protocols [high-pitch spiral (HPS), sequential (SEQ) and conventional spiral (SPIR)]. Artificial in-stent lumen narrowing (ALN), visible inner stent area (VIA), artificial in-stent lumen attenuation (ALA) in percent, image noise inside/outside the stent and CTDIvol were measured. Mean ALN was 46% for HPS, 44% for SEQ and 47% for SPIR without significant difference. Mean VIA was similar with 31% for HPS, 30% for SEQ and 33% for SPIR. Mean ALA was, at 5% for HPS, significantly lower compared with -11% for SPIR (p = 0.024), but not different from SEQ with -1%. Mean image noise was significantly higher for HPS compared with SEQ and SPIR inside and outside the stent (p < 0.001). CTDIvol was lower for HPS (5.17 mGy) compared with SEQ (9.02 mGy) and SPIR (55.97 mGy). The HPS mode of 128-slice dual-source CT yields fewer artefacts inside the stent lumen compared with SPIR and SEQ, but image noise is higher. ALN is still too high for routine stent evaluation in clinical practice. Radiation dose of the HPS mode is markedly reduced (about tenfold). (orig.)

  20. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  1. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and relative to scalar calculations it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively on the CRAY C-90

  2. Three-dimensional classical-ensemble modeling of non-sequential double ionization

    International Nuclear Information System (INIS)

    Haan, S.L.; Breen, L.; Tannor, D.; Panfili, R.; Ho, Phay J.; Eberly, J.H.

    2005-01-01

    Full text: We have been using 1d ensembles of classical two-electron atoms to simulate helium atoms that are exposed to pulses of intense laser radiation. In this talk we discuss the challenges in setting up a 3d classical ensemble that can mimic the quantum ground state of helium. We then report studies in which each one of 500,000 two-electron trajectories is followed in 3d through a ten-cycle (25 fs) 780 nm laser pulse. We examine double-ionization yield for various intensities, finding the familiar knee structure. We consider the momentum spread of outcoming electrons in directions both parallel and perpendicular to the direction of laser polarization, and find results that are consistent with experiment. We examine individual trajectories and recollision processes that lead to double ionization, considering the best phases of the laser cycle for recollision events and looking at the possible time delay between recollision and emergence. We consider also the number of recollision events, and find that multiple recollisions are common in the classical ensemble. We investigate which collisional processes lead to various final electron momenta. We conclude with comments regarding the ability of classical mechanics to describe non-sequential double ionization, and a quick summary of similarities and differences between 1d and 3d classical double ionization using energy-trajectory comparisons. Refs. 3 (author)

  3. Exploring the sequential lineup advantage using WITNESS.

    Science.gov (United States)

    Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A

    2010-12-01

    Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.

  4. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  5. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C^3P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  6. Sequential lineup presentation: Patterns and policy

    OpenAIRE

    Lindsay, R C L; Mansour, Jamal K; Beaudry, J L; Leach, A-M; Bertrand, M I

    2009-01-01

    Sequential lineups were offered as an alternative to the traditional simultaneous lineup. Sequential lineups reduce incorrect lineup selections; however, the accompanying loss of correct identifications has resulted in controversy regarding adoption of the technique. We discuss the procedure and research relevant to (1) the pattern of results found using sequential versus simultaneous lineups; (2) reasons (theory) for differences in witness responses; (3) two methodological issues; and (4) im...

  7. Physico-chemical and viscoelastic properties of high pressure homogenized lemon peel fiber fraction suspensions obtained after sequential pectin extraction

    NARCIS (Netherlands)

    Willemsen, K.L.D.D.; Panozzo, A.; Moelants, K.; Debon, S.J.J.; Desmet, C.; Cardinaels, R.M.; Moldenaers, P.; Wallecan, J.; Hendrickx, M.E.G.

    2017-01-01

    The viscoelastic properties of high pressure homogenized lemon peel cell wall fiber suspensions, obtained after sequential selective pectin extraction, were investigated in the current study. For comparison, a general pectin extraction was additionally performed on lemon peel under acid thermal

  8. The Bacterial Sequential Markov Coalescent.

    Science.gov (United States)

    De Maio, Nicola; Wilson, Daniel J

    2017-05-01

    Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e., using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC)-an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. FastSimBac is

  9. Biased lineups: sequential presentation reduces the problem.

    Science.gov (United States)

    Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C

    1991-12-01

    Biased lineups have been shown to increase significantly false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.

  10. Competence and Praxis: Sequential Analysis in German Sociology

    Directory of Open Access Journals (Sweden)

    Kai-Olaf Maiwald

    2005-09-01

    Full Text Available In German social research nowadays most qualitative methodologies employ sequential analysis. This article explores the similarities and differences in conceptualising and practising this method. First, the working consensus, conceived as a shared set of methodological assumptions, is explicated. Second, with regard to three major paradigms of qualitative research in Germany—conversation analysis, objective hermeneutics, and hermeneutic sociology of knowledge—the different ways of doing sequential analysis are investigated to locate the points of departure from a working consensus. It is argued that differences arise from different case-perspectives and, relative to that, from different modes of introducing general knowledge, i.e. knowledge that is not specific for the analysed case, into the interpretation. An important notion to emerge from the comparison is the distinction between competence and praxis. URN: urn:nbn:de:0114-fqs0503310

  11. Immediate Sequential Bilateral Cataract Surgery

    DEFF Research Database (Denmark)

    Kessel, Line; Andresen, Jens; Erngaard, Ditte

    2015-01-01

    The aim of the present systematic review was to examine the benefits and harms associated with immediate sequential bilateral cataract surgery (ISBCS) with specific emphasis on the rate of complications, postoperative anisometropia, and subjective visual function in order to formulate evidence-based national Danish guidelines for cataract surgery. A systematic literature review in PubMed, Embase, and Cochrane central databases identified three randomized controlled trials that compared outcome in patients randomized to ISBCS or bilateral cataract surgery on two different dates. Meta-analyses were performed using the Cochrane Review Manager software. The quality of the evidence was assessed using the GRADE method (Grading of Recommendation, Assessment, Development, and Evaluation). We did not find any difference in the risk of complications or visual outcome in patients randomized to ISBCS or surgery

  12. Random and cooperative sequential adsorption

    Science.gov (United States)

    Evans, J. W.

    1993-10-01

    Irreversible random sequential adsorption (RSA) on lattices, and continuum "car parking" analogues, have long received attention as models for reactions on polymer chains, chemisorption on single-crystal surfaces, adsorption in colloidal systems, and solid state transformations. Cooperative generalizations of these models (CSA) are sometimes more appropriate, and can exhibit richer kinetics and spatial structure, e.g., autocatalysis and clustering. The distribution of filled or transformed sites in RSA and CSA is not described by an equilibrium Gibbs measure. This is the case even for the saturation "jammed" state of models where the lattice or space cannot fill completely. However exact analysis is often possible in one dimension, and a variety of powerful analytic methods have been developed for higher dimensional models. Here we review the detailed understanding of asymptotic kinetics, spatial correlations, percolative structure, etc., which is emerging for these far-from-equilibrium processes.
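
    The 1D continuum "car parking" analogue mentioned above is simple enough to simulate directly. The sketch below is an illustration written for this record, not code from the review: it drops unit-length cars at uniformly random positions and rejects overlaps, and the covered fraction at jamming approaches Renyi's constant, approximately 0.7476.

        import random

        def rsa_car_parking(length=500.0, max_failures=50_000, seed=1):
            """Random sequential adsorption of unit 'cars' on [0, length].

            Deposition attempts land uniformly at random and are accepted
            only if they do not overlap an already-parked car.  A long run
            of consecutive rejections approximates the jammed state.
            """
            rng = random.Random(seed)
            parked = []
            failures = 0
            while failures < max_failures:
                x = rng.uniform(0.0, length - 1.0)
                if any(abs(x - y) < 1.0 for y in parked):
                    failures += 1          # overlap: attempt rejected
                else:
                    parked.append(x)
                    failures = 0
            return len(parked) / length    # jamming coverage, ~0.75

        print(rsa_car_parking())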

  13. A Parallel Saturation Algorithm on Shared Memory Architectures

    Science.gov (United States)

    Ezekiel, Jonathan; Siminiceanu

    2007-01-01

    Symbolic state-space generators are notoriously hard to parallelize. However, the Saturation algorithm implemented in the SMART verification tool differs from other sequential symbolic state-space generators in that it exploits the locality of firing events in asynchronous system models. This paper explores whether event locality can be utilized to efficiently parallelize Saturation on shared-memory architectures. Conceptually, we propose to parallelize the firing of events within a decision diagram node, which is technically realized via a thread pool. We discuss the challenges involved in our parallel design and conduct experimental studies on its prototypical implementation. On a dual-processor dual core PC, our studies show speed-ups for several example models, e.g., of up to 50% for a Kanban model, when compared to running our algorithm only on a single core.

  14. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  15. Trial Sequential Methods for Meta-Analysis

    Science.gov (United States)

    Kulinskaya, Elena; Wood, John

    2014-01-01

    Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…

  16. Automatic parallelization of while-Loops using speculative execution

    International Nuclear Information System (INIS)

    Collard, J.F.

    1995-01-01

    Automatic parallelization of imperative sequential programs has focused on nests of for-loops. The most recent techniques consist in finding an affine mapping with respect to the loop indices to simultaneously capture the temporal and spatial properties of the parallelized program. Such a mapping is usually called a "space-time transformation." This work describes an extension of these techniques to while-loops using speculative execution. We show that space-time transformations are a good framework for summing up previous restructuring techniques for while-loops, such as pipelining. Moreover, we show that these transformations can be derived and applied automatically

  17. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)

  18. A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Zhijun Luo

    2014-01-01

    Full Text Available A new parallel variable distribution algorithm based on an interior point SSLE algorithm is proposed for solving inequality constrained optimization problems under the condition that the constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, the global convergence is achieved.

  19. Sequential lineup laps and eyewitness accuracy.

    Science.gov (United States)

    Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A

    2011-08-01

    Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from first to second lap, when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single lap versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.

  20. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
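
    The building block for such schemes is the classical single-agent sequential probability ratio test; for reference, in its standard textbook form (not the paper's multi-agent risk functions):

        \Lambda_t = \prod_{k=1}^{t} \frac{p(y_k \mid H_1)}{p(y_k \mid H_0)},
        \qquad
        \text{declare } H_1 \text{ if } \Lambda_t \ge B, \quad
        \text{declare } H_0 \text{ if } \Lambda_t \le A, \quad
        \text{otherwise take another sample},

    with Wald's approximate thresholds A ≈ β/(1-α) and B ≈ (1-β)/α for a target false-alarm probability α and miss probability β.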

  1. Sequential Product of Quantum Effects: An Overview

    Science.gov (United States)

    Gudder, Stan

    2010-12-01

    This article presents an overview for the theory of sequential products of quantum effects. We first summarize some of the highlights of this relatively recent field of investigation and then provide some new results. We begin by discussing sequential effect algebras which are effect algebras endowed with a sequential product satisfying certain basic conditions. We then consider sequential products of (discrete) quantum measurements. We next treat transition effect matrices (TEMs) and their associated sequential product. A TEM is a matrix whose entries are effects and whose rows form quantum measurements. We show that TEMs can be employed for the study of quantum Markov chains. Finally, we prove some new results concerning TEMs and vector densities.

  2. Comparison of sequential planar 177Lu-DOTA-TATE dosimetry scans with 68Ga-DOTA-TATE PET/CT images in patients with metastasized neuroendocrine tumours undergoing peptide receptor radionuclide therapy

    International Nuclear Information System (INIS)

    Sainz-Esteban, Aurora; Carril, Jose Manuel; Prasad, Vikas; Schuchardt, Christiane; Zachert, Carolin; Baum, Richard P.

    2012-01-01

    The aim of the study was to compare sequential 177Lu-DOTA-TATE planar scans (177Lu-DOTA-TATE) in patients with metastasized neuroendocrine tumours (NET) acquired during peptide receptor radionuclide therapy (PRRT) for dosimetry purposes with the pre-therapeutic 68Ga-DOTA-TATE positron emission tomography (PET)/CT (68Ga-DOTA-TATE) maximum intensity projection (MIP) images obtained in the same patients concerning the sensitivity of the different methods. A total of 44 patients (59 ± 11 years old) with biopsy-proven NET underwent 68Ga-DOTA-TATE and 177Lu-DOTA-TATE imaging within 7.9 ± 7.5 days between the two examinations. 177Lu-DOTA-TATE planar images were acquired at 0.5, 2, 24, 48 and 72 h post-injection; lesions were given a score from 0 to 4 depending on the uptake of the radiopharmaceutical (0 being lowest and 4 highest). The number of tumour lesions which were identified on 177Lu-DOTA-TATE scans (in relation to the acquisition time after injection of the therapeutic dose as well as with regard to the body region) was compared to those detected on 68Ga-DOTA-TATE studies obtained before PRRT. A total of 318 lesions were detected; 280 (88%) lesions were concordant. Among the discordant lesions, 29 were 68Ga-DOTA-TATE positive and 177Lu-DOTA-TATE negative, whereas 9 were 68Ga-DOTA-TATE negative and 177Lu-DOTA-TATE positive. The sensitivity, positive predictive value and accuracy for 177Lu-DOTA-TATE as compared to 68Ga-DOTA-TATE were 91, 97 and 88%, respectively. Significantly more lesions were seen on the delayed (72 h) 177Lu-DOTA-TATE images (91%) as compared to the immediate (30 min) images (68%). The highest concordance was observed for bone metastases (97%) and the lowest for head/neck lesions (75%). Concordant lesions (n = 77; mean size 3.8 cm) were significantly larger than discordant lesions (n = 38; mean size 1.6 cm) (p < 0.05). However, concordant liver lesions with a score from 1 to 3 in the 72-h 177Lu-DOTA-TATE scan had a lower SUVmax

  3. Comparison of sequential planar {sup 177}Lu-DOTA-TATE dosimetry scans with {sup 68}Ga-DOTA-TATE PET/CT images in patients with metastasized neuroendocrine tumours undergoing peptide receptor radionuclide therapy

    Energy Technology Data Exchange (ETDEWEB)

    Sainz-Esteban, Aurora; Carril, Jose Manuel [Hospital Universitario Marques de Valdecilla, Department of Nuclear Medicine, Santander (Spain); Prasad, Vikas; Schuchardt, Christiane; Zachert, Carolin; Baum, Richard P. [Zentralklinik Bad Berka, Department of Nuclear Medicine and Centre for PET/CT, Bad Berka (Germany)

    2012-03-15

    The aim of the study was to compare sequential 177Lu-DOTA-TATE planar scans (177Lu-DOTA-TATE) in patients with metastasized neuroendocrine tumours (NET) acquired during peptide receptor radionuclide therapy (PRRT) for dosimetry purposes with the pre-therapeutic 68Ga-DOTA-TATE positron emission tomography (PET)/CT (68Ga-DOTA-TATE) maximum intensity projection (MIP) images obtained in the same patients concerning the sensitivity of the different methods. A total of 44 patients (59 ± 11 years old) with biopsy-proven NET underwent 68Ga-DOTA-TATE and 177Lu-DOTA-TATE imaging within 7.9 ± 7.5 days between the two examinations. 177Lu-DOTA-TATE planar images were acquired at 0.5, 2, 24, 48 and 72 h post-injection; lesions were given a score from 0 to 4 depending on the uptake of the radiopharmaceutical (0 being lowest and 4 highest). The number of tumour lesions which were identified on 177Lu-DOTA-TATE scans (in relation to the acquisition time after injection of the therapeutic dose as well as with regard to the body region) was compared to those detected on 68Ga-DOTA-TATE studies obtained before PRRT. A total of 318 lesions were detected; 280 (88%) lesions were concordant. Among the discordant lesions, 29 were 68Ga-DOTA-TATE positive and 177Lu-DOTA-TATE negative, whereas 9 were 68Ga-DOTA-TATE negative and 177Lu-DOTA-TATE positive. The sensitivity, positive predictive value and accuracy for 177Lu-DOTA-TATE as compared to 68Ga-DOTA-TATE were 91, 97 and 88%, respectively. Significantly more lesions were seen on the delayed (72 h) 177Lu-DOTA-TATE images (91%) as compared to the immediate (30 min) images (68%). The highest concordance was observed for bone metastases (97%) and the lowest for head/neck lesions (75%). Concordant lesions (n = 77; mean size 3.8 cm) were significantly larger than discordant lesions (n = 38; mean size 1.6 cm) (p < 0.05). No such significance was

  4. Pharmacokinetic comparison of sustained- and immediate-release oral formulations of cilostazol in healthy Korean subjects: a randomized, open-label, 3-part, sequential, 2-period, crossover, single-dose, food-effect, and multiple-dose study.

    Science.gov (United States)

    Lee, Donghwan; Lim, Lay Ahyoung; Jang, Seong Bok; Lee, Yoon Jung; Chung, Jae Yong; Choi, Jong Rak; Kim, Kiyoon; Park, Jin Woo; Yoon, Hosang; Lee, Jaeyong; Park, Min Soo; Park, Kyungsoo

    2011-12-01

    A sustained-release (SR) formulation of cilostazol was recently developed in Korea and was expected to yield a lower C(max) and a similar AUC to the immediate-release (IR) formulation. The goal of the present study was to compare the pharmacokinetic profiles of a newly developed SR formulation and an IR formulation of cilostazol after single- and multiple-dose administration and to evaluate the influence of food in healthy Korean subjects. This study was developed as part of a product development project at the request of the Korean regulatory agency. This was a randomized, 3-part, sequential, open-label, 2-period crossover study. Each part consisted of different subjects between the ages of 19 and 55 years. In part 1, each subject received a single dose of SR (200 mg × 1 tablet, once daily) and IR (100 mg × 2 tablets, BID) formulations of cilostazol orally 7 days apart in a fasted state. In part 2, each subject received a single dose of the SR (200 mg × 1 tablet, once daily) formulation of cilostazol 7 days apart in a fasted and a fed state. In part 3, each subject received multiple doses of the 2 formulations for 8 consecutive days 21 days apart. Blood samples were taken for 72 hours after the dose. Cilostazol pharmacokinetics were determined for both the parent drug and its metabolites (OPC-13015 and OPC-13213). Adverse events were evaluated through interviews and physical examinations. Among the 92 enrolled subjects (66 men, 26 women; part 1, n = 26; part 2, n = 26; part 3, n = 40), 87 completed the study. In part 1, all the primary pharmacokinetic parameters satisfied the criterion for assumed bioequivalence both in cilostazol and its metabolites, yielding 90% CI ratios of 0.9624 to 1.2323, 0.8873 to 1.1208, and 0.8919 to 1.1283 for C(max) and 0.8370 to 1.0134, 0.8204 to 0.9807, and 0.8134 to 0.9699 for AUC(0-last) of cilostazol, OPC-13015, and OPC-13213, respectively. In part 2, food intake increased C(max) and AUC significantly (P food and 23 with a high

  5. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,; Fidel, Adam; Amato, Nancy M.; Rauchwerger, Lawrence

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable

  6. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
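
    The telescoping identity invoked above can be written out explicitly; with g_l denoting the quantity of interest computed at discretization level h_l (notation chosen here for illustration):

        \mathbb{E}[g_L] = \mathbb{E}[g_0] + \sum_{l=1}^{L} \mathbb{E}[g_l - g_{l-1}],

    where each correction term is estimated from samples coupled across levels l-1 and l, so that its variance - and hence the number of samples it requires - decays as the levels get finer.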

  7. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  8. Sequential Scintigraphy in Renal Transplantation

    Energy Technology Data Exchange (ETDEWEB)

    Winkel, K. zum; Harbst, H.; Schenck, P.; Franz, H. E.; Ritz, E.; Roehl, L.; Ziegler, M.; Ammann, W.; Maier-Borst, W. [Institut Fuer Nuklearmedizin, Deutsches Krebsforschungszentrum, Heidelberg, Federal Republic of Germany (Germany)

    1969-05-15

    Based on experience gained from more than 1600 patients with proved or suspected kidney diseases and on results of extended studies with dogs, sequential scintigraphy was performed after renal transplantation in dogs. After intravenous injection of 500 μCi of 131I-Hippuran, scintiphotos were taken during the first minute with an exposure time of 15 sec each and thereafter with an exposure of 2 min up to at least 16 min. Several examinations were evaluated digitally. 26 examinations were performed on 11 dogs with homotransplanted kidneys. Immediately after transplantation the renal function was almost normal and the bladder was filled in due time. At the beginning of rejection the initial uptake of radioactive Hippuran was reduced. The intrarenal transport became delayed; probably the renal extraction rate decreased. Corresponding to the development of an oedema in the transplant, the uptake area increased in size. In cases of thrombosis of the main artery there was no evidence of any uptake of radioactivity in the transplant. Similar results were obtained in 41 examinations on 15 persons. Patients with postoperative anuria due to acute tubular necrosis still showed some uptake of radioactivity, contrary to those with thrombosis of the renal artery, where no uptake was found. In cases of rejection the most frequent signs were a reduced initial uptake and a delayed intrarenal transport of radioactive Hippuran. Infarction could be detected by a reduced uptake in distinct areas of the transplant. (author)

  9. Sequential provisional implant prosthodontics therapy.

    Science.gov (United States)

    Zinner, Ira D; Markovits, Stanley; Jansen, Curtis E; Reid, Patrick E; Schnader, Yale E; Shapiro, Herbert J

    2012-01-01

    The fabrication and long-term use of first- and second-stage provisional implant prostheses is critical to create a favorable prognosis for function and esthetics of a fixed-implant supported prosthesis. The fixed metal and acrylic resin cemented first-stage prosthesis, as reviewed in Part I, is needed for prevention of adjacent and opposing tooth movement, pressure on the implant site as well as protection to avoid micromovement of the freshly placed implant body. The second-stage prosthesis, reviewed in Part II, should be used following implant uncovering and abutment installation. The patient wears this provisional prosthesis until maturation of the bone and healing of soft tissues. The second-stage provisional prosthesis is also a fail-safe mechanism for possible early implant failures and also can be used with late failures and/or for the necessity to repair the definitive prosthesis. In addition, the screw-retained provisional prosthesis is used if and when an implant requires removal or other implants are to be placed as in a sequential approach. The creation and use of both first- and second-stage provisional prostheses involve a restorative dentist, dental technician, surgeon, and patient to work as a team. If the dentist alone cannot do diagnosis and treatment planning, surgery, and laboratory techniques, he or she needs help by employing the expertise of a surgeon and a laboratory technician. This team approach is essential for optimum results.

  10. Power stability methods for parallel systems

    International Nuclear Information System (INIS)

    Wallach, Y.

    1988-01-01

    Parallel-processing systems are already commercially available. This paper shows that if one of them - the Alternating Sequential Parallel, or ASP, system - is applied to network stability calculations, it will lead to a higher speed of solution. The ASP system is first described and is then shown to be cheaper, more reliable and more available than other parallel systems. Also, no deadlock need be feared and the speedup is normally very high. A number of ASP systems have already been assembled (the SMS systems, Topps, DIRMU etc.). At present, an IBM Local Area Network is being modified so that it too can work in the ASP mode. Existing ASP systems were programmed in Fortran or assembly language. Since newer systems (e.g. DIRMU) are programmed in Modula-2, this language can be used. Stability analysis is based on solving nonlinear differential and algebraic equations. The algorithm for solving the nonlinear differential equations on ASP is described and programmed in Modula-2. The speedup is computed and is shown to be almost optimal

  11. A node linkage approach for sequential pattern mining.

    Directory of Open Access Journals (Sweden)

    Osvaldo Navarro

    Full Text Available Sequential Pattern Mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, the current approaches prove inefficient when dealing with large input datasets, a large number of different symbols and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern-growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state-of-the-art algorithms.
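
    NLDFT itself is not spelled out in this abstract; the sketch below illustrates the general pattern-growth/pseudo-projection idea it builds on (in the style of PrefixSpan), where a projected database is merely a list of (sequence, position) pointers into the original data rather than a copy of it.

        def prefixspan(db, minsup):
            """Mine frequent sequential patterns from a list of symbol sequences.

            Pseudo-projections are (seq_id, start_pos) pairs into the original
            database, so no projected copies of the data are materialized.
            """
            results = []

            def grow(pattern, proj):
                # count, per symbol, how many projected sequences contain it
                support = {}
                for sid, pos in proj:
                    for item in set(db[sid][pos:]):
                        support[item] = support.get(item, 0) + 1
                for item, count in sorted(support.items()):
                    if count < minsup:
                        continue
                    newpat = pattern + [item]
                    results.append((newpat, count))
                    # pseudo-project: advance each pointer past the first match
                    newproj = []
                    for sid, pos in proj:
                        seq = db[sid]
                        for i in range(pos, len(seq)):
                            if seq[i] == item:
                                newproj.append((sid, i + 1))
                                break
                    grow(newpat, newproj)   # depth-first growth of the prefix

            grow([], [(sid, 0) for sid in range(len(db))])
            return results

        db = [list("abcb"), list("abbca"), list("bca")]
        for pat, sup in prefixspan(db, minsup=2):
            print("".join(pat), sup)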

  12. A sequential adaptation technique and its application to the Mark 12 IFF system

    Science.gov (United States)

    Bailey, John S.; Mallett, John D.; Sheppard, Duane J.; Warner, F. Neal; Adams, Robert

    1986-07-01

    Sequential adaptation uses only two sets of receivers, correlators, and A/D converters which are time multiplexed to effect spatial adaptation in a system with (N) adaptive degrees of freedom. This technique can substantially reduce the hardware cost over what is realizable in a parallel architecture. A three-channel L-band version of the sequential adapter was built and tested for use with the MARK XII IFF (Identification Friend or Foe) system. In this system the sequentially determined adaptive weights were obtained digitally but implemented at RF. As a result, many of the post-RF hardware-induced sources of error that normally limit cancellation, such as receiver mismatch, are removed by the feedback property. The result is a system that can yield high levels of cancellation and be readily retrofitted to currently fielded equipment.

  13. Benefits of Parallel I/O in Ab Initio Nuclear Physics Calculations

    International Nuclear Information System (INIS)

    Laghave, Nikhil; Sosonkina, Masha; Maris, Pieter; Vary, James P.

    2009-01-01

    Many modern scientific applications rely on highly parallel calculations, which scale to tens of thousands of processors. However, most applications do not concentrate on parallelizing input/output operations. In particular, sequential I/O has been identified as a bottleneck for the highly scalable MFDn (Many Fermion Dynamics for nuclear structure) code performing ab initio nuclear structure calculations. In this paper, we develop interfaces and parallel I/O procedures to use a well-known parallel I/O library in MFDn. As a result, we gain efficient input/output of large datasets along with their portability and ease of use in the downstream processing.
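
    The abstract does not name the library; MPI-IO is one such well-known parallel I/O layer, and a minimal mpi4py sketch of the collective-write idea looks like the following (file name and sizes are illustrative, not MFDn's actual I/O code). Each rank writes its own slice of a large array in a single collective call instead of funneling data through rank 0.

        # Run with e.g.: mpiexec -n 4 python parallel_write.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        n_local = 1_000_000                      # elements owned by this rank
        data = np.full(n_local, rank, dtype=np.float64)

        fh = MPI.File.Open(comm, "vector.bin",
                           MPI.MODE_WRONLY | MPI.MODE_CREATE)
        offset = rank * n_local * data.itemsize  # byte offset of this rank's slice
        fh.Write_at_all(offset, data)            # collective write, no funneling
        fh.Close()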

  14. Analysis of a parallel multigrid algorithm

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1989-01-01

    The parallel multigrid algorithm of Frederickson and McBryan (1987) is considered. This algorithm uses multiple coarse-grid problems (instead of one problem) in the hope of accelerating convergence and is found to have a close relationship to traditional multigrid methods. Specifically, the parallel coarse-grid correction operator is identical to a traditional multigrid coarse-grid correction operator, except that the mixing of high and low frequencies caused by aliasing error is removed. Appropriate relaxation operators can be chosen to take advantage of this property. Comparisons between the standard multigrid and the new method are made.

  15. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with on the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
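
    The walker/weight-update scheme can be illustrated on a toy one-dimensional model (everything below is a simplified sketch, not the paper's GPU Ising code): independent walkers sample with the current weights W(E), their energy histograms are merged, and the update W(E) <- W(E)/H(E) flattens the combined histogram for the next iteration.

        import random
        from multiprocessing import Pool

        N_STATES, N_ENERGIES = 21, 11        # states x = 0..20, E(x) = |x - 10|

        def energy(x):
            return abs(x - 10)

        def walker(args):
            """One independent multicanonical walker; returns its histogram."""
            weights, steps, seed = args
            rng = random.Random(seed)
            x = rng.randrange(N_STATES)
            hist = [0] * N_ENERGIES
            for _ in range(steps):
                y = min(N_STATES - 1, max(0, x + rng.choice((-1, 1))))
                # Metropolis acceptance with multicanonical weights W(E)
                if rng.random() < min(1.0, weights[energy(y)] / weights[energy(x)]):
                    x = y
                hist[energy(x)] += 1
            return hist

        if __name__ == "__main__":
            weights = [1.0] * N_ENERGIES
            with Pool(4) as pool:
                for it in range(10):
                    jobs = [(weights, 20_000, 1_000 * it + w) for w in range(8)]
                    merged = [sum(h) for h in zip(*pool.map(walker, jobs))]
                    # W(E) <- W(E)/H(E) flattens the sampled energy histogram
                    weights = [w / max(h, 1) for w, h in zip(weights, merged)]
                    top = max(weights)
                    weights = [w / top for w in weights]   # avoid underflow
            print(weights)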

  16. Application of Pfortran and Co-Array Fortran in the Parallelization of the GROMOS96 Molecular Dynamics Module

    Directory of Open Access Journals (Sweden)

    Piotr Bała

    2001-01-01

    Full Text Available After at least a decade of parallel tool development, parallelization of scientific applications remains a significant undertaking. Typically parallelization is a specialized activity supported only partially by the programming tool set, with the programmer involved with parallel issues in addition to sequential ones. The details of concern range from algorithm design down to low-level data movement details. The aim of parallel programming tools is to automate the latter without sacrificing performance and portability, allowing the programmer to focus on algorithm specification and development. We present our use of two similar parallelization tools, Pfortran and Cray's Co-Array Fortran, in the parallelization of the GROMOS96 molecular dynamics module. Our parallelization started from the GROMOS96 distribution's shared-memory implementation of the replicated algorithm, but used little of that existing parallel structure. Consequently, our parallelization was close to starting with the sequential version. We found the intuitive extensions to Pfortran and Co-Array Fortran helpful in the rapid parallelization of the project. We present performance figures for both the Pfortran and Co-Array Fortran parallelizations showing linear speedup within the range expected by these parallelization methods.
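
    The replicated algorithm mentioned above keeps a full copy of all coordinates on every process and splits only the interaction loop; schematically, in mpi4py terms (GROMOS96 itself is Fortran, and the pairwise force law below is a toy placeholder, not GROMOS physics):

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 64
        pos = np.random.default_rng(0).random((n, 3))
        pos = comm.bcast(pos, root=0)   # every rank holds ALL coordinates

        forces = np.zeros((n, 3))
        for i in range(rank, n, size):  # cyclic split of the pair loop
            for j in range(i + 1, n):
                d = pos[i] - pos[j]
                f = d / (d @ d + 1e-9) ** 2   # toy repulsive pair force
                forces[i] += f
                forces[j] -= f

        # combine partial force arrays so every rank again has the full result
        total = np.zeros_like(forces)
        comm.Allreduce(forces, total, op=MPI.SUM)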

  17. Tradable permit allocations and sequential choice

    Energy Technology Data Exchange (ETDEWEB)

    MacKenzie, Ian A. [Centre for Economic Research, ETH Zuerich, Zurichbergstrasse 18, 8092 Zuerich (Switzerland)

    2011-01-15

    This paper investigates initial allocation choices in an international tradable pollution permit market. For two sovereign governments, we compare allocation choices that are either simultaneously or sequentially announced. We show sequential allocation announcements result in higher (lower) aggregate emissions when announcements are strategic substitutes (complements). Whether allocation announcements are strategic substitutes or complements depends on the relationship between the follower's damage function and governments' abatement costs. When the marginal damage function is relatively steep (flat), allocation announcements are strategic substitutes (complements). For quadratic abatement costs and damages, sequential announcements provide a higher level of aggregate emissions. (author)
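
    In generic two-stage notation (a textbook formulation, not necessarily the paper's), with announcements a_i and the follower's payoff \pi_2(a_1, a_2), the classification turns on the sign of a cross-partial:

        \frac{\partial^2 \pi_2}{\partial a_2 \, \partial a_1} < 0
        \;\Rightarrow\; \text{strategic substitutes}, \qquad
        \frac{\partial^2 \pi_2}{\partial a_2 \, \partial a_1} > 0
        \;\Rightarrow\; \text{strategic complements},

    i.e., the follower's best-response allocation is decreasing (increasing) in the leader's announcement, which is what links the slope of the marginal damage function to the aggregate-emission ranking described above.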

  18. Sequential Generalized Transforms on Function Space

    Directory of Open Access Journals (Sweden)

    Jae Gil Choi

    2013-01-01

    Full Text Available We define two sequential transforms on a function space C_{a,b}[0,T] induced by a generalized Brownian motion process. We then establish the existence of the sequential transforms for functionals in a Banach algebra of functionals on C_{a,b}[0,T]. We also establish that any one of these transforms acts like an inverse transform of the other transform. Finally, we give some remarks about certain relations between our sequential transforms and other well-known transforms on C_{a,b}[0,T].

  19. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin

  20. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.
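
    For flavour, a minimal example in the style such a tutorial covers: the standard-library multiprocessing module sidesteps the GIL for CPU-bound work by farming chunks out to worker processes.

        from multiprocessing import Pool

        def count_primes(bounds):
            """Count primes in [lo, hi) by trial division (CPU-bound on purpose)."""
            lo, hi = bounds
            def is_prime(n):
                if n < 2:
                    return False
                d = 2
                while d * d <= n:
                    if n % d == 0:
                        return False
                    d += 1
                return True
            return sum(1 for n in range(lo, hi) if is_prime(n))

        if __name__ == "__main__":
            chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
            with Pool(processes=4) as pool:               # one worker per chunk
                print(sum(pool.map(count_primes, chunks)))  # 78498 primes < 1e6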

  1. Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs

    Directory of Open Access Journals (Sweden)

    Vaughn Matthew

    2010-11-01

    Full Text Available Abstract Background Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories - based on the data structures which they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are very essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm has been given for this problem. Here n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). Results In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model and in this case it has an optimal I/O complexity of Θ(n log(n/B) / (B log(M/B))) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on a SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster - both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. Conclusions The bi

  2. Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs.

    Science.gov (United States)

    Kundeti, Vamsi K; Rajasekaran, Sanguthevar; Dinh, Hieu; Vaughn, Matthew; Thapar, Vishal

    2010-11-15

    Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories - based on the data structures which they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are very essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm has been given for this problem. Here n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model and in this case it has an optimal I/O complexity of Θ(n log(n/B) / (B log(M/B))) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on a SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster, both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. The bi-directed de Bruijn graph is a fundamental data structure for
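
    The bi-directed convention identifies every k-mer with its reverse complement. A tiny sequential sketch of the graph construction (illustrative only; the papers' contribution is the parallel and out-of-core construction, not this loop):

        from collections import defaultdict

        COMP = str.maketrans("ACGT", "TGCA")

        def revcomp(s):
            return s.translate(COMP)[::-1]

        def canonical(kmer):
            """A k-mer and its reverse complement map to one bi-directed node."""
            return min(kmer, revcomp(kmer))

        def build_bidirected_dbg(reads, k):
            """Edges between canonical (k-1)-mers, one per k-mer occurrence."""
            edges = defaultdict(int)
            for read in reads:
                for i in range(len(read) - k + 1):
                    kmer = read[i:i + k]
                    u = canonical(kmer[:-1])
                    v = canonical(kmer[1:])
                    edges[(u, v)] += 1   # multiplicity = k-mer coverage
            return edges

        reads = ["ACGTAC", "GTACGT"]
        for (u, v), mult in build_bidirected_dbg(reads, k=4).items():
            print(u, "->", v, "x", mult)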

  3. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
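
    For reference, the quadratic programming subproblem solved at each SQP iterate x_k has the standard form (a generic statement, with H_k the quasi-Newton approximation discussed above):

        \min_{p} \; \nabla f(x_k)^{\mathsf T} p + \tfrac{1}{2}\, p^{\mathsf T} H_k\, p
        \quad \text{s.t.} \quad
        c_E(x_k) + J_E(x_k)\, p = 0, \qquad
        c_I(x_k) + J_I(x_k)\, p \ge 0,

    where J_E and J_I are the Jacobians of the equality and inequality constraints; the (possibly incomplete) QP solution p serves as the search direction for the nonlinear problem.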

  4. Parallel discrete event simulation using shared memory

    Science.gov (United States)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1988-01-01

    With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments, using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues, is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
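
    The event-list baseline being criticized is easy to state concretely; this M/M/1 queue simulator (an illustrative toy, not the paper's code) shows the single global heap that totally orders all events and thereby serializes the computation.

        import heapq, random

        def mm1(arrival_rate=0.9, service_rate=1.0, horizon=10_000.0, seed=1):
            """Sequential discrete-event simulation of an M/M/1 queue."""
            rng = random.Random(seed)
            events = [(rng.expovariate(arrival_rate), "arrival")]
            queue_len, served = 0, 0
            while events:
                t, kind = heapq.heappop(events)   # the serial bottleneck
                if t > horizon:
                    break
                if kind == "arrival":
                    queue_len += 1
                    heapq.heappush(events,
                                   (t + rng.expovariate(arrival_rate), "arrival"))
                    if queue_len == 1:   # server was idle: start service now
                        heapq.heappush(events,
                                       (t + rng.expovariate(service_rate), "departure"))
                else:                    # departure
                    queue_len -= 1
                    served += 1
                    if queue_len > 0:
                        heapq.heappush(events,
                                       (t + rng.expovariate(service_rate), "departure"))
            return served

        print(mm1())  # roughly horizon * arrival_rate customers at these rates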

  5. External parallel sorting with multiprocessor computers

    International Nuclear Information System (INIS)

    Comanceau, S.I.

    1984-01-01

    This article describes methods of external sorting in which the entire main computer memory is used for the internal sorting of entries, forming out of them sorted segments of the greatest possible size, and outputting them to external memories. The obtained segments are merged into larger segments until all entries form one ordered segment. The described methods are suitable for sequential files stored on magnetic tape. The needs of the sorting algorithm can be met by using the relatively slow peripheral storage devices (e.g., tapes, disks, drums). The efficiency of the external sorting methods is determined by calculating the total sorting time as a function of the number of entries to be sorted and the number of parallel processors participating in the sorting process
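
    The scheme described - form memory-sized sorted runs, then merge - is the classic external merge sort. A compact single-process sketch follows (the article's parallel variants distribute run formation and merging across processors; file names here are illustrative):

        import heapq, os, tempfile

        def external_sort(infile, outfile, max_lines_in_memory=100_000):
            """Sort a large text file line by line using bounded memory.

            Phase 1 writes sorted runs no larger than main memory; phase 2
            performs a streaming k-way merge of all runs with a heap.
            Assumes every line ends with a newline character.
            """
            runs = []
            with open(infile) as f:
                while True:
                    chunk = [line for _, line in
                             zip(range(max_lines_in_memory), f)]
                    if not chunk:
                        break
                    chunk.sort()
                    tmp = tempfile.NamedTemporaryFile("w+", delete=False)
                    tmp.writelines(chunk)   # one sorted run on external storage
                    tmp.seek(0)
                    runs.append(tmp)
            with open(outfile, "w") as out:
                out.writelines(heapq.merge(*runs))   # k-way merge of all runs
            for r in runs:
                r.close()
                os.unlink(r.name)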

  6. Efficacy of premixed versus sequential administration of ...

    African Journals Online (AJOL)

    sequential administration in separate syringes on block characteristics, haemodynamic parameters, side effect profile and postoperative analgesic requirement. Trial design: This was a prospective, randomised clinical study. Method: Sixty orthopaedic patients scheduled for elective lower limb surgery under spinal ...

  7. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  8. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  9. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were…

  10. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  11. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  12. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  13. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
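
    The seeding rule that all three ports parallelize is itself short. As a hedged reference point, a sequential NumPy version of k-means++ seed selection (the distance update over all points is the part distributed across GPU threads, OpenMP threads, or XMT hardware threads):

```python
import numpy as np

def kmeanspp_seeds(X, k, rng=np.random.default_rng(0)):
    """k-means++ seed selection (Arthur & Vassilvitskii, 2007).

    The squared-distance update below is the step that the GPU, OpenMP,
    and Cray XMT versions parallelize across data points.
    """
    seeds = [X[rng.integers(len(X))]]          # first seed: uniform at random
    d2 = np.sum((X - seeds[0]) ** 2, axis=1)   # squared distance to nearest seed
    for _ in range(k - 1):
        probs = d2 / d2.sum()                  # D^2 weighting
        seeds.append(X[rng.choice(len(X), p=probs)])
        d2 = np.minimum(d2, np.sum((X - seeds[-1]) ** 2, axis=1))
    return np.array(seeds)

X = np.random.default_rng(1).normal(size=(1000, 2))
print(kmeanspp_seeds(X, 3))
```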

  14. Parallel plate detectors

    International Nuclear Information System (INIS)

    Gardes, D.; Volkov, P.

    1981-01-01

    Two parallel plate avalanche counters (PPACs) are considered: a 5×3 cm² counter (timing only) and a 15×5 cm² counter (timing and position). The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters [fr]

  15. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  16. Structural Consistency, Consistency, and Sequential Rationality.

    OpenAIRE

    Kreps, David M; Ramey, Garey

    1987-01-01

    Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of stru…

  17. Algorithms for the Construction of Parallel Tests by Zero-One Programming. Project Psychometric Aspects of Item Banking No. 7. Research Report 86-7.

    Science.gov (United States)

    Boekkooi-Timminga, Ellen

    Nine methods for automated test construction are described. All are based on the concepts of information from item response theory. Two general kinds of methods for the construction of parallel tests are presented: (1) sequential test design; and (2) simultaneous test design. Sequential design implies that the tests are constructed one after the…

  18. Sequential Interval Estimation of a Location Parameter with Fixed Width in the Nonregular Case

    OpenAIRE

    Koike, Ken-ichi

    2007-01-01

    For a location-scale parameter family of distributions with a finite support, a sequential confidence interval with a fixed width is obtained for the location parameter, and its asymptotic consistency and efficiency are shown. Some comparisons with the Chow-Robbins procedure are also done.

  19. Java-Based Coupling for Parallel Predictive-Adaptive Domain Decomposition

    Directory of Open Access Journals (Sweden)

    Cécile Germain‐Renaud

    1999-01-01

    Full Text Available Adaptive domain decomposition exemplifies the problem of integrating heterogeneous software components with intermediate coupling granularity. This paper describes an experiment where a data‐parallel (HPF) client interfaces with a sequential computation server through Java. We show that seamless integration of data‐parallelism is possible, but requires most of the tools from the Java palette: Java Native Interface (JNI), Remote Method Invocation (RMI), callbacks and threads.

  20. Cost-effectiveness of simultaneous versus sequential surgery in head and neck reconstruction.

    Science.gov (United States)

    Wong, Kevin K; Enepekides, Danny J; Higgins, Kevin M

    2011-02-01

    To determine whether simultaneous (ablation and reconstruction overlaps by two teams) head and neck reconstruction is cost effective compared to sequentially (ablation followed by reconstruction) performed surgery. Case-controlled study. Tertiary care hospital. Oncology patients undergoing free flap reconstruction of the head and neck. A match paired comparison study was performed with a retrospective chart review examining the total time of surgery for sequential and simultaneous surgery. Nine patients were selected for both the sequential and simultaneous groups. Sequential head and neck reconstruction patients were pair matched with patients who had undergone similar oncologic ablative or reconstructive procedures performed in a simultaneous fashion. A detailed cost analysis using the microcosting method was then undertaken looking at the direct costs of the surgeons, anesthesiologist, operating room, and nursing. On average, simultaneous surgery required 3 hours 15 minutes less operating time, leading to a cost savings of approximately $1200/case when compared to sequential surgery. This represents approximately a 15% reduction in the cost of the entire operation. Simultaneous head and neck reconstruction is more cost effective when compared to sequential surgery.

  1. Objective and Subjective Measures of Simultaneous vs Sequential Bilateral Cochlear Implants in Adults: A Randomized Clinical Trial

    NARCIS (Netherlands)

    Kraaijenga, Véronique J C; Ramakers, Geerte G J; Smulders, Yvette E; van Zon, Alice; Stegeman, Inge; Smit, Adriana L; Stokroos, Robert J; Hendrice, Nadia; Free, Rolien H; Maat, Bert; Frijns, Johan H M; Briaire, Jeroen J; Mylanus, E A M; Huinck, Wendy J; Van Zanten, Gijsbert A; Grolman, Wilko

    IMPORTANCE To date, no randomized clinical trial on the comparison between simultaneous and sequential bilateral cochlear implants (BiCIs) has been performed. OBJECTIVE To investigate the hearing capabilities and the self-reported benefits of simultaneous BiCIs compared with those of sequential

  2. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
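
    A toy rendering of the two assignment steps on a one-dimensional grid, using a Python process pool (the interval objects and the four-way split are illustrative assumptions, not the patented implementation):

```python
from multiprocessing import Pool

GRID_PORTIONS = 4  # n: one grid portion per processor (illustrative)

def portions_for(obj):
    """Step 1 (per object set): decide which grid portion(s) bound an object.
    Objects here are (lo, hi) intervals on [0, 1); portions are equal slices."""
    lo, hi = obj
    first = int(lo * GRID_PORTIONS)
    last = min(int(hi * GRID_PORTIONS), GRID_PORTIONS - 1)
    return [(p, obj) for p in range(first, last + 1)]

def populate(args):
    """Step 2 (per portion): collect the objects bound by this portion."""
    portion, pairs = args
    return portion, [obj for p, obj in pairs if p == portion]

if __name__ == "__main__":
    objects = [(0.05, 0.10), (0.20, 0.60), (0.70, 0.72), (0.40, 0.95)]
    with Pool(GRID_PORTIONS) as pool:
        # Each worker handles a distinct set of objects...
        pairs = [p for sub in pool.map(portions_for, objects) for p in sub]
        # ...then each worker populates a distinct grid portion.
        grid = dict(pool.map(populate, [(p, pairs) for p in range(GRID_PORTIONS)]))
    print(grid)
```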

  3. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  4. More parallel please

    DEFF Research Database (Denmark)

    Gregersen, Frans; Josephson, Olle; Kristoffersen, Gjert

    Abstract [en] More parallel, please is the result of the work of an Inter-Nordic group of experts on language policy financed by the Nordic Council of Ministers 2014-17. The book presents all that is needed to plan, practice and revise a university language policy which takes as its point of departure that English may be used in parallel with the various local, in this case Nordic, languages. As such, the book integrates the challenge of internationalization faced by any university with the wish to improve quality in research, education and administration based on the local language(s). There are three layers in the text: First, you may read the extremely brief version of the in total 11 recommendations for best practice. Second, you may acquaint yourself with the extended version of the recommendations and finally, you may study the reasoning behind each of them. At the end of the text, we give…

  5. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Mechanical systems with parallel moving structures are solid, fast, and accurate. Among parallel systems, Stewart platforms are notable as the oldest, being fast, solid and precise. The work outlines a few main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile part is then recorded by a rotation-matrix method. If a structural motoelement consists of two moving elements that translate relative to each other, then for the drive train, and especially for the dynamics, it is more convenient to represent the motoelement as a single moving component. We thus have seven moving parts (the six motoelements, or feet, to which mobile platform 7 is added) and one fixed part.

  6. Effects of neostriatal 6-OHDA lesion on performance in a rat sequential reaction time task.

    Science.gov (United States)

    Domenger, D; Schwarting, R K W

    2008-10-31

    Work in humans and monkeys has provided evidence that the basal ganglia, and the neurotransmitter dopamine therein, play an important role in sequential learning and performance. Compared to primates, experimental work in rodents is rather sparse, largely due to the fact that tasks comparable to the human ones, especially serial reaction time tasks (SRTT), had been lacking until recently. We have developed a rat model of the SRTT, which allows the study of neural correlates of sequential performance and motor sequence execution. Here, we report the effects of dopaminergic neostriatal lesions, performed using bilateral 6-hydroxydopamine injections, on the performance of well-trained rats tested in our SRTT. Sequential behavior was measured in two ways: first, the effects of small violations of otherwise well-trained sequences were examined as a measure of attention and automation; second, sequential versus random performance was compared as a measure of sequential learning. Neurochemically, the lesions led to sub-total dopamine depletions in the neostriatum, which ranged around 60% in the lateral, and around 40% in the medial neostriatum. These lesions led to a general instrumental impairment in terms of reduced speed (response latencies) and response rate, and these deficits were correlated with the degree of striatal dopamine loss. Furthermore, the violation test indicated that the lesion group produced less automated responses. The comparison of random versus sequential responding showed that the lesion group did not retain its superior sequential performance in terms of speed, whereas it did in terms of accuracy. Also, rats with lesions did not improve further in overall performance as compared to pre-lesion values, whereas controls did. These results support previous findings that neostriatal dopamine is involved in instrumental behaviour in general. Also, these lesions are not sufficient to completely abolish sequential performance, at least when acquired…

  7. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  8. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  9. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  10. Graph Transformation and Designing Parallel Sparse Matrix Algorithms beyond Data Dependence Analysis

    Directory of Open Access Journals (Sweden)

    H.X. Lin

    2004-01-01

    Full Text Available Algorithms are often parallelized based on data dependence analysis, either manually or by means of parallel compilers. Some vector/matrix computations, such as matrix-vector products with simple data dependence structures (data parallelism), can be easily parallelized. For problems with more complicated data dependence structures, parallelization is less straightforward. The data dependence graph is a powerful means for designing and analyzing parallel algorithms. However, for sparse matrix computations, parallelization based solely on exploiting the existing parallelism in an algorithm does not always give satisfactory results. For example, the conventional Gaussian elimination algorithm for the solution of a tri-diagonal system is inherently sequential, so algorithms specifically for parallel computation have to be designed. After briefly reviewing different parallelization approaches, a powerful graph formalism for designing parallel algorithms is introduced. This formalism will be discussed using a tri-diagonal system as an example. Its application to general matrix computations is also discussed. Its power in designing parallel algorithms beyond the ability of data dependence analysis is shown by means of a new algorithm called ACER (Alternating Cyclic Elimination and Reduction).
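
    The inherent sequentiality referred to above is easy to see in the conventional elimination (the Thomas algorithm) for a tri-diagonal system. The sketch below shows that sequential baseline, not the ACER algorithm itself:

```python
def thomas(a, b, c, d):
    """Solve a tri-diagonal system: a = sub-, b = main, c = super-diagonal
    (a[0] and c[-1] are unused), d = right-hand side.

    Note the loop-carried dependence: step i uses the values produced at
    step i-1, which is why this elimination cannot be parallelized directly
    and motivates restructured schemes such as cyclic reduction.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward sweep (sequential)
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution (sequential)
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Small system with solution [1, 1, 1]:
print(thomas([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 1]))
```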

  11. The specificity of learned parallelism in dual-memory retrieval.

    Science.gov (United States)

    Strobach, Tilo; Schubert, Torsten; Pashler, Harold; Rickard, Timothy

    2014-05-01

    Retrieval of two responses from one visually presented cue occurs sequentially at the outset of dual-retrieval practice. Exclusively for subjects who adopt a mode of grouping (i.e., synchronizing) their response execution, however, reaction times after dual-retrieval practice indicate a shift to learned retrieval parallelism (e.g., Nino & Rickard, in Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 373-388, 2003). In the present study, we investigated how this learned parallelism is achieved and why it appears to occur only for subjects who group their responses. Two main accounts were considered: a task-level versus a cue-level account. The task-level account assumes that learned retrieval parallelism occurs at the level of the task as a whole and is not limited to practiced cues. Grouping response execution may thus promote a general shift to parallel retrieval following practice. The cue-level account states that learned retrieval parallelism is specific to practiced cues. This type of parallelism may result from cue-specific response chunking that occurs uniquely as a consequence of grouped response execution. The results of two experiments favored the second account and were best interpreted in terms of a structural bottleneck model.

  12. Rubus: A compiler for seamless and extensible parallelism.

    Directory of Open Access Journals (Sweden)

    Muhammad Adnan

    Full Text Available Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores, whereas for a matrix multiplication benchmark the average execution speedup was 84…

  13. More power : Accelerating sequential Computer Vision algorithms using commodity parallel hardware

    NARCIS (Netherlands)

    Jaap van de Loosdrecht; K. Dijkstra

    2014-01-01

    The last decade has seen an increasing demand from the industrial field of computerized visual inspection. Applications rapidly become more complex and often with more demanding real time constraints. However, from 2004 onwards the clock frequency of CPUs has not increased significantly. Computer

  14. MaMiCo: Software design for parallel molecular-continuum flow simulations

    KAUST Repository

    Neumann, Philipp; Flohr, Hanno; Arora, Rahul; Jarmatz, Piet; Tchipev, Nikola; Bungartz, Hans-Joachim

    2015-01-01

    The macro-micro-coupling tool (MaMiCo) was developed to ease the development of and modularize molecular-continuum simulations, retaining sequential and parallel performance. We demonstrate the functionality and performance of MaMiCo by coupling

  15. Research in Parallel Algorithms and Software for Computational Aerosciences

    Science.gov (United States)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  16. Sequential Classification of Palm Gestures Based on A* Algorithm and MLP Neural Network for Quadrocopter Control

    Directory of Open Access Journals (Sweden)

    Wodziński Marek

    2017-06-01

    Full Text Available This paper presents an alternative approach to sequential data classification, based on traditional machine learning algorithms (neural networks, principal component analysis, multivariate Gaussian anomaly detector) and on finding the shortest path in a directed acyclic graph, using the A* algorithm with a regression-based heuristic. Palm gestures were used as an example of the sequential data and a quadrocopter was the controlled object. The study includes the creation of a conceptual model and the practical construction of a system using the GPU to ensure real-time operation. The results present the classification accuracy of chosen gestures and a comparison of the computation time between the CPU- and GPU-based solutions.
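
    As a hedged sketch of the graph-search half of this approach, here is a generic A* over a weighted directed graph; the paper's regression-based heuristic is replaced by a pluggable function `h`, and the tiny graph is illustrative:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* shortest path; graph[u] maps neighbours to edge costs.

    h(u) must never overestimate the remaining cost (admissibility),
    otherwise the returned path may be suboptimal.
    """
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nxt, w in graph.get(node, {}).items():
            g = cost + w
            if g < best.get(nxt, float("inf")):
                best[nxt] = g
                heapq.heappush(frontier, (g + h(nxt), g, nxt, path + [nxt]))
    return None

dag = {"s": {"a": 1, "b": 4}, "a": {"b": 1, "g": 5}, "b": {"g": 1}}
print(a_star(dag, lambda u: 0, "s", "g"))   # (3, ['s', 'a', 'b', 'g'])
```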

  17. Sequential dependencies in magnitude scaling of loudness

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Jesteadt, Walt

    2013-01-01

    Ten normally hearing listeners used a programmable sone-potentiometer knob to adjust the level of a 1000-Hz sinusoid to match the loudness of numbers presented to them in a magnitude production task. Three different power-law exponents (0.15, 0.30, and 0.60) and a log-law with equal steps in dB were used to program the sone-potentiometer. The knob settings systematically influenced the form of the loudness function. Time series analysis was used to assess the sequential dependencies in the data, which increased with increasing exponent and were greatest for the log-law. It would be possible, therefore, to choose knob properties that minimized these dependencies. When the sequential dependencies were removed from the data, the slope of the loudness functions did not change, but the variability decreased. Sequential dependencies were only present when the level of the tone on the previous trial…

  18. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
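
    Such tables derive from the parallel-resistance formula 1/R_t = 1/R_1 + 1/R_2. A short Python enumeration of the resistor pairs whose parallel combination is a whole number (the 24-ohm search range is an arbitrary illustrative choice):

```python
from fractions import Fraction

# Two resistors in parallel combine as R_t = (R1 * R2) / (R1 + R2).
# Print the pairs up to 24 ohms whose combined resistance is whole.
for r1 in range(1, 25):
    for r2 in range(r1, 25):
        rt = Fraction(r1 * r2, r1 + r2)
        if rt.denominator == 1:
            print(f"{r1:>2} || {r2:>2} = {rt} ohms")   # e.g.  3 ||  6 = 2 ohms
```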

  19. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of research is the tools to support the development of parallel programs in C/C++. The methods and software which automate the process of designing parallel applications are proposed.

  20. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    Science.gov (United States)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is 1) to develop highly accurate parallel numerical algorithms, 2) to conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) to incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, (2) using a compact scheme to gain high-order accuracy in numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm…

  1. SPRINT: A new parallel framework for R

    Directory of Open Access Journals (Sweden)

    Scharinger Florian

    2008-12-01

    Full Text Available Abstract Background Microarray analysis allows the simultaneous measurement of thousands to millions of genes or sequences across tens to thousands of different samples. The analysis of the resulting data tests the limits of existing bioinformatics computing infrastructure. A solution to this issue is to use High Performance Computing (HPC systems, which contain many processors and more memory than desktop computer systems. Many biostatisticians use R to process the data gleaned from microarray analysis and there is even a dedicated group of packages, Bioconductor, for this purpose. However, to exploit HPC systems, R must be able to utilise the multiple processors available on these systems. There are existing modules that enable R to use multiple processors, but these are either difficult to use for the HPC novice or cannot be used to solve certain classes of problems. A method of exploiting HPC systems, using R, but without recourse to mastering parallel programming paradigms is therefore necessary to analyse genomic data to its fullest. Results We have designed and built a prototype framework that allows the addition of parallelised functions to R to enable the easy exploitation of HPC systems. The Simple Parallel R INTerface (SPRINT is a wrapper around such parallelised functions. Their use requires very little modification to existing sequential R scripts and no expertise in parallel computing. As an example we created a function that carries out the computation of a pairwise calculated correlation matrix. This performs well with SPRINT. When executed using SPRINT on an HPC resource of eight processors this computation reduces by more than three times the time R takes to complete it on one processor. Conclusion SPRINT allows the biostatistician to concentrate on the research problems rather than the computation, while still allowing exploitation of HPC systems. It is easy to use and with further development will become more useful as more
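
    The pairwise correlation example can be mimicked outside R as well. As a hedged analogue (not the SPRINT API itself, which wraps MPI-backed code for R), a Python process pool computing row blocks of the correlation matrix:

```python
import numpy as np
from multiprocessing import Pool

def corr_block(args):
    """Correlations of one block of rows of X against every row of X."""
    X, rows = args
    return [[float(np.corrcoef(X[i], X[j])[0, 1]) for j in range(len(X))]
            for i in rows]

if __name__ == "__main__":
    X = np.random.default_rng(0).normal(size=(200, 50))  # toy genes-by-samples matrix
    blocks = np.array_split(np.arange(len(X)), 4)        # one row block per worker
    with Pool(4) as pool:
        parts = pool.map(corr_block, [(X, b) for b in blocks])
    C = np.vstack([np.array(p) for p in parts])          # full 200 x 200 matrix
    print(C.shape)
```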

  2. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest common ancestors… an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  3. Dihydroazulene photoswitch operating in sequential tunneling regime

    DEFF Research Database (Denmark)

    Broman, Søren Lindbæk; Lara-Avila, Samuel; Thisted, Christine Lindbjerg

    2012-01-01

    to electrodes so that the electron transport goes by sequential tunneling. To assure weak coupling, the DHA switching kernel is modified by incorporating p-MeSC6H4 end-groups. Molecules are prepared by Suzuki cross-couplings on suitable halogenated derivatives of DHA. The synthesis presents an expansion of our…, incorporating a p-MeSC6H4 anchoring group in one end, has been placed in a silver nanogap. Conductance measurements justify that transport through both DHA (high resistivity) and VHF (low resistivity) forms goes by sequential tunneling. The switching is fairly reversible and reenterable; after more than 20 ON…

  4. Asynchronous Operators of Sequential Logic Venjunction & Sequention

    CERN Document Server

    Vasyukevich, Vadim

    2011-01-01

    This book is dedicated to new mathematical instruments intended for the logical modeling of the memory of digital devices. The operations in question are the logic-dynamical operation named venjunction and the venjunctive function, as well as the sequention and the sequentional function. Venjunction and sequention operate within the framework of sequential logic. In the form of the corresponding equations, they organically fit the analytical expressions of Boolean algebra. Thus, a sort of symbiosis is formed using elements of asynchronous sequential logic on the one hand and combinational logic on the other hand. So, asynchronous…

  5. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Parallel channel interactions are examined. Results of phenomenon analysis and of the mechanisms of parallel channel interaction are presented for experimental investigations of nonstationary flow regimes in three parallel vertical channels, under adiabatic conditions, with single-phase fluid and two-phase mixture flow. (author)

  6. Data driven parallelism in experimental high energy physics applications

    International Nuclear Information System (INIS)

    Pohl, M.

    1987-01-01

    I present global design principles for the implementation of high energy physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of high energy physics tasks is identified with granularity varying from a few times 10^8 instructions all the way down to a few times 10^4 instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this - yet preserving the necessary portability of the code - I propose a computational model with purely data driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme allows the internal layout of the data structure to be mapped closely onto the layout of local and shared memory in a parallel architecture. It thus allows the application to be optimized with respect to synchronization as well as data transport overheads. I present a coarse top level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms). (orig.)

  7. Data driven parallelism in experimental high energy physics applications

    Science.gov (United States)

    Pohl, Martin

    1987-08-01

    I present global design principles for the implementation of High Energy Physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of High Energy Physics tasks is identified with granularity varying from a few times 10^8 instructions all the way down to a few times 10^4 instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this - yet preserving the necessary portability of the code - I propose a computational model with purely data driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme allows the internal layout of the data structure to be mapped closely onto the layout of local and shared memory in a parallel architecture. It thus allows the application to be optimized with respect to synchronization as well as data transport overheads. I present a coarse top level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms).

  8. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  9. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  10. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  11. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is a SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased 120,000-fold, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi…

  12. Influence of Sequential vs. Simultaneous Dual-Task Exercise Training on Cognitive Function in Older Adults.

    Science.gov (United States)

    Tait, Jamie L; Duckham, Rachel L; Milte, Catherine M; Main, Luana C; Daly, Robin M

    2017-01-01

    Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people.

  13. Relative resilience to noise of standard and sequential approaches to measurement-based quantum computation

    Science.gov (United States)

    Gallagher, C. B.; Ferraro, A.

    2018-05-01

    A possible alternative to the standard model of measurement-based quantum computation (MBQC) is offered by the sequential model of MBQC—a particular class of quantum computation via ancillae. Although these two models are equivalent under ideal conditions, their relative resilience to noise in practical conditions is not yet known. We analyze this relationship for various noise models in the ancilla preparation and in the entangling-gate implementation. The comparison of the two models is performed utilizing both the gate infidelity and the diamond distance as figures of merit. Our results show that in the majority of instances the sequential model outperforms the standard one in regard to a universal set of operations for quantum computation. Further investigation is made into the performance of sequential MBQC in experimental scenarios, thus setting benchmarks for possible cavity-QED implementations.

  14. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    Science.gov (United States)

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.

  15. Eyewitness confidence in simultaneous and sequential lineups: a criterion shift account for sequential mistaken identification overconfidence.

    Science.gov (United States)

    Dobolyi, David G; Dodson, Chad S

    2013-12-01

    Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  16. Online Sequential Projection Vector Machine with Adaptive Data Mean Update.

    Science.gov (United States)

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm, especially suited to high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, and this makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, the adaptive multihyperplane machine (AMM), the primal estimated subgradient solver (Pegasos), the online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM.

  17. Online Sequential Projection Vector Machine with Adaptive Data Mean Update

    Directory of Open Access Journals (Sweden)

    Lin Chen

    2016-01-01

    Full Text Available We propose a simple online learning algorithm, especially suited to high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, and this makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, the adaptive multihyperplane machine (AMM), the primal estimated subgradient solver (Pegasos), the online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM.

  18. P-HS-SFM: a parallel harmony search algorithm for the reproduction of experimental data in the continuous microscopic crowd dynamic models

    Science.gov (United States)

    Jaber, Khalid Mohammad; Alia, Osama Moh'd.; Shuaib, Mohammed Mahmod

    2018-03-01

    Finding the optimal parameters that can reproduce experimental data (such as the velocity-density relation and the specific flow rate) is a very important component of the validation and calibration of microscopic crowd dynamic models. Heavy computational demand during parameter search is a known limitation that exists in a previously developed model known as the Harmony Search-Based Social Force Model (HS-SFM). In this paper, a parallel-based mechanism is proposed to reduce the computational time and memory resource utilisation required to find these parameters. More specifically, two MATLAB-based multicore techniques (parfor and create independent jobs) using shared memory are developed by taking advantage of the multithreading capabilities of parallel computing, resulting in a new framework called the Parallel Harmony Search-Based Social Force Model (P-HS-SFM). The experimental results show that the parfor-based P-HS-SFM achieved a better computational time of about 26 h, an efficiency improvement of ≈54% and a speedup factor of 2.196 times in comparison with the HS-SFM sequential processor. The performance of the P-HS-SFM using the create independent jobs approach is also comparable to parfor, with a computational time of 26.8 h, an efficiency improvement of about 30% and a speedup of 2.137 times.
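
    The parfor pattern carries over directly to other languages. A hedged Python analogue of farming out independent parameter evaluations to a process pool (the toy objective function is a stand-in, not the social-force model itself):

```python
from multiprocessing import Pool

def evaluate(params):
    """Stand-in for one simulation run scoring a parameter candidate;
    the real objective would compare simulated and experimental crowd data."""
    a, b = params
    return (a - 0.3) ** 2 + (b - 1.7) ** 2, params   # toy error surface

if __name__ == "__main__":
    candidates = [(a / 10, b / 10) for a in range(10) for b in range(10, 30)]
    # Equivalent of MATLAB's parfor: independent iterations farmed out to cores.
    with Pool() as pool:
        error, best = min(pool.map(evaluate, candidates))
    print("best parameters:", best, "error:", error)
```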

  19. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  20. Interpretability degrees of finitely axiomatized sequential theories

    NARCIS (Netherlands)

    Visser, Albert

    In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory, such as Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB, have suprema. This partially answers a question posed

  1. Interpretability Degrees of Finitely Axiomatized Sequential Theories

    NARCIS (Netherlands)

    Visser, Albert

    2012-01-01

    In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory —like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB— have suprema. This partially answers a question

  2. S.M.P. SEQUENTIAL MATHEMATICS PROGRAM.

    Science.gov (United States)

    CICIARELLI, V; LEONARD, JOSEPH

    A sequential mathematics program beginning with the basic fundamentals on the fourth grade level is presented. Included are an understanding of our number system, and the basic operations of working with whole numbers--addition, subtraction, multiplication, and division. Common fractions are taught in the fifth, sixth, and seventh grades. A…

  3. Sequential and Simultaneous Logit: A Nested Model.

    NARCIS (Netherlands)

    van Ophem, J.C.M.; Schram, A.J.H.C.

    1997-01-01

    A nested model is presented which has both the sequential and the multinomial logit model as special cases. This model provides a simple test to investigate the validity of these specifications. Some theoretical properties of the model are discussed. In the analysis a distribution function is

  4. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems, which are commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
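
    The univariate idea can be illustrated with a deliberately small sketch: perturb one uncertain parameter of a toy MDP, re-solve by value iteration, and record how often the optimal policy flips. The two-state model and the uncertainty range below are invented for the example, not taken from the paper.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a][s, s'] transition matrices, R[a][s] rewards; returns the optimal policy."""
    V = np.zeros(R[0].shape[0])
    while True:
        Q = np.stack([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return Q.argmax(axis=0)
        V = V_new

def build(p):
    # Two-state, two-action toy model; p is the uncertain transition parameter.
    P = [np.array([[p, 1 - p], [0.2, 0.8]]),
         np.array([[0.5, 0.5], [0.9, 0.1]])]
    R = [np.array([1.0, 0.0]), np.array([0.8, 0.5])]
    return P, R

base = value_iteration(*build(0.7))
# Univariate sensitivity: sample p from an assumed uncertainty
# distribution and record how often the optimal policy changes.
rng = np.random.default_rng(0)
flips = sum(not np.array_equal(value_iteration(*build(p)), base)
            for p in rng.uniform(0.5, 0.9, size=200))
print(f"policy changed in {flips / 200:.0%} of parameter draws")
```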

  5. Sequential models for coarsening and missingness

    NARCIS (Netherlands)

    Gill, R.D.; Robins, J.M.

    1997-01-01

    In a companion paper we described what intuitively would seem to be the most general possible way to generate Coarsening at Random mechanisms: a sequential procedure called randomized monotone coarsening. Counterexamples showed that CAR mechanisms exist which cannot be represented in this way. Here we

  6. Sequential motor skill: cognition, perception and action

    NARCIS (Netherlands)

    Ruitenberg, M.F.L.

    2013-01-01

    Discrete movement sequences are assumed to be the building blocks of more complex sequential actions that are present in our everyday behavior. The studies presented in this dissertation address the (neuro)cognitive underpinnings of such movement sequences, in particular in relationship to the role

  7. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.; Abediseid, Walid; Alouini, Mohamed-Slim

    2014-01-01

    This work examines the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity.

  8. A framework for sequential multiblock component methods

    NARCIS (Netherlands)

    Smilde, A.K.; Westerhuis, J.A.; Jong, S.de

    2003-01-01

    Multiblock or multiset methods are starting to be used in chemistry and biology to study complex data sets. In chemometrics, sequential multiblock methods are popular; that is, methods that calculate one component at a time and use deflation for finding the next component. In this paper a framework

  9. Classical and sequential limit analysis revisited

    Science.gov (United States)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

    Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity, in the absence of hardening and within a linearized geometrical framework, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity, although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

  10. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette); V. Capasso

    2009-01-01

    We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects

  11. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Capasso, V.

    2009-01-01

    We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects through a video

  12. Sequential Analysis: Hypothesis Testing and Changepoint Detection

    Science.gov (United States)

    2014-07-11

    maintains the flexibility of deciding sooner than the fixed sample size procedure at the price of some lower power [13, 514]. The sequential probability... markets, detection of signals with unknown arrival time in seismology, navigation, radar and sonar signal processing, speech segmentation, and the... skimming cruise missile can yield a significant increase in the probability of raid annihilation. Furthermore, usually detection systems are

  13. STABILIZED SEQUENTIAL QUADRATIC PROGRAMMING: A SURVEY

    Directory of Open Access Journals (Sweden)

    Damián Fernández

    2014-12-01

    Full Text Available We review the motivation for, the current state-of-the-art in convergence results, and some open questions concerning the stabilized version of the sequential quadratic programming algorithm for constrained optimization. We also discuss the tools required for its local convergence analysis, globalization challenges, and extensions of the method to the more general variational problems.

  14. Truly costly sequential search and oligopolistic pricing

    NARCIS (Netherlands)

    Janssen, Maarten C W; Moraga-González, José Luis; Wildenbeest, Matthijs R.

    We modify the paper of Stahl (1989) [Stahl, D.O., 1989. Oligopolistic pricing with sequential consumer search. American Economic Review 79, 700-12] by relaxing the assumption that consumers obtain the first price quotation for free. When all price quotations are costly to obtain, the unique

  15. Zips : mining compressing sequential patterns in streams

    NARCIS (Netherlands)

    Hoang, T.L.; Calders, T.G.K.; Yang, J.; Mörchen, F.; Fradkin, D.; Chau, D.H.; Vreeken, J.; Leeuwen, van M.; Faloutsos, C.

    2013-01-01

    We propose a streaming algorithm, based on the minimal description length (MDL) principle, for extracting non-redundant sequential patterns. For static databases, the MDL-based approach, which selects patterns based on their capacity to compress data rather than their frequency, was shown to be

  16. How to Read the Tractatus Sequentially

    Directory of Open Access Journals (Sweden)

    Tim Kraft

    2016-11-01

    Full Text Available One of the unconventional features of Wittgenstein’s Tractatus Logico-Philosophicus is its use of an elaborated and detailed numbering system. Recently, Bazzocchi, Hacker and Kuusela have argued that the numbering system means that the Tractatus must be read and interpreted not as a sequentially ordered book, but as a text with a two-dimensional, tree-like structure. Apart from being able to explain how the Tractatus was composed, the tree reading allegedly solves exegetical issues both on the local level (e.g. how 4.02 fits into the series of remarks surrounding it) and on the global level (e.g. the relation between ontology and picture theory, solipsism and the eye analogy, resolute and irresolute readings). This paper defends the sequential reading against the tree reading. After presenting the challenges generated by the numbering system and the two accounts as attempts to solve them, it is argued that Wittgenstein’s own explanation of the numbering system, anaphoric references within the Tractatus and the exegetical issues mentioned above do not favour the tree reading, but a version of the sequential reading. This reading maintains that the remarks of the Tractatus form a sequential chain: The role of the numbers is to indicate how remarks on different levels are interconnected to form a concise, surveyable and unified whole.

  17. Adult Word Recognition and Visual Sequential Memory

    Science.gov (United States)

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  18. Terminating Sequential Delphi Survey Data Collection

    Science.gov (United States)

    Kalaian, Sema A.; Kasim, Rafa M.

    2012-01-01

    The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through well-designed and systematic multiple sequential rounds of survey administration. Each of the multiple rounds of the Delphi survey…

  19. Simultaneous sequential monitoring of efficacy and safety led to masking of effects.

    Science.gov (United States)

    van Eekelen, Rik; de Hoop, Esther; van der Tweel, Ingeborg

    2016-08-01

    Usually, sequential designs for clinical trials are applied on the primary (=efficacy) outcome. In practice, other outcomes (e.g., safety) will also be monitored and influence the decision whether to stop a trial early. Implications of simultaneous monitoring on trial decision making are yet unclear. This study examines what happens to the type I error, power, and required sample sizes when one efficacy outcome and one correlated safety outcome are monitored simultaneously using sequential designs. We conducted a simulation study in the framework of a two-arm parallel clinical trial. Interim analyses on two outcomes were performed independently and simultaneously on the same data sets using four sequential monitoring designs, including O'Brien-Fleming and Triangular Test boundaries. Simulations differed in values for correlations and true effect sizes. When an effect was present in both outcomes, competition was introduced, which decreased power (e.g., from 80% to 60%). Futility boundaries for the efficacy outcome reduced overall type I errors as well as power for the safety outcome. Monitoring two correlated outcomes, given that both are essential for early trial termination, leads to masking of true effects. Careful consideration of scenarios must be taken into account when designing sequential trials. Simulation results can help guide trial design. Copyright © 2016 Elsevier Inc. All rights reserved.
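
    The skeleton of such a simulation might look as follows; the constant boundary value, correlation, and sample sizes are illustrative assumptions, and a real design would derive its boundaries from a chosen spending function.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(n_per_arm=300, looks=3, rho=0.5, crit=2.29):
    """One two-arm trial with correlated efficacy and safety outcomes,
    monitored at equally spaced interim looks. `crit` is an illustrative
    Pocock-type constant boundary, not a value from the study."""
    cov = [[1, rho], [rho, 1]]
    treat = rng.multivariate_normal([0, 0], cov, n_per_arm)  # null: no effect
    ctrl = rng.multivariate_normal([0, 0], cov, n_per_arm)
    for k in range(1, looks + 1):
        n = k * n_per_arm // looks
        diff = treat[:n].mean(axis=0) - ctrl[:n].mean(axis=0)
        z = diff / np.sqrt(2 / n)        # z-statistics for both outcomes
        if np.any(np.abs(z) > crit):
            return True                  # boundary crossed on either outcome
    return False

# Monitoring two outcomes simultaneously inflates the chance of an
# early false-positive stop relative to monitoring one outcome alone.
stops = sum(simulate_trial() for _ in range(2000))
print(f"overall type I error across both outcomes = {stops / 2000:.3f}")
```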

  20. Parallel generation of architecture on the GPU

    KAUST Repository

    Steinberger, Markus

    2014-05-01

    In this paper, we present a novel approach for the parallel evaluation of procedural shape grammars on the graphics processing unit (GPU). Unlike previous approaches that are either limited in the kind of shapes they allow, the amount of parallelism they can take advantage of, or both, our method supports state of the art procedural modeling including stochasticity and context-sensitivity. To increase parallelism, we explicitly express independence in the grammar, reduce inter-rule dependencies required for context-sensitive evaluation, and introduce intra-rule parallelism. Our rule scheduling scheme avoids unnecessary back and forth between CPU and GPU and reduces round trips to slow global memory by dynamically grouping rules in on-chip shared memory. Our GPU shape grammar implementation is multiple orders of magnitude faster than the standard in CPU-based rule evaluation, while offering equal expressive power. In comparison to the state of the art in GPU shape grammar derivation, our approach is nearly 50 times faster, while adding support for geometric context-sensitivity. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  1. A hybrid parallel framework for the cellular Potts model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Yi [Los Alamos National Laboratory; He, Kejing [SOUTH CHINA UNIV; Dong, Shoubin [SOUTH CHINA UNIV

    2009-01-01

    The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximated, which can't be used for large-scale complex 3D simulation. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on a shared-memory SMP system using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for the large-scale simulation (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).

  2. Efficient multitasking: parallel versus serial processing of multiple tasks.

    Science.gov (United States)

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.

  3. Sequential sampling: a novel method in farm animal welfare assessment.

    Science.gov (United States)

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall
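
    A toy version of the 'basic' two-stage scheme is sketched below; the prevalence threshold and early-stopping margin are invented for illustration and are not the values used in the study.

```python
import random

def classify_farm(herd, wq_sample_size, threshold=0.10, margin=0.05):
    """Hypothetical two-stage scheme: score half the Welfare Quality
    sample; stop early if the prevalence estimate is clearly below or
    above the pass/fail threshold, otherwise score the second half and
    decide on the pooled sample. `margin` is an assumed stopping band."""
    half = wq_sample_size // 2
    stage1 = random.sample(herd, half)           # first sampling event
    p1 = sum(stage1) / half
    if p1 <= threshold - margin:
        return "pass", half
    if p1 >= threshold + margin:
        return "fail", half
    stage2 = random.sample(herd, half)           # second event (toy: may re-draw animals)
    p = (sum(stage1) + sum(stage2)) / (2 * half)
    return ("fail" if p >= threshold else "pass"), 2 * half

# Toy herd: 200 cows, 12% truly lame (1 = lame, 0 = sound).
herd = [1] * 24 + [0] * 176
print(classify_farm(herd, wq_sample_size=60))
```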

  4. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially separated polarization components of a laser using a digital micromirror device and subsequently beam-combining them. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  5. A fast and accurate online sequential learning algorithm for feedforward networks.

    Science.gov (United States)

    Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N

    2006-11-01

    In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of ELM of Huang et al. developed for batch learning which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. Detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
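
    A minimal regression-only sketch of the OS-ELM recursion, with a random sigmoid hidden layer and a recursive least-squares update of the output weights, is given below; the small ridge term and all hyperparameters are arbitrary choices added for numerical stability, not part of the original formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

class OSELM:
    """Minimal OS-ELM sketch for regression with additive sigmoid nodes."""
    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(size=(n_in, n_hidden))  # fixed random input weights
        self.b = rng.normal(size=n_hidden)          # fixed random biases

    def _h(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # hidden layer

    def fit_initial(self, X, t):
        H = self._h(X)                              # initial batch (>= n_hidden rows)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ t

    def partial_fit(self, X, t):
        H = self._h(X)                              # one-by-one or chunk-by-chunk
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ K @ H @ self.P     # recursive least-squares update
        self.beta += self.P @ H.T @ (t - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta

# Toy stream: learn y = sin(x) from sequentially arriving chunks.
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sin(X[:, 0])
model = OSELM(n_in=1, n_hidden=40)
model.fit_initial(X[:100], y[:100])
for i in range(100, 600, 50):
    model.partial_fit(X[i:i + 50], y[i:i + 50])
print("RMSE:", np.sqrt(np.mean((model.predict(X) - y) ** 2)))
```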

  6. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Digital intermediate frequency QAM modulator using parallel processing

    Science.gov (United States)

    Pao, Hsueh-Yuan [Livermore, CA]; Tran, Binh-Nien [San Ramon, CA]

    2008-05-27

    The digital Intermediate Frequency (IF) modulator applies to various modulation types and offers a simple and low cost method to implement a high-speed digital IF modulator using field programmable gate arrays (FPGAs). The architecture eliminates multipliers and sequential processing by storing the pre-computed modulated cosine and sine carriers in ROM look-up-tables (LUTs). The high-speed input data stream is parallel processed using the corresponding LUTs, which reduces the main processing speed, allowing the use of low cost FPGAs.
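
    The look-up-table idea can be imitated in a few lines: precompute the modulated carrier segment for every symbol so that the run-time datapath contains no multipliers, only table lookups. The 4-QAM map and carrier length below are assumptions chosen for brevity.

```python
import numpy as np

FS = 16                                             # carrier samples per symbol
SYMS = {0: (1+1j), 1: (1-1j), 2: (-1+1j), 3: (-1-1j)}  # assumed 4-QAM map

n = np.arange(FS)
cos_t = np.cos(2 * np.pi * n / FS)                  # ROM table: cosine carrier
sin_t = np.sin(2 * np.pi * n / FS)                  # ROM table: sine carrier

# Precompute the full modulated waveform per symbol: the LUT an FPGA
# would hold in ROM, eliminating run-time multipliers entirely.
LUT = {s: iq.real * cos_t - iq.imag * sin_t for s, iq in SYMS.items()}

def modulate(symbols):
    # "Parallel processing": each symbol's waveform segment is an
    # independent lookup, so segments can be produced concurrently.
    return np.concatenate([LUT[s] for s in symbols])

waveform = modulate([0, 3, 1, 2])
print(waveform.shape)  # (64,) IF samples
```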

  8. Improving image quality of parallel phase-shifting digital holography

    International Nuclear Information System (INIS)

    Awatsuji, Yasuhiro; Tahara, Tatsuki; Kaneko, Atsushi; Koyama, Takamasa; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2008-01-01

    The authors propose parallel two-step phase-shifting digital holography to improve the image quality of parallel phase-shifting digital holography. The proposed technique can double the effective number of pixels of the hologram in comparison to the conventional parallel four-step technique. The increase in the number of pixels makes it possible to improve the image quality of the reconstructed image in parallel phase-shifting digital holography. Numerical simulation and a preliminary experiment of the proposed technique were conducted and the effectiveness of the technique was confirmed. The proposed technique is more practical than the conventional parallel phase-shifting digital holography, because the composition of a digital holographic system based on the proposed technique is simpler.

  9. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not at the level of these efforts. This paper aims to be a small contribution to them. We propose an overview of parallel programming, parallel execution and collaborative systems.

  10. Parallel Multi-cycle LES of an Optical Pent-roof DISI Engine Under Motored Operating Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Van Dam, Noah; Sjöberg, Magnus; Zeng, Wei; Som, Sibendu

    2017-10-15

    The use of Large-eddy Simulations (LES) has increased due to their ability to resolve the turbulent fluctuations of engine flows and capture the resulting cycle-to-cycle variability. One drawback of LES, however, is the requirement to run multiple engine cycles to obtain the necessary cycle statistics for full validation. The standard method to obtain the cycles by running a single simulation through many engine cycles sequentially can take a long time to complete. Recently, a new strategy has been proposed by our research group to reduce the amount of time necessary to simulate the many engine cycles by running individual engine cycle simulations in parallel. With modern large computing systems this has the potential to reduce the amount of time necessary for a full set of simulated engine cycles to finish by up to an order of magnitude. In this paper, the Parallel Perturbation Methodology (PPM) is used to simulate up to 35 engine cycles of an optically accessible, pent-roof Direct-injection Spark-ignition (DISI) engine at two different motored engine operating conditions, one throttled and one un-throttled. Comparisons are made against corresponding sequential-cycle simulations to verify the similarity of results using either methodology. Mean results from the PPM approach are very similar to sequential-cycle results with less than 0.5% difference in pressure and a magnitude structure index (MSI) of 0.95. Differences in cycle-to-cycle variability (CCV) predictions are larger, but close to the statistical uncertainty in the measurement for the number of cycles simulated. PPM LES results were also compared against experimental data. Mean quantities such as pressure or mean velocities were typically matched to within 5-10%. Pressure CCVs were under-predicted, mostly due to the lack of any perturbations in the pressure boundary conditions between cycles. Velocity CCVs for the simulations had the same average magnitude as experiments, but the experimental data showed

  11. Prosodic structure as a parallel to musical structure

    Directory of Open Access Journals (Sweden)

    Christopher Cullen Heffner

    2015-12-01

    Full Text Available What structural properties do language and music share? Although early speculation identified a wide variety of possibilities, the literature has largely focused on the parallels between musical structure and syntactic structure. Here, we argue that parallels between musical structure and prosodic structure deserve more attention. We review the evidence for a link between musical and prosodic structure and find it to be strong. In fact, certain elements of prosodic structure may provide a parsimonious comparison with musical structure without sacrificing empirical findings related to the parallels between language and music. We then develop several predictions related to such a hypothesis.

  12. OpenMP Issues Arising in the Development of Parallel BLAS and LAPACK Libraries

    Directory of Open Access Journals (Sweden)

    C. Addison

    2003-01-01

    Full Text Available Dense linear algebra libraries need to cope efficiently with a range of input problem sizes and shapes. Inherently this means that parallel implementations have to exploit parallelism wherever it is present. While OpenMP allows relatively fine-grain parallelism to be exploited in a shared memory environment, it currently lacks features to make it easy to partition computation over multiple array indices or to overlap sequential and parallel computations. The inherent flexible nature of shared memory paradigms such as OpenMP poses other difficulties when it becomes necessary to optimise performance across successive parallel library calls. Notions borrowed from distributed memory paradigms, such as explicit data distributions, help address some of these problems, but the focus on data rather than work distribution appears misplaced in an SMP context.

  13. Impact of Diagrams on Recalling Sequential Elements in Expository Texts.

    Science.gov (United States)

    Guri-Rozenblit, Sarah

    1988-01-01

    Examines the instructional effectiveness of abstract diagrams on recall of sequential relations in social science textbooks. Concludes that diagrams assist significantly the recall of sequential relations in a text and decrease significantly the rate of order mistakes. (RS)

  14. Parallel Algorithms for Graph Optimization using Tree Decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL; Groer, Christopher S [ORNL

    2012-06-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.

  15. A Parallel Sweeping Preconditioner for Heterogeneous 3D Helmholtz Equations

    KAUST Repository

    Poulson, Jack

    2013-05-02

    A parallelization of a sweeping preconditioner for three-dimensional Helmholtz equations without large cavities is introduced and benchmarked for several challenging velocity models. The setup and application costs of the sequential preconditioner are shown to be O(γ^2 N^(4/3)) and O(γ N log N), where γ(ω) denotes the modestly frequency-dependent number of grid points per perfectly matched layer. Several computational and memory improvements are introduced relative to using black-box sparse-direct solvers for the auxiliary problems, and competitive runtimes and iteration counts are reported for high-frequency problems distributed over thousands of cores. Two open-source packages are released along with this paper: Parallel Sweeping Preconditioner (PSP) and the underlying distributed multifrontal solver, Clique. © 2013 Society for Industrial and Applied Mathematics.

  16. Parallel discrete event simulation: A shared memory approach

    Science.gov (United States)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1987-01-01

    With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to insure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  17. Parallel combinations of pre-ionized low jitter spark gaps

    International Nuclear Information System (INIS)

    Fitzsimmons, W.A.; Rosocha, L.A.

    1979-01-01

    The properties of 10 to 30 kV four-electrode field-emission pre-ionized triggered spark gaps have been studied. A mid-plane off-axis trigger electrode is biased at +V0/2, and a field emission point is located adjacent to and biased at the grounded cathode potential. Simultaneous application of a rapid -V0 trigger pulse to both electrodes results in the rapid sequential closing of the anode-trigger and trigger-cathode gaps. The observed jitter is about 1.5 ns. Parallel operation of these gaps (up to 10 so far) connected to a common capacitive load has been studied. A simple theory that predicts the number of gaps that may be expected to operate in parallel is discussed.

  18. General-purpose parallel simulator for quantum computing

    International Nuclear Information System (INIS)

    Niwa, Jumpei; Matsumoto, Keiji; Imai, Hiroshi

    2002-01-01

    With current technologies, it seems to be very difficult to implement quantum computers with many qubits. It is therefore of importance to simulate quantum algorithms and circuits on the existing computers. However, for a large-size problem, the simulation often requires more computational power than is available from sequential processing. Therefore, simulation methods for parallel processors are required. We have developed a general-purpose simulator for quantum algorithms/circuits on the parallel computer (Sun Enterprise4500). It can simulate algorithms/circuits with up to 30 qubits. In order to test efficiency of our proposed methods, we have simulated Shor's factorization algorithm and Grover's database search, and we have analyzed robustness of the corresponding quantum circuits in the presence of both decoherence and operational errors. The corresponding results, statistics, and analyses are presented in this paper

  19. Graphics Processing Unit Enhanced Parallel Document Flocking Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; ST Charles, Jesse Lee [ORNL

    2010-01-01

    Analyzing and clustering documents is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. One limitation of this method of document clustering is its complexity O(n^2). As the number of documents grows, it becomes increasingly difficult to generate results in a reasonable amount of time. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. In this paper, we have conducted research to exploit this architecture and apply its strengths to the flocking-based document clustering problem. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GEFORCE GPU. Performance gains ranged from thirty-six to nearly sixty times improvement of the GPU over the CPU implementation.

  20. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  1. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti… about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in case of nucleobases Y and Z, a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain and one of the phosphate groups in the Watson-Crick strand. Also, it was shown that the nucleobase Y made good stacking and binding with the other nucleobases in the TFO and Watson-Crick duplex, respectively. In contrast, the nucleobase Z with the LNA moiety was forced to twist out of the plane of the Watson-Crick base pair, which…

  2. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.

  3. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D

    2003-01-01

    Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population-based global optimizer, the Particle Swarm...
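
    A bare-bones parallel PSO, with the costly fitness evaluations farmed out to worker processes, might look like this; the objective function and coefficients are placeholders, not the paper's setup.

```python
import numpy as np
from multiprocessing import Pool

def expensive_fitness(x):
    # Stand-in for a costly model evaluation (e.g., one run of a
    # biomechanical system identification model).
    return float(np.sum(x ** 2))

def pso(dim=5, swarm=16, iters=50, workers=4, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (swarm, dim))
    v = np.zeros((swarm, dim))
    pbest, pbest_f = x.copy(), np.full(swarm, np.inf)
    with Pool(workers) as pool:
        for _ in range(iters):
            # Fitness evaluations are independent, so they are mapped
            # across worker processes in parallel.
            f = np.array(pool.map(expensive_fitness, list(x)))
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            g = pbest[pbest_f.argmin()]
            r1, r2 = rng.random((2, swarm, dim))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = x + v
    return g, pbest_f.min()

if __name__ == "__main__":
    print(pso())
```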

  4. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managing…

  5. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    Subjects performed adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral…

  6. A one-sided sequential test

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.; Lux, I. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.

    1996-04-16

    The applicability of the classical sequential probability ratio testing (SPRT) for early failure detection problems is limited by the fact that there is an extra time delay between the occurrence of the failure and its first recognition. Chien and Adams developed a method to minimize this time for the case when the problem can be formulated as testing the mean value of a Gaussian signal. In our paper we propose a procedure that can be applied for both mean and variance testing and that minimizes the time delay. The method is based on a special parametrization of the classical SPRT. The one-sided sequential tests (OSST) can reproduce the results of the Chien-Adams test when applied for mean values. (author).
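
    For context, the classical SPRT that the OSST re-parametrizes can be sketched as follows for testing the mean of a Gaussian signal with known variance; the thresholds use Wald's standard approximations, and the labels are illustrative.

```python
import math, random

def sprt_gaussian_mean(stream, mu0=0.0, mu1=1.0, sigma=1.0,
                       alpha=0.01, beta=0.01):
    """Classical Wald SPRT for H0: mean=mu0 vs H1: mean=mu1 of a Gaussian
    signal with known sigma; the paper's OSST re-parametrizes this scheme
    and is not reproduced here."""
    A = math.log((1 - beta) / alpha)   # accept-H1 boundary
    B = math.log(beta / (1 - alpha))   # accept-H0 boundary
    llr = 0.0
    for n, x in enumerate(stream, start=1):
        # Log-likelihood-ratio increment for one Gaussian observation.
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
        if llr >= A:
            return "H1 (failure)", n
        if llr <= B:
            return "H0 (normal)", n
    return "undecided", n

random.seed(3)
samples = (random.gauss(1.0, 1.0) for _ in range(10_000))  # failure present
print(sprt_gaussian_mean(samples))
```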

  7. Documentscape: Intertextuality, Sequentiality & Autonomy at Work

    DEFF Research Database (Denmark)

    Christensen, Lars Rune; Bjørn, Pernille

    2014-01-01

    On the basis of an ethnographic field study, this article introduces the concept of documentscape to the analysis of document-centric work practices. The concept of documentscape refers to the entire ensemble of documents in their mutual intertextual interlocking. Providing empirical data from...... a global software development case, we show how hierarchical structures and sequentiality across the interlocked documents are critical to how actors make sense of the work of others and what to do next in a geographically distributed setting. Furthermore, we found that while each document is created...... as part of a quasi-sequential order, this characteristic does not make the document, as a single entity, into a stable object. Instead, we found that the documents were malleable and dynamic while suspended in intertextual structures. Our concept of documentscape points to how the hierarchical structure...

  8. A minimax procedure in the context of sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    1999-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master or a nonmaster, or to continue sampling and administering another random test item. The framework of minimax sequential decision theory

  9. Applying the minimax principle to sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    2002-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master, a nonmaster, or to continue sampling and administering another random item. The framework of minimax sequential decision theory (minimum

  10. Optimal Sequential Rules for Computer-Based Instruction.

    Science.gov (United States)

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  11. On Locally Most Powerful Sequential Rank Tests

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others: Nadační fond na podporu vědy (CZ) Neuron Institutional support: RVO:67985807 Keywords: nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016

  12. Sequential pattern recognition by maximum conditional informativity

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří

    2014-01-01

    Roč. 45, č. 1 (2014), s. 39-45 ISSN 0167-8655 R&D Projects: GA ČR(CZ) GA14-02652S; GA ČR(CZ) GA14-10911S Keywords: Multivariate statistics * Statistical pattern recognition * Sequential decision making * Product mixtures * EM algorithm * Shannon information Subject RIV: IN - Informatics, Computer Science Impact factor: 1.551, year: 2014 http://library.utia.cas.cz/separaty/2014/RO/grim-0428565.pdf

  13. Comparing two Poisson populations sequentially: an application

    International Nuclear Information System (INIS)

    Halteman, E.J.

    1986-01-01

    Rocky Flats Plant in Golden, Colorado monitors each of its employees for radiation exposure. Excess exposure is detected by comparing the means of two Poisson populations. A sequential probability ratio test (SPRT) is proposed as a replacement for the fixed-sample normal approximation test. A uniformly most efficient SPRT exists; however, logistics suggest using a truncated SPRT. The truncated SPRT is evaluated in detail and shown to possess large potential savings in average time spent by employees in the monitoring process.

  14. Heat accumulation during sequential cortical bone drilling.

    Science.gov (United States)

    Palmisano, Andrew C; Tai, Bruce L; Belmont, Barry; Irwin, Todd A; Shih, Albert; Holmes, James R

    2016-03-01

    Significant research exists regarding heat production during single-hole bone drilling. No published data exist regarding repetitive sequential drilling. This study elucidates the phenomenon of heat accumulation for sequential drilling with both Kirschner wires (K wires) and standard two-flute twist drills. It was hypothesized that cumulative heat would result in a higher temperature with each subsequent drill pass. Nine holes in a 3 × 3 array were drilled sequentially on moistened cadaveric tibia bone kept at body temperature (about 37 °C). Four thermocouples were placed at the center of four adjacent holes and 2 mm below the surface. A battery-driven hand drill guided by a servo-controlled motion system was used. Six samples were drilled with each tool (2.0 mm K wire and 2.0 and 2.5 mm standard drills). K wire drilling increased temperature from 5 °C at the first hole to 20 °C at holes 6 through 9. A similar trend was found in standard drills with less significant increments. The maximum temperatures of both tools increased over successive holes, while the difference between drill sizes was found to be insignificant (P > 0.05). In conclusion, heat accumulated during sequential drilling, with size difference being insignificant. K wire produced more heat than its twist-drill counterparts. This study has demonstrated the heat accumulation phenomenon and its significant effect on temperature. Maximizing the drilling field and reducing the number of drill passes may decrease bone injury. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  15. Sequential Monte Carlo with Highly Informative Observations

    OpenAIRE

    Del Moral, Pierre; Murray, Lawrence M.

    2014-01-01

    We propose sequential Monte Carlo (SMC) methods for sampling the posterior distribution of state-space models under highly informative observation regimes, a situation in which standard SMC methods can perform poorly. A special case is simulating bridges between given initial and final values. The basic idea is to introduce a schedule of intermediate weighting and resampling times between observation times, which guide particles towards the final state. This can always be done for continuous-...
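
    For orientation, a plain bootstrap particle filter, the baseline that degrades when observations are highly informative, is sketched below on a toy random-walk model; the paper's intermediate weighting and resampling schedule between observation times is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_filter(ys, n_particles=1000, sigma_x=1.0, sigma_y=0.2):
    """Baseline bootstrap particle filter for a random-walk state-space
    model. Small sigma_y makes the observations highly informative, the
    regime in which plain SMC degenerates and the paper's bridging
    schedule of intermediate resampling steps helps."""
    x = rng.normal(0, 1, n_particles)
    for y in ys:
        x = x + rng.normal(0, sigma_x, n_particles)       # propagate
        logw = -0.5 * ((y - x) / sigma_y) ** 2            # weight by likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = x[rng.choice(n_particles, n_particles, p=w)]  # resample
        yield x.mean()                                    # filtered estimate

truth = np.cumsum(rng.normal(0, 1, 20))
ys = truth + rng.normal(0, 0.2, 20)
print(np.round(list(bootstrap_filter(ys)), 2))
```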

  16. Sequential test procedures for inventory differences

    International Nuclear Information System (INIS)

    Goldman, A.S.; Kern, E.A.; Emeigh, C.W.

    1985-01-01

    By means of a simulation study, we investigated the appropriateness of Page's and power-one sequential tests on sequences of inventory differences obtained from an example materials control unit, a sub-area of a hypothetical UF6-to-U3O8 conversion process. The study examined detection probability and run length curves obtained from different loss scenarios. 12 refs., 10 figs., 2 tabs

  17. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  18. Assessing potential forest and steel inter-industry residue utilisation by sequential chemical extraction

    Energy Technology Data Exchange (ETDEWEB)

    Makela, M.

    2012-10-15

    Traditional process industries in Finland and abroad are facing an emerging waste disposal problem due to recent regulatory development, which has increased the costs of landfill disposal and the difficulty of acquiring new sites. For large manufacturers, such as the forest and ferrous metals industries, symbiotic cooperation of formerly separate industrial sectors could enable the utilisation of waste-labeled residues in manufacturing novel residue-derived materials suitable for replacing commercial virgin alternatives. Such efforts would allow transforming the current linear resource use and disposal models to more cyclical ones and thus attain savings in valuable materials and energy resources. The work described in this thesis was aimed at utilising forest and carbon steel industry residues in the experimental manufacture of novel residue-derived materials technically and environmentally suitable for amending agricultural or forest soil properties. Single and sequential chemical extractions were used to compare the pseudo-total concentrations of trace elements in the manufactured amendment samples to relevant Finnish statutory limit values for the use of fertilizer products and to assess respective potential availability under natural conditions. In addition, the quality of analytical work and the suitability of sequential extraction in the analysis of an industrial solid sample were respectively evaluated through the analysis of a certified reference material and by X-ray diffraction of parallel sequential extraction residues. According to the acquired data, the incorporation of both forest and steel industry residues, such as fly ashes, lime wastes, green liquor dregs, sludges and slags, led to amendment liming capacities (34.9-38.3%, Ca equiv., d.w.) comparable to relevant commercial alternatives. Only the first experimental samples showed increased concentrations of pseudo-total cadmium and chromium, of which the latter was specified as the trivalent Cr(III). Based on

  19. Accelerating Sequential Gaussian Simulation with a constant path

    Science.gov (United States)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
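
    The constant-path trick can be demonstrated with a deliberately simplified 1D SGS: because the path and the neighbourhood configurations are identical across realizations, each node's kriging weights are solved once and cached. The covariance model and sizes below are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def sgs_constant_path(n=100, n_real=20, corr_range=10.0, n_neigh=8):
    """1D SGS sketch with a constant path: simple-kriging weights are
    computed once per node on the first realization, then re-used."""
    cov = lambda h: np.exp(-np.abs(h) / corr_range)  # exponential covariance, unit sill
    path = rng.permutation(n)                        # same path for every realization
    cache = {}                                       # node -> (neighbours, weights, std)
    reals = np.empty((n_real, n))
    for r in range(n_real):
        sim = []                                     # indices simulated so far
        for idx in path:
            if idx not in cache:                     # solve only on the first pass
                nb = sorted(sim, key=lambda j: abs(j - idx))[:n_neigh]
                if nb:
                    K = cov(np.subtract.outer(nb, nb))
                    k = cov(np.array(nb) - idx)
                    w = np.linalg.solve(K, k)
                    cache[idx] = (nb, w, np.sqrt(max(1 - w @ k, 0)))
                else:
                    cache[idx] = ([], None, 1.0)
            nb, w, sd = cache[idx]
            mean = w @ reals[r, nb] if nb else 0.0
            reals[r, idx] = mean + sd * rng.normal()
            sim.append(idx)
        # later realizations hit the cache and skip the kriging solves
    return reals

print(sgs_constant_path().std(axis=0).mean())  # spread across realizations
```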

  20. Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images

    Science.gov (United States)

    Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun

    2013-01-01

    This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by more than three times compared with the conventional algorithm, while the image quality is well preserved. PMID:23424608

  1. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is an urgent question today. The legalization of parallel import in Russia is expedient; this statement is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of such a decision and to apply remedies to minimize them.

  2. Portable, parallel, reusable Krylov space codes

    Energy Technology Data Exchange (ETDEWEB)

    Smith, B.; Gropp, W. [Argonne National Lab., IL (United States)

    1994-12-31

    Krylov space accelerators are an important component of many algorithms for the iterative solution of linear systems. Each Krylov space method has its own particular advantages and disadvantages, so it is desirable to have a variety of them available, all with an identical, easy-to-use interface. A common complaint application programmers have with available software libraries for the iterative solution of linear systems is that they require the programmer to use the data structures provided by the library; the library is not able to work with the data structures of the application code. Hence, application programmers find themselves constantly recoding the Krylov space algorithms. The Krylov space package (KSP) is a data-structure-neutral implementation of a variety of Krylov space methods, including preconditioned conjugate gradient, GMRES, BiCG-Stab, transpose-free QMR and CGS. Unlike all other software libraries for linear systems that the authors are aware of, KSP will work with any application code's data structures, in Fortran or C. Due to its data-structure-neutral design, KSP runs unchanged on both sequential and parallel machines. KSP has been tested on workstations, the Intel i860 and Paragon, Thinking Machines CM-5 and the IBM SP1.
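    The data-structure-neutral idea can be sketched in a few lines: the Krylov loop only ever touches user-supplied operations, never the matrix storage itself. A minimal conjugate-gradient illustration in Python (a sketch of the concept, not KSP's actual C/Fortran interface):

```python
import numpy as np

def cg(matvec, b, x0=None, tol=1e-8, maxiter=200):
    """Conjugate gradient that never inspects the matrix storage:
    the caller provides matvec(x) -> A @ x, so any data structure works."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# works equally with a dense array, a sparse matrix, or a stencil:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = cg(lambda v: A @ v, np.array([1.0, 2.0]))
```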

  3. Development of a parallelization strategy for the VARIANT code

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Khalil, H.S.; Palmiotti, G.; Tatsumi, M.

    1996-01-01

    The VARIANT code solves the multigroup steady-state neutron diffusion and transport equation in three-dimensional Cartesian and hexagonal geometries using the variational nodal method. VARIANT consists of four major parts that must be executed sequentially: input handling, calculation of response matrices, solution algorithm (i.e. inner-outer iteration), and output of results. The objective of the parallelization effort was to reduce the overall computing time by distributing the work of the two computationally intensive (sequential) tasks, the coupling coefficient calculation and the iterative solver, equally among a group of processors. This report describes the code's calculations and gives performance results on one of the benchmark problems used to test the code. The performance analysis on the IBM SPx system shows good efficiency for well-load-balanced programs. Even for relatively small problem sizes, respectable efficiencies are seen on the SPx. An extension to achieve a higher degree of parallelism will be addressed in future work. 7 refs., 1 tab

  4. OpenMP parallelization of a gridded SWAT (SWATG)

    Science.gov (United States)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also faces such problems: the computational cost limits applications of very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (the result is called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed on one CPU with a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
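    The parallelization pattern, a loop over independent HRUs at one level, can be imitated in Python with a process pool (a hedged analogue only; the paper itself uses OpenMP threads inside the SWAT source, and the per-HRU function below is a placeholder):

```python
from concurrent.futures import ProcessPoolExecutor

def simulate_hru(hru):
    """Stand-in for the per-HRU water balance; HRUs at the same level are
    independent, which is exactly what the OpenMP parallel loop exploits."""
    return hru["id"], sum(hru["forcing"]) * hru["area"]   # placeholder arithmetic

def run_level(hrus, workers=15):
    # analogue of '#pragma omp parallel for' over the HRU loop
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(simulate_hru, hrus))

if __name__ == "__main__":
    hrus = [{"id": i, "area": 1.0, "forcing": [0.1] * 365} for i in range(1000)]
    results = run_level(hrus)
```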

  5. The Galley Parallel File System

    Science.gov (United States)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than the one they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  6. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  7. Parallelization of the FLAPW method

    Science.gov (United States)

    Canning, A.; Mannstadt, W.; Freeman, A. J.

    2000-08-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.

  8. Spatial updating grand canonical Monte Carlo algorithms for fluid simulation: generalization to continuous potentials and parallel implementation.

    Science.gov (United States)

    O'Keeffe, C J; Ren, Ruichao; Orkoulas, G

    2007-11-21

    Spatial updating grand canonical Monte Carlo algorithms are generalizations of random and sequential updating algorithms for lattice systems to continuum fluid models. The elementary steps, insertions or removals, are constructed by generating points in space either at random (random updating) or in a prescribed order (sequential updating). These algorithms have previously been developed only for systems of impenetrable spheres for which no particle overlap occurs. In this work, spatial updating grand canonical algorithms are generalized to continuous, soft-core potentials to account for overlapping configurations. Results on two- and three-dimensional Lennard-Jones fluids indicate that spatial updating grand canonical algorithms, both random and sequential, converge faster than standard grand canonical algorithms. Spatial algorithms based on sequential updating not only exhibit the fastest convergence but also are ideal for parallel implementation due to the absence of strict detailed balance and the nature of the updating that minimizes interprocessor communication. Parallel simulation results for three-dimensional Lennard-Jones fluids show a substantial reduction of simulation time for systems of moderate and large size. The efficiency improvement by parallel processing through domain decomposition is always in addition to the efficiency improvement by sequential updating.
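    A simplified sketch of sequential spatial updating for a grand canonical Lennard-Jones fluid, assuming the standard grand canonical acceptance ratios and a raster-ordered visit of cells (the published algorithm's exact updating rules differ in detail; activity z, temperature and cell counts are illustrative):

```python
import numpy as np

def lj_energy(pos, x, L, eps=1.0, sig=1.0):
    """Energy of a particle at x against all others, minimum-image convention."""
    if len(pos) == 0:
        return 0.0
    d = pos - x
    d -= L * np.round(d / L)
    r2 = np.sum(d * d, axis=1)
    s6 = (sig * sig / r2) ** 3
    return float(np.sum(4.0 * eps * (s6 * s6 - s6)))

def sweep(pos, L, z, beta, cells_per_side, rng):
    """One sequential sweep: visit cells in raster order and propose an
    insertion at a point inside the current cell, or a particle removal."""
    h = L / cells_per_side
    V = L ** 3
    for i in range(cells_per_side ** 3):
        cell = np.array(np.unravel_index(i, (cells_per_side,) * 3))
        x = (cell + rng.random(3)) * h          # prescribed-order trial point
        if rng.random() < 0.5:                  # insertion attempt
            dU = lj_energy(pos, x, L)
            if rng.random() < z * V / (len(pos) + 1) * np.exp(-beta * dU):
                pos = np.vstack([pos, x])
        elif len(pos) > 0:                      # removal of a random particle
            j = rng.integers(len(pos))
            dU = -lj_energy(np.delete(pos, j, axis=0), pos[j], L)
            if rng.random() < len(pos) / (z * V) * np.exp(-beta * dU):
                pos = np.delete(pos, j, axis=0)
    return pos

rng = np.random.default_rng(0)
pos = np.empty((0, 3))
for _ in range(50):
    pos = sweep(pos, L=8.0, z=0.05, beta=1.0, cells_per_side=4, rng=rng)
```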

  9. Cross-sectional versus sequential quality indicators of risk factor management in patients with type 2 diabetes

    NARCIS (Netherlands)

    Voorham, Jaco; Denig, Petra; Wolffenbuttel, Bruce H. R.; Haaijer-Ruskamp, Flora M.

    Background: The fairness of quality assessment methods is under debate. Quality indicators incorporating the longitudinal nature of care have been advocated but their usefulness in comparison to more commonly used cross-sectional measures is not clear. Aims: To compare cross-sectional and sequential

  10. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
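    A sketch of the per-cycle rendezvous the abstract identifies, using mpi4py (illustrative; the variable names and the placeholder physics are not from the paper):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

def run_cycle(local_source, rng):
    """Stand-in for tracking this processor's share of neutron histories."""
    produced = rng.poisson(lam=1.0, size=local_source)   # placeholder physics
    return int(produced.sum())

rng = np.random.default_rng(comm.Get_rank())
local_source = 10_000 // comm.Get_size()
for cycle in range(100):
    local_prod = run_cycle(local_source, rng)
    # rendezvous point: every processor must stop here each cycle --
    # exactly the synchronization bottleneck the paper analyses
    total_prod = comm.allreduce(local_prod, op=MPI.SUM)
    total_src = comm.allreduce(local_source, op=MPI.SUM)
    k_eff = total_prod / total_src
    local_source = max(1, int(local_source * k_eff))     # population control
```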

  11. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  12. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
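    Barrel-sort's central step is routing each key to the owner of its key range. A serial Python skeleton of that bucket partitioning (illustrative; non-negative keys assumed, and on a parallel machine each bucket would live on its own processor):

```python
def bucket_sort(keys, n_buckets, key_max):
    """Serial skeleton of the range partitioning used by barrel-sort."""
    width = key_max // n_buckets + 1
    buckets = [[] for _ in range(n_buckets)]
    for k in keys:
        buckets[k // width].append(k)   # route key to its range owner
    out = []
    for b in buckets:
        out.extend(sorted(b))           # each owner sorts its range locally
    return out

assert bucket_sort([42, 7, 99, 0, 13], n_buckets=4, key_max=99) == [0, 7, 13, 42, 99]
```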

  13. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
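    A sketch of the template comparison, assuming fixed-size blocks and SHA-1 checksums (the patent describes an rsync variant; block size and hash choice here are illustrative):

```python
import hashlib

BLOCK = 64 * 1024

def block_sums(data):
    return [hashlib.sha1(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def delta_against_template(template, current):
    """rsync-like comparison: only blocks whose checksum differs from the
    template checkpoint need to be transmitted and stored."""
    t_sums = block_sums(template)
    delta = {}
    for i in range(0, len(current), BLOCK):
        j = i // BLOCK
        blk = current[i:i + BLOCK]
        if j >= len(t_sums) or hashlib.sha1(blk).digest() != t_sums[j]:
            delta[j] = blk              # changed or new block
    return delta

template = b"a" * (3 * BLOCK)
current = b"a" * BLOCK + b"b" * BLOCK + b"a" * BLOCK + b"c" * BLOCK
changed = delta_against_template(template, current)   # blocks 1 and 3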

  14. An in vitro biomechanical comparison of equine proximal interphalangeal joint arthrodesis techniques: an axial positioned dynamic compression plate and two abaxial transarticular cortical screws inserted in lag fashion versus three parallel transarticular cortical screws inserted in lag fashion.

    Science.gov (United States)

    Sod, Gary A; Riggs, Laura M; Mitchell, Colin F; Hubert, Jeremy D; Martin, George S

    2010-01-01

    To compare the in vitro monotonic biomechanical properties of an axial 3-hole, 4.5 mm narrow dynamic compression plate (DCP) using 5.5 mm cortical screws in conjunction with 2 abaxial transarticular 5.5 mm cortical screws inserted in lag fashion (DCP-TLS) with 3 parallel transarticular 5.5 mm cortical screws inserted in lag fashion (3-TLS) for equine proximal interphalangeal (PIP) joint arthrodesis. Paired in vitro biomechanical testing of 2 methods of stabilizing cadaveric adult equine forelimb PIP joints. Cadaveric adult equine forelimbs (n=15 pairs). For each forelimb pair, 1 PIP joint was stabilized with an axial 3-hole narrow DCP (4.5 mm) using 5.5 mm cortical screws in conjunction with 2 abaxial transarticular 5.5 mm cortical screws inserted in lag fashion, and 1 with 3 parallel transarticular 5.5 mm cortical screws inserted in lag fashion. Five matching pairs of constructs were tested in single cycle to failure under axial compression, 5 construct pairs were tested for cyclic fatigue under axial compression, and 5 construct pairs were tested in single cycle to failure under torsional loading. Mean values for each fixation method were compared using a paired t-test within each group, with statistical significance set at P < .05. Mean values, in single cycle to failure, of the DCP-TLS fixation were significantly greater than those of the 3-TLS fixation. Mean cycles to failure in axial compression of the DCP-TLS fixation was significantly greater than that of the 3-TLS fixation. The DCP-TLS was superior to the 3-TLS in resisting static overload forces and cyclic fatigue. The results of this in vitro study may provide information to aid in the selection of a treatment modality for arthrodesis of the equine PIP joint.

  15. Treatment planning in radiosurgery: parallel Monte Carlo simulation software

    Energy Technology Data Exchange (ETDEWEB)

    Scielzo, G [Galliera Hospitals, Genova (Italy). Dept. of Hospital Physics; Grillo Ruggieri, F [Galliera Hospitals, Genova (Italy). Dept. for Radiation Therapy; Modesti, M; Felici, R [Electronic Data System, Rome (Italy); Surridge, M [University of Southampton (United Kingdom). Parallel Applications Centre

    1995-12-01

    The main objective of this research was to evaluate the possibility of direct Monte Carlo simulation for accurate dosimetry with short computation time. We made use of: a graphics workstation, a linear accelerator, water, PMMA and anthropomorphic phantoms, for validation purposes; ionometric, film and thermoluminescent techniques, for dosimetry; and a treatment planning system, for comparison. Benchmarking results suggest that short computing times can be obtained with the parallel version of EGS4 that was developed. Parallelism was obtained by assigning the simulated incident photons to separate processors, and the development of a parallel random number generator was necessary. Validation consisted of phantom irradiation and comparison of predicted and measured values, with good agreement in PDD and dose profiles. Experiments on anthropomorphic phantoms (with inhomogeneities) were carried out, and these values are being compared with results obtained with the conventional treatment planning system.
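    The abstract notes that a parallel random number generator had to be developed so that photon histories assigned to different processors stay statistically independent. A minimal sketch of that requirement using NumPy's SeedSequence spawning (a modern stand-in, not the EGS4 generator the authors built):

```python
import numpy as np

def make_streams(n_procs, root_seed=12345):
    """Independent, non-overlapping random streams, one per processor,
    so photon histories simulated in parallel stay uncorrelated."""
    root = np.random.SeedSequence(root_seed)
    return [np.random.default_rng(s) for s in root.spawn(n_procs)]

streams = make_streams(8)
# processor p simulates its share of incident photons with streams[p]
energies = [rng.exponential(scale=1.0, size=1000) for rng in streams]
```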

  16. Rubus: A compiler for seamless and extensible parallelism

    Science.gov (United States)

    Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot efficiently utilize the parallelism available on multi-core processors. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without requiring a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. For a matrix multiplication benchmark, an average execution speedup of 84 times has been

  17. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been held that single-sex and co-education are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel, or they confuse a parallel model with co-education due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  18. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost-effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short-circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests.

  19. On Locally Most Powerful Sequential Rank Tests

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others: Nadační fond na podporu vědy (CZ) Neuron Institutional support: RVO:67985556 Keywords: nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/kalina-0474065.pdf

  20. Decoding restricted participation in sequential electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Knaut, Andreas; Paschmann, Martin

    2017-06-15

    Restricted participation in sequential markets may cause high price volatility and welfare losses. In this paper we therefore analyze the drivers of restricted participation in the German intraday auction, which is a short-term electricity market with quarter-hourly products. Applying a fundamental electricity market model with 15-minute temporal resolution, we identify the lack of sub-hourly market coupling as the most relevant driver of restricted participation. We derive a proxy for price volatility and find that full market coupling may cause quarter-hourly price volatility to decrease by a factor close to four.

  1. THE DEVELOPMENT OF SPECIAL SEQUENTIALLY-TIMED CHARGES

    Directory of Open Access Journals (Sweden)

    Stanislav LICHOROBIEC

    2016-06-01

    Full Text Available This article documents the development of the noninvasive use of explosives for the destruction of ice mass in river flows. The system of special sequentially-timed charges increases the efficiency of the individual cutting charges by covering them with bags filled with water, while simultaneously increasing the effect of the system of timed charges as a whole. The timing, spatial placement, and linking of these charges result in the loosening of ice barriers on a frozen waterway, while at the same time regulating the size of the ice fragments. The developed charges will increase the operability and safety of IRS units.

  2. Pass-transistor asynchronous sequential circuits

    Science.gov (United States)

    Whitaker, Sterling R.; Maki, Gary K.

    1989-01-01

    Design methods for asynchronous sequential pass-transistor circuits, which result in circuits that are hazard- and critical-race-free and which have added degrees of freedom for the input signals, are discussed. The design procedures are straightforward and easy to implement. Two single-transition-time state assignment methods are presented, and hardware bounds for each are established. A surprising result is that the hardware realizations for each next-state variable and output variable are identical for a given flow table. Thus, a state machine with N states and M outputs can be constructed using a single layout replicated N + M times.

  3. Estimation After a Group Sequential Trial.

    Science.gov (United States)

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, ..., nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why

  4. Boundary conditions in random sequential adsorption

    Science.gov (United States)

    Cieśla, Michał; Ziff, Robert M.

    2018-04-01

    The influence of different boundary conditions on the density of random packings of disks is studied. Packings are generated using the random sequential adsorption algorithm with three different types of boundary conditions: periodic, open, and wall. It is found that the finite size effects are smallest for periodic boundary conditions, as expected. On the other hand, in the case of open and wall boundaries it is possible to introduce an effective packing size and a constant correction term to significantly improve the packing densities.
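    A minimal RSA sketch for disks with a periodic-boundary overlap test via the minimum image (open and wall boundaries would simply drop the wrap-around and, for walls, also reject centers closer than one radius to an edge; all names are illustrative):

```python
import numpy as np

def rsa_disks(L, radius, attempts, periodic=True, seed=0):
    """Random sequential adsorption of equal disks in an L x L box."""
    rng = np.random.default_rng(seed)
    centers = []
    for _ in range(attempts):
        x = rng.random(2) * L
        ok = True
        for c in centers:
            d = x - c
            if periodic:
                d -= L * np.round(d / L)   # minimum-image distance
            if d @ d < (2 * radius) ** 2:
                ok = False                 # overlap: attempt rejected
                break
        if ok:
            centers.append(x)
    density = len(centers) * np.pi * radius ** 2 / L ** 2
    return np.array(centers), density

centers, rho = rsa_disks(L=50.0, radius=1.0, attempts=100_000)
```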

  5. Sequential extraction of uranium metal contamination

    International Nuclear Information System (INIS)

    Murry, M.M.; Spitz, H.B.; Connick, W.B.

    2016-01-01

    Samples of uranium-contaminated dirt collected from the dirt floor of an abandoned metal rolling mill were analyzed for uranium using a sequential extraction protocol involving a series of five increasingly aggressive solvents. The quantity of uranium extracted from the contaminated dirt by each reagent can aid in predicting the fate and transport of the uranium contamination in the environment. Uranium was separated from each fraction using anion exchange, electrodeposited, and analyzed by alpha spectrometry. Results demonstrate that approximately 77% of the uranium was extracted using NH4Ac in 25% acetic acid. (author)

  6. Characterization of a sequential pipeline approach to automatic tissue segmentation from brain MR Images

    International Nuclear Information System (INIS)

    Hou, Zujun; Huang, Su

    2008-01-01

    Quantitative analysis of gray matter and white matter in brain magnetic resonance imaging (MRI) is valuable for neuroradiology and clinical practice. Submission of large collections of MRI scans to pipeline processing is increasingly important. We characterized this process and suggest several improvements. To investigate tissue segmentation from brain MR images through a sequential approach, a pipeline was developed that consecutively executes denoising, skull/scalp removal, intensity inhomogeneity correction and intensity-based classification. The denoising phase employs a 3D extension of the BayesShrink method. The inhomogeneity is corrected by an improvement of Dawant et al.'s method with automatic generation of reference points; the N3 method has also been evaluated. Subsequently, the brain tissue is segmented into cerebrospinal fluid, gray matter and white matter by a generalized Otsu thresholding technique. Intensive comparisons with other sequential or iterative methods have been carried out using simulated and real images. The sequential approach with judicious algorithm selection in each stage is not only advantageous in speed, but can also attain segmentations at least as accurate as iterative methods under a variety of noise or inhomogeneity levels. A sequential approach to tissue segmentation, which consecutively executes wavelet-shrinkage denoising, scalp/skull removal, inhomogeneity correction and intensity-based classification, was developed to automatically segment the brain tissue into CSF, GM and WM from brain MR images. This approach is advantageous in several common applications, compared with other pipeline methods. (orig.)
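    The final classification stage rests on Otsu thresholding. For reference, a binary NumPy version of Otsu's criterion (the paper uses a generalized multi-level variant to separate CSF, GM and WM):

```python
import numpy as np

def otsu_threshold(img, n_bins=256):
    """Otsu's method: pick the threshold maximizing the between-class
    variance of the intensity histogram."""
    hist, edges = np.histogram(img.ravel(), bins=n_bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # class-0 probability up to each bin
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)
    mu_t = mu[-1]
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros(n_bins)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]
```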

  7. Using parallel computing in modeling and optimization of mineral ...

    African Journals Online (AJOL)

    To solve the ultimate pit limit problem, it is then required to find a subgraph of a graph whose sum of weights is maximal. One possible solution to this problem is to use genetic algorithms. We use a ... Details of the implementation of a parallel genetic algorithm for searching open pit limits are provided. Comparison with ...

  8. A Parallel Algebraic Multigrid Solver on Graphics Processing Units

    KAUST Repository

    Haase, Gundolf; Liebmann, Manfred; Douglas, Craig C.; Plank, Gernot

    2010-01-01

    The matrix-vector multiplication scheme underlying the PCG-AMG algorithm is presented for the many-core GPU architecture. A performance comparison of the parallel solver shows that a single Nvidia Tesla C1060 GPU board delivers the performance of a sixteen-node Infiniband cluster.

  9. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    Science.gov (United States)

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.

  10. A compositional reservoir simulator on distributed memory parallel computers

    International Nuclear Information System (INIS)

    Rame, M.; Delshad, M.

    1995-01-01

    This paper presents the application of distributed memory parallel computers to field-scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general-purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results on the distributed-memory computing performance of the parallel simulator are presented for field-scale applications such as tracer flooding and polymer flooding. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.

  11. A New Approach of Parallelism and Load Balance for the Apriori Algorithm

    Directory of Open Access Journals (Sweden)

    BOLINA, A. C.

    2013-06-01

    Full Text Available The main goal of data mining is to discover relevant information in digital content. The Apriori algorithm is widely used for this objective, but its sequential version performs poorly when executed over large volumes of data. Among the solutions to this problem is the parallel implementation of the algorithm, and among the Apriori-based parallel implementations presented in the literature, the DPA (Distributed Parallel Apriori) [10] stands out. This paper presents the DMTA (Distributed Multithread Apriori) algorithm, which is based on DPA and exploits thread-level parallelism in order to increase performance. Moreover, DMTA can be executed on heterogeneous hardware platforms, using different numbers of cores. The results showed that DMTA outperforms DPA, exhibits load balance among processes and threads, and is effective on current multicore architectures.

  12. Parallelized preconditioned BiCGStab solution of sparse linear system equations in F-COBRA-TF

    International Nuclear Information System (INIS)

    Geemert, Rene van; Glück, Markus; Riedmann, Michael; Gabriel, Harry

    2011-01-01

    Recently, the in-house development of a preconditioned and parallelized BiCGStab solver has been pursued successfully in AREVA’s advanced sub-channel code F-COBRA-TF. This solver can be run either in a sequential computation mode on a single CPU, or in a parallel computation mode on multiple parallel CPUs. The developed procedure enables the computation of several thousands of successive sparse linear system solutions in F-COBRA-TF with acceptable wall clock run times. The current paper provides general information about F-COBRA-TF in terms of modeling capabilities and application areas, and points out where the relevance arises for the efficient iterative solution of sparse linear systems. Furthermore, the preconditioning and parallelization strategies in the developed BiCGStab iterative solution approach are discussed. The paper is concluded with a number of verification examples. (author)
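    For reference, the preconditioned BiCGStab iteration itself is standard; a compact NumPy sketch with a caller-supplied preconditioner (an illustration of the algorithm, not the F-COBRA-TF implementation or its preconditioning strategy):

```python
import numpy as np

def bicgstab(matvec, b, precond=lambda v: v, tol=1e-8, maxiter=500):
    """Preconditioned BiCGStab; matvec(x) returns A @ x and precond(v)
    applies an approximate inverse of A (identity if none is given)."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    r_hat = r.copy()                     # shadow residual
    rho = alpha = omega = 1.0
    v = p = np.zeros_like(b)
    for _ in range(maxiter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        p_hat = precond(p)
        v = matvec(p_hat)
        alpha = rho / (r_hat @ v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol:      # early convergence on the half step
            return x + alpha * p_hat
        s_hat = precond(s)
        t = matvec(s_hat)
        omega = (t @ s) / (t @ t)
        x = x + alpha * p_hat + omega * s_hat
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            break
    return x

# Jacobi-preconditioned solve of a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = bicgstab(lambda v: A @ v, np.array([1.0, 2.0]),
             precond=lambda v: v / np.diag(A))
```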

  13. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    Full Text Available As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, a small self-weight/load ratio, good dynamic behavior and easy control; hence its domain of application keeps extending. In order to find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematic solution and the limits on link lengths is introduced. This paper analyses the position workspace and orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the lengths of the branches of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but does change its position.

  14. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
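    The activity is built around the standard combination rules, R = R1 + R2 + ... in series and 1/R = 1/R1 + 1/R2 + ... in parallel; for reference, in code:

```python
def series(*rs):
    return sum(rs)                          # R = R1 + R2 + ...

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)   # 1/R = 1/R1 + 1/R2 + ...

assert series(100, 200) == 300
assert abs(parallel(100, 100) - 50.0) < 1e-12
```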

  15. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1991-01-01

    A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs.

  16. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  17. Event monitoring of parallel computations

    Directory of Open Access Journals (Sweden)

    Gruzlikov Alexander M.

    2015-06-01

    Full Text Available The paper considers the monitoring of parallel computations for detection of abnormal events. It is assumed that computations are organized according to an event model, and monitoring is based on specific test sequences

  18. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  19. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  20. Time scale of random sequential adsorption.

    Science.gov (United States)

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface, and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule, provided that the molecule hits the surface, is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.